Platform Performance 30th Edition (Fall 2011)


Chapter 3 (Software Performance) discussed best practices for building high-performance map services and the importance of selecting the right software technology to support your business needs. Chapter 9 (Performance Fundamentals) will provide an overview of capacity planning performance models, assuming all hardware platforms are the same. This chapter focuses on hardware platform performance and the importance of selecting the right computer technology to support your system performance needs.

Platform Performance Baseline
The world we live in today is experiencing the benefits of rapid technology change. Technology advancements are directly impacting GIS user productivity—the way we all deal with information and contribute to our environment. Our ability to manage and take advantage of technology benefits can contribute to our success in business and in our personal life.



To develop a system design, it is necessary to identify user performance needs. User productivity requirements are reflected in the workstation platforms users select to support their computing needs. GIS users have never been fully satisfied with platform performance, and each year power users look for the best computer technology available to support their processing needs. As platform technology continues to improve, performance expectations change with it. It is not clear when computers will be fast enough for GIS professionals; there is always more we could do if we had the power. Application and data servers must be upgraded to continue meeting increasing user desktop processing requirements.

GIS user performance expectations have changed dramatically over the past 10 years. This change in user productivity is enabled primarily by faster platform performance and lower hardware costs.

Figure 7-1 identifies the desktop hardware platforms selected by GIS users as their performance baseline since the ARC/INFO 7.1.1 release in February 1997. This Performance Baseline History has improved user productivity and expanded acceptance of GIS technology.



Performance Baseline History
Figure 7-2 provides a graphic overview of Intel workstation performance over the past 11 years. The chart shows the radical change in relative platform performance since 2000. Technology change introduced by hardware platform vendors made a major contribution to performance and capacity enhancements over this period.

The boxes at the bottom of the chart represent the performance baselines selected to support Esri capacity planning models. These performance baselines were reviewed and updated each year to keep pace with the rapidly changing hardware technology.

The Intel co-founder Gordon E. Moore released a paper in 1965 predicting that the number of components in integrated circuits would double every year through at least 1975. His prediction, known today as Moore's law, has held for nearly 50 years and continues to contribute to computer performance gains.

Reducing the distance between components on integrated circuits has a direct impact on processor compute performance. A function that doubles every two years produces an exponential growth curve when plotted on a chart. Figure 7-3 shows a plot of the Esri performance baselines with an exponential curve overlay. From this chart, you can see that the performance gains we have experienced since CY2000 track remarkably close to the exponential gains predicted by Gordon E. Moore. In fact, if past experience is a good predictor of the future, we can expect some remarkable per core performance gains over the next couple of years, as indicated on the chart.
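The doubling relationship behind the exponential curve can be sketched as a quick calculation. This is a minimal illustration; the baseline value and time span used below are hypothetical inputs, not published Esri figures.

```python
def projected_performance(baseline: float, years: float,
                          doubling_period: float = 2.0) -> float:
    """Project per core performance assuming it doubles every
    `doubling_period` years (the exponential curve in Figure 7-3)."""
    return baseline * 2 ** (years / doubling_period)

# Hypothetical example: a platform rated 18 on a relative benchmark,
# projected six years forward with a two-year doubling period.
print(projected_performance(18, 6))  # 18 * 2**3 = 144.0
```

Changing the doubling period to something longer models the leveling-off scenario discussed below.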

There is some discussion questioning whether platform per core performance will continue to improve as it has over the past 10 years. Moore's law deals with components getting smaller and closer together on integrated circuits with each production design cycle. Performance has improved due to shorter travel distances between processor chip components. Distances between components today are approaching atomic scale, and there could be physical limits to how much faster the chip can perform. Other factors such as temperature and cooling limitations could also cap processor performance gains, and performance per core could start to level off with future chip releases (represented by the red curve on the chart). High capacity chip configurations (10 core per chip, 12 core per chip, etc.) are better for virtual server deployment, but they also increase heat generation and reduce peak processing speeds (MHz), another factor limiting per core performance gains.

The change in hardware performance over the years has introduced unique challenges for capacity planning and for software vendors trying to meet customer performance and scalability expectations. Understanding how to handle hardware performance differences is critical when addressing capacity planning, performance, and scalability issues.

Figure 7-4 shows how user expectations have changed over the past 11 years. A simple ArcGIS Desktop dynamic map display took almost 6 seconds to process in CY2000. That same map display today can be rendered in less than 0.25 seconds, over 23 times faster than just 11 years earlier. Most of this performance gain can be accounted for by the change in platform technology.
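The gain quoted above is simple arithmetic on the two display times (values taken from the text):

```python
def speedup(old_seconds: float, new_seconds: float) -> float:
    """Ratio of old display time to new display time."""
    return old_seconds / new_seconds

# CY2000 display time of ~6 seconds versus ~0.25 seconds today.
print(speedup(6.0, 0.25))  # 24.0, i.e. over 23 times faster
```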

Figure 7-4 identifies a minimum user performance expectation range (1-2 seconds) that we believe may open new opportunities for how we use GIS. Traditional heavy map displays can now be rendered in less than 1 second, suggesting hardware technology may no longer be a limitation on GIS user productivity. IT departments would like to buy higher capacity platforms and leverage virtual server environments for simpler administration (exchanging user performance for lower administration costs). I expect GIS users will see this as an opportunity to incorporate more complex analysis into their workflows, leveraging more compute-intensive statistical analysis and logistics routing functions in standard business workflows. Heavier processing flows will require more hardware performance to keep user productivity at a peak level.



Knowing how to account for platform technology change is fundamental to understanding capacity planning. Figure 7-5 identifies a simple relationship that we have used since 1992 to relate platform performance with capacity planning.

The relationship simply states that if one can determine the amount of work (display transactions) that can be supported by server A and identify the relative performance between server A and server B, then one can identify the amount of work that can be supported by server B. This relationship holds for single-core servers (servers with a single processing core) and for multi-core servers with the same number of cores. The same relationship applies when comparing the relative capacity of server A and server B.
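A minimal sketch of this relationship follows. The benchmark and load values are illustrative placeholders, not measured results.

```python
def supported_load(load_a: float, perf_a: float, perf_b: float) -> float:
    """Peak load server B can support, given the measured peak load on
    server A and the relative performance of the two servers."""
    return load_a * (perf_b / perf_a)

# Illustrative: server A (relative benchmark 17.5) supports 10,000
# display transactions/hour; server B benchmarks at 35.0.
print(supported_load(10_000, 17.5, 35.0))  # 20000.0 transactions/hour
```

The same ratio works in reverse for estimating the hardware needed to carry a known load.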



SPEC Performance Benchmarks
Identifying a fair measure of relative platform performance and capacity is very important. Selection of an appropriate performance benchmark and agreement on how the testing will be accomplished and published are all very sensitive hardware marketing issues.

Figure 7-6 shares the mission statement published by the Standard Performance Evaluation Corporation (SPEC), a consortium founded by hardware vendors in the late 1980s to establish guidelines for conducting and sharing relative platform performance measures.



The SPEC compute-intensive benchmarks have been used by Esri as a reference for relative platform capacity metrics since 1992. The system architecture design platform sizing models used in conjunction with these relative performance metrics have supported Esri customer capacity planning since that time. The SPEC benchmarks were updated in 1996, 2000, and 2006 to accommodate technology changes and improve relative performance metrics.

SPEC provides separate sets of integer and floating-point benchmarks. Computer processor cores are optimized to support integer or floating-point calculations, and performance can be very different between these environments. Platform capacity test results with Esri ArcGIS Desktop and Server software have tracked quite close to the SPEC integer relative performance benchmark results, suggesting the Esri ArcObjects software predominantly uses integer calculations. The SPEC integer benchmarks should be used for relative platform performance estimates when using ArcGIS software technology.

SPEC also provides two methods for conducting and publishing benchmark results. The SPECint2006 benchmarks measure execution time for a single benchmark instance and use this measure for calculating relative platform performance. The SPECint_rate2006 benchmarks are conducted using several concurrent benchmark instances (maximum platform capacity) and measuring executable instance cycles over a 24-hour period. The SPECint_rate2006 benchmark results are used for relative platform capacity planning metrics in the Esri system architecture design sizing models.

There are two results published on the SPEC site for each benchmark: the conservative (baseline) and the aggressive (result) values. The conservative baseline values are generally published first by the vendors, and the aggressive values are published later following additional tuning efforts. Either published benchmark can be used to estimate relative server performance, although the baseline values remove tuning sensitivities and provide the more conservative estimate. We recommend using the conservative benchmarks for capacity planning.

Figure 7-7 provides an overview of the published SPEC2006 benchmark suites. The conservative SPECint_rate2006 benchmark results are used in the Esri system architecture design documentation as a vendor-published reference for platform performance and capacity planning.



The SPEC performance benchmarks are published on the Web. The Esri Capacity Planning Tool release site includes a HardwareSPEC workbook that lists published SPECrate_integer benchmarks. The SRint2000 tab includes all vendor-published SPECrate_int2000 benchmarks available on the SPEC site (SPEC stopped publishing new SRint2000 benchmarks in January 2007). All new platform benchmarks are now published on the SPECrate_integer2006 site (SRint2006 tab). The date the benchmark tab was last updated is shown with the link name. A hyperlink to the SPEC site is included at the top of the Capacity Planning Tool (CPT) hardware tab.

Figure 7-8 identifies the location of the SPEC link on the CPT hardware tab and provides some views of the HardwareSPEC workbook.



Several benchmarks are published on the SPEC Web site. You will need to go to the SPECrate2006 Rates page and scroll down to the configurable request selection, where you can select the specific items you want included in your display query. I like to include the processor and the Processor MHz in my display, which are not included in the default selection. The Processor Characteristics include the maximum Turbo Boost MHz, which can be used to estimate maximum performance at low utilization levels.

The Esri-provided HardwareSPEC workbook tabs include an additional column (baseline/core) that I add to the table. This column identifies the processing performance of an individual core, a value that is used to estimate relative platform processing performance for a single sequential display. The relative processing performance per core values will be used in this chapter to compare user display performance.
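The baseline/core column is a simple division of the published platform throughput by the total core count. The platform values below are illustrative, not taken from a specific SPEC submission.

```python
def baseline_per_core(spec_rate_baseline: float, total_cores: int) -> float:
    """Estimate single-display (sequential) performance by dividing the
    platform SPECint_rate2006 baseline by the total number of cores."""
    return spec_rate_baseline / total_cores

# Illustrative 8 core platform with a published baseline of 280:
print(baseline_per_core(280, 8))  # 35.0 per core
```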



Platform Performance
Hardware vendor technology has been changing rapidly over the past 11 years. Improved hardware performance has enabled deployment of a broad range of powerful software and continues to improve user productivity. Most business productivity increases experienced over the past 11 years have been driven by faster computer technology. Technology today is fast enough for most user workflows, and faster compute processing is becoming less relevant. Most user displays are generated in less than a second. Access to Web services over great distances is almost as fast. Most of a user's workflow is think time, the time a user spends thinking about the display before requesting more information.

Most future user productivity gains will likely come from more loosely coupled operations, higher capacity network communications, disconnected processing, mobile operations, pre-processed cached maps, and more rapid access and assimilation of distributed information sources. System processing capacity remains very important, and system availability and scalability even more so. The quality of the information product (display design) provided by the technology can make a user's think time more productive.

Hardware processing encountered some technical barriers during 2004 and 2005 which slowed the performance gains experienced between platform releases. There was little user productivity gain by upgrading to the next platform release (which was not much faster), so as a result, computer sales were not growing at the pace experienced in previous years. Hardware vendors searched for ways to change the marketplace and introduced new technology with a focus on more capacity at a lower price. Vendors also focused on promoting mobile technologies, wireless operations, and more seamless access to information.

Competition for market share was brutal, and computer manufacturers tightened their belts and their payrolls to stay on top. CY2006 brought some surprises with the growing popularity of AMD technology and a focus on more capacity for less cost. Intel provided a big surprise with a full suite of new dual-core processors (double the capacity of the single-core chips) that delivered significant processing performance gains at a reduced platform cost. Hardware vendor packaging (blade server technology) and a growing interest in virtual servers (abstracting the processing environment from the hardware) are further reducing the cost of ownership and providing more processing capacity in less space.

Figure 7-9 provides an overview of vendor-published single-core benchmarks for hardware platforms using Intel processor technology.

The Intel Xeon 3200 MHz platform (single-core SPECrate_int2000 = 18 / SPECrate_int2006 = 8.8) was released in 2003 and remained one of the highest-performing workstation platforms available through CY2005. The SPECrate_int2000 benchmark result of 18 was used as the Arc04 and Arc05 performance baseline.

CY2005 was the first year since CY1992 that there was no noticeable platform performance change (most GIS operations were supported by slower platform technology).

There were some noticeable performance gains early in CY2005 with the release of the Intel Xeon 3800 MHz and the AMD 2800 MHz single-core socket processors. An Arc06 performance baseline of 22 (SPECrate_int2006 = 10.5) was selected in May 2006. Later that year, Intel released the Xeon 5160 4 core (2 chip) 3000 MHz platform, a dual-core chip processor with a single-core SPECrate_int2000 benchmark of 30 (SPECrate_int2006 = 13.4) that ran much cooler (less electric consumption) than the earlier 3800 MHz release. The Arc07 performance baseline of 14 (SPECrate_int2006 = 14) was selected based on the Intel X5160 technology.

Intel technology continued to improve in CY2008 and server pricing was even more competitive. Hardware vendors were promoting platforms with dual core chips and reducing the price on lower performance low power configurations. The Xeon 5260 4 core (2 chip) platform (SPECrate_int2006 = 17.5) was selected as the 2008 baseline.

2009 was another great year for performance gains. Intel released a new chip technology that was over 70 percent faster per core than their 2008 release. Hardware vendors stopped providing dual-core chip options, and all entry-level commodity servers now included quad-core or higher capacity chips. The Intel Xeon 5570 8 core (2 chip) 2933 MHz platform provided over 3.3 times the capacity of the 2008 baseline at about the same platform cost.

Intel introduced a new chip design in 2010 that was 15-20 percent faster than the previous year's. Virtual server technology was being adopted as the framework for many IT data centers, and private and public cloud hosting was becoming more popular. Hardware vendors were building higher capacity platforms focused on the virtual server and cloud computing markets. Intel introduced new 6 core per chip 5600 series platforms and a range of higher capacity 7500 series platforms with 6 and 8 core per chip configurations. Intel released a Xeon X7550 32 core (4 chip) 2000 MHz platform that was about 40 percent slower per core than the X5677 platform and about 2.5 times the capacity. SGI released an Intel Xeon X7560 512 core (64 chip) 2266 MHz platform with per core performance the same as the Xeon X7550 and about 40 times the capacity of the X5677 platform. The Xeon X5677 8 core (2 chip) 3467 MHz platform was selected as the CY2010 performance baseline (SPECrate_int2006 baseline = 35/core).
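The X7550 versus X5677 comparison can be checked with per core performance multiplied by core count. The X7550 per core value below is an estimate derived from the "about 40 percent slower" figure in the text, not a published benchmark.

```python
def capacity_ratio(per_core_a: float, cores_a: int,
                   per_core_b: float, cores_b: int) -> float:
    """Total platform capacity ratio of B to A: per core performance
    multiplied by total core count."""
    return (per_core_b * cores_b) / (per_core_a * cores_a)

# X5677: 35/core baseline, 8 core. X7550: about 40 percent slower per
# core (roughly 21/core, an estimate), 32 core.
print(capacity_ratio(35.0, 8, 21.0, 32))  # 2.4, about 2.5 times as stated
```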

Intel introduced a new high capacity 10 core chip design in 2011, with commodity 2 chip platforms (E7-2870) providing 20 core and 4 chip platforms (E7-4870) providing 40 core. Intel demonstrated per core performance improvements with the E3-1280 4 core (1 chip) 3500 MHz server, with SPECint_rate2006 baseline throughput of 40 per core, almost 15 percent faster than the 2010 platforms. The higher capacity E7-2870 and E7-4870 servers operate at a lower speed (2400 MHz), resulting in SPECint_rate2006 baseline throughput of 25-26 per core.



Figure 7-10 provides an overview of vendor-published single-core benchmarks for hardware platforms using AMD processor technology.

AMD platforms were very competitive with Intel in the 2004-2005 timeframe. Since that time, Intel processor performance improvements have been much more impressive than the available AMD alternatives. AMD per core performance has seen only minor improvements since 2005, falling behind Intel after 2007.

AMD introduced some high capacity platforms in 2010. The AMD Opteron 6174 24 core (2 chip) 2200 MHz platform provided more total throughput than the Intel Xeon X5677 8 core (2 chip) 3467 MHz platform, although AMD per core performance was about 36 percent of what Intel was offering. These high capacity platforms are attractive for IT departments that wish to consolidate many light back office applications on many virtual servers in a single platform. Virtual servers perform best with dedicated processor core, so a slower platform with more core can host more virtual servers.

Figure 7-11 provides an overview of vendor-published per core benchmarks for hardware platforms supporting UNIX operating systems.

AMD platforms did not see a big performance gain in 2011.



The UNIX market has focused for many years on large "scale up" technology (expensive high-capacity server environments). These server platforms are designed to support large database environments and critical enterprise business operations. UNIX platforms are traditionally more expensive than the Intel and AMD "commodity" servers, and the operating systems typically provide a more secure and stable compute platform.

IBM (PowerPC technology) is an impressive performance leader in the UNIX environment. The high capacity Intel and AMD platforms are starting to penetrate most of the remaining UNIX vendor market. 

2011 Technology Changes
Figure 7-12 highlights the technology changes that are making a difference in 2011. Hardware vendor focus on higher capacity servers continues to be driven by IT adoption of virtual server and cloud computing environments as a better way to consolidate and manage adaptive data centers. Platform core performance continues to increase, and Turbo Boost technology automatically adjusts processor speed (and thus display performance) based on server utilization.



Hardware vendor efforts to reduce cost and provide more purchase options make it important for customers to understand their performance needs and capacity requirements. In the past, new hardware included the latest processor technology, and customers could expect new purchases to increase user productivity and improve operations. In today's competitive marketplace, new platforms do not guarantee faster processor core technology. You must understand your performance needs and use relative hardware benchmarks to select the right platform.

How we identify the platform configuration we want has been changing. Hardware vendors are providing a wide range of choices at different performance levels for different user communities. New processors may perform faster than older processors that run at a higher clock speed (MHz), so processor speed is no longer a good measure of performance. Figure 7-13 shows how vendors have responded to this problem, and the nomenclature we use to make sure we understand the platform we are talking about.



Hardware vendors have identified specific model numbers that are unique for each processor chip configuration (E3-1280, E7-4870, Opteron 6180SE). Hardware vendors use these chips as components in building their server offerings. There are a limited number of chip manufacturers still competing in our marketplace (Intel and AMD provide all of the Windows processor technology), and these processor chips are used in building all of the hardware vendor platform offerings.

The total number of processor core identifies how many user requests can be processed at the same time. Total core is a key parameter for establishing appropriate memory and identifying the proper software configuration and platform capacity. You may find vendors identifying platforms by number of chips and how many core per chip; you need to do the math to identify the total number of core. This can be confusing, and for this reason the CPT terminology we use includes the total number of core. Total number of chips is provided for information purposes (not as important as understanding the total number of core). Some vendors refer to chips as sockets; the chip holds the integrated circuits and the processor core, and plugs into a socket, so the terms are used interchangeably. I use chip in the CPT nomenclature because this is the term used by SPEC and is shorter than socket.

Getting the Right Hardware
When you go to purchase a platform, vendors are not very good at providing the performance numbers. To the vendors' credit, they do publish their performance numbers on the SPEC site (but not on their sales pages). You need to do your homework before you buy your hardware. With GIS servers, platform performance is important both for optimum user productivity and for reducing overall system cost. The good news (for GIS users) is that the best performing hardware often delivers the lowest overall system cost. If you don't do your homework, you might miss the savings.

Figure 7-14 provides an overview of platform configuration options available on a DELL site. This is just an example. You should do your own homework with your own vendor and pricing. The 2010 recommended ArcGIS Server container machine configuration is used in our example.

The ideal platform includes the right processor, memory, and hard drive configuration. Configuring a Dell PowerEdge R710 server with two Xeon X5677 quad-core 3.46 GHz processors, 24 GB of 1333 MHz memory (a minimum of 3 GB per core), the 64-bit Windows Server Standard operating system, and three RAID 5 450 GB 15,000 RPM disk drives costs just over $10,300.

Processors: The X5677 quad-core 3.46 GHz processors provide the best performance per core. Selecting the “Energy Efficient” Xeon E5640 quad-core 2.66 GHz processors reduces the overall cost by about $2,300. The “High Efficiency” Xeon L5640 six-core 2.26 GHz processors reduce cost by $484. The “High Capacity” X5680 increases cost by over $1,000.

What is not shown? You need to look up the SPEC throughput and calculate per core performance to get the rest of the story. Per core performance of the E5640 processor is 16 percent slower than the X5677. Performance of the Xeon L5640 processor is 35 percent slower than the X5677. The X5680 has 33 percent more capacity, but here again user display performance is 11 percent slower than the X5677.
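The per core comparison is straightforward once the baseline/core values are in hand. This is a sketch; the candidate per core value below is a hypothetical number chosen to reproduce the 16 percent figure, with the X5677 assumed at 35 per core.

```python
def percent_slower(candidate_per_core: float,
                   reference_per_core: float) -> float:
    """How much slower the candidate processor is than the reference,
    as a percentage of the reference per core performance."""
    return (1 - candidate_per_core / reference_per_core) * 100

# X5677 assumed at 35.0 per core; a candidate at 29.4 per core:
print(round(percent_slower(29.4, 35.0), 1))  # 16.0 percent slower
```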

Figure 7-15 takes a closer look at the server options from an overall system cost perspective. For this example, we wish to purchase a server to host our ArcGIS Server Web mapping services. We want a server solution that will host estimated peak loads of up to 30,000 transactions per hour. We estimate ArcGIS Server software licensing at about $5,000 per core. We will use the platform pricing identified in Figure 7-14 for the hardware. Our IT department will deploy ArcGIS Server in multiple 2 core virtual servers on the selected physical platforms, and we will price based on the server core required during peak loads.

The CPT Calculator was used to identify the total number of virtual servers required to support the business requirements with each of the candidate physical platforms. The Xeon L5640 12 core (2 chip) 2.26 GHz platform will require three virtual servers (6 core), the Xeon E5640 8 core (2 chip) 2.66 GHz platform will require three virtual servers (6 core), and the Xeon X5677 platform will require two virtual servers (4 core) to process the projected business loads. The cost analysis shows the X5677 platform would save about $10,000 on the initial procurement, with reduced software maintenance costs accrued over the life of the system. The X5677 was also the better performing solution, with 15 to 35 percent faster user display performance on the faster platform.
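The cost comparison reduces to hardware price plus per core software licensing. The $5,000 per core license estimate comes from the text; the hardware prices below are approximations derived from the Figure 7-14 discussion ($10,300 for the X5677 configuration, $484 less for the L5640).

```python
def system_cost(hardware_price: float, licensed_cores: int,
                license_per_core: float = 5_000) -> float:
    """Initial procurement cost: hardware plus per core software licensing."""
    return hardware_price + licensed_cores * license_per_core

# X5677 solution: ~$10,300 hardware, 4 licensed core.
# L5640 solution: ~$9,816 hardware, 6 licensed core.
x5677 = system_cost(10_300, 4)   # 30300.0
l5640 = system_cost(9_816, 6)    # 39816.0
print(l5640 - x5677)             # 9516.0, roughly the $10,000 noted
```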



Figure 7-16 completes the same analysis using the CPT Design tab. An additional performance and price comparison was made using the Xeon X5680 platform, a slightly higher performance 12 core platform configuration that was not included in the Calculator analysis. The Design tab allows you to configure separate workflows and install them on separate platform tiers to complete the comparison, providing a single information product showing a direct comparison of the four platform solution options.

The first three platform tiers show the same analysis results provided by the CPT Calculator, with overall savings of about $10,000 by purchasing the high performance server. We see about the same results when reviewing costs for the high capacity server configuration; again we save over $10,000 on the initial purchase cost. Software cost for the 8 core high performance server is 33 percent less than for the 12 core platforms, which is what drives the system level savings. These savings continue over the system lifetime, accruing a 33 percent annual software maintenance cost savings over the life of the system.



Selecting the right platform is more challenging today than ever before. You need to do your homework before you buy and know the model number you are looking for, or you may be paying more for a platform that gives you less. Figure 7-17 provides a graphic overview of the platforms we just discussed, showing the relative performance per core for each. It also includes the previous year's platforms, which are still for sale at a reduced price; we saw some good performance gains in 2010. The new Turbo Boost capability increases the processor MHz during light processing loads to improve user performance.

The platforms that run with reduced power are slower than the full power configurations (reduced power means reduced user productivity). Know what you are shopping for before you buy and you will be much happier with the performance of your new platform selection.

When using a basemap layer or accelerated raster layer in ArcGIS 10 applications, multiple threads are started to perform drawing and blending operations, and because these operations occur in another thread, they can take advantage of another processor core.

ArcGIS Desktop Platform Sizing
Figure 7-18 provides an overview of supported ArcGIS workstation platform technology. This chart shows the Intel platform performance changes experienced over the past five years. The new Intel Core i7-2600 3400 MHz quad-core processor is more than 3 times faster and provides over 6 times the capacity of the Intel Core 2 Quad Q6700 platform that supported ARC/INFO workstation users in 2007. The advance of GIS technology is enriched by the remarkable contributions provided by Esri's hardware partners.



Full release and support for Windows 64-bit operating systems provide performance enhancement opportunities for ArcGIS Desktop workstation environments. The increasing size of operating system executables and the number of concurrent operations supporting GIS work make more memory and improved memory access an advantage for ArcGIS Desktop users. Recommended ArcGIS Desktop workstation physical memory with an ArcSDE data source is 3 GB, and 6 GB may be required to support large image and/or file-based data sources.

Most GIS users are quite comfortable with the performance provided by current Windows desktop technology. Power users and heavier GIS workflows will see big performance improvements with the faster Core i7 quad-core technology. Quad-core technology is now the standard for desktop platforms, and although a single process will see little performance gain in a multi-core environment, there will be significant user productivity gains from enabling concurrent processing of multiple executables. Desktop parallel processing environments are leveraged when using a basemap layer or accelerated imagery layer in ArcGIS 10 applications. 3D image streaming with ArcGIS Explorer 900 and future enhancements with 3D simulation and geoprocessing also leverage the increased capacity of multi-core workstation environments.

Video graphics cards enhance the ArcGIS Desktop user display environment, particularly for 3D Analyst performance and imagery display quality. ArcGIS 3D Analyst requires OpenGL-compatible graphics cards, as OpenGL technology is used for 3D display in the ArcGlobe and ArcScene applications. ArcGIS Explorer for Desktop also uses OpenGL technology for 3D rendering. Frequently asked questions on selecting a video card are provided in the 3D Analyst for ArcGIS Desktop 10 help documentation.

Windows Terminal Server/Remote Desktop Services Platform Sizing
Windows Terminal Server supports centralized deployment of ArcGIS Desktop applications for use by remote terminal clients. Figure 7-19 identifies three standard Windows Terminal Server software configurations. The ArcGIS Desktop direct connect architecture will be used to demonstrate how Windows Terminal Server sizing has been influenced by hardware technology change.



Figure 7-20 identifies how vendor hardware improvements have made a difference in Windows Terminal Server sizing over the past 5 years. Improvements in processor core performance, in conjunction with more processor core per chip, have significantly increased server throughput capacity (the number of concurrent users supported on a single platform). As the number of concurrent user sessions on a platform increases, memory requirements must also increase to accommodate the additional sessions. Heavier workflows can require more memory per session than lighter workflows. Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity.



Windows Terminal Servers were limited to about 15 concurrent users on a 2 processor (chip) platform back in 2006. Platform performance and capacity have really changed, supporting up to 91 concurrent users on a 4 core (1 chip) platform today.

It is important to take advantage of the Windows 64-bit operating system for the new Intel platforms, since these higher capacity servers require much more physical memory to handle the large number of concurrent active client sessions. Up to 48 GB of memory is required to take full advantage of the 46 – 91 concurrent user capacity available with the Citrix XenApp Server hosted on Xeon E3-1280 4 core (1 chip) 3500 MHz platforms. Deploying Windows Terminal Server in a 2 core virtual server environment would reduce server capacity to 12 – 23 concurrent users. A 64-bit operating system is critical for these high capacity servers, improving memory management and providing up to 10 percent performance gains over the Windows 32-bit Advanced Server operating systems.
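The memory sizing above reduces to simple arithmetic. The sketch below estimates Terminal Server physical memory from concurrent session counts; the per-session memory figure and operating system reserve are assumptions chosen to be consistent with the 48 GB / 91 user numbers quoted above, not published Esri parameters.

```python
# Rough Windows Terminal Server memory estimate: physical memory needed
# to host a given number of concurrent ArcGIS Desktop sessions.
def terminal_server_memory_gb(concurrent_users, gb_per_session=0.5, os_reserve_gb=2.0):
    """Estimate physical memory (GB) for a Terminal Server host.

    gb_per_session is an assumption (~0.5 GB per medium Desktop session,
    consistent with 48 GB supporting up to 91 users); heavier workflows
    need more memory per session.
    """
    return concurrent_users * gb_per_session + os_reserve_gb

print(terminal_server_memory_gb(91))  # 47.5 GB, in line with the 48 GB figure
print(terminal_server_memory_gb(23))  # 13.5 GB for a 2 core virtual server
```

Heavier workflows would raise `gb_per_session`, which is why the same platform supports a 46 – 91 user range rather than a single number.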

These high performance servers push capacity to new levels, and GIS applications may push platform and disk subsystems to their limits. You should monitor disk I/O and platform paging during peak loads to ensure these subsystems are not overloaded. More memory can reduce paging, and data can be distributed across disks to reduce contention. You need to know you have a problem before you can fix it, so keep an eye on platform performance metrics to verify that all is working as it should.



ArcSDE Geodatabase Server Sizing
Figure 7-21 identifies software configuration options for the geodatabase server platforms. The geodatabase transaction models apply to both ArcGIS Desktop and Web mapping service transactions. Normally a geodatabase is deployed on a single database server node, and larger capacity servers are required to support scale-up user requirements. We will use the Direct Connect architecture for the database sizing demonstrations.



The ArcSDE and DBMS display processing times (service times) are roughly the same for capacity sizing purposes, so the DBMS Server and ArcSDE Remote Server platform sizing would be about the same.

Figure 7-22 identifies the impact of hardware technology change on ArcSDE geodatabase server sizing over the past 6 years. Improvements in processor core performance, in conjunction with more processor cores per chip, have significantly increased server throughput capacity (the number of concurrent users supported on a single platform). As the number of concurrent user sessions on a platform increases, memory requirements will increase to accommodate the additional sessions. Heavier workflows can require more memory per session than lighter workflows. Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity.



Back in 2006, a database platform running on a commodity Intel server would be limited to around 55 – 105 concurrent users. Enterprise GIS systems supporting more than 100 GIS clients required higher capacity platforms – monolithic UNIX platforms that were several times more expensive than the Intel commodity servers. During that same period, Oracle developed their Real Application Cluster (RAC) architecture, which could leverage multiple commodity servers in a single database cluster environment – the idea was to extend database server capacity by leveraging lower cost commodity servers. Esri also introduced the ArcSDE distributed geodatabase technology around this same time – again looking for ways to better manage enterprise GIS environments with lower cost commodity servers.

The 2010 Intel Xeon X5677 8 core (2 chip) 3467 MHz commodity platform with 96 GB memory can support over 1400 concurrent geodatabase clients – over 13 times the capacity of the 2006 commodity platforms. The 2011 Xeon E3-1280 4 core (1 chip) 3500 MHz commodity platform can support over 820 concurrent users. Four core virtual servers deployed on the 2011 Xeon E7-2870 20 core (2 chip) server can support over 310 concurrent geodatabase users. This has changed how we think about enterprise capacity – the database hardware is no longer the limiting technology.
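The capacity gain quoted above is straightforward to check; both user counts come directly from the text, so this is just the arithmetic behind the "over 13 times" claim:

```python
# Server capacity scales roughly with (cores per platform) x (per-core performance).
# Both user counts below are quoted in the text.
baseline_users = 105    # 2006 commodity Intel server (upper bound of 55 - 105)
modern_users = 1400     # 2010 Xeon X5677 8 core (2 chip) platform

scale_factor = modern_users / baseline_users
print(f"Capacity gain: {scale_factor:.1f}x")  # Capacity gain: 13.3x
```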

ArcGIS Desktop Standard Workflow Performance
Figure 7-23 provides an overview of the display performance for Standard Esri Workflows used in the Capacity Planning Tool.



The 20 workflow combinations identified above can be generated from just three Standard Esri Workflows included on the Capacity Planning Tool workflow tab. The first chart shows workflows using the ArcGIS 10 Desktop Medium Workstation workflow, while the second chart shows the same workflows with the ArcGIS Desktop application supported on a Windows Terminal Server configuration.



Web Mapping Servers
Web mapping services platform sizing guidelines are provided for the ArcIMS and ArcGIS Server software technology. The ArcIMS image service is deployed using the ArcIMS software, and the ArcGIS Server map services are deployed using the ArcGIS Server software. All Web mapping technologies can be deployed in a mixed software environment (they can be deployed on the same server platform together). All mapping services can be configured to access a file data source or a separate ArcSDE database. Geodatabase access can be through direct connect or an ArcSDE server connection.

Web Mapping Performance Changes
Web mapping services have experienced dramatic performance changes over the past 5 years. These performance enhancements improve Web user productivity and reduce deployment cost. Some of these performance changes were due to expanding software deployment options and others were due to improved hardware processing speed and platform capacity changes.

Figure 7-24 identifies recommended software configuration options for standard two-tier Web mapping deployments. This configuration option supports the Web server and spatial servers (container machines) on the same platform tier. The following charts will show how technology has changed and its impact on Web server platform sizing.



2007-2008 Web Performance
Figure 7-25 provides an overview of available 2007 - 2008 technology. ArcGIS Server Web mapping applications were gaining market share, while ArcIMS mapping services retained a major market share. ArcIMS and ArcGIS Server ADF display performance improved slightly, ranging from 1 - 2.5 seconds over remote 1.5 Mbps connections. Typical entry level ArcIMS Image Service configurations supported peak throughput of 19,000 to 55,000 transactions per hour, while richer ArcGIS Server ADF applications supported about half this capacity.

ArcGIS Server REST services and a new Map Cache data source were introduced in 2008, expanding ArcGIS Server development options. ArcGIS Server REST services improved entry level dynamic Web mapping throughput capacity by over 20 percent over similar ArcGIS Server ADF deployments. The Map Cache data source reduced dynamic server loads to almost zero (preprocessed map services), with remote client display performance determined primarily by network bandwidth. Map cache tiles are retained in the local browser cache, providing a very fast Web mapping experience for clients working in an established local area.
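A REST map service is consumed through simple stateless HTTP requests, which is part of why it is lighter than the ADF application tier. The sketch below builds an export request URL in the shape the ArcGIS Server REST API uses; the host and service names are hypothetical placeholders, and no request is actually sent.

```python
from urllib.parse import urlencode

# Build an ArcGIS Server REST 'export' request for a dynamic map image.
# Host and service names are hypothetical; parameter names follow the
# REST API export operation (bbox, size, format, f).
params = {
    "bbox": "-118.7,33.6,-117.9,34.4",  # xmin,ymin,xmax,ymax in map units
    "size": "600,450",                  # output image width,height in pixels
    "format": "png",
    "f": "image",                       # return the rendered image directly
}
url = ("https://gisserver.example.com/arcgis/rest/services/"
       "Basemap/MapServer/export?" + urlencode(params))
print(url)
```

Because the response is a plain image keyed by a URL, results are cacheable by browsers and proxies, which is the property the Map Cache data source exploits.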



2009 Web Performance
Figure 7-26 provides an overview of available 2009 technology. This was the first year ArcGIS Server provided a dynamic Web mapping deployment pattern that outperformed the ArcIMS Image service, removing any remaining ArcIMS performance advantage over ArcGIS Server and providing a broad range of proven functional benefits that encouraged migration from ArcIMS to current Web mapping software technology. Web mapping performance improved to a range of 0.5 - 2.0 seconds over remote 1.5 Mbps connections. Typical entry level ArcIMS Image Service configurations supported peak throughput up to 63,000 transactions per hour, while similar ArcGIS Server dynamic mapping applications supported peak throughput loads up to 80,000 transactions per hour.

ArcGIS Server REST MSD services and improved Map Cache base layer mashups were introduced in 2009, enhancing and expanding ArcGIS Server development options. ArcGIS Server REST MSD services improved entry level dynamic Web mapping throughput capacity by over 100 percent over comparable ArcGIS Server REST MXD deployments, significantly enhancing both performance and map quality. Map Cache base layer mashups significantly reduced dynamic map layer transaction loads, introducing a new back-office data management strategy (pre-processing map cache basemap layers) for publishing fast interactive mapping services.

The 9 workflow combinations above provide a representative subset of the Standard Esri Workflows for Web Mapping Services included on the Capacity Planning Tool workflow tab. The CPT Calculator can generate hundreds of customer workflow performance targets based on software technology selection and map service configuration parameters. The ArcGIS Server 9.3.1 software technology options, along with platform performance improvements of over 70 percent per core, made 2009 a record-breaking year for Web service performance improvements.


2010 Web Performance
Figure 7-27 provides an overview of available 2010 technology. Web mapping performance continued to improve to a range of 0.3 - 2.0 seconds over remote 1.5 Mbps connections. Typical entry level ArcIMS Image Service configurations supported peak throughput up to 78,000 transactions per hour, while similar ArcGIS Server dynamic mapping applications supported peak throughput loads up to 97,000 transactions per hour.


2011 Web Performance
Figure 7-28 provides an overview of available 2011 technology. Web mapping performance continued to improve to a range of 0.25 - 1.9 seconds over remote 1.5 Mbps connections. Typical entry level ArcIMS Image Service configurations supported peak throughput up to 88,000 transactions per hour, while similar ArcGIS Server dynamic mapping applications supported peak throughput loads up to 109,000 transactions per hour.

Network bandwidth is currently one of the primary factors impacting Web client display performance. Server processing load variations among the different ArcGIS Server deployment patterns have a secondary impact on client display performance. Server platform technology (processor performance and platform capacity), along with the software technology and display performance parameters, determines platform sizing and peak server throughput capacity.
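The peak throughput figures above follow from a simple capacity relationship: peak transactions per hour equal total core capacity divided by per-transaction service time. The sketch below illustrates it; the 8-core platform and 0.264-second service time are illustrative assumptions chosen so the result lands near the 109,000 TPH 2011 figure, not published benchmarks.

```python
# Peak platform throughput estimate: TPH = cores * 3600 / service_time.
# service_time_sec is the per-transaction processing time on one core.
def peak_tph(cores, service_time_sec):
    return cores * 3600 / service_time_sec

# Illustrative: ~0.264 s of map service processing time per display on an
# assumed 8 core Web server yields roughly the 109,000 TPH quoted for 2011.
print(round(peak_tph(8, 0.264)))  # 109091
```

The same relationship explains the year-over-year gains: faster cores shrink the service time, and more cores per chip multiply the numerator.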

Platform Selection Criteria
Figure 7-29 provides a summary of the factors contributing to proper hardware selection. These factors include the following:



Platform Performance: The platform must be configured properly to support user performance requirements. Proper platform technology selection based on user performance needs and peak system processing loads significantly reduces implementation risk. Esri performance sizing models establish a solid foundation for proper hardware platform selection. The Esri Capacity Planning Tool automates the system architecture design analysis, providing a framework for coupling enterprise GIS user requirements analysis with system architecture design and proper platform technology selection.
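Comparing candidate platforms requires normalizing by relative per-core performance, which is how a capacity planning analysis translates a workload sized on one platform onto another. The sketch below is a minimal version of that idea; the benchmark ratings are illustrative values, not vendor numbers.

```python
import math

# Translate a core requirement from a baseline platform to a target platform
# using relative per-core performance (e.g., a published benchmark ratio).
def cores_required(baseline_cores, baseline_perf, target_perf):
    """Cores needed on the target platform to match baseline throughput."""
    return math.ceil(baseline_cores * baseline_perf / target_perf)

# Illustrative: a load needing 8 cores on a platform rated 20 per core
# needs only 5 cores on a faster platform rated 35 per core.
print(cores_required(8, 20, 35))  # 5
```

Rounding up to whole cores matters in practice, since platforms are purchased in fixed core counts; pricing comparisons should then be made between configurations of equal delivered capacity, as noted under Purchase Price below.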

Purchase Price: Cost of hardware will vary depending on the vendor selection and platform configuration. Capacity Planning Tools can identify specific technology required to satisfy peak system processing needs. Pricing should be based on the evaluation of hardware platforms with equal display performance platform workflow capacity.

System Supportability: Customers must evaluate system supportability based on vendor claims and previous experience with supporting vendor technology.

Vendor Relationships: Relationships with the hardware vendor may be an important consideration when supporting complex system deployments.

Total Life Cycle Costs: Total cost of the system may depend on many factors including existing customer administration of similar hardware environments, hardware reliability, and maintainability. Customers must assess these factors based on previous experience with the vendor technology and evaluation of vendor total cost of ownership claims.

Establishing specific hardware technology specifications for evaluation during hardware source selection significantly improves the quality of the hardware selection process. Proper system architecture design and hardware selection provide a basis for successful system deployment.

Previous Editions
Platform Performance 29th Edition (Spring 2011)
Platform Performance 28th Edition (Fall 2010)
Platform Performance 27th Edition (Spring 2010)

System Design Strategies - An Esri® Technical Reference Document (26th edition, 2009, final PDF release)