Platform Performance
Spring 2013, 32nd Edition

Chapter 3 (Software Performance) discussed best practices for building high-performance map services and the importance of selecting the right software technology to support your business needs. This chapter focuses on hardware platform performance and explains the value of selecting the right computer technology to support your system performance needs.

Selecting the right hardware will improve user performance, reduce overall system cost, and establish a foundation for building effective GIS operations. Selecting the wrong hardware can contribute to implementation failure - spending money on a system that will not support your business needs.

Hardware vendors do not know what hardware is required to satisfy your GIS needs. This chapter shares the system architecture design methodology developed to help you select the right hardware for your planned GIS operations. This chapter also shares information for justifying hardware purchases based on expected return on investment.

Platform Performance Baseline
The world we live in today is experiencing the benefits of rapid technology change. Technology advancements are directly impacting GIS user productivity—the way we all deal with information and contribute to our environment. Our ability to manage and take advantage of technology benefits can contribute to our success in business and in our personal life.



To develop a system design, it is necessary to identify user performance needs. User productivity requirements can be represented by workstation platforms selected by users to support their computing needs. GIS users have never been satisfied with platform performance, and each year power users are looking for the best computer technology available to support their processing needs.

As platform technology continues to improve, user performance expectations may change. It is not clear just when computers will be fast enough for GIS professionals – there is always more we could do if we just had the power. As user productivity improves, application and data servers must be upgraded to service the increasing user desktop processing requirements.

GIS user performance expectations have changed dramatically over the past 10 years. This change in user productivity is caused primarily by faster platform performance and lower hardware costs. Figure 8.1 identifies the favorite hardware platforms selected by GIS users since the ArcGIS Desktop 8.2 release in July 2002. High performance desktop workstations have made a primary contribution to improving user productivity and expanding acceptance of GIS technology.

Each year we review hardware vendor technology to identify the best available platform for GIS professional workstation users. The highest performing platform is used to identify our performance baseline for each calendar year. The Xeon E5-2643 4-core (1 chip) 3300 MHz was identified as our favorite 2012 workstation.

Xeon E5-2643 4-core (1 chip) 3300 MHz platform
 * includes four of the fastest processor cores released by Intel in early 2012.
 * ArcGIS for Desktop workstation recommended memory is increased to 8 GB to accommodate expanding GIS use of large imagery files and increasing emphasis on time aware geoprocessing analysis.



Performance Baseline history


Figure 8-2 provides a graphic overview of relative Intel processor performance since 2003. Platform per core performance has increased by a factor of 10 over the past 10 years. Hardware vendor platform performance improvements contribute to improved business productivity and system computer capacity, reducing the overall cost of automated business systems.

The boxes at the bottom of the chart represent the performance baselines used to support the Esri capacity planning models over the past 10 years. Performance baselines are specified by calendar year based on the per-core performance of available platform technology. These performance baselines are reviewed and updated each year to keep pace with rapidly changing hardware technology.



Moore's Law


Gordon E. Moore, who later co-founded Intel, published a paper in 1965 predicting that the number of components in integrated circuits would double every year through at least 1975. His prediction, known today as Moore's law, has held remarkably well for nearly 50 years and continues to contribute to computer performance gains.

Reducing the distance between components on integrated circuits has a direct impact on processor compute performance. A quantity that doubles every two years, when plotted on a chart, produces an exponential growth curve. Figure 8-3 shows a plot of the Esri performance baselines with an exponential curve overlay. From this chart, you can see that the performance gains we have experienced since CY2003 track remarkably close to the exponential gains predicted by Gordon E. Moore. In fact, if past experience is a good predictor of the future, we can expect some remarkable per-core performance gains over the next couple of years, as indicated on the chart.
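As a rough illustration of this exponential trend, a minimal sketch; the CY2003 baseline value of 1.0 is a hypothetical normalization, not an Esri-published figure:

```python
# Sketch of exponential performance growth, normalized to a hypothetical
# CY2003 per-core baseline value of 1.0.
def projected_per_core(base_value, base_year, target_year, doubling_years=2.0):
    """Project relative per-core performance for an exponential growth trend."""
    return base_value * 2 ** ((target_year - base_year) / doubling_years)

# Doubling every 2 years predicts a 32x gain from CY2003 to CY2013; the
# observed 10x gain over that span corresponds to doubling roughly every 3 years.
print(projected_per_core(1.0, 2003, 2013))  # -> 32.0
```

Comparing the 32x prediction with the observed 10x gain shows why the chapter hedges on whether per-core gains will continue at the historical pace.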

There is some discussion again this year questioning whether platform per-core performance will continue to improve as it has over the past 10 years. Moore's law deals with components getting smaller and closer together on integrated circuits with each production design cycle. Performance has improved due to shorter travel distances between processor chip components. Distances between components today are approaching atomic scale, and there may be physical limits to how much faster the chip can perform. Other factors such as temperature and cooling limitations could also cap processor performance gains, and performance per core could start to level off with future chip releases (represented by the red curve on the chart). High-capacity chip configurations (10 cores per chip, 12 cores per chip, etc.) are better for virtual server deployment, but they generate more heat and limit peak processing speeds (MHz), another factor limiting per-core performance gains.

Faster platforms provide more service with less hardware
Figure 8-4a represents the relationship between server platform performance and peak entry level Web mapping service throughput. The chart shows rapidly increasing software license service capacity rendered by platform performance improvements shown in Figure 8-2.

Platform performance improvements reduce software cost. Dynamic web mapping services deployed using an entry-level ArcIMS software license in 2003 could support up to 25 concurrent users (10,000 TPH). Those same mapping services deployed with an entry-level ArcGIS for Server license on 2012 platform technology can support over 175 concurrent users (65,000 TPH) with better quality and functionality.
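The concurrent-user figures quoted here imply a fixed per-user transaction rate; a rough arithmetic sketch, where the 400 TPH-per-user rate is derived from the 2003 figures rather than being an Esri-published standard:

```python
# Back out the implied per-user display rate from the 2003 figures, then
# apply it to the 2012 throughput; illustrative arithmetic only.
def users_supported(peak_tph, tph_per_user):
    """Concurrent users supported at a given peak transaction throughput."""
    return peak_tph // tph_per_user

tph_per_user = 10_000 // 25   # 400 transactions per hour per concurrent user
print(users_supported(65_000, tph_per_user))  # -> 162, near the ~175 quoted
```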

Web deployment timelines are significantly reduced with new software, reducing software development and deployment expenses. Web mapping software that took over 6 months to develop and deploy in 2003 can be deployed within a few days with 2013 technology.

Improved hardware platform performance is driving a significant reduction in overall Enterprise GIS system cost.



Web map display performance history
The change in hardware performance over the years has introduced unique challenges for capacity planning and for software vendors trying to satisfy customer performance and scalability expectations. Understanding how to represent hardware performance differences is critical when addressing capacity planning, performance, and scalability issues.

Figure 8-4 shows how user expectations have changed over the past 10 years. An ArcGIS Desktop medium dynamic map display in CY2003 took over 4 seconds to process. That same map display today can be rendered in less than 0.4 seconds, over 10 times faster than just 10 years earlier. Most of this performance gain can be attributed to faster processor cores.

Figure 8-4 shows a minimum user performance expectation range (1-2 seconds) which we believe may open new opportunities for GIS analysis and display. Traditional heavy map displays can now be rendered in less than 1 second, suggesting hardware technology may no longer be a limitation on GIS user productivity. IT departments see this as an opportunity to buy higher capacity platforms and leverage virtual server environments and cloud computing to simplify their administration workload (exchanging user display performance for lower administration costs). I expect GIS users will see this as an opportunity to incorporate more complex analysis into their user workflows, leveraging more compute intensive statistical analysis, logistics routing functions, and business analytics for use in their standard business workflows. Heavier processing workflows will require continued hardware performance improvements to keep user productivity at a peak level.



Relative platform performance
Knowing how to account for platform technology change is fundamental to understanding capacity planning. Figure 8-5 identifies a simple relationship that we have used since 1992 to relate platform performance with capacity planning.

The relationship simply states that if one can determine the amount of work (peak throughput) that can be supported by server A and identify the relative peak throughput performance between server A and server B, then one can estimate the amount of work that can be supported by server B. This relationship is true for single-core and for multi-core servers.
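The Figure 8-5 relationship can be sketched directly in code; the benchmark values and measured throughput below are hypothetical placeholders:

```python
# Capacity of server B estimated from server A's measured peak throughput and
# the ratio of their relative performance benchmark values.
def estimate_throughput_b(throughput_a, perf_a, perf_b):
    """Peak throughput of server B, scaled by relative benchmark performance."""
    return throughput_a * (perf_b / perf_a)

# Hypothetical: server A supports 10,000 TPH with a benchmark value of 150;
# server B benchmarks at 300, so it should support roughly twice the load.
print(estimate_throughput_b(10_000, 150, 300))  # -> 20000.0
```

The same ratio applies whether the benchmark values represent single-core or multi-core (throughput) measurements, which is why the relationship holds for both server types.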



Platform performance resources
Having a fair measure for relative platform performance and capacity is important. Selection of an appropriate performance benchmark and agreement on how the testing will be accomplished and published are all very sensitive hardware marketing issues. You need a performance measure that is accepted by the vendors – preferably one they provide themselves.

SPEC performance benchmarks
The Standard Performance Evaluation Corporation (SPEC) is a consortium of hardware vendors initially established in the late 1980s for the purpose of establishing guidelines for conducting and sharing relative platform performance measures. They have developed a variety of standard benchmarks with testing performed by the hardware vendors and published on the SPEC Web site.

The SPEC compute-intensive benchmarks have been used by Esri as a reference for relative platform performance since 1992. The system architecture design platform sizing models used in conjunction with these relative performance measurements have supported Esri customer capacity planning since that time. The SPEC benchmarks were updated in 1996, 2000, and 2006 to accommodate technology changes and improve relative performance measurements.

SPEC provides separate sets of integer and floating-point CPU-intensive benchmarks. Processor cores can be optimized to support integer or floating-point calculations, and performance can differ significantly between the two. Platform capacity test results with ArcGIS for Desktop and Server software have tracked quite closely to the SPEC CPU integer relative performance benchmark results, confirming that the ArcGIS component software code predominantly uses integer calculations. The SPEC CPU integer benchmarks provide the best relative platform performance estimates for representing ArcGIS software technology.

SPEC provides two methods for conducting and publishing their CPU integer benchmark results. The SPECint2006 is a speed benchmark measuring execution time for a single benchmark instance and then uses the result for calculating relative platform performance. The SPECint_rate2006 is a throughput benchmark, and is conducted using several concurrent benchmark instances (one for each core thread) and measures executable instance cycles over a 24-hour period. The SPECint_rate2006 benchmark results are used for relative platform capacity planning metrics in the Esri system architecture design sizing models.

There are two results published on the SPEC site for each benchmark: the conservative (baseline) and the aggressive (result) values. The baseline values are published first by the vendors, and the aggressive values are published later following additional tuning efforts. Either value can be used to estimate relative server performance, although the baseline values remove tuning sensitivities and provide the more conservative estimate. We recommend using the baseline benchmarks for capacity planning, and the published SPEC CPU benchmark baseline values are used in the Capacity Planning Tool.

Several benchmarks are published on the SPEC Web site. You will need to select the SPECrate2006 Rates and then scroll down to the configurable request section, where you can select the specific items you want included in your display query. I like to include the processor and the processor MHz in my display, which are not included in the default selection. The processor characteristics include the maximum Turbo Boost MHz, which can be used to estimate maximum performance at low utilization levels.

More information on the SPEC CPU benchmarks can be found on the SPEC Web site.


 * CPT Hardware tab

The CPT Hardware tab includes a list of Desktop and Server SPEC CPU platform benchmark baseline values used as a lookup table by the CPT Calculator, Design, Test, and Favorites tabs.

Published vendor benchmark values are used to identify relative throughput and performance for selected hardware platforms. Platforms are arranged by vendor and year in two lookup lists. Desktop candidates are located at the top of the list. Server candidates are located at the bottom of the list. Project platform candidates are located in the middle of the list and included with the Desktop and Server list selections.

The SPEC Web site is the primary source for the platform performance metrics. Information from the SPEC Web site is entered into the CPT Hardware tab for capacity planning. A copy of the SPEC benchmark information is provided in the HardwareSPEC Excel workbook for easy access. The SPEC benchmark values are used to adjust baseline service times to the selected platform's service times for capacity planning analysis.
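The service-time adjustment described above reduces to a simple ratio; the per-core benchmark values and baseline service time below are hypothetical placeholders, not published CPT figures:

```python
# Scale a baseline display service time to a selected platform using the
# ratio of per-core SPEC values: a faster core yields a shorter service time.
def adjusted_service_time(baseline_time_s, baseline_spec_per_core, target_spec_per_core):
    return baseline_time_s * (baseline_spec_per_core / target_spec_per_core)

# Hypothetical: 0.40 s service time on a 35 SPEC/core baseline platform,
# deployed on a 45 SPEC/core target platform.
print(round(adjusted_service_time(0.40, 35, 45), 3))  # -> 0.311
```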


 * HardwareSPEC Excel Workbook

The Esri Capacity Planning Tool release site shares a HardwareSPEC workbook with an Excel table of platform relative performance values from the published SPECrate_integer benchmarks.


 * Adding a new platform to the CPT Hardware tab

New hardware platform benchmark values are published on the SPEC Web site each month throughout the year, so the platform you need for your design analysis may not be included in your version of the CPT. You can locate the new benchmark values on the SPEC Web site and then add them to your CPT Hardware tab.

Platform Performance
Hardware vendor technology has been changing rapidly over the past 10 years. Improved hardware performance has enabled deployment of a broad range of powerful software and continues to improve user productivity. Sub-second server processing times suggest that future user productivity gains will likely come from more loosely coupled operations, higher capacity network communications, disconnected processing, mobile operations, pre-processed cached maps, and more rapid access and assimilation of distributed information sources.

System processing capacity remains very important, and system availability and scalability even more so. The quality of the information product (display and database design) provided by the technology can make a user's think time more productive. The proper tradeoff between display quality and performance contributes to optimum user productivity.

Hardware vendor performance gains
Much can be learned about server platform competition from vendor published SPEC benchmarks. Figure 8-9 provides an overview of relative per core performance for key vendor-published benchmarks from 2008 to 2012.

Intel processors have established a strong performance leadership position since their 2006 release of the Xeon 5160 processor.


 * Intel Xeon E5-2637 4-core chips set 2012 per-core performance over 45, with high capacity (32 core) E5-2690 platforms just over 20.
 * IBM Power7 2011 performance peaked just under 45; expect some 2012 gains later in the year. IBM AIX chip has been very competitive with Intel in the Linux environment.
 * AMD Opteron's best performance peaks just over 17. We have seen only minor AMD performance gains since 2006.
 * Oracle (SUN) SPARC64 VII+ Solaris 2011 peak performance reached 17. We have seen only minor SPARC Solaris gains since 2006.
 * Most other processor vendors have dropped out of the competition.

Esri ArcGIS software is deployed on Intel and AMD processors, supporting both Intel Linux and Windows deployments. The Intel platforms deliver a strong performance advantage over AMD, improving server peak throughput capacity and reducing overall software licensing costs. 

Processing speed drives platform throughput
Figure 8-10 shows the relationship between display processing time and system throughput.

There is an inverse relationship between server processing time and peak system throughput. Faster per-core processor performance reduces service processing time. Shorter processing time means each processor core can service more requests, and more service requests per core means more peak throughput.
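That chain of reasoning reduces to simple arithmetic; a sketch using a simplified model that ignores queuing effects and assumes each core services one request at a time:

```python
# Peak displays per hour for a server: capacity scales with core count and
# inversely with per-display service time.
def peak_tph(cores, service_time_s):
    return cores * 3600 / service_time_s

# Halving the service time doubles peak throughput for the same core count.
print(peak_tph(4, 0.8))  # -> 18000.0
print(peak_tph(4, 0.4))  # -> 36000.0
```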

ArcGIS for Server licensing is based on the number of physical or virtual server cores on the hardware platform (all cores are treated equally). More processor cores imply more server throughput, with license cost based on the number of server cores. Faster processor cores optimize platform peak throughput for a given server license (more bang for the buck).

Warning: Deploying ArcGIS for Server on platforms with slower processor core will increase customer cost per service transaction.



2012 Technology Changes
Figure 8-11 highlights the technology changes that are making a difference in 2012. We are seeing a growing number of higher-capacity servers: more processor cores per chip, more chips per server, and a growing number of platform configuration options offered by the hardware vendors.

Processor vendors are responding to new data center platform environments. Virtual server deployments are becoming standard practice for enterprise data centers. Vendors are introducing faster 4-, 6-, 8-, 10-, and 12-core chips, along with a growing number of high-capacity 4-chip servers. Data center consolidation efforts are expanding rapidly, and the number of private and public cloud hosting vendors is growing quickly.

Data Center consolidation can save operational costs. Cloud hosting can change how we manage and support our enterprise business operations.

Warning: All of the Cloud vendor administrative savings may not be passed on to the customer.

Hardware processing performance continues to improve. New 2012 processor cores are over 12 percent faster than 2011. New multi-core desktop and server processors include turbo boost technology.

Enterprise license agreements provide a more adaptive and cost effective way to manage enterprise GIS operations. ArcGIS for the cloud provides an expanding range of deployment options. (ArcGIS Online software as a service offering, Amazon Cloud platform as a service offering).

ArcGIS Online provides a rich platform to share, collaborate, and deploy your GIS services within your organization or with a shared community of users.



Platform identification
How we identify the platform configuration we want has been changing. Hardware vendors are providing a wide range of choices at different performance levels for different user communities. New processors may perform faster than older processors that run at a higher clock speed (MHz), and processor speed is no longer a good measure of performance. Figure 8-12 shares how vendors have responded to this problem, and the nomenclature we use to make sure we understand the platform we are talking about.

Hardware vendors have identified specific model numbers that are unique for each processor chip configuration (E5-2643). Hardware vendors use these chips as components in building their server offerings. There are a limited number of processor chip manufacturers still competing in our marketplace (Intel and AMD provide all of the Windows processor technology). These processor chips are used in building all of the hardware vendors' platform offerings.

The total number of processor cores identifies how many user requests can be processed at the same time. Total cores is a key parameter for establishing appropriate memory and identifying the proper software configuration and platform capacity. You may find vendors identifying platforms by the number of chips and how many cores per chip; you need to do the math to identify the total number of cores. This can be confusing, and for this reason the CPT terminology we use includes the total number of cores. The total number of chips is provided for information purposes (not as important as understanding the total number of cores). Some vendors refer to chips as sockets: the chip is the package that holds the integrated circuits and the processor cores, and it plugs into a socket, so the terms are used interchangeably. I use chip in the CPT nomenclature because this is the term used by SPEC and is shorter than socket.
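The math in question is trivial but worth making explicit, using the 4-core E5-2643 chip discussed earlier in this chapter:

```python
# Total platform cores = chips per server x cores per chip.
def total_cores(chips, cores_per_chip):
    return chips * cores_per_chip

# A 2-chip server built from 4-core E5-2643 chips is an 8-core platform.
print(total_cores(2, 4))  # -> 8
```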



2012 Intel processor performance
The right platform selection is based on a balance between server capacity, performance, and power. Figure 8-13 identifies the Intel processors available for the vendor 2012 hardware.

Intel provides four commodity server deployment strategies for 2012 platforms.
 * Dual core chip configurations (1 and 2 chip configurations)
 * Quad core chip configurations (1 and 2 chip configurations)
 * Six core chip configurations (1, 2, and 4 chip configurations)
 * Eight core chip configurations (1, 2, and 4 chip configurations)

What platform should I buy? Dual and Quad core chip configurations provide the highest per-core performance. Six and Eight core chip configurations can support a higher number of virtual servers per platform.

The high core performance for the 2012 six and eight core chip configurations is quite impressive, making your final server decision a challenging choice (particularly for larger virtualized data center environments that can take advantage of the higher capacity servers).

Stay away from the slower performing platform models, they will likely end up costing you more in software licensing. 

2012 ArcGIS for Server platform selection
When you go to purchase a platform, vendors are not very good at providing the performance numbers. I will say, to the vendor’s credit, that they are good at providing their performance numbers on the SPEC site (but not on their sales page). You need to do your homework before you buy your hardware. With GIS servers, platform performance is important both for optimum user productivity and to reduce overall system cost. The good news (for GIS users) is that the best performing hardware often delivers the lowest overall system costs. If you don’t do your homework, you might miss the savings.

Figure 8-14 provides an overview of platform configuration options available on a DELL site, showing their relative per-core performance and dollars per transaction. Dollars per transaction is calculated by dividing the total cost of the server by the relative SPEC benchmark throughput value. *Hardware pricing was based on list values.
 * Esri Software estimated at $5,000 per core.
 * SPEC baseline used for throughput.
 * SPEC baseline/core used for processor speed.
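The dollars-per-transaction comparison can be sketched as follows; the hardware prices and SPEC throughput values are hypothetical placeholders, while the $5,000-per-core software estimate comes from the list above:

```python
# Total platform cost (hardware plus software licensed per core) divided by
# SPEC benchmark throughput gives a dollars-per-throughput comparison metric.
def dollars_per_throughput(hardware_cost, cores, spec_rate, sw_per_core=5_000):
    return (hardware_cost + cores * sw_per_core) / spec_rate

fast = dollars_per_throughput(9_000, 8, 310)   # hypothetical faster 8-core server
slow = dollars_per_throughput(6_000, 8, 180)   # hypothetical cheaper, slower server
# The faster platform delivers more throughput per dollar despite its higher price.
print(fast < slow)  # -> True
```

Because software cost dominates the total, the SPEC throughput in the denominator usually decides the comparison, which is why the fastest processors tend to win.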

Highest performance processors provide best return on investment.


 * CPT used to evaluate best buy

CPT was designed to automate the system architecture design analysis. It is particularly suited to translate user business requirements to appropriate hardware platform selections.


 * Business needs

Proper hardware selection depends on a clear understanding of your business needs. Figure 8-15 shows how to select the optimum platform for publishing ArcGIS 10.1 REST mapping services.


 * Platform pricing analysis


 * Completing the capacity planning analysis

Business workflow requirements are used to identify the required server cores for each analysis.

Best Buy:
 * Xeon E5-2690 platform provides best performance.
 * Xeon E5-2690 platform has the lowest cost.


 * Design platform analysis summary report

The CPT Design tab can be used to evaluate all servers in a single report.

ArcGIS Desktop Platform Sizing
ArcGIS for Desktop system requirements are identified in the ArcGIS 10.1 help. Figure 8-20 shows Intel platform performance gains experienced over the past five years. The new Intel Xeon E5-1620 3600 MHz quad-core processor is more than 2.5 times faster than, and provides over 5 times the capacity of, the Intel Core 2 Duo E8500 3166 MHz platform that supported ARC/INFO workstation users in 2008. The advance of GIS technology is enriched by the remarkable contributions provided by Esri's hardware partners.

Workstation life cycle upgrades depend on user performance needs.
ArcGIS for Desktop power user productivity is often limited by processor per-core performance, and upgrading power user workstations can increase user productivity. Upgrade ArcGIS for Desktop power user workstations whenever there is a large improvement in processor per-core performance. Typical power user workstation life cycle is 1-2 years.

Warning: A single user display session takes advantage of a single processor core. Display performance is determined by per-core processor speed, not by the total number of available processor cores.

ArcGIS for Desktop standard users can work fine with slower display performance. It is a good practice to upgrade ArcGIS for Desktop casual users every 2-3 years to maintain work productivity.

Windows terminal clients and web clients require much less processing, and can work fine with most standard office workstations. Upgrade terminal and browser client users every 3-5 years to maintain work productivity.

Workstation operating system
Full release and support for Windows 64-bit operating systems provides performance enhancement opportunities for ArcGIS for Desktop workstation environments. Windows 64-bit OS improves ArcGIS for Desktop memory access, supports a larger number of concurrent background sessions, and takes advantage of higher memory capacity for data caching. Working with imagery, cached basemap tiles, and local feature cache are examples where ArcMap display performance can be improved when accessing data from local memory cache.

Make sure your workstation has sufficient physical memory to handle your application workflow. 3 GB memory is adequate for most SDE geodatabase workflow clients. 6 GB memory or more may be required when working with large data files (imagery, shape files, etc). Additional memory may be required when working with several different applications on the same workstation.

Workstation performance
Most GIS users are quite comfortable with the performance provided by current Windows desktop technology. Power users and heavier GIS user workflows will see big performance improvements with the faster E5-1600 and E5-2600 technology.
 * Quad-core technology is now the standard for desktop platforms; although a single process will see little performance gain in a multi-core environment, there are significant user productivity gains from enabling concurrent processing of multiple executables.
 * Turbo boost increases per-core processing performance when supporting a single user session and background processing is not required.
 * Desktop parallel processing environments are leveraged when using a basemap layer or accelerated imagery layer in ArcGIS 10 applications.
 * 3D image streaming with ArcGIS Explorer 900 and future enhancements with 3D simulation and geoprocessing also leverage the increased capacity of multi-core workstation environments.

Video display processing
Video graphics cards enhance the ArcGIS for Desktop user display environment, particularly for 3D analysis performance and imagery display quality. ArcGIS 3D Analyst requires OpenGL-compatible graphics cards, as OpenGL technology is used for 3D display in the ArcGlobe and ArcScene applications. ArcGIS Explorer for Desktop also uses OpenGL technology for 3D rendering. Frequently asked questions for selecting a video card are provided in the 3D Analyst for ArcGIS for Desktop 10 help documentation.



Windows Terminal Server/Remote Desktop Services Platform Sizing
Windows Terminal Server supports centralized deployment of ArcGIS Desktop applications for use by remote terminal clients. Figure 8-21 identifies three standard Windows Terminal Server software configurations. The ArcGIS Desktop direct connect architecture will be used to demonstrate how Windows Terminal Server sizing has been influenced by hardware technology change.

ArcGIS for Desktop terminal server platform capacity changes
Figure 8-22 identifies how vendor hardware improvements have made a difference in Windows Terminal Server sizing over the past 5 years. Improvements in processor core performance, in conjunction with more processor cores per chip, have significantly increased server throughput capacity (the number of concurrent users supported on a single platform). As the number of concurrent user sessions on a platform increases, memory requirements must also increase to accommodate the additional sessions. Heavier workflows can require more memory per session than lighter workflows. Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity.

Server capacity has tripled over the past 5 years.
 * 2008 4-core server supported 20 - 39 concurrent users (24 GB RAM)
 * 2010 4-core server supported 40 - 79 concurrent users (40 GB RAM)
 * 2012 4-core server supported 51 - 101 concurrent users (48 GB RAM)
 * 2012 2-core virtual server supports 19 - 39 concurrent users (24 GB RAM).

These are approximate capacity estimates based on our capacity planning models.
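A rough sizing sketch consistent with these estimates; the base allowance and per-session memory values are illustrative assumptions, not Esri-published parameters:

```python
# Terminal server memory estimate: a base OS/overhead allowance plus a
# per-session allowance for each concurrent user session.
def required_memory_gb(concurrent_users, per_session_gb=0.4, base_gb=8):
    return base_gb + concurrent_users * per_session_gb

print(required_memory_gb(100))  # -> 48.0, in line with the 2012 4-core estimate
```

Heavier workflows would use a larger per-session allowance, which is why memory, not processor capacity, often sets the practical session limit.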

If you are going to run XenDesktop on a virtual server (i.e., on top of a hypervisor), it is best to use the Citrix XenServer hypervisor rather than the VMware ESXi hypervisor. XenServer is supported by the same software company (Citrix), and customers have been more successful configuring and supporting XenDesktop on XenServer than combining products from two different companies (i.e., XenDesktop on VMware ESXi).

Higher performance platforms require more physical memory
Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity. Heavier user workflow can require more session memory than lighter workflows.

It is important to take advantage of the Windows 64-bit operating system for the new Intel platforms, since these higher-capacity servers require much more physical memory to handle the high number of concurrent active client sessions. 64-bit operating systems improve memory management and provide up to a 10 percent performance gain over Windows 32-bit Server Advanced operating systems.

These high performance servers push capacity to new levels, and GIS applications may push platform and disk subsystems to their limits. Monitor disk traffic and platform paging during peak loads to ensure these subsystems are not overloaded. More memory can reduce paging and reduce disk contention by improving data caching. Data can be distributed over multiple disk volumes to reduce file access contention. You need to know you have a problem before you can fix it, so keep an eye on platform performance metrics to see all is working as it should.

Several best practices and practical limitations for deploying ArcGIS Desktop on Windows Terminal Server are identified in the GIS Product Architecture chapter.


 * CPT for Windows Terminal Server platform sizing

CPT Calculator tab can be used for Windows Terminal Server platform sizing.

The recommended platform solution is generated by Excel once you enter your business requirements and make your hardware selections. You can try different platform configurations and experiment with different workflow complexities.

The CPT Calculator tab can be used for single workflow platform sizing. The CPT Design tab should be used for more detailed enterprise design planning.

ArcSDE Geodatabase Server Sizing
Figure 8-28 identifies software configuration options for the geodatabase server platforms. The geodatabase transaction models apply to both ArcGIS Desktop and Web mapping service transactions. Normally a geodatabase is deployed on a single database server node, and larger capacity servers are required to support scale-up user requirements.



Three standard SDE geodatabase software configurations are shown above.

 * ArcGIS for Desktop direct connect to an SDE geodatabase.
 * ArcGIS for Desktop SDE connect to an SDE geodatabase.
 * ArcGIS for Desktop SDE connect to a remote SDE server, which then connects to the SDE geodatabase.

The ArcGIS for Desktop direct connect architecture will be used to demonstrate how SDE geodatabase sizing has been influenced by hardware technology change. The ArcSDE and DBMS display processing times (service times) are roughly the same for capacity sizing purposes, so the DBMS Server and ArcSDE Server Basic platform sizing would be about the same.

Figure 8-22 identifies the impact of hardware technology change on ArcSDE Geodatabase server sizing over the past 5 years. Improvements in processor core performance, in conjunction with more processor cores per chip, have significantly increased server throughput capacity (number of concurrent users supported on a single platform). As the number of concurrent user sessions on a platform increases, the memory requirements increase to accommodate the additional sessions. Heavier workflows can require more memory per session than lighter workflows. Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity.
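As a back-of-the-envelope illustration of this memory scaling (the base and per-session figures below are hypothetical examples, not Esri recommendations):

```python
def required_memory_gb(base_os_gb, sessions, gb_per_session):
    """Physical memory estimate: base OS/software overhead plus the
    per-session footprint. Both inputs are workload dependent; heavier
    workflows need a larger gb_per_session value than lighter ones."""
    return base_os_gb + sessions * gb_per_session

# Hypothetical: 4 GB base overhead, 400 concurrent sessions at
# 0.1 GB each -> roughly 44 GB of physical memory.
print(required_memory_gb(4, 400, 0.1))
```

The point of the sketch is simply that memory needs grow linearly with concurrent sessions, so a platform upgrade that doubles supported users roughly doubles the session memory demand as well.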



SDE geodatabase server platform capacity changes
Database platform capacity has continued to grow over the past 5 years. The new E5-2643 4-core platforms provide over 2.5 times the capacity of the X5260 4-core platforms available in 2008.
 * 2008 4-core server supported 177 - 354 concurrent users (24 GB RAM)
 * 2010 4-core server supported 357 - 715 concurrent users (40 GB RAM)
 * 2012 4-core server supported 455 - 909 concurrent users (48 GB RAM)
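The "over 2.5 times" claim can be checked directly from the concurrent-user figures in the list above:

```python
# Concurrent-user ranges from the 2008 (X5260) and 2012 (E5-2643) bullets.
low_2008, high_2008 = 177, 354
low_2012, high_2012 = 455, 909

print(round(low_2012 / low_2008, 2))   # 2.57
print(round(high_2012 / high_2008, 2)) # 2.57
```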

Many data centers are now deploying the database on virtual server machines. The faster host machines increase the capacity of the virtual machines, so fewer virtual cores are needed to satisfy capacity requirements.
 * 2012 2-core virtual server supported 218 - 351 concurrent users (24 GB RAM)

The approximate capacity estimates provided above are generated by our capacity planning models.

Higher performance platforms require more physical memory
Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity. Heavier user workflows can require more session memory than lighter workflows.

It is important to take advantage of a Windows 64-bit operating system on the new Intel platforms, since these higher capacity servers require much more physical memory to handle the large number of concurrently active client sessions. 64-bit operating systems improve memory management and provide up to a 10 percent performance gain over 32-bit Windows Server operating systems.

These high performance servers push capacity to new levels, and GIS applications may push platform and disk subsystems to their limits. Monitor disk traffic and platform paging during peak loads to ensure these subsystems are not overloaded. More memory can reduce paging and reduce disk contention by improving data caching. Data can be distributed over multiple disk volumes to reduce file access contention.

You need to know you have a problem before you can fix it, so keep an eye on platform performance metrics to ensure all is working as it should.


 * CPT for ArcSDE Geodatabase platform sizing

CPT Calculator tab can be used for ArcSDE Geodatabase platform sizing.

Identify your peak user requirements and the SDE_DBMS data source. Select a two-tier platform architecture and your SDE GDB hardware platform choice.

The recommended platform solution is generated by Excel once you enter your business requirements and make your hardware selections. You can try different platform configurations and experiment with different workflow complexities.

The CPT Calculator tab can be used for single workflow platform sizing. The CPT Design tab should be used for more detailed enterprise design planning.

Web Mapping Servers
Web mapping services platform sizing guidelines are provided for the ArcIMS and ArcGIS Server software technologies. The ArcIMS image service is deployed using the ArcIMS software, and ArcGIS Server map services are deployed using the ArcGIS Server software. All Web mapping technologies can be deployed in a mixed software environment (they can share the same server platform). All mapping services can be configured to access a file data source or a separate ArcSDE database. Geodatabase access can be through direct connect or an ArcSDE server connection.

Web mapping services have experienced dramatic performance changes over the past 5 years. These performance enhancements improve Web user productivity and reduce deployment cost. Some of these performance changes were due to expanding software deployment options and others were due to improved hardware processing speed and platform capacity changes.

Figure 8-24 identifies recommended software configuration options for standard two-tier Web mapping deployments. This configuration option supports the Web server and GIS server components on the same platform tier. Results for three standard two-tier web server software configurations will be shown.
 * Legacy ArcIMS image service direct connect to an SDE geodatabase.
 * ArcGIS 10.0 for Server direct connect to an SDE geodatabase.
 * ArcGIS 10.1 for Server direct connect to an SDE geodatabase.



Web mapping server platform capacity changes
Vendor hardware improvements have made a difference in web server sizing over the past 5 years.



The Intel Xeon X5260 4-core server was the baseline platform for 2008. The ArcIMS image service supported up to 43,000 transactions per hour, ArcGIS for Server ADF MXD mapping services supported up to 17,000 transactions per hour, and ArcGIS for Server REST MXD mapping services supported up to 33,000 transactions per hour. Platform memory recommendations were 2 GB per core, with 8 GB memory recommended for a 4-core server. ArcIMS provided a higher capacity map service, while ArcGIS for Server REST services provided a simpler development environment and higher quality map services.

The Intel Xeon X5677 4-core server was the baseline platform for 2010. The ArcIMS image service supported up to 88,000 transactions per hour, ArcGIS for Server REST MXD mapping services supported up to 66,000 transactions per hour, and the new ArcGIS for Server REST MSD mapping services supported up to 86,000 transactions per hour. Platform memory recommendations were 3 GB per core, with 12 GB memory recommended for a 4-core server. ArcGIS for Server with the new MSD map rendering engine outperformed ArcIMS and provided better map quality in the same platform environment.

The Intel Xeon E5-2643 4-core server was the baseline platform for 2012. The ArcIMS image service supported up to 112,000 transactions per hour, and ArcGIS for Server REST MSD mapping services supported up to 110,000 transactions per hour with our current planning models. The ArcGIS 10.1 release shows good performance gains for ArcGIS for Server, and map publishing is fully supported by the new MSD rendering engine. The new ArcGIS 10.1 functionality makes it easy to publish and share map services and geoprocessing models to a local server and to ArcGIS Online cloud services.
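Throughput figures like these imply an average per-transaction service time, which can be recovered by dividing the available core-seconds per hour by the transactions per hour (illustrative arithmetic, not a published Esri model):

```python
def service_time_sec(cores, transactions_per_hour):
    """Average server processing time per transaction, assuming the
    platform is running at full utilization (a simplifying assumption)."""
    return cores * 3600.0 / transactions_per_hour

# 4-core E5-2643 at 110,000 transactions/hour -> ~0.13 s per transaction.
print(round(service_time_sec(4, 110_000), 3))  # 0.131
```

Run against the 2008 figures (e.g., 43,000 transactions per hour on 4 cores, roughly 0.33 s per transaction), the same arithmetic shows how much per-transaction processing cost dropped across the hardware generations described above.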

Higher performance platforms require more physical memory
Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity. Heavier user workflows can require more session memory than lighter workflows. It is important to take advantage of a Windows 64-bit operating system on the new Intel platforms, since these higher capacity servers require much more physical memory to handle the large number of concurrently active client sessions. 64-bit operating systems improve memory management and provide up to a 10 percent performance gain over 32-bit Windows Server operating systems.

These high performance servers push capacity to new levels, and GIS applications may push platform and disk subsystems to their limits. Monitor disk traffic and platform paging during peak loads to ensure these subsystems are not overloaded. More memory can reduce paging and reduce disk contention by improving data caching. Data can be distributed over multiple disk volumes to reduce file access contention.

You need to know you have a problem before you can fix it, so keep an eye on platform performance metrics to ensure all is working as it should.


 * CPT for ArcGIS for Server platform sizing

CPT Calculator tab can be used for ArcGIS for Server platform sizing.
 * Select the workflow description that represents your user performance targets.
 * Identify your peak user requirements and your selected data source.
 * Select your platform architecture and your hardware platform choice.

The recommended platform solution is generated by Excel once you enter your business requirements and make your hardware selections. You can try different platform configurations and experiment with different workflow complexities.

The CPT Calculator tab can be used for single workflow platform sizing. The CPT Design tab should be used for more detailed enterprise design planning.

Platform Selection Criteria
Figure 8-30 provides a summary of the factors contributing to proper hardware selection. These factors include the following:



Esri system design role
User requirements analysis: Proper platform selection is driven by your business needs. You need to know what you need to do before you can identify what you need to do it with. Once you identify what you need to do, you can use the CPT to identify your platform needs.

Platform processor selection: Platforms must be configured properly to support your user performance requirements. Proper platform technology selection based on user performance needs and peak system processing loads significantly reduces implementation risk. Esri performance sizing models establish a solid foundation for proper hardware platform selection. The Capacity Planning Tool automates the system architecture design analysis, providing a framework for coupling enterprise GIS user requirements analysis with system architecture design and proper platform technology selection.

Hardware vendor role
Purchase Price: Cost of the hardware will vary depending on the vendor selection and platform configuration. The Capacity Planning Tool can identify the specific technology required to satisfy peak system processing needs. Pricing should be based on an evaluation of hardware platforms with equivalent display performance and platform workflow capacity.
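One simple way to compare bids on equal-capacity terms is cost per supported peak user. The prices and capacities below are hypothetical examples, not vendor quotes:

```python
def cost_per_user(platform_price, peak_users_supported):
    """Normalize vendor pricing by the workflow capacity each
    platform delivers, so bids are compared on equal terms."""
    return platform_price / peak_users_supported

# Hypothetical bids: vendor A offers a $12,000 platform sized for
# 400 peak users; vendor B offers a $9,000 platform sized for 250.
print(round(cost_per_user(12_000, 400), 2))  # 30.0
print(round(cost_per_user(9_000, 250), 2))   # 36.0
```

In this hypothetical case the cheaper platform is actually the more expensive purchase per unit of capacity, which is the point of evaluating bids against equal workflow capacity rather than sticker price.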

System Supportability: Customers must evaluate system supportability based on vendor claims and previous experience with supporting vendor technology.

Vendor Relationships: Relationships with the hardware vendor may be an important consideration when supporting complex system deployments.

Total Life Cycle Costs: Total cost of the system may depend on many factors including existing customer administration of similar hardware environments, hardware reliability, and maintainability. Customers must assess these factors based on previous experience with the vendor technology and evaluation of vendor total cost of ownership claims.

Establishing specific hardware technology specifications for evaluation during hardware source selection significantly improves the quality of the hardware selection process. Proper system architecture design and hardware selection provide a basis for successful system deployment.

Previous Editions
 * Platform Performance 31st Edition (Fall 2012)
 * Platform Performance 30th Edition (Fall 2011)
 * Platform Performance 29th Edition (Spring 2011)
 * Platform Performance 28th Edition (Fall 2010)
 * Platform Performance 27th Edition (Spring 2010)

System Design Strategies 26th edition - An Esri® Technical Reference Document • 2009 (final PDF release)