Platform Performance 37th Edition

Fall 2015

Chapter 3 (Software Performance) discussed best practices for publishing high-performance map services and the importance of selecting the right software technology to support your business needs. This chapter focuses on hardware platform performance and the value of selecting the right computer technology to support your system performance needs.

Selecting the right hardware will improve user performance, reduce overall system cost, and establish a foundation for building effective GIS operations. Selecting the wrong hardware can contribute to implementation failure - spending money on a system that will not support your business needs.

Hardware vendors do not know what hardware is required to satisfy your GIS needs. This chapter shares the system architecture design methodology developed to help you select the right hardware for your planned GIS operations. This chapter also shares information for justifying hardware purchases based on expected return on investment.

Platform Performance Baseline
The world we live in today is experiencing the benefits of rapid technology change. Technology advancements are directly impacting GIS user productivity—the way we all deal with information and contribute to our environment. Our ability to manage and take advantage of technology benefits can contribute to our success in business and in our personal life.



To develop a system design, it is necessary to identify user performance needs. User productivity requirements can be represented by workstation platforms selected by users to support their computing needs. GIS users have never been satisfied with platform performance, and each year power users are looking for the best computer technology available to support their processing needs.

As platform technology continues to improve, user performance expectations may change. It is not clear just when computers will be fast enough for GIS professionals – there is always more we could do if we just had the power. As user productivity improves, application and data servers must be upgraded to service the increasing user desktop processing requirements.

GIS user performance expectations have changed dramatically over the past 10 years. This change in user productivity is caused primarily by faster platform performance and lower hardware costs. Figure 8.1 identifies the favorite hardware platforms selected by GIS users over the past 10 years. High performance desktop workstations have made a primary contribution to improving user productivity and expanding GIS technology capabilities.

Each year we review hardware vendor technology to identify the best available platform for GIS professional workstation users. The highest performing platform establishes our performance baseline for each calendar year. The Xeon E5-2637v3 4-core (1 chip) 3500 MHz platform was identified as our favorite 2015 workstation.

Xeon E5-2637v3 4-core (1 chip) 3500 MHz platform 
 * includes four of the fastest processor cores released by Intel in early 2015.
 * ArcGIS for Desktop workstation recommended memory is increased to 16 GB to accommodate expanding GIS use of large imagery files, concurrent use with ArcGIS Pro, and increasing emphasis on time-aware geoprocessing analysis.

Performance Baseline history
2015 Arc15 Performance Baseline = 58 (SPECrate_int2006 per core baseline)



Figure 8.2 provides a graphic overview of relative Intel processor performance over the past 10 years. Platform per core performance today is over 5.5 times faster than 10 years ago. Hardware vendor platform performance improvements contribute to improved business productivity and system computer capacity, reducing the overall cost of automated business systems.

The boxes at the bottom of the chart represent the performance baselines used to support the Esri capacity planning models over the past 10 years. Performance baselines are specified by calendar year based on the per core performance of available platform technology. These performance baselines are reviewed and updated each year to keep pace with the rapidly changing hardware technology. 

Moore's Law
The Intel founder Gordon E. Moore released a paper in 1965 predicting that the number of components in integrated circuits would double every year through at least 1975 (he later revised the estimate to a doubling every two years). His prediction, known today as Moore's law, has held true over the last 50 years and continues to contribute to computer performance gains.

Reducing the distance between integrated circuits has a direct impact on platform processor compute performance. A function that doubles every two years when plotted on a chart would produce an exponential growth curve. Figure 8.3 shows a plot of the Esri performance baselines with an exponential curve overlay. From this chart, you can see that the performance gains we have experienced over the past 10 years track remarkably close to the exponential performance gains predicted by Gordon E. Moore. In fact, if past experience is a good predictor of the future, we can expect some remarkable per core performance gains over the next couple of years if the trend continues.
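As a rough check (a sketch using only figures quoted in this chapter), the implied per-core doubling time can be computed from the roughly 5.5x gain over 10 years noted with Figure 8.2:

```python
import math

gain = 5.5   # per-core performance gain over the period (Figure 8.2)
years = 10

# Solve gain = 2 ** (years / doubling_time) for the doubling time
doubling_time = years * math.log(2) / math.log(gain)
print(round(doubling_time, 1))  # about 4.1 years per doubling
```

Note that per-core performance has doubled more slowly than the two-year component doubling Moore described; some of the transistor gains go into more cores and larger caches rather than single-core speed.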

There has been some discussion over the past couple of years questioning whether platform per core performance will continue to improve as it has over the past 10 years. Moore's law deals with components getting smaller and closer together on integrated circuits with each production design cycle. Performance has improved due to shorter travel distances between processor chip components. Distances between components today are approaching atomic size, and there may be physical limits to how much faster the chip will perform. Other factors such as temperature and cooling limitations could limit further processor performance gains, and performance per core could start to level off with future chip releases (represented by the red curve on the chart). High capacity chip configurations (16-core per chip, 24-core per chip, etc.) are better for virtual server deployment, but at the same time they generate more heat and limit peak processing speeds (MHz) - another factor limiting per core performance gains. 

Faster platforms provide more service with less hardware
Figure 8.4 represents the relationship between server platform performance and peak entry level Web mapping service throughput. The chart shows rapidly increasing software license service capacity rendered by platform performance improvements shown in Figure 8.2.

Platform performance improvements reduce software cost. Dynamic web mapping services deployed using an entry level ArcGIS for Server software license in 2006 could support up to 26 concurrent users (11,000 TPH). Those same mapping services deployed by an entry level ArcGIS for Server license with 2015 platform technology can support over 250 concurrent users (80,000 TPH) with higher quality and functionality.

Web deployment timelines are significantly reduced with new software, reducing software development and deployment expenses. Web mapping software services that took over 6 months to develop and deploy in 2006 can be deployed within minutes to hours with 2015 technology.

Improved hardware platform performance is driving a significant reduction in overall Enterprise GIS system cost. 

Relative platform performance
Knowing how to account for platform technology change is fundamental to understanding capacity planning. Figure 8.5 identifies a simple relationship that we have used since 1992 to relate platform performance with capacity planning.

The relationship simply states that if one can determine the amount of work (peak throughput) that can be supported by server A and identify the relative peak throughput performance between server A and server B, then one can estimate the amount of work that can be supported by server B. This relationship is true for single-core and for multi-core servers. 
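The relationship above can be sketched in a few lines of Python (the function name and example numbers below are illustrative, not from the chapter; per-core values are SPECint_rate2006 baseline per core):

```python
def estimate_throughput(throughput_a, perf_per_core_a, cores_a,
                        perf_per_core_b, cores_b):
    """Figure 8.5 relationship: peak throughput scales with relative
    total platform performance (per-core benchmark value x core count)."""
    total_a = perf_per_core_a * cores_a
    total_b = perf_per_core_b * cores_b
    return throughput_a * total_b / total_a

# Hypothetical example: server A (4 cores, per-core baseline 10)
# supports 1,000 TPH; server B has 8 cores at per-core baseline 20.
print(round(estimate_throughput(1000, 10, 4, 20, 8)))  # 4000 TPH
```

The same formula holds whether the two servers differ in core speed, core count, or both, which is why it works for single-core and multi-core platforms alike.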

Platform performance resources
Having a fair measure for relative platform performance and capacity is important. Selection of an appropriate performance benchmark and agreement on how the testing will be accomplished and published are all very sensitive hardware marketing issues. You need a performance measure that is accepted by the vendors – preferably one they provide themselves.

SPEC performance benchmarks
The Standard Performance Evaluation Corporation (SPEC) is a consortium of hardware vendors initially established in the late 1980s for the purpose of establishing guidelines for conducting and sharing relative platform performance measures. They have developed a variety of standard benchmarks with testing performed by the hardware vendors and published on the SPEC Web site. Additional information on Intel processor configurations is available on the Intel Architecture site.

The SPEC compute-intensive benchmarks have been used by Esri as a reference for relative platform performance since 1992. The system architecture design platform sizing models used in conjunction with these relative performance measurements have supported Esri customer capacity planning since that time. The SPEC benchmarks were updated in 1996, 2000, and 2006 to accommodate technology changes and improve relative performance measurements.

SPEC provides a separate set of integer and floating point CPU intensive benchmarks. Processor cores can be optimized to support integer or floating point calculations, and performance can be very different between these environments. Platform capacity test results with ArcGIS for Desktop and Server software have tracked quite close to the SPEC CPU integer relative performance throughput (rate) benchmark results. This confirms that the ArcGIS component software code predominantly uses integer calculations. The SPEC CPU integer benchmarks provide the best relative platform performance estimates for representing ArcGIS software technology.

SPEC provides two methods for conducting and publishing their CPU integer benchmark results. The SPECint2006 is a speed benchmark that measures execution time for a single benchmark instance and uses the result to calculate relative platform performance. The SPECint_rate2006 is a throughput benchmark, conducted using several concurrent benchmark instances (one for each core thread) and measuring aggregate throughput across those instances. The SPECint_rate2006 benchmark results are used for relative platform capacity planning metrics in the Esri system architecture design capacity planning models.
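As a worked example (a minimal sketch; the rate value shown is back-computed from the 57.4 per-core figure quoted later in this chapter, not a published result), the per-core value used in the CPT lookup tables is simply the published rate baseline divided by total platform cores:

```python
def per_core_baseline(specint_rate2006_baseline, chips, cores_per_chip):
    """Per-core performance value: published SPECint_rate2006 baseline
    divided by the total number of platform cores."""
    return specint_rate2006_baseline / (chips * cores_per_chip)

# Illustrative: a 1-chip, 4-core platform with a rate baseline of 229.6
print(round(per_core_baseline(229.6, 1, 4), 1))  # 57.4
```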

There are two results published on the SPEC site for each benchmark: the conservative (baseline) and the aggressive (result) values. The conservative baseline values are published first by the vendors, and the aggressive values are published later following additional tuning efforts. Either value can be used to estimate relative server performance, although the baseline values remove tuning sensitivities and provide the more conservative estimate. We recommend using the conservative benchmarks for capacity planning, and the published SPEC CPU benchmark baseline values are used in the Capacity Planning Tool.

Several benchmarks are published on the SPEC Web site. You will need to select the SPECint_rate2006 results and then scroll down to the configurable request selection - you can then select specific items that you want included in your display query. I like to include the Processor and the Processor MHz in my display, which are not included in the default selection. The Processor Characteristics include the maximum Turbo Boost MHz, which can be used to estimate maximum performance at low utilization levels.

More information on the SPEC CPU benchmarks can be found on the SPEC Web site.


 * CPT Hardware tab

CPT Hardware tab includes a list of Desktop and Server SPEC CPU platform benchmark baseline values used as a lookup table by the CPT Calculator, Design, Test, and Favorites tabs.

Published vendor benchmark values are used to identify relative throughput and performance for selected hardware platforms. Platforms are arranged by vendor and year in two lookup lists. Desktop candidates are located at the top of the list. Server candidates are located at the bottom of the list. Project platform candidates are located in the middle of the list and included with the Desktop and Server list selections.

The Capacity Planning Tool updates page is the primary source for the platform performance metrics. Information from the SPEC Web site is entered into the CPT Hardware tab for capacity planning. A copy of the SPEC benchmark information is provided in the HardwareSPEC Excel workbook for easy access. The SPEC benchmark values are used to adjust baseline service times to selected platform service times for capacity planning analysis.


 * HardwareSPEC Excel Workbook

The Esri Capacity Planning Tool release site shares a HardwareSPEC workbook with an Excel table of platform relative performance values from the published SPECrate_integer benchmarks.


 * Adding a new platform to the CPT Hardware tab

New hardware platform benchmark values are published on the SPEC Web site each month throughout the year, so the platform you need for your design analysis may not be included in your version of the CPT. You can locate the new benchmark values on the SPEC Web site and then add them to your CPT Hardware tab.

Web map display performance history
The change in hardware performance over the years has introduced unique challenges for capacity planning and for software vendors trying to satisfy customer performance and scalability expectations. Understanding how to represent hardware performance differences is critical when addressing capacity planning, performance, and scalability issues.

Figure 8.6 shows how user expectations have changed over the past 10 years. An ArcGIS Desktop heavy dynamic map display processing time in CY2006 would take over 2 seconds. That same map display today can be rendered in less than 0.3 seconds – over 7 times faster than just 10 years earlier. Most of this performance gain can be accounted for by faster processor core and a new ArcGIS for Server display rendering engine.

Figure 8.6 shows a minimum user performance expectation range (1-2 seconds) which we believe may open new opportunities for GIS analysis and display. Traditional heavy (5x Medium > 3x Heavy) map displays can now be rendered in less than 1 second, suggesting hardware technology may no longer be a limitation on GIS user productivity. IT departments see this as an opportunity to buy higher capacity platforms and leverage virtual server environments and cloud computing to simplify their administration workload (exchanging user display performance for lower administration costs). I expect GIS users will see this as an opportunity to incorporate more complex analysis into their user workflows, leveraging more compute intensive statistical analysis, logistics routing functions, and business analytics for use in their standard business workflows. Heavier processing workflows will require continued hardware performance improvements to keep user productivity at a peak level. 

2015 Technology Changes
Figure 8.7 highlights the technology changes that are making a difference in 2015. We have higher capacity servers, more processor cores per chip, and more chips per server.

Processor vendors are responding to new data center platform environments. Virtual server deployments are becoming standard practice for enterprise data centers. Vendors are introducing faster 4, 6, 8, 10, 12, 14, 16, and 18 core chips, with a growing number of high capacity 4 chip servers. Data Center consolidation efforts are expanding rapidly. Cloud computing solutions are being accepted as a viable and cost-effective alternative to on-premise operations, and we are seeing an expanding number of vendors with public cloud offerings.

Data Center consolidation can save operational costs. Cloud hosting can change how we manage and support our enterprise business operations.

'''Warning: Not all of the cloud vendor administrative savings may be passed on to the customer. '''

Hardware processing performance continues to improve. New 2015 processor cores are faster than last year and use less power (Watts). 2015 performance baseline is over 9 percent faster than 2014. New multi-core desktop and server processors include turbo boost technology.
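The quoted year-over-year gain can be checked from the baseline history (a quick sketch using the 2014 per-core value of 53 cited later in this chapter):

```python
baseline_2014 = 53   # SPECrate_int2006 per core
baseline_2015 = 58

gain_pct = (baseline_2015 - baseline_2014) / baseline_2014 * 100
print(round(gain_pct, 1))  # 9.4, consistent with "over 9 percent"
```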

Enterprise license agreements provide a more adaptive and cost effective way to manage enterprise GIS operations. ArcGIS Online provides an expanding range of deployment options (ArcGIS Online for Organizations Software as a Service, ArcGIS Online for Developers Platform as a Service, and Amazon and Azure Cloud Infrastructure as a Service). ArcGIS for Server term licensing is available to support adaptive peak throughput demands.

ArcGIS Online Organizations provide a rich platform to share, collaborate, and deploy your GIS services within your organization or with a shared community of users. Online subscription services expand available data services and include a growing number of network and spatial analysis tools for use by the ArcGIS Online user community. 

Platform identification
Hardware vendors provide a wide range of choices at different performance levels for different user communities. New processors may perform faster than older processors that run at a higher clock speed (MHz), and processor speed is no longer a good measure of performance. Figure 8.8 shares how vendors have responded to this problem, and the nomenclature we use to make sure we understand the platform we are talking about.

CPT Platform terminology 
 * Chip processor number. Hardware vendors have identified specific model numbers that are unique for each processor chip configuration (E5-2637v3). Hardware vendors use these chips as components in building their server offerings. There are a limited number of platform chip manufacturers still competing in our marketplace (Intel and AMD provide all of the Windows processor technology). These processor chips are used in building all of the hardware vendor platform offerings.
 * Total processor core. The total number of processor cores identifies how many user requests can be processed at the same time. Total core count is a key parameter for establishing appropriate memory and identifying the proper software configuration and platform capacity. You may find vendors identifying platforms by number of chips and number of cores per chip - you need to do the math to identify the total number of cores. This can be confusing, and for this reason the CPT terminology we use includes the total number of processor cores.
 * Total chips per node. The total number of chips included in the platform configuration is provided for information purposes (not as important as understanding the total number of cores). Some vendors refer to chips as sockets - a chip is the packaged processor that holds the integrated circuits and the processor cores, and it plugs into a socket, so the terms are often used interchangeably. I use chip in the CPT nomenclature because this is the term used by SPEC and is shorter than socket.

Platform Performance
Hardware vendor technology has been changing over the past 5 years. Improved hardware performance has enabled deployment of a broad range of powerful software and continues to improve user productivity. Sub-second server processing times suggest that future user productivity gains will likely come from more loosely coupled operations, higher capacity network communications, disconnected processing, mobile operations, pre-processed cached maps, and more rapid access and assimilation of distributed information sources.

System processing capacity becomes very important, and system availability and scalability even more so. The quality of the information product (display and database design) provided by the technology can make a user's think time more productive. The proper tradeoff between display quality and performance contributes to optimum user productivity.

Hardware vendor performance gains
Much can be learned about server platform competition from vendor published SPEC benchmarks. Figure 8.9 provides an overview of relative per core performance for key vendor-published benchmarks from 2011 to 2015.

Intel processors have maintained a strong performance leadership position over the last 5 years.
 * Intel Xeon E5-2637v3 4-core chips show 2015 per-core performance at 57.4; expect some gains later in the year. IBM Power8 64-core chips show 2015 per-core performance at 65.2, an impressive target for future Intel processor core releases.
 * Intel Xeon E3-1270v3 4-core 2014 per-core performance peaked at 53.
 * Intel Xeon E3-1280v2 4-core 2013 per-core performance peaked at 48.
 * AMD Opteron's best 2013 per-core performance peaks just over 20. We have seen only minor AMD performance gains since 2006.
 * Most other processor vendors have dropped out of the competition.

ArcGIS for Server software is deployed on Intel and AMD processors, supporting both Linux and Windows deployments. The Intel platforms deliver a strong performance advantage over AMD, improving server peak throughput capacity and reducing overall software licensing costs. 

Processing speed drives platform throughput
Figure 8.10 shows the relationship between display processing time and system throughput.

There is an inverse relationship between server processing time and peak system throughput. Faster per core processor performance reduces service processing time. Shorter processing time means each processor core can service more requests. More service requests per core means more peak throughput.
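A minimal sketch of that relationship (a simplified model assuming each core services one request at a time at full utilization; the service times and core count below are illustrative):

```python
def peak_throughput_tph(service_time_sec, cores):
    """Each core completes 3600 / service_time requests per hour, so
    peak throughput scales directly with core count and inversely
    with service time."""
    return cores * 3600.0 / service_time_sec

# Halving the service time doubles the peak throughput of the same license
print(int(peak_throughput_tph(0.9, 4)))   # 16000 TPH
print(int(peak_throughput_tph(0.45, 4)))  # 32000 TPH
```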

ArcGIS for Server licensing is based on the number of hardware platform physical or virtual server cores (all cores are treated equally). More processor cores imply more server throughput, with license cost based on the number of server cores. Faster processor cores optimize platform peak throughput for a given server license (more bang for the buck).

'''Warning: Deploying ArcGIS for Server on platforms with slower processor core will increase cost per service transaction. ''' 

2015 Intel processor performance
The right platform selection is based on a balance between server capacity, performance, and power. Figure 8.11 identifies the Intel processors available for the vendor 2015 hardware.

Intel provides eight commodity server chip configurations for 2015 platform deployment strategies.
 * Quad core chip configurations (1 and 2 chip configurations)
 * Six core chip configurations (1, 2, and 4 chip configurations)
 * Eight core chip configurations (1, 2, and 4 chip configurations)
 * Ten core chip configurations (1, 2, and 4 chip configurations)
 * Twelve core chip configurations (1, 2, and 4 chip configurations)
 * Fourteen core chip configurations (1, 2, and 4 chip configurations)
 * Sixteen core chip configurations (1, 2, and 4 chip configurations)
 * Eighteen core chip configurations (1, 2, and 4 chip configurations)

What platform should I buy? Quad and Six core chip configurations provide the highest per-core performance. Six and Eight core chip configurations can support a higher number of virtual servers per platform while delivering excellent per core performance. Twelve, fourteen, sixteen, and eighteen core chip configurations can support a higher number of virtual servers per platform at reduced per core performance.

The high core performance for the 2015 six and eight core chip configurations is quite impressive, making your final server decision a challenging choice (particularly for larger virtualized data center environments that can take advantage of the higher capacity servers).

Stay away from the slower performing platform models; they will likely end up costing you more in software licensing. 

2015 ArcGIS for Server platform selection
When you go to purchase a platform, vendors are not very good at sharing their performance numbers. I will say, to the vendor’s credit, that they are good at providing their performance numbers on the SPEC site (but not on their sales page). You need to do your homework before you buy your hardware. With GIS servers, platform performance is important both for optimum user productivity and to reduce overall system cost. The good news (for GIS users) is that the best performing hardware often delivers the lowest overall system cost. If you don’t do your homework, you might miss the savings.

Figure 8.12 provides an overview of platform configuration options provided by a vendor, showing their relative per core performance and dollars per transaction. The dollars per transaction is calculated by dividing the total cost of the server by the relative SPEC benchmark throughput value. *Hardware pricing was based on list values.
 * Esri Software estimated at $10,000 per core.
 * SPEC baseline/core used for processor speed.
 * CPT Calculator used to evaluate virtual server core requirements.
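The dollars-per-transaction comparison can be sketched as follows (the hardware prices and SPEC rate values below are hypothetical; the $10,000-per-core software estimate is from the list above):

```python
def dollars_per_transaction(server_list_price, cores,
                            spec_rate_baseline, software_per_core=10000):
    """Total platform cost (hardware plus per-core software licensing)
    divided by relative SPECint_rate2006 throughput."""
    total_cost = server_list_price + software_per_core * cores
    return total_cost / spec_rate_baseline

# Hypothetical candidates: a fast 4-core server vs. a slower 8-core server
fast_4core = dollars_per_transaction(6000, 4, 230)   # high per-core speed
slow_8core = dollars_per_transaction(8000, 8, 300)   # more, slower cores
print(round(fast_4core), round(slow_8core))  # 200 293
```

Even though the 8-core server delivers more total throughput in this example, the faster cores win on cost per transaction once per-core software licensing is included.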

CPT Best Buy Analysis
The CPT can be used to complete a platform best buy analysis. A sample platform best buy analysis is provided in the Capacity Planning Tool appendix.

CPT was designed to automate the system architecture design analysis. It is particularly suited to translate user business requirements to appropriate hardware platform selections.

Establishing business needs
Proper hardware selection depends on a clear understanding of your business needs. Platform pricing analysis will show how to use the CPT to select the optimum platform for publishing your mapping services.

Completing the capacity planning analysis
Business workflow requirements are used to identify the required server cores for each analysis.

Best Buy:
 * Xeon E5-2637v3 4-core platform provides the best per-core performance.
 * Xeon E5-2693v3 28-core platform has the lowest per-core platform cost.
 * Xeon E5-2643v3 12-core platform has the lowest per-core system cost when including GIS software.

CPT Design platform analysis summary report
The CPT Design tab can be used to evaluate all servers in a single report.

ArcGIS Platform Sizing
The primary purpose for this Platform Performance chapter is to share best practices for selecting the right vendor hardware platforms (platform sizing). Since 1992 our system design consultants have helped customers identify proper hardware solutions. We found out early on that it is not possible to have a successful GIS deployment if we don't get the hardware right. For this reason, our system architecture design process was developed to translate business needs to identified platform requirements.

Business needs are identified through a user workflow loads analysis. The system architecture design analysis translates peak workflow loads to identified platform solutions. The CPT tools can be used to complete the design analysis. This section will share how vendor platform technology is changing, and show some rough estimates of what the available platform configurations can deliver in an ArcGIS environment.

ArcGIS Desktop Platform Sizing
ArcGIS for Desktop system requirements are identified in the ArcGIS 10.2 help. Figure 8.13 shows Intel platform performance gains experienced over the past five years. The 2015 workstation processors are over 45 percent faster than the processors that supported ArcGIS for Desktop workstation users in 2011. Additional memory and faster storage solutions can contribute to additional performance gains. The advance of GIS technology is enriched by the remarkable contributions provided by Esri's hardware partners.

Workstation life cycle upgrades depend on user performance needs.
ArcGIS for Desktop power user productivity is often limited by processor per-core performance, and upgrading power user workstations can increase user productivity. Upgrade ArcGIS for Desktop power user workstations whenever there is a large improvement in processor per-core performance. Typical power user workstation life cycle is 1-2 years.

'''Warning: A single ArcMap user display session takes advantage of a single processor core. Display performance is determined by per-core processor speed, not by the total number of available processor cores. '''

'''Note: Some ArcGIS geoprocessing background services are multi-threaded and can take advantage of additional available processor cores. ArcGIS Pro is multi-threaded and achieves optimum performance with hyper-threading enabled and at least 4 processor cores. '''

ArcGIS for Desktop standard users normally work fine with slower display performance. It is a good practice to upgrade ArcGIS for Desktop casual users every 2-3 years to maintain work productivity.

Windows terminal clients and web clients require much less processing, and can work fine with most standard office workstations. Upgrade terminal and browser client users every 3-5 years to maintain work productivity.

Workstation operating system
Full release and support for the Windows 64-bit operating system provides performance enhancement opportunities for ArcGIS for Desktop workstation environments. Windows 64-bit OS improves ArcGIS for Desktop memory access, supports a larger number of concurrent background sessions, and takes advantage of higher memory capacity for data caching. Working with imagery, cached basemap tiles, and local feature cache are examples where ArcMap display performance can be improved when accessing data from local memory cache.

Make sure your workstation has sufficient physical memory to handle your application workflow. 8 GB memory is adequate for most SDE geodatabase workflow clients. 16 GB memory or more may be required when working with large data files (imagery, shape files, etc). Additional memory may be required when working with several different applications on the same workstation or working in a virtual desktop environment.

Workstation performance
Most GIS users are quite comfortable with the performance provided by current Windows desktop technology. Power users and heavier GIS user workflows will notice performance improvements with the faster processor technology.
 * Quad-core technology is now the standard for desktop platforms, and although a single ArcMap process will see little performance gain in a multi-core environment, there will be significant user productivity gains from enabling concurrent background processing loads.
 * Turbo boost increases per-core processing performance when supporting a single user session and background processing is not required.
 * Desktop parallel processing environments are leveraged when using a basemap layer or accelerated imagery layer in ArcGIS 10 applications.
 * 3D image streaming with ArcGIS Explorer 900 and future enhancements with 3D simulation in ArcGIS Pro and geoprocessing also leverage the increased capacity of multi-core workstation environments.

Video display processing
Video graphics cards enhance the ArcGIS for Desktop user display environment, particularly for 3D Analyst performance and imagery display quality, and will be particularly important in supporting the new ArcGIS Pro desktop application. ArcGIS 3D Analyst requires OpenGL-compatible graphics cards, as OpenGL technology is used for 3D display in the ArcGlobe, CityEngine, and ArcScene applications. ArcGIS Explorer for Desktop also uses OpenGL technology for 3D rendering. Frequently asked questions for selecting a video card are provided in the 3D Analyst for ArcGIS for Desktop 10 help documentation. 

Windows Terminal Server/Remote Desktop Services Platform Sizing
Windows Terminal Server supports centralized deployment of ArcGIS Desktop applications for use by remote terminal clients. Figure 8.14 identifies three standard Windows Terminal Server software configurations. The ArcGIS Desktop direct connect architecture will be used to demonstrate how Windows Terminal Server sizing has been influenced by hardware technology change.

Esri certifies each ArcGIS for Desktop release with Citrix XenApp server (Citrix Receiver) environment. A more complete discussion on Centralized Windows Terminal Server/Remote Desktop Services (Citrix) Architecture is provided in Chapter 7. 

ArcGIS for Desktop terminal server platform capacity changes
Figure 8.15 identifies how vendor hardware improvements have made a difference in Windows Terminal Server sizing over the past 5 years. Improvements in processor core performance, in conjunction with more processor cores per chip, provide increased server throughput capacity (number of concurrent users supported on a single platform). As the number of concurrent user sessions on a platform increases, the memory and input/output (storage access) requirements must also increase to accommodate the additional sessions. Heavier workflows can require more memory per session than lighter workflows. Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity.

4-core server capacity has increased by about 66 percent over the last 5 years.
 * 2010 4-core server supported 25 - 50 concurrent users (38 GB RAM)
 * 2012 4-core server supported 33 - 66 concurrent users (44 GB RAM)
 * 2014 4-core server supported 38 - 76 concurrent users (50 GB RAM)
 * 2015 4-core server supported 42 - 84 concurrent users (56 GB RAM)
 * 2015 2-core virtual server supports 17 - 33 concurrent users (24 GB RAM).

These are approximate capacity estimates based on our platform capacity planning models. We recommend using the CPT along with the workflow performance guidelines provided in Chapter 3 for proper capacity planning. Workflow complexity and user productivity can have a significant impact on overall platform sizing requirements.
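The arithmetic behind these estimates can be sketched as follows. The 12 GB base and 0.53 GB-per-session memory figures below are assumptions fitted to the RAM numbers listed above, not published Esri constants:

```python
# Illustrative terminal server sizing sketch (assumed constants fitted to the
# figures above): capacity scales with relative per-core throughput, and
# physical memory is a base allocation plus a per-session increment
# (38 GB @ 50 users ... 56 GB @ 84 users ~= 12 GB base + 0.53 GB/session).

def terminal_server_memory_gb(concurrent_users, base_gb=12.0, per_session_gb=0.53):
    """Estimate physical memory for a terminal server at peak load."""
    return base_gb + per_session_gb * concurrent_users

def capacity_growth(old_users, old_per_core, new_per_core):
    """Scale platform user capacity by relative per-core performance."""
    return old_users * new_per_core / old_per_core

# 2010 baseline: up to 50 users on a 4-core server; 2015 per-core
# performance is roughly 1.68x the 2010 baseline (50 -> 84 users).
print(round(capacity_growth(50, 1.0, 1.68)))  # 84
print(round(terminal_server_memory_gb(84)))   # 57 (list above: 56 GB)
```

The same two-step pattern (scale capacity by per-core performance, then size memory to the session count) applies to any of the platform years listed above.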

If you are going to run XenDesktop on a virtual server (i.e., on top of a hypervisor), it is best to use the Citrix XenServer hypervisor rather than the VMware ESXi hypervisor. XenServer is supported by the same software company (Citrix), and customers have been more successful configuring and supporting XenDesktop on XenServer than combining products from two different companies (i.e., XenDesktop on VMware ESXi).

Terminal Server physical memory guidelines
Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity. Heavier user workflows can require more session memory than lighter workflows.

It is important to take advantage of the Windows 64-bit operating system on the new Intel platforms, since these higher capacity servers require much more physical memory to handle the high number of concurrent active client sessions. 64-bit operating systems improve memory management and provide up to a 10 percent performance gain over Windows 32-bit Server Advanced operating systems.

These high performance servers push capacity to new levels, and GIS applications may push platform and disk subsystems to their limits. Monitor disk traffic and platform paging during peak loads to ensure these subsystems are not overloaded. More memory can reduce paging and reduce disk contention by improving data caching. Data can be distributed over multiple disk volumes to reduce file access contention. You need to know you have a problem before you can fix it, so keep an eye on platform performance metrics to confirm all is working as it should.
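As a simple illustration of that advice, the sketch below flags counters whose peak-load averages exceed a threshold. The counter names are standard Windows perfmon counters, but the threshold values are illustrative assumptions, not Esri guidance:

```python
# Illustrative peak-load health check (threshold values are assumptions):
# flag performance counters whose sampled average exceeds its limit.
# Samples could be collected with Windows perfmon/typeperf during peak hours.

def flag_pressure(samples, pages_per_sec_limit=1000, disk_queue_limit=2.0):
    """Return counter names whose sampled average exceeds its threshold."""
    limits = {"Memory\\Pages/sec": pages_per_sec_limit,
              "PhysicalDisk\\Avg. Disk Queue Length": disk_queue_limit}
    flagged = []
    for name, values in samples.items():
        average = sum(values) / len(values)
        if average > limits.get(name, float("inf")):
            flagged.append(name)
    return flagged

peak = {"Memory\\Pages/sec": [1500, 2200, 1800],
        "PhysicalDisk\\Avg. Disk Queue Length": [0.8, 1.1, 0.9]}
print(flag_pressure(peak))  # ['Memory\\Pages/sec'] -> paging pressure flagged
```

A sustained paging flag like this one would suggest adding physical memory before adding disk spindles, per the guidance above.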


 * CPT for Windows Terminal Server platform sizing

The CPT Calculator tab can be used for platform sizing. The recommended platform solution is generated by Excel once you enter your business requirements and make your hardware selections. You can try different platform configurations and experiment with different workflow complexities.

The CPT Calculator tab can be used for single workflow platform sizing. The CPT Design tab should be used for more detailed enterprise design planning.

Additional ArcGIS platform memory configuration guidelines are provided in the SDSwiki appendix on Windows Memory Management.

ArcSDE Geodatabase Server Sizing
Figure 8.16 identifies software configuration options for the geodatabase server platforms. The geodatabase transaction models apply to both ArcGIS Desktop and Web mapping service transactions. Normally a geodatabase is deployed on a single database server node, and larger capacity servers are required to support scale-up user requirements.

The ArcGIS for Desktop direct connect architecture will be used to demonstrate how SDE geodatabase sizing has been influenced by hardware technology change.

Figure 8.17 identifies the impact of hardware technology change on ArcSDE geodatabase server sizing over the past 5 years. Improvements in processor core performance, in conjunction with more processor cores per chip, have significantly increased server throughput capacity (number of concurrent users supported on a single platform). As the number of concurrent user sessions on a platform increases, the memory requirements will increase to accommodate the additional sessions. Heavier workflows can require more memory per session than lighter workflows. Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity.

SDE geodatabase server platform capacity changes
Database platform capacity has increased by about 66 percent over the past 5 years.
 * 2010 4-core server supported 250 - 500 concurrent users (38 GB RAM)
 * 2012 4-core server supported 330 - 660 concurrent users (44 GB RAM)
 * 2014 4-core server supported 380 - 760 concurrent users (50 GB RAM)
 * 2015 4-core server supported 420 - 840 concurrent users (56 GB RAM)

Many data centers are now deploying the database on virtual server machines. The faster host machines increase the capacity of the virtual machines, requiring a smaller number of virtual server vCPU to satisfy capacity requirements.
 * 2015 2-core virtual server supports 166 - 332 concurrent users (24 GB RAM).
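A rough sketch of the virtual server arithmetic, using the conservative end of the 2015 figures above (the 83 users per vCPU and 12 GB per vCPU rates are derived assumptions, not published constants):

```python
import math

# Illustrative virtual geodatabase server sizing (rates derived from the
# 2015 estimates above): the 2-core virtual server supports 166 concurrent
# users at the conservative end (~83 users/vCPU) and was configured with
# 24 GB RAM (~12 GB/vCPU).

def vcpus_needed(target_users, users_per_vcpu=83):
    """vCPUs required to support a target peak user load."""
    return math.ceil(target_users / users_per_vcpu)

def vm_memory_gb(vcpus, gb_per_vcpu=12):
    """Physical memory sized in step with the vCPU allocation."""
    return vcpus * gb_per_vcpu

print(vcpus_needed(500))  # 7 vCPUs for 500 concurrent users
print(vm_memory_gb(7))    # 84 GB
```

Because faster host machines raise the per-vCPU rate, the same target load needs fewer vCPUs on newer host hardware.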

These are approximate capacity estimates based on our platform capacity planning models. We recommend using the CPT along with the workflow performance guidelines provided in Chapter 3 for proper capacity planning. Workflow complexity and user productivity can have a significant impact on overall platform sizing requirements.

SDE geodatabase platform memory guidelines
Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity. Heavier user workflows can require more session memory than lighter workflows.

It is important to take advantage of the Windows 64-bit operating system on the new Intel platforms, since these higher capacity servers require much more physical memory to handle the high number of concurrent active client sessions. 64-bit operating systems improve memory management and provide up to a 10 percent performance gain over Windows 32-bit Server Advanced operating systems.

These high performance servers push capacity to new levels, and GIS applications may push platform and disk subsystems to their limits. Monitor disk traffic and platform paging during peak loads to ensure these subsystems are not overloaded. More memory can reduce paging and reduce disk contention by improving data caching. Data can be distributed over multiple disk volumes to reduce file access contention.

You need to know you have a problem before you can fix it, so keep an eye on platform performance metrics to ensure all is working as it should.


 * CPT for ArcSDE Geodatabase platform sizing

The CPT Calculator tab can be used for ArcSDE Geodatabase platform sizing.

Identify your peak user requirements and the SDE_DBMS data source. Select a two-tier platform architecture and your SDE GDB hardware platform choice.

The recommended platform solution is generated by Excel once you enter your business requirements and make your hardware selections. You can try different platform configurations and experiment with different workflow complexities.

The CPT Calculator tab can be used for single workflow platform sizing. The CPT Design tab should be used for more detailed enterprise design planning.

Additional ArcGIS platform memory configuration guidelines are provided in the SDSwiki appendix on Windows Memory Management.

Web Mapping Servers
ArcGIS for Server system requirements are provided in the ArcGIS resource center. The legacy ArcIMS image service was deployed using the ArcIMS software, and ArcGIS Server map services are deployed using the ArcGIS for Server software (ArcGIS for Server is the primary selection for current Web environments). All of these Web mapping technologies can be deployed in a mixed software environment (together on the same server platform). All mapping services can be configured to access a file data source or a separate ArcSDE geodatabase.

Web mapping services have experienced dramatic performance changes over the past 5 years. These performance enhancements improve Web user productivity and reduce deployment cost. Some of these performance changes were due to expanding software deployment options and others were due to improved hardware processing speed and platform capacity changes.

Figure 8.18 identifies recommended software configuration options for standard two-tier Web mapping deployments. This configuration option supports the Web server and GIS server components on the same platform tier. Results for four standard two-tier web server software configurations will be shown.
 * Legacy ArcIMS image service direct connect to an SDE geodatabase.
 * ArcGIS 10.0 for Server direct connect to an SDE geodatabase. This was the last release publishing services using the Windows (MXD) map rendering engine.
 * ArcGIS 10.3 for Server direct connect to an SDE geodatabase.
 * Portal for ArcGIS and ArcGIS 10.3 for Server direct connect to an SDE geodatabase and Data Store.

Many organizations are deploying ArcGIS for Server in a virtual server environment. Deployment of ArcGIS for Server in virtual server environments is discussed in the Virtual Desktop and Server Technology section in Chapter 11. That section shares joint testing reports with VMware showing ArcGIS for Server performance and scalability in different virtual server deployment configurations.

"Best practice: Selecting the right host platform for your virtual server deployment makes a difference - physical host processor performance directly impacts virtual server performance and scalability" 

Web mapping server platform capacity changes
Vendor hardware improvements along with ArcGIS for Server architecture changes have made a difference in web server sizing over the past 5 years.



Figure 8.19 shows performance changes for Web services over the past 4 years. The Intel Xeon E3-1280 4-core server was the baseline platform for 2011. The ArcIMS image service supported up to 88,000 transactions per hour, ArcGIS for Server ADF MXD mapping services supported up to 48,200 transactions per hour, ArcGIS for Server REST MXD mapping services supported up to 55,800 transactions per hour, and ArcGIS for Server REST MSD mapping services supported up to 93,800 transactions per hour. Platform memory recommendations were 3 GB per core, with 12 GB memory recommended for a 4-core server. ArcGIS for Server with the new MSD map rendering engine outperformed ArcIMS and provided better map quality in the same platform environment.

Intel Xeon E5-1280v2 4-core server was the baseline platform for 2013. ArcGIS for Server REST MXD mapping services supported up to 66,800 transactions per hour, and ArcGIS for Server REST MSD mapping services supported up to 112,400 transactions per hour. Platform memory recommendations were 3 GB per core, with 12 GB memory recommended for a 4-core server.

Intel Xeon E5-2637v3 4-core server was the baseline platform for 2015. ArcGIS for Server REST 2D Vector mapping services supported up to 127,500 transactions per hour. ArcGIS for Server REST 2D VP mapping services registered with Portal for ArcGIS supported up to 114,000 transactions per hour. The ArcGIS 10.3 for Server functionality makes it easy to publish and share map services and geoprocessing models to a local server and to ArcGIS Online cloud services. Platform memory recommendations are 3 GB per core, with 12 GB memory recommended for a 4-core server.

The Intel Xeon E5-2667v3 16-core server was a popular virtual server host platform for 2015. ArcGIS for Server REST 2D Vector mapping services deployed on a 2-core virtual server supported up to 44,980 transactions per hour. Platform memory recommendations are 3 GB per core, with 6 GB memory recommended for a 2-core virtual server. An additional 1 GB per host platform core was added for virtualization overhead.
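The throughput figures above can drive a simple rollout calculation. The 127,500 transactions per hour and 3 GB per core figures come from the 2015 baseline above, while the 60 percent utilization target is an illustrative assumption:

```python
import math

# Illustrative web server rollout sizing: size the number of 4-core ArcGIS
# for Server machines for a peak transaction load, keeping headroom so the
# platform is not driven to 100 percent utilization at peak. The 60 percent
# utilization target is an assumption; the throughput and memory rates are
# the 2015 4-core baseline figures above.

def web_servers_needed(peak_tph, server_tph=127_500, target_utilization=0.6):
    """Servers required so peak load stays under the utilization target."""
    return math.ceil(peak_tph / (server_tph * target_utilization))

def server_memory_gb(cores=4, gb_per_core=3):
    """Platform memory recommendation: 3 GB per core."""
    return cores * gb_per_core

print(web_servers_needed(200_000))  # 3 servers at a 60% utilization target
print(server_memory_gb())           # 12 GB per 4-core server
```

Lowering the utilization target buys response-time headroom at peak in exchange for more platforms; the CPT models this trade-off directly.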

ArcGIS for Server platform physical memory guidelines
Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity. Heavier user workflows can require more session memory than lighter workflows. It is important to take advantage of the Windows 64-bit operating system on the new Intel platforms, since these higher capacity servers require much more physical memory to handle the high number of concurrent active client sessions. 64-bit operating systems improve memory management and provide up to a 10 percent performance gain over Windows 32-bit Server Advanced operating systems. ArcGIS for Server deployments should carefully follow the recommended maximum service instance and host platform configuration guidelines discussed in Chapter 4.

These high performance servers push capacity to new levels, and GIS applications may push platform and disk subsystems to their limits. Monitor disk traffic and platform paging during peak loads to ensure these subsystems are not overloaded. More memory can reduce paging and reduce disk contention by improving data caching. Data can be distributed over multiple disk volumes to reduce file access contention.

You need to know you have a problem before you can fix it, so keep an eye on platform performance metrics to confirm all is working as it should.


 * CPT for ArcGIS for Server platform sizing

The CPT Calculator tab can be used for ArcGIS for Server platform sizing.
 * Select the workflow description that represents your user performance targets.
 * Identify your peak user requirements and your selected data source.
 * Select your platform architecture and your hardware platform choice.

The recommended platform solution is generated by Excel once you enter your business requirements and make your hardware selections. You can try different platform configurations and experiment with different workflow complexities.

The CPT Calculator tab can be used for single workflow platform sizing. The CPT Design tab should be used for more detailed enterprise design planning.

Additional ArcGIS platform memory configuration guidelines are provided in the SDSwiki appendix on Windows Memory Management.

Platform Selection Criteria
Figure 8.20 provides a summary of the factors contributing to proper hardware selection. These factors include the following:



Esri system design role
User requirements analysis: Proper platform selection is driven by your business needs. You need to know what you need to do before you can identify what you need in order to do it. Once you identify what you need to do, you can use the CPT to identify your platform needs.

Platform processor selection: The platform must be configured properly to support your user performance requirements. Proper platform technology selection, based on user performance needs and peak system processing loads, significantly reduces implementation risk. Esri performance sizing models establish a solid foundation for proper hardware platform selection. The Capacity Planning Tool automates the System Architecture Design analysis, providing a framework for coupling enterprise GIS user requirements analysis with system architecture design and proper platform technology selection.

Network infrastructure requirements: Network bandwidth must be adequate to support peak throughput loads required to support remote user productivity. Proper bandwidth is essential to enable implementation success. The Capacity Planning Tool automates the Network Suitability Analysis to ensure remote site bandwidth requirements are identified for projected peak processing loads.
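The network suitability arithmetic can be sketched as follows. The display rate, traffic per display, and 50 percent utilization ceiling are illustrative assumptions (the CPT performs this analysis from measured workflow traffic):

```python
# Illustrative network suitability check (assumed traffic figures): peak
# site traffic must fit within the site's WAN bandwidth with headroom.
# Display rate (displays/min) and traffic per display (Mb) are assumptions.

def site_traffic_mbps(users, displays_per_min=6, mb_per_display=1.0):
    """Peak traffic: concurrent users x display rate x traffic per display."""
    return users * displays_per_min * mb_per_display / 60.0

def bandwidth_ok(users, link_mbps, max_utilization=0.5):
    """Keep peak traffic below half the link to protect response times."""
    return site_traffic_mbps(users) <= link_mbps * max_utilization

print(site_traffic_mbps(50))  # 5.0 Mbps peak for 50 concurrent users
print(bandwidth_ok(50, 10))   # True: 5.0 Mbps fits a 10 Mbps link at 50%
```

If the check fails, the remedies are the ones this chapter describes: more bandwidth, lighter (lower-traffic) workflows, or centralized terminal server deployment at the remote site.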

Hardware vendor role
Purchase Price: The cost of the hardware will vary depending on vendor selection and platform configuration. Capacity planning tools can identify the specific technology required to satisfy peak system processing needs. Pricing should be based on the evaluation of hardware platforms with equal display performance and platform workflow capacity.

System Supportability: Customers must evaluate system supportability based on vendor claims and previous experience with supporting vendor technology.

Vendor Relationships: Relationships with the hardware vendor may be an important consideration when supporting complex system deployments.

Total Life Cycle Costs: Total cost of the system may depend on many factors including existing customer administration of similar hardware environments, hardware reliability, and maintainability. Customers must assess these factors based on previous experience with the vendor technology and evaluation of vendor total cost of ownership claims.

Establishing specific hardware technology specifications for evaluation during hardware source selection significantly improves the quality of the hardware selection process. Proper system architecture design and hardware selection provide a basis for successful system deployment.

Previous Editions
 * Platform Performance 35th Edition
 * Platform Performance 34th Edition
 * Platform Performance 33rd Edition
 * Platform Performance 32nd Edition
 * Platform Performance 31st Edition
 * Platform Performance 30th Edition
 * Platform Performance 29th Edition
 * Platform Performance 28th Edition
 * Platform Performance 27th Edition

System Design Strategies 26th edition - An Esri® Technical Reference Document • 2009 (final PDF release)