Platform Performance 28th Edition (Fall 2010)


Chapter 3 (Software Performance) discussed some best practices for building high-performance map services and the importance of selecting the right software technology to support your business needs. Chapter 5 (Performance Fundamentals) provided an overview of the capacity planning performance models, assuming all hardware platforms were the same. This chapter focuses on hardware platform performance and the importance of selecting the right computer technology to support your system performance needs.

Platform Performance Baseline

The world we live in today is experiencing the benefits of rapid technology change. Technology advancements are directly impacting GIS user productivity—the way we all deal with information and contribute to our environment. Our ability to manage and take advantage of technology benefits can contribute to our success in business and in our personal life.

Figure 9-1 User Performance Expectations

To develop a system design, it is necessary to identify user performance needs. User productivity requirements are represented by the workstation platforms users select to support their computing needs. GIS users have never been satisfied with platform performance, and each year power users look for the best computer technology available to support their processing needs. As platform technology continues to improve, performance expectations change with it. It is not clear when computers will be fast enough for GIS professionals – there is always more we could do if we had the power. Application and data servers must be upgraded to continue meeting increasing user desktop processing requirements.

GIS user performance expectations have changed dramatically over the past 10 years. This change in user productivity is enabled primarily by faster platform performance and lower hardware costs.

Figure 9-1 identifies the hardware desktop platforms selected by GIS users as their performance baseline since the ARC/INFO 7.1.1 release in February 1997. Each step in this performance baseline history has improved user productivity and expanded acceptance of GIS technology.


Performance Baseline History

Figure 9-2 provides a graphic overview of Intel workstation performance over the past ten years. The chart shows the radical change in relative platform performance since 2000. Technology change introduced by the hardware platform manufacturers represented a major contribution to performance and capacity enhancements over the past 10 years.


Figure 9-2 Platform Performance Baseline

The boxes at the bottom of the chart represent the performance baselines selected to support ESRI capacity planning models. These performance baselines were reviewed and updated each year to keep pace with the rapidly changing hardware technology.

The change in hardware performance over the years has introduced unique challenges for capacity planning and for software vendors trying to support customer performance and scalability expectations. Understanding how to handle hardware performance differences is critical when addressing capacity planning, performance, and scalability issues.

Figure 9-3 shows how user expectations have changed over the past 10 years. In CY2000, a simple ArcGIS Desktop dynamic map display took almost 6 seconds to process. That same map display today can be rendered in less than 0.25 seconds – over 23 times faster than just 10 years earlier. All of this performance gain can be attributed to the change in platform technology.

Figure 9-3 Time to Produce a Map

Figure 9-3 identifies a minimum user expectation range that may open new opportunities for how we use GIS. Traditional heavy map displays are now rendered in less than 1 second, suggesting hardware technology is no longer a limitation on user productivity. IT departments would like to buy higher capacity platforms and leverage virtual server environments for simpler administration. I expect GIS users will see this as an opportunity to incorporate more complex analysis into their workflows, leveraging compute-intensive statistical analysis and logistics routing functions in standard business workflows.

Knowing how to account for platform technology change is fundamental to understanding capacity planning. Figure 9-4 identifies a simple relationship that we have used since 1992 to relate platform performance with capacity planning.

Figure 9-4 How do we Handle Platform Performance Change?

The relationship simply states that if one can determine the amount of work (display transactions) that can be supported by server A and identify the relative performance between server A and server B, then one can identify the peak work that can be supported by server B. This relationship holds for single-core servers (servers with a single processing unit) and for multi-core servers with the same number of cores, and it applies equally when comparing the relative capacity of server A and server B.
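
A minimal sketch of this relationship, assuming the SPECint_rate2006 baseline is used as the relative performance measure (the platform values below are hypothetical):

    def server_b_capacity(capacity_a, perf_a, perf_b):
        # Estimate peak work (display transactions per hour) server B can support,
        # given measured capacity on server A and a relative performance benchmark
        # for each server (for example, the SPECint_rate2006 baseline).
        return capacity_a * (perf_b / perf_a)

    # Hypothetical example: server A supports 10,000 displays/hour with a
    # benchmark of 140; server B publishes a benchmark of 280.
    print(server_b_capacity(10_000, 140, 280))  # -> 20000.0 displays/hour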


SPEC Performance Benchmarks

Identifying a fair measure of relative platform performance and capacity is very important. Selection of an appropriate performance benchmark and agreement on how the testing will be accomplished and published are all very sensitive hardware marketing issues.

Figure 9-5 shares the mission statement published by the Standard Performance Evaluation Corporation (SPEC), a consortium of hardware vendors established in the late 1980s for the purpose of establishing guidelines for conducting and sharing relative platform performance measures.

Figure 9-5 How do we Measure Relative Platform Performance

The SPEC compute-intensive benchmarks have been used by ESRI as a reference for relative platform capacity metrics since 1992. The system architecture design platform sizing models used in conjunction with these relative performance metrics have supported ESRI customer capacity planning since that time. The SPEC benchmarks were updated in 1996 and 2000 to accommodate technology changes and improve metrics.

SPEC provides separate sets of integer and floating-point benchmarks. Computer processor cores are optimized to support integer or floating-point calculations, and performance can be very different between these environments. Testing with ESRI software since the ArcGIS technology release has tracked the integer benchmark results, suggesting the ESRI ArcObjects software predominantly uses integer calculations. The integer benchmarks should be used for relative platform performance calculations when using ArcGIS software technology.

SPEC also provides two methods for conducting and publishing benchmark results. The SPECint2006 benchmarks measure execution time for a single benchmark instance and use this measure for calculating relative platform performance. The SPECint_rate2006 benchmarks run several concurrent benchmark instances (maximum platform capacity) and measure the executed instance cycles over a 24-hour period. The SPECint_rate2006 benchmark results are used for relative platform capacity planning metrics in the ESRI system architecture design sizing models.

There are two results published on the SPEC site for each benchmark: the conservative (baseline) and the aggressive (result) values. The baseline values are generally published first by the vendors, and the aggressive values are published later following additional tuning efforts. Either published benchmark can be used to estimate relative server performance, although the baseline values provide the more conservative estimate (removing tuning sensitivities).

Figure 9-6 provides an overview of the published SPEC2006 benchmark suites. The conservative SPECint_rate2006 benchmark results are used in the ESRI system architecture design documentation as a vendor-published reference for platform performance and capacity planning.

Figure 9-6 Platform Relative Performance (SPEC2006 Benchmark Suites)

The SPEC performance benchmarks are published on the Web. The Esri Capacity Planning Tool release site includes a HardwareSPEC workbook that shows a list of published SPECrate_integer benchmarks. The SRint2000 tab includes all vendor published SPECrate_int2000 benchmarks available on the SPEC site. SPEC stopped publishing the SRint2000 benchmarks in January 2007. All the new platform benchmarks are now published on the SPECrate_integer2006 site (SRint2006 tab). The last date the benchmark tab was updated is shown with the link name. A hot link to the SPEC site is included on the top of the Capacity Planning Tool (CPT) hardware tab.

Figure 9-7 identifies the location of the SPEC link on the CPT hardware tab and provides some views of the HardwareSPEC workbook.

Figure 9-7 SPEC Web Site

Several benchmarks are published on the SPEC Web site. You will need to select the SPECrate2006 rates and then scroll down to the configurable request selection, where you can select the specific items you want included in your display query. I like to include the processor and the processor MHz in my display, which are not included in the default selection. The Processor Characteristics field includes the maximum Turbo Boost MHz, which can be used to estimate maximum performance at low utilization levels.

The HardwareSPEC workbook tabs include an additional column (baseline/core) that I add to the table. This identifies the processing performance of an individual core, a value used to estimate relative platform processing performance for a single sequential display. These per-core performance values will be used when comparing user display performance.
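
A minimal sketch of the baseline/core calculation (the platform values below are illustrative, not taken from a specific published result):

    def baseline_per_core(specint_rate2006_baseline, total_cores):
        # Per-core performance used to compare single-display (sequential) render times.
        return specint_rate2006_baseline / total_cores

    # Illustrative example: an 8 core (2 chip) platform with a published
    # SPECint_rate2006 baseline of 280 works out to a baseline/core of 35.
    print(baseline_per_core(280, 8))  # -> 35.0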


Platform Performance

Hardware vendor technology has been changing rapidly over the past 10 years. Improved hardware performance has enabled deployment of a broad range of powerful software and continues to improve user productivity. Most business productivity increases experienced over the past 10 years have been promoted by faster computer technology. Technology today is getting fast enough for most user workflows, and faster compute processing is becoming less relevant. Most user displays are generated in less than a second. Access to Web services over great distances is almost as fast. Most of a user's workflow is think time—the time a user spends thinking about the display before requesting more information.

Most future user productivity gains will likely come from more loosely coupled operations, higher capacity network communications, disconnected processing, mobile operations, pre-processed cached maps, and more rapid access and assimilation of distributed information sources. System processing capacity, availability, and scalability become the most important platform considerations. The quality of information provided by the technology can make a user's think time more productive.

Hardware processing encountered some technical barriers during 2004 and 2005, which slowed the performance gains experienced between platform releases. There was little user productivity gain from upgrading to the next platform release (which was not much faster), and as a result computer sales were not growing at the pace experienced in previous years. Hardware vendors searched for ways to change the marketplace and introduced new technology with a focus on more capacity at a lower price. Vendors also focused on promoting mobile technologies, wireless operations, and more seamless access to information.

Competition for market share was brutal, and computer manufacturers tightened their belts and their payrolls to stay on top. CY2006 brought some surprises with the growing popularity of AMD technology and a focus on more capacity for less cost. Intel provided a big surprise with a full suite of new dual-core processors (double the capacity of the single-core chips) that at the same time delivered significant processing performance gains at a reduced platform cost. Hardware vendor packaging (blade server technology) and a growing interest in virtual servers (abstracting the processing environment from the hardware) are further reducing the cost of ownership and providing more processing capacity in less space.

Figure 9-8 provides an overview of vendor-published single-core benchmarks for hardware platforms using Intel processor technology.

Figure 9-8 Platform Performance Makes a Difference—Intel Supported Intel Platforms

The Intel Xeon 3200 MHz platform (single-core SPECrate_int2000 = 18 / SPECrate_int2006 = 8.8) was released in 2003 and remained one of the highest-performing workstation platforms available through CY2005. The SPECint_rate2000 benchmark result of 18 was used as the Arc04 and Arc05 performance baseline.

CY2005 was the first year since CY1992 that there was no noticeable platform performance change (most GIS operations were supported by slower platform technology).

There were some noticeable performance gains early in CY2005 with the release of the Intel Xeon 3800 MHz and the AMD 2800 MHz single-core socket processors. An Arc06 performance baseline of 22 (SPECrate_int2006 = 10.5) was selected in May 2006. Since May 2006, Intel released the Intel Xeon 5160 4 core (2 chip) 3000 MHz processor, a dual-core chip with a single-core SPECrate_int2000 benchmark of 30 (SPECrate_int2006 = 13.4) that operates much cooler (less electric consumption) than the earlier 3800 MHz release. The Arc07 performance baseline of 14 (SPECrate_int2006 = 14) was selected based on the Intel X5160 technology.

Figure 9-9 provides an overview of vendor-published single-core benchmarks for hardware platforms using AMD processor technology.

Figure 9-9 Platform Performance Makes a Difference—AMD Supported AMD Platforms

AMD platforms were very competitive with Intel in the 2004 - 2005 timeframe. Since that time, Intel processor performance improvements have been much more impressive than available AMD alternatives. AMD per core performance has seen some minor improvements since 2005, falling behind after 2007.

AMD introduced some high capacity platforms in 2010. The AMD Opteron 6174 24 core (2 chip) 2200 MHz platform provides more throughput than the Intel Xeon X5677 8 core (2 chip) 3467 MHz platform, although AMD per-core performance was about 36 percent of what Intel was offering. These high capacity platforms are attractive for IT departments that wish to consolidate many light back-office applications on lots of virtual servers in a single platform. Virtual servers perform best with dedicated processor cores, so a slower platform with more cores can host more virtual servers.

Intel technology continued to improve in CY2008 and server pricing was even more competitive. Hardware vendors were promoting platforms with dual core chips and reducing the price on lower performance low power configurations. The Xeon 5260 4 core (2 chip) platform (SPECrate_int2006 = 17.5) was selected as the 2008 baseline.

2009 was another great year for performance gains. Intel released a new chip technology that was over 70 percent faster per core than its 2008 release. Hardware vendors stopped providing dual-core chip options, and all entry-level commodity servers included quad-core or higher capacity chips. The Intel Xeon 5570 8 core (2 chip) 2933 MHz platform provided over 3.3 times the capacity of the 2008 baseline at about the same platform cost.

Intel introduced a new chip design in 2010 that was 15-20 percent faster than the previous year. Virtual server technology is being adopted as the framework for many IT data centers. Private and public Cloud hosting is becoming more popular. Hardware vendors are building higher capacity platforms focusing on the virtual server and cloud computing markets. Intel introduced new 6 core per chip 5600 series platforms and a range of higher capacity 7500 series platforms with 6 and 8 core per chip configurations. Intel released a Xeon X7550 32 core (4 chip) 2000 MHz platform that was about 40 percent slower per core than the X5677 platform and about 2.5 times the capacity. SGI released an Intel Xeon X7560 512 core (64 chip) 2266 MHz platform with per core performance the same as the Xeon X7550 and about 40 times the capacity of the X5677 platform. The Xeon X5677 8 core (2 chip) 3467 MHz platform was selected as the CY2010 performance baseline (SPECrate_int2006 baseline = 35/core).

Figure 9-10 provides an overview of vendor-published per core benchmarks for hardware platforms supporting UNIX operating systems.

Figure 9-10 Platform Performance Makes a Difference—UNIX Supported UNIX Platforms

The UNIX market has focused for many years on large "scale up" technology (expensive high-capacity server environments). These server platforms are designed to support large database environments and critical enterprise business operations. UNIX platforms are traditionally more expensive than the Intel and AMD "commodity" servers, and the operating systems typically provide a more secure and stable compute platform.

IBM (PowerPC technology) is an impressive performance leader in the UNIX environment. The high capacity Intel and AMD platforms are starting to penetrate most of the remaining UNIX vendor market.  


2010 Technology Changes

Figure 9-11 highlights the technology changes that are making a difference in 2010. Hardware vendor focus on higher capacity servers is driven by IT adoption of Virtual Server and Cloud Computing as a better way to consolidate and manage adaptive data center environments. Platform core performance continues to increase, and turbo boost technology automatically adjusts display performance based on server utilization.

Figure 9-11 Identifying the Right Platform / How do we select the platform we want?

Hardware vendor efforts to reduce cost and provide more purchase options make it important for customers to understand their performance needs and capacity requirements. In the past, new hardware included the latest processor technology, and customers could expect new purchases to increase user productivity and improve operations. In today's competitive marketplace, new platforms do not ensure faster processor core technology. You must understand your performance needs and use relative hardware benchmarks in selecting the right platform.

How we identify the platform configuration we want has been changing. Hardware vendors are providing a wide range of choices at different performance levels for different user communities. New processors may perform faster than older processors that run at a higher clock speed (MHz), and processor speed is no longer a good measure of performance. Figure 9-12 shares how vendors have responded to this problem, and the nomenclature we use to make sure we understand the platform we are talking about.

Figure 9-12 Platform Identification

Hardware vendors have identified specific model numbers that are unique for each processor chip configuration (X5677, X5660, Opteron 6174). Hardware vendors use these chips as components in building their server offerings. There are a limited number of chip manufacturers still competing in our marketplace (Intel and AMD provide all of the Windows processor technology), and these processor chips are used in building all of the hardware vendor platform offerings.

The total number of processor cores determines how many user requests can be processed at the same time. Total cores is a key parameter for establishing appropriate memory and identifying the proper software configuration and platform capacity. You may find vendors identifying platforms by the number of chips and how many cores per chip – you need to do the math to identify the total number of cores. This can be confusing, and for this reason the CPT terminology we use includes the total number of cores. The total number of chips is provided for information purposes (it is not as important as understanding the total number of cores). Some vendors refer to chips as sockets – a chip is the package that holds the integrated communication circuits and the processor cores, and the chip plugs into a socket, so the terms are used interchangeably. I use chip in the CPT nomenclature because this is the term used by SPEC and is shorter than socket.
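
A minimal sketch of this nomenclature (the class and field names are illustrative, and the example platform values follow the X5677 description used in this chapter):

    from dataclasses import dataclass

    @dataclass
    class Platform:
        # CPT-style platform identification: processor chip model, chips, cores per chip.
        model: str
        chips: int            # number of processor chips (sockets)
        cores_per_chip: int

        @property
        def total_cores(self) -> int:
            # Total cores = chips x cores per chip; the key capacity parameter.
            return self.chips * self.cores_per_chip

    x5677 = Platform("Xeon X5677", chips=2, cores_per_chip=4)
    print(x5677.total_cores)  # -> 8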

Getting the Right Hardware

When you go to purchase a platform, vendor sales sites are not very good at providing the performance numbers. I will say, to the vendors' credit, that they are very good at publishing their performance numbers on the SPEC site. You need to do your homework before you buy your hardware. With GIS servers, platform performance is important both to support user productivity needs and to reduce overall system cost. The good news is that the best performing hardware provides the lowest overall system costs. If you don't do your homework, you might miss the savings.

Figure 9-13 provides an overview of platform configuration options available on a DELL site. This is just an example. You should do your own homework with your own vendor and pricing. The 2010 recommended ArcGIS Server container machine configuration is used in our example.

Figure 9-13 Identifying the Right Platform / How do we select the platform we want?

The ideal platform would include the right processor, memory, and hard drive configuration. Configuring a Dell PowerEdge R710 server with two Xeon X5677 quad-core 3.46 GHz processors, 24 GB of 1333 MHz memory, the 64-bit Windows Server Standard operating system, and three RAID 5 146 GB 15,000 RPM disk drives costs just over $14,000.

Processors: The X5677 quad-core 3.46 GHz processors provide the best performance per core. Selecting the "Energy Efficient" Xeon E5640 quad-core 2.66 GHz processors reduces the overall cost by about $2,500. The "High Efficiency" Xeon L5640 six-core 2.26 GHz processors increase capacity back to about 98 percent of the X5677 and reduce cost by about $1,400.

What is not shown? You need to look up the SPEC throughput results and calculate per-core performance to get the rest of the story. Performance of the E5640 processor core is 84 percent of the X5677. Performance of the Xeon L5640 processor core is 65 percent of the X5677.

Figure 9-14 takes a closer look at the server options from an overall system cost perspective. For this example, we wish to purchase a server to host our ArcGIS Server Web mapping services. We want a server solution that will host estimated peak loads of up to 39,000 transactions per hour. We estimate ArcGIS Server software licensing at about $5,000 per core. We will use the platform pricing identified in Figure 9-13 for the hardware. Our IT department will deploy ArcGIS Server in multiple 2 core virtual servers on the selected physical platforms.

Figure 9-14 What is the Best Buy?

The CPT Calculator was used to identify the total number of virtual servers required to support the business requirements on each of the candidate physical platforms. The Xeon L5640 12 core (2 chip) 2.26 GHz platform requires three virtual servers (6 cores), the Xeon E5640 8 core (2 chip) 2.66 GHz platform requires three virtual servers (6 cores), and the Xeon X5677 platform requires two virtual servers (4 cores) to support the projected business loads. The cost analysis shows the X5677 platform would save over $9,000 on the initial procurement, with reduced software maintenance costs accrued over the life of the system. The X5677 was also the better performing solution, delivering twice the performance (half the render time) of the slower alternatives.
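
A minimal sketch of this kind of best-buy comparison, assuming the $5,000 per core software licensing estimate above (the hardware prices are rounded from the Figure 9-13 example, and the published Figure 9-14 analysis includes additional cost elements, so these illustrative totals will not match it exactly):

    def procurement_cost(hardware_price, licensed_cores, license_per_core=5_000):
        # Initial procurement: platform hardware plus per-core software licensing.
        return hardware_price + licensed_cores * license_per_core

    # Illustrative comparison: the X5677 platform (~$14,000) needs only 4 licensed
    # cores (two 2 core virtual servers), while the cheaper E5640 platform
    # (~$11,500) needs 6 licensed cores (three 2 core virtual servers).
    print(procurement_cost(14_000, 4))  # -> 34000
    print(procurement_cost(11_500, 6))  # -> 41500 (fewer cores win despite the higher hardware price)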

Selecting the right platform is more challenging today than ever before. You need to do your homework before you buy and know the model number you are looking for or you may be paying more for a platform that gives you less. Figure 9-15 provides a graphic overview of the platforms we just discussed showing the relative performance per core for each. It also includes last year’s platforms which are still for sale at a reduced price – we saw some good performance gains over this past year. The new Turbo Boost capability increases the processor MHz during light power loads to improve user performance.

Figure 9-15 Vendor Published Platform Performance

The platforms that run with reduced power are slower than the full power configurations (reduced power means reduced user productivity). Know what you are shopping for before you buy and you will be much happier with the performance of your new platform selection.


ArcGIS Desktop Platform Sizing

Figure 9-16 provides an overview of supported ArcGIS workstation platform technology. This chart shows the Intel platform performance changes experienced over the past five years. The new Intel Core i7-680 3200 MHz quad-core processor is more than 3 times faster and over 6 times the capacity of the Pentium D 3200 MHz platform that supported ARC/INFO workstation users in 2006. The advance of GIS technology is enriched by the remarkable contributions provided by ESRI's hardware partners.


Figure 9-16 Workstation Platform Recommendations

Full release and support for Windows 64-bit operating systems provide performance enhancement opportunities for ArcGIS Desktop workstation environments. The increasing size of the operating system executables and the number of concurrent operations supporting GIS work make more memory and improved memory access an advantage for ArcGIS Desktop users. Recommended ArcGIS Desktop workstation physical memory with an ArcSDE data source is 3 GB, and 6 GB may be required to support large image and/or file-based data sources.

Most GIS users are quite comfortable with the performance provided by current Windows desktop technology. Power users and heavier GIS user workflows will see big performance improvements with the faster Core i7 quad-core technology. Quad-core technology is now the standard for desktop platforms, and although a single process will see little performance gain in a multi-core environment, there will be significant user productivity gains from enabling concurrent processing of multiple executables. Parallel processing environments such as 3D image streaming with ArcGIS Explorer 900 and future enhancements with 3D simulation and geoprocessing will leverage the increased capacity of multi-core workstation environments.


Windows Terminal Server Platform Sizing

Windows Terminal Server supports centralized deployment of ArcGIS Desktop applications for use by remote terminal clients. Figure 9-17 identifies three standard Windows Terminal Server software configurations. The ArcGIS Desktop direct connect architecture will be used to demonstrate how Windows Terminal Server sizing has been influenced by hardware technology change.

Figure 9-17 Windows Terminal Server Architecture

Figure 9-18 identifies how vendor hardware improvements have made a difference in Windows Terminal Server sizing over the past 5 years. Improvements in processor core performance, in conjunction with more processor cores per chip, have significantly increased server throughput capacity (the number of concurrent users supported on a single platform). As the number of concurrent user sessions on a platform increases, the memory must also increase to accommodate the additional sessions. Heavier workflows can require more memory per session than lighter workflows. Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity.

Figure 9-18 Windows Terminal Server Platform Capacity is Changing

Windows Terminal Servers were limited to about 15 concurrent users on a 2 processor (chip) platform back in 2006. Platform performance and capacity have really changed, supporting up to 177 concurrent users on a 2 chip platform today.

It is important to take advantage of the Windows 64-bit operating system for the new Intel platforms, since these higher capacity servers require much more physical memory to handle the high number of concurrent active client sessions. Up to 96 GB of memory is required to take full advantage of the 88 – 177 concurrent user capacity available with the Citrix XenApp Server hosted on Xeon X5677 8 core (2 chip) 3467 MHz platforms (half this capacity could be supported with a 4 core (1 chip) configuration). Deploying Windows Terminal Server in a 4 core virtual server environment would reduce server capacity to 35 – 70 concurrent users. A 64-bit operating system is critical for these high capacity servers, improving memory management and providing up to 10 percent performance gains over the Windows 32-bit Server Advanced operating systems.
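
A minimal sketch of this kind of memory sizing (the per-session memory and operating system overhead values are assumptions for illustration, not published requirements):

    def terminal_server_memory_gb(concurrent_sessions, gb_per_session=0.5, os_overhead_gb=4):
        # Rough physical memory estimate: per-session working memory plus OS overhead.
        return concurrent_sessions * gb_per_session + os_overhead_gb

    # Illustrative: 177 concurrent sessions at an assumed 0.5 GB each lands in the
    # same neighborhood as the 96 GB configuration discussed above.
    print(terminal_server_memory_gb(177))  # -> 92.5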

These high performance servers push capacity to new levels, and GIS applications may push platform and disk subsystems to their limits. You should monitor disk I/O and platform paging during peak loads to ensure these subsystems are not overloaded. More memory can reduce paging, and data on disk can be distributed to reduce disk contention. You need to know you have a problem before you can fix it, so keep an eye on platform performance metrics to see all is working as it should.


ArcSDE Geodatabase Server Sizing

Figure 9-19 identifies software configuration options for the geodatabase server platforms. The geodatabase transaction models apply to both ArcGIS Desktop and Web mapping service transactions. Normally a geodatabase is deployed on a single database server node, and larger capacity servers are required to support scale-up user requirements. We will use the Direct Connect architecture for the database sizing demonstrations.

Figure 9-19 ArcSDE Geodatabase Server Architecture Alternatives

The ArcSDE and DBMS display processing times (service times) are roughly the same for capacity sizing purposes, so the DBMS Server and ArcSDE Remote Servers platform sizing would be about the same.

Figure 9-20 identifies the impact of hardware technology change on ArcSDE geodatabase server sizing over the past 5 years. Improvements in processor core performance, in conjunction with more processor cores per chip, have significantly increased server throughput capacity (the number of concurrent users supported on a single platform). As the number of concurrent user sessions on a platform increases, the memory requirements will increase to accommodate the additional sessions. Heavier workflows can require more memory per session than lighter workflows. Servers must be configured with sufficient physical memory to take advantage of the higher platform processing capacity.

Figure 9-20 Geodatabase Server Platform Capacity is Changing

Commodity Intel platforms in 2005 were limited to 55 – 110 concurrent users. If you had more than 100 GIS clients, you needed expensive higher capacity UNIX platforms to support peak enterprise geodatabase loads. Oracle spent a considerable amount of money developing Real Application Clusters that could leverage commodity servers in a database cluster environment – the idea was to extend database server capacity by leveraging lower cost commodity platforms. Esri spent a considerable amount on distributed geodatabase technology – again looking for ways to manage enterprise GIS environments with lower cost commodity servers.

The Intel Xeon X5677 8 core (2 chip) 3467 MHz commodity platform with 96 GB memory can support up to 1600 concurrent geodatabase clients – over 15 times the capacity of the 2005 commodity platforms. Four core virtual servers can support up to 630 concurrent geodatabase users. This has changed how we think about Enterprise capacity – the database hardware is no longer the limiting technology.

ArcGIS Desktop Standard Workflow Performance

Figure 9-21 provides an overview of the display performance for Standard ESRI Workflows used in the Capacity Planning Tool.

Figure 9-21 ArcGIS Desktop Performance Summaries (Standard ESRI Workflows)

The 20 workflow combinations identified above can be generated from just three Standard ESRI Workflows included on the Capacity Planning Tool workflow tab. The first chart shows workflows using the ArcGIS 10 Desktop Medium Workstation workflow, while the second chart shows the same workflows with the ArcGIS Desktop application supported on a Windows Terminal Server configuration.


Web Mapping Servers

Web mapping services platform sizing guidelines are provided for the ArcIMS and ArcGIS Server software technology. The ArcIMS image service is deployed using the ArcIMS software, and the ArcGIS Server map services are deployed using the ArcGIS Server software. All Web mapping technologies can be deployed in a mixed software environment (they can be deployed on the same server platform together). All mapping services can be configured to access a file data source or a separate ArcSDE database. Geodatabase access can be through direct connect or an ArcSDE server connection.

Web Mapping Performance Changes

Web mapping services have experienced dramatic performance changes over the past 5 years. These performance enhancements improve Web user productivity and reduce deployment cost. Some of these performance changes were due to expanding software deployment options and others were due to improved hardware processing speed and platform capacity changes.

Figure 9-22 identifies recommended software configuration options for standard two-tier Web mapping deployments. This configuration option supports the Web server and spatial servers (container machines) on the same platform tier. The following charts will show how technology has changed and its impact on Web server platform sizing.

Figure 9-22 Web Server Two Tier Architecture

Figure 9-23 provides an overview of available 2005 - 2006 technology. ArcIMS was the primary Web mapping choice. ArcGIS Server Web mapping applications introduced options for deploying much richer Web mapping applications. Display performance ranged from 1 - 4 seconds over remote 1.5 Mbps connections. Typical entry level ArcIMS Image Service configurations supported peak throughput of 8,000 to 16,000 transactions per hour, while richer ArcGIS Server map services supported about half this capacity.

Figure 9-23 2005 - 2006 Web Service Performance Summary (Standard ESRI Workflows)

Figure 9-24 provides an overview of available 2007 - 2008 technology. ArcGIS Server Web mapping applications were gaining market share, with ArcIMS mapping services retaining a major market share. ArcIMS and ArcGIS Server ADF display performance improved slightly ranging from 1 - 2.5 seconds over remote 1.5 Mbps connections. Typical entry level ArcIMS Image Service configurations supported peak throughput of 21,000 to 55,000 transactions per hour, while richer ArcGIS Server ADF applications supported about half this capacity.

Figure 9-24 2007 - 2008 Web Service Performance Summary (Standard ESRI Workflows)

ArcGIS Server REST services and a new map cache data source were introduced in 2008, expanding ArcGIS Server development options. ArcGIS Server REST services improved entry level Web dynamic mapping throughput capacity by over 20 percent over similar ArcGIS Server ADF deployments. The map cache data source reduced dynamic server loads to almost zero (pre-processed map services), with remote client display performance determined primarily by network bandwidth. Map cache tiles are retained in the local browser cache, providing a very fast Web mapping experience for clients working in an established local area.

Figure 9-25 provides an overview of available 2009 technology. This was the first year ArcGIS Server provided a dynamic Web mapping deployment pattern that outperformed the ArcIMS image service, removing any remaining ArcIMS performance advantage over ArcGIS Server and providing a broad range of proven functional benefits to encourage ArcIMS migration to current Web mapping software technology. Web mapping performance improved to a range of 0.5 - 2.0 seconds over remote 1.5 Mbps connections. Typical entry level ArcIMS image service configurations supported peak throughput up to 90,000 transactions per hour, while similar ArcGIS Server dynamic mapping applications supported peak throughput loads up to 118,000 transactions per hour.


Figure 9-25 2009 Web Service Performance Summary (Standard ESRI Workflows)

ArcGIS Server REST MSD services and improved map cache base layer mashups were introduced in 2009, enhancing and expanding ArcGIS Server development options. ArcGIS Server REST MSD services improved entry level Web dynamic mapping throughput capacity by over 100 percent over similar ArcGIS Server REST MXD deployments, significantly enhancing map performance and quality. Map cache base layer mashups significantly reduced dynamic map layer transaction loads, introducing a new back-office data management strategy (pre-processing map cache basemap layers) for publishing fast interactive mapping services.

The 9 workflow combinations above provide a representative subset of the Standard ESRI Workflows for Web Mapping Services included on the Capacity Planning Tool workflow tab. The CPT Calculator can generate hundreds of customer workflow performance targets based on software technology selection and map service configuration parameters. The ArcGIS Server 9.3.1 software technology options, along with platform performance improvements of over 70 percent per core, make 2009 a record breaking year for Web service performance improvements.

Figure 9-26 provides an overview of available 2010 technology. Web mapping performance continued to improve, ranging from 0.3 - 2.0 seconds over remote 1.5 Mbps connections. Typical entry level ArcIMS image service configurations supported peak throughput up to 110,000 transactions per hour, while similar ArcGIS Server dynamic mapping applications supported peak throughput loads up to 144,000 transactions per hour.


Figure 9-26 2010 Web Service Performance Summary (Standard ESRI Workflows)

Network bandwidth is currently one of the primary factors impacting Web client display performance. Server processing load variations of the different ArcGIS Server deployment patterns have a secondary impact on client display performance. Server platform technology (processor performance and platform capacity) along with the software technology and display performance parameters determine platform sizing and peak server throughput capacity.
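
A minimal sketch of the network component of Web display response time (the per-display traffic value is an assumption for illustration):

    def network_transport_seconds(display_traffic_mb, bandwidth_mbps):
        # Time to move one map display across the network link, given display
        # traffic in megabits and effective connection bandwidth in Mbps.
        return display_traffic_mb / bandwidth_mbps

    # Illustrative: a 1 Mb dynamic map display over a dedicated 1.5 Mbps connection.
    print(network_transport_seconds(1.0, 1.5))  # -> ~0.67 seconds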


Platform Selection Criteria

Figure 9-27 provides a summary of the factors contributing to proper hardware selection. These factors include the following:

Figure 9-27 Platform Vendor Selection

Platform Performance: Platform must be configured properly to support user performance requirements. Proper platform technology selection based on user performance needs and peak system processing loads significantly reduces implementation risk. ESRI performance sizing models establish a solid foundation for proper hardware platform selection. The ESRI Capacity Planning Tool automates the System Architecture Design analysis, providing a framework for coupling enterprise GIS user requirements analysis with system architecture design and proper platform technology selection.

Purchase Price: The cost of hardware will vary depending on vendor selection and platform configuration. Capacity planning tools can identify the specific technology required to satisfy peak system processing needs. Pricing should be based on the evaluation of hardware platforms with equal display performance and platform workflow capacity.

System Supportability: Customers must evaluate system supportability based on vendor claims and previous experience with supporting vendor technology.

Vendor Relationships: Relationships with the hardware vendor may be an important consideration when supporting complex system deployments.

Total Life Cycle Costs: Total cost of the system may depend on many factors including existing customer administration of similar hardware environments, hardware reliability, and maintainability. Customers must assess these factors based on previous experience with the vendor technology and evaluation of vendor total cost of ownership claims.

Establishing specific hardware technology specifications for evaluation during hardware source selection significantly improves the quality of the hardware selection process. Proper system architecture design and hardware selection provide a basis for successful system deployment.

CPT: Platform Performance

Previous Editions

[Platform Performance 27th Edition (Spring 2010)]

