Software Performance 30th Edition (Fall 2011)

Fall 2011 GIS Software Performance 30th Edition

This section shares lessons learned about selecting and building effective GIS design solutions that satisfy operational performance and scalability needs. Software technology allows us to model our work processes, and provide these models to computers to optimize user workflow performance. The complexity of these models, the functions selected to generate our display, and how application functions are orchestrated to analyze and present information processing needs have a significant impact on computer system performance and scalability.

For many years we focused our system architecture design consulting efforts toward identifying and establishing a hardware infrastructure that would support a standard implementation of Esri software technology. We developed platform sizing models based on consulting experience and customer implementation success. We updated our sizing models based on relative performance benchmark testing which focused on quantifying changes in critical processing loads introduced with each new software release.

There were examples of GIS deployments that did not take advantage of system architecture design best practices. Systems were deployed with unresolved performance issues, and scalability was not well understood. In some cases performance issues were not identified before the production system was under critical peak loads, and the platform solution or network infrastructure failed to meet performance needs. Resolving performance issues while in production can be expensive, both in terms of lost services and user productivity. Building a system design that addresses capacity planning needs throughout development and deployment can improve user productivity and reduced implementation risk.

Technology is Changing GIS user Productivity
User workflow performance is changing. ARC/INFO was ported to the Windows operating system in 1997 on the Intel Pentium Pro 200 MHz workstation. GIS user map display refresh time was about 10 seconds; with some geographic analysts telling stories about how this same map used to take up to 70 seconds on a Sun SPARCStation 2 Solaris workstation back in 1992. ArcGIS Desktop was introduced in 1998, taking about 12 seconds to render the same map on an Intel Pentium II 300 MHz workstation (ARC/INFO could produce the same map in half the processing time). By 2000 this same map display was rendered in less than 4 seconds on an Intel Pentium III 500 MHz workstation. Web mapping (ArcIMS) was introduced in CY2000 as a way to share geographic information products throughout the organization and with the public - data center map display rendering time was less than 3 seconds.



Figure 3-1 shows the changes in map display performance over the last 10 years. In 2001, Light ArcIMS map displays rendered in less than 2.5 seconds, while medium complexity maps still took almost 5 seconds. GIS software technology patterns expanded in 2004 through 2010, with medium complexity dynamic map services rendered in less than a second by 2008 along with a growing interest in using preprocessed map products delivered from optimized cached map services. Heavier high quality cached basemaps now provide a rich foundation for displaying dynamic business information products.

GIS display performance has changed, and this provides new opportunities for future GIS information products. If we assume a maximum user productivity of 6 – 10 displays per minute and an average user think time of less than 6 seconds, map display processing times of less than 1 second can satisfy most user productivity needs. With 2008 - 2010 technology, we have achieved this level of performance for the full range of traditional ArcGIS Server map publishing deployment patterns. What does this mean?



Figure 3-4 provides a simpler view, showing light, medium, and heavy display rendering times over the same time period. User performance expectations have changed along with the technology – GIS project efforts can be completed in less than half the time it took just 10 years ago. GIS professionals used to wait on computers to do their work – today computers are waiting for us to review and provide our feedback.

User display performance expectations in 2000 were around 5 seconds – a challenge for light map displays viewed in the computer room. The same map service today can be rendered in less than 0.25 seconds. These performance improvements open opportunities for more complex dynamic map services, deployment of ArcGIS Server on easier to manage virtual server environments, deployment of Web services on hosted cloud computing environment, or possibly much richer dynamic services that employ more sophisticated statistical analysis or network routing algorithms (2 to 3 times the complexity of current GIS workflow baselines). These opportunities will introduce new challenges – as heavier processing options are introduced, it will be increasingly important to plan, set performance milestones, and manage compliance during system deployment. At the same time we have more opportunities than ever before to reduce the risk of deploying systems that do not meet our performance needs.

We noticed over the years that our test and tuning efforts were often finding and fixing the same performance issues over and over again, and from these lessons learned we identified best practices for building high performance scalable systems. The ArcGIS Server technology today includes a broad range of functionality, from simple cached map services with very high performance and scalability to heavy geoprocessing services that can take hours for a single request. Developers and administrators need to understand what functions are appropriate, and how to deploy a solution that meets user productivity needs during peak system loads.

This section shares our lessons learned, and identifies functional areas within the software that make a difference. Our capacity planning tools are designed to help consultants and developers set appropriate performance targets based on the selected software technology. The tools also identify how to measure compliance during design and development. These same tools can be used to maintain and support deployed operational system performance needs.

This chapter will share system design best practices and associated performance baselines for the most common GIS software deployment patterns and identify several key parameters that impact system performance and scalability. We will also share how to use the available capacity planning tools for setting appropriate performance targets and validating compliance during design and implementation of the selected technology solution. 

Map Display Performance
GIS provides users with a geographic view of their business environment, and for many GIS workflows the map display is used as a primary spatial information resource. The software functions used to generate map documents used in the display can represent the heaviest processing loads within the user workflow. The map data resources are often shared across a network from a common geodatabase data source, generating a relatively high network traffic and server processing load with each display. The map display is an information product that is simple to understand, so often the average user productivity may include up to 10 displays per minute for an active GIS power user. The types of functions, data sources, and design of the user display can make a big difference on the level of processing and network loads required to support a GIS user workflow.

Figure 3-3 shows the processing performance for three different ArcGIS Server dynamic map displays, all deployed on the same platform environment. The performance difference can be traced back to the complexity of the map display document. The light display requires about half the amount of map display processing (slightly under 0.2 second) as the medium display, and the heavy display requires over three times the light display processing (more than 0.5 seconds). The design and complexity of the map display make a significant difference in system performance and scalability. Light maps are rendered three times faster than heavy maps. The first step in publishing a map service is to create a map document (MXD) representing the geographic information product you wish to publish.

The following best practices share lessons learned in designing high performance map documents. The complexity of the authored map document will be a primary factor in determining map service performance and scalability. Online help provides several tips for designing maps for optimal performance.



Quality vs. Speed
Figure 3-4 shows the classic tradeoff between quality and performance.

The high quality map is a shaded relief with transparent layers and dynamic Maplex labeling. These are expensive functions that require extra computer processing for each user display. In contrast, the same display on the right uses low resolution relief, solid colors, and simple annotation providing similar information with good display performance.



Optimizing lines and polygons
We discussed earlier the importance of keeping the display functions simple. The ArcGIS software includes a symbol selection called Esri Optimized to guide users to the more simple display symbols. Outlines for all fills are simple instead of cartographic lines. Picture fills are EMF-based instead of BMP-based. Figure 3-5 shows the location of the Esri Optimized symbol selection.

Using Esri Optimized symbols can improve display drawing performance by up to 50 percent.



Measuring display complexity
Display complexity is an estimate of the average amount of computer processing required to produce an average display. For GIS mapping workflows, most of the display rendering time is consumed in creating the dynamic map product. The map rendering time can be estimated/observed during the map authoring process. ArcGIS 9.3.1 release introduced a Map Service Publishing Preview tool that measures optimized (MSD) map display rendering time. Esri consultants use the MXDPerfstat performance statistics tool to generate a performance profile of an MXD map document. Map display rendering time can be used as a relative measure of map display complexity.

Map rendering time is measured in seconds, and the measured result will be a function of the map display complexity and performance of the platform rendering the map. Faster computer platforms will render the map in much less time than slower platforms. Figure 3-6 plots map display complexity (light, medium, heavy) as a function of MXD rendering time and platform performance (vendor published SPECrate_int2006 per core performance). A medium complexity MXD map display can be rendered in 0.38 seconds by a workstation performing at 2011 baseline levels (SRint2006 = 40/core).

Platform performance is represented as a horizontal line based on vendor published per core SPEC performance baseline. Display complexity (low, medium, high) is shown as a function of platform performance. Intersection of the horizontal platform performance line with the display complexity curve identifies the measured render time on the horizontal x-axis.



Map Service Publishing Preview Tool
The ArcGIS 9.3.1 release introduced some new map optimization tools incorporated in the Map Service Publishing tool bar. Figure 3-7 provides an overview of the ArcGIS Map Service Publishing toolbar. The Preview tool included with the Map Service Publishing toolbar can be used to measure MSD render time. The CPT Calculator models estimate relative performance between MXD and MSD rendering times (you can estimate MXD rendering time by multiplying measured MSD rendering time by a factor of 2). ArcGIS Desktop MXD rendering time can be used along with the MXD Display Performance Platform Translation to estimate map display complexity (low, medium, high).

The Map Service Publishing tools use the new MSD rendering engine to analyze and display the selected ArcGIS Desktop map extent. With the ArcGIS Desktop Map Service Publishing tools, results are provided for the current map scale only - so you would need to move the display to each area you would like to evaluate.

It is important for me to state that the average workflow service times used for capacity planning may not be the same value as a specific map display render time. Some workflow displays will be heavier, while other displays are lighter. User productivity (displays per minute) is also a factor in defining workflow processing loads. The workflow service times and user productivity together should represent the average user loads applied to the system over time. It is best to estimate actual workflow service time loads from platform throughput and CPU utilization measurements collected during real user operations - it is very difficult to accurately simulate or estimate these loads by measuring single map display processing times. This does not preclude setting appropriate workflow performance targets for capacity planning purposes. These performance targets will be directly influenced by the number of map display layers, the features per layer, and the complexity of the functions used in building the display (display complexity).



Measuring map document display performance
Figure 3-8 provides a sample output generated by the MXDPerfstat performance measurement tool. Results below from the MXDPerfStat report were copied into Microsoft Excel for display purposes.

The MXDperfstat tool identifies display refresh times at multiple scales, shows layer refresh times for each map scale, provides layer performance statistics such as number of features, vectors, labeling, and breaks out display time for several key rendering phases (geography, graphics, cursor, and database). The tool also provides some high level recommendations for performance tuning (actually, once you see the layer processing time and the performance metrics the display problems are quite obvious).

MXDperfstat is an excellent tool for measuring map document display performance, since it lists layer statistics (render time, features, edges, projection, etc) for each layer included within a complete series of map scales. The measured results can be used for evaluating map display performance and tuning your map document for optimum display performance.



ArcGIS Server Optimized Map Service Document (MSD)
ArcGIS 9.3.1 introduced a new optimized map service description (MSD). Optimized map services perform better than ArcIMS AXL and MXD-based map services, provide more consistent performance across Windows and Linux operating system environments, and use a new cartographic engine that significantly improves map display output quality. The standard map document (MXD) is translated to an optimized map document (MSD) for high performance high quality read only map publishing. Figure 3-9 shows the CPT Calculator map document selection (MXD or MSD).

There are some functional limitations with publishing with the MSD map document. Some of the nice features of the map publishing analysis report is that identified problems with the display include hyperlinks to online Esri Help providing instructions for resolving errors and optimizing performance.



Measuring map service display performance
Figure 3-10 provides a sample output generated by the PerfHeatMap performance measurement tool. The PerfHeatMap tool measures published ArcGIS Server map service display rendering times for multiple scales across the full map extent.

PerfHeatMap results are displayed on a map grid showing color coded rendering time ranges across the map extent. Display rendering times are generated for each map scale, presenting a spatial overview of display processing loads. These results can be useful for identifying specific “hot” areas within the map extent or estimating display complexity based on specific area work profiles. 

Selecting Workflow Display Complexity
Figure 3-11 provides an overview of the CPT Calculator workflow display complexity selection and associated performance metrics. The complexity selection includes 7 parameters (light, medium light, medium, medium heavy, heavy, 2xMedium, 3xMedium). The workflow baseline processing times (Esri benchmark) establish the medium complexity service times. Light complexity is half the benchmark processing times, and heavy is 50 percent more than the benchmark processing times. Medium light and medium heavy settings are half way between the extremes. The 2x and 3x medium complexity settings were added to represent workflows that include heavier analysis and richer dynamic display functions.

Workflow ArcGIS Desktop, SOC, SDE, and DBMS services times are adjusted based on the display complexity setting. Web application and browser/terminal client processing times are not impacted by the complexity settings.



Selecting the right Image Resolution
The resolution of the map service output (map size) is an important performance consideration. Figure 3-12 compares the PNG24 data volume (Mbpd) when increasing the display extent from 600 x 400 to 1280 x 1024 resolution. Doubling the display resolution (map size) more than doubles the display output traffic.

The resolution of the map display output can be an important consideration for GIS user workflows. A high resolution display with full screen map size is important for users that work with map displays over long periods throughout the day. A display resolution of 1280 x 1024 would be common for GIS power users or data maintenance (GIS editor) workflows. Rich Internet Applications (Flex and Silverlight) present higher resolution map displays. Remote client display performance on a dedicated T-1 connection would be over 2.3 seconds slower than a local user display. A display resolution of 600 x 400 is a common size for Web browser map displays. Remote display performance over T-1 line is within 0.6 seconds of local clients.

Web mapping services produce map images that are sent to the client browser for display. Each user request will generate a new map image that must be delivered to the client browser. The size of the output image varies directly with the number of pixels, so higher resolution images generate much higher client traffic loads. The required amount of traffic per display can have a significant impact on user performance over lower bandwidth. Server processing loads may also increase with higher resolution displays (higher resolution can result in larger map extent increasing the number of features rendered for each client display).

Figure 3-13 provides an overview of the CPT Calculator map display Resolution selections. Resolution selection impacts SOC, SDE, DBMS services times and display traffic (traffic is the most significant performance impact). Display resolution can vary from 400x300 for mobile devices up to 1600x1200 for high resolution desktop map displays. Performance charts show traffic and processing adjustments associated with the different resolution settings. CPT Calculator performance adjustments are estimates derived from Esri benchmark testing.



Selecting the right image output format
Web mapping services produce map images that are sent to the client browser for display. Each user request will generate a new map image that must be delivered to the client browser. The selected image type can have a direct impact on the volume of network traffic. Lighter images require less display traffic and heavier images require more display traffic. The required amount of traffic per display can have a significant impact on user performance over lower bandwidth.

Figure 3-14 identifies the amount of data required to support common ArcGIS Server output image types (JPEG, PNG24, PNG8, PDF). The volume of data identified in megabits per display (Mbpd) required to support the same map varies with each image type. Transport times represent display performance impacts over low bandwidth (56 Mbps and T-1: 1.5 Mbps client connections). A common resolution of 600 x 400 pixels was used for image type comparison. Vector only images compress better than images than include a Digital Ortho raster layer.

JPEG image types provide the most consistent compression, with a slight variation between raster and vector images. PNG images do much better with vector data than with raster - PNG supports transparencies and is the default ArcGIS Server format. PDF is a heavier output format used for high quality map plotting.

 Figure 3-15 provides an overview of the CPT Calculator Density (Vector Only, Raster Image) and Output (Default, JPEG, PNG8, PNG24, PNG32, PDF, Feature) selections. Default output settings are generated based on the Density selection (Vector Only = PNG24, Raster Image = JPEG). The CPT Calculator applies appropriate traffic and performance adjustments based on the Density and Output selections. Software performance parameter adjustments are derived from Esri benchmark testing.



GIS Dynamic Map Display Processing
GIS displays are normally created one layer at a time, similar to the procedure geographers would follow to layout a map display on a table using Mylar sheets. The technology has changed, but the procedure for building a map is much the same. Maps with a few layers require less processing than maps with many layers (computer programs are more sensitive about the number of layers (feature tables) in a display than about the number of features in a single layer (rows in a feature table). The display complexity (number of features, edges, symbology, tasks, etc) impact rendering time for each layer.

Figure 3-16 shows the software processing for building a map display, one layer at a time, joining the features (points, polygons, lines) in each layer sequentially one on top of the other until the final display is complete.

 Figure 3-17 shows a software process for building the layers of a map display using a parallel processing procedure. The procedure initiates three separate display requests, each building a third of the display layers. An additional server process then brings the three primary layers together to build the final display.

In theory, the second approach generates the display faster. The same amount of processing is required for both methods - in fact the parallel approach requires additional procedures for establishing the parallel display request and then bringing those results back together to produce the final map display.

Hardware vendors are providing computers with an increasing number of processor core per chip, expanding the capacity of the server platforms with reducing expectations for increased performance gains per processor core. Vendors have encourages software developers to take advantage of this increased server capacity by increasing the number of concurrent processes used in generating each user display. Most heavy processing workloads today require sequential processing and a single display generation will not take advantage of multiple processor core. The actual standard map display performance gains for reprogramming software to take advantage of parallel processing may not be worth the extra effort and additional processing loads.

Figure 3-18 compares a traditional COTS map display performance with the performance of a parallel query map display, where the display layers are blended together on the client browser. Parallel query displays can be published with the current ArcGIS Server technology - but is the performance gain worth the use of extra shared infrastructure resources.

The parallel implementation is supported by three ArcGIS Server REST API map services mashed together in a JavaScript API client browser application. Client access was over a 1.5 Mbps DSL Internet connection, requiring over 1 sec to deliver each 200 KB map image over the network connection to the client browser display. The extra network transport time and queue time to support the parallel display build consumed most of the parallel processing display performance gain. Parallel processing may not always improve system performance, and in some cases could reduce overall system performance and scalability.



ArcGIS Server map cache: the Performance Edge
ArcGIS Server technology provides a variety of enhanced solutions to improve production system performance and scalability. The concept of pre-processing map outputs for customer delivery is not new - we used to do this with paper map books before we had computers. The ArcGIS Server 9.3 release introduces a variety of ways to maintain and support pre-processed maps, and to organize the map files in a map cache structure optimized for map publishing.

Figure 3-19 shows the display time for both light and medium complexity dynamic map products in comparison to the display time for a fully cached map. The quality of the fully cached map can be much higher than the medium dynamic display, the difference is that the fully cached map processing was completed before posting on the Web site, and the final processing time is minimal. Pre-cached maps perform and scale much faster than dynamic Web map services. <br style="clear: both" />

The optimum Web mapping display combines dynamic map services presented as a transparent image [mashup (business layers)] over a map cache base layer. Dynamic map services are important for geographic analysis, editing, and geoprocessing functions which require access to point, polygon, and line features rendered in a dynamic map display. Map cache tiles provide an optimum base map foundation layer, combining high quality map visualization with high display performance. A mashup of dynamic operational layers over high quality base map reference layer delivers the optimum combination of quality and performance.

Figure 3-20 shows how the Capacity Planning Calculator is designed to encourage cached map services. ArcGIS Desktop can be used to create the map document (MXD) from a local dynamic data source. Dynamic map display complexity can be estimated based on measured map rendering time. Map layers used for background visualization can be identified as candidates for a map cache data source, and removed from the dynamic MXD display. Map rendering time of the remaining dynamic layers can be used to estimate percent of display that will be cached. The CPT Calculator percent data cache (%DataCache) setting will reduce display processing time based on the cached layers removed from the dynamic display.

The CPT Calculator includes an option to add a map cache data source to the selected dynamic workflow, providing appropriate composite service loads to complete a system design. The "+mapcache" data source selection applies an addition 0.5 Mbpd traffic load for Calculator sizing purposes (this load is included in the display traffic displayed on the Workflow tab). The client traffic identified in cell E4 will turn pink to highlight the composite traffic load setting and +$$ will be added to the Calculator workflow name.

The ArcGIS Server map cache service traffic can vary greatly depending on the workflow activity. Map tiles are downloaded to the client browser Internet cache and subsequent request for the same tile are delivered from local client cache (not over the network). A "well behaved" client (one who works in a small geographic area) will soon have all the map tiles located on the local machine cache, and minimum map tile traffic will be required from the central site over the network. On the other hand, a "world traveler" scenario (each map view from a different world location) can generate very high display traffic over the network connection (several map tiles required for each map display). The average 0.5 Mbpd traffic used by the CPT Calculator is based on average worldwide requests from ArcGIS.com map cache service statistics.

<br style="clear: both" />

Data Source Performance Parameters
Figure 3-21 compares display performance between using an ArcSDE Geodatabase data source and the available File data sources. The file data sources include small and large shape files and file geodatabase. Values shown on the chart are those currently represented in the Capacity Planning Tool data source performance targets. The File Geodatabase was introduced with the ArcGIS 9.2 release. Performance factors were updated based on available ArcGIS performance benchmark test results.

The small Shape File performance is about the same as the ArcSDE Geodatabase. The large Shape File format requires three times the processing required with ArcSDE. The small File Geodatabase loads are about 80 percent of the ArcSDE Geodatabase performance values, while the large File Geodatabase provides performance similar to ArcSDE loads.

<br style="clear: both" />

Figure 9-22 shows the data extent of the San Diego geodatabase used in performance validation testing. Initial evaluation of performance with shape files was conducted using this data set.

The performance test series generated common displays for 100 random maps within the neighborhood area identified in the map above. The ArcSDE geodatabase display performance was roughly the same whether using the full San Diego extent or the small neighborhood area as the data source.

The small Shape File testing used vector layers extracted for the neighborhood extent identified above. The large Shape File data set included the complete San Diego area (the Capacity Planning Tool large shape file load would represent about twice the full San Diego data extent).

Display performance with the File Geodatabase is much improved over the Shape File format (test results are not shown). Tests have been completed accessing a File Geodatabase with up to 1 TB of data, and performance was quite good. The small File Geodatabase performance targets would likely support a data source representing the full San Diego extent. The large File Geodatabase performance targets would data sources up to 1 TB in size.

It is difficult to be precise when evaluating these performance targets. There are many factors in the database design, number of features per layer, and number of layers in the display that can factor into these performance values. Processing loads on the file data server platform are very light (all query processing loads for a file data source are performed by the client application).



Figure 3-23 provides some additional information on the number of display layers, total number of features, and the size of the San Diego data source used during our initial performance validation testing. The same raster layer was used for both tests. Network throughput for the small Shape File data source was half the traffic of the medium Shape File data source.

Figure 3-24 shows the different in display performance for the two Shape File data source. The medium Shape File required twice the processing of the small Shape File. Data source adjustments were modified based on results of ArcGIS 9.3.1 performance validation testing. Small File GDB outperforms the SDE Geodatabase data source by about 20 percent, while large File GDB performs about the same as an SDE geodatabase. Large shape file performs about 3 times slower than the SDE Geodatabase. These are performance planning targets and large Enterprise SDE Geodatabase data sources may be maintained and tuned to outperform the file geodatabase.

The large Shape File performance targets included in the Capacity Planning Tool represent 3 times the processing of the small Shape file. These performance targets should be adequate to support twice the extent identified in the test results shown above (9 vector layers with up to 2 million features or possibly twice the number of vector layers at the same extent).

Figure 3.25 identifies the data source selections (SDE DBMS, Small File GDB, Large File GDB, Small Shape file, Large Shape File, Image) available for the CPT Calculator.

The same data source selections are available for each workflow identified on the CPT Design tab. The data source performance adjustments are provided in a "CalcData" lookup table. Adjustment factors are applied to desktop, WTS Citrix, and SOC application service times and client display traffic. The same adjustment factors are applied to workflow data source selections on the CPT Design tab.

<br style="clear: both" />

Capacity Planning Workflow Recipe
The capacity planning workflow nomenclature was first introduced in the GIS Software Technology section. The workflow name generated by the Capacity Planning Calculator tool provides a recipe for generating the selected workflow performance targets. The selected software technology pattern and key performance parameters selected during GIS planning are documented in the workflow recipe.

The Capacity Planning Calculator generates performance targets based on software technology baselines and key parameters derived from Esri benchmark testing. The software technology baselines are reviewed and adjusted with each software release. Figure 3-26 provides an overview of the CPT Calculator workflow recipe.

The workflow recipe identifies the selected software release version (930, 931, 10, etc) followed by the deployment pattern (Wkstn, WTS Citrix, ADF, SOAP, REST, WMS, Image, etc). The recipe continues with the map document (MXD or MSD), complexity (Lite, ML, Med, MH, Hvy, 2xMed, 3xMed), %Dynamic (1-%DataCache), resolution (4x3, 6x4, 8x6, 10x7, 12x10, 16x12), density (V or R), and output (JPEG, PNG8, PNG24, PNG32, PDF, ICA, Feature). Calculator generated workflow service times and traffic are displayed on the CPT Workflow tab along with the workflow recipe. The Calculator recipe matches the standard Esri workflow names provided on the CPT Workflow tab. The CPT Calculator workflow recipe identifies the assumptions made in creating performance targets for use in the CPT Design.
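As a reading aid, the recipe assembly described above can be sketched in a few lines of Python (a hypothetical helper, not part of the CPT workbook; field names and ordering follow the text above):

```python
# Hypothetical sketch of how a CPT workflow recipe name is assembled from
# the selected parameters (illustrative only; not actual CPT code).
def workflow_recipe(release, pattern, document, complexity,
                    pct_dynamic, resolution, density, output):
    """Assemble a recipe string from the selected workflow parameters."""
    return " ".join([release, pattern, document, complexity,
                     f"{pct_dynamic}%Dyn", resolution, density, output])

name = workflow_recipe("10", "REST", "MSD", "Med", 100, "10x7", "V", "PNG24")
# name is "10 REST MSD Med 100%Dyn 10x7 V PNG24"
```

Each field records one assumption made when generating the performance targets, which is what makes the recipe useful as documentation on the CPT Workflow tab.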

Workflow display traffic and service times (performance targets) are generated from the measured benchmark results included as lookup tables on the CPT Calculator tab. Figure 3-27 shows the CPT Calculator calculations for a custom workflow. The software technology and software performance parameter selections are shown in column B on rows 71 to 78 of the CPT Calculator tab. The baseline workflow traffic and service times are pulled from the workflow baseline lookup table. The baseline performance targets are multiplied by adjustments pulled from the various performance factor lookup tables. The adjusted baseline service times are passed to the CPT Workflow tab (row 88 in the lower figure). System configuration adjustments (data source, platform configuration, operating system) are made on the CPT Calculator and the Design tabs providing final adjustments supporting the system architecture design analysis. The formulas used to complete the system architecture design analysis are discussed in Performance Fundamentals (Chapter 9).
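The lookup-and-multiply pattern described above can be sketched as follows (all numbers are illustrative placeholders, not actual CPT baseline values or adjustment factors):

```python
# Illustrative sketch of the CPT adjustment pattern: a baseline performance
# target is multiplied by factors pulled from the lookup tables.
baseline_soc_seconds = 0.30      # hypothetical baseline SOC service time
adjustment_factors = [1.2, 0.9]  # e.g. complexity and output adjustments (assumed)

def adjusted(value, factors):
    """Apply each performance adjustment factor to a baseline value."""
    for factor in factors:
        value *= factor
    return value

service_time = adjusted(baseline_soc_seconds, adjustment_factors)  # about 0.324 s
```

The same multiplication is applied to traffic values, and the adjusted targets are what flow through to the CPT Workflow and Design tabs.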

<br style="clear: both" />

Performance Testing Tools
A variety of performance monitor and testing tools are available for systems administration. Standard tools used for Esri benchmark testing include Microsoft Fiddler for network traffic measurements and Visual Studio for Esri Web applications. Windows performance monitor is used for collecting server performance metrics. The Enterprise GIS Resource Center provides information on performance measurement tools used for Esri benchmark testing. <br style="clear: both" />

ArcGIS Server Terminology and Tuning
Figure 3-28 defines the instances, processes and threads within the ArcGIS Server configuration. Instances, processes, and threads identify what software will be deployed to leverage the available hardware processing resources.



An ArcGIS Server install instance refers to a complete ArcGIS Server software installation. A single site can have multiple ArcGIS Server install instances managed by different server object managers.

An ArcGIS Server service instance refers to the individual service threads deployed to support a specific ArcGIS Server service configuration. The terms service instance and thread are used interchangeably within the software. A service instance is a SOC process thread deployed by the SOM and available to execute a service request. In general terms, instances or threads represent a single service request and how the request will be serviced through the software stack.

The term “thread” is also used at the hardware level to represent an on-chip location to park a service instance close to the core for processing (a multi-threaded core can reduce the time required for context switching when multiple concurrent execution threads are sharing a single core processor; the objective is to better utilize available core processing cycles).

The term “Process” refers to the package of program executables contained within each deployed SOC (each deployed SOC contains one copy of the ArcObjects program executables). A single multi-threaded SOC process can host multiple service instances.

<br style="clear: both" />

Selecting high isolation or low isolation
Figure 3-29 shows the available ArcGIS Server SOC configurations. There are two types of SOC executable configurations, high isolation and low isolation (terms used in the Esri ArcGIS Server software documentation). A high-isolation SOC is a single-threaded executable restricted to one service instance. A low-isolation SOC is a multithreaded process supporting up to 256 concurrent service instances (the ArcGIS Server version 10 SOC can support from 2 to 256 threads). Each SOC thread (service instance) is actually a pointer within the executable tracking execution of the assigned service request (all requests share the same copy of the executables).

For example, a SOM deploying 12 service instances using high isolation would be launching 12 separate SOC executables each providing one instance thread. A SOM deploying the same 12 service instances using low isolation (4 instances per SOC process) would be launching 3 separate SOC processes, each with four (4) service instance threads. The low-isolation SOC configuration requires less host machine memory, but if one service instance (thread) fails, the SOC process will fail along with its remaining instances. When a high-isolation service instance fails, the SOC executable failure is isolated to loss of a single service instance. <br style="clear: both" />
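The isolation arithmetic in this example can be expressed directly (a simple sketch; instance counts follow the example above):

```python
import math

# Sketch of the isolation arithmetic: how many SOC processes the SOM
# launches for a given number of service instances.
def soc_processes(instances, instances_per_soc):
    """High isolation packs 1 instance per SOC; low isolation packs several."""
    return math.ceil(instances / instances_per_soc)

high_isolation = soc_processes(12, 1)  # 12 separate SOC executables
low_isolation = soc_processes(12, 4)   # 3 SOC processes, 4 threads each
```

The trade-off is memory versus fault isolation: fewer processes use less memory, but a thread failure in a low-isolation SOC takes down every instance in that process.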

Selecting a pooled or nonpooled service model
Figure 3-30 identifies SOC process pooling settings. The Map Service Properties pooling tab is used to define whether the SOC process is pooled or non-pooled.



The pooled service model is the best selection for most service configurations. The current-state information (extent, layer visibility, etc.) is maintained by the Web application or with the client browser. The deployed service instances are shared with inbound users, released after each service transaction for another client assignment. The pooled service model scales much better than the non-pooled model because of this shared object pool.

The way a non-pooled service executes is similar to an ArcGIS Desktop session on a Windows Terminal Server. Both use the same ArcObjects executable functions, and in both cases only one user can take advantage of the service instance. ArcGIS Desktop on Windows Terminal Server would perform best for most implementations, while the ArcGIS Server deployment (40-50 concurrent user sessions on a 4-core server platform) would be a less expensive implementation due to the difference in software licensing costs.

The non-pooled service model should only be used when an application’s function requires it. A non-pooled SOC is assigned to a single user session and holds its reference to the client application for the duration of the session. In this case, the current-state information (extent, layer visibility, etc.) can be maintained by the SOC. The non-pooled service model will not be supported beyond the ArcGIS 10 release (editing functions will be supported by pooled feature services in ArcGIS 10.1).

<br style="clear: both" />

Configuring SOM service instances
Figure 3-30 also shows the user interface for configuring the SOM service instances. The service instance parameters are identified on the pooling tab within ArcGIS Server Map Service Properties. Use the pooled service whenever possible. For optimum user performance and system capacity, a single pooled service can handle many concurrent clients, while access to a single non-pooled service is limited to a single user session. The minimum number of instances would normally be “1” to avoid startup delays if requested by a single user. If the service is seldom used, this setting could be “0.” The maximum number of instances establishes the maximum system capacity the SOM can expose to this service configuration. It’s worth repeating: configuring service instances beyond the maximum platform capacity will not increase system throughput, but it may very well reduce user display performance.

Popular map service. For a popular service, the maximum instances number should be large enough to take advantage of the full site capacity. Full site capacity could support 3-5 service instances per core. Two 4-core container machines could handle up to 32 concurrent service instances—the maximum recommended instance setting for a popular map service.

The blue line in Figure 3-32 shows the maximum host platform utilization and system throughput in displays per minute (DPM) for a series of ArcGIS Server service configuration instance settings responding to random Web service requests. The bars show host platform service time (colored tier) and service queue times (processing queue times result from random arrival of service requests). The host platform has 4 cores; the four core processors are shared resources used to execute the deployed service instances.



The first bar represents a service configuration with two (2) service instances. Peak service load is limited to 36 percent host platform utilization with peak throughput of 224 DPM and display response time just over 0.5 seconds, well below the maximum host platform throughput capacity. The seventh bar represents a service configuration with fourteen (14) service instances. The host platform is at over 90 percent utilization with a display response time of around 1.5 seconds. Average display response time continues to increase as additional service instances are deployed, with minimal increase in throughput. Peak throughput is normally reached at about 3 to 5 service instances per host platform core. Increasing the number of service instances beyond that will only increase the average display response time with minimal throughput gain.

There are some challenges if you limit host capacity settings to 16 instances. If several services are competing for peak service loads at the same time, the SOM may need to stop and start SOC processes within host capacity setting limits in response to the changing concurrent peak service requests. Extra SOC startup processing overhead will increase host platform processing loads and reduce peak throughput levels – this may not provide optimum throughput response. Ideally it would be good to avoid SOC startups while trying to service peak throughput loads.

In this example, the host capacity could be set at a level based on a reasonable peak display response time rather than peak throughput levels. Display response time with twenty (20) service instances is around 2 seconds. We can expect display response time to increase linearly above this point, so 40 instances would render 4 second display response times with limited throughput gain. If 4 second response time is acceptable, we could set host capacity at 40 instances and provide the SOM more flexibility to address changing service request levels during peak throughput loads.
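The linear extrapolation used here can be written as a one-line estimate (the reference point is taken from the example above; actual response curves should come from your own load tests):

```python
# Sketch of the linear extrapolation: past saturation, average display
# response time grows in proportion to the number of deployed instances.
def est_response_seconds(instances, ref_instances=20, ref_seconds=2.0):
    """Extrapolate response time linearly from a measured reference point."""
    return ref_seconds * instances / ref_instances

t40 = est_response_seconds(40)  # doubling to 40 instances -> about 4 seconds
```

This is the basis for the 4-second figure in the text: 40 instances at the same throughput level roughly doubles the 2-second response time measured at 20 instances.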

The best solution is to provide enough hardware to avoid these high utilization load profiles. There is no simple configuration prescription for optimizing host platform throughput and client response times as the server approaches maximum utilization levels.

Heavy batch process. For handling a heavy batch service, the maximum instances should be a small number to protect site resources. A single, heavy batch service can consume a server core for the length of the service process time. Four concurrent batch requests on a 4-core server can consume all available host processing resources.

The blue line in Figure 3-32 shows the maximum host platform utilization and system throughput in displays per minute (DPM) for a series of ArcGIS Server batch process service configuration instance settings. The bars show host platform service time (colored tier) and service wait times (wait times are due to shared use of the available core processors). The host platform has 4 cores; the four core processors are shared resources used to execute the deployed service instances.



The first bar represents a batch process configuration with one (1) service instance. Peak service load is limited to 22 percent host platform utilization with peak throughput of 137 DPM, well below the maximum host platform throughput capacity (display service time is normally not important for batch process loads – more attention is given to how long the total batch job will run). The fifth bar represents a service configuration with five (5) service instances. The host platform reaches 100 percent utilization with minimal increase in batch run time. Display response time (including total batch service run time) will increase linearly once the server is operating at 100 percent utilization. A service configuration with ten (10) service instances would take twice as long to complete each batch job. Peak throughput is normally reached at N+1 service instances (host platform cores + 1). Increasing the number of service instances beyond that will only increase batch processing times – it is better to queue up processes and complete jobs sequentially than to try to run them all at the same time.
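The batch scaling rule above reduces to simple arithmetic once the host is saturated (a sketch; the 4-core host matches the example):

```python
# Sketch of batch scaling on a saturated host: once deployed instances
# exceed available cores, each job's run time stretches proportionally.
CORES = 4

def relative_runtime(instances, cores=CORES):
    """Relative per-job run time compared with one job per core."""
    return max(1.0, instances / cores)

optimum_instances = CORES + 1  # the N+1 rule of thumb from the text
slowdown = relative_runtime(10) / relative_runtime(5)  # 10 vs 5 instances -> 2x
```

This is why ten instances take twice as long per job as five: the same four cores are time-sliced across twice the concurrent work.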

<br style="clear: both" />

Configuring host platform capacity
Implementing the right software configuration can improve user display performance and increase system throughput capacity. The right number of service instances to assign (to enable maximum throughput) will vary slightly, based on the type of service (reference the heavy batch process and popular map service examples above). If all the service execution is supported on the host container platform, a single instance per core may be an optimum capacity setting (heavy batch process). For most map services, however, the execution is distributed across the Web server, container machine, and database server, which shares the processing load across multiple platforms. Additional delays can occur due to random instruction arrival times (queuing theory, in Chapter 9), which can account for over 50 percent of the total processing time when approaching peak throughput loads. The test results in Figure 3-35 show the optimum capacity setting at 3-5 service instances per core. In both the tool and testing, the performance fundamentals presented in Chapter 9 have been applied to find the right instance capacity. Esri capacity recommendations are based on the results of performance tests and our basic understanding of performance fundamentals, backed up by customer operational experience.

Figure 3-33 shows the user interface for establishing the host capacity settings. The host capacity setting restricts the number of maximum SOC instances each SOM can deploy on the assigned host container machines. The recommended capacity setting—or the maximum number of instances the parent SOM should deploy on each of the host container machines—is 3-5 service instances per core. A standard commodity Windows server has a total of 4 processor cores, so the server will approach peak throughput at around 16 concurrent instances. A good rule of thumb is to set host capacity at 2 to 3 times the number of instances required to reach full capacity (optimum settings will depend on your specific service distribution).
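The rule of thumb above can be captured in a small helper (a sketch; choose `instances_per_core` and `headroom` to match your own service mix):

```python
# Sketch of the host capacity rule of thumb: peak throughput at about
# 3-5 instances per core, with the capacity setting 2-3x above that number.
def host_capacity(cores, instances_per_core=4, headroom=2):
    """Return (instances near peak throughput, suggested capacity setting)."""
    peak = cores * instances_per_core
    return peak, peak * headroom

peak, capacity = host_capacity(4)  # 4-core server: peak near 16, capacity 32
```

The headroom factor gives the SOM flexibility to shift instances between competing services without constantly stopping and starting SOC processes.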



A standard, highly available, 3-tier configuration for ArcGIS Server could have 2 Web servers (each with a SOM), 3 host container machines, and a database server. If each of the host container machines were a standard 4-core Windows server, the optimum capacity setting would be 32 service instances on each host machine. In a highly available configuration, the SOMs are not cluster-aware, so separate configurations would be allocated for the shared environment, with a separate capacity setting needed for each parent SOM.

It is very easy to inadvertently allow too many SOC instances. With the same ArcGIS Server configuration identified above, full capacity of the host machine tier (3 servers, each with 4 cores) would approach maximum throughput with 48 active service instances (16 on each host machine). If all the host capacity settings were 32 (not necessarily the right answer with both Web server SOMs operational), the fully operational configuration would allow a total of 192 concurrent service instances (96 per SOM; each SOM would deploy 32 on each host machine for a total of 192).
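The arithmetic above can be checked with a few lines (values taken from the example configuration):

```python
# Worked version of the over-allocation example: 2 SOMs, 3 host machines,
# and a host capacity setting of 32 instances per machine per SOM.
soms = 2
hosts = 3
cores_per_host = 4
capacity_setting = 32

peak_instances = hosts * cores_per_host * 4          # ~4 per core = 48 for the tier
allowed_instances = soms * hosts * capacity_setting  # 192 concurrent instances
```

The gap between 48 and 192 is the over-allocation risk: the settings permit four times more concurrent instances than the tier can service at peak throughput.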

There are two potential performance problems when deploying too many service instances per host machine. When server loads approach peak throughput, an optimum capacity configuration would minimize response time by completing work in process (first in, first out) before assigning more requests (concurrent arrivals would not be assigned for processing until the current work is completed). When you deploy too many service instances (more than the number of core processors can service at the same time), the available CPUs must share their time to process all the assigned service requests simultaneously, and they do this in a very fair way. All assigned service requests complete at about the same time; each must wait until all processing is complete to be done. Too high a capacity setting results in increased user response times during peak system loads due to a high number of deployed service instances executed by a limited number of processor cores. Service response time increases linearly once peak capacity loads are reached, with minimal increase in server throughput.

Service configuration and optimum capacity
Figure 3-34 includes a handful of terms and configuration strategies – there is no simple recipe for getting it right. It all depends on your user environment: how popular your services will be and how people will access your site. The overall goal is to configure the server to handle the maximum number of requests with the minimum amount of processing overhead.



ArcGIS Server is made up of two types of software components, Server Object Manager (SOM) and Server Object Containers (SOC). The SOM manages the SOCs based on instructions provided with each service configuration. Services are what you publish on ArcGIS Server. Web applications consume these services to produce the client display. The SOC process threads are executed by the hardware core processors.

Each published service will have a service configuration file. The service configuration file will contain the parameters you identify when publishing each service configuration. How you configure your services will determine how they will be deployed by the SOM. Service configuration files are created using the ArcGIS Manager or ArcCatalog administrator tools.

There are some key things to keep in mind when configuring the server. The SOM will deploy SOC instances within the bounds established by the service configuration MIN/MAX instance settings and the assigned host capacity settings. The minimum service instances for all services will be deployed during SOM startup – this establishes the minimum number of instances that will be distributed equally across the assigned host platforms. During peak concurrent loads, the SOM can increase the number of SOC instances up to the maximum service levels required to support concurrent inbound service requests. The maximum number of total instances deployed by the SOM cannot exceed the identified host platform capacity. For multiple SOMs sharing a common host platform, each SOM will be able to deploy up to the maximum assigned host capacity.

Keep in mind, there is extra platform processing overhead required every time the SOM has to start a new SOC process. Ideally, you would like to deploy just the right number of service instances so there is one available for immediate SOM assignment for each client request. During peak server loads, you want to have just the right maximum number of service instances identified to fully utilize available host platform compute resources while staying well within available platform memory limitations. During maximum peak loads, you want the host capacity setting to limit concurrent processing loads to allow optimum service throughput. For high performance services (processing time less than 1 second), you may want to configure host capacity at two or three times the number of instances required for peak system throughput.

During installation, a server object container agent is installed on each host machine. This provides the ArcObjects code needed to support SOC deployment. During startup, the SOM deploys the minimum number of service instances for each server configuration, distributing the instances evenly across the assigned host container machines. Service instances are deployed in SOC processes.

During operations, if concurrent requests for a specific service exceed the number of deployed service instances, the SOM will increase the number of deployed service instances to handle peak request rates up to the maximum value allowed for in the service configuration. If the inbound concurrent requests for that service exceed the maximum deployed instances you’ve set in the service configuration, requests will wait in the service queue until an existing service instance is available for service assignment. If necessary, service instances for less popular service configurations will be reduced (closed) to make room for the popular service. Non-active services can be reduced down to the minimum instances specified in their service configuration file.

Deployment algorithms within the SOM provide even distribution of service instances across the assigned host platforms. The deployment algorithms, along with the service queue, work to balance the ArcGIS Server processing load across the host container machines. The SOM should be used as the final load balancing solution for the host container machine tier.

Understanding what each of these performance parameters do and how they should be configured to satisfy your specific service needs is important for optimum utilization of your deployed services. Getting this right will take some thinking and some careful planning. Once you deploy your services, you will need to monitor to see if your settings are working. Modifications can be applied to optimize your specific service loads. Remember, the goal is to configure the system to minimize processing overhead (excessive SOC process startup during peak service loads is overhead you would like to avoid). Also, a sufficient number of service instances must be deployed to consume available host platform compute resources and enable the host platform to service peak throughput levels.

Selecting the right physical memory
The platform configuration must include sufficient physical memory to accommodate all active process executions. Systems that do not have sufficient memory will experience performance degradation during peak load conditions (processing overhead increases as the operating system swaps executables between disk and memory). If memory is not sufficient to support concurrent server processing requirements, the system will start to experience random process failures (process executables dropped from memory during execution). Figure 3-35 shows memory performance considerations for an ArcGIS Server host machine deployment.

A sufficient number of instances must be deployed to take advantage of processor core capacity. If the ArcSOC process memory requirements exceed available physical memory, performance will start to degrade and at some point services will start to fail. Setting host capacity can limit the number of SOC instances deployed, making sure memory is sufficient to support peak deployed instance memory requirements.
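A simple memory check along these lines can help pick a safe host capacity setting (the SOC footprint below is an assumed illustrative value; measure your own deployed services for real numbers):

```python
# Sketch of a memory-based instance limit (the 0.25 GB per-SOC footprint
# is an assumed illustrative value, not a published figure).
def max_instances_by_memory(physical_gb, reserved_gb, soc_gb=0.25):
    """How many SOC instances fit after OS and overhead memory is reserved."""
    return int((physical_gb - reserved_gb) / soc_gb)

limit = max_instances_by_memory(physical_gb=8, reserved_gb=2)  # 24 instances
```

If this memory limit is lower than the capacity setting suggested by the per-core rule, the memory limit should win; running out of physical memory fails harder than running out of CPU.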

<br style="clear: both" />

Building the data cache
ArcGIS Server provides a pre-processed map cache service for use as a data source for Web applications and ArcGIS Desktop or custom desktop clients developed with the ArcGIS Engine. ArcGIS Server can also stream pre-processed map cache images to ArcGIS Explorer or ArcGIS Desktop with the 3D Analyst extension for 3D visualization. In both cases, cached images are stored at the client for high performance display. ArcGIS 10 supports on-the-fly translation of 2D map cache to 3D images. Figure 3-36 provides an overview of the cached map service image structure. ArcGIS Server includes automated processing functions to build and maintain (pre-process) optimized map cache pyramids.

The cached map service would consist of a pyramid of pre-processed data imagery or vector data, starting at a single map resolution at the highest layer and breaking each image into four images at twice the map scale for each additional layer included in the pyramid.

Client access to the cached data would deliver tiles that correspond to the requested map scale. Tiles would be blended together as a single reference layer by the client application. Total pre-processing time would depend on the total number of tiles in the map cache and the average map service time required to render each map tile image. Figure 3-37 can be used to get a rough estimate of the expected map cache generation time.

The chart above shows the estimated processing hours, starting with one tile at the top layer and building the required number of layers to reach the largest map scale. Three map generation times are plotted: 0.5 seconds for a simple map display, 1 second for a medium map display, and 5 seconds for a heavy map display. The chart shows about 100 hours to generate 9 layers with an average map generation time of 5 seconds. It would take over 2000 hours to generate just two more layers (11 layers at 5 sec per map tile). This chart is generated simply by multiplying the time to generate one map tile by the total number of tiles required to complete the map cache pyramid.
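The chart's arithmetic can be reproduced with a short sketch (level counts and per-tile times are illustrative; the chart's exact values may differ slightly):

```python
# Sketch of the map cache estimate: one tile at the top level, four times
# as many tiles at each level below, times the per-tile rendering time.
def cache_hours(levels, seconds_per_tile):
    tiles = sum(4 ** level for level in range(levels))  # 1 + 4 + 16 + ...
    return tiles * seconds_per_tile / 3600.0

nine_levels = cache_hours(9, 5)      # on the order of 100 hours
eleven_levels = cache_hours(11, 5)   # on the order of 2000 hours
```

The quadrupling of tiles per level is what makes the last few levels dominate the total: each additional level costs roughly four times the entire pyramid above it.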

The recommended procedure for estimating map cache generation times is to select a sample area dataset that represents a small percentage of the total caching area. Build a map cache of the sample area and test the output symbology, labeling, and performance with your primary map client. Use the cache time for the small sample area to estimate processing time for the remaining map cache.

The ArcGIS 9.3.1 release includes partial data cache and cache on demand options for building and maintaining map cache, making this technology adaptable to a broad range of operational scenarios.

Partial Data Cache. A partial data cache can be defined for high priority areas within your display environment. The partial data cache can be specified by both area and level of resolution, providing optimum flexibility for pre-processing custom areas and levels of map cache.

Cache on Demand. The initial map query can be generated from a dynamic data source, and the resulting tiles can be added to the map cache. The next request for these tiles would come from the map cache, significantly improving display performance for popular map display environments.

Figure 3-38 shows a view of the ArcGIS Server Map Service Properties Caching tab. The map cache process is compute intensive, automatically generating tile after tile based on the defined map cache properties. Map caching can be performed by several parallel service instances, and ArcGIS Server coordinates activities of these services to build and store the prescribed map cache.

When using a local image file data source, each map cache instance will consume a platform core. The optimum service configuration would specify 5 instances to make sure a 4-core server is operating at 100 percent capacity. The Windows performance task monitor can be used to verify the machine is operating at 100 percent utilization. If your map cache includes a DBMS data source, you may be able to add one or two additional service instances to reach 100 percent utilization. It is important to configure sufficient instances to take full advantage of your hardware platform processing resources.

It is a best practice to execute longer cache jobs in sections. It is good to plan cache areas that can be completed within an 8 to 10 hour period, providing an opportunity to complete the jobs in a stable platform environment. This also provides an opportunity to reboot the system between job sections to maintain a reliable platform environment.

Figure 3-39 provides an example of taking advantage of the hardware, as described above. In this example, the cache job required 500 hours of processing. A single instance would take 500 hours to complete. Five (5) instances running on a 4-core machine would complete the same job in 125 hours. Ten (10) instances, using two 4-core machines each with 5 instances, completed the same job in 65 hours.
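The example's arithmetic can be sketched as follows (observed times run somewhat longer than the ideal division, as the two-machine case above shows):

```python
# Worked version of the 500 hour cache job example. Ideal time divides the
# total by the number of working cores; the observed 65 hours (vs the ideal
# 62.5) reflects coordination overhead across machines.
def parallel_cache_hours(total_hours, machines, cores_per_machine, instances):
    workers = machines * min(cores_per_machine, instances)  # cores do the work
    return total_hours / workers

one_machine = parallel_cache_hours(500, 1, 4, 5)   # about 125 hours
two_machines = parallel_cache_hours(500, 2, 4, 5)  # ideal 62.5, observed 65
```

Note the fifth instance per machine adds no ideal-time benefit (4 cores bound the throughput); it exists to keep every core busy while other instances wait on I/O.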

A variety of caching scenarios are being evaluated to expand the feasibility of using pre-processed data cache to improve performance and scalability of future GIS operations. Experience shows pre-processing map data can make a difference for our customers. ArcGIS Server is leading the way toward more productive and higher quality Enterprise level GIS operations.

<br style="clear: both" />

Selecting the right technology: A case study
Selecting the right software technology can make a big difference in performance, scalability, and cost of the production system. The following case study shares an experience from a real customer implementation that clearly demonstrates the value of selecting the right software technology. Figure 3-40 provides an overview of the Greek Citizen Declaration use case. The use case has been modified to demonstrate current technology options.

Our customer had a requirement to design a Web application solution that would be used to collect national property location and census information during a three month national citizen declaration period. Citizens would report to regional government centers and use a local desktop computer to locate their home residence on a map display generated from a national imagery and geospatial features repository. Citizens would place a point on the map identifying their residence and then fill out a reference table identifying their census information. The citizen input would be consolidated at a centralized national data center and shared with all regional government centers throughout the declaration process.



Figure 3-41 provides an overview of the user locations and central data center architecture. The initial system design was developed using an earlier ArcGIS Server Web application development framework (ADF) map editor, hosting a centralized ArcGIS Server dynamic Web application with browser clients located at 60 regional national sites. Following contract award, the customer reviewed available technology options to finalize the system design.

<br style="clear: both" />

Figure 3-42 shows four possible software technology options that were considered during the design review. Technology had progressed since the initial proposal, and there was a desire to validate the final technology selection before deployment. The ArcGIS Server dynamic Web ADF application was the solution provided in the initial design proposal two years earlier. The current ArcGIS Server technology included improvements in Web application performance and user experience that would be evaluated in the design review.

ArcGIS Server provides a data cache option where reference map layers could be pre-processed and stored in a map cache pyramid file data source. Pre-processing the reference layers would significantly reduce server processing loads during production operations. A single point declaration layer contained all features that would be edited and exchanged during the citizen declaration period, all remaining reference layers could be cached. Changes would be displayed at all remote site locations with each client display refresh.

There were three different Web application options that could leverage a reference data cache.

1) ArcGIS Server Web ADF application with a centralized map cache service.

2) ArcGIS Server REST service, leveraging a Rich Internet Application (RIA) Flash or Silverlight Map Editor application with a centralized reference map cache service.

3) ArcGIS Mobile application, leveraging a Mobile client application with a local reference map cache data source. Point changes to the declaration layer would be exchanged using a centralized ArcGIS Server mobile synchronization service.

The Esri Capacity Planning Calculator was used to evaluate the architecture for the four different workflow technology patterns identified above. Peak system loads were estimated at 2400 concurrent users with standard Web productivity of 6 displays per minute. System design results are provided in the following paragraphs.

<br style="clear: both" />

Dynamic Web ADF Application
An ArcGIS Server 10 ADF MXD light 100% Dynamic standard Esri workflow was used to generate hardware requirements and traffic loads required to represent the dynamic web application solution. JPEG output was used to minimize traffic (base layer include digital ortho photography). Figure 3-43 provides the results of the capacity planning analysis.

Peak central data center traffic loads were estimated to reach 480 Mbps, well beyond the bandwidth available with the current data center Wide Area Network (WAN) service connection. Larger regional office sites (50 concurrent users) would require 24 Mbps bandwidth and smaller regional office sites (10 concurrent users) 6 Mbps WAN connections to support the projected peak citizen declaration traffic loads. Major infrastructure bandwidth increases would be needed to handle projected traffic flow requirements.

The central hardware solution was supported by Intel Xeon X5677 8 core (2 chip) 3467 MHz Windows 64-bit Servers each with 24 GB memory (data server required 60 GB memory). Total of 4 servers were required for the Web Application Server tier, 7 servers for the Container Machine tier, and one SDE geodatabase server. <br style="clear: both" />

Cached Web ADF Application
A custom ArcGIS Server ADF MXD light 10% Dynamic mashup with Cached Reference layer basemap service was used to support the cached web ADF workflow analysis. ADF application would use a PNG8 image output to minimize client traffic loads. Software service time and network traffic performance targets were generated by the CPT Calculator for the custom workflow. Figure 3-44 shows the results of the system design analysis.

Peak central data center traffic load estimates dropped to 252 Mbps with larger regional office sites requiring 12 Mbps WAN connections. Smaller offices would be configured with 3 Mbps WAN connections. The Web tier remained at 4 servers to support the centrally hosted ADF Map Viewer Web applications. The ArcGIS Server Container Machine tier reduced to 1 server and the DBMS load was reduced to 6.2 percent utilization. The Cached data source provided a significant server cost reduction from the initial proposal.

A sample data set was used to evaluate basemap caching timelines, and the complete country reference map cache could be generated within one week of processing time. Pre-caching the base reference layers would be well worth the effort, since there would be no need to update or change the reference cache during the peak citizen declaration period (data would be static). <br style="clear: both" />

Cached Rich Internet Client Application (REST API)
A custom ArcGIS Server 10 REST MXD light 10% Dynamic mashup with Cached Reference layer basemap service was used to support the cached web REST workflow analysis. REST service rendered a PNG8 image output to minimize client traffic loads. Software service time and network traffic performance targets were generated by the CPT Calculator for the custom workflow. Figure 3-45 shows the results of the system design analysis.

Peak central data center traffic load estimates were the same as the previous ADF cached workflow. The Web load was significantly reduced since the Map Viewer application was supported by the RIA browser clients. Web and server object containers were supported on two (2) servers in a two tier architecture. The database load remained the same. The lighter REST server architecture reduced hosting environment by 4 servers. <br style="clear: both" />

ArcGIS Mobile Application
The fourth design option was to use the ArcGIS Mobile application with a local reference cache data source. A demo of the ArcGIS Mobile client was provided on a Windows desktop platform to demonstrate feasibility of supporting the required editing functions with this client technology. The ArcGIS Mobile client technology operates very well on a standard Windows display environment and performed all the functions needed to support the citizen declaration requirements.

The ArcGIS Mobile standard Esri workflow synchronization service was used to support the design analysis. This workflow was generated by the CPT Calculator using a SOAP MXD Light service with a feature output (display features streamed to the client application). A 95 percent data cache setting was used to represent traffic for point feature exchanges (only point changes would be exchanged between the client and server displays). Cached reference layers would be distributed to each regional site in advance, and access would be provided by a file share to the ArcGIS Mobile clients running on the local workstations. The ArcGIS Mobile client would synchronize point changes to the dynamic citizen declaration layer over the government WAN. The peak concurrent SOAP service load would be reduced to 600 concurrent users, representing 25 percent of the total client displays (point changes are made only during edit transactions).

User display performance would be very fast, supported by the local reference map cache and the point layer in the ArcGIS Mobile application cache. The point layer cache is updated from the central data center geodatabase with each display refresh, and point layer edits are synchronized with the central server as a background data exchange. Figure 3-46 shows the results of our capacity planning analysis.

Peak central data center traffic loads were reduced to 15 Mbps, well within the T3 (45 Mbps) bandwidth available with the current data center Wide Area Network (WAN) service connection. Large regional office sites peak traffic was 1.3 Mbps, which would function well within 3 Mbps WAN connections. Small regional offices site peak traffic was 0.25 Mbps supported well by available T-1 connections. The existing infrastructure would be able to support peak WAN traffic loads with guaranteed service to each of the remote desktop locations (ArcGIS Mobile client would continue to function as a standalone system if WAN communication were lost, and edits would be sent to the central server when communication was restored). The central hardware requirements were reduced to 2 composite Web/container machine servers and the data server load was minimal (less than 1 percent utilization)

It was very clear that the cached client application provided significant cost and performance benefits over the centralized Web application dynamic solution included in the initial proposal. Pre-processing of map reference layers as an optimized map cache pyramid can significantly improve display performance. Use of an intelligent desktop client that can access reference layers from a local map cache can minimize network traffic and improve display performance even more. Selecting the right technology can make a big difference in total system cost and user productivity. Figure 3-47 highlights the advantage of selecting the right technical solution. <br style="clear: both" />

Software performance summary
Experience suggests we can do a better job selecting and building better software solutions. Understanding software performance can reduce implementation risk and save customer time and money. Projects can be delivered within project cost, time, and performance budgets. <br style="clear: both" />

CPT Video: Software Performance
The next section will take a closer look GIS Data Administration.

Previous Editions
Software Performance 29th Edition Software Performance 28th Edition Software Performance 27th Edition

Page Footer Specific license terms for this content System Design Strategies 26th edition - An Esri ® Technical Reference Document • 2009 (final PDF release)