Software Performance - 27th Edition (Spring 2010)
This section shares lessons learned about selecting and building effective GIS design solutions that satisfy operational performance and scalability needs. Software technology allows us to model our work processes, and provide these models to computers to optimize user workflow performance. The complexity of these models, the functions selected to generate our display, and how application functions are orchestrated to analyze and present information processing needs have a significant impact on computer system performance and scalability.
For many years we focused our system architecture design consulting efforts toward identifying and establishing a hardware infrastructure that would support a standard implementation of ESRI software technology. We developed platform sizing models based on consulting experience and customer implementation success. We updated our sizing models based on relative performance benchmark testing which focused on quantifying changes in critical processing loads introduced with each new software release.
There were examples of GIS deployments that did not take advantage of system architecture design best practices. Systems were deployed with unresolved performance issues, and scalability was not well understood. In some cases performance issues were not identified before the production system was under critical peak loads, and the platform solution or network infrastructure failed to meet performance needs. Resolving performance issues while in production can be expensive, both in terms of services and lost productivity. Building a system design that addresses capacity planning needs throughout development and deployment can improve user productivity and reduce implementation risk.
We noticed over the years that our test and tuning efforts were often finding and fixing the same performance issues over and over again, and with these lessons learned we were able to document best practices for building a high performance scalable system. The ArcGIS Server technology today includes access to a broad range of functionality, from simple cached map services that support very high performance and scalability to heavy geoprocessing services that may take hours consuming all available server resources for a single request. Developers and administrators need to understand what functions are appropriate, and how to publish a software design solution that meets user performance needs during critical system loads.
This section shares our lessons learned, and identifies functional areas within the software that can make a difference in meeting customer performance needs. Our capacity planning tools are designed to allow consultants and developers to set appropriate workflow performance targets based on the selected software technology, and identify performance measures that can be used during development to ensure performance goals are met during initial system implementation. These same tools can be used to maintain and support deployed operational system performance needs, identifying every day platform performance thresholds that can be used to validate a proper system is deployed to meet peak performance needs.
- 1 Map Display Performance
- 1.1 Quality vs. Speed
- 1.2 Optimizing lines and polygons
- 1.3 Measuring Display Complexity
- 1.4 Selecting Workflow Display Complexity
- 1.5 ArcGIS Server Optimized Map Service Document (MSD)
- 1.6 Selecting the right image output format
- 1.7 GIS Dynamic Map Display Processing
- 1.8 ArcGIS Server map cache: the Performance Edge
- 1.9 Capacity Planning Calculator Software Workflow Nomenclature
- 2 Data Source Performance Parameters
- 3 Performance Testing Tools
- 4 Selecting the right physical memory
- 5 Deploying high performance Web applications
- 6 Building the data cache
- 7 Selecting the right technology: A case study
- 8 Software performance summary
Map Display Performance
GIS provides users with a geographic view of their business environment, and for many GIS workflows the map display is used as a primary spatial information resource.
Figure 3-1 shows the processing performance for three different ArcGIS Server dynamic map displays, all deployed on the same platform environment. The performance difference can be traced back to the complexity of the map display document.
The Web and client processing times are about the same for each Web mapping service. The light display is generated in about 0.25 seconds. The Web medium display requires twice the amount of map display processing (less than 0.5 second), and the Web heavy display requires over three times the light display processing (more than 0.7 seconds). The design and complexity of the map display make a significant difference in system performance and scalability. Light maps are rendered three times faster than heavy maps. The first step in publishing a map service is to create a map document (MXD) representing the geographic information product you wish to publish.
The following best practices share lessons learned in designing high performance map documents. The complexity of the authored map document will be a primary factor in determining map service performance and scalability.
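As a rough sketch of why display complexity matters for capacity, per-core throughput is simply the inverse of the display service time. The 0.25/0.50/0.75 second values below are the approximate light/medium/heavy figures discussed above; this is an illustration, not the CPT sizing model itself:

```python
# Illustrative only: translate per-display service time into
# per-core throughput. Service times approximate the light, medium,
# and heavy dynamic map displays described in the text.
service_times = {"light": 0.25, "medium": 0.50, "heavy": 0.75}  # seconds/display

for name, seconds in service_times.items():
    displays_per_minute_per_core = 60.0 / seconds
    print(f"{name}: {displays_per_minute_per_core:.0f} displays/min per core")
```

A light map document supports three times the per-core throughput of a heavy one, which is exactly the scalability difference shown in Figure 3-1.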
Quality vs. Speed
Figure 3-2 shows the classic trade-off between quality and performance.
The high quality map is a shaded relief with transparent layers and dynamic Maplex labeling. These are expensive functions that require extra computer processing for each user display. In contrast, the same display on the right uses low resolution relief, solid colors, and simple annotation providing similar information with good display performance.
Optimizing lines and polygons
We discussed earlier the importance of keeping the display functions simple.
Using ESRI Optimized symbols can improve display drawing performance by up to 50 percent.
Measuring Display Complexity
Display complexity is an estimate of the average amount of computer processing required to produce an average display. For GIS mapping workflows, most of the display rendering time is consumed in creating the dynamic map product. The map rendering time can be observed during the map authoring process. The ArcGIS 9.3.1 release introduced a map publishing analysis tool that measures map display rendering time, and ESRI consultants use the MXDperfstat performance statistics tool to generate a performance profile of an MXD map document. Map display rendering time can be used as a relative measure of map display complexity.

Map rendering time is measured in seconds, and the measured result will be a function of the map display complexity and the performance of the platform rendering the map.
Platform performance is represented as a horizontal line based on the vendor-published per-core SPEC performance baseline. Display complexity (low, medium, high) is shown as a function of platform performance. The intersection of the horizontal platform performance line with the display complexity curve identifies the measured render time on the x-axis.
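A minimal sketch of this normalization, assuming render time scales inversely with the per-core SPEC baseline (the CPT's working assumption); the baseline and timing numbers below are hypothetical:

```python
def normalized_render_time(measured_seconds, measured_spec, target_spec):
    """Scale a render time measured on one platform to an estimate for
    another platform, assuming render time is inversely proportional
    to the per-core SPEC performance baseline."""
    return measured_seconds * measured_spec / target_spec

# Hypothetical: a map renders in 0.8 s on a core with SPEC baseline 14;
# estimate the render time on a faster core with baseline 35.
print(round(normalized_render_time(0.8, 14.0, 35.0), 2))  # → 0.32
```

This is why render time alone is only a relative complexity measure: the same map document produces different render times on different hardware, and the platform baseline is needed to compare measurements.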
Map Publishing Analysis Tool
ArcGIS 9.3.1 introduced some new map optimization tools incorporated in the Map Service Publishing toolbar.
These tools render the selected ArcGIS Desktop MXD and analyze the selected display. With the ArcGIS 9.3.1 optimization tools, results are provided for the current map scale only - so you would need to move the display to each area you would like to evaluate.
It is important to note that the average workflow service times used for capacity planning may not be the same value as a specific workflow map display. Some workflow displays will be heavier, while other displays are lighter. User productivity (displays per minute) is also a factor in defining workflow processing loads. The workflow service times and user productivity together should represent the average user loads applied to the system over time. It is best to translate actual workflow service time loads from platform throughput and CPU utilization measurements collected during real user operations - it is very difficult to accurately simulate or estimate these loads by measuring single map display processing times. This does not preclude setting appropriate workflow performance targets for capacity planning purposes. These performance targets will be directly influenced by the number of map display layers, the features per layer, and the complexity of the functions used in building the display.
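The recommended translation from platform measurements to an average service time can be sketched as follows. The idea is that busy core-seconds per minute, divided by displays rendered per minute, yields the average per-display service time; the measurement values below are hypothetical:

```python
def avg_service_time(cores, utilization, throughput_per_min):
    """Derive average per-display service time from platform
    measurements taken during real user operations:
      busy core-seconds per minute = cores * utilization * 60
      service time = busy core-seconds / displays rendered
    """
    return cores * utilization * 60.0 / throughput_per_min

# Hypothetical: a 4-core server at 40% CPU serving 240 displays/min.
print(round(avg_service_time(4, 0.40, 240), 2))  # → 0.4
```

Measured this way, the service time already averages over heavy and light displays and real user behavior, which is why it is more reliable than timing a single map display.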
Map Document (MXD) Performance Statistics (MXDperfstat)
Figure 3-6 provides a sample output generated by the MXDperfstat performance measurement tool, which is available from http://arcscripts.esri.com. The results below were copied into Microsoft Excel for display purposes.
The MXDperfstat tool identifies display refresh times at multiple scales, shows layer refresh times for each map scale, provides layer performance statistics such as number of features, vectors, labeling, and breaks out display time for several key rendering phases (geography, graphics, cursor, and database). The tool also provides some high level recommendations for performance tuning (actually, once you see the layer processing time and the performance metrics the display problems are quite obvious).
MXDperfstat is an excellent tool for measuring map document display performance, since it lists layer statistics (render time, features, edges, projection, etc) for each layer included within a complete series of map scales. The measured results can be used for evaluating map display performance and tuning your map document for optimum display performance.
Selecting Workflow Display Complexity
Figure 3-7 provides an overview of the CPT Calculator workflow display complexity selection and associated performance metrics.
Workflow ArcGIS Desktop, SOC, SDE, and DBMS service times are adjusted based on the display complexity setting. Web application and client processing times are not impacted by the display rendering time.
ArcGIS Server Optimized Map Service Document (MSD)
ArcGIS 9.3.1 introduced a new optimized map service description (MSD).
Figure 3-8 shows the CPT Calculator map document selection.
There are some functional limitations when publishing with the MSD map document. One of the nice features of the map publishing analysis report is that identified errors include hyperlinks to online ESRI Help providing instructions for resolving errors and optimizing performance.
Selecting the right image output format
Web mapping services produce map images that are sent to the client browser for display. Each user request will generate a new map image that must be delivered to the client browser. The selected image type can have a direct impact on the volume of network traffic. Lighter images require less display traffic and heavier images require more display traffic. The required amount of traffic per display can have a significant impact on user performance over lower bandwidth connections.

Figure 3-9 identifies the amount of data required to support common ArcGIS Server output image types (JPEG, PNG24, PNG8, PDF). The volume of data, identified in megabits per display (Mbpd), required to support the same map varies with each image type.
JPEG image types provide the most consistent compression, with a slight variation between raster and vector images. PNG images do much better with vector data than with raster - PNG supports transparencies and is the default ArcGIS Server format. PDF is a heavier output format used for high quality map plotting.
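As an illustration of how the output format drives display performance over a WAN, transport time per display is simply traffic per display divided by available bandwidth. The per-format Mbpd values below are placeholders for illustration, not the measured Figure 3-9 results:

```python
# Sketch: network transport time for one map display over a shared
# WAN link. Per-format traffic values (Mbpd) are illustrative
# placeholders, not measured benchmark results.
traffic_mbpd = {"JPEG": 1.0, "PNG24": 2.0, "PNG8": 0.5, "PDF": 4.0}
bandwidth_mbps = 1.544  # dedicated T-1 connection

for fmt, mbpd in traffic_mbpd.items():
    print(f"{fmt}: {mbpd / bandwidth_mbps:.2f} s transport time per display")
```

On a T-1 connection, a heavier format can add multiple seconds of transport time to every display refresh, which is why the output format selection matters most for remote users.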
Figure 3-10 provides an overview of the CPT Calculator Density (Vector Only, Raster Image) and Output (Default, JPEG, PNG8, PNG24, PNG32, PDF, Feature) selections.
The resolution of the map display image can be an important consideration for GIS user workflows. A high resolution display is important for users that work with map displays over long periods throughout the day. A display resolution of 1280 x 1024 would be common for GIS power users or data maintenance (GIS editor) workflows. Remote client display performance on a dedicated T-1 connection would be over 2.3 seconds slower than a local user display. A display resolution of 600 x 400 is a common size for Web browser map displays. Remote display performance is within 0.6 seconds of local clients.

Figure 3-12 provides an overview of the CPT Calculator Resolution selections.
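A rough way to reason about resolution impact is to scale traffic per display with pixel count. The 1.0 Mbpd base value for a 600 x 400 display below is an assumption for illustration, not a measured value:

```python
# Sketch: display traffic grows roughly with pixel count, so the same
# map at a higher resolution costs proportionally more over the WAN.
base_pixels = 600 * 400          # common Web browser map display
base_traffic_mbpd = 1.0          # assumed traffic for the base display

power_user_pixels = 1280 * 1024  # common GIS power user resolution
scaled_traffic = base_traffic_mbpd * power_user_pixels / base_pixels
print(round(scaled_traffic, 2))  # → 5.46 Mb per display
```

A 1280 x 1024 display carries roughly five times the pixels of a 600 x 400 display, which is why the remote performance penalty grows so sharply with resolution over low-bandwidth connections.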
GIS Dynamic Map Display Processing
GIS displays are normally created one layer at a time, similar to the procedure geographers would follow to lay out a map display on a table using Mylar sheets.
Figure 3-13 shows the software procedure for building a map display, one layer at a time, joining the features (points, polygons, lines) in each layer sequentially one on top of the other until the final display is complete.

Figure 3-14 shows a software process for building the layers of a map display using a parallel processing procedure.
In theory, the second approach generates the display faster. In practice, the same amount of processing is required for both methods - in fact the parallel approach requires additional procedures for establishing the parallel display request and then bringing those results back together to produce the final map display.
Hardware vendors are providing computers with an increasing number of processor cores per chip, expanding the capacity of server platforms while reducing expectations for performance gains per processor core. Vendors have encouraged software developers to take advantage of this increased server capacity by increasing the number of concurrent processes used in generating each user display. Most heavy processing workloads today require sequential processing, and a single display generation will not take advantage of multiple processor cores. The actual standard map display performance gains from reprogramming software to take advantage of parallel processing may not be worth the extra effort and additional processing loads.

Figure 3-15 compares traditional COTS map display performance with the performance of a parallel query map display, where the display layers are blended together on the client browser.
ArcGIS Server map cache: the Performance Edge
ArcGIS Server technology provides a variety of enhanced solutions to improve production system performance and scalability. The concept of pre-processing map outputs for customer delivery is not new - we used to do this with paper map books before we had computers. The ArcGIS Server 9.3 release introduces a variety of ways to maintain and support pre-processed maps, and to organize the map files in a map cache structure optimized for map publishing.

Figure 3-16 shows the display time for both light and medium complexity dynamic map products in comparison to the display time for a fully cached map.
The optimum Web mapping display combines dynamic map services presented as a transparent image mashup over a map cache base layer. Dynamic map services are important for geographic analysis, editing, and geoprocessing functions which require access to point, polygon, and line features rendered in a dynamic map display. Map cache tiles provide an optimum base map foundation layer, providing high display performance with high quality map visualization. A mashup of dynamic operational layers over a high quality base map reference layer delivers the optimum combination of quality and performance.

Figure 3-17 shows how the Capacity Planning Calculator is designed to encourage cached map services.
The CPT Calculator includes an option to add a map cache data source to the selected dynamic workflow, providing appropriate composite service loads to complete a system design. The "+mapcache" data source selection applies an additional 0.5 Mbpd traffic load for Calculator sizing purposes (this load is only applied for the Calculator platform sizing analysis). The client traffic identified in cell E4 will turn pink to highlight the composite traffic load setting.
The ArcGIS Server map cache service traffic will vary depending on the user workflow activity. Map tiles are downloaded to the client browser Internet cache, and subsequent tile requests are delivered from local client cache (not over the network). A "well behaved" client (one who works in a small geographic area) will soon have all the map tiles located on the local machine cache, and minimum map tile traffic will be required from the central site over the network. On the other hand, a "world traveler" scenario (each map view from a different world location) can generate very high display traffic over the network connection (several map tiles required for each map display). The average 0.5 Mbpd traffic used by the CPT Calculator is based on average worldwide traffic measurements experienced from ArcGIS Online map cache service statistics.
Capacity Planning Calculator Software Workflow Nomenclature
The user workflow name automatically generated by the Capacity Planning Calculator provides the calculator recipe used to create the selected workflow performance targets.
The workflow software selection identifies the ArcGIS software release version (930, 931, etc) followed by the software deployment pattern (Wkstn, WTS Citrix, ADF, SOAP, REST, WMS, etc). The name follows with the map document (MXD or MSD), complexity (Lite, ML, Med, MH, Hvy), %Dynamic (1-%DataCache), Density (V or R), resolution (4x3, 6x4, 8x6, 10x7, 12x10, 16x12), and output (JPEG, PNG8, PNG24, PNG32, PDF, ICA, Feature). Calculator generated workflow service times and traffic are transferred to the CPT Workflow tab along with the workflow nomenclature to include as a CPT Design workflow. The Standard ESRI Workflows provided on the CPT Workflow tab use the Calculator nomenclature (identifying the Calculator recipe used to create the standard workflow). The CPT Calculator workflow identifies assumptions made in creating workflow performance targets for use in the CPT Design.
Data Source Performance Parameters
Figure 3-19 compares display performance between an ArcSDE Geodatabase data source and the available file data sources.
The small Shape File performance is about the same as the ArcSDE Geodatabase. The large Shape File format requires three times the processing required with ArcSDE. The small File Geodatabase loads are about 80 percent of the ArcSDE Geodatabase performance values, while the large File Geodatabase provides performance similar to ArcSDE loads.

Figure 9-26 shows the data extent of the San Diego geodatabase used in performance validation testing.
The performance test series generated common displays for 100 random maps within the neighborhood area identified in the map above. The ArcSDE geodatabase display performance was roughly the same whether using the full San Diego extent or the small neighborhood area as the data source.
The small Shape File testing used vector layers extracted for the neighborhood extent identified above. The large Shape File data set included the complete San Diego area (the Capacity Planning Tool large shape file load would represent about twice the full San Diego data extent).
Display performance with the File Geodatabase is much improved over the Shape File format (test results are not shown). Tests have been completed accessing a File Geodatabase with up to 1 TB of data, and performance was quite good. The small File Geodatabase performance targets would likely support a data source representing the full San Diego extent. The large File Geodatabase performance targets would support data sources up to 1 TB in size.
It is difficult to be precise when evaluating these performance targets. There are many factors in the database design, number of features per layer, and number of layers in the display that can factor into these performance values. Processing loads on the file data server platform are very light (all query processing loads for a file data source are performed by the client application).
Figure 3-21 provides some additional information on the number of display layers, total number of features, and the size of the San Diego data source used during our initial performance validation testing.
Network throughput for the small Shape File data source was half the traffic of the medium Shape File data source.

Figure 3-22 shows the difference in display performance for the two Shape File data sources.
The large Shape File performance targets included in the Capacity Planning Tool represent 3 times the processing of the small Shape file. These performance targets should be adequate to support twice the extent identified in the test results shown above (9 vector layers with up to 2 million features or possibly twice the number of vector layers at the same extent).
Figure 3-23 identifies the data source selections (SDE DBMS, Small FGDB, Large FGDB, Small Shape File, Large Shape File, Image) available for the CPT Calculator.
Performance Testing Tools
A variety of performance monitor and testing tools are available for systems administration. Standard tools used for ESRI benchmark testing include Microsoft Fiddler for network traffic measurements and Visual Studio for ESRI Web applications. Windows performance monitor is used for collecting server performance metrics. The Enterprise GIS Resource Center provides information on performance measurement tools used for ESRI benchmark testing.
Selecting the right physical memory
The platform configuration must include sufficient physical memory to accommodate all active process executions.
A sufficient number of instances must be deployed to take advantage of processor core capacity. If the ArcSOC process memory requirements exceed available physical memory, performance will start to degrade and at some point requests will start to fail. Setting host capacity can limit the number of SOC instances deployed to make sure memory is sufficient to support peak deployed instance memory requirements.
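A simple sanity check of this constraint can be sketched as below. The per-SOC memory footprint and OS reserve are assumptions for illustration; measure your own services with Task Manager or Windows performance monitor rather than relying on these placeholder values:

```python
def max_soc_instances(physical_gb, os_reserve_gb, soc_footprint_mb):
    """Upper bound on ArcSOC instances before the platform starts
    paging: available memory divided by the per-instance footprint.
    All inputs are deployment-specific and must be measured."""
    available_mb = (physical_gb - os_reserve_gb) * 1024
    return int(available_mb // soc_footprint_mb)

# Hypothetical 8 GB server, 2 GB reserved for the OS and other
# processes, 250 MB assumed per high-isolation ArcSOC instance.
print(max_soc_instances(8, 2, 250))  # → 24
```

Comparing this bound against the SOM maximum instance settings is one way to confirm that host capacity limits will keep peak instance memory within physical memory.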
Deploying high performance Web applications
There are several software technology terms and software tuning selections that impact ArcGIS Server performance.
Configuring the service instances
Web services must be configured with the appropriate number of service instances to take full advantage of the licensed hardware.
From a user perspective, service instances and threads mean the same thing. The service instances managed by the SOM are hosted by the ArcSOC processes (SOC) on the container machine tier.
Selecting high isolation or low isolation
Figure 3-27 shows two different service configurations for an ArcSOC process (high isolation or low isolation).
Low isolation SOCs are supported by a multi-threaded ArcSOC process. A multi-threaded process is one set of executables shared by more than one service instance, each service instance supported by separate pointers within the ArcSOC process. The ArcSOC process is designed to support execution of the available service instances in parallel (no processing conflicts). Available SOC threads can support execution of assigned service requests in parallel on separate processor cores located on the same platform. The primary advantage of using a low isolation SOC is to reduce the server platform memory footprint (one set of ArcSOC executables supporting multiple service instances). A single SOC thread failure will kill the complete SOC process, so the high isolation SOC configuration is preferred when sufficient physical memory is available to support the isolation configuration. Standard memory recommendations (2 GB per platform core) should be adequate to support high isolation SOC instances for most ArcGIS Server customer configurations.
Selecting a pooled or non-pooled service model
ArcGIS Server provides options for two different service models (Pooled and Non-pooled).
The pooled service model should be used for all service configurations except those that require a non-pooled session. Non-pooled sessions are required for map editor workflows where MXD context changes are maintained within the ArcSOC process.
Configuring SOM service instances
The SOM service configuration instance settings are very important system capacity parameters.
Map Service Properties must be defined for each published service configuration (map service). The minimum number of instances property identifies the number of service instances deployed by the SOM during startup and maintained on the host platforms during light operations. The maximum number of instances property identifies the maximum instances the SOM will deploy during peak load conditions. The SOM will distribute the assigned instances across the available SOC platforms, and will increase and decrease the number of deployed instances within the identified minimum and maximum instance properties based on demand for this service.
Configuring host instance capacity
Each host container machine will have a limited number of processor cores (4 core, 8 core, etc) that can be used for processing service requests.
Figure 3-31 shows results from a series of tests that demonstrate the optimum map service configuration to optimize system capacity and user display performance.
The results show a trade-off between optimum capacity and user display performance. As the number of available service instances (threads) increased, the peak system throughput and utilization would also increase. As utilization increased above 50 percent of available platform capacity, user display response time would increase, slowing more and more as additional service instances were included in the configuration. The optimum configuration was achieved with 4 service instances (threads) per server core (CPU).

Figure 3-32 shows a view of the Map Service Properties Hosts tab and how the host capacity properties are defined.
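The shape of this response-time curve can be illustrated with a simple single-queue approximation. This is a textbook M/M/1-style sketch, not an ESRI formula, but it shows why response time climbs steeply once utilization passes roughly 50 percent of platform capacity:

```python
# Illustrative queueing sketch: response time grows as
# service_time / (1 - utilization) in a simple M/M/1 model.
service_time = 0.5  # seconds, a medium-complexity display

for utilization in (0.3, 0.5, 0.7, 0.9):
    response = service_time / (1.0 - utilization)
    print(f"{utilization:.0%} busy -> {response:.2f} s response")
```

At 50 percent utilization the response time has only doubled, but at 90 percent it is ten times the service time, which matches the test observation that adding instances beyond the optimum mainly adds queue wait time.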
When configuring highly available Web configurations with multiple SOMs, separate host capacity configurations must be defined for each SOM environment (each SOM will deploy separate service instances on the assigned host machines - the SOMs are not aware of each other and will function as separate environments). System load balancing is managed by each SOM to optimize system performance and scalability - the system should be configured as discussed earlier in Section 4 (Product Architecture).
Building the data cache
The ArcGIS Server 9.3 release introduced automated processing functions to build and maintain an optimized map cache pyramid for serving pre-processed images as a map service.
Figure 3-34 can be used to get a rough estimate of the expected map cache generation time.
The chart above shows the estimated processing hours starting with one tile at the top layer and building the required number of layers to reach a resolution level. Three map generation times are plotted - 0.5 seconds for a simple map display, 1 second for a medium map display, and 5 seconds for a heavy map display. The chart shows about 100 hours to generate 9 layers with an average map generation time of 5 seconds. It would take over 2000 hours to generate just two more layers (11 layers at 5 sec per map tile). This chart is generated simply by multiplying the time to generate one map tile by the total number of tiles required to complete the map cache pyramid.
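The chart's arithmetic can be reproduced directly, assuming the pyramid starts with a single tile at the top level and each level down holds four times the tiles of the level above:

```python
def cache_hours(levels, seconds_per_tile):
    """Total generation time for a tile pyramid: level 0 has one
    tile, and each lower level has 4x the tiles of the level above."""
    tiles = sum(4 ** level for level in range(levels))
    return tiles * seconds_per_tile / 3600.0

# Heavy 5-second map display:
print(round(cache_hours(9, 5)))   # 9 levels: on the order of 100+ hours
print(round(cache_hours(11, 5)))  # 11 levels: roughly 2,000 hours
```

The quadrupling of tiles per level is why each additional cache level roughly quadruples total generation time, and why the deepest levels dominate the schedule.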
The recommended procedure for estimating map cache generation times is to select a sample area that represents a small percentage of the total caching area. Build a map cache of the sample area and test the output symbology, labeling, and performance with your primary map client. Use the cache time for the small sample area to estimate processing time for the remaining map cache.
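The extrapolation step is a simple proportion, assuming tiles render at a roughly constant average rate across the full extent (all numbers below are hypothetical):

```python
def estimate_full_cache_hours(sample_hours, sample_tiles, total_tiles):
    """Extrapolate full cache generation time from a small sample
    area, assuming a roughly constant average time per tile."""
    return sample_hours * total_tiles / sample_tiles

# Hypothetical: a 2,000-tile sample area cached in 1.5 hours,
# with 300,000 tiles required for the full cache extent.
print(round(estimate_full_cache_hours(1.5, 2000, 300000)))  # → 225
```

Because tile render time varies with feature density, a sample area that is representative of the full extent gives a much better estimate than one drawn from an empty or unusually dense region.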
The ArcGIS 9.3 release included additional options for building and maintaining map caches that make this technology adaptable to a broad range of operational scenarios.
Partial Data Cache. A partial data cache can be defined for high priority areas within your display environment. The partial data cache can be specified by both area and level of resolution, providing optimum flexibility for pre-processing custom areas and levels of map cache.
Cache on Demand. The initial map query can be generated from a dynamic data source, and the resulting tiles can be added to the map cache. The next request for these tiles would come from the map cache, significantly improving display performance for popular map display environments.

Figure 3-35 shows a view of the ArcGIS Server Map Service Properties Caching tab.
When using a local image file data source, each map cache instance will consume a platform core. The optimum service configuration would specify 5 instances to make sure the server is operating at 100 percent capacity. The Windows performance task monitor can be used to verify the machine is operating at 100 percent utilization. If your map cache includes a DBMS data source, you may be able to include one or two additional service instances to reach 100 percent utilization. It is important to configure sufficient instances to take full advantage of your hardware platform processing resources.
It is a best practice to execute longer cache jobs in sections. It is good to plan cache areas that can be completed within an 8 or 10 hour period, providing an opportunity to complete the jobs in a stable platform environment. This will also provide an opportunity to periodically reboot the system between each job section to maintain a reliable platform environment.

Figure 3-36 provides an example of taking advantage of the hardware, as described above.
A variety of caching scenarios are being evaluated to expand the feasibility of using pre-processed data cache to improve performance and scalability of future GIS operations. Experience shows pre-processing map data can make a difference for our customers. ArcGIS Server is leading the way toward more productive and higher quality Enterprise level GIS operations.
Selecting the right technology: A case study
Selecting the right software technology can make a big difference in the performance, scalability, and cost of the production system.
Our customer had a requirement to design a Web application solution that would be used to collect national property location and census information during a three month national citizen declaration period. Citizens would report to regional government centers and use a local desktop computer to locate their home residence on a map display generated from a national imagery and geospatial features repository. Citizens would place a point on the map identifying their residence, and then fill out a reference table identifying their census information. The citizen input would be consolidated at a centralized national data center and shared with all regional government centers throughout the declaration process.
The initial system design was developed using an earlier ArcGIS Server Web application development framework (ADF) map editor, hosting a centralized ArcGIS Server dynamic Web application with browser clients located at 60 regional national sites. Following contract award, the customer reviewed available technology options to finalize the system design.

Figure 3-38 shows three possible software technology options that were considered during the design review.
The ArcGIS Server dynamic Web ADF application was the solution provided in the initial design proposal two years earlier. The current ArcGIS Server technology included improvements in Web application performance and user experience that would be evaluated in the design review.
ArcGIS Server provides a data cache option where reference map layers could be pre-processed and stored in a map cache pyramid file data source. Pre-processing the reference layers would significantly reduce server processing loads during production operations. A single point declaration layer contained all features that would be edited and exchanged during the citizen declaration period; all remaining reference layers could be cached. Changes would be displayed at all remote site locations with each client display refresh.
There were three different Web application options that could leverage a reference data cache.
1) ArcGIS Server Web ADF application with a centralized map cache service.
2) ArcGIS Server REST service, leveraging a Rich Internet Application (RIA) Flash or Silverlight Map Editor application with a centralized reference map cache service.
3) ArcGIS Mobile application, leveraging a Mobile client application with a local reference map cache data source. Point changes to the declaration layer would be exchanged using a centralized ArcGIS Server mobile synchronization service.
The ESRI Capacity Planning Calculator was used to evaluate the architecture for the four different workflow technology patterns identified above. Peak system loads were estimated at 2400 concurrent users with standard Web productivity of 6 displays per minute. System design results are provided in the following paragraphs.
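The peak throughput these workflow parameters imply can be sketched as simple arithmetic (the 2,400-user and 6-displays-per-minute figures come from the text above; the per-second conversion is derived):

```python
# Peak display throughput implied by the stated workflow parameters:
# 2,400 concurrent users, each generating 6 map displays per minute.
concurrent_users = 2400
displays_per_min_per_user = 6  # standard Web productivity

peak_displays_per_min = concurrent_users * displays_per_min_per_user
peak_displays_per_sec = peak_displays_per_min / 60

print(peak_displays_per_min)  # 14400 map displays per minute
print(peak_displays_per_sec)  # 240.0 map displays per second
```

This peak display rate is the common input that drives the traffic and server-loading estimates in the workflow comparisons that follow.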
Dynamic Web ADF Application
An ArcGIS Server ADF MXD light 100% Dynamic standard ESRI workflow was used to generate the hardware requirements and traffic loads representing the dynamic Web application solution.
Peak central data center traffic loads were estimated to reach 480 Mbps, well beyond the bandwidth available with the current data center Wide Area Network (WAN) service connection. Larger regional office sites (50 concurrent users) would require WAN connections with 24 Mbps bandwidth to support the projected peak citizen declaration traffic loads. Major infrastructure bandwidth increases would be needed to handle projected traffic flow requirements.
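Working backward from the 480 Mbps figure quoted above, the implied average traffic is about 2 Mb per map display. That per-display value is an inference used here for illustration, not a number stated in the text:

```python
# Central data center traffic estimate for the fully dynamic workflow.
# The 2 Mb-per-display figure is inferred by working backward from the
# 480 Mbps estimate above; it is an assumption, not a CPT constant.
concurrent_users = 2400
displays_per_min_per_user = 6
mb_per_display = 2.0  # assumed average traffic per map display (megabits)

peak_mbps = concurrent_users * displays_per_min_per_user * mb_per_display / 60
print(peak_mbps)  # 480.0 Mbps at the central data center
```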
The central hardware solution was supported by Intel Xeon X5570 8-core (2-chip) 2933 MHz Windows 64-bit servers, each with 16 GB of memory. A total of 5 servers were required for the Web application server tier, 8 servers for the container machine tier, and one SDE geodatabase server.
Cached Web ADF Application
A custom ArcGIS Server ADF MXD light 10% Dynamic mashup with a cached reference layer basemap service was used to support the cached Web ADF workflow analysis.
Peak central data center traffic load estimates dropped to 252 Mbps with larger regional office sites requiring 12 Mbps WAN connections. The Web tier remained at 5 servers to support the centrally hosted Map Viewer Web applications. The ArcGIS Server Container Machine tier reduced to 1 server and the DBMS load was reduced to 6.8 percent utilization. The Cached data source provided a significant cost reduction from the initial proposal.
A sample data set was used to evaluate map caching timelines, and the complete country reference map cache could be generated within one week of processing time. Pre-caching the base reference layers would be well worth the effort, since there would be no need to update or change the reference cache during the peak citizen declaration period (data would be static).
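As a rough sanity check on caching timelines, each level of a map cache pyramid holds four times as many tiles as the level above it, so the total tile count and processing time can be estimated from the number of scale levels and a rendering rate. All parameters below are illustrative assumptions, not figures from this project:

```python
# Rough map-cache timeline estimate. Each pyramid level holds four times
# as many tiles as the level above it. All numbers are illustrative.
levels = 10                    # assumed number of cache scale levels
total_tiles = sum(4 ** i for i in range(levels))
tiles_per_hour = 2000          # assumed rendering throughput

hours = total_tiles / tiles_per_hour
print(total_tiles)   # 349525 tiles in the full pyramid
print(round(hours))  # ~175 hours, on the order of one week of processing
```

A small sample data set, as described above, is the practical way to calibrate the rendering-throughput assumption before committing to a full national cache build.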
Cached Rich Internet Client Application (REST API)
A custom ArcGIS Server REST MXD light 10% Dynamic mashup with a cached reference layer basemap service was used to support the cached RIA workflow analysis.
Peak central data center traffic load estimates were the same as for the previous cached ADF workflow. The Web load was significantly reduced, since the Map Viewer application was supported by the RIA browser clients. Web and server object container roles were supported on two servers in a two-tier architecture. The database load remained the same. The lighter REST server architecture reduced the hosting environment by four servers.
ArcGIS Mobile Application
The fourth design option was to use the ArcGIS Mobile application with a local reference cache data source. A demo of the ArcGIS Mobile client was provided on a Windows desktop platform to demonstrate feasibility of supporting the required editing functions with this client technology. The ArcGIS Mobile client technology operates very well on a standard Windows display environment and performed all the functions needed to support the citizen declaration requirements.
The ArcGIS Mobile standard ESRI workflow synchronization service was used to support the design analysis. This workflow was generated by the CPT Calculator using a SOAP MXD light service with feature output (display features streamed to the client application). A 95 percent data cache setting was used to represent traffic for point feature exchanges (only point changes would be exchanged between the client and server displays). Cached reference layers would be distributed to each regional site in advance, with access provided by a file share to the ArcGIS Mobile clients running on the local workstations. The ArcGIS Mobile client would synchronize point changes to the dynamic citizen declaration layer over the government WAN. The peak concurrent synchronization service load would be reduced to 600 concurrent users, representing 25 percent of the total client displays (point changes are made only during edit transactions). User display performance would be very fast, supported by the local reference map cache and the point layer in the ArcGIS Mobile application cache.
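The 25 percent editing estimate translates into the peak concurrent service load as follows:

```python
# Peak synchronization service load for the ArcGIS Mobile option.
# Only edit transactions (point changes) reach the central service.
concurrent_users = 2400
editing_fraction = 0.25  # share of client displays that involve an edit

peak_sync_users = int(concurrent_users * editing_fraction)
print(peak_sync_users)  # 600 concurrent users on the sync service
```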
Peak central data center traffic loads were reduced to 15 Mbps, well within the T3 (45 Mbps) bandwidth available on the current data center Wide Area Network (WAN) service connection. Large regional office site peak traffic was 1.3 Mbps, which would function well within 3 Mbps WAN connections. The existing infrastructure would be able to support peak WAN traffic loads with guaranteed service to each of the remote desktop locations (the ArcGIS Mobile client would continue to function as a standalone system if WAN communication were lost, and edits would be sent to the central server when communication was restored). The central hardware requirements were reduced to 2 composite Web/container machine servers, and the data server load was minimal (less than 1 percent utilization). It was very clear that the cached client application provided significant cost and performance benefits over the centralized dynamic Web application solution included in the initial proposal.
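The bandwidth feasibility checks made throughout this comparison reduce to comparing projected peak traffic against available link capacity. A minimal sketch, using only the traffic figures quoted in the analysis above:

```python
# WAN feasibility check: projected peak traffic vs. available bandwidth.
# Traffic figures (Mbps) are taken from the design analysis above; a link
# is considered adequate when peak traffic fits within its capacity.
def link_ok(peak_mbps, capacity_mbps):
    return peak_mbps <= capacity_mbps

# Central data center on the existing T3 (45 Mbps) connection:
print(link_ok(480, 45))   # dynamic Web ADF: False (bandwidth upgrade needed)
print(link_ok(15, 45))    # ArcGIS Mobile:   True
# Large regional site on a 3 Mbps connection:
print(link_ok(1.3, 3))    # ArcGIS Mobile:   True
```

In practice a capacity planner would also hold peak utilization below 100 percent of link capacity to preserve response times, but even this simple check shows why the mobile option fit the existing infrastructure while the dynamic option did not.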
Software performance summary
Experience suggests we can do a better job of selecting and building software solutions that meet operational performance and scalability needs.
The next section takes a closer look at GIS Data Administration.