An often neglected part of performance testing is the client-side performance aspects: caching, reducing the number of static files, file minification, compression, JavaScript processing time, page rendering, and so on.
For rich internet applications with lots of images, videos, etc., the client-side aspects have a bigger bearing on the actual response time than the server-side response time and should be given due importance.
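As a rough illustration of why one of these aspects, compression, matters so much for transfer size, here is a minimal Python sketch (the sample HTML string and function name are made up for illustration):

```python
import gzip

def compression_savings(payload: bytes) -> float:
    """Return the fraction of bytes saved by gzip-compressing the payload."""
    compressed = gzip.compress(payload, compresslevel=6)
    return 1 - len(compressed) / len(payload)

# Repetitive markup (typical of HTML pages) compresses very well.
html = b"<div class='item'><span>value</span></div>" * 200
print(f"gzip saves {compression_savings(html):.0%} of transfer size")
```

The same idea applies to CSS and JavaScript files, which is why enabling compression on the web server is usually one of the cheapest client-side wins.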
Here is a link to a good article on this topic from "Impetus".
Monday, July 11, 2011
Saturday, September 25, 2010
Throughput
Throughput in general means the rate at which output is generated.
In performance testing, the most common meaning is the rate at which data is sent from the servers back to the users.
However, throughput may also describe transaction throughput, i.e., the number of transactions per second.
When running tests to compare earlier baseline performance results against the current code, it is important to maintain the same transaction throughput rather than the data throughput (data throughput can vary due to changes on the page, such as the introduction of new images, or a different result set in different tests).
To maintain the same transaction throughput across tests, LoadRunner and other load test tools provide a feature called "pacing", which controls the number of transactions executed over a period of time.
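Pacing boils down to simple arithmetic: to hold a fixed transaction rate, each virtual user starts a new iteration at a fixed interval regardless of how long the transaction itself takes. A minimal sketch of that calculation (the function name is illustrative, not a LoadRunner API):

```python
def pacing_interval(target_tps: float, vusers: int) -> float:
    """Seconds between iteration starts per virtual user needed to hit a
    target transaction rate: each of `vusers` users must start one
    iteration every vusers / target_tps seconds."""
    return vusers / target_tps

# e.g. 50 virtual users aiming for 10 transactions/sec overall:
print(pacing_interval(10, 50))  # -> 5.0 seconds between iteration starts
```

In LoadRunner this corresponds to setting a fixed pacing interval in the run-time settings; as long as each iteration finishes within the interval, the transaction rate stays constant between test runs.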
Comparing data throughput graphs between test runs with similar TPS can quickly surface issues such as the addition of new image/JavaScript/other non-HTML files, an increase in the response page result set, problems with compression, or problems with caching of non-HTML resources at both the web server and the browser. All of these raise response times without stressing any other server resource.
Dynatrace AJAX - A must-have tool to check actual end-user performance
I have just started looking at the Dynatrace AJAX tool, which gives a very clear picture of the actual end-user performance in the browser.
It breaks down the response time in the browser into network time, server response time, embedded image/JavaScript download times, JavaScript execution time to render the page, and so on.
Even before starting a performance test with any load test tool, a simple study of the application's pages with the Dynatrace tool can provide plenty of pointers to start tuning the application.
http://ajax.dynatrace.com/pages/
Monday, November 2, 2009
Understanding Iteration Pacing
A video by Mark Tomlinson, LoadRunner Product Manager
HP LoadRunner Official Blog page
Tuesday, January 13, 2009
Monday, January 12, 2009
Latency
The general concept of latency in performance testing is the time it takes a request to reach the server from the client and then return from the server to the client, commonly known as network latency. However, it is not the only latency a request encounters. The flow of a request from the web server to the application server to the database is controlled by queues and threads. So the time a request waits in the application server queue for a free thread, or in the database server for a connection from the pool, is also latent time for the request.
Normally, under higher user loads, all the application server threads and database connection pools get used up, and incoming requests queue up at these points, increasing the latency of later requests. It is therefore important to configure the queues and threads properly so that user requests do not pile up. For example, if the maximum thread count is set too low, a well-provisioned application server may handle each request quickly, but overall throughput will be limited by the number of threads available to process the requests.
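Little's Law gives a quick sanity check on these settings: the average number of requests resident in a tier equals the arrival rate multiplied by the time each request spends there, and any excess over the thread count sits in the queue. A small sketch (all figures are hypothetical):

```python
def requests_in_system(arrival_rate_tps: float, service_time_s: float) -> float:
    """Little's Law: average number of requests resident in a tier."""
    return arrival_rate_tps * service_time_s

def expected_queued(arrival_rate_tps: float, service_time_s: float,
                    max_threads: int) -> float:
    """Requests that cannot get a worker thread and must wait in the queue."""
    return max(0.0, requests_in_system(arrival_rate_tps, service_time_s) - max_threads)

# 100 req/s, each needing 0.5 s of server time, with only 40 worker threads:
print(expected_queued(100, 0.5, 40))  # -> 10.0 requests waiting on average
```

When this number is consistently above zero, either the thread/connection pool limits need raising or the per-request service time needs tuning down.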
Coming back to the network latency, this latency can be because of the following reasons:
- transmission delays (properties of the physical medium)
- processing delays (such as passing through proxy servers or making network hops on the Internet)
Performance Testing Guidance for Web Applications
Performance Testing Guidance for Web Applications is one of the best resources available for download. It covers the whole performance testing process, from determining performance goals, creating usage models, scripting, and execution through to effective reporting of the test results.
The concepts explained in the guide are independent of any particular load testing tool, so the learnings can be understood and applied to any performance testing project.