Friday, February 16, 2018

How To Use Apache JMeter To Perform Load Testing on a Web Server

Introduction
In this tutorial, we will go over how to use Apache JMeter to perform basic load and stress testing on your web application environment. We will show you how to use the graphical user interface to build a test plan and to run tests against a web server.
JMeter is an open source desktop Java application designed for load testing and measuring performance. It can be used to simulate a variety of load scenarios and output performance data in several ways, including CSV and XML files and graphs. Because it is 100% Java, it is available on every OS that supports Java 6 or later.
Prerequisites
In order to follow this tutorial, you will need to have a computer that you can run JMeter on, and a web server to load test against. Do not run these tests against your production servers unless you know they can handle the load, or you may negatively impact your server's performance.
You may adapt the tests in this tutorial to any of your own web applications. The web server that we are testing against as an example is a 1 CPU / 512 MB VPS running WordPress on a LEMP Stack, in the NYC2 DigitalOcean Datacenter. The JMeter computer is running in the DigitalOcean office in NYC (which affects the latency of our tests).
Please note that the JMeter test results can be skewed by a variety of factors, including the system resources (CPU and RAM) available to JMeter and the network between JMeter and the web server being tested. The size of the load that JMeter can generate without skewing the results can be increased by running the tests in the non-graphical mode or by distributing the load generation to multiple JMeter servers.
Install JMeter
Because we are using Apache JMeter as a desktop application, and there are a large variety of desktop OSes in use, we will not cover the installation steps of JMeter for any specific OS. With that being said, JMeter is very easy to install. The easiest ways to install it are to use a package manager (e.g. apt-get or Homebrew), or to download and unarchive the JMeter binaries from the official site and install Java (version 6 or later).
The software required to run JMeter is Java (version 6 or later) and the JMeter binaries, both available from their official download pages.
Depending on how you install Java, you may need to add the Java bin directory to your PATH environment variable, so JMeter can find the Java and keytool binaries.
Also, we will refer to the path that you installed JMeter to (the directory that you unarchived it to) as $JMETER_HOME. Therefore, if you are on a Linux or Unix-based OS, the JMeter binary is located at $JMETER_HOME/bin/jmeter. If you are running Windows, you can run $JMETER_HOME/bin/jmeter.bat.
For reference, when writing this tutorial, we used the following software versions:
  • Oracle Java 7 update 60, 64-bit
  • JMeter 2.11
Once you have JMeter installed and running, let's move on to building a test plan!
Building a Basic Test Plan
After starting JMeter, you should see the graphical user interface with an empty Test Plan:
JMeter GUI
A test plan is composed of a sequence of test components that determine how the load test will be simulated. We will explain how some of these components can be used as we add them to our test plan.
Add a Thread Group
First, add a Thread Group to Test Plan:
1.      Right-click on Test Plan
2.      Mouse over Add >
3.      Mouse over Threads (Users) >
4.      Click on Thread Group
The Thread Group has three particularly important properties that influence the load test:
  • Number of Threads (users): The number of users that JMeter will attempt to simulate. Set this to 50.
  • Ramp-Up Period (in seconds): The duration of time that JMeter will distribute the start of the threads over. Set this to 10.
  • Loop Count: The number of times to execute the test. Leave this set to 1.
Thread Group Properties
Add an HTTP Request Defaults
The HTTP Request Defaults Config Element is used to set default values for HTTP Requests in our test plan. This is particularly useful if we want to send multiple HTTP requests to the same server as part of our test. Now let's add HTTP Request Defaults to Thread Group:
1.      Select Thread Group, then Right-click it
2.      Mouse over Add >
3.      Mouse over Config Element >
4.      Click on HTTP Request Defaults
In HTTP Request Defaults, under the Web Server section, fill in the Server Name or IP field with the name or IP address of the web server you want to test. Setting the server here makes it the default server for the rest of the items in this thread group.
HTTP Request Defaults
Add an HTTP Cookie Manager
If your web server uses cookies, you can add support for cookies by adding an HTTP Cookie Manager to the Thread Group:
1.      Select Thread Group, then Right-click it
2.      Mouse over Add >
3.      Mouse over Config Element >
4.      Click on HTTP Cookie Manager
Add an HTTP Request Sampler
Now you will want to add an HTTP Request sampler to Thread Group, which represents a page request that each thread (user) will access:
1.      Select Thread Group, then Right-click it
2.      Mouse over Add >
3.      Mouse over Sampler >
4.      Click on HTTP Request
In HTTP Request, under the HTTP Request section, fill in the Path with the item that you want each thread (user) to request. We will set this to /, so each thread will access the homepage of our server. Note that you do not need to specify the server in this item because it was already specified in the HTTP Request Defaults item.
Note: If you want to add more HTTP Requests as part of your test, repeat this step. Every thread will perform all of the requests in this test plan.
Add a View Results in Table Listener
In JMeter, listeners are used to output the results of a load test. A variety of listeners are built in, and others can be added by installing plugins. We will use the View Results in Table listener because it is easy to read.
1.      Select Thread Group, then Right-click it
2.      Mouse over Add >
3.      Mouse over Listener >
4.      Click on View Results in Table
You may also type in a value for Filename to output the results to a CSV file.
Run the Basic Test Plan
Now that we have our basic test plan set up, let's run it and see the results.
First, save the test plan by clicking File, then Save, and specify your desired file name. Then select View Results in Table in the left pane, click Run in the main menu, and click Start (or just click the green Start arrow below the main menu). You should see the test results appear in the table as the test runs, like so:
Test Results Table
Interpreting the Results
You will probably see that the Status of all the requests is "Success" (indicated by a green triangle with a checkmark in it). After that, the columns that you are probably most interested in are the Sample Time (ms) and Latency (not displayed in the example) columns.
  • Latency: The number of milliseconds that elapsed between when JMeter sent the request and when an initial response was received
  • Sample Time: The number of milliseconds that the server took to fully serve the request (response + latency)
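To make the distinction concrete, the transfer portion of a sample is roughly the Sample Time minus the Latency. A quick sketch with hypothetical timings (not taken from the tutorial's run):

```python
# Hypothetical timings for a single sample (illustrative values only).
latency_ms = 120    # request sent until the first response byte arrived
sample_ms = 150     # request sent until the response was fully received
transfer_ms = sample_ms - latency_ms
print(transfer_ms)  # -> 30, the milliseconds spent transferring the body
```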
According to the table that was generated, the range of Sample Time was 128-164 ms. This is a reasonable response time for a basic homepage (which was about 55 KB). If your web application server is not struggling for resources, as demonstrated in the example, your Sample Time will be influenced primarily by geographical distance (which generally increases latency) and the size of the requested item (which increases transfer time). Your personal results will vary from the example.
So, our server survived our simulation of 50 users accessing our 55 KB WordPress homepage over 10 seconds (5 every second), with an acceptable response. Let's see what happens when we increase the number of threads.
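The "5 every second" figure comes straight from the Thread Group settings: the ramp-up period spreads the thread starts evenly, so 50 threads over 10 seconds means 5 new users starting each second. Sketched out:

```python
threads = 50     # Number of Threads (users)
ramp_up_s = 10   # Ramp-Up Period in seconds
starts_per_second = threads / ramp_up_s
print(starts_per_second)  # -> 5.0 new users started each second
```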
Increasing the Load
Let's try the same test with 80 threads over 10 seconds. In the Thread Group item in the left-pane, change the Number of Threads (users) to 80. Now click View Results in Table, then click Start. On our example server, this results in the following table:
Results Table 2
As you can see, the sample time has increased to nearly a second, which indicates that our web application server is beginning to become overburdened by requests. Let's log in to our VPS and take a quick look at resource usage during the load test.
Log in to your web server via SSH and run top:
top
Unless you have users actively hitting your server, you should see that the Cpu(s) % user usage (us) should be very low or 0%, and the Cpu(s) % idle (id) should be 99%+, like so:
Idle Top Output
Now, in JMeter, start the test again, then switch back to your web server's SSH session. You should see the resource usage increase:
Max CPU Top Output
In the case of our example, the CPU % user usage (us) is 94% and the system usage (sy) is 4.7%, with 0% idle. We aren't running out of memory, as indicated in the image above, so the decreased performance is due to a lack of CPU power! We can also see that the php5-fpm processes, which are serving WordPress, are using the majority of the CPU (about 96% combined).
In order to meet the demands of this simulation of 80 users in 10 seconds, we need to either increase our CPU or optimize our server setup to use less CPU. In the case of WordPress, we could move the MySQL database (which uses a portion of the CPU) to another server, and we could also implement caching (which would decrease CPU usage).
If you are curious, you may adjust the number of threads in the test to see how many your server can handle before it begins to exhibit performance degradation. In the case of our 1 CPU droplet example, it works fine until we use 72 threads over 10 seconds.
Conclusion
JMeter can be a very valuable tool for determining how your web application server setup should be improved, to reduce bottlenecks and increase performance. Now that you are familiar with the basic usage of JMeter, feel free to create new test plans to measure the performance of your servers in various scenarios.
The test that we used as the example does not accurately reflect a normal user's usage pattern, but JMeter has the tools to perform a variety of tests that may be useful in your own environment. For example, JMeter can be configured to simulate a user logging into your application, client-side caching, and handling user sessions with URL rewriting. There are many other built-in samplers, listeners, and configuration tools that can help you build your desired scenario. Additionally, there are JMeter plugins to enhance its functionality that are available for download at http://jmeter-plugins.org/.


Sunday, August 20, 2017

Performance Testing Overview


Introduction

Performance testing is a non-functional type of testing to determine system responsiveness, i.e. speed, stability, reliability, and scalability.

OR

Performance testing is a non-functional testing technique performed to determine the system parameters in terms of responsiveness and stability under various workloads. Performance testing measures the quality attributes of the system, such as scalability, reliability, and resource usage.

OR

Performance testing is done to provide stakeholders with information about their application regarding speed, stability and scalability.

Goals


Various factors are examined, but the following are some of the main goals of conducting performance tests on an application:
  • Assess production readiness
  • Check the system response time under expected load conditions
  • Check system behaviour under unexpected load conditions
  • Check the system scalability
  • Find the best configuration settings for optimal performance
  • Check system behaviour during spike user loads
  • Check system stability
  • Compare two platforms with the same software to see which performs better
  • Compare performance characteristics of system configurations
  • Evaluate the system against performance criteria
  • Find the throughput level
  • Discover what parts of the application perform poorly and under what conditions
  • Find the source of performance problems
  • Support system tuning

Importance of Performance Testing

Average user clicks away after 8 seconds of delay

$45 billion business revenue loss due to poor web applications performance

In November 2009, a computerized system used by US based airlines to maintain flight plans failed for several hours, causing havoc at all major airports. This caused huge delays to flight schedules and inconvenienced thousands of frustrated passengers. Identified as a 'serious efficiency problem' by the Federal Aviation Administration, this was one of the biggest system failures in US aviation history!

Aberdeen found that inadequate performance could impact revenue by up to 9%
Business performance begins to suffer at 5.1 seconds of delay in the response times of web applications (3.9 seconds for critical applications), and an additional second of waiting on a website significantly impacts customer satisfaction and visitor conversions. Page views, conversion rates, and customer satisfaction drop by 11%, 7%, and 16%, respectively!

Thursday, March 9, 2017

JMeter 3.1 New Features

JMeter 3.1 has been released with a couple of key performance improvements. Here are the changes for your review:

Link: http://jmeter.apache.org/changes.html

Thursday, October 9, 2014

Jmeter: Understanding Summary Report

The Summary Report shows values about the measurements JMeter made while calling the same page as if many users were calling it. It gives the results in tabular format, which you can save as a .csv file.

These are the main headings in the Summary Report listener. Let's understand them in detail:
  
Label: In the Label section you will be able to see all of the recorded HTTP requests, during or after the test run.


Samples: The number of HTTP requests run for a given thread group. For example, if we have one HTTP request and we run it with 5 users, the number of samples will be 5x1=5. Likewise, if the sampler ran two times for each user, the number of samples for 5 users would be 5x2=10.
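In other words, the sample count is the number of users multiplied by the number of times each sampler runs. As a sketch:

```python
users = 5          # Number of Threads (users)
runs_per_user = 2  # times the sampler executed for each user
samples = users * runs_per_user
print(samples)  # -> 10, matching the 5x2=10 example above
```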


Average: The average response time for that particular HTTP request, in milliseconds. In the image, the first label has 4 samples because the sampler ran 2 times for each user and the test ran with 2 users; across those 4 samples, the average response time is 401 ms.


Min: The minimum response time taken by the HTTP request. In the image above, the minimum response time for the first four samples is 266 ms, meaning the fastest of the four requests responded in 266 ms.


Max: The maximum response time taken by the HTTP request. In the image above, the maximum response time for the first four samples is 552 ms, meaning the slowest of the four requests responded in 552 ms.


Std. Deviation: How much the response times deviate from the average value. The lower this value, the more consistent the response times are.
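The standard deviation is computed over all of a label's response times; JMeter appears to use the population form of the calculation. A sketch with hypothetical response times chosen so the min, max, and average match the example figures above:

```python
import statistics

# Hypothetical response times (ms) for one label; the min (266),
# max (552), and average (401) match the example figures above.
times_ms = [266, 380, 406, 552]
avg = statistics.mean(times_ms)
sd = statistics.pstdev(times_ms)  # population standard deviation
print(avg)           # -> 401
print(round(sd, 1))  # -> 101.8
```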


Error %: The percentage of samples that failed during the run. Failures such as 404 (file not found) responses, exceptions, or any other error during the test run are counted in Error %. In the image above, the Error % is zero because all of the requests ran successfully.
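Error % is simply the failed samples divided by the total samples. A sketch with hypothetical counts:

```python
samples = 200  # total requests in the run (hypothetical)
failures = 3   # e.g. 404 responses or exceptions (hypothetical)
error_pct = 100.0 * failures / samples
print(error_pct)  # -> 1.5
```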


Throughput: The number of requests per unit of time (seconds, minutes, hours) that are sent to your server during the test.
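Throughput is the total sample count divided by the elapsed test time, measured from the start of the first sample to the end of the last. Sketched with hypothetical numbers:

```python
samples = 10     # total requests completed (hypothetical)
elapsed_s = 2.5  # seconds from first request start to last response end
throughput = samples / elapsed_s
print(throughput)  # -> 4.0 requests per second
```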





Monday, September 16, 2013

.Net CLR Memory Counters





Large Object Heap Size:

This counter displays the current size of the Large Object Heap in bytes. Objects of 85,000 bytes or larger are treated as large objects by the Garbage Collector and are allocated directly in a special heap; they are not promoted through the generations. This counter is updated at the end of a GC; it is not updated on every allocation.

% Time in GC:

Time in GC is the percentage of elapsed time that was spent in performing a garbage collection (GC) since the last GC cycle. This counter is usually an indicator of the work done by the Garbage Collector on behalf of the application to collect and compact memory. This counter is updated only at the end of every GC, and the counter value reflects the last observed value; it is not an average.

# Bytes in all Heaps

This counter is the sum of four other counters; Gen 0 Heap Size; Gen 1 Heap Size; Gen 2 Heap Size; and the Large Object Heap Size. This counter indicates the current memory allocated in bytes on the GC heaps.

 

# Gen 0 Collections:

 

This counter displays the number of times the generation 0 objects (the youngest, most recently allocated) have been garbage collected (Gen 0 GC) since the start of the application. A Gen 0 GC occurs when the available memory in generation 0 is not sufficient to satisfy an allocation request. This counter is incremented at the end of a Gen 0 GC. Higher generation GCs include all lower generation GCs, so this counter is also explicitly incremented when a higher generation (Gen 1 or Gen 2) GC occurs. The _Global_ counter value is not accurate and should be ignored. This counter displays the last observed value.

 

# Gen 1 Collections:

 

This counter displays the number of times the generation 1 objects are garbage collected since the start of the application. The counter is incremented at the end of a Gen 1 GC. Higher generation GCs include all lower generation GCs. This counter is explicitly incremented when a higher generation (Gen 2) GC occurs. _Global_ counter value is not accurate and should be ignored. This counter displays the last observed value.

 

# Gen 2 Collections:

This counter displays the number of times the generation 2 objects (older) are garbage collected since the start of the application. The counter is incremented at the end of a Gen 2 GC (also called full GC). _Global_ counter value is not accurate and should be ignored. This counter displays the last observed value.

# of Pinned Objects:

This counter displays the number of pinned objects encountered in the last GC. This counter tracks the pinned objects only in the heaps that were garbage collected; e.g., a Gen 0 GC would cause enumeration of pinned objects in the generation 0 heap only. A pinned object is one that the Garbage Collector cannot move in memory.

 

Exceptions:

 

# of Exceps Thrown/Sec:

This counter displays the number of exceptions thrown per second. These include both .NET exceptions and unmanaged exceptions that get converted into .NET exceptions; e.g., a null pointer reference exception in unmanaged code would be rethrown in managed code as a .NET System.NullReferenceException. This counter includes both handled and unhandled exceptions. Exceptions should only occur in rare situations and not in the normal control flow of the program; this counter was designed as an indicator of potential performance problems due to a large (>100s) rate of exceptions thrown. This counter is not an average over time; it displays the difference between the values observed in the last two samples divided by the duration of the sample interval.

 

Throw to Catch Depth/Sec:

This counter displays the number of stack frames traversed, per second, from the frame that threw the .NET exception to the frame that handled the exception. This counter resets to 0 when an exception handler is entered, so nested exceptions would show the handler-to-handler stack depth. This counter is not an average over time; it displays the difference between the values observed in the last two samples divided by the duration of the sample interval.

 

Locking and Threading:

 

Contention Rate/Sec:
Rate at which threads in the runtime attempt to acquire a managed lock unsuccessfully. Managed locks can be acquired in many ways: by the "lock" statement in C# or by calling System.Monitor.Enter or by using MethodImplOptions.Synchronized custom attribute.

 

# of Current Logical Threads:

This counter displays the number of current .NET thread objects in the application. A .NET thread object is created either by new System.Threading.Thread or when an unmanaged thread enters the managed environment. This counter maintains the count of both running and stopped threads. This counter is not an average over time; it just displays the last observed value.

 

# of Current Physical Threads:

This counter displays the number of native OS threads created and owned by the CLR to act as underlying threads for .NET thread objects. This counter’s value does not include the threads used by the CLR in its internal operations; it is a subset of the threads in the OS process.

 

 

# of Current Recognized Threads:

This counter displays the number of threads currently recognized by the CLR; they have a corresponding .NET thread object associated with them. These threads are not created by the CLR; they are created outside the CLR but have since run inside the CLR at least once. Only unique threads are tracked; threads with the same thread ID re-entering the CLR or recreated after thread exit are not counted twice.

 

 

 

Performance Test Benefits





Performance test

Benefits:
  • Determines the speed, scalability, and stability characteristics of an application, thereby providing an input to making sound business decisions.
  • Focuses on determining if the users of the system will be satisfied with the performance characteristics of the application.
  • Identifies mismatches between performance-related expectations and reality.
  • Supports tuning, capacity planning, and optimization efforts.

Challenges and Areas Not Addressed:
  • May not detect some functional defects that only appear under load.
  • If not carefully designed and validated, may only be indicative of performance characteristics in a very small number of production scenarios.
  • Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.

Load test

Benefits:
  • Determines the throughput required to support the anticipated peak production load.
  • Determines the adequacy of a hardware environment.
  • Evaluates the adequacy of a load balancer.
  • Detects concurrency issues.
  • Detects functionality errors under load.
  • Collects data for scalability and capacity-planning purposes.
  • Helps to determine how many users the application can handle before performance is compromised.
  • Helps to determine how much load the hardware can handle before resource utilization limits are exceeded.

Challenges and Areas Not Addressed:
  • Is not designed to primarily focus on speed of response.
  • Results should only be used for comparison with other related load tests.

Stress test

Benefits:
  • Determines if data can be corrupted by overstressing the system.
  • Provides an estimate of how far beyond the target load an application can go before causing failures and errors in addition to slowness.
  • Allows you to establish application-monitoring triggers to warn of impending failures.
  • Ensures that security vulnerabilities are not opened up by stressful conditions.
  • Determines the side effects of common hardware or supporting application failures.
  • Helps to determine what kinds of failures are most valuable to plan for.

Challenges and Areas Not Addressed:
  • Because stress tests are unrealistic by design, some stakeholders may dismiss the results.
  • It is often difficult to know how much stress is worth applying.
  • It is possible to cause application and/or network failures that may result in significant disruption if not isolated to the test environment.

Capacity test

Benefits:
  • Provides information about how workload can be handled to meet business requirements.
  • Provides actual data that capacity planners can use to validate or enhance their models and/or predictions.
  • Enables you to conduct various tests to compare capacity-planning models and/or predictions.
  • Determines the current usage and capacity of the existing system to aid in capacity planning.
  • Provides the usage and capacity trends of the existing system to aid in capacity planning.

Challenges and Areas Not Addressed:
  • Capacity model validation tests are complex to create.
  • Not all aspects of a capacity-planning model can be validated through testing at a time when those aspects would provide the most value.