HTTP Benchmarks Viewer lets you upload your Apache Benchmark results and search, visualize and export them.

Uploading benchmark outputs is easy: upload them as multiple raw text files or bundled together in .zip files.


Running Apache Benchmark

Download Apache Bench for Windows

You can download a copy of the Apache Benchmark utility for Windows at ab.exe. It's also included in every installation of Apache as bin/ab.exe, which is available for Windows in the zip package of XAMPP.

Running a benchmark

Here's an example of using the ab utility to run a benchmark against an HTTP URL:

ab -k -n 1000 -c 10 "http://localhost:55000/json" > json_1000_10.txt

This benchmark uses the HTTP keep-alive feature, performing 1000 requests, 10 at a time, against the above URL and saving the results in a file called json_1000_10.txt. More options are available in the Apache Benchmark docs.

The json_1000_10.txt output from this command can be uploaded to this site for analysis.
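To collect results at several concurrency levels in one go, runs like the above can be scripted. Here's a minimal Python sketch; the helper names, URL and concurrency levels are illustrative, and it assumes ab is on your PATH:

```python
import subprocess

def ab_command(url, requests, concurrency):
    """Build the ab invocation for one benchmark run (keep-alive enabled)."""
    return ["ab", "-k", "-n", str(requests), "-c", str(concurrency), url]

def run_benchmarks(url, requests=1000, concurrencies=(1, 5, 10)):
    """Run ab once per concurrency level, saving each result to its own file."""
    for c in concurrencies:
        out_file = "json_{}_{}.txt".format(requests, c)
        with open(out_file, "w") as f:
            subprocess.run(ab_command(url, requests, c), stdout=f, check=True)

# Example usage (requires ab on your PATH and a server listening at the URL):
# run_benchmarks("http://localhost:55000/json")
```

The resulting json_1000_1.txt, json_1000_5.txt and json_1000_10.txt files can then be uploaded individually or zipped together.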

Benchmark Structure

The file names of the benchmarks aren't used; instead, the significant fields that categorize your benchmarks are the server hostname:port and the request /pathinfo.

This means the results will be displayed grouped by host name and separated into different tests.
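These fields appear in the header of every ab output file. As a sketch of how they could be read, the snippet below extracts them from a raw result; the significant_fields helper is hypothetical, but the header keys are the ones ab itself prints:

```python
def significant_fields(ab_output):
    """Extract the fields the viewer groups by from raw ab output:
    the server hostname:port and the request /pathinfo."""
    fields = {}
    for line in ab_output.splitlines():
        if ":" in line:
            # Split on the first colon only, e.g. "Server Port:  55000"
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    host = "{}:{}".format(fields.get("Server Hostname"), fields.get("Server Port"))
    return host, fields.get("Document Path")

# A trimmed-down ab output header for illustration:
sample = """\
Server Software:        Microsoft-HTTPAPI/2.0
Server Hostname:        localhost
Server Port:            55000

Document Path:          /json
Document Length:        495 bytes
"""
# significant_fields(sample) -> ("localhost:55000", "/json")
```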


User Guide

This guide will take you through the steps of uploading and analyzing your benchmarks.

Creating a Test Plan

A test plan allows you to group a historical series of test runs together, and is created after logging in:

The slug is the unique identifier for your benchmarks publicly accessible at /slug.
Once created, test plans will appear under the My Test Plans heading, sorted by most recent.

Editing a test plan

Selecting a test plan will take you to the edit screen where you can manage details about your environment.

Creating a test run

Benchmarks are attached to individual test runs representing a snapshot of when the benchmarks were run. Repeated results should be added to new test runs so their progress is properly grouped and tracked.

By default a test run is automatically created and labelled with the current date, which will let you start adding benchmarks straight away.

You can also create your own by selecting Create new test run from the drop down:

This will open a modal dialog that will let you provide a custom label for the new test run. Once created it will automatically be selected and you can start attaching benchmarks to it.

Uploading benchmarks

The easiest way to add benchmarks is to upload files from your computer by pressing upload test results.
You can upload multiple files at once, or if you prefer zip them together and upload them inside .zip files:

Each upload will display its own status bar with a green bar for each file indicating it was successfully uploaded. The contents of all the uploaded files will also be displayed in a grid format for inspection.

Once benchmarks are added, the links will be enabled and the badge listing the total count of benchmarks will be updated.

Uploading benchmarks as new test runs

If you want to upload multiple runs of the same benchmark, check the Create new test run for each .zip checkbox, which will create a new test run for each batch uploaded, using the name of the .zip file as the series id, e.g:
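As a sketch of how such batches could be prepared, the following Python bundles one run's output files into a .zip named after the run; the bundle_run helper and the file names are illustrative:

```python
import zipfile
from pathlib import Path

def bundle_run(zip_name, result_files):
    """Bundle one run's ab output files into a .zip; the zip's file
    name can then serve as the series id for the new test run."""
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in result_files:
            # Store each file by its base name, without directories
            zf.write(path, arcname=Path(path).name)
    return zip_name

# Example usage:
# bundle_run("run-1.zip", ["json_1000_1.txt", "json_1000_10.txt"])
```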

Clicking View Results will take you to the public benchmarks page and let you visualize the results:

As noted above, benchmarks are grouped by hostname:port and charted into different tests by /pathinfo. By default the raw labels will be used, but you can make the charts more readable by assigning custom labels instead, which you can do by going back to the edit screen under the Markup Graphs with Custom Labels section:

It automatically shows all the unique server host names and /pathinfos listed in any of the benchmarks so you only need to check the edit labels checkbox and fill in the blanks next to each text identifier:

Once you've finished adding labels, hitting save will show your labels properly displayed in the adjacent table cells:

Re-visiting the charts again will use the custom labels for a more readable display:

Whilst the graphs give you a nice overview, they only show the Requests Per Second at different concurrency levels. The other metrics in the Apache benchmark output can be inspected by clicking on view details in the top right.

This will bring you to the search results page that displays all the benchmark results in a gridview:

The gridview supports sorting of each column and the results can be further filtered with the filters provided.

Each filtered resultset can be exported by clicking on your preferred format of choice in the links on the top right:

This simply adds the format's extension, e.g. .csv, to the end of the pathinfo, so URLs can also easily be hand-crafted.
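Hand-crafting an export URL only means inserting the extension before any query string. A small sketch, where the export_url helper and the example URL are hypothetical:

```python
def export_url(results_url, fmt):
    """Append the export format's extension to the pathinfo,
    preserving any query string after it."""
    path, sep, query = results_url.partition("?")
    return path + "." + fmt + sep + query

# Hypothetical example:
# export_url("https://example.org/myplan/results?host=web1", "csv")
#   -> "https://example.org/myplan/results.csv?host=web1"
```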

Feature Preview

This brings us to the end of this feature preview. We hope you'll find the service useful, and we're aiming to update it with more useful charts soon!