python-benchmark-harness

A micro/macro benchmark framework for the Python programming language that helps with optimizing your software.


Keywords
benchmark, benchmark-framework, benchmarking, decorator, flame-graphs, intrusive-testing, performance, performance-analysis, performance-bottleneck, performance-monitoring, performance-testing, profiler, profiling, python, python-library, test-automation, testing, testing-tools, visualization
License
MIT
Install
pip install python-benchmark-harness==1.0.3

Documentation

Profile and test to gain insights into the performance of your beautiful Python code

View Demo - Report Bug - Request Feature


QuickPotato in a nutshell

QuickPotato is a Python library that aims to make it easier to rapidly profile your software and produce powerful code visualizations that enable you to quickly investigate where potential performance bottlenecks are hidden.

QuickPotato also tries to provide you with a path to add an automated performance testing angle to your regular unit tests or test-driven development test cases, allowing you to test your code early in the development life cycle in a simple, reliable, and fast way.

Installation

Install using pip or download the source code from GitHub.

pip install QuickPotato
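
After installing, you can quickly check that the package is importable (a minimal sanity check, assuming the installed distribution exposes the QuickPotato module used in the examples below):

import QuickPotato

# Print where the package was installed, just to confirm the import works.
print(QuickPotato.__file__)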

Generating Flame Graphs

Example of a Python flame graph

How to interpret the Flame Graphs generated by QuickPotato together with d3-flame-graph:

  • Each box is a function in the stack.
  • The y-axis shows the stack depth; the top box shows what was on the CPU.
  • The x-axis does not show time but spans the population and is ordered alphabetically.
  • The width of a box shows how long it was on-CPU or was part of a parent function that was on-CPU.

If you are unfamiliar with flame graphs, the best place to read about them is Brendan Gregg's website.

You can generate a Python flame graph with QuickPotato in the following way:

from examples.example_code import FancyCode
from QuickPotato.profiling.intrusive import performance_test as pt
from QuickPotato.statistical.visualizations import FlameGraph

# Create a test case
pt.test_case_name = "FlameGraph"

pt.measure_method_performance(
    method=FancyCode().say_my_name_and_more,  # <-- The method you want to test.
    arguments=["joey hendricks"],  # <-- Your arguments go here.
    iteration=10,  # <-- The number of times you want to execute this method.
    pacing=0  # <-- How many seconds you want to wait between iterations.
)

# Generate the flame graph visualizations to analyse your code performance.
FlameGraph(pt.test_case_name, test_id=pt.current_test_id).export("C:\\temp\\")
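
The snippet above imports FancyCode from the examples package that ships with the repository. If you only installed the library from pip, a self-contained sketch looks roughly like this (slow_function is a hypothetical stand-in for your own code under test):

import time

from QuickPotato.profiling.intrusive import performance_test as pt
from QuickPotato.statistical.visualizations import FlameGraph


def slow_function(name):  # <-- Hypothetical code under test, standing in for FancyCode.
    time.sleep(0.01)
    return f"hello {name}"


# Create a test case
pt.test_case_name = "FlameGraphLocal"

pt.measure_method_performance(
    method=slow_function,
    arguments=["joey hendricks"],
    iteration=10,
    pacing=0
)

# Export the flame graph to any writable directory.
FlameGraph(pt.test_case_name, test_id=pt.current_test_id).export("C:\\temp\\")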

Generating Heatmaps (Beta)

Example of a Python heatmap

How a D3 heatmap generated by QuickPotato works:

  • Every box in the heatmap is a function.
  • The y-axis is made up of functions ordered by their latency.
  • The x-axis spans the number of samples (one sample is one execution of your code) and is separated into columns of test IDs (one test ID is one completely executed test).
  • The color shows the speed of the function; the redder a box is, the more time was spent in it.
  • All boxes are clickable and will give you information about that particular function.

You can generate a Python heatmap with QuickPotato in the following way:

from examples.example_code import FancyCode
from QuickPotato.profiling.intrusive import performance_test as pt
from QuickPotato.statistical.visualizations import HeatMap

# Create a test case
pt.test_case_name = "Heatmap"

# Attach the method you want to performance test
pt.measure_method_performance(
    method=FancyCode().say_my_name_and_more,  # <-- The method you want to test.
    arguments=["joey hendricks"],  # <-- Your arguments go here.
    iteration=10,  # <-- The number of times you want to execute this method.
    pacing=0  # <-- How many seconds you want to wait between iterations.
)

# Generate the heatmap visualizations to analyse your code performance.
HeatMap(pt.test_case_name, test_ids=[pt.current_test_id, pt.previous_test_id]).export("C:\\temp\\")

This D3 visualization is still being tweaked and improved. If you encounter any problems with it, please open an issue. (Your feedback is appreciated!)

Generating a CSV file

Example of a csv file

You can generate a CSV export in the following way:

from examples.example_code import FancyCode
from QuickPotato.profiling.intrusive import performance_test as pt
from QuickPotato.statistical.visualizations import CsvFile

# Create a test case
pt.test_case_name = "exporting to csv"

# Attach the method you want to performance test
pt.measure_method_performance(
    method=FancyCode().say_my_name_and_more,  # <-- The method you want to test.
    arguments=["joey hendricks"],  # <-- Your arguments go here.
    iteration=10,  # <-- The number of times you want to execute this method.
    pacing=0  # <-- How many seconds you want to wait between iterations.
)

# Export the samples into a CSV file for further analysis
CsvFile(pt.test_case_name, test_id=pt.current_test_id).export("C:\\temp\\")
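
Once exported, the CSV can be picked up with any analysis tooling you like. A minimal sketch using pandas (pandas is not a QuickPotato dependency, and the exact filename of the export is chosen by QuickPotato, so the sketch simply globs the target folder):

import glob

import pandas as pd

# Load every CSV file the export above wrote into the target folder.
for path in glob.glob("C:\\temp\\*.csv"):
    samples = pd.read_csv(path)
    print(path)
    print(samples.describe())  # <-- Quick statistical summary of the exported samples.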

Generating a Bar Chart

Example of a bar chart

How to interpret the bar chart generated by QuickPotato:

  • Each color is a method executed in your performance test.
  • The graph is ordered by latency from slowest to fastest (this can be disabled).
  • The y-axis is the time spent per method.
  • The x-axis is made up of samples and divided per test ID.
  • You can exclude a method by clicking its name in the legend. Also, by double-clicking a method name you can deselect all other methods.
  • Plotly's control bar can be found on the top right-hand side to further interact with the graph.

You can generate a simple interactive bar chart in the following way:

from examples.example_code import FancyCode
from QuickPotato import performance_test as pt
from QuickPotato.statistical.visualizations import BarChart

# Create a test case
pt.test_case_name = "bar chart"

# Attach the method you want to performance test
pt.measure_method_performance(
    method=FancyCode().say_my_name_and_more,  # <-- The method you want to test.
    arguments=["joey hendricks"],  # <-- Your arguments go here.
    iteration=10,  # <-- The number of times you want to execute this method.
    pacing=0  # <-- How many seconds you want to wait between iterations.
)

# Generate visualizations to analyse your code.
BarChart(pt.test_case_name, test_ids=[pt.current_test_id, pt.previous_test_id]).export("C:\\temp\\")

Boundary testing

Within QuickPotato, it is possible to create a performance test that validates whether your code breaches any defined boundary. An example of this sort of test can be found in the snippet below:

from QuickPotato import performance_test as pt
from examples.example_code import FancyCode

# Create a test case
pt.test_case_name = "test_performance"  # <-- Define test case name

# Defining the boundaries
pt.max_and_min_boundary_for_average = {"max": 1, "min": 0.001}

# Execute your code in a non-intrusive way
pt.measure_method_performance(
    method=FancyCode().say_my_name_and_more,  # <-- The method you want to test.
    arguments=["joey hendricks"],  # <-- Your arguments go here.
    iteration=10,  # <-- The number of times you want to execute this method.
    pacing=0  # <-- How many seconds you want to wait between iterations.
)

# Analyse the results: True when the benchmark stays within the defined boundaries, otherwise False
results = pt.verify_benchmark_against_set_boundaries()
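
Outside of a unit testing framework you can still fail a build or script on the outcome, for example with a plain assertion on the boolean result from the snippet above:

# Abort (e.g. in a CI pipeline) when the defined boundaries are breached.
assert results, "Performance boundaries were breached."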

Regression testing

It is also possible to verify that there is no regression between the current benchmark and a previous baseline. An example of such a test can be found in the snippet below:

from QuickPotato import performance_test as pt
from QuickPotato.configuration.management import options
from examples.example_code import FancyCode

# Disabling this setting filters untested or failed test IDs out of your baseline selection.
options.enable_the_selection_of_untested_or_failed_test_ids = False

# Create a test case
pt.test_case_name = "test_performance"  # <-- Define test case name

# Execute your code in a non-intrusive way
pt.measure_method_performance(
  method=FancyCode().say_my_name_and_more,  # <-- The method you want to test.
  arguments=["joey hendricks"],  # <-- Your arguments go here.
  iteration=10,  # <-- The number of times you want to execute this method.
  pacing=0  # <-- How many seconds you want to wait between iterations.
)

# Analyse the results: True if there is no regression against the baseline, otherwise False
results = pt.verify_benchmark_against_previous_baseline()

Integrating with unit testing frameworks

Uplifting basic performance tests into a test framework is easy within QuickPotato and can be achieved in the following way:

from QuickPotato import performance_test as pt
from QuickPotato.configuration.management import options
from examples.example_code import *
import unittest


class TestPerformance(unittest.TestCase):

    def setUp(self):
        """
        Disable the selection of failed or untested test results.
        This makes sure QuickPotato only compares your tests against a valid baseline.
        """
        options.enable_the_selection_of_untested_or_failed_test_ids = False

    def tearDown(self):
        """
        Enable the selection of failed or untested test results.
        We re-enable this setting after the test so it does not get in your way when quickly profiling.
        """
        options.enable_the_selection_of_untested_or_failed_test_ids = True

    def test_performance_of_method(self):
        """
        Your performance test.
        """
        # Create a test case
        pt.test_case_name = "test_performance"  # <-- Define test case name

        # Defining the boundaries
        pt.max_and_min_boundary_for_average = {"max": 10, "min": 0.001}

        # Execute your code in a non-intrusive way
        pt.measure_method_performance(
            method=FancyCode().say_my_name_and_more,  # <-- The method you want to test.
            arguments=["joey hendricks"],  # <-- Your arguments go here.
            iteration=10,  # <-- The number of times you want to execute this method.
            pacing=0  # <-- How many seconds you want to wait between iterations.
        )

        # Pass or fail the performance test
        self.assertTrue(pt.verify_benchmark_against_previous_baseline())
        self.assertTrue(pt.verify_benchmark_against_set_boundaries())
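
To run this performance test like any other unit test, you can add the standard unittest entry point at the bottom of the file (plain unittest boilerplate, nothing QuickPotato-specific):

# Allows the test case to be executed directly as a script.
if __name__ == "__main__":
    unittest.main()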

Coming soon

Some features which I am planning to add to QuickPotato soon:

  • Improving the heatmap
  • Scatter plot
  • Creating a virtual map of your code
  • Timeline (showing, from left to right, the time spent per action in your code)

Learn more about QuickPotato

If you want to learn more about test-driven performance testing and see how this project reached its current state, I would encourage you to check out the following resources: