I Created EasyBench: A Simple Python Benchmarking Tool
I’ve created EasyBench, a benchmarking tool that makes it easy to measure the execution time of Python functions.
Motivation
Python’s standard library already includes `timeit` for this purpose.
However, when I actually used it, I ran into the following issues:
- Code to be measured must be passed in as strings
- There is a setup feature, but it runs only once before the whole run begins; there is no way to perform setup immediately before each trial while keeping it out of the measured time
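To illustrate both points, here is what measuring a list operation looks like with `timeit`: the code under test and its setup both go in as strings, and the setup runs only once before all the trials.

```python
import timeit

# Both the statement and the setup are strings; the setup runs once,
# so the same (shrinking) list is shared by all 100 trials.
total = timeit.timeit(
    stmt="big_list.pop(0)",
    setup="big_list = list(range(1_000_000))",
    number=100,
)
print(f"{total:.6f} seconds for 100 trials")
```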
I was recently writing an introductory Python article and wanted to explain computational complexity by measuring actual execution times to give readers a tangible experience.
However, due to the constraints above, I couldn’t measure things as precisely as I wanted, which led me to develop EasyBench as a more flexible tool.
How to Use EasyBench
EasyBench offers three measurement methods for different use cases, so you can choose the one that best fits your situation.
1. `@bench` Decorator - Ideal for Single Function Measurement
The simplest method is the `@bench` decorator. Just add it above the function you want to measure to track its execution time.
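As a minimal sketch (I’m assuming here that the decorator accepts the benchmarked function’s parameters as keyword arguments, which matches the `big_list` example discussed next; see the EasyBench docs for the exact signature):

```python
from easybench import bench

# Sketch: big_list is supplied to the decorated function as a parameter.
# Note that list(range(1_000_000)) is evaluated once, right here.
@bench(big_list=list(range(1_000_000)))
def pop_first(big_list):
    return big_list.pop(0)
```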
Note that with this approach, the creation of `big_list` (`list(range(1_000_000))`) happens only once at the beginning, and the same list is reused in subsequent trials.
If you want to use a fresh list for each trial, you can solve this by passing the parameters as functions:
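A sketch of that pattern, under the assumption that a callable passed as a parameter is invoked before each trial to produce a fresh value:

```python
from easybench import bench

# The lambda is called before each trial (outside the measured time),
# so every trial pops from a brand-new list.
@bench(big_list=lambda: list(range(1_000_000)))
def pop_first(big_list):
    return big_list.pop(0)
```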
For more complex cases where you’re dealing with functions as parameters, we also provide the `@bench.fn_params` decorator.
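For instance, if the value you want to pass is itself a function, a plain callable would be treated as a factory under the convention above; my understanding is that `@bench.fn_params` passes it through as-is instead. A hedged sketch (`double` and `apply_to_range` are hypothetical names):

```python
from easybench import bench

def double(x):
    return x * 2

# Sketch: fn_params marks parameters whose values are functions,
# so double is handed to apply_to_range itself rather than being called.
@bench.fn_params(func=double)
def apply_to_range(func):
    return [func(i) for i in range(1_000)]
```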
2. `EasyBench` Class - Convenient for Comparing Multiple Operations
To compare multiple operations, creating a subclass of the `EasyBench` class is appropriate. You can define the methods to be measured within the class and centrally manage settings and setup processes.
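A sketch of what that can look like, assuming measured methods are prefixed with `bench_` and per-trial setup lives in a `setup_trial` hook (the hook and config names here are assumptions; check the docs for the exact API):

```python
from easybench import BenchConfig, EasyBench

class BenchListOperation(EasyBench):
    # Shared settings for every benchmark method in this class
    bench_config = BenchConfig(trials=100)

    def setup_trial(self):
        # Runs before each trial, outside the measured time
        self.big_list = list(range(1_000_000))

    def bench_pop_first(self):
        self.big_list.pop(0)

    def bench_pop_last(self):
        self.big_list.pop()

if __name__ == "__main__":
    BenchListOperation().bench()
```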
3. `easybench` Command - For Batch Measurement of Entire Projects
If you want to run multiple benchmark scripts at once, the `easybench` command-line tool is useful. You can measure performance across your entire project in these three simple steps:
- Create a `benchmarks` directory at your project root
- Place benchmark scripts following the naming convention `bench_*.py` inside this directory
- Run the `easybench` command from your project root
```bash
# Example: Run 10 trials, measure memory, and display results sorted by average time
easybench --trials 10 --memory --sort-by avg
```
When executed, all benchmark scripts will run in sequence and the results will be displayed in a list.
By integrating this into your CI/CD pipeline, you can also continuously monitor performance.
Customization and Advanced Features
EasyBench also includes the following features:
- Output Format Options: Supports multiple formats including table, JSON, and CSV
- Adjustable Trial Count: Can be tuned based on your precision needs and execution time constraints
- Memory Usage Measurement: Evaluates memory efficiency in addition to execution time
For more detailed information and advanced usage examples, please refer to the official EasyBench documentation.