8 changes: 8 additions & 0 deletions .github/workflows/python-package.yml
@@ -39,6 +39,14 @@ jobs:
          flags: ${{ matrix.os }}-${{ matrix.python-version }}
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
      - name: Benchmarking upload to Codspeed
        if: matrix.python-version == '3.12' && matrix.os == 'ubuntu-latest'
        uses: CodSpeedHQ/action@v3
        with:
          run: |
            cd tests
            python -m pytest --codspeed
          token: ${{ secrets.CODSPEED_TOKEN }}
  test_without_numpy:
    name: Test without numpy
    runs-on: ubuntu-latest
9 changes: 9 additions & 0 deletions CHANGES.rst
@@ -13,6 +13,15 @@ Changes
execute it, a `NotImplementedError` is raised indicating that the
function can't be used because `numpy` couldn't be imported.

Adds:

- Added a small benchmarking suite to CI to guard against absolute performance
  regressions and accidental breakage of the lazy expansion algorithm, which
  ensures O(N), rather than O(N^2), scaling complexity for operations involving
  many numbers with uncertainty. Established connectivity with
  `codspeed.io <https://codspeed.io>`_ to track benchmarking results. (#274)
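The lazy expansion idea this changelog entry refers to can be illustrated with a minimal, self-contained sketch. The `LazyCombination` class below is hypothetical (it is not the actual `uncertainties` internals): the point is that addition records its operands in O(1), and the error terms are only flattened once, when the standard deviation is requested, giving O(N) total cost for summing N values.

```python
from collections import defaultdict
from functools import reduce


class LazyCombination:
    """Hypothetical sketch of lazy expansion: a linear combination of
    error terms that is only flattened when std_dev is requested."""

    def __init__(self, terms):
        # terms: list of (coefficient, node) pairs, where a node is either
        # an atomic (id, sigma) tuple or another LazyCombination.
        self.terms = terms

    def __add__(self, other):
        # O(1): record the two operands instead of merging their expanded
        # error dictionaries (eager merging would make N additions O(N^2)).
        return LazyCombination([(1.0, self), (1.0, other)])

    def std_dev(self):
        # Expand once, iteratively, accumulating a coefficient per atom.
        derivs = defaultdict(float)
        stack = [(1.0, self)]
        while stack:
            coeff, node = stack.pop()
            if isinstance(node, LazyCombination):
                for c, sub in node.terms:
                    stack.append((coeff * c, sub))
            else:
                derivs[node] += coeff
        # Independent terms combine in quadrature.
        return sum((c * s) ** 2 for (_, s), c in derivs.items()) ** 0.5


# Summing 100 independent values with sigma=0.1 gives ~0.1 * sqrt(100) = 1.0.
atoms = [LazyCombination([(1.0, (i, 0.1))]) for i in range(100)]
total = reduce(lambda a, b: a + b, atoms)
print(total.std_dev())  # ~1.0
```

This is only a sketch of the scaling argument; the real library also tracks nominal values and correlated terms.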


Fixes:

- fix `readthedocs` configuration so that the build passes (#254)
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -62,7 +62,7 @@ Changelog = "https://github.com/lmfit/uncertainties/blob/master/CHANGES.rst"

[project.optional-dependencies]
arrays = ["numpy"]
test = ["pytest", "pytest_cov"]
test = ["pytest", "pytest_codspeed", "pytest_cov"]
doc = ["sphinx", "sphinx-copybutton", "python-docs-theme"]
all = ["uncertainties[doc,test,arrays]"]

57 changes: 57 additions & 0 deletions tests/test_performance.py
@@ -0,0 +1,57 @@
from math import log10
import time
import timeit

import pytest

from uncertainties import ufloat


def repeated_summation(num):
"""
generate and sum many floats together, then calculate the standard deviation of the
output. Under the lazy expansion algorithm, the uncertainty remains non-expanded
until a request is made to calculate the standard deviation.
"""
result = sum(ufloat(1, 0.1) for _ in range(num)).std_dev
return result
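
As a sanity check on what this benchmark computes (a hedged aside, not part of the PR): the generated ufloats are independent, so their uncertainties combine in quadrature and `repeated_summation(num)` should return approximately `0.1 * sqrt(num)`. The helper below is hypothetical and only restates that arithmetic:

```python
from math import sqrt


def expected_std_dev(n, sigma=0.1):
    # Hypothetical helper: independent uncertainties add in quadrature,
    # so summing n independent ufloat(1, sigma) values yields a standard
    # deviation of sigma * sqrt(n).
    return sigma * sqrt(n)


print(expected_std_dev(100))  # ~1.0
```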


def test_repeated_summation_complexity():
Contributor comment:

I've seen a test similar to this that kept randomly failing, so I don't know how reliable this test will be. I think test_repeated_summation_speed will catch any increases in time, so these tests are testing the same thing. I'm not opposed to this test; we can leave it in for now, and if we find it unreliable we can remove it.

I'll add a link to the PR with the graph you plotted, as it helps make sense of this test. It'll be good to see this plotted for your other PR too.

"""
Test that the execution time is linear in summation length
"""
approx_execution_time_per_n = 10e-6 # 10 us
target_test_duration = 1 # 1 s

n_list = [10, 100, 1000, 10000, 100000]
t_list = []
for n in n_list:
"""
Choose the number of repetitions so that the test takes target_test_duration
assuming the timing of a single run is approximately
N * approx_execution_time_per_n
"""
# Choose the number of repetitions so that the test
single_rep_duration = n * approx_execution_time_per_n
num_reps = int(target_test_duration / single_rep_duration)

t_tot = timeit.timeit(
lambda: repeated_summation(n),
number=num_reps,
timer=time.process_time,
)
t_single = t_tot / num_reps
t_list.append(t_single)
n0 = n_list[0]
t0 = t_list[0]
for n, t in zip(n_list[1:], t_list[1:]):
# Check that the plot of t vs n is linear on a log scale to within 10%
# See PR 275
assert 0.9 * log10(n / n0) < log10(t / t0) < 1.1 * log10(n / n0)
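
The tolerance band in that assertion is easier to read as a slope check on a log-log plot: for ideal O(N) scaling the slope of log t versus log n is 1, while O(N^2) scaling gives a slope of 2 and falls well outside the 10% band. A small illustrative sketch (hypothetical helper, with made-up timing numbers):

```python
from math import log10


def loglog_slope(n0, t0, n, t):
    # Slope of the (log n, log t) line through two measurements: 1.0 means
    # linear scaling, 2.0 would indicate quadratic scaling.
    return log10(t / t0) / log10(n / n0)


# Ideal O(N): time grows 10x when n grows 10x, so the slope is exactly 1.
assert abs(loglog_slope(10, 1e-4, 100, 1e-3) - 1.0) < 1e-9

# O(N^2): time grows 100x when n grows 10x, slope 2, outside the 10% band.
assert abs(loglog_slope(10, 1e-4, 100, 1e-2) - 2.0) < 1e-9
```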


@pytest.mark.parametrize("num", (10, 100, 1000, 10000, 100000))
@pytest.mark.benchmark
def test_repeated_summation_speed(num):
    repeated_summation(num)