Currently the codspeed.io benchmarks run on CPython only, but PyPy is also important.
Running benchmarks on PyPy requires a slightly different runtime approach, because it has a JIT (and a future CPython may add one too, of course). In particular, my impression is that codspeed runs the code only once and measures instruction count. This is fine for CPython, but for a JIT it will give misleading results, since a single run is mostly interpreted and unrepresentative of steady-state performance. So the benchmarking framework should run the code multiple times when it's on PyPy.
This can be done with a custom pytest fixture.
Separately, there's the issue of how to communicate different runs to codspeed, which I will also look into.
OK, as mentioned at CodSpeedHQ/pytest-codspeed#33, PyPy benchmarks are a problem. Maybe not a problem for Twisted, though, so that may not be a blocker; I will have to do some more investigation.