Run benchmarks with PyPy as well #12175

Draft
itamarst wants to merge 3 commits into trunk from 12174-expand-benchmarks-to-run-on-pypy-as-well

Conversation

@itamarst (Contributor) commented May 8, 2024

Scope and purpose

Fixes #12174

Also run benchmarks on PyPy.
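
The benchmark names in the CodSpeed report below suggest plain pytest-style benchmark functions using the `benchmark` fixture (the pytest-codspeed / pytest-benchmark calling convention). As a rough, hypothetical sketch — not the actual Twisted benchmark code — such a function looks roughly like this and runs unchanged under CPython or PyPy:

```python
# Hypothetical sketch; not the actual Twisted benchmark module.
def test_http_client_small_response(benchmark):
    # ``benchmark`` is the fixture provided by pytest-codspeed (same calling
    # convention as pytest-benchmark): it runs the callable repeatedly and
    # records its timing.
    def handle_small_response():
        # Stand-in workload; the real benchmark exercises twisted.web's
        # HTTP client against a small canned response.
        payload = b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
        return payload.split(b"\r\n\r\n", 1)[1]

    benchmark(handle_small_response)
```

The suite itself needs no changes for PyPy; the point of this PR is to have CI invoke it under a PyPy interpreter in addition to CPython.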

@itamarst linked an issue May 8, 2024 that may be closed by this pull request

codspeed-hq bot commented May 8, 2024

CodSpeed Performance Report

Merging #12175 will improve performance by 35.56%

Comparing 12174-expand-benchmarks-to-run-on-pypy-as-well (048ff24) with trunk (fd08b87)

Summary

⚡ 2 improvements

Benchmarks breakdown

Benchmark                         | trunk  | 12174-expand-benchmarks-to-run-on-pypy-as-well | Change
test_http_client_small_response  | 1.7 ms | 1.2 ms                                         | +35.56%
test_http11_server_empty_request | 2.1 ms | 1.5 ms                                         | +43.28%

@adiroiban (Member) commented

Is there an official solution for this?

I guess we might want to run these tests with Python 3.12 now and later with 3.13, to see whether there are any improvements or regressions.

What do you say about a helper that injects test functions into the module namespace? It could generate one for PyPy and one for CPython, with different names.

I don't have experience with pytest, but I use this trick to generate Selenium tests that are executed with both Firefox and Chrome, so that I get two different test names.
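
As a rough illustration of the trick adiroiban describes — a helper that injects one test function per Python implementation into the module namespace — here is a hypothetical sketch. The helper name, the skip logic, and the placeholder benchmark body are assumptions for illustration, not code from this PR:

```python
import platform

import pytest


def _small_response_body(benchmark):
    # Placeholder benchmark body; the real one would exercise the HTTP client.
    benchmark(lambda: sum(range(1000)))


def inject_per_implementation(namespace, body, implementations=("cpython", "pypy")):
    """Register one test_* function per Python implementation.

    Each copy gets a distinct name, so CPython and PyPy results are reported
    separately; the copy that does not match the running interpreter is skipped.
    """
    current = platform.python_implementation().lower()
    for impl in implementations:
        # A fresh wrapper per implementation, so skip marks do not accumulate
        # on one shared function object.
        def test(benchmark, _body=body):
            _body(benchmark)

        test.__name__ = f"test_{body.__name__.strip('_')}_{impl}"
        namespace[test.__name__] = pytest.mark.skipif(
            current != impl, reason=f"requires {impl}"
        )(test)


# One call per benchmark body; pytest collects the injected names.
inject_per_implementation(globals(), _small_response_body)
```

A similar effect could be had with `@pytest.mark.parametrize` over the implementation name; the namespace-injection approach simply mirrors the Selenium setup adiroiban mentions, where each browser (here, each interpreter) gets its own test name.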

@itamarst (Contributor, Author) commented

Currently blocked on upstream support: CodSpeedHQ/pytest-codspeed#34

@glyph (Member) commented May 14, 2024

@itamarst thanks for tracking that

Development

Successfully merging this pull request may close: #12174 Expand benchmarks to run on PyPy as well