Near the end of the community meeting today, the idea came up of doing performance testing for various functionality in plasmapy.particles. Here are the contents and output of a Jupyter notebook for that.

Performance tests for creating particle objects

84 µs ± 1.21 µs per loop (mean ± std. dev. of 7 runs, 5,000 loops each)

Creating a CustomParticle is slower, but I'm not certain why.

When creating a ParticleList, the dominant timescale is creation of the Particle objects inside of it.
%timeit -n 5000 ParticleList([proton])
622 ns ± 33.1 ns per loop (mean ± std. dev. of 7 runs, 5,000 loops each)
%timeit -n 5000 ParticleList(["p+"])
30.3 µs ± 451 ns per loop (mean ± std. dev. of 7 runs, 5,000 loops each)
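The gap between the two ParticleList timings above can be illustrated with a toy model (this is a sketch, not PlasmaPy's actual implementation: ToyParticle and ToyParticleList are hypothetical stand-ins). Reusing an already-constructed object only costs an isinstance check, while a string input pays the full symbol-parsing and construction cost per call.

```python
import timeit

class ToyParticle:
    """Toy stand-in for Particle: parsing the symbol is the costly step."""
    def __init__(self, symbol):
        # Simulate symbol parsing and property lookup.
        self.symbol = symbol.strip().lower()
        self.mass = {"p+": 1.67262192369e-27}[self.symbol]

class ToyParticleList(list):
    """Toy stand-in for ParticleList: reuse objects, or build them from strings."""
    def __init__(self, particles):
        super().__init__(
            p if isinstance(p, ToyParticle) else ToyParticle(p)
            for p in particles
        )

proton = ToyParticle("p+")
# Passing an existing object skips construction entirely;
# passing a string re-parses and re-constructs on every call.
t_object = timeit.timeit(lambda: ToyParticleList([proton]), number=5000)
t_string = timeit.timeit(lambda: ToyParticleList(["p+"]), number=5000)
```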
Performance testing for particle_input
def alfven_speed_use_mass(B, n, mass: u.Quantity[u.kg]):
    """Calculate the Alfvén velocity, using mass."""
    return (B / np.sqrt(μ0 * n * mass)).to(u.m / u.s)
def alfven_speed_undecorated(B, n, particle: Particle):
    """Calculate the Alfvén velocity, using a `Particle` but not the `particle_input` machinery."""
    return (B / np.sqrt(μ0 * n * particle.mass)).to(u.m / u.s)
@particle_input
def alfven_speed_decorated(B, n, particle: ParticleLike):
    """Calculate the Alfvén velocity, using `particle_input` machinery."""
    return (B / np.sqrt(μ0 * n * particle.mass)).to(u.m / u.s)
Performance is not very different whether we pass the function a mass Quantity or a Particle object.
%timeit -n 5000 alfven_speed_use_mass(B, n, mass)
266 µs ± 1.93 µs per loop (mean ± std. dev. of 7 runs, 5,000 loops each)
%timeit -n 5000 alfven_speed_undecorated(B, n, proton)
283 µs ± 1.36 µs per loop (mean ± std. dev. of 7 runs, 5,000 loops each)
There is a ~40% performance penalty here for using particle_input, even when passing in a Particle.
%timeit -n 5000 alfven_speed_decorated(B, n, proton)
394 µs ± 1.59 µs per loop (mean ± std. dev. of 7 runs, 5,000 loops each)
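That kind of per-call penalty can be isolated in general with a toy normalizing decorator (a sketch of the pattern, not particle_input itself: normalize_particle and speed_bare below are hypothetical names). The overhead comes from the extra call frame plus the argument inspection done on every invocation.

```python
import functools
import timeit

def normalize_particle(func):
    """Toy analogue of particle_input: coerce the particle argument at call time."""
    @functools.wraps(func)
    def wrapper(B, n, particle):
        if isinstance(particle, str):
            # "Parse" a symbol into a mass; real parsing would cost far more.
            particle = {"p+": 1.67262192369e-27}[particle]
        return func(B, n, particle)
    return wrapper

def speed_bare(B, n, mass):
    return B / (n * mass) ** 0.5

speed_wrapped = normalize_particle(speed_bare)

# Compare the undecorated call against the decorated one.
t_bare = timeit.timeit(lambda: speed_bare(0.01, 1e19, 1.67e-27), number=5000)
t_wrapped = timeit.timeit(lambda: speed_wrapped(0.01, 1e19, 1.67e-27), number=5000)
```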
More time is needed when the decorated function is provided with other types of particle representations.
%timeit -n 5000 alfven_speed_decorated(B, n, "p+")
455 µs ± 1.17 µs per loop (mean ± std. dev. of 7 runs, 5,000 loops each)
%timeit -n 5000 alfven_speed_decorated(B, n, particle_list)
448 µs ± 1.13 µs per loop (mean ± std. dev. of 7 runs, 5,000 loops each)
%timeit -n 5000 alfven_speed_decorated(B, n, custom_particle)
510 µs ± 3.14 µs per loop (mean ± std. dev. of 7 runs, 5,000 loops each)
%timeit -n 5000 alfven_speed_decorated(B, n, mass)
598 µs ± 2.12 µs per loop (mean ± std. dev. of 7 runs, 5,000 loops each)
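One direction worth exploring for the string and mass inputs, which pay the largest conversion cost above, is memoizing the symbol-to-particle conversion so repeated calls with the same input skip the parse. A minimal sketch with functools.lru_cache (particle_from_symbol is a hypothetical helper; I'm not claiming PlasmaPy does this):

```python
import functools

@functools.lru_cache(maxsize=None)
def particle_from_symbol(symbol):
    """Hypothetical cached symbol -> mass lookup; real parsing would go here."""
    # Pretend the dict lookup is an expensive parse; the cache makes repeats cheap.
    return {"p+": 1.67262192369e-27, "e-": 9.1093837015e-31}[symbol]

particle_from_symbol("p+")   # first call pays the full conversion cost
particle_from_symbol("p+")   # second call is served from the cache
info = particle_from_symbol.cache_info()
```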