[WIP] binary bloat analysis #1223
base: master
Conversation
It's definitely a good idea to keep track of size over time ... that affects the ability to distribute. Not part of this at ALL, but someday it'll be nice to do a size comparison: instead of just comparing simdjson against other libraries for speed on certain tasks, make a separate executable for each library+task combo and compare size as well.
Would a reasonably small example be one that takes the twitter JSON and outputs the first tweet found that contains "cats"? I think there are several metrics that would be interesting to keep track of over time.
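As a rough sketch of the proposed task, here is what "first tweet containing a word" could look like. This is Python rather than the C++ the eventual simdjson benchmark would use, and it assumes the structure of the repository's twitter.json (a top-level "statuses" array whose entries have a "text" field); the function name is made up for illustration.

```python
import json

def first_tweet_containing(doc: dict, word: str):
    """Return the text of the first status whose text contains `word`, or None."""
    for status in doc.get("statuses", []):
        text = status.get("text", "")
        if word in text:
            return text
    return None

# Illustrative input shaped like twitter.json; the real benchmark
# would load the full file instead.
sample = json.loads('{"statuses": [{"text": "hello"}, {"text": "I love cats"}]}')
print(first_tweet_containing(sample, "cats"))
```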
There was discussion of tracking performance earlier; perhaps it would be possible to store all these metrics somewhere? The curl project tracks all sorts of things over time, so we might get inspiration there.
For some reason I didn't see your response: I think it's reasonable to run this on simdjson.so at a minimum, and perhaps the parse executable. For ondemand, perhaps the partial_tweets benchmarks? Perhaps benchmark_ondemand as a whole, which would potentially give us interesting comparisons.
@pauldreik I'm fine leaving this PR around if you plan to get back to it; otherwise let's file an issue and get back to it when we have time :)
I pinged the author of the bloaty action job, let's see if I get a response and if not, let's do as you suggest! |
(force-pushed 0b94c19 to e03e602)
@jkeiser I fixed the bloaty job and it seems to work, but what should we do with the results? Should we reject the pull request if the binary size increases more than X%?
(force-pushed 2d0eb3e to 00a59a7)
@pauldreik do you have any idea whether the results of this are generally stable? If so, I think it'd be reasonable to reject changes with a 20% size change (or at least flag the crap out of them). @lemire thoughts? At the very, very least, we should run this in CI so we can go look at the results when we're worried. If it's expensive we can restrict to just master pushes. |
@jkeiser Sure, sure. |
I do not know if the results are stable, but I do not see why they wouldn't be. 20% seems like a pretty big margin; let's start with that!
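A minimal sketch of what the agreed-upon gate could look like in CI, assuming a 20% threshold. The file paths, the baseline file, and the function name are all hypothetical; the real job would get the baseline from wherever the master build's size is recorded.

```python
import os
import sys

# Hypothetical threshold from the discussion above: reject (or flag)
# a change if the artifact grew by more than 20% over the baseline.
THRESHOLD = 0.20

def size_regression(baseline_bytes: int, current_bytes: int,
                    threshold: float = THRESHOLD) -> bool:
    """True if current size exceeds the baseline by more than `threshold`."""
    return current_bytes > baseline_bytes * (1.0 + threshold)

if __name__ == "__main__":
    # Illustrative paths only; not part of the actual workflow.
    baseline = int(open("baseline_size.txt").read())        # e.g. master's libsimdjson.so size
    current = os.path.getsize("build/libsimdjson.so")
    if size_regression(baseline, current):
        print(f"binary grew from {baseline} to {current} bytes (> {THRESHOLD:.0%})")
        sys.exit(1)
```

Whether this hard-fails the job or merely posts a warning is exactly the open question in this thread; the check itself is the cheap part.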
This is for measuring binary size. Not that anyone asked about it that I know of, but it may be interesting to track over time.
It is really quick to run, less than a minute.
However, I do not really know how to present the results and/or act on them; see djarek/bloaty-analyze#1.