
Benchmark against similar libs #7

Open
ds300 opened this issue Jan 20, 2023 · 4 comments
ds300 commented Jan 20, 2023

No description provided.

@ds300 ds300 added this to the Announce Initial public release milestone Jan 20, 2023
@ds300 ds300 self-assigned this Feb 23, 2023
@danielweck
ds300 commented Mar 4, 2023

ooh very useful thanks!

@danielweck
It will be interesting to compare creation/update costs, as well as GC pressure, between Signia's incremental computeds (diffing) and the "standard" hybrid push + lazy/pull approach.
Good write-up by the way :)
https://signia.tldraw.dev/docs/scalability
Transaction (batch + possible rollback) is a nice feature too.
But note that Milo's benchmark only stress-tests core primitives in various graph topologies. This provides useful metrics but of course doesn't tell the full story (there's a separate UI benchmark for this)
https://github.com/krausest/js-framework-benchmark
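To illustrate the distinction being drawn here, a minimal sketch of the two strategies. This is not Signia's actual API; the types and function names are invented for illustration. A "standard" computed re-derives its value from its full inputs on every change, while an incremental (diff-based) computed reuses its previous value and does work proportional to the size of the change:

```typescript
// Hypothetical diff shape for illustration only (Signia's real diff types differ).
type Diff<T> = { inserted: T[] };

// "Standard" computed: re-derive from scratch, O(n) per change.
function sumFromScratch(items: number[]): number {
  return items.reduce((a, b) => a + b, 0);
}

// Incremental computed: apply the diff to the previous value, O(|diff|) per change.
function sumIncremental(prev: number, diff: Diff<number>): number {
  return prev + diff.inserted.reduce((a, b) => a + b, 0);
}

const items = [1, 2, 3];
let total = sumFromScratch(items); // 6

// A change arrives as a diff; the incremental version never re-scans `items`.
const diff: Diff<number> = { inserted: [4, 5] };
items.push(...diff.inserted);
total = sumIncremental(total, diff); // 15
```

The GC-pressure question above is about the extra allocations the diff objects themselves introduce, which this sketch doesn't measure.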

ds300 commented Mar 4, 2023

> It will be interesting to compare creation/update costs, as well as GC pressure, between Signia's incremental computeds (diffing) and the "standard" hybrid push + lazy/pull approach.

Yeah, it would be nice to have some metrics on this. The incremental stuff mostly becomes valuable for larger collections and/or more expensive operations, and I'm sure folks would appreciate being offered some intuition about what those kinds of sizes/operations are.

> note that Milo's benchmark only stress-tests core primitives in various graph topologies. This provides useful metrics but of course doesn't tell the full story

Already found a significant win this morning thanks to these 😊 but in general, yeah, you're right. The microbenchmarks don't matter too much for real apps. The cost of the effects/derivations far outweighs the reactivity overhead, so the important points of comparison for pure signals libraries are things like features, DX, and integration with UI rendering.
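A rough back-of-envelope sketch of the intuition above. The cost model and every number here are illustrative assumptions, not measurements from Signia or any benchmark: incremental computeds pay a fixed diff-bookkeeping overhead per change, so they only win once the collection is large relative to a typical change.

```typescript
// Work per change when re-deriving from scratch: proportional to collection size n.
const fullRecomputeCost = (n: number): number => n;

// Work per change when applying a diff: proportional to diff size d,
// plus a fixed bookkeeping overhead b for producing/consuming diffs.
// (b = 50 below is an arbitrary illustrative constant.)
const incrementalCost = (d: number, b: number): number => d + b;

// Large collection, small typical change: incremental wins by a wide margin.
const largeFull = fullRecomputeCost(10_000); // 10000 units
const largeIncr = incrementalCost(5, 50);    // 55 units

// Small collection: from-scratch is already cheap, so the diff machinery
// (and its allocations / GC pressure) isn't worth it.
const smallFull = fullRecomputeCost(20);     // 20 units
const smallIncr = incrementalCost(5, 50);    // 55 units
```

Under this toy model, the crossover sits roughly where collection size exceeds diff size plus the bookkeeping overhead, which is the kind of "intuition about sizes/operations" being asked for.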
