Currently, it seems like Echidna uses a single core/process/thread to shrink failed sequences.
In some cases, however, we're interested in using 100% of the machine's resources to shrink that particular sequence. For example, this is often the case when I am using stopOnFail: true and workers: N in a multicore setup. I don't care about other failed properties; I only care about the particular one that I know to have failed. The problem is that shrinking takes forever even if I bump up N or the number of cores.
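For reference, the setup described above corresponds to a config fragment like the following (a sketch; the option names stopOnFail and workers come from the text, the worker count is illustrative):

```yaml
# Echidna config (sketch): stop at the first failing property and fuzz
# with several workers. Shrinking the failing sequence afterwards still
# appears to run on a single core.
stopOnFail: true
workers: 8        # illustrative value for a multicore machine
```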
It seems like trying to shrink a sequence on a c5.large instance takes about the same amount of time as on a c5.4xlarge (benchmark pending), which is unexpected.
Shrinking multicore #1249 (this issue) relates to how Echidna could use all cores to make shrinking run faster.
For example, suppose there were a threads: 32 option (or something like that); even with workers: 1, I would want Echidna to use all 32 vCPU cores of my instance to make shrinking run faster.
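A config sketch of the requested knob (hypothetical: a threads option does not exist in Echidna today; it is named here only to distinguish it from workers, which controls parallel fuzzing campaigns):

```yaml
# Hypothetical config: one fuzzing campaign, but all cores available
# to the shrinker for the single failing sequence.
workers: 1
threads: 32   # hypothetical option; not a real Echidna setting
```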
Shrinking with multiple cores is hard with our current (stochastic) approach, since the number of messages required between the cores is likely to be high, and that overhead will offset the gains from multiple workers. If you can @aviggiano, please test #1250 to see if that improved the shrinking speed on a single worker.
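Echidna's actual shrinker is stochastic, but the dominant cost of any shrinking loop is re-executing candidate sequences, and within one round that part is embarrassingly parallel. A minimal sketch (not Echidna's algorithm; oracle is a toy stand-in for re-running the failing property) of evaluating one-element-deletion candidates across a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def shrink(seq, still_fails, workers=4):
    """Greedy shrink by one-element deletion: each round, evaluate all
    candidates in parallel and recurse on the shortest that still fails."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while True:
            cands = [seq[:i] + seq[i + 1:] for i in range(len(seq))]
            # Parallel oracle calls: this is the step a multicore
            # shrinker would farm out to all available cores.
            failing = [c for c, bad in zip(cands, pool.map(still_fails, cands)) if bad]
            if not failing:
                return seq  # no smaller failing sequence found
            seq = min(failing, key=len)

# Toy oracle: the sequence "fails" whenever it contains both 'a' and 'b',
# so the minimal failing sequence is ['a', 'b'].
def oracle(seq):
    return 'a' in seq and 'b' in seq

print(shrink(['x', 'a', 'y', 'b', 'z'], oracle))  # -> ['a', 'b']
```

In this deterministic sketch the only coordination is one map per round; in a stochastic shrinker, workers would also have to exchange the best-so-far sequence between rounds, which is the inter-core messaging overhead mentioned above.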