I've noticed a significant speed gap when training time series models with Prophet: my code runs roughly 5 times faster on Apple M1/M2 CPUs than on modern Intel processors. The gap was consistent across four different setups running comparable Python versions (3.11 and 3.12), with CPU architecture being the only key variable.
My main question is whether this speed difference between Apple and Intel CPUs is a recognized issue. If it isn't expected, could you advise which dependencies I should investigate or possibly recompile to understand why Prophet runs slower on Intel CPUs than on Apple ones?