insufficient_quota #131
Comments
Thanks for the feedback! We will add it; it is also possible to integrate a retry mechanism using tenacity.
Ok, cool, I will check that out. I also need to see how to get this to work with open LLMs.
Let us know man
Do you have any news?
I have not had a chance to focus on this yet, but I will.
Please try the new version
I still get this error from OpenAI: RateLimitError
This is because the website you want to scrape is too big; you have to increase your billing plan on OpenAI.
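The retry mechanism mentioned above could be sketched roughly as follows. This version uses only the standard library for exponential backoff (tenacity offers the same pattern via its `retry` decorator); the `RateLimitError` class here is a stand-in for the exception the OpenAI client actually raises, not the real one:

```python
import random
import time
from functools import wraps


class RateLimitError(Exception):
    """Stand-in for the rate-limit/quota error raised by the OpenAI client."""


def retry_with_backoff(max_attempts=5, base_delay=1.0):
    """Retry a flaky call with exponential backoff plus a little jitter."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except RateLimitError:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts: propagate the error
                    # 1x, 2x, 4x, ... the base delay, plus jitter
                    delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
                    time.sleep(delay)
        return wrapper
    return decorator
```

With tenacity the equivalent would be a `@retry(...)` decorator with an exponential wait; the stdlib version above avoids the extra dependency.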
Describe the bug
Keep getting `insufficient_quota` when testing with Google Colab.
To Reproduce
Run the Colab example.
Expected behavior
It is probably rate limiting because it is trying to call the OpenAI API too fast.
Additional context
I think that if `smart_scraper_graph.run()` accepted a parameter to adjust the rate at which it accesses the OpenAI API, it would work fine.
Another approach would be to support open hosted LLMs, like together.ai's llama-3-70b, which may not have this issue.
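As a rough illustration of the throttling idea (the library exposes no such parameter as of this thread, and the `Throttle` class and `min_interval_s` name below are made up for the sketch), calls could be spaced out client-side:

```python
import time


class Throttle:
    """Enforce a minimum interval between successive calls."""

    def __init__(self, min_interval_s=1.0):
        self.min_interval_s = min_interval_s
        self._last = 0.0  # monotonic timestamp of the previous call

    def wait(self):
        """Block until at least min_interval_s has passed since the last call."""
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval_s:
            time.sleep(self.min_interval_s - elapsed)
        self._last = time.monotonic()
```

A caller would then invoke `throttle.wait()` immediately before each `smart_scraper_graph.run()` call to keep the request rate under the OpenAI limit.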