api_key is not present in llm_config or OPENAI_API_KEY env variable for agent ** primary_assistant**. Update your workflow to provide an api_key to use the LLM. #1715
Replies: 6 comments 2 replies
-
Had the same issue.
-
I have the same issue. I ran export OPEN_API_KEY=sk-.... before running autogenstudio, and all of my requests fail the same way. I also can't figure out where the config file is for setting up the LLM (e.g. https://microsoft.github.io/autogen/blog/2023/12/01/AutoGenStudio/#configuring-an-llm-provider).
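Worth double-checking the variable name in the comment above: AutoGen reads OPENAI_API_KEY, while the command shown exports OPEN_API_KEY. A minimal sketch to verify what the process will actually see (the helper name is made up for illustration):

```python
import os

# AutoGen reads OPENAI_API_KEY; OPEN_API_KEY (as in the export above) is ignored.
def check_openai_key(env=None):
    env = os.environ if env is None else env
    key = env.get("OPENAI_API_KEY")
    if not key:
        hint = ""
        if env.get("OPEN_API_KEY"):
            hint = " (found OPEN_API_KEY instead -- likely a typo)"
        raise RuntimeError("OPENAI_API_KEY is not set" + hint)
    return key
```

Running this in the same shell session before launching autogenstudio tells you whether the key is actually visible to child processes.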
-
First of all, make sure you fill in the API key and the other model fields correctly. After filling them in, click the test button to see whether the test passes.
-
I got the same error, and fixed it by editing the OAI_CONFIG_LIST JSON. For me, "test model" also worked, but trying the chat returned the missing-key error.
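For reference, OAI_CONFIG_LIST is a JSON list of model entries, each pairing a model name with its api_key. A minimal sketch of what the edited file should contain and how to spot entries with a missing key (model name and placeholder key are illustrative, not from this thread):

```python
import json

# Illustrative content of an OAI_CONFIG_LIST file: a JSON array of model entries.
RAW = '[{"model": "gpt-4-1106-preview", "api_key": "sk-..."}]'

def missing_keys(raw):
    """Return the model names of entries that lack an api_key."""
    configs = json.loads(raw)
    return [c.get("model") for c in configs if not c.get("api_key")]
```

An empty result from missing_keys means every entry carries a key; any model it returns is a candidate source of the "api_key is not present" error.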
-
I had the same error; the reason was that I had left the initial "primary_assistant" under Workflow → Receiver → groupchat_assistant → Group Chat Agents. My advice is to re-create the agents and workflow from scratch. And by the way, if you are using Windows, replace the export command with set.
-
For people still hitting this issue, especially those just trying to run this locally: scroll down for the fix, or read on for my ramblings about the cause.

The actual problem is the default group chat admin: it is hooked up to the default gpt-4-xxxx-preview model. Even if you delete that model from the model tab, it is still referenced, and sometimes hidden. It took me several passes through the UI to get it to appear; I then had to click the X on the model to unassign it. Since gpt-4-1106-preview needs an OpenAI key, the app complains that the value is not set. I wanted to run this entirely locally and never cared to set my OPENAI_API_KEY, and you don't need to. Beyond that, I also saw the gpt-4-1106-preview model on the primary_assistant agent, even though I had deleted and recreated it myself without assigning it any models.

Fix: in short, delete any references to gpt-4-1106-preview (or whatever default model your version shows). They may not be visible in the UI. The primary culprits are the default group_chat_manager and the primary_assistant, especially the former, since you can't reassign that agent. Try to assign one of your local models to it, and the gpt-4-1106-preview should show up; X that out and save, and the error should stop. No environment variables need to be set, and you can continue fully locally. Naturally, if you want to use any hosted OpenAI models, then you do need to set your API key.

The gpt model even had the audacity to show up on an agent I created myself. There seems to be a bug that auto-assigns that model (even when it's deleted!) to agents without a model, so check all your agents. Better yet, delete and re-create them, making sure the gpt- models are never assigned. The safest approach seems to be assigning every agent a model, even the ones that don't need one.

If the error keeps appearing in a group chat workflow, try deleting and recreating the buggy agent. Last note: you need to recreate any existing workflows for the errors to stop.
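To run fully locally as described above, each agent's llm_config just needs a config_list entry pointing at your local endpoint, with a non-empty api_key. A sketch under assumptions (model name, URL, and placeholder key are all hypothetical values for a typical local OpenAI-compatible server, not taken from this thread):

```python
# Hypothetical llm_config for a local OpenAI-compatible server.
# The api_key only has to be a non-empty string -- no real OpenAI key required.
llm_config = {
    "config_list": [
        {
            "model": "local-model",                  # whatever your local server serves
            "base_url": "http://localhost:1234/v1",  # e.g. an LM Studio / Ollama endpoint
            "api_key": "not-needed",                 # placeholder; must not be empty
        }
    ],
    "temperature": 0,
}

def has_api_key(cfg):
    """Mimic the presence check that triggers the error in this thread."""
    return all(entry.get("api_key") for entry in cfg.get("config_list", []))
```

If has_api_key returns False for any agent's config, that agent is the one producing the "api_key is not present" message.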
-
Running AutoGen Studio 2, I have my model set up and tested successfully. When I run the travel demo I get the following error:
api_key is not present in llm_config or OPENAI_API_KEY env variable for agent ** primary_assistant**. Update your workflow to provide an api_key to use the LLM.
Let me repeat: I have tested the model successfully, and the agent is pointing to the correct model.