
Agent retrieval is not persistent #4512

Open
PietFourie opened this issue May 18, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@PietFourie

What is the issue?

I use the "webscraper" agent to scrape a website. The @ agent icon disappears, and the interface appears to still be talking to the agent, but it is not. When a question is asked about the retrieved web page, the LLM answers (as seen by the icon and the answers) and does not consider the retrieved website or its information. You have to type /exit before the @ icon is shown again.
Basically, you have to formulate everything in the prompt to the agent before running it; after that you lose control over the results. They are not used in any context again.
What I expect to happen is that the agent remains in agent mode, alternative agents are called when necessary, and the retrieved data is considered by the LLM as context. That is necessary because you want to retrieve a web page and then process it further.
You might argue that the other options cover this, but that would not be very user-friendly or practical. You want to have a chat with the agent AND the LLM, especially about information retrieved from the net or from files.
If you propose the latter RAG method, then there is no need for agents in the chat window except to query the database. It just confuses the issue.
Maybe it is as simple as feeding the information retrieved by the agent into the LLM's context window and parsing the next question to decide whether an agent or the LLM is required, for example automatically initialising a storage agent if the next question says "save the result for later processing", etc.
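The routing idea above could be sketched roughly as follows. This is a minimal illustration of the proposed flow, not Ollama's actual API; the `Chat` class, the `route` function, and the "webscraper"/"storage" handlers are hypothetical stand-ins:

```python
# Hypothetical sketch: agent output stays in a shared context, and each
# new question is routed to an agent or to the LLM.

def route(question: str) -> str:
    """Decide which handler should answer the next question."""
    q = question.lower()
    if q.startswith("@webscraper"):
        return "webscraper"
    if "save the result" in q:
        return "storage"
    return "llm"  # default: answer with the LLM, using accumulated context


class Chat:
    def __init__(self) -> None:
        self.context: list[str] = []  # retrieved data persists here

    def ask(self, question: str) -> str:
        handler = route(question)
        if handler == "webscraper":
            page = f"<scraped content for {question}>"  # placeholder scrape
            self.context.append(page)  # keep the result for later questions
            return page
        if handler == "storage":
            return f"saved {len(self.context)} item(s)"
        # The LLM sees the retrieved pages as context instead of losing them.
        return f"LLM answer using {len(self.context)} context item(s)"
```

Usage: after `chat.ask("@webscraper https://example.com")`, a follow-up like `chat.ask("summarise the page")` is routed to the LLM but still has the scraped page in `chat.context`, which is the persistence this issue asks for.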

Just to say...
Thanks for the great program. It is really exceptional. Keep up the good work!

OS

Linux

GPU

Nvidia

CPU

No response

Ollama version

latest

@PietFourie PietFourie added the bug Something isn't working label May 18, 2024