feat: function calling #2175
Conversation
@sebdg thanks for the response! you need to press "save and go back" or ctrl+s in order for your code to actually be saved. this is a remnant from my old project and i'm sure i could make it autosave on key press :) also it seems your code has an error - at runtime it would not give the LLM a response because you have not defined them. if you add those, it'll work (but you may get weird results logged since "param1" and "param2" are nondescript names)
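To illustrate the point about nondescript names, here is a hedged sketch: the `FunctionDef` shape below is hypothetical (not the PR's actual schema), but it shows why descriptive parameter names give the model something to map user intent onto.

```typescript
// Hypothetical function-definition shape; the PR's real schema may differ.
interface FunctionParam {
  name: string;
  description: string;
}

interface FunctionDef {
  name: string;
  description: string;
  params: FunctionParam[];
}

// Nondescript: the model has little to go on when filling arguments.
const vague: FunctionDef = {
  name: "get_weather",
  description: "gets the weather",
  params: [
    { name: "param1", description: "" },
    { name: "param2", description: "" },
  ],
};

// Descriptive: each argument's purpose is clear from its name and description.
const descriptive: FunctionDef = {
  name: "get_weather",
  description: "fetches the current weather for a city",
  params: [
    { name: "city", description: "city name, e.g. London" },
    { name: "units", description: "'metric' or 'imperial'" },
  ],
};
```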
Amazing stuff, I'll review in a bit 🙌
not sure how ready this thing is but i super appreciate it. if you reckon it's ready enough then merging is up to you :)
There might be room for fixes based on this build log of mine: |
im unsure what linting (i assume?) you guys are using, but it seems to be detecting problems in the
@not-nullptr this is excellent! My feedback after using this is that I wish there was a way to render the response from the API in open-webui directly, without passing it through the model first. For example, I have a function that fetches some JSON and formats a response as markdown. |
thanks! though at that point, wouldn't you just want to use a separate program or something? seems a bit unnecessary to run it through open webui |
Having the model is nice for figuring out which endpoint to call and mapping the params but I found that passing the response back through the model resulted in weird behavior (like Phi3 telling me that "a chance of Llamas" is not a real type of weather). Maybe this can be fixed with prompt engineering though so not really in scope? |
i feel that's more of a prompt engineering issue, like you said. wrangling smaller models to output what you want can be tricky though, i get what you mean... |
Building your PR took 30-35 seconds longer than building the
eval is used to actually execute the function being run. all code executed is written by the user, so it's not unsafe. the higher build times are probably due to monaco being added as a dependency, as it isn't very light. this current method is the only way to get typescript intellisense in the browser afaik
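As a minimal sketch of what browser-side execution of user-authored code can look like (illustrative only; the names below are hypothetical and this uses the `Function` constructor, a close cousin of `eval`, rather than the PR's actual code):

```typescript
// Builds a callable from user-written source and invokes it with named params.
// Since the user wrote the code themselves, this is no more dangerous than
// pasting the same code into the browser devtools console.
function runUserFunction(
  source: string,
  paramNames: string[],
  args: unknown[]
): unknown {
  const fn = new Function(...paramNames, source);
  return fn(...args);
}

const result = runUserFunction(
  "return `weather in ${city}: sunny, 21°${units === 'metric' ? 'C' : 'F'}`;",
  ["city", "units"],
  ["London", "metric"]
);
// result: "weather in London: sunny, 21°C"
```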
Does it not work with openAI api, or am I doing something wrong? |
nope, custom method. apologies (though this is a pretty good idea) |
As much as I want to see function calling added quickly, since everyone else uses the openAI API standard it makes sense that it should be implemented here. If it's added later it'll break a bunch of things and the project might be forced to support a deprecated API for a while to come. This way you also gain compatibility with all the programs already out there as well. |
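For reference, the OpenAI API standard mentioned here expresses function definitions as JSON Schema under a `tools` array; a client adopting that convention would emit something like the sketch below (this is the OpenAI-style format, not anything from this PR):

```typescript
// OpenAI-style tool definition: parameters are described with JSON Schema.
const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name, e.g. London" },
          units: { type: "string", enum: ["metric", "imperial"] },
        },
        required: ["city"],
      },
    },
  },
];
```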
all function calling in this PR occurs on the client. |
I agree with @not-nullptr, I'll be taking a look in a bit to try to have this merged for our next release. We can always add an OpenAI API compatible function calling feature later, and have the best of both worlds.
mega appreciate it |
Apologies, I didn't notice. In that case I'd love to see this added! |
I'll be closing this in favour of Pipelines Plugin function calling support, but I might cherrypick some changes here to support browser-end JS function calling later down the line. Thank you for all your hard work, @not-nullptr! I've added you as a co-author for v0.2.0 in recognition of your inspiring contributions :) https://github.com/open-webui/pipelines/blob/main/examples/function_calling/function_calling_filter_pipeline.py |
Pipelines is a more robust system, but I love the feature of having a web-based editor. In the future, could we perhaps get a frontend for pipelines (with an editor), either as part of pipeline deployment or as part of admin settings in webui? It'd streamline testing and deployment!
i am more than happy to implement everything else if someone can get a code editor with completion working in the browser for python. |
Coming soon! #2825 I'll let you know where we could use some help once I set up the scaffolding! (perhaps the reintroduction of JS function calling!)
Pull Request Checklist
- this PR targets the dev branch.
Description
i've implemented function calling, since this is one of the last "big" features relating to LLMs that open-webui has yet to have implemented.
Changelog Entry
Added
- function calling
Fixed
(n/a)
Changed
(n/a)
Removed
(n/a)
Security
(n/a)
Breaking Changes
(n/a)
Additional Information
- this PR also adds `monaco` as a dependency, since functions are written in-the-browser with first-class typescript support. i looked into writing my own editor - this is not an option (for me, at least)
- functions are written to `localStorage`, because i don't know python and i'd rather not write poor code for the backend
- there's still some work to be done (such as adding validation for parameter names - they can only include characters which a javascript variable name could include) but it's functional.
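A rough sketch of what client-side persistence of user functions could look like (the storage key and shapes below are hypothetical, not the PR's actual code; a `Storage`-like interface is used so the logic also runs outside a browser, where in practice `window.localStorage` would be passed in):

```typescript
// Minimal Storage-like interface so the logic is testable outside a browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface StoredFunction {
  name: string;
  source: string;
}

// Hypothetical key; the PR's actual localStorage key may differ.
const STORAGE_KEY = "functions";

function saveFunctions(store: KVStore, fns: StoredFunction[]): void {
  store.setItem(STORAGE_KEY, JSON.stringify(fns));
}

function loadFunctions(store: KVStore): StoredFunction[] {
  const raw = store.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as StoredFunction[]) : [];
}

// In the browser this would be window.localStorage; here, an in-memory stub.
const memory = new Map<string, string>();
const stub: KVStore = {
  getItem: (k) => memory.get(k) ?? null,
  setItem: (k, v) => void memory.set(k, v),
};

saveFunctions(stub, [{ name: "get_weather", source: "return 'sunny';" }]);
```

One trade-off of this approach, as noted above: functions live only in that browser's storage, so they are not shared across devices or users.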
demo:
https://github.com/open-webui/open-webui/assets/62841684/70bcdd8c-d887-43f7-8f7b-54b5ac1dab31