[Bug]: llm_completion_callback ignores prompt keyword argument #13404

Open

Tshimanga opened this issue May 9, 2024 · 2 comments
Labels: bug (Something isn't working) · triage (Issue needs to be triaged/prioritized)

Comments


Tshimanga commented May 9, 2024

Bug Description

LLM classes, such as Ollama, expose a complete method for producing LLM completions, like so:

    @llm_completion_callback()
    def complete(
        self, prompt: str, formatted: bool = False, **kwargs: Any
    ) -> CompletionResponse:
        .
        .
        .
            return CompletionResponse(
                text=text,
                raw=raw,
                additional_kwargs=get_additional_kwargs(raw, ("response",)),
            )

However, the llm_completion_callback wrapper does not handle the prompt when it is passed as a keyword argument.

That is, given

llm = Ollama(model="llama3:instruct")

the following works

llm.complete("some prompt")

whereas,

llm.complete(prompt="some prompt")

raises IndexError: tuple index out of range.

This seems to be because the callback in llm_completion_callback only looks for the prompt among the positional arguments.

If I'm not misunderstanding anything, I'd be happy to contribute a PR for this callback that checks for the prompt in the kwargs before falling back to the positional arguments; a rough sketch is below. Let me know.
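
For illustration only, here is a minimal sketch of what such a check could look like. This is not the actual LlamaIndex wrapper (the dispatcher/callback-manager plumbing is omitted) and the helper name _resolve_prompt is made up:

    from typing import Any, Callable

    def _resolve_prompt(args: tuple, kwargs: dict) -> str:
        # Prefer the keyword form, then fall back to the first positional argument.
        if "prompt" in kwargs:
            return str(kwargs["prompt"])
        if args:
            return str(args[0])
        raise ValueError("complete() was called without a prompt")

    def llm_completion_callback() -> Callable:
        def wrap(f: Callable) -> Callable:
            def wrapped_llm_predict(_self: Any, *args: Any, **kwargs: Any) -> Any:
                # Resolve the prompt from either location instead of using str(args[0]).
                prompt = _resolve_prompt(args, kwargs)
                # ... emit LLMCompletionStartEvent(prompt=prompt, ...) and the other
                # callback events here, as the real wrapper does ...
                return f(_self, *args, **kwargs)

            return wrapped_llm_predict

        return wrap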

Version

0.10.36

Steps to Reproduce

# Imports assumed for this minimal reproduction (llama-index-core 0.10.x).
from typing import Any

from llama_index.core.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback


class MinimalFailingLLM(CustomLLM):

    def stream_complete(self, prompt: str, formatted: bool = False, **kwargs: Any) -> CompletionResponseGen:
        pass

    @property
    def metadata(self) -> LLMMetadata:
        pass

    @llm_completion_callback()
    def complete(
        self, prompt: str, formatted: bool = False, **kwargs: Any
    ) -> CompletionResponse:
        pass


llm = MinimalFailingLLM()
llm.complete(
    prompt="some prompt"
)

Relevant Logs/Tracebacks

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In[14], line 1
----> 1 llm.complete(
      2     prompt="some prompt"
      5 )

File ~/Library/Caches/pypoetry/virtualenvs/reproducer/lib/python3.10/site-packages/llama_index/core/llms/callbacks.py:309, in llm_completion_callback.<locals>.wrap.<locals>.wrapped_llm_predict(_self, *args, **kwargs)
    312 model_dict = _self.to_dict()
    313 model_dict.pop("api_key", None)
    314 dispatcher.event(
    315     LLMCompletionStartEvent(
    316         model_dict=model_dict,
--> 317         prompt=str(args[0]),
    318         additional_kwargs=kwargs,
    319         span_id=span_id,
    320     )
    321 )
    322 event_id = callback_manager.on_event_start(
    323     CBEventType.LLM,
    324     payload={
   (...)
    328     },
    329 )
    331 f_return_val = f(_self, *args, **kwargs)

IndexError: tuple index out of range

dosubot bot commented May 9, 2024

The issue you've encountered with llm_completion_callback ignoring the prompt keyword argument is indeed due to the implementation expecting the prompt to be passed as the first positional argument. This is evident in both the wrapped_async_llm_predict and wrapped_llm_predict functions within the llm_completion_callback wrapper, where args[0] is used to access the prompt. Consequently, passing the prompt as a keyword argument leads to an IndexError because it's not positioned in the args tuple as expected.
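
For what it's worth, the behavior can be reproduced without LlamaIndex at all. The sketch below (demo_wrapper is a made-up name) simply shows that a keyword-passed prompt lands in **kwargs rather than *args, so args[0] raises the same IndexError seen in the traceback above:

    from typing import Any

    def demo_wrapper(*args: Any, **kwargs: Any) -> str:
        # Mirrors the wrapper's pattern of reading the prompt from the first positional argument.
        print(f"args={args!r}, kwargs={kwargs!r}")
        return str(args[0])

    demo_wrapper("some prompt")  # args=('some prompt',), kwargs={} -> works

    try:
        demo_wrapper(prompt="some prompt")  # args=(), kwargs={'prompt': 'some prompt'}
    except IndexError as exc:
        print(f"IndexError: {exc}")  # tuple index out of range, as in the traceback above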

To work around this issue, ensure you call the complete method with the prompt as the first positional argument:

ollama_instance.complete("Your prompt here")

Avoid passing the prompt as a keyword argument to prevent the IndexError:

ollama_instance.complete(prompt="Your prompt here")  # Causes IndexError

Your willingness to contribute a fix for this behavior is commendable. Since I can't accept pull requests, I recommend reaching out directly to the maintainers of the LlamaIndex repository. They can provide guidance on how they'd like to address this issue and whether your proposed solution aligns with their vision for the project.


shivamklr (Contributor) commented

@Tshimanga A fix has been merged. A duplicate issue #13056 is already closed. Let us know if this issue persists.
