
Community Issue: How to customize response handlers? #6220

Open
sophialagerkranspandey opened this issue May 13, 2024 Discussed in #6198 · 0 comments
Discussed in #6198

Originally posted by niltor May 12, 2024

Problems I encountered

When I call Kernel.InvokePromptStreamingAsync(prompt);, I get the following error:

System.ArgumentNullException: Value cannot be null. (Parameter 'value')
         at Azure.AI.OpenAI.ChatRole.op_Implicit(String value)
         at Azure.AI.OpenAI.StreamingChatCompletionsUpdate.DeserializeStreamingChatCompletionsUpdates(JsonElement element)
         at Azure.Core.Sse.SseAsyncEnumerator`1.EnumerateFromSseStream(Stream stream, Func`2 multiElementDeserializer, CancellationToken cancellationToken)+MoveNext()
         at Azure.Core.Sse.SseAsyncEnumerator`1.EnumerateFromSseStream(Stream stream, Func`2 multiElementDeserializer, CancellationToken cancellationToken)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
         at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.GetStreamingChatMessageContentsAsync(ChatHistory chat, PromptExecutionSettings executionSettings, Kernel kernel, CancellationToken cancellationToken)+MoveNext()
         at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.GetStreamingChatMessageContentsAsync(ChatHistory chat, PromptExecutionSettings executionSettings, Kernel kernel, CancellationToken cancellationToken)+MoveNext()
         at Microsoft.SemanticKernel.Connectors.OpenAI.ClientCore.GetStreamingChatMessageContentsAsync(ChatHistory chat, PromptExecutionSettings executionSettings, Kernel kernel, CancellationToken cancellationToken)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
         at Microsoft.SemanticKernel.KernelFunctionFromPrompt.InvokeStreamingCoreAsync[TResult](Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)+MoveNext()
         at Microsoft.SemanticKernel.KernelFunctionFromPrompt.InvokeStreamingCoreAsync[TResult](Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)+MoveNext()
         at Microsoft.SemanticKernel.KernelFunctionFromPrompt.InvokeStreamingCoreAsync[TResult](Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
         at Microsoft.SemanticKernel.KernelFunction.InvokeStreamingAsync[TResult](Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)+MoveNext()
         at Microsoft.SemanticKernel.KernelFunction.InvokeStreamingAsync[TResult](Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)+MoveNext()
         at Microsoft.SemanticKernel.KernelFunction.InvokeStreamingAsync[TResult](Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
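
For context, here is a minimal sketch of the call that hits this path. The endpoint, key, and exact builder overload below are assumptions (they vary by Semantic Kernel version), not a confirmed repro configuration:

```csharp
using Microsoft.SemanticKernel;

// Hypothetical wiring for an OpenAI-compatible DeepSeek endpoint; the host,
// key, and endpoint-routing behavior are placeholders, not verified values.
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "deepseek-chat",
    apiKey: "<api-key>",
    httpClient: new HttpClient { BaseAddress = new Uri("https://example-deepseek-host/v1") });
Kernel kernel = builder.Build();

// Streaming is what reaches the failing code path in the stack trace above.
await foreach (StreamingKernelContent chunk in kernel.InvokePromptStreamingAsync("Hello, how are you"))
{
    Console.Write(chunk);
}
```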

This happens because the LLM I am using returns data like:

data: {"id":"d64e9540-d213-4907-954e-cfc93a840029","choices":[{"delta":{"content":" today","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":6030235,"model":"deepseek-chat","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}

data: {"id":"d64e9540-d213-4907-954e-cfc93a840029","choices":[{"delta":{"content":"?","function_call":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":6030235,"model":"deepseek-chat","object":"chat.completion.chunk","system_fingerprint":null,"usage":null}

data: {"id":"d64e9540-d213-4907-954e-cfc93a840029","choices":[{"delta":{"content":"","function_call":null,"role":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"created":6030235,"model":"DeepSeek-LLM-67B-chat","object":"chat.completion.chunk","system_fingerprint":null,"usage":{"prompt_tokens":16,"completion_tokens":10,"total_tokens":26}}

data: [DONE]

All of the earlier chunks can be parsed successfully. Note the end marker (data: [DONE]) on the last line.

That line is not JSON, so parsing it may return null, and an error may then be thrown when the result is used later.
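
To make the failure concrete: the deltas above also carry an explicit "role":null, and the stack trace shows ChatRole.op_Implicit(String value) throwing ArgumentNullException for parameter 'value'. A tiny SDK-independent illustration of that null propagation (the ThrowIfNull line only mirrors the check the SDK appears to perform):

```csharp
using System.Text.Json;

// The streamed delta declares "role" explicitly as null rather than omitting it.
var delta = JsonDocument.Parse("""{"content":" today","role":null}""").RootElement;

// GetString() returns null for an explicit JSON null...
string? role = delta.GetProperty("role").GetString();

// ...and a non-nullable consumer such as ChatRole's implicit string conversion
// rejects it (hence "Value cannot be null. (Parameter 'value')").
ArgumentNullException.ThrowIfNull(role, nameof(role));
```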

Solution

Would it be possible to customize the parsing process, so that I can apply special handling to the data returned by different language models and normalize it into a unified format?
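
As far as I know there is no official hook for this in the connector today, but one workaround is to give the connector an HttpClient whose DelegatingHandler rewrites the SSE payload before Azure.AI.OpenAI deserializes it. The sketch below is an assumption-laden illustration, not a Semantic Kernel API: the handler name and the naive string patch are hypothetical, and it requires the System.IO.Pipelines package.

```csharp
using System.IO.Pipelines;
using System.Net.Http.Headers;

// Hypothetical handler: rewrites each SSE line so that an explicit
// "role":null never reaches ChatRole's implicit string conversion.
internal sealed class SseNormalizingHandler : DelegatingHandler
{
    public SseNormalizingHandler(HttpMessageHandler inner) : base(inner) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = await base.SendAsync(request, cancellationToken).ConfigureAwait(false);

        // Leave non-streaming responses untouched.
        if (response.Content.Headers.ContentType?.MediaType != "text/event-stream")
        {
            return response;
        }

        var upstream = await response.Content.ReadAsStreamAsync(cancellationToken).ConfigureAwait(false);
        var pipe = new Pipe();

        // Copy the SSE stream line by line, patching the null role as we go,
        // so the consumer still sees an incremental stream.
        _ = Task.Run(async () =>
        {
            using var reader = new StreamReader(upstream);
            await using var writer = new StreamWriter(pipe.Writer.AsStream()) { AutoFlush = true };
            string? line;
            while ((line = await reader.ReadLineAsync()) is not null)
            {
                await writer.WriteLineAsync(
                    line.Replace("\"role\":null", "\"role\":\"assistant\""));
            }
            await pipe.Writer.CompleteAsync();
        }, cancellationToken);

        var patched = new StreamContent(pipe.Reader.AsStream());
        patched.Headers.ContentType = new MediaTypeHeaderValue("text/event-stream");
        response.Content = patched;
        return response;
    }
}
```

Wiring it in (the connector registration does accept an HttpClient, though exact overloads vary by version, and the endpoint below is a placeholder):

```csharp
var httpClient = new HttpClient(new SseNormalizingHandler(new HttpClientHandler()))
{
    BaseAddress = new Uri("https://example-deepseek-host/v1") // hypothetical endpoint
};
builder.AddOpenAIChatCompletion(modelId: "deepseek-chat", apiKey: "<api-key>", httpClient: httpClient);
```

A more robust version would parse each data: payload as JSON and drop or default the null fields instead of string-matching, but the shape of the fix is the same: normalize the stream to what the SDK expects before it reaches the deserializer.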

markwallace-microsoft self-assigned this May 13, 2024
markwallace-microsoft added the bug label and removed the triage label May 13, 2024