
.Net - Bug: OpenAIAssistantAgent class IAutoFunctionInvocationFilter and IPromptRenderFilter do not work #9673

Open
Mano1192 opened this issue Nov 13, 2024 · 1 comment · May be fixed by #9690
Assignees
Labels
agents bug Something isn't working .NET Issue or Pull requests regarding .NET code

Comments

Mano1192 commented Nov 13, 2024

Describe the bug
When a kernel is connected to an assistant, the filters never fire. Filters registered through the kernel builder work fine with other SK agents, but with OpenAIAssistantAgent in particular they are never invoked.

To Reproduce
Steps to reproduce the behavior:

  1. Instantiate an OpenAIAssistantAgent.
  2. Pass it a kernel with filters added during build.
  3. Chat with the agent.
  4. Observe that the kernel never raises the filters, neither during prompt rendering nor when calling tools.

Expected behavior
The filters should function as they do for any other agent in SK.

Instantiation:

public async Task<OpenAIAssistantAgent> CreateNewAgent(string agentName,
    string agentInstructions, string agentDescription, Guid chatId, float? temperature,
    bool enableFileSearch = false, string? vectorStoreId = null)
{
    OpenAIAssistantAgent assistant = await OpenAIAssistantAgent.CreateAsync(
        OpenAIClientProvider.ForOpenAI(new ApiKeyCredential(_openAiApiKey)),
        new OpenAIAssistantDefinition("gpt-4o")
        {
            CodeInterpreterFileIds = null,
            EnableCodeInterpreter = false,
            EnableFileSearch = enableFileSearch,
            EnableJsonResponse = false,
            Metadata = null,
            Temperature = temperature,
            TopP = null,
            VectorStoreId = vectorStoreId,
            ExecutionOptions = null,
            Description = agentDescription,
            Name = agentName,
            Instructions = agentInstructions,
        },
        // CreateKernelGpt4Omni requires the chat id (see the builder below).
        semanticKernelService.CreateKernelGpt4Omni(chatId));

    return assistant;
}

Kernel builder:

public Kernel CreateKernelGpt4Omni(Guid chatId)
{
    IKernelBuilder builder = Kernel.CreateBuilder();
    builder.Services.AddLogging(c => c.SetMinimumLevel(LogLevel.Trace).AddDebug());
    builder.AddOpenAITextEmbeddingGeneration("text-embedding-3-small", _openAiApiKey);
    builder.AddOpenAIChatCompletion("gpt-4o", _openAiApiKey, null, "textGen", _httpClient);

    builder.Services.AddSingleton<IAutoFunctionInvocationFilter>(
        new AutoFunctionInvocationFilter(_logger, chatId));
    builder.Services.AddSingleton<IPromptRenderFilter>(
        new PromptFilterExample(_logger, chatId));

    Kernel kernel = builder.Build();
    return kernel;
}

Prompt Filters:

private class AutoFunctionInvocationFilter(ILogger logger, Guid chatId = default)
            : IAutoFunctionInvocationFilter
        {
            public async Task OnAutoFunctionInvocationAsync(AutoFunctionInvocationContext context,
                Func<AutoFunctionInvocationContext, Task> next)
            {
                try
                {
                    if (context.ChatHistory.Last() is OpenAIChatMessageContent)
                    {
                        OpenAIChatMessageContent messageContent = (OpenAIChatMessageContent) context.ChatHistory.Last();
                        ChatToolCall tool = messageContent.ToolCalls[context.FunctionSequenceIndex];

                        logger.LogWarning("#{ChatID}: Executing a Total of {Number} Functions", chatId,
                            messageContent.ToolCalls.Count);
                        logger.LogWarning("#{ChatID}: Executing Function {Number}:  {Name} ", chatId,
                            context.FunctionSequenceIndex + 1, context.Function.Name);
                        logger.LogWarning("#{ChatID}: Executing Arguments: {Args}", chatId, tool.FunctionArguments);

                        foreach (var toolCall in messageContent.ToolCalls)
                            ToolCallEvent?.Invoke(chatId, toolCall.FunctionName);
                    }
                    else
                    {
                        logger.LogWarning("#{ChatID}: Executing Function {Name} ", chatId, context.Function.Name);
                        logger.LogWarning("#{ChatID}: Executing Arguments: {Args} ", chatId,
                            context.Arguments?.FirstOrDefault().ToString());
                        ToolCallEvent?.Invoke(chatId, context.Function.Name);
                    }
                }
                catch (Exception e)
                {
                    // Console.WriteLine does not expand structured-logging templates; use ILogger instead.
                    logger.LogError(e, "#{ChatID}: Error executing function: {FunctionName}", chatId,
                        context.Function.Name);
                }

                try
                {
                    await next(context);
                    var tokenizer = new OpenAiChatCompletionsTokenizer();
                    var count = tokenizer.CountTokens(context.Result.ToString());
                    if (count > 90000)
                    {
                        context.Result = new FunctionResult(context.Result,
                            "This response is too large to display. Please ask a more specific question.");
                        context.Terminate = true;
                    }
                }
                catch (Exception e)
                {
                    logger.LogError(e, "#{ChatID}: Error executing function: {FunctionName}: StackTrace {Stack}",
                        chatId,
                        context.Function.Name, e.StackTrace);
                    context.Result = new FunctionResult(context.Result, "Function Error. Retry function.");
                }
            }
        }

        private class PromptFilterExample(ILogger logger, Guid chatId = default) : IPromptRenderFilter
        {
            public async Task OnPromptRenderAsync(PromptRenderContext context, Func<PromptRenderContext, Task> next)
            {
                var functionName = context.Function.Name;

                await next(context);

                logger.LogWarning("#{ChatID}: Rendered Prompt:  {Prompt} ", chatId, context.RenderedPrompt);
            }
        }
@Mano1192 Mano1192 added the bug Something isn't working label Nov 13, 2024
@markwallace-microsoft markwallace-microsoft added .NET Issue or Pull requests regarding .NET code triage labels Nov 13, 2024
@github-actions github-actions bot changed the title Bug: .NET OpenAIAssistantAgent class IAutoFunctionInvocationFilter and IPromptRenderFilter do not work .Net: Bug: .NET OpenAIAssistantAgent class IAutoFunctionInvocationFilter and IPromptRenderFilter do not work Nov 13, 2024
@crickman crickman removed the triage label Nov 13, 2024
@crickman crickman moved this to Sprint: In Progress in Semantic Kernel Nov 13, 2024
@crickman crickman self-assigned this Nov 13, 2024
@crickman crickman changed the title .Net: Bug: .NET OpenAIAssistantAgent class IAutoFunctionInvocationFilter and IPromptRenderFilter do not work .Net - Bug: OpenAIAssistantAgent class IAutoFunctionInvocationFilter and IPromptRenderFilter do not work Nov 13, 2024
@crickman (Contributor) commented:

Hi @Mano1192 - I've been able to make progress on exploring this issue.

As we've discussed out-of-band:

  • IFunctionInvocationFilter does currently engage as expected for each function invocation.
  • IPromptRenderFilter is not engaged because neither OpenAIAssistantAgent nor ChatCompletionAgent makes internal use of a prompt function. (The system instructions are rendered as an IPromptTemplate, but filters only engage at the prompt-function level.) Prompt functions that are invoked by the model as part of tool calling will result in filter engagement.
  • IAutoFunctionInvocationFilter engagement is currently driven by each connector and has been overlooked/omitted for OpenAIAssistantAgent. It does engage for ChatCompletionAgent, however. I am currently working to activate this filter for OpenAIAssistantAgent.
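
To illustrate the prompt-function distinction described in the second bullet, here is a minimal sketch (the function name, prompt, and variables `apiKey`, `logger`, and `chatId` are hypothetical, assuming the standard Microsoft.SemanticKernel APIs): a function created from a prompt template does trigger IPromptRenderFilter when it is invoked, because its template is rendered at the prompt-function level.

```csharp
// Sketch: a prompt function invoked directly (or by the model during
// tool calling) renders its template, so IPromptRenderFilter fires.
// "PromptFilterExample" is the filter class shown earlier in this issue.
IKernelBuilder builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion("gpt-4o", apiKey);
builder.Services.AddSingleton<IPromptRenderFilter>(new PromptFilterExample(logger, chatId));
Kernel kernel = builder.Build();

// Hypothetical prompt function; the filter engages when this template is rendered.
KernelFunction summarize = kernel.CreateFunctionFromPrompt("Summarize: {{$input}}");
FunctionResult result = await kernel.InvokeAsync(summarize, new() { ["input"] = "..." });
```

By contrast, the agent's system instructions never pass through a prompt function, so the filter has no hook there.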

One note on your AutoFunctionInvocationFilter class:

  • Instead of context.ChatHistory.Last(), I think you can utilize context.ChatMessageContent. In the assistant case, ChatHistory won't be populated, since querying the entire thread would add non-trivial latency.
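
As a hedged sketch of that suggestion (logging is abbreviated; this is not the exact fix, just the shape of it), the filter could read the tool-call message from context.ChatMessageContent rather than from the chat history:

```csharp
public async Task OnAutoFunctionInvocationAsync(AutoFunctionInvocationContext context,
    Func<AutoFunctionInvocationContext, Task> next)
{
    // ChatMessageContent carries the model message that requested the tool calls,
    // and is populated even when ChatHistory is not (as with OpenAIAssistantAgent).
    if (context.ChatMessageContent is OpenAIChatMessageContent messageContent)
    {
        ChatToolCall tool = messageContent.ToolCalls[context.FunctionSequenceIndex];
        logger.LogWarning("#{ChatID}: Executing Function {Name} with {Args}", chatId,
            context.Function.Name, tool.FunctionArguments);
    }

    await next(context);
}
```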
