langgraph/tutorials/customer-support/customer-support/ #477
Replies: 40 comments 62 replies
-
When I was testing the code of part 4, I found that when the user inputs "Yeah I think I'd like an affordable hotel for my week-long stay (7 days). And I'll want to rent a car.", the following exception occurs:
Maybe the current code does not handle this type of exception. After comparing and verifying, it appears to be related to the following two lines of code in `class Assistant` being repeated one extra time:
Could someone please help confirm this? Thank you.
-
I tried running the code but ran into this error:
TypeError Traceback (most recent call last)
TypeError: unsupported operand type(s) for |: 'type' and 'type'
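A common cause of this error, assuming you are running Python older than 3.10: the tutorial's tool signatures use the PEP 604 union syntax (`date | datetime`), which is only evaluated successfully at runtime from Python 3.10 onward. A minimal sketch of the workaround (`search_flights_stub` is a hypothetical stand-in for the tutorial's signature):

```python
from datetime import date, datetime
from typing import Optional, Union

# On Python < 3.10, evaluating the PEP 604 syntax `date | datetime` at
# runtime raises exactly:
#   TypeError: unsupported operand type(s) for |: 'type' and 'type'
# typing.Union spells the same annotation and works on 3.7+.
StartTime = Optional[Union[date, datetime]]

def search_flights_stub(start_time: StartTime = None):
    # Hypothetical stand-in: returns its argument so the annotation
    # can be exercised.
    return start_time
```

Upgrading to Python 3.10+ also makes the original `date | datetime` spelling work unchanged.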
-
For part 2, when I add the interrupt, the AI does not reply with the details received from the ToolMessage. In fact, the tool message never executes. However, when I don't include the interrupt, the workflow executes as expected: the ToolMessage is returned and the AI can use it for a conclusion.
-
Very interesting tutorial. In the last part, with the supervisor design pattern, some messages push redundant information to the user when the sub-agent delegates the rest of the conversation to the host assistant. I was wondering if making the sub-agents more independent in handling the whole concern (e.g. finishing up everything related to hotel booking without delegating the intermediate steps to the host) would remedy that.
-
Where does "message.id" come from? Traceback (most recent call last): When I print my message above the `if message` check in `_print_event`, it shows: PRINT EVENT MESSAGE:: ('user', {'messages': [SystemMessage(content='You are a secretary responsible for scheduling appointments and answering basic questions about the case\n Use the appropriate tools to look up unavailable dates or frequently asked legal questions, or other information depending on the user's message\n If it is a scheduling request, return 2 to 4 available date options within the time range given by the messages\n Here is your last action; based on it, decide whether to give a final answer or call a tool.\n always think about the next action \n '), HumanMessage(content='When can we talk next week?')]})
-
How can I make this agent interactive, rather than feeding it a list of pre-defined user inputs?
-
In the section where the primary assistant is defined, the `route_primary_assistant` function is missing one output type, 'enter_book_car_rental', in its `Literal` return annotation.
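For context, a router's `Literal` annotation is what LangGraph inspects to infer the conditional edges, so every node name the function can return must appear in it. A minimal sketch with the missing member added (node names assumed from the tutorial):

```python
from typing import Literal

# Sketch (node names assumed from the tutorial): every node the router can
# return must be listed in the Literal, otherwise the corresponding
# conditional edge is missing from the compiled graph.
def route_primary_assistant(
    state: dict,
) -> Literal[
    "enter_update_flight",
    "enter_book_car_rental",   # <- the member reported missing
    "enter_book_hotel",
    "enter_book_excursion",
    "primary_assistant_tools",
    "__end__",
]:
    ...
```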
-
I wonder how this chatbot runs in production. If many people chat at the same time, will the state get mixed, making the chatbot's responses wrong for other people?
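On the state-mixing question: LangGraph keys persisted state by the `thread_id` in the config, so concurrent users stay isolated as long as each conversation uses its own `thread_id`. A minimal sketch of that convention (the `passenger_id` key follows the tutorial; `make_config` is a hypothetical helper):

```python
import uuid

def make_config(passenger_id: str) -> dict:
    # One thread_id per conversation: the checkpointer stores a separate
    # state snapshot per thread, so concurrent chats never share messages.
    return {
        "configurable": {
            "passenger_id": passenger_id,
            "thread_id": str(uuid.uuid4()),
        }
    }

alice = make_config("3442 587242")
bob = make_config("8149 604011")
# graph.invoke({...}, alice) and graph.invoke({...}, bob) would then read
# and write completely separate checkpoints.
```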
-
I was attempting to create the graph in part 4, swapping out the LLM with OllamaFunctions and the relatively new bind_tools() feature. However, when invoking the LLM after it is delegated to another assistant, it keeps throwing the error "Failed to parse response from model". I was wondering if there is any solution for this?
-
I'm interested in creating a graph-based structure that allows direct tool calls without the LLM seeing them. Perhaps having a tool-deciding assistant in the middle of the graph, with one branch for direct calls and one for regular ones. Would love any advice. Example: graph TD
-
When I ask the bot a question where the LLM outputs more than one tool_call object, and they were to be routed to different nodes by some routing function, I get the following error:
It happens because the routing method can only route a message to one node, but sometimes the LLM expects to reach tools in different nodes, and ends up getting an answer from only one tool. For example, in the following route function:

```python
route = tools_condition(state)
if route == END:
    return END
tool_calls = state["messages"][-1].tool_calls
did_cancel = any(tc["name"] == CompleteOrEscalate.__name__ for tc in tool_calls)
if did_cancel:
    return "leave_skill"
tool_names = [t.name for t in book_excursion_safe_tools]
# here it checks whether all tool_calls match tools from the safe node
if all(tc["name"] in tool_names for tc in tool_calls):
    return "book_excursion_safe_tools"
return "book_excursion_sensitive_tools"
```

In this case, if the LLM calls a safe tool and a sensitive tool, the routing function will end up routing to the sensitive tools node. Is there a way to resolve that? Should I prompt the LLM to always output only one tool_call at a time? @hinthornw
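One workaround, sketched below under the assumption that a router can only return a single node: route to the sensitive node whenever any call in the batch is sensitive, and build that ToolNode with the safe tools included, so every tool_call_id in a mixed batch still receives a ToolMessage. Plain name sets stand in for the tutorial's tool lists:

```python
# Hypothetical tool-name sets standing in for the tutorial's tool objects.
SAFE_TOOLS = {"search_trip_recommendations"}
SENSITIVE_TOOLS = {"book_excursion", "update_excursion", "cancel_excursion"}

def route_book_excursion(tool_calls: list) -> str:
    names = {tc["name"] for tc in tool_calls}
    if names <= SAFE_TOOLS:           # every call in the batch is safe
        return "book_excursion_safe_tools"
    # Mixed or fully sensitive batches go to the sensitive node, which
    # would be built as ToolNode(safe_tools + sensitive_tools) so it can
    # answer every tool_call_id in the batch.
    return "book_excursion_sensitive_tools"
```

The trade-off is that a mixed batch then passes through the interrupt intended for sensitive tools only; prompting the model to emit one tool call at a time is the simpler alternative.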
-
When I run part 4 using OpenAI, I get an error:
Any help would be appreciated.
-
Hi all! For part 4, we replace:

```python
graph_builder.add_edge("fetch_user_info", "primary_assistant")
```

with the code below. I'm trying to understand why this is needed. It looks like we can enter the user_info node from what is called a "delegated state" (from the comments), where the state is already initialized. Can anyone explain this to me? Thank you!

```python
graph_builder.add_conditional_edges("fetch_user_info", route_to_workflow)

def route_to_workflow(state: State) -> Literal["primary_assistant",
```
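One way to read it: the graph can be resumed from a checkpoint while a delegated workflow is still active, in which case `dialog_state` is non-empty and `fetch_user_info` must re-enter the active sub-assistant rather than always handing off to the primary assistant. A sketch of that logic (a plain dict stands in for the tutorial's State; sub-assistant names assumed):

```python
# dialog_state acts as a stack of active sub-assistants; empty (or None)
# means the primary assistant is in control.
def route_to_workflow(state: dict) -> str:
    dialog_state = state.get("dialog_state")
    if not dialog_state:
        return "primary_assistant"
    return dialog_state[-1]   # resume whichever assistant was delegated

route_to_workflow({"dialog_state": None})            # fresh conversation
route_to_workflow({"dialog_state": ["book_hotel"]})  # resumed mid-delegation
```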
-
hello,
-
Hi all - I'm trying to solve the issue where the primary assistant calls multiple tools. Right now, when that happens, the code produces the following error:

openai.BadRequestError: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_2HBvFjVImXIiszYJKVvP9xvU", 'type': 'invalid_request_error', 'param': 'messages', 'code': None}}

I think one way to solve this could be to add an additional node between the primary assistant and the four assistant entry nodes, whose job is to loop through all the tool calls that came from the primary assistant (without invoking an LLM). This probably also requires pushing the tool_call_ids to the "stack", which can be done as part of the output of the primary assistant. The "tool looper" can then make sure that a ToolMessage is added for each tool call id we track on the stack. The node used to leave a specialized assistant (pop_dialog_state) would likely have to route to the looper node, and the looper node can route back to the primary assistant once all tool call ids have been satisfied.

I wish I could paste in a slide to visualize the above - let me know if there is a way to do this please. What are your thoughts? Is there an existing/simpler design pattern for this issue? Thank you, JDB
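The looper idea can be sketched without LangChain types; plain dicts stand in for AIMessage/ToolMessage, and the function returns the filler ToolMessages needed so the OpenAI API sees a response for every tool_call_id:

```python
def satisfy_dangling_tool_calls(messages: list) -> list:
    """Return filler tool messages for any tool_call_id with no reply yet."""
    # Find the most recent assistant message that issued tool calls.
    last_ai = next(m for m in reversed(messages) if m.get("tool_calls"))
    # Collect every tool_call_id that already has a tool reply.
    answered = {m["tool_call_id"] for m in messages if m.get("role") == "tool"}
    return [
        {
            "role": "tool",
            "tool_call_id": tc["id"],
            "content": "Handed off to a specialized assistant.",
        }
        for tc in last_ai["tool_calls"]
        if tc["id"] not in answered
    ]
```

In the real graph this node would append the returned messages to state before control moves on, which is what keeps the OpenAI message history valid.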
-
Hi, when I call the graph using

```python
events = graph.stream(
    {"messages": ("user", question)}, config, stream_mode="values"
)
```

or

```python
result = graph.invoke(
    {"messages": ("user", question)}, config
)
```

it looks like I am not passing any conversation history to the LLM.
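If the graph was compiled with a checkpointer, history is loaded automatically from the `thread_id` in `config`, so you only pass the new turn; if there is no checkpointer, or the `thread_id` changes on each call, every invoke starts from scratch. A plain-Python model of that contract (a dict stands in for MemorySaver):

```python
checkpoints = {}   # stands in for the checkpointer's storage

def invoke(new_messages: list, config: dict) -> list:
    thread_id = config["configurable"]["thread_id"]
    history = checkpoints.setdefault(thread_id, [])
    history.extend(new_messages)   # the add_messages reducer appends turns
    return list(history)           # what the LLM would actually see

cfg = {"configurable": {"thread_id": "t1"}}
invoke([("user", "first question")], cfg)
state = invoke([("user", "follow-up")], cfg)   # contains both turns
```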
-
Hi,
-
I can't seem to access the config using ensure_config() within the tools. Is anyone else facing the same issue?
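For what it's worth, ensure_config() reads a context variable that LangChain sets while a runnable is executing; if the tool is invoked outside that context, or the context isn't propagated (e.g. async code on Python < 3.11), it comes back empty. A plain-Python model of that mechanism (the names here are illustrative, not the real LangChain internals):

```python
import contextvars

_config_var = contextvars.ContextVar("config", default={})

def ensure_config_demo() -> dict:
    # Like ensure_config(): returns whatever config the current execution
    # context carries, which is {} when called outside an invocation.
    return _config_var.get()

def run_with_config(fn, config: dict):
    # Models a runnable invocation: set the config for the duration of fn.
    token = _config_var.set(config)
    try:
        return fn()
    finally:
        _config_var.reset(token)

outside = ensure_config_demo()   # called directly: empty config
inside = run_with_config(ensure_config_demo,
                         {"configurable": {"passenger_id": "x"}})
```

The practical fix is usually to make sure the tool only runs via the graph/runnable invocation and that the config (with its "configurable" keys) is passed to invoke/stream.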
-
Hi!
-
Hello! Is there any good documentation available for Part 4 based on the ReAct approach? Thanks!
-
Hey, my graph is similar to:
)
Any help would be appreciated in resolving this error.
-
I have an issue with

```python
@tool
def search_flights(
    departure_airport: Optional[str] = None,
    arrival_airport: Optional[str] = None,
    start_time: Optional[date | datetime] = None,
    end_time: Optional[date | datetime] = None,
    limit: int = 20,
):
```

I am getting the following error:
I replace the
-
There is a duplicated line below:

```python
messages = state["messages"] + [("user", "Respond with a real output.")]
```
-
ValueError: Unexpected message type: content. Use one of 'human', 'user', 'ai', 'assistant', 'function', 'tool', or 'system'.
-
In Part 3, the while loop forgets to call `_print_event(result, _printed)`. Add this before `snapshot = part_2_graph.get_state(config)`, otherwise the AI response after the tool node won't be displayed.
-
Congratulations on this valuable tutorial! Regarding asking the human for confirmation: how would you implement this feature in a web environment, where the client is in a webchat and our agent and tools are in a cloud microservice? Thank you!
-
I am trying to adapt Part 4 to my use case; the main differences are different
-
I'm extending Part 4 to one of my use cases and I'm facing some weird behavior when the user queries something that triggers more than one tool from the primary assistant (the tools used for the conditional branching). Those behaviors are:
I'm clueless on this one.
-
Can someone explain how I can use these specialized workflows in https://github.com/langchain-ai/opengpts?
-
Could you please provide an example of:
-
Build language agents as graphs
https://langchain-ai.github.io/langgraph/tutorials/customer-support/customer-support/