Commit
update agents section
ekzhu committed Nov 10, 2024
1 parent 160f335 commit 42333e5
Showing 2 changed files with 140 additions and 37 deletions.
Original file line number Diff line number Diff line change
@@ -121,7 +121,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The code snippet above introduces two high-level concepts in AgentChat: *Agent* and *Team*. An Agent helps us define what actions are taken when a message is received. Specifically, we use the {py:class}`~autogen_agentchat.agents.AssistantAgent` preset - an agent that can be given tools (functions) that it can then use to address tasks. A Team helps us define the rules for how agents interact with each other. In the {py:class}`~autogen_agentchat.teams.RoundRobinGroupChat` team, agents respond in a sequential round-robin fashion.\n",
"The code snippet above introduces two high-level concepts in AgentChat: *Agent* and *Team*. An Agent helps us define what actions are taken when a message is received. Specifically, we use the {py:class}`~autogen_agentchat.agents.AssistantAgent` preset - an agent that can be given access to a model (e.g., LLM) and tools (functions) that it can then use to address tasks. A Team helps us define the rules for how agents interact with each other. In the {py:class}`~autogen_agentchat.teams.RoundRobinGroupChat` team, agents respond in a sequential round-robin fashion.\n",
"In this case, we have a single agent, so the same agent is used for each round."
]
},
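As an aside, the round-robin turn order described above can be pictured with a dependency-free Python sketch (the agent names `writer` and `critic` are made up for illustration; a real team would use `RoundRobinGroupChat` and actual agents):

```python
from itertools import cycle

# Hypothetical agent names; a round-robin team rotates speakers in
# exactly this fixed, repeating order.
agents = ["writer", "critic"]
turn_order = cycle(agents)

# The first four turns alternate writer -> critic -> writer -> critic.
turns = [next(turn_order) for _ in range(4)]
print(turns)  # ['writer', 'critic', 'writer', 'critic']
```

With a single agent, as in the notebook above, the rotation degenerates to the same agent taking every turn.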
@@ -14,23 +14,28 @@
"\n",
"- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.name`: The unique name of the agent.\n",
"- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.description`: The description of the agent in text.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run`: Run with a given task and produce a {py:class}`~autogen_agentchat.base.TaskResult`.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run_stream`: Run with a given task and produce an iterator of {py:class}`~autogen_agentchat.messages.AgentMessage` that ends with a {py:class}`~autogen_agentchat.base.TaskResult`.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages`: Send the agent a sequence of {py:class}`~autogen_agentchat.messages.ChatMessage` and get a {py:class}`~autogen_agentchat.base.Response` in return.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`: Same as {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` but returns an iterator of {py:class}`~autogen_agentchat.messages.AgentMessage` followed by a {py:class}`~autogen_agentchat.base.Response` as the last item.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.reset`: Reset the agent to its initial state.\n",
"\n",
"See {py:mod}`autogen_agentchat.messages` for more information on AgentChat message types.\n",
"\n",
"\n",
"## Assistant Agent\n",
"\n",
"{py:class}`~autogen_agentchat.agents.AssistantAgent` is a built-in agent that\n",
"is using the ReAct pattern to generate responses with ability to use tools."
"uses a language model and has the ability to use tools."
]
},
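The interface listed above (a named, described agent with an async `on_messages` and a `reset`) can be sketched without any AgentChat imports. The toy `EchoAgent` below is illustrative only and is not the real API; a real agent would subclass `BaseChatAgent` and return a `Response`:

```python
import asyncio


class EchoAgent:
    """Toy stand-in for the agent interface described above."""

    def __init__(self, name: str, description: str) -> None:
        self.name = name
        self.description = description

    async def on_messages(self, messages: list[str]) -> str:
        # A real agent would call a model here; we just echo the last message.
        return f"echo: {messages[-1]}"

    async def reset(self) -> None:
        # A real agent would clear its conversation state here.
        pass


agent = EchoAgent("echo", "Repeats the last message received.")
print(asyncio.run(agent.on_messages(["hello"])))  # echo: hello
```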
{
"cell_type": "code",
"execution_count": null,
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.messages import TextMessage\n",
"from autogen_core.base import CancellationToken\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"\n",
@@ -57,28 +62,32 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can call the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run` \n",
"method to run the agent with a given task."
"We can call the {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages` \n",
"method to get the agent to respond to a message."
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"TaskResult(messages=[TextMessage(source='user', models_usage=None, content='Find information on AutoGen'), ToolCallMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=82, completion_tokens=15), content=[FunctionCall(id='call_fNG1y1aEfDOVDMwWLGE83YXk', arguments='{\"query\":\"AutoGen\"}', name='web_search')]), ToolCallResultMessage(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_fNG1y1aEfDOVDMwWLGE83YXk')]), TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=92, completion_tokens=14), content='AutoGen is a programming framework designed for creating multi-agent applications.')], stop_reason=None)\n"
"[ToolCallMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=61, completion_tokens=15), content=[FunctionCall(id='call_hqVC7UJUPhKaiJwgVKkg66ak', arguments='{\"query\":\"AutoGen\"}', name='web_search')]), ToolCallResultMessage(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_hqVC7UJUPhKaiJwgVKkg66ak')])]\n",
"source='assistant' models_usage=RequestUsage(prompt_tokens=92, completion_tokens=14) content='AutoGen is a programming framework designed for building multi-agent applications.'\n"
]
}
],
"source": [
"async def assistant_run() -> None:\n",
" # Run the agent with a given task.\n",
" result = await agent.run(task=\"Find information on AutoGen\")\n",
" print(result)\n",
" response = await agent.on_messages(\n",
" [TextMessage(content=\"Find information on AutoGen\", source=\"user\")],\n",
" cancellation_token=CancellationToken(),\n",
" )\n",
" print(response.inner_messages)\n",
" print(response.chat_message)\n",
"\n",
"\n",
"# Use asyncio.run(assistant_run()) when running in a script.\n",
@@ -89,37 +98,39 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The call to the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run` method\n",
"returns a {py:class}`~autogen_agentchat.base.TaskResult` with a list of messages\n",
"generated by the agent.\n",
"The call to the {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages` method\n",
"returns a {py:class}`~autogen_agentchat.base.Response`\n",
"that contains the agent's final response in the {py:attr}`~autogen_agentchat.base.Response.chat_message` attribute,\n",
"as well as a list of inner messages in the {py:attr}`~autogen_agentchat.base.Response.inner_messages` attribute,\n",
"which stores the agent's \"thought process\" that led to the final response.\n",
"\n",
"### Stream Messages\n",
"\n",
"We can also stream each message as it is generated by the agent by using the\n",
"{py:meth}`~autogen_agentchat.agents.BaseChatAgent.run_stream` method."
"{py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages_stream` method."
]
},
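Conceptually, a `Response` pairs the final message with the intermediate ones. A rough dataclass sketch (field names match the attributes described above; the simplified `str` types are an assumption for illustration, the real fields hold message objects):

```python
from dataclasses import dataclass, field


@dataclass
class Response:
    # The final message presented to the caller (Response.chat_message).
    chat_message: str
    # The intermediate "thought process" messages (Response.inner_messages).
    inner_messages: list[str] = field(default_factory=list)


r = Response(
    chat_message="AutoGen is a programming framework.",
    inner_messages=["tool call", "tool result"],
)
print(r.chat_message)   # AutoGen is a programming framework.
print(r.inner_messages) # ['tool call', 'tool result']
```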
{
"cell_type": "code",
"execution_count": null,
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"source='user' models_usage=None content='Find information on AutoGen'\n",
"source='assistant' models_usage=RequestUsage(prompt_tokens=82, completion_tokens=15) content=[FunctionCall(id='call_0M1jNwCWR6sTnCxI4FC1Y1KA', arguments='{\"query\":\"AutoGen\"}', name='web_search')]\n",
"source='assistant' models_usage=None content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_0M1jNwCWR6sTnCxI4FC1Y1KA')]\n",
"source='assistant' models_usage=RequestUsage(prompt_tokens=92, completion_tokens=16) content='AutoGen is a programming framework designed for the development of multi-agent applications.'\n",
"TaskResult(messages=[TextMessage(source='user', models_usage=None, content='Find information on AutoGen'), ToolCallMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=82, completion_tokens=15), content=[FunctionCall(id='call_0M1jNwCWR6sTnCxI4FC1Y1KA', arguments='{\"query\":\"AutoGen\"}', name='web_search')]), ToolCallResultMessage(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_0M1jNwCWR6sTnCxI4FC1Y1KA')]), TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=92, completion_tokens=16), content='AutoGen is a programming framework designed for the development of multi-agent applications.')], stop_reason=None)\n"
"source='assistant' models_usage=RequestUsage(prompt_tokens=61, completion_tokens=15) content=[FunctionCall(id='call_fXhM4PeZsodhhUOlNiFkoBXF', arguments='{\"query\":\"AutoGen\"}', name='web_search')]\n",
"source='assistant' models_usage=None content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_fXhM4PeZsodhhUOlNiFkoBXF')]\n",
"Response(chat_message=TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=92, completion_tokens=31), content='AutoGen is a programming framework designed for building multi-agent applications. If you need more specific information about its features or usage, feel free to ask!'), inner_messages=[ToolCallMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=61, completion_tokens=15), content=[FunctionCall(id='call_fXhM4PeZsodhhUOlNiFkoBXF', arguments='{\"query\":\"AutoGen\"}', name='web_search')]), ToolCallResultMessage(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_fXhM4PeZsodhhUOlNiFkoBXF')])])\n"
]
}
],
"source": [
"async def assistant_run_stream() -> None:\n",
" # Run the agent with a given task and stream the response.\n",
" async for message in agent.run_stream(task=\"Find information on AutoGen\"):\n",
" async for message in agent.on_messages_stream(\n",
" [TextMessage(content=\"Find information on AutoGen\", source=\"user\")],\n",
" cancellation_token=CancellationToken(),\n",
" ):\n",
" print(message)\n",
"\n",
"\n",
@@ -131,9 +142,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run_stream` method\n",
"returns an asynchronous generator that yields each message generated by the agent,\n",
"and the final task result as the last item.\n",
"The {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages_stream` method\n",
"returns an asynchronous generator that yields each inner message generated by the agent,\n",
"and yields a final {py:class}`~autogen_agentchat.base.Response` as the last item, with the final response message in its {py:attr}`~autogen_agentchat.base.Response.chat_message` attribute.\n",
"\n",
"From the messages, you can see the assistant agent used the `web_search` tool to\n",
"search for information and responded using the search results.\n",
@@ -221,13 +232,16 @@
" code_executor_agent = CodeExecutorAgent(\"code_executor\", code_executor=code_executor)\n",
"\n",
" # Run the agent with a given code snippet.\n",
" task = \"\"\"Here is some code\n",
" task = TextMessage(\n",
" content=\"\"\"Here is some code\n",
"```python\n",
"print('Hello world')\n",
"```\n",
"\"\"\"\n",
" code_execution_result = await code_executor_agent.run(task=task)\n",
" print(code_execution_result.messages[-1])\n",
"\"\"\",\n",
" source=\"user\",\n",
" )\n",
" response = await code_executor_agent.on_messages([task], CancellationToken())\n",
" print(response.chat_message)\n",
"\n",
" # Stop the code executor.\n",
" await code_executor.stop()\n",
@@ -259,14 +273,104 @@
"\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages`: The abstract method that defines the behavior of the agent in response to messages. This method is called when the agent is asked to provide a response in {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run`. It returns a {py:class}`~autogen_agentchat.base.Response` object.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.reset`: The abstract method that resets the agent to its initial state. This method is called when the agent is asked to reset itself.\n",
"- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.produced_message_types`: The list of possible message types the agent can produce in its response.\n",
"- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.produced_message_types`: The list of possible {py:class}`~autogen_agentchat.messages.ChatMessage` message types the agent can produce in its response.\n",
"\n",
"Optionally, you can implement the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream` method to stream messages as they are generated by the agent. If this method is not implemented, the agent\n",
"uses the default implementation of {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`\n",
"that calls the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method and\n",
"yields all messages in the response.\n",
"yields all messages in the response."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### CountDownAgent\n",
"\n",
"In this example, we create a simple agent that counts down from a given number to one,\n",
"and produces a stream of messages with the current count."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"3...\n",
"2...\n",
"1...\n",
"Done!\n"
]
}
],
"source": [
"from typing import AsyncGenerator, List, Sequence\n",
"\n",
"from autogen_agentchat.agents import BaseChatAgent\n",
"from autogen_agentchat.base import Response\n",
"from autogen_agentchat.messages import AgentMessage, ChatMessage, TextMessage\n",
"from autogen_core.base import CancellationToken\n",
"\n",
"\n",
"class CountDownAgent(BaseChatAgent):\n",
" def __init__(self, name: str, count: int = 3):\n",
" super().__init__(name, \"A simple agent that counts down.\")\n",
" self._count = count\n",
"\n",
" @property\n",
" def produced_message_types(self) -> List[type[ChatMessage]]:\n",
" return [TextMessage]\n",
"\n",
" async def on_messages(\n",
" self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken\n",
"    ) -> Response:\n",
" # Calls the on_messages_stream.\n",
" response: Response | None = None\n",
" async for message in self.on_messages_stream(messages, cancellation_token):\n",
" if isinstance(message, Response):\n",
" response = message\n",
" assert response is not None\n",
" return response\n",
"\n",
" async def on_messages_stream(\n",
" self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken\n",
" ) -> AsyncGenerator[AgentMessage | Response, None]:\n",
"        inner_messages: List[AgentMessage] = []\n",
" for i in range(self._count, 0, -1):\n",
" msg = TextMessage(content=f\"{i}...\", source=self.name)\n",
" inner_messages.append(msg)\n",
" yield msg\n",
" # The response is returned at the end of the stream.\n",
" # It contains the final message and all the inner messages.\n",
" yield Response(chat_message=TextMessage(content=\"Done!\", source=self.name), inner_messages=inner_messages)\n",
"\n",
" async def reset(self, cancellation_token: CancellationToken) -> None:\n",
" pass\n",
"\n",
"\n",
"async def run_countdown_agent() -> None:\n",
" # Create a countdown agent.\n",
" countdown_agent = CountDownAgent(\"countdown\")\n",
"\n",
" # Run the agent with a given task and stream the response.\n",
" async for message in countdown_agent.on_messages_stream([], CancellationToken()):\n",
" if isinstance(message, Response):\n",
" print(message.chat_message.content)\n",
" else:\n",
" print(message.content)\n",
"\n",
"\n",
"# Use asyncio.run(run_countdown_agent()) when running in a script.\n",
"await run_countdown_agent()"
]
},
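The yield-inner-messages-then-`Response` contract that `CountDownAgent` follows is a general async-generator pattern. Here is a dependency-free sketch of the same idea (the `Done` sentinel stands in for the final `Response`; all names are illustrative):

```python
import asyncio
from typing import AsyncGenerator, Union


class Done:
    """Sentinel playing the role of the final Response in the stream."""

    content = "Done!"


async def count_down(n: int) -> AsyncGenerator[Union[str, Done], None]:
    # Yield intermediate items first, then the final result last,
    # mirroring the on_messages_stream contract shown above.
    for i in range(n, 0, -1):
        yield f"{i}..."
    yield Done()


async def main() -> list:
    seen = []
    async for item in count_down(3):
        # Consumers branch on the item type to find the final result.
        seen.append(item.content if isinstance(item, Done) else item)
    return seen


print(asyncio.run(main()))  # ['3...', '2...', '1...', 'Done!']
```

The consumer-side `isinstance` check is the same one the notebook's `run_countdown_agent` uses to separate streamed messages from the terminating `Response`.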
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### UserProxyAgent\n",
"\n",
"A common use case for building a custom agent is to create an agent that acts as a proxy for the user.\n",
@@ -284,7 +388,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"source='user_proxy_agent' models_usage=None content='I am glad to be here.'\n"
"I am glad to be here.\n"
]
}
],
@@ -297,7 +401,6 @@
"from autogen_agentchat.messages import (\n",
" ChatMessage,\n",
" StopMessage,\n",
" TextMessage,\n",
")\n",
"from autogen_core.base import CancellationToken\n",
"\n",
@@ -322,8 +425,8 @@
"\n",
"async def run_user_proxy_agent() -> None:\n",
" user_proxy_agent = UserProxyAgent(name=\"user_proxy_agent\")\n",
" user_proxy_agent_result = await user_proxy_agent.run(task=\"What's your thought?\")\n",
" print(user_proxy_agent_result.messages[-1])\n",
" response = await user_proxy_agent.on_messages([], CancellationToken())\n",
" print(response.chat_message.content)\n",
"\n",
"\n",
"# Use asyncio.run(run_user_proxy_agent()) when running in a script.\n",
