release notes
Published 4/22/2025
Contains new features
You can use AgentTool and TeamTool to wrap an agent or a team into a tool that can be used by other agents.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.tools import AgentTool
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4")
    writer = AssistantAgent(
        name="writer",
        description="A writer agent for generating text.",
        model_client=model_client,
        system_message="Write well.",
    )
    writer_tool = AgentTool(agent=writer)
    assistant = AssistantAgent(
        name="assistant",
        model_client=model_client,
        tools=[writer_tool],
        system_message="You are a helpful assistant.",
    )
    await Console(assistant.run_stream(task="Write a poem about the sea."))


asyncio.run(main())
See AgentChat Tools API for more information.
Introducing an adapter for the Azure AI Agent, with support for file search, code interpreter, and more. See our Azure AI Agent Extension API.
Thinking about sandboxing your local Jupyter execution environment? We just added a new code executor to our family of code executors. See Docker Jupyter Code Executor Extension API.
Shared "whiteboard" memory can be useful for agents to collaborate on a common artifact such as code, a document, or an illustration. Canvas Memory is an experimental extension for sharing memory and exposing tools for agents to operate on the shared memory.
Updated links to new community extensions. Notably, autogen-contextplus provides advanced model context implementations with the ability to automatically summarize and truncate the model context used by agents.
autogen-oaiapi and autogen-contextplus by @SongChiYoung in https://github.com/microsoft/autogen/pull/6338
SelectorGroupChat Update
SelectorGroupChat now works with models that only support streaming mode (e.g., QwQ). It can also optionally emit the inner reasoning of the model used in the selector. Set emit_team_events=True and model_client_streaming=True when creating SelectorGroupChat.
CodeExecutorAgent Update
CodeExecutorAgent just got another refresh: it now supports the max_retries_on_error parameter. You can specify how many times it can retry and self-debug if an error occurs during code execution.
CodeExecutionAgent by @Ethan0456 in https://github.com/microsoft/autogen/pull/6306
ModelInfo Update
multiple_system_message on model_info by @SongChiYoung in https://github.com/microsoft/autogen/pull/6327
startswith("gemini-") by @SongChiYoung in https://github.com/microsoft/autogen/pull/6345
Full Changelog: https://github.com/microsoft/autogen/compare/python-v0.5.3...python-v0.5.4