AutoGen v0.4.3 release notes
A programming framework for agentic AI
Published 1/22/2025
Patch release, contains new features.
This is the first release since 0.4.0 with significant new features! We look forward to hearing feedback and suggestions from the community.
One of the big missing features from 0.2 was the ability to seamlessly cache model client completions. This release adds ChatCompletionCache, which can wrap any other ChatCompletionClient and cache completions.
There is a CacheStore interface to allow for easy implementation of new caching backends. The currently available implementations include the diskcache-backed DiskCacheStore, shown in the example below.
import asyncio
import tempfile

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.models.cache import ChatCompletionCache, CHAT_CACHE_VALUE_TYPE
from autogen_ext.cache_store.diskcache import DiskCacheStore
from diskcache import Cache


async def main():
    with tempfile.TemporaryDirectory() as tmpdirname:
        # Initialize the original client and wrap it with a disk-backed cache.
        openai_model_client = OpenAIChatCompletionClient(model="gpt-4o")
        cache_store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache(tmpdirname))
        cache_client = ChatCompletionCache(openai_model_client, cache_store)

        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print response from OpenAI

        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print cached response


asyncio.run(main())
ChatCompletionCache is not yet supported by the declarative component config; see the issue to track progress.
This release adds support for GraphRAG as a tool that agents can call. You can find a sample showing how to use this integration here, along with docs for LocalSearchTool and GlobalSearchTool.
import asyncio

from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.ui import Console
from autogen_ext.tools.graphrag import GlobalSearchTool
from autogen_agentchat.agents import AssistantAgent


async def main():
    # Initialize the OpenAI client
    openai_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
    )

    # Set up global search tool
    global_tool = GlobalSearchTool.from_settings(settings_path="./settings.yaml")

    # Create assistant agent with the global search tool
    assistant_agent = AssistantAgent(
        name="search_assistant",
        tools=[global_tool],
        model_client=openai_client,
        system_message=(
            "You are a tool selector AI assistant using the GraphRAG framework. "
            "Your primary task is to determine the appropriate search tool to call based on the user's query. "
            "For broader, abstract questions requiring a comprehensive understanding of the dataset, call the 'global_search' function."
        ),
    )

    # Run a sample query
    query = "What is the overall sentiment of the community reports?"
    await Console(assistant_agent.run_stream(task=query))


if __name__ == "__main__":
    asyncio.run(main())
#4612 by @lspinheiro
Semantic Kernel has an extensive collection of AI connectors. In this release we added support for adapting a Semantic Kernel AI connector to an AutoGen ChatCompletionClient using the SKChatCompletionAdapter.
Currently this requires passing the kernel during create(), so it cannot be used with AssistantAgent directly yet. This will be fixed in a future release (#5144).
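Here's a minimal sketch of the adapter in use. The connector setup is illustrative (it assumes Semantic Kernel's OpenAIChatCompletion connector and an OPENAI_API_KEY in the environment); the key point is that the kernel is passed through extra_create_args at create() time:

import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from autogen_core.models import UserMessage
from autogen_ext.models.semantic_kernel import SKChatCompletionAdapter


async def main():
    # Wrap a Semantic Kernel connector as an AutoGen ChatCompletionClient.
    sk_client = OpenAIChatCompletion(ai_model_id="gpt-4o-mini")  # illustrative connector choice
    adapter = SKChatCompletionAdapter(sk_client)

    # For now the kernel must be supplied per call (see the note above).
    response = await adapter.create(
        [UserMessage(content="Hello!", source="user")],
        extra_create_args={"kernel": Kernel()},
    )
    print(response.content)


asyncio.run(main())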
#4851 by @lspinheiro
We also added a tool adapter in the other direction: KernelFunctionFromTool allows AutoGen tools to be added to a Semantic Kernel Kernel, as sketched below.
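The KernelFunctionFromTool constructor arguments and the Kernel registration call below are assumptions based on the Semantic Kernel API, so treat this as illustrative rather than definitive:

from autogen_core.tools import FunctionTool
from autogen_ext.tools.semantic_kernel import KernelFunctionFromTool
from semantic_kernel import Kernel


def add(x: int, y: int) -> int:
    """Add two integers."""
    return x + y


# An ordinary AutoGen tool built from a Python function...
add_tool = FunctionTool(add, description="Add two integers.")

# ...wrapped so Semantic Kernel can invoke it, then registered on a Kernel.
kernel = Kernel()
kernel_function = KernelFunctionFromTool(add_tool)  # constructor args assumed
kernel.add_function(plugin_name="autogen_tools", function=kernel_function)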
#4851 by @lspinheiro
This release also brings forward the Jupyter code executor functionality that we had in 0.2, as the JupyterCodeExecutor.
Please note that this currently only supports local execution and should be used with caution.
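A minimal usage sketch, assuming the executor lives in autogen_ext.code_executors.jupyter, supports async context management, and that the local Jupyter dependencies are installed:

import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.jupyter import JupyterCodeExecutor


async def main():
    # Starts a local Jupyter kernel; code runs on this machine, so use with caution.
    async with JupyterCodeExecutor() as executor:
        result = await executor.execute_code_blocks(
            [CodeBlock(language="python", code="print('Hello from Jupyter')")],
            cancellation_token=CancellationToken(),
        )
        print(result.output)


asyncio.run(main())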
It's still early, but we merged the interface for agent memory in this release. This allows agents to enrich their context from a memory store and save information to it. The interface is defined in core, and AssistantAgent in agentchat now accepts memory as a parameter. There is an initial example memory implementation which simply injects all memories as system messages for the agent. The intention is for the memory interface to be usable for both RAG and agent memory systems going forward.
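Here's a minimal sketch of the new parameter in action; it assumes the example implementation is ListMemory in autogen_core.memory, so treat the exact names as assumptions:

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    # Save a fact to the example list-backed memory store.
    memory = ListMemory()
    await memory.add(MemoryContent(content="The user prefers metric units.", mime_type=MemoryMimeType.TEXT))

    # AssistantAgent now accepts memory; stored entries are injected into the model context.
    agent = AssistantAgent(
        name="assistant",
        model_client=OpenAIChatCompletionClient(model="gpt-4o-mini"),
        memory=[memory],
    )
    result = await agent.run(task="How warm is 300 K?")
    print(result.messages[-1].content)


asyncio.run(main())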
Memory interface, AssistantAgent with new memory parameter
#4438 by @victordibia, #5053 by @ekzhu
We're continuing to expand support for declarative configs throughout the framework. In this release, we've added support for termination conditions and base chat agents. Once we're done with this, you'll be able to configure an entire team of agents with a single config file and have it work seamlessly with AutoGen Studio. Stay tuned! A sketch of what this enables follows below.
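For example, a termination condition can be serialized to a component config and loaded back. This assumes the dump_component/load_component methods of the component framework apply to termination conditions in this release:

from autogen_agentchat.conditions import MaxMessageTermination

# Serialize a termination condition to a declarative component config...
condition = MaxMessageTermination(max_messages=5)
config = condition.dump_component()
print(config.model_dump_json())  # the config is a pydantic model, so it serializes to JSON

# ...and reconstruct an equivalent condition from the config later.
restored = MaxMessageTermination.load_component(config)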
#4984, #5055 by @victordibia
Full Changelog: https://github.com/microsoft/autogen/compare/v0.4.1...v0.4.3