Build agents with long-term, persistent memory using Letta to manage and edit context efficiently.
Instructors: Charles Packer, Sarah Wooders
Learn how an LLM agent can act as an operating system to manage memory, autonomously optimizing context use.
Apply memory management to create adaptive, collaborative AI agents for real-world tasks like research and HR.
Learn how to build agentic memory into your applications in this short course, LLMs as Operating Systems: Agent Memory, created in partnership with Letta, and taught by its founders Charles Packer and Sarah Wooders.
An LLM can draw on any information placed in its input context window, but that window has limited space, and longer contexts cost more and are slower to process. Deciding what goes into the context window, and when, therefore becomes critical.
In the MemGPT research paper, “MemGPT: Towards LLMs as Operating Systems,” Charles, Sarah, and their co-authors proposed using an LLM agent to manage the context window itself, building a memory-management system that gives applications managed, persistent memory.
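The core idea can be sketched in a few lines of Python. A MemGPT-style agent exposes memory-editing tools that the LLM can call to rewrite blocks of its own persistent context, and those blocks are re-inserted into every prompt. The class and function names below are illustrative, not Letta’s actual API; this is a minimal conceptual sketch of self-editing memory, not the course’s implementation.

```python
# Conceptual sketch of MemGPT-style self-editing memory (names are illustrative,
# not Letta's actual API). The agent exposes "memory tools" the LLM can call;
# whatever the LLM writes into a block persists and is prepended to every prompt.

class MemoryBlock:
    def __init__(self, label: str, value: str, limit: int = 2000):
        self.label = label
        self.value = value
        self.limit = limit  # character budget this block may occupy in the context window

    def append(self, text: str) -> None:
        """Tool the LLM can call to record a new fact in its in-context memory."""
        if len(self.value) + len(text) > self.limit:
            raise ValueError("memory block full; summarize or evict to external storage first")
        self.value += "\n" + text

    def replace(self, old: str, new: str) -> None:
        """Tool the LLM can call to correct or update an existing fact."""
        self.value = self.value.replace(old, new)


def build_prompt(blocks: list[MemoryBlock], user_message: str) -> str:
    """Re-insert the (possibly edited) memory blocks into every request."""
    memory_section = "\n".join(f"<{b.label}>\n{b.value}\n</{b.label}>" for b in blocks)
    return f"{memory_section}\n\nUser: {user_message}"


# Example: the agent updates what it knows about the user, then builds the next prompt.
human = MemoryBlock("human", "Name: Sarah. Role: unknown.")
human.replace("Role: unknown.", "Role: HR analyst.")
print(build_prompt([human], "Draft an onboarding checklist."))
```

In MemGPT, analogous operations are exposed as tools (such as core_memory_append and core_memory_replace) that the model calls autonomously whenever it decides its memory should change.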
Examples of managing agent memory include:
In this course, you’ll learn:
By the end of this course, you will have the tools to build LLM applications that can leverage virtual context, extending memory beyond the finite context window of LLMs.
This course is for anyone with basic Python skills who is curious about how autonomous agents can manage their own memory.
Introduction
Editable memory
Understanding MemGPT
Building Agents with Memory
Programming Agent Memory
Agentic RAG and External Memory
Multi-agent Orchestration
Conclusion
Course access is free for a limited time during the DeepLearning.AI learning platform beta!