Welcome to Long-Term Agentic Memory with LangGraph, built in partnership with LangChain. I'm delighted to welcome back Harrison Chase, co-founder and CEO of LangChain. Thanks, Andrew. Recently we've seen many agentic applications being built, and that has helped us develop a mental framework for thinking about when and how to add memory to agents. We would like to share that framework with learners.

More and more AI applications persist over time, and this really drives the need for agent memory. An example would be an AI personal assistant. Exactly. Assistants are a great example: the more they learn, the better they are at future tasks.

To add memory to an agent, you must first figure out what information to store in long-term memory and, when it comes time to use that information, what to retrieve. First, on what to store: chatbots initially just saw the conversation history in the context window at each turn of a conversation, but agents that act for you over time need long-term memory. For example, a calendar agent might need to persist information about meetings over long periods and across multiple invocations of the agent. Then comes retrieval: retrieval takes information from memory and inserts it into the context. Harrison will show you how to figure out when and what to retrieve. Additionally, you need to decide when to update the stored information: should it be updated on each iteration of the agent loop, or in the background over time?

Right. To address these questions, we've found it useful to think about three types of memory. Semantic memories are facts, like important birthdays for a calendar agent. Episodic memories are experiences that can help an agent remember how to do tasks. And finally, procedural memories are rules for an agent to follow.

To help manage memory, we've created a new library, LangMem, which supports a vector database providing searchable, shareable, persistent storage that can be updated immediately by the agent or in the background by a helper agent. In this course, you'll build a useful email assistant that demonstrates all of these concepts using LangMem. A short sketch of what storing and retrieving such memories can look like follows this introduction.

Several people have worked to create this course. I'd like to thank, from LangChain, Lance Martin, Will Fu-Hinthorn, and Nuno Campos, and from DeepLearning.AI, Geoff Ladwig. All right, let's get started on the first lesson.
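For a concrete sense of the "store now, retrieve later" idea described above, here is a minimal sketch using LangGraph's in-memory store. This is not the course's code or LangMem's API; the namespace, key, and fact are invented for illustration, and a real deployment would use a persistent, embedding-indexed store rather than InMemoryStore.

```python
# Minimal sketch: persisting a "semantic" memory (a fact) and retrieving it
# on a later invocation of the agent. Names and values are illustrative only.
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()

# Write: store a fact under a per-user namespace so it survives across turns.
namespace = ("user_123", "facts")
store.put(namespace, "jim_birthday", {"fact": "Jim's birthday is March 14."})

# Read: on a later run, fetch the fact and insert it into the agent's context.
item = store.get(namespace, "jim_birthday")
if item is not None:
    print(item.value["fact"])

# Or list everything stored for this user, e.g. to build a memory section
# of the system prompt.
for memory in store.search(namespace):
    print(memory.key, memory.value)
```

Stores like this can also be configured with an embedding index so that search over a namespace retrieves memories by semantic similarity to a query rather than by exact key, which is the searchable, vector-backed storage the introduction refers to.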