DeepLearning.AI
AI is the new electricity and will transform and improve nearly all areas of human lives.

Quick Guide & Tips

💻   Accessing Utils File and Helper Functions

In each notebook on the top menu:

1:   Click on "File"

2:   Then, click on "Open"

You will see, in the left sidebar, all the notebook files for the lesson, including any helper functions used in the notebook. See the following image for the steps above.


💻   Downloading Notebooks

In each notebook on the top menu:

1:   Click on "File"

2:   Then, click on "Download as"

3:   Then, click on "Notebook (.ipynb)"


💻   Uploading Your Files

After following the steps shown in the previous section ("File" => "Open"), click on the "Upload" button to upload your files.


📗   See Your Progress

Once you enroll in this course—or any other short course on the DeepLearning.AI platform—and open it, you can click on 'My Learning' at the top right corner of the desktop view. There, you will be able to see all the short courses you have enrolled in and your progress in each one.

Additionally, your progress in each short course is displayed at the bottom-left corner of the learning page for each course (desktop view).


📱   Features to Use

🎞   Adjust Video Speed: Click on the gear icon (⚙) on the video and then from the Speed option, choose your desired video speed.

🗣   Captions (English and Spanish): Click on the gear icon (⚙) on the video and then from the Captions option, choose to see the captions either in English or Spanish.

🔅   Video Quality: If you do not have access to high-speed internet, click on the gear icon (⚙) on the video and then from Quality, choose the quality that works the best for your Internet speed.

🖥   Picture in Picture (PiP): This feature allows you to continue watching the video when you switch to another browser tab or window. Click on the small rectangle shape on the video to go to PiP mode.

√   Hide and Unhide Lesson Navigation Menu: If you do not have a large screen, you may click on the small hamburger icon beside the title of the course to hide the left-side navigation menu. You can then unhide it by clicking on the same icon again.


🧑   Efficient Learning Tips

The following tips can help you have an efficient learning experience with this short course and other courses.

🧑   Create a Dedicated Study Space: Establish a quiet, organized workspace free from distractions. A dedicated learning environment can significantly improve concentration and overall learning efficiency.

📅   Develop a Consistent Learning Schedule: Consistency is key to learning. Set out specific times in your day for study and make it a routine. Consistent study times help build a habit and improve information retention.

Tip: Set a recurring event and reminder in your calendar, with clear action items, to get regular notifications about your study plans and goals.

☕   Take Regular Breaks: Include short breaks in your study sessions. The Pomodoro Technique, which involves studying for 25 minutes followed by a 5-minute break, can be particularly effective.

💬   Engage with the Community: Participate in forums, discussions, and group activities. Engaging with peers can provide additional insights, create a sense of community, and make learning more enjoyable.

✍   Practice Active Learning: Don't just read the material, run the notebooks, or watch the videos passively. Engage actively by taking notes, summarizing what you learn, teaching the concepts to someone else, or applying the knowledge in your own practical projects.


📚   Enroll in Other Short Courses

Keep learning by enrolling in other short courses. We add new short courses regularly. Visit DeepLearning.AI Short Courses page to see our latest courses and begin learning new topics. 👇

👉👉 🔗 DeepLearning.AI – All Short Courses


🙂   Let Us Know What You Think

Your feedback helps us know what you liked and didn't like about the course. We read all your feedback and use it to improve this course and future courses. Please submit your feedback by clicking on the "Course Feedback" option at the bottom of the lessons list menu (desktop view).

Also, you are more than welcome to join our community 👉👉 🔗 DeepLearning.AI Forum


Welcome to LLMs as Operating Systems: Agent Memory, built in partnership with Letta. In this course, you'll explore and build memory for AI agents. You will also explore the idea that an LLM agent can manage its own context window, essentially acting as the operating system in an AI application. Your instructors for this course are two of the authors of the influential paper MemGPT: Towards LLMs as Operating Systems and are experts on this topic. Welcome, Charles Packer and Sarah Wooders. Thanks for having us. Thank you.

If you've taken some of our previous courses on prompting LLMs via an API, you might be familiar with the idea that although LLMs can do amazing things, they don't have persistent memory, and you need to manage their memory explicitly. Indeed, the common example is that if you make one API call to say "Hello, my name is Sarah" and then make a separate API call to ask "What is my name?", the model won't remember. If you want it to remember, you have to pass in the conversation history again with each additional API call. By including information in the prompt, also referred to as the input context, you give the model additional information to use when generating its output.

Exactly. And what is in the context window really determines the behavior of the LLM and your application. For example, a chatbot application has conversational memory that stores earlier exchanges in the conversation. You may also want to track personal facts or names over time, or keep track of tasks, or share information between LLMs in different agents. In RAG applications, you retrieve relevant information from an external data source and introduce it into the context of the LLM. An LLM can make use of any information that is included in its input context to generate its response. But the space available in the input context window is limited, and using longer input contexts also costs more and results in slower processing.
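The statelessness described above can be sketched in a few lines of Python. Here `fake_llm` is a hypothetical stand-in for a real chat-completion API (real APIs take the same `messages` list shape but call a hosted model); the point is that the model can only use what is in the messages it receives on that call.

```python
# Toy stand-in for a chat-completion API: it can only "remember" what is
# present in the `messages` list passed to this one call.
def fake_llm(messages):
    """Answers a name question only if the name appears in the input context."""
    text = " ".join(m["content"] for m in messages)
    if "What is my name" in messages[-1]["content"]:
        if "my name is Sarah" in text:
            return "Your name is Sarah."
        return "I don't know your name."
    return "Hello!"

# Call 1: introduce yourself.
history = [{"role": "user", "content": "Hello, my name is Sarah"}]
print(fake_llm(history))  # -> "Hello!"

# Call 2 WITHOUT history: the model has no memory of call 1.
print(fake_llm([{"role": "user", "content": "What is my name?"}]))
# -> "I don't know your name."

# Call 2 WITH the full history resent: the name is back in the input context.
history.append({"role": "user", "content": "What is my name?"})
print(fake_llm(history))  # -> "Your name is Sarah."
```

Resending the full history on every call is exactly what chat applications do under the hood, and it is why the size and cost of the input context grows with the conversation.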
So managing this in-context information, deciding what to include in the input context, is really important. The MemGPT paper describes a novel approach to this: let the LLM manage it. If you're familiar with the concept of virtual memory from computer systems, there's a nice analogy that MemGPT uses to explain this idea. But if you aren't familiar with virtual memory, don't worry; this course will still make sense and still be useful to you. But Charles, why don't you explain that analogy?

Sure. We like to think of the context window like the virtual memory on a computer. Your computer thinks it has a really large memory, much bigger than the physical memory it actually has. Specifically, it has a very large virtual memory. When it tries to reference a virtual location that is not present in physical memory, the operating system first makes room for it by moving a block of information out of physical memory to disk, preserving any changes in that block, and then fetching the newly referenced block of information from disk back into physical memory. Similarly, you can think of the context window of an LLM as analogous to the physical memory. In our system, an LLM agent takes on the role of the operating system and makes decisions about what information should be included in the context window.

AI agents can use an LLM to plan, use tools, and make decisions, such as deciding to stop or continue a task. A similar approach lets an AI agent manage its own memory. To support memory management, the agent is given a section of the context window for long-term memory, which the agent can write to. The agent is also given tools to access external storage, such as databases, to create a larger memory store. By combining tools that write to both its in-context and external memory with tools that search external memory and place the results into the LLM's context, the agent controls what it remembers. So in this course, you'll get to put these ideas into practice.
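The paging analogy above can be made concrete with a small sketch. All names here (`core_memory_append`, `archival_search`, the eviction policy) are illustrative, not the actual MemGPT or Letta API: a small in-context "core" memory plays the role of physical memory, an external archive plays the role of disk, and tools page facts out and back in.

```python
# Hedged sketch of MemGPT-style self-managed memory. Tool names and the
# simple oldest-first eviction policy are illustrative assumptions.
class AgentMemory:
    def __init__(self, core_limit=3):
        self.core = []            # in-context memory (inside the prompt)
        self.archive = []         # external storage (e.g., a database)
        self.core_limit = core_limit

    def core_memory_append(self, fact):
        """Tool: write to in-context memory; evict oldest facts to the archive."""
        self.core.append(fact)
        while len(self.core) > self.core_limit:
            self.archive.append(self.core.pop(0))   # page out to "disk"

    def archival_search(self, query):
        """Tool: fetch matching facts from external storage back into context."""
        hits = [f for f in self.archive if query.lower() in f.lower()]
        for hit in hits:
            self.core_memory_append(hit)            # page back in
        return hits

    def context_window(self):
        """The memory text the LLM actually sees on its next call."""
        return "\n".join(self.core)

mem = AgentMemory(core_limit=3)
for fact in ["User's name is Sarah", "User lives in SF",
             "User likes hiking", "User has a dog named Rex"]:
    mem.core_memory_append(fact)

# The oldest fact was paged out of the context window...
assert "Sarah" not in mem.context_window()
# ...but the search tool can bring it back in, like a page fault.
mem.archival_search("name is Sarah")
assert "Sarah" in mem.context_window()
```

In MemGPT, the LLM itself decides when to call these tools, which is what makes it the "operating system" of its own memory.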
In these lessons, you will build an LLM agent that can edit its own memory from scratch. This will ground you in the basic ideas of LLM memory management. Next, you'll go beyond the basics and learn the key concepts from the MemGPT paper. Then we'll introduce Letta, an open-source agent framework in which agents have the tools and information to manage their context window. You can use Letta not only to build MemGPT agents as described in the original paper, but also to go beyond the research paper and build agents with more advanced types of memory. In multiple lessons, you'll create agents and explore the details of how memory is built. You'll put this into practice by building a custom task memory. You will then use this knowledge in applications, including your own research agent and an HR multi-agent application where agents share memory.

We want to acknowledge some of the people who helped create this course: Professor Joseph Gonzalez, a UC Berkeley faculty member and Letta advisor; the Letta team; and, from DeepLearning.AI, Geoff Ladwig.

The ideas here are really exciting. Letting an AI agent manage its own memory is a novel and powerful technique, and it provides a powerful infrastructure on top of which to build many applications. Let's go on to the next video to start learning about this.
LLMs as Operating Systems: Agent Memory
  • Introduction: Video ・ 5 mins
  • Editable memory: Video with Code Example ・ 12 mins
  • Understanding MemGPT: Video ・ 14 mins
  • Building Agents with Memory: Video with Code Example ・ 12 mins
  • Programming Agent Memory: Video with Code Example ・ 14 mins
  • Agentic RAG and External Memory: Video with Code Example ・ 8 mins
  • Multi-agent Orchestration: Video with Code Example ・ 14 mins
  • Conclusion: Video ・ 1 min
  • Course Feedback
  • Community