In the previous lesson, we added semantic memory to our agent. Now we're going to add episodic memory, in the form of few-shot examples, and we're going to add it to the triage step. As a reminder, episodic memory stores experiences. In agents, these are generally best represented as past agent actions, passed into the prompt as few-shot examples. Let's see this in action.

This setup should look familiar by now: we load some environment variables, define our profile, define our prompt instructions, and define an example email. Normally this is where we would define the triage logic, but let's pause here and talk about few-shot examples.

The first thing we need to do is define our long-term memory store. This is the same store we used in lesson three. After that, we can define the data we want to load as few-shot examples. First, we need to decide on the email content to store. This is actually the same as the example input above, but we'll now use it as a few-shot example. We then wrap it in a larger data model that contains the email; this generally consists of the inputs and then the outputs. In this case the output is a label, which we'll set to "respond". These outputs will also appear in the few-shot examples we format, and that's what helps change the behavior of the agent.

We can now put this example into the long-term memory store by specifying a namespace. This is similar to before: "email_assistant" and the user ID ("lance"), but notice that the last element is now "examples", whereas for the semantic search over facts we used "collection". We pass in an ID, which is just a UUID, and then we pass in the data above. Let's add another data point so that we can search over multiple data points and see which ones are returned; this one is a new email sent by Sara Chan.

Let's now simulate searching and returning these examples in a formatted way. Before that, we'll define a simple helper function that takes the few-shot examples retrieved from the store and formats them nicely into a string. This lets us easily see what we retrieved, and it's also a better format to pass into the LLM than the raw examples. Once we get the examples back from the store, we format them into a template that contains the email subject, from, and to fields in a nice human-readable way, followed by the triage result: the outcome of the triage step. This will become part of the prompt. For each example, we take the email part (the input) and format it at the top, and then we add the result, which is the label: the output we stored as part of the data. These are just helper functions that take that data and format it nicely.

Let's now simulate a search. We'll use the exact same email we passed in above, with something small changed so that it isn't an exact match. I then search over the namespace we defined, passing this email data in as the query parameter and setting limit=1. Remember, I've already stored two pieces of information in the store.
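Putting those steps together, here's a rough sketch of what the storing, formatting, and searching look like in code. It assumes LangGraph's InMemoryStore with an embedding index; the embedding model, the email field names, the example contents, and the helper name are illustrative stand-ins rather than the notebook's exact code.

```python
import uuid

from langgraph.store.memory import InMemoryStore

# Long-term memory store with an embedding index so we can semantically search
# over stored examples (the embedding model name here is an assumption).
store = InMemoryStore(index={"embed": "openai:text-embedding-3-small"})

# The email we want to store as a few-shot example (the input)...
email = {
    "author": "Alice Smith <alice.smith@company.com>",
    "to": "John Doe <john.doe@company.com>",
    "subject": "Quick question about API documentation",
    "email_thread": "Hi John, I noticed a missing endpoint in the docs...",
}

# ...wrapped in a larger record that also holds the output: the triage label.
data = {"email": email, "label": "respond"}

# Namespace ends in "examples", unlike the "collection" namespace used for
# semantic facts. "lance" is the LangGraph user ID.
namespace = ("email_assistant", "lance", "examples")
store.put(namespace, str(uuid.uuid4()), data)

# A second data point (an email from Sara Chan) so the search has a choice.
second = {
    "email": {**email, "author": "Sara Chan <sara.chan@company.com>"},
    "label": "respond",
}
store.put(namespace, str(uuid.uuid4()), second)


def format_few_shot_examples(examples):
    """Format retrieved examples into a human-readable string for the prompt."""
    formatted = []
    for eg in examples:
        email_data = eg.value["email"]
        formatted.append(
            f"Email Subject: {email_data['subject']}\n"
            f"Email From: {email_data['author']}\n"
            f"Email To: {email_data['to']}\n"
            f"Email Content: {email_data['email_thread']}\n"
            f"> Triage Result: {eg.value['label']}"
        )
    return "\n\n".join(formatted)


# Search with a (slightly modified) email as the query; limit=1 returns only
# the single most similar stored example.
results = store.search(namespace, query=str({"email": email}), limit=1)
print(format_few_shot_examples(results))
```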
With limit=1, it should fetch just the one stored example that's most similar to this query. I get back some results, and now let's format them with the helper function from above. We can see that I got back the data point we stored, which is slightly different from the data point we passed in, but still semantically very similar.

Let's now start putting this together into a new triage step that does this few-shot example search. The first thing we'll do is define a triage system prompt. Previously we imported this from a helper file, but now we'll define it in code so that we can modify it if we want. Down here there's a few-shot examples section where we'll format the examples, along with some added instructions that basically say to pay close attention to these examples.

Let's import a bunch of things for the routing step. This is all the same as before: we have some imports, we initialize a chat model, we have an output schema, and we attach it to get an LLM that generates structured output. Then we import our triage user prompt; we don't need to import the triage system prompt because we've already defined it above.

We then have some code that should look pretty similar to before; it sets up the triage router node. We create our state, and we have some imports here. This is the previous definition of the triage router node, so how do we modify it to account for few-shot example prompting? First, we add a config and a store to the function's inputs; these will be passed through by the main agent. We then add some logic before the system prompt gets formatted to pull in the few-shot examples. Here's the logic we add: first, we define the namespace to search for the few-shot examples, which is "email_assistant", then the LangGraph user ID pulled from the config, and then "examples". We use this namespace to search the long-term memory store, with the query being a stringified version of the email. We format the retrieved few-shot examples into a string, and we update the call to pass that examples string into the system prompt.

After that, we can go about creating the rest of our agent. We create all the same tools we had before. We don't need to recreate the store, since we already did that, so we can get rid of it. We create the tools for managing and searching memories, the prompt and the function for creating it, the response agent, and the config we're going to use. We then put together the email agent, same as before, again passing in the store so that it's available at runtime.
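Here's a minimal sketch of that modified triage router, under some assumptions: a trimmed-down system prompt, a simplified Router schema and state, and the `store` and `format_few_shot_examples` helper from the earlier sketch. The node name "response_agent" and the model choice are also illustrative; the notebook's actual prompts and routing options are richer.

```python
from typing import Literal

from langchain.chat_models import init_chat_model
from langchain_core.runnables import RunnableConfig
from langgraph.graph import END
from langgraph.store.base import BaseStore
from langgraph.types import Command
from pydantic import BaseModel, Field


# Structured output schema for the triage decision.
class Router(BaseModel):
    reasoning: str = Field(description="Reasoning behind the classification.")
    classification: Literal["ignore", "respond", "notify"] = Field(
        description="How this email should be handled."
    )


llm_router = init_chat_model("openai:gpt-4o-mini").with_structured_output(Router)

# Trimmed-down system prompt with a slot for the retrieved few-shot examples.
triage_system_prompt = """You triage emails into ignore, notify, or respond.

Here are some previous examples -- pay close attention to how they were labeled:

{examples}
"""


def triage_router(state: dict, config: RunnableConfig, store: BaseStore) -> Command:
    email = state["email_input"]

    # Namespace: ("email_assistant", <langgraph_user_id>, "examples"), so the
    # few-shot examples are scoped to the current user (pulled from the config).
    namespace = (
        "email_assistant",
        config["configurable"]["langgraph_user_id"],
        "examples",
    )

    # Semantic search over stored examples, formatted into the system prompt.
    examples = store.search(namespace, query=str({"email": email}))
    examples_str = format_few_shot_examples(examples)  # helper from earlier sketch

    result = llm_router.invoke(
        [
            {"role": "system", "content": triage_system_prompt.format(examples=examples_str)},
            {"role": "user", "content": str(email)},
        ]
    )

    if result.classification == "respond":
        return Command(
            goto="response_agent",
            update={"messages": [{"role": "user", "content": f"Respond to this email: {email}"}]},
        )
    return Command(goto=END)
```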
Let's now test this email agent and see the effect of few-shot prompting. Here we have an example email from Tom Jones to John, asking whether he wants to buy documentation. Let's call it, passing in a config with a LangGraph user ID of "harrison". We can see that by default this email is classified as requiring a response. But what if we don't like that? We can add a few-shot example to our long-term memory; it will be pulled in at runtime and cause future similar emails to be handled the same way.

Here we add exactly that: the same email stored with the label "ignore", since let's say we now want to ignore these types of emails. We put it in the store under the namespace "email_assistant", then "harrison" (because that's the LangGraph user ID), then "examples". If we run the agent again, the email is now classified as ignore: it's pulling in this few-shot example and learning that emails like this should be ignored. We can also change the email slightly; I've added more question marks and addressed it to Jim, and it still gets classified as ignore, so it's learning to treat similar emails in a similar way. If we pass in a different LangGraph user ID, the classification goes back to respond, because these few-shot examples are scoped to an individual LangGraph user ID.

We've now added episodic memory to help the agent learn user preferences for triage. This is the end of the lesson, so it's a great time to pause, try different examples and different LangGraph user IDs, and generally play around to better understand what it can learn. For reference, a rough sketch of this correction flow is included below. See you in the next lesson.
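The sketch below shows the correction flow end to end: store an "ignore" example under Harrison's namespace, re-run, then try a different user ID. Here `store` and `email_agent` stand for the store and compiled graph built earlier in the lesson, and the email fields, the input key, and the second user ID are illustrative assumptions.

```python
import uuid

# The email the agent originally classified as "respond", now stored as an
# example with the label "ignore" for user "harrison".
email_input = {
    "author": "Tom Jones <tom.jones@bar.com>",
    "to": "John Doe <john.doe@company.com>",
    "subject": "Quick question about API documentation",
    "email_thread": "Hi John, want to buy documentation?",
}
data = {"email": email_input, "label": "ignore"}
store.put(("email_assistant", "harrison", "examples"), str(uuid.uuid4()), data)

# Re-run with the same user ID: the new example is retrieved at triage time and
# the email (even with small variations) is now classified as ignore.
config = {"configurable": {"langgraph_user_id": "harrison"}}
email_agent.invoke({"email_input": email_input}, config=config)

# A different user ID has no such example in its namespace, so the same email
# goes back to being classified as respond.
other_config = {"configurable": {"langgraph_user_id": "andrew"}}
email_agent.invoke({"email_input": email_input}, config=other_config)
```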