Let's build our first looping pipeline. In this lesson, you will be building a very simple self-reflecting agent by building a Haystack pipeline that is able to loop. This looping mechanism will allow you to implement self-reflection. As a very simple example, you will have an LLM generate named entities from unstructured text and iteratively improve on its response with self-reflection. Let's dive into the code.

Self-reflection is a mechanism for self-assessment and correction, and it allows us to refine outputs, producing more reliable and accurate information. With self-reflection, we allow the LLM to provide feedback to itself for future trials. We can have the model provide itself with a success status, or allow an external source to do so, but then we have it formulate specific feedback for itself. We can go even further and get the LLM to reflect on what actions it can take to improve on the response, and even provide it with some criteria to look out for. It can then verbally state what went wrong and how to improve it in the next iterations.

In this lab, we're going to be building a very simple self-reflecting agent where you'll ask an LLM to generate entities from unstructured text. At each iteration, the LLM will either state that it is done, meaning it has reflected on the output and is happy with the results, or it will reflect and critique what should be improved about the extracted entities. We're going to be building this thanks to the availability of custom components. We're going to create a component that we call the entities validator, which will simply evaluate whether the generator decided that it was done or not. Until the generator decides that it's done, the pipeline will continue looping.

Another way these types of looping pipelines can be super useful is to validate outputs. For example, say we extract data in JSON format, but we want it to adhere to a certain schema.
In some cases, we could implement output validators that return both the invalid replies and the error messages we got from a validator such as Pydantic, where we can validate whether an output adheres to our JSON schema.

Let's see how we can create the entities validator in code. Let's start with the usual warning suppression, and make sure we load all of our environment variables. Next, let's import all of the dependencies for everything we'll be using in this lesson.

As a first step, we'll create a very simple custom Haystack component that we're going to call the entities validator. Let's see what this component does. Let's start by creating our class; we'll call it EntitiesValidator, and let's give it a run method that expects replies. These replies are going to be coming from an LLM. What we want to do here is very, very simple: we're going to have this component produce one of two outputs. If the replies contain the word DONE, we're going to output entities. If, however, they do not include DONE, we're going to do two things. This means the LLM has decided that it's not done creating entities yet. So we'll have the component print out the entities that the LLM is still reflecting on in red. This will allow us to see and visualize the loops easily, but it will also return the entities in an output called entities_to_validate. The last thing we have to do is make sure that we're providing these as output types with a decorator on the run method. In this case, you'll notice that this component has two outputs, entities_to_validate and entities, both of them strings.

Let's see how we might use a component like this. For example, let's create an entities validator, and let's replicate a reply that we might get from an LLM. Let's run this entities validator with a response from that LLM that has the entity name Tuana. As you can see, there is no DONE in this response.
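To make the branching concrete, here is a plain-Python sketch of the logic described above. Note this is not the lesson's verbatim code: the real version is a Haystack custom component (a class with an `@component` decorator and an `@component.output_types(...)` decorator on its `run` method); this sketch keeps only the routing logic so you can run it anywhere.

```python
def validate_entities(replies: list[str]) -> dict:
    """Route an LLM reply to one of two outputs.

    If the reply contains the word DONE, the entities are final and go
    to the "entities" output; otherwise they still need another round of
    reflection and go to "entities_to_validate".
    """
    reply = replies[0]
    if "DONE" in reply:
        # Strip the DONE marker and return the final entities.
        return {"entities": reply.replace("DONE", "").strip()}
    # Print in red (ANSI code 91) so the loop iterations are easy to spot.
    print("\033[91m" + reply + "\033[0m")
    return {"entities_to_validate": reply}
```

In the real component, returning a dict keyed by output name is exactly how a Haystack component exposes multiple named outputs, which is what lets the pipeline route each case down a different edge.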
So, hopefully, we'll be printing out these entities in red and returning them as entities_to_validate. Great. What if we had a response that included the word DONE? Let's give that a go. In this case, the entities validator returns the entities in the entities output.

Now let's start to create a special type of prompt template. In this prompt template we're going to have an if block; you'll see why. Let's start by creating the first instruction we want to give to the language model. This template includes the instruction "extract entities from the following text." As you can see, our prompt builder is going to be expecting an input called text, while we also provide it with some information about what kind of entities we want to extract from this text. The entities should be presented as key-value pairs. We're telling it to extract person, location, and date, while also telling it that if there are no entities for a particular category, it should return an empty list, and so on.

Now, this is great, but it will only allow the LLM to produce a response once, and it's not giving us a chance to iterate on this response. So we're going to be using the if/else block functionality that comes with Jinja templating. So let's start. Let's make this first instruction the else block of our if/else statement, and you'll see what we add in the if statement. In the if statement, we want to give instructions to the large language model if it has entities to validate. So we're going to be providing it with some examples of what it should look out for to improve on its response. In this case, our prompt builder is going to check whether it has entities_to_validate as input. Then we're going to add some more information that the model can use to improve on its answer. We again iterate: here is the text you were provided (and we provide the text), and here are the entities you previously extracted. Then we give it some bullet points on what it can reflect on and what it can improve.
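The template described above might look roughly like the sketch below. The exact instruction wording is illustrative (a reconstruction, not the course's verbatim template), but the structure is the point: one Jinja `{% if %}` branch for the reflection round, one `{% else %}` branch for the first pass.

```python
# Sketch of the self-reflection prompt template. On the first run the
# prompt builder has no entities_to_validate, so Jinja renders the
# {% else %} branch (the base extraction instruction). On later loops
# it renders the {% if %} branch (the reflection instruction).
template = """
{% if entities_to_validate %}
Here is the text you were originally given:
{{ text }}

Here are the entities you previously extracted:
{{ entities_to_validate[0] }}

Are these the correct entities? Reflect on the following:
- Are there any missing person, location, or date entities?
- Are there any entities that do not belong in a category?
If you are done, say 'DONE' and return your new entities.
{% else %}
Extract entities from the following text:
{{ text }}

Entities should be presented as key-value pairs:
'person': ['list of person entities'],
'location': ['list of location entities'],
'date': ['list of date entities']
If there are no entities for a category, return an empty list.
{% endif %}
"""
```

In the lesson this string is handed to a Haystack PromptBuilder, which renders it with Jinja, so `text` and `entities_to_validate` become pipeline inputs automatically.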
This is where we also tell the LLM that it can say "I'm done" if it thinks it's done. This is a very, very simple way of implementing self-reflection in these types of pipelines. Let's see how it works.

Now that we have our prompt template that is able to accept entities to validate if they exist, we can start creating our self-reflecting agent. Let's start by creating our prompt builder. This prompt builder is going to be using the template we just created. Next, we want our generator. In this case, again, I'll be using the OpenAI generator with the default model, so it's going to be GPT-3.5. And finally, we want to be able to use our entities validator somehow.

Another important thing to note here is that because this pipeline is going to be looping, we're also able to set the maximum number of loops we allow our pipeline to have. This lets you make sure that you're not making more requests than you want to a model provider like OpenAI. In this case, we're telling it that it can do a maximum of ten loops. If the LLM isn't able to create a response we're satisfied with within ten loops, our pipeline will fail, but it also means that we're not incurring more cost than we want to.

The next thing we'll do is simply add all of these components to our pipeline, but let's see how we connect them. We start by connecting our prompt builder to our LLM. This time we're being a bit specific; you don't need to name the prompt input in each case, but if you want, you can. Then we're going to be providing the replies from this large language model to our entities validator. Now the entities validator has a response from the large language model that it can evaluate to see whether the large language model thinks it's done or not. However, if it does have entities to validate, we want these to loop back into our prompt.
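The control flow this wiring produces can be sketched in plain Python, without the Haystack Pipeline object. The `fake_llm` below is a hypothetical stand-in for the OpenAI generator (it reflects once, then declares itself DONE on the second call) so the loop and the max-loops safety valve are visible without making any API calls.

```python
from itertools import count

_calls = count(1)

def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for the OpenAI generator: on the second
    # call it says DONE, simulating one round of self-reflection.
    if next(_calls) >= 2:
        return "DONE {'person': [], 'location': ['Istanbul'], 'date': []}"
    return "{'location': ['Istanbul']}"

def run_agent(text: str, max_loops: int = 10) -> str:
    """Loop prompt -> LLM -> validator until DONE or max_loops is hit."""
    entities_to_validate = None
    for _ in range(max_loops):
        # First pass renders the base instruction; later passes render
        # the reflection instruction with the previous entities.
        if entities_to_validate is None:
            prompt = f"Extract entities from: {text}"
        else:
            prompt = f"Reflect on {entities_to_validate} for: {text}"
        reply = fake_llm(prompt)
        if "DONE" in reply:
            return reply.replace("DONE", "").strip()
        entities_to_validate = reply
    # Mirrors the pipeline failing once the loop budget is spent.
    raise RuntimeError(f"No DONE reply within {max_loops} loops")
```

In the actual lesson, this same cycle is expressed declaratively with `pipeline.connect(...)` calls and a maximum-loops setting on the pipeline, rather than a hand-written for loop.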
Notice how here, if there are entities_to_validate output from our entities validator, they're going to be fed into the prompt builder's entities_to_validate input. Remember how the if statement in our prompt checked whether there are entities to validate? This is where that loop happens.

Now, this is our first looping pipeline, so again, let's use the show utility to see what's happening here. Let's call self_reflecting_agent.show(). As you can see, there's a loop. Our prompt builder is providing a full prompt to our large language model, which is then forwarding its replies to the entities validator, and the entities validator is either responding with entities, or it's looping back with entities to validate. This loop continues until either we've hit ten loops, or the response has the word DONE in it.

Now that we have our pipeline, let's provide some text to our prompt builder and see if the LLM can generate the named entities. Here I've provided a dummy text; it's about Istanbul, the population of Istanbul, and so on. Let's run our self-reflecting agent by providing the prompt builder with this text. We know that our entities validator will print out any entities that are still to be validated in red. As an extra step, we can also print out the final response, once the looping is done, in green. So notice here that every time you see red, the LLM is still reflecting on those entities; the final result will be printed out in green.

All right. As you can see here, the first time the LLM created the entities, it had location, but it didn't have person or date, even though we asked for person and date to be included even if there are no entities to extract (we asked for an empty list in that case). However, in the second iteration, when it was done, it did in fact include person and date as entities.

Let's try another example. In this case, I've provided another dummy bit of text, but this time it's a transcript from a meeting. The meeting includes a few people.
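The red and green printing mentioned above is typically done with ANSI escape codes; here's a minimal sketch (the helper names are illustrative, not from the lesson).

```python
# ANSI escape codes: 91 = bright red (still reflecting),
# 92 = bright green (final answer), 0 = reset to default.
RED, GREEN, RESET = "\033[91m", "\033[92m", "\033[0m"

def print_reflecting(entities: str) -> None:
    """Shown on every intermediate loop iteration."""
    print(f"{RED}{entities}{RESET}")

def print_final(entities: str) -> None:
    """Shown once, when the LLM has said DONE."""
    print(f"{GREEN}{entities}{RESET}")
```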
There's a date associated with the meeting, and so on. Again, let's run this pipeline with this text as input to our prompt builder, and when we're happy with the responses, let's print them out in green. Again, we notice that the first time the LLM created entities, it had person and date, but it also had technology. Technology was not one of the entities we asked it to extract. However, in the second go, we notice that it extracted person, location, and date. This is great.

In this lab, you learned how to implement pipelines that are able to loop, and in this case, you built a pipeline that loops for self-reflection. This was a very simple example of how you can implement self-reflection, but you can try it with different types of input, and also ask for different types of outputs; you don't have to confine yourself to named entities. In the next lab, we're going to be creating a chat agent that is able to use function calling, and we're going to make one of these functions a Haystack pipeline. I'll see you there.