In this final example, you'll build a guardrail that prevents your chatbot from mentioning any competitors. This is a critical concern for customer-facing applications. You'll see the guardrail in action in the Alfredo's Pizza Cafe chatbot example that you've been using throughout this course, and you'll configure the guard to prevent the chatbot from mentioning the shop's main competitor, Pizza by Alfredo. Let's get started.

Once again, we start by trying to reproduce the failure we saw earlier. To jog your memory, we paste in our warnings setup and our imports, and then set up our unguarded chatbot, as we've done many times now. Then we try to reproduce the initial chatbot failure mode: here, I'm pretending to be a potential customer who wants to place a very large pizza order and is deciding whether to buy from Alfredo's Pizza Cafe or from Pizza by Alfredo. We can see that the chatbot responded with some reasons to pick Alfredo's Pizza Cafe over Pizza by Alfredo, but ideally it wouldn't mention competitors by name at all.

Now let's look at how to build a simple validator that uses a combination of techniques to detect any mention of competitors in LLM output. You might think a simple word match would be enough, but many companies are referred to by multiple names (think JPMC versus JPMorgan), and it's typically hard to enumerate every name a competitor might go by. So we build a cascading approach to detecting when a competitor is mentioned. First, we check for an exact match; if we find an exact match to one of our competitor names, we immediately fail the validator. If we don't find an exact match, we move on to named entity recognition, extracting all of the named entities mentioned in the text, and then check whether the vector embeddings of those entities are similar to the embeddings of our known competitors. The idea is that "JPMC" would have an embedding very close to that of "JPMorgan". If any similarity is greater than some threshold, we again fail, reporting whichever competitors were detected. Otherwise we pass, having determined that the text contains no competitor mentions.

To set this up, we import a number of standard machine learning libraries, like Transformers, Sentence Transformers, scikit-learn's metrics, and NumPy, because we'll be using them for named entity extraction and vector similarity computation. Separately, we also initialize our named entity recognition pipeline, using a BERT-based tokenizer and model as part of our NER (named entity recognition) pipeline.

All right, with that out of the way, let's set up our competitor check validator. We've been through this exercise a couple of times at this point, so we'll start by creating our new competitor-mentions validator with an empty initialization function and a bare validate function.
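If you want to follow along outside the notebook, here's a minimal sketch of that setup, assuming a Hugging Face NER checkpoint like dslim/bert-base-NER and the validator base classes from the guardrails package; the import paths and the registered validator name depend on your installed guardrails version and are assumptions here, not the notebook's exact code.

```python
# Minimal setup sketch: ML imports, a BERT-based NER pipeline, and an empty
# validator skeleton. Checkpoint and registered name are assumptions.
import numpy as np  # used later for embedding math
from sklearn.metrics.pairwise import cosine_similarity  # used in the similarity stage
from sentence_transformers import SentenceTransformer  # used in the similarity stage
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

from guardrails.validator_base import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)

# Named entity recognition pipeline built on a BERT-based tokenizer and model.
NER_MODEL = "dslim/bert-base-NER"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(NER_MODEL)
model = AutoModelForTokenClassification.from_pretrained(NER_MODEL)
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer)


@register_validator(name="check_competitor_mentions", data_type="string")
class CompetitorCheck(Validator):
    """Cascading check: exact match -> NER -> embedding similarity."""

    def __init__(self, **kwargs):
        # Starts out empty; competitors and models get added in the next steps.
        super().__init__(**kwargs)

    def validate(self, value, metadata) -> ValidationResult:
        # Core logic gets filled in step by step below; pass everything for now.
        return PassResult()
```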
As we know, a lot of the core logic of our validator lives in the validate function. The first thing we'll do is check whether there's an exact match to any of our known competitors. To do that, we first need to take in a list of competitors in our initialization function so that we know what to look for. So let's add that here, assign it as a class variable so the rest of our functions can access it, and, for convenience, cast it to lowercase.

The second thing we need to do is implement the exact match function, so let's do that over here. All this function does is take some text, iterate over the list of competitors, and use a very simple regex to check whether each competitor's name is present in the text we're given. And because our validator logic lives in the validate function, we also call the exact match function there: it returns which competitors were detected in the text, and if any exact matches were found, we immediately fail the validator.

All right, that's the first stage of our cascading filter. Let's move on to the second stage, which uses NER followed by similarity-based thresholding. First, let's implement the NER function that extracts entities. What we've pasted in here is a function that takes some text and runs it through the NER pipeline we set up earlier. We iterate over the results of that pipeline, and any time an entity label starts with "B-" or "I-", we append it to our list of entities. Then we do some cleanup, removing surrounding spaces and other stray characters from around the detected entities. The B- and I- markers tell us where each recognized entity begins and continues, and the rest of the label gives its category: the model classifies the entities it recognizes as person, location, organization, and so on. For our purposes, though, we don't really care about the category, since we do the matching afterwards regardless.

Now that we have our entity extractor, let's make sure it's reflected in the rest of the class. First, in our initialization function: we use the NER model here, so let's make it available as a class variable. Second, let's make sure our validate function also calls the function we just wrote.

Our final step is to perform a vector similarity match, where we take the entities we extracted and the competitors that were given to us and see how close their embeddings are. To do this, we first need to set up an embedding model. We've done this a couple of times before; we just want to make sure it's available as a class variable. And to make our lives a little easier and avoid recomputing embeddings over and over again, we'll also store pre-computed embeddings of our competitor names. The sketch below shows roughly how these pieces fit together so far.
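Putting those pieces together, the class so far might look something like this, with the competitor list, the NER pipeline, the embedding model, and the pre-computed competitor embeddings stored as class attributes, plus the exact-match and entity-extraction helpers. The checkpoints, the word-boundary regex, and the label cleanup are assumptions about what the notebook code roughly does, not a copy of it.

```python
# Sketch of the validator so far: init with competitors and models,
# exact-match helper, and NER-based entity extraction.
import re
from typing import Any, Dict, List

from sentence_transformers import SentenceTransformer
from transformers import pipeline
from guardrails.validator_base import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)


@register_validator(name="check_competitor_mentions", data_type="string")
class CompetitorCheck(Validator):
    def __init__(self, competitors: List[str], **kwargs):
        super().__init__(competitors=competitors, **kwargs)
        # Lowercase once so every downstream comparison is case-insensitive.
        self.competitors = [c.lower() for c in competitors]

        # NER pipeline and sentence-embedding model as class attributes
        # (checkpoint names are assumptions).
        self.ner = pipeline("ner", model="dslim/bert-base-NER")
        self.embedder = SentenceTransformer("all-MiniLM-L6-v2")

        # Pre-compute competitor embeddings so we don't re-embed them per call.
        self.competitor_embeddings = self.embedder.encode(self.competitors)

    def exact_match(self, text: str) -> List[str]:
        """Stage 1: return competitors whose names appear verbatim in the text."""
        lowered = text.lower()
        return [
            c for c in self.competitors
            if re.search(rf"\b{re.escape(c)}\b", lowered)
        ]

    def extract_entities(self, text: str) -> List[str]:
        """Stage 2: collect the named entities found by the NER pipeline."""
        entities = []
        for result in self.ner(text):
            # "B-" marks the start of an entity span, "I-" a continuation.
            if result["entity"].startswith(("B-", "I-")):
                entities.append(result["word"].strip(" .,#"))
        return entities

    def validate(self, value: str, metadata: Dict[str, Any]) -> ValidationResult:
        # Fail immediately on an exact competitor mention; the similarity
        # stage is wired in next.
        detected = self.exact_match(value)
        if detected:
            return FailResult(
                error_message=f"Found competitor mention(s): {detected}"
            )
        return PassResult()
```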
With that done, we come back to our vector similarity match function. It iterates over the entities we identified, computes the similarity of each one against our known competitors, and, whenever a similarity is greater than a threshold, adds that entity to a list of matches. Let's make sure our validator logic reflects this by calling the vector similarity match function with our extracted entities. Then, same as before: if we detect any competitors, we return a fail result with the list of entities that were detected, and only if nothing was detected do we return a pass result.

Awesome. Now let's download a version of the competitor check validator from Guardrails Hub, which is a state-of-the-art version of this validator, and run it as part of our Guardrails server, as we've done across multiple lessons at this point. What we'll end up doing is swapping out our OpenAI client for a new guarded OpenAI client whose base URL points to the Guardrails server, and that server is just running a guard endpoint with the validator we downloaded from the hub. Let's also create a version of our RAG chatbot that uses the new guarded client instead of the vanilla OpenAI client. Now, let's run the same prompt we saw earlier against this new guarded client and see how it performs.

As expected, validation failed because Pizza by Alfredo, which is our competitor, was detected in the output, so this guard helps us mitigate this specific failure. And, as with the failures in previous lessons, if you'd rather not surface this specific validation error message, you can catch the exception that's raised and return a more graceful message telling the user that you won't be responding to their request.
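To make that last stage concrete, here's a small standalone sketch of the similarity check, assuming the all-MiniLM-L6-v2 sentence-transformers model and a cosine-similarity threshold of 0.5; both are illustrative choices, not the hub validator's exact settings. Inside the validator, this logic sits after the exact-match stage in validate, returning a FailResult with the detected entities when it finds matches and a PassResult otherwise.

```python
# Standalone sketch of the similarity stage; model name and threshold are assumptions.
from typing import List

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

embedder = SentenceTransformer("all-MiniLM-L6-v2")
competitors = ["pizza by alfredo"]
competitor_embeddings = embedder.encode(competitors)


def vector_similarity_match(entities: List[str], threshold: float = 0.5) -> List[str]:
    """Return the entities whose embeddings land close to any competitor's."""
    if not entities:
        return []
    entity_embeddings = embedder.encode(entities)
    # Rows correspond to entities, columns to competitors.
    similarities = cosine_similarity(entity_embeddings, competitor_embeddings)
    return [
        entity
        for entity, row in zip(entities, similarities)
        if row.max() >= threshold
    ]


# Entities that aren't exact string matches can still be flagged by similarity.
print(vector_similarity_match(["Pizza by Alfredo's", "Rome"]))
```

And here's roughly what the client swap looks like once the hub validator is running behind the Guardrails server. The port, the guard name (competitor_check), the endpoint path, and the model name are all assumptions about a typical local setup; the except clause is where the more graceful message goes.

```python
# Sketch of a guarded OpenAI client pointed at a local Guardrails server.
# The base URL pattern, guard name, and model are assumptions.
from openai import OpenAI

guarded_client = OpenAI(
    base_url="http://localhost:8000/guards/competitor_check/openai/v1/"
)


def ask_guarded_chatbot(question: str) -> str:
    """Route a question through the guarded endpoint, with a graceful fallback."""
    try:
        response = guarded_client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content
    except Exception:
        # Validation failures surface as exceptions from the server; replace the
        # raw error with a friendlier message for the user.
        return "Sorry, I can't help with comparisons to other pizza shops."
```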