We mentioned pipelines can branch, so let's actually build one. Fallbacks are a great example of where branching is super useful. In this lesson you'll create a branching pipeline that implements a fallback to web search: if our RAG pipeline, based on our database, is not able to answer a user query, you will make it fall back to web search. So let's get coding.

Let's build our first pipeline that is able to branch; fallbacks are a perfect use case for this. In this lesson we're going to be using a component called the ConditionalRouter in Haystack. This is a special component that accepts routes and conditions, branches out your pipeline, and activates a specific branch if a condition is met. If condition one is met, we might call a retriever. If condition two is met, we might have a whole different section of the pipeline that does web search. Similarly, if condition three is met, we might just want a generator to generate the response. Routers can also be used for many different use cases. You don't have to have specific Haystack components here: you can also have your router go to a completely different database, go to a completely different API, activate web search, and so on.

What we're going to be building is a fallback-to-web-search pipeline. We're going to evaluate whether the answer is possible with a RAG pipeline, and then, based on the results of that pipeline, we're going to ask the ConditionalRouter to either route to web search or give the answer. The conditional route is going to check if there's no answer within the RAG pipeline's response, and in that case do web search. But if there is an answer, we're going to end the pipeline without web search. For the web search branch we're also going to be building a sort of RAG pipeline, but in this case the documents that we're going to be receiving come from web search. We'll have web search, and then we'll have our usual prompt builder and generator. We also get to name our branches, because we can name the outputs of the ConditionalRouter. We'll give one branch the name "answer" and the other branch the name "go_to_websearch".

Let's see all of this in code. Let's start with the usual warning suppression, and make sure that we're loading all of the environment variables that we might need for this lab. Next, we'll simply start by importing all of the dependencies. You'll notice a new component here, which is the ConditionalRouter that we'll be using. Next, you'll start by writing some documents into an in-memory document store for our RAG pipeline. Here, I've provided some dummy documents. We have four of them, and each of them talks about a specific Haystack component and how it is supposed to be used. Since we are going to be implementing a very simple RAG pipeline, I can simply go ahead and write all of these documents to an in-memory document store, and you don't even have to create a pipeline for this. Document stores in Haystack also have a utility function called write_documents, where we simply write Document objects to a document store. We don't have to split these documents, and we don't have to clean them; they're already pretty short, so you can just go ahead and use that. Now that you have some documents in the in-memory document store, you can start by creating a RAG pipeline. We're going to slowly build up this pipeline, start including branches, and start including web search along with that as well. Let's start with our RAG prompt template.
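Before we look at the template, here is a minimal sketch of the setup so far, assuming the relevant API keys were loaded from the environment earlier; the document texts below are placeholders, not the exact ones from the lesson.

```python
from haystack import Document, Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.routers import ConditionalRouter

# Four short dummy documents, each describing one Haystack component.
documents = [
    Document(content="Retrievers fetch the documents most relevant to a query from a document store."),
    Document(content="PromptBuilder fills a Jinja template with the query and documents to build the final prompt."),
    Document(content="Generators send the rendered prompt to a language model and return its replies."),
    Document(content="Document stores hold the Document objects that a pipeline searches over."),
]

document_store = InMemoryDocumentStore()
# The documents are short, so no cleaning or splitting pipeline is needed:
document_store.write_documents(documents)
```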
Here we have a simple RAG prompt template, but different from what you've seen before: we also include an instruction that says if the answer is not contained within the documents, reply with 'no_answer'. This is a very simple way of doing this type of pipeline, and it works pretty well with the model we're going to be using, which is GPT-3.5. You can also modify this prompt to output something different if you like.

Next, let's start building out our pipeline. As usual, we start by initializing our pipeline, and then we simply add all of the components that we're going to need. Notice that because we haven't created embeddings for our documents, we're now going to be using keyword-based retrieval. Previously you've seen in-memory embedding retrieval; this time we're going to be using the InMemoryBM25Retriever. We also have our prompt builder, using the prompt template we created above, and also the OpenAIGenerator with its default model, which is GPT-3.5. Next, let's connect these components to each other. The retriever forwards documents to the prompt builder's documents, and then the prompt builder simply forwards the full prompt to our LLM. This is still a very simple RAG pipeline. Let's have a look at what we have so far, and slowly we'll start building up on this pipeline. We have a retriever, a prompt builder, and a large language model.

Let's see how this pipeline reacts when we ask certain questions. First, let's ask a question that we know our pipeline should be able to answer. You may remember that we had four documents, and each of them talks about certain Haystack components. We already know that we have a document in there that talks about retrievers. Let's ask the query "What is a retriever for?" Remember that we have to provide the query both to the prompt builder and the retriever; the retriever is going to use the query to look up the most relevant documents. The reply is "Retrievers are for retrieving relevant documents to a user query." But now let's see what happens if we ask what the Mistral components are. We already know that we don't have any documents that talk about Mistral or the components that Haystack has for Mistral. So if all goes well, because our prompt contains the statement "reply with 'no_answer'", we should be getting the reply "no_answer". Great. Now, we're using GPT-3.5 here, and in my experience we are pretty lucky in that we almost always get this "no_answer" output, but sometimes we might get "No answer" with a capital N and a capital A, and so on. So we'll also account for this.

Let's start creating our conditional routes, and you'll start building up on the RAG pipeline you already built. We're going to be using the ConditionalRouter, and this component accepts a list of routes. You'll start by creating two routes. Notice that these routes have conditions, outputs, output names, and output types. Our first condition states: if 'no_answer' is in the replies, then go to web search and output the query. The second condition states: if 'no_answer' is not in the replies, simply output the reply itself to the output name "answer". We also use a filter that comes by default with Jinja here, "lower", making sure that we're evaluating the reply in lowercase. This is to account for replies that contain something like "No answer" with a capital N, and so on. We do the same for the second condition. Now we can start using these routes in a ConditionalRouter.
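Here is a sketch of the basic RAG pipeline and the two routes described above, continuing from the previous snippet. The exact prompt wording, component names, and route definitions are illustrative and may differ slightly from the lesson notebook.

```python
rag_prompt_template = """
Answer the following query given the documents.
If the answer is not contained within the documents, reply with 'no_answer'.

Query: {{ query }}
Documents:
{% for document in documents %}
  {{ document.content }}
{% endfor %}
"""

rag = Pipeline()
rag.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
rag.add_component("prompt_builder", PromptBuilder(template=rag_prompt_template))
rag.add_component("llm", OpenAIGenerator())  # default OpenAI model

rag.connect("retriever.documents", "prompt_builder.documents")
rag.connect("prompt_builder", "llm")

# The query has to go to both the retriever and the prompt builder:
query = "What is a retriever for?"
result = rag.run({"retriever": {"query": query}, "prompt_builder": {"query": query}})
print(result["llm"]["replies"])

# Two routes for the ConditionalRouter; `|lower` guards against replies
# like "No answer" or "No Answer".
routes = [
    {
        "condition": "{{ 'no_answer' in replies[0]|lower }}",
        "output": "{{ query }}",
        "output_name": "go_to_websearch",
        "output_type": str,
    },
    {
        "condition": "{{ 'no_answer' not in replies[0]|lower }}",
        "output": "{{ replies[0] }}",
        "output_name": "answer",
        "output_type": str,
    },
]
```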
You'll start by initializing your router, and then you can run it on its own. Here we have an example where we're simulating a reply from a large language model for the query "Who is Jeff?" Let's imagine an LLM responded with "Jeff is my friend." Let's see what the ConditionalRouter does then. As you can see, because we didn't have 'no_answer' in this reply, the ConditionalRouter outputs an answer: "Jeff is my friend." That's because we asked it to output to "answer", and that answer contains the reply. Now let's see what happens if we have "No answer" in the simulated LLM reply. In this case, we're also testing whether it works with a capital N, for the same question "Who is Jeff?" Let's say the simulated reply is "No answer", and indeed the router outputs the query to "go_to_websearch". This is great, so it means we can continue building up on the pipeline you created above. You'll start by adding the router component with these routes to your pipeline, and then we'll connect the replies we get from the LLM to the router's replies input.

Let's look at what this looks like now. On top of the same pipeline you built before, you also have a router at the end, outputting either "go_to_websearch" or "answer". Let's see what happens if we run the same query as before, where we know there should be no answer: "What Mistral components does Haystack have?" As you can see, the result comes from the router at the bottom, and it routes the question "What Mistral components does Haystack have?" to "go_to_websearch".

Now we can start building a web search branch. For web search, we're going to be using a component called SerperDevWebSearch. This component accepts a query, searches the web, and returns the results for that query as Haystack documents. So we are able to build basically the same type of RAG pipeline, only this time the documents are coming from the web. You'll start by creating your prompt for this web search RAG. This prompt is very similar to all of the rest you've seen so far, but the only difference here is that we're telling the LLM "Your answer should indicate that your answer was generated from web search." This prompt expects the query and the documents, and these documents now have to come from web search results.

Now that we have our web search prompt for the part of the pipeline that's going to be doing web-search-based RAG, we can start building out the whole pipeline with conditional routes. The first few components are the same that you've seen so far. Let's call this pipeline "rag_or_websearch" and then add all the components you've seen so far, so this will be the same as the RAG pipeline that you saw earlier. However, now we're building up on that: we're also going to add the websearch component, which is SerperDevWebSearch. Next, we're going to add our prompt builder for web search, which uses the template above. Finally, we add another large language model to generate a human-readable response. Now let's connect all of these components. The first three connections are the ones you saw above: we're asking the retriever to send documents to the prompt builder, then we're asking a large language model to generate a response, but we're also then forwarding that response to our router. From here on out, we connect the router's "go_to_websearch" output to web search: the query that "go_to_websearch" outputs now goes to the query for web search, and we're also adding the same query to the prompt builder for web search.
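Putting the whole branching pipeline together, a sketch might look like the following; it assumes a SERPERDEV_API_KEY is set for SerperDevWebSearch, reuses the `routes` and `rag_prompt_template` from the earlier snippets, and the last two connections are the ones described right after this sketch.

```python
from haystack.components.websearch import SerperDevWebSearch

prompt_for_websearch = """
Answer the following query given the documents retrieved from the web.
Your answer should indicate that your answer was generated from web search.

Query: {{ query }}
Documents:
{% for document in documents %}
  {{ document.content }}
{% endfor %}
"""

rag_or_websearch = Pipeline()
# The same RAG components as before:
rag_or_websearch.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
rag_or_websearch.add_component("prompt_builder", PromptBuilder(template=rag_prompt_template))
rag_or_websearch.add_component("llm", OpenAIGenerator())
# The branching part:
rag_or_websearch.add_component("router", ConditionalRouter(routes=routes))
rag_or_websearch.add_component("websearch", SerperDevWebSearch())
rag_or_websearch.add_component("prompt_builder_for_websearch", PromptBuilder(template=prompt_for_websearch))
rag_or_websearch.add_component("llm_for_websearch", OpenAIGenerator())

# The original RAG branch, now ending in the router:
rag_or_websearch.connect("retriever.documents", "prompt_builder.documents")
rag_or_websearch.connect("prompt_builder", "llm")
rag_or_websearch.connect("llm.replies", "router.replies")
# The fallback branch: the routed query feeds both web search and its prompt builder.
rag_or_websearch.connect("router.go_to_websearch", "websearch.query")
rag_or_websearch.connect("router.go_to_websearch", "prompt_builder_for_websearch.query")
rag_or_websearch.connect("websearch.documents", "prompt_builder_for_websearch.documents")
rag_or_websearch.connect("prompt_builder_for_websearch", "llm_for_websearch")
```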
Finally, the resulting documents from web search are added to this prompt as well, and we're asking a large language model to generate a response. Before we run this, you can also show it as usual. Let's see what's going on here. You'll notice that the beginning is the same RAG pipeline you saw before, but now we have some more things after the router. The router routes to web search: from the "go_to_websearch" branch, you're asking the prompt builder to forward a prompt to a large language model, and the documents here are based on the web search results. The router also outputs "answer" if it doesn't have to go to web search. One thing you can experiment with modifying here: notice how web search outputs links as well. Links are the URLs from which the resulting documents were generated, based on web search. You could, if you like, use this output in your prompt and modify your prompt for the web search results to also reference the URLs that the results are coming from.

Let's run this full pipeline with a fallback to web search, with the same queries we tried earlier. You can start by running the same query you ran before, "What is a retriever for?", and we know our pipeline should be able to answer this, so we should be getting the answer from the router's "answer" output. As you can see, the router's "answer" output is "Retrievers are for retrieving relevant documents." Great. Now let's see what happens if you ask a question that shouldn't be answerable with our pipeline. Again, you can try the same query as before: "What Mistral components does Haystack have?" This time around, you should be getting the reply from web search. Great. As you can see, web search has a "links" output, but the reply does not reference these. We get the result from the LLM for web search, and the reply says, "Based on the documents retrieved from the web, the Mistral components that Haystack has include..." and so on. You can try this with different questions that should and shouldn't be answerable with our pipeline. "What is the capital of France?" should get its reply from web search. "What Cohere components does Haystack have?" should also come from web search. But "What are generators for?" should be answerable with our pipeline.

In the next lab, you'll be creating your own component, one that has two outputs, so that component can also help you branch out. But we're going to be using it to implement self-reflecting agents. See you there.
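As a recap of this lesson, here is a minimal sketch of running the finished fallback pipeline end to end and checking which branch produced the answer. It reuses the component names from the sketches above; the exact output keys depend on how you named your components.

```python
def ask(pipeline, query: str):
    # The query must reach the retriever, the first prompt builder,
    # and the router (whose go_to_websearch route forwards it).
    result = pipeline.run({
        "retriever": {"query": query},
        "prompt_builder": {"query": query},
        "router": {"query": query},
    })
    if "router" in result:
        # The "answer" route fired: the RAG branch could answer.
        print(result["router"]["answer"])
    else:
        # The "go_to_websearch" route fired: the reply comes from the web search LLM.
        print(result["llm_for_websearch"]["replies"])

ask(rag_or_websearch, "What is a retriever for?")                 # answered by the RAG branch
ask(rag_or_websearch, "What Mistral components does Haystack have?")  # falls back to web search
```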