Function calling is one of the most exciting developments from model providers. And what if you've got a number of pipelines, like RAG pipelines, that you want to use as functions? In this lesson, we'll use OpenAI's function-calling capability to create a chat agent that can use Haystack pipelines as tools. Let's see how this goes.

Let's build our first chat agent that can use function calling. In this lesson, we're going to be building with components that we call chat generators, and we're going to build chat agents with them. With these components we work with messages, and messages can come from a user, assistant, system, or function. We can also provide these generators with tools. For example, the user asks, "Where does Mark live?" and the generator simply responds with "I don't know." But the generator can be provided with tools that can help it answer. These tools can be anything from Haystack pipelines to different APIs, and so on. Let's start building our own.

Let's start with the usual warning suppression, and make sure we load all of the environment variables we'll be using in this lesson. Next, let's import all of our dependencies. Now we can start by creating a pipeline and providing this pipeline as a function. For this first bit, we're going to be creating our usual RAG pipeline, and we're going to be using the default OpenAI generator model, which is GPT-3.5. By now you'll be used to these types of pipelines, but what's happening here, in short, is that we're simply asking the large language model to respond to a query.

Now we're also going to be wrapping this pipeline in a function. Let's define a normal Python function, and let's call it rag_pipeline_func; it expects a query. In this pipeline we're going to take a shortcut and provide dummy data, so we know which documents this pipeline is going to generate responses from. As you can see, these are all documents about who lives where: for example, Mark lives in Berlin, Giorgio lives in Rome, and so on. When this function gets an input, we want to run our pipeline, providing the documents we created above to our prompt builder, along with the query as the question. This function then returns the result of the pipeline as a reply.

Now we have our Haystack pipeline provided as a function. Next, you're going to be creating a weather function. This time you're going to be simulating one, but you can imagine that it actually accesses real-time weather data from an external API. In this case, again, we're going to be providing dummy data: you'll provide information about what the weather is like in Berlin, Paris, Rome, Madrid, and London. As you can see, we've made these up: it's mostly sunny, mostly cloudy, sunny, or cloudy, and so on. This is also the same example that OpenAI provides in its documentation to explain function calling. Let's now provide this in a get_current_weather function. Again, we're going to have a function, this time expecting a location, and it's going to return weather data. We'll first look up whether the location is present in our weather info, but we'll also have a fallback: if we don't have the city listed in our weather info, we'll simply make something up and return sunny with a temperature of 21.8°F. Now you have two functions.
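Here's a minimal sketch of what these two functions might look like in Haystack 2.x. The document contents and the non-Berlin weather values are illustrative stand-ins for the notebook's dummy data:

```python
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator

# Dummy documents about who lives where
documents = [
    Document(content="My name is Mark and I live in Berlin."),
    Document(content="My name is Giorgio and I live in Rome."),
    Document(content="My name is Jean and I live in Paris."),
]

template = """Answer the question based on the given context.

Context:
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: {{ question }}
"""

rag_pipe = Pipeline()
rag_pipe.add_component("prompt_builder", PromptBuilder(template=template))
rag_pipe.add_component("llm", OpenAIGenerator())  # uses the default OpenAI model
rag_pipe.connect("prompt_builder", "llm")

def rag_pipeline_func(query: str):
    # Feed the dummy documents and the query into the prompt builder, run the pipeline,
    # and return the LLM's first reply
    result = rag_pipe.run({"prompt_builder": {"documents": documents, "question": query}})
    return {"reply": result["llm"]["replies"][0]}

# Simulated weather lookup; Berlin's values match the lesson, the rest are illustrative
WEATHER_INFO = {
    "Berlin": {"weather": "mostly sunny", "temperature": 7, "unit": "celsius"},
    "Paris": {"weather": "mostly cloudy", "temperature": 8, "unit": "celsius"},
    "Rome": {"weather": "sunny", "temperature": 14, "unit": "celsius"},
    "Madrid": {"weather": "sunny", "temperature": 10, "unit": "celsius"},
    "London": {"weather": "cloudy", "temperature": 9, "unit": "celsius"},
}

def get_current_weather(location: str):
    if location in WEATHER_INFO:
        return WEATHER_INFO[location]
    # Fallback: make something up for cities we didn't list
    return {"weather": "sunny", "temperature": 21.8, "unit": "fahrenheit"}
```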
One of them is a Haystack RAG pipeline that's able to respond to questions about where people live, and the other is a weather function. Now that we have our functions ready, we have to provide them as tools to a generator. In this use case, we're going to be using OpenAI, so we have to describe our tools in the way that the OpenAI API expects. Let's have a look at what that looks like. This is going to be a lot of data, but let's walk through it. tools is going to be a list of tools, and we're providing the name of each of our functions, as well as a description that the language model can use to decide whether this is the function it should use or not. Each tool also describes what kind of inputs it's expecting. In this case, our rag_pipeline_func is expecting a query, and we're also providing a description of what that query is: for example, the query to use in the search. We also tell it to infer this from the user's message, and so on. We do the same for get_current_weather: we describe it as getting the current weather, and we also describe how it's expecting a location.

Now that we have a list of tools, we can provide this list of tools to what we call an OpenAIChatGenerator. Let's have a look at that. Let's start by creating our OpenAI chat generator. This component not only is able to work with model names, but we can also provide tools in generation_kwargs. So we're going to be creating a generator, which we're calling chat_generator, and we're going to be providing our tools as tools.

Next, let's see what happens when we run this component with messages. This component is special in that it expects a list of messages, and we can also describe where each message is coming from. In this case, we're going to be providing a chat message from a user, and the question is "Where does Mark live?". We'll assign the responses from the LLM to replies. Let's see what replies contains. As you can see, this again is a chat message, and you'll notice that this chat message is coming from the assistant, and that it's saying we should run a function with the query "Where does Mark live?", and the name of that function is rag_pipeline_func. The important thing to notice here is that the response you're getting from these models is not the function's result; it tells you which function to run and how.

Next, let's see what happens if we want to call these functions. For this we're going to be using a component called the OpenAIFunctionCaller, which is actually coming from the haystack-experimental package. This component expects available functions when being initialized, and it will run the functions that our chat generator is asking to run, as well as add the responses from those function runs to the message queue. So first, let's initialize our function caller and provide our available functions. Next, let's run this function caller with the messages that we had earlier. Our message queue contained the assistant's request to run rag_pipeline_func with "Where does Mark live?". Let's assign that to results, and then print out these results to see how this function caller reacted. All right. In this case, notice we now have two messages in the queue. The first message is the assistant asking for rag_pipeline_func to be run with "Where does Mark live?". The second, however, is a response from a function, and the response from the function is "Mark lives in Berlin."
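As a sketch, the tool descriptions and the two components might look like the following. This reuses rag_pipeline_func and get_current_weather from above, and the import path for the experimental component may differ between haystack-experimental versions:

```python
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
# Experimental component; the import path may vary by package version
from haystack_experimental.components.tools import OpenAIFunctionCaller

# Describe both functions in the format the OpenAI API expects
tools = [
    {
        "type": "function",
        "function": {
            "name": "rag_pipeline_func",
            "description": "Get information about where people live",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The query to use in the search. "
                        "Infer this from the user's message.",
                    }
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "The city, e.g. Berlin"}
                },
                "required": ["location"],
            },
        },
    },
]

# The generator receives the tool descriptions through generation_kwargs
chat_generator = OpenAIChatGenerator(generation_kwargs={"tools": tools})

messages = [ChatMessage.from_user("Where does Mark live?")]
replies = chat_generator.run(messages=messages)["replies"]
print(replies)  # an assistant message asking to run rag_pipeline_func, not the answer

# The function caller runs whatever function the assistant asked for
# and appends the function's response to the message queue
function_caller = OpenAIFunctionCaller(
    available_functions={
        "rag_pipeline_func": rag_pipeline_func,
        "get_current_weather": get_current_weather,
    }
)
results = function_caller.run(messages=replies)
print(results)  # two messages: the assistant's request, then the function's response
```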
The reason is that we've now actually run rag_pipeline_func, and our RAG pipeline has responded using one of the documents we provided to it: Mark lives in Berlin. Now that we have an OpenAI chat generator that's able to access tools, and also an OpenAIFunctionCaller, we can create a chat agent which accesses tools.

For this pipeline, we're going to be starting off with a component we haven't used before. We're going to start with a BranchJoiner, which is going to be joining messages from multiple components' outputs into one. We'll see why we do that in a bit. Next, we're going to be using our same chat generator, providing it with the tools. And finally, we're going to be using our function caller; again, we're providing it with the available functions we have.

Now let's see how we add these components to our pipeline and how we connect them. This time we're creating a pipeline called chat_agent, and we're adding all of our components to this chat agent. We're going to be connecting the message collector's output to the generator's messages. Next, we're going to be asking for the generator's replies to go to the function caller, which will only call functions if the assistant has asked for a function to be called; if not, it will simply respond with the messages that it has. Next, and this is where we actually implement a loop, the function caller's function replies go back to the message collector. You can skip this if you like, but I like to have it, because it means that if there is a response from a function, we're not simply returning the bare response from the function; we're asking the large language model to generate a human-readable response from it.

Now that we have this pipeline, let's visualize it and see what's going on, and we'll also have a look at why we're using this BranchJoiner as our message collector. All right. So in this case we have a message collector, and this is very useful because the generator is expecting a list of messages, but we want those messages to come either from the function caller or from a user. However, this input can only be connected to one output, so the message collector acts as a sort of joiner, a combiner of those messages: messages can come either from the user or from the function caller's function replies.

Now that we have our chat agent, let's actually see it in action. Let's start our message queue with a system message. This message states that, if needed, the model should break down the user's questions into simpler questions, that it shouldn't make assumptions about what values to plug into functions, and so on. Next, let's start a while loop and start asking for user input. This is where users start interacting with our chat agent. We'll also add a break statement here, so if you like, you can type "quit" or "exit" and leave the chat. Next, we want to append the incoming user message to our message queue, and we want to get a response from our chat agent based on that message. Then we want to extend the messages with all of the responses we may get from our chat agent. And finally, let's print out the response.

All right, here we start the chat. Let's start by asking, "Where does Mark live?": Mark lives in Berlin. Now, because we have a message queue, we have a sort of primitive memory here, so we can ask questions in context, and the LLM should still be able to respond. So let's ask, "What's the weather like there?"
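Here's a sketch of the whole agent, reusing the components defined above. The function caller's socket names (function_replies, assistant_replies) and the exact system prompt are paraphrased from the lesson and may differ in your version of the package:

```python
from typing import List

from haystack import Pipeline
from haystack.components.joiners import BranchJoiner
from haystack.dataclasses import ChatMessage

chat_agent = Pipeline()
chat_agent.add_component("message_collector", BranchJoiner(List[ChatMessage]))
chat_agent.add_component("generator", OpenAIChatGenerator(generation_kwargs={"tools": tools}))
chat_agent.add_component(
    "function_caller",
    OpenAIFunctionCaller(
        available_functions={
            "rag_pipeline_func": rag_pipeline_func,
            "get_current_weather": get_current_weather,
        }
    ),
)

# Messages reach the generator either from the user or, via the loop below,
# from the function caller's function replies
chat_agent.connect("message_collector.value", "generator.messages")
chat_agent.connect("generator.replies", "function_caller.messages")
chat_agent.connect("function_caller.function_replies", "message_collector.value")

messages = [
    ChatMessage.from_system(
        # Paraphrased system prompt from the lesson
        "If needed, break down the user's question into simpler questions. "
        "Don't make assumptions about what values to plug into functions."
    )
]

while True:
    user_input = input("You (type 'quit' or 'exit' to leave): ")
    if user_input.lower() in ("quit", "exit"):
        break
    messages.append(ChatMessage.from_user(user_input))
    response = chat_agent.run({"message_collector": {"value": messages}})
    # assistant_replies carries the final, human-readable answer
    assistant_replies = response["function_caller"]["assistant_replies"]
    messages.extend(assistant_replies)
    print(assistant_replies[0].text)  # use .content on older Haystack releases
```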
The current weather in Berlin is mostly sunny, with a temperature of seven degrees Celsius. This is great, because I happen to remember that this is exactly what our get_current_weather function should be responding with. All right, so let's quit this and see what else we can do.

We can provide the same exact application as a Gradio app, if you like. Let's start with the same system message in our message queue, and then let's define a chat function. This is going to be used by Gradio to create a chat interface. Next, we provide the chat interface as a demo. We can also provide some examples of what we can ask this chat agent. And finally, let's launch our demo (there's a sketch of this wiring at the end of the lesson). You should now also be seeing this type of interface, where you can start chatting with your agent that has access to the tools you created.

Let's start asking some questions. For example, "Where does Mark live?" Great: Mark lives in Berlin, and we also know that that's accurate. Next, let's ask, "What's the weather like there?" Notice how we don't have to specify Berlin, because the model has access to the message queue, so it should be able to infer that we're asking about Berlin. Let's ask another follow-up question, for example, "What's the appropriate clothing for him today?" And there we are: we get an answer based on the current weather in Berlin and what he could be wearing today.

You can try playing around with this chat app and maybe even provide some of your own tools. Here we simulated a weather function; for example, you can try to see whether you can replace that with something that accesses live weather data, and you can even replace your RAG pipeline with something completely different. For example, you can implement your own web search pipeline, and you can also use all of the pipelines you've been creating so far in this entire course and provide them as tools to a chat agent. I hope you enjoyed this, and congratulations on finishing the course!
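For reference, here's a minimal sketch of the Gradio wiring described above. It reuses the chat_agent pipeline and ChatMessage from earlier; the example questions and the system prompt wording are paraphrased from the lesson:

```python
import gradio as gr

messages = [
    ChatMessage.from_system(
        "If needed, break down the user's question into simpler questions. "
        "Don't make assumptions about what values to plug into functions."
    )
]

def chat(message, history):
    # Gradio passes the new user message and its own chat history;
    # we keep our own message queue so the agent remembers context
    messages.append(ChatMessage.from_user(message))
    response = chat_agent.run({"message_collector": {"value": messages}})
    replies = response["function_caller"]["assistant_replies"]
    messages.extend(replies)
    return replies[0].text  # use .content on older Haystack releases

demo = gr.ChatInterface(
    fn=chat,
    examples=["Where does Mark live?", "What's the weather like in Berlin?"],
    title="Chat agent with Haystack pipelines as tools",
)
demo.launch()
```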