In the introduction, you learned a bit about what function calling is. In this lesson, we will describe it in more detail and you will get some hands-on experience. Let's dive in.

So what is function calling? As described in the introduction, function calling is the capability of an LLM to take in a natural language query, along with a description of a function, and output a string that can be used to call that function. Consider this example. You would like to know the temperature in New York, and you have a function that can provide you that value, but without function calling your LLM can't help you out. With function calling, you provide the query and a description of the function to the LLM. The LLM has been trained to recognize that it can use the functions defined in the prompt to answer the query, and it will generate a string that can be used to invoke the function, in this case the temperature function with the city argument set to New York. You can now execute that function and return the result, along with the query, to the LLM, and the LLM can properly answer the question. Note that even though they're called function-calling LLMs, they only generate a string; they don't actually make the call. You have to do that.

Here, it's worthwhile to draw a distinction between general-purpose LLMs and special-purpose LLMs. General-purpose LLMs respond to all types of queries, which can also include function-calling queries. Special-purpose LLMs, on the other hand, are fine-tuned to focus on a single task or a small set of tasks. An example of this is NexusRavenV2-13B, which is fine-tuned to provide function-calling services and will always try to return a function call given a user query. Special-purpose LLMs can be smaller and offer better latency than general-purpose LLMs, and because they're fine-tuned for this task, they can often outperform general-purpose models on the tasks they're trained for, such as function calling.

Now, I've used a couple of different terms here, including function calling, tools, and so on. What is really the difference? Function calling is the name given to the LLM capability of forming the string that contains the function call. Tools are the actual functions being called.

With that said, let's make it more concrete and build some tools. Let's start by building local Python tools. Let's write a tool that relies on matplotlib: it takes two inputs, x and y, and plots the coordinates specified by x and y. As an example of using this tool, suppose a user says they want to plot y = 10x for a certain set of values of x. You can use this tool to answer that query by writing a function call that looks roughly like this: you pull out the detail that x is equal to one, two, three, and because of the transformation the user requested, you map those values to 10, 20, 30 on the y axis. While you could do this by hand, we want the LLM to do it for us. The way you do that is to provide the tool description and the user query to the LLM. Let's start with the tool description. You start by providing the prototype of the function you wrote earlier. Raven, the LLM you'll be using, works with Python-formatted functions, so you'll use that format. You'll also add a description of what the tool does; a sketch of what this might look like is shown below.
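To make this concrete, here is a minimal sketch of the plotting tool and the kind of Python-formatted description you hand to Raven. The function name plot_some_points, the exact docstring, and the prompt layout are illustrative assumptions; the lesson's notebook may use slightly different names and may append a model-specific end tag (such as <human_end>) to the prompt.

```python
import matplotlib.pyplot as plt

# The tool itself: a local Python function that plots the given coordinates.
# The name plot_some_points is illustrative.
def plot_some_points(x: list, y: list):
    """
    Plots the points given by the list of x coordinates and the list of y coordinates.
    """
    plt.plot(x, y)
    plt.show()

# The tool description for Raven: the function prototype plus a docstring,
# marked with a Function: tag, followed by a slot for the user query.
raven_prompt = '''
Function:
def plot_some_points(x: list, y: list):
    """
    Plots the points given by the list of x coordinates and the list of y coordinates.
    """

User Query: {query}
'''
```

The {query} placeholder is where the user's request will be inserted in a moment.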
This description tells the LLM what this function, or this tool, is meant to do, and it improves the LLM's reasoning about whether it should use this tool to answer the user query. You will also provide the user query to the LLM. Recall the user query was to plot y = 10x, which you can simply add here, and then you're done: you have a tool description and a user query.

You now call the function-calling LLM, named Raven. You can do this through the query_raven function. The result is a string. The function name comes from the function prototype, and the arguments come from the user query, with the LLM actually doing some math to generate ten times the input values. Cool. Now you can execute the string like this. Great, exactly what we expected. Try this on your own: here is the user query and a call you can modify. Try changing the query to produce different results.

Let's dive deeper into what you just did. Via the prompt, you told the LLM about the function you defined earlier by naming it in a pythonic format, and you provided, in the form of pythonic arguments, the information you want the LLM to extract from the user input. You also provided, via the description, the information necessary to understand when a tool is relevant for a user query. It's important to note that the LLM has been trained to recognize function calls, and this format is very specific to such LLMs. The LLM responds to your user query with a string that you can use to call the function.

Now let's take a look at a more complicated example. Similar to before, you will use a function that relies on matplotlib, but you will make it do more. You will write a function that draws a clown face, parameterized by three arguments that control the color of the clown's face, eyes, and nose. The exact implementation of this function isn't too critical, so allow me to quickly implement it and we can move forward. Similar to earlier, you provide a prompt containing the tool description and the user query to your LLM. Let's say that you want the LLM to draw a pink clown face with a red nose. Format this and create the prompt. Finally, take a look at the prompt you have. As you can see, the function description is identified with the Function tag. From the function description and the user query, the LLM can identify, first, that it should invoke this function, and second, that it can fill in the arguments face_color, eye_color, and nose_color from the information in the user query, here and here.

Now call the function-calling LLM and look at the code string it has generated. It has extracted the necessary information from the user prompt: the face color being pink and the nose color being red. Now run it. The result is a pink clown face with a red nose, which is exactly what we expected. A sketch of this whole example is shown below. Now try drawing your own clown. Here is a framework where you can enter your own query, or change the prompt, and create your own clown. Give it a try.
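Here is a minimal sketch of the clown-face example end to end. The drawing code is a stand-in for the lesson's implementation, and query_raven is assumed to be the helper from the notebook that takes the formatted prompt and returns the generated call string.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

def draw_clown_face(face_color: str = "yellow",
                    eye_color: str = "black",
                    nose_color: str = "red"):
    """
    Draws a clown face. face_color, eye_color, and nose_color control the
    colors of the face, the eyes, and the nose respectively.
    """
    # Stand-in drawing code; the lesson's implementation may differ.
    fig, ax = plt.subplots()
    ax.add_patch(Circle((0.5, 0.5), 0.4, color=face_color))    # face
    ax.add_patch(Circle((0.35, 0.62), 0.05, color=eye_color))  # left eye
    ax.add_patch(Circle((0.65, 0.62), 0.05, color=eye_color))  # right eye
    ax.add_patch(Circle((0.5, 0.5), 0.08, color=nose_color))   # nose
    ax.set_aspect("equal")
    ax.axis("off")
    plt.show()

# Tool description plus user query, in the same Function:-tagged format as before.
raven_prompt = '''
Function:
def draw_clown_face(face_color: str, eye_color: str, nose_color: str):
    """
    Draws a clown face. face_color, eye_color, and nose_color control the
    colors of the face, the eyes, and the nose respectively.
    """

User Query: Draw a pink clown face with a red nose
'''

# query_raven is the lesson's helper (its signature is assumed here).
call = query_raven(raven_prompt)
print(call)   # e.g. draw_clown_face(face_color='pink', nose_color='red')
exec(call)    # you, not the LLM, actually execute the generated call
```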
So Raven is not the only LLM capable of issuing function calls. Let's try OpenAI's function calling on the same example. You'll import the necessary attributes and use the GPT-3.5-turbo model. For this example, you build a client, and you'll also build a helper function that wraps around everything and lets you query the OpenAI API.

The thing is, with the OpenAI API, you provide the description of the tool and its arguments in a JSON format. While the description and the arguments are identical to the pythonic format you had earlier, the way they are presented is a bit different. Please pause the video and take a moment to compare the two. What you notice in the response is that the attributes are in a format that's not directly usable for us, so let's extract the function and the arguments using the following approach. At the conclusion of this, you get a Python call that you can directly execute; a sketch of this whole flow is included at the end of this lesson for reference. With this, you can see how you can utilize OpenAI's function-calling API to also build a clown that addresses the user query from earlier.

What you did was bridge the gap between the unstructured world of the textual training data that the LLM was trained on and the highly structured world of code. In the next lesson, you will focus on variations of function calling, including parallel, multiple, and nested calling. A lot of exciting things are coming up, so let's go on to the next lesson.
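For reference, here is a minimal sketch of the OpenAI flow described above, assuming the current openai Python SDK, the gpt-3.5-turbo model, and the draw_clown_face tool from earlier. The lesson's notebook wraps these calls in a small helper, so the exact shape there may differ.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The same clown tool, now described as a JSON schema instead of a Python prototype.
tools = [
    {
        "type": "function",
        "function": {
            "name": "draw_clown_face",
            "description": "Draws a clown face with configurable colors.",
            "parameters": {
                "type": "object",
                "properties": {
                    "face_color": {"type": "string", "description": "Color of the face"},
                    "eye_color": {"type": "string", "description": "Color of the eyes"},
                    "nose_color": {"type": "string", "description": "Color of the nose"},
                },
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Draw a pink clown face with a red nose"}],
    tools=tools,
)

# The response is structured data rather than a ready-to-run string,
# so pull out the function name and arguments and assemble the call yourself.
tool_call = response.choices[0].message.tool_calls[0]
name = tool_call.function.name
args = json.loads(tool_call.function.arguments)

arg_str = ", ".join(f"{k}={v!r}" for k, v in args.items())
python_call = f"{name}({arg_str})"
print(python_call)  # e.g. draw_clown_face(face_color='pink', nose_color='red')
exec(python_call)   # executes draw_clown_face as defined earlier
```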