In this lesson, you're going to try your hand at a few typical applications of function calling. Let's start applying function calling. Traditional LLMs are trained on static datasets and lack the ability to access or process information that emerged after their last training update. This leads to outdated or irrelevant responses, especially in rapidly evolving fields like technology, medicine, or current events. Function calling allows LLMs to retrieve current data from the web or from your company's internal databases in real time. For example, a function can be called to fetch the latest news, stock market updates, or weather forecasts, ensuring that the information provided is up to date. This capability is crucial for applications where timeliness and accuracy of information are essential. Let's dive into a few examples.

Suppose you are interested in events that took place after your LLM finished training. How would you adapt your LLM to this new information? For example, let's ask about a recent product announcement. We will first load the environment variables with load_dotenv. Great. Now you will ask about the R1 device, which was announced well after the conclusion of the LLM's training. Pose the question about the R1 to the LLM and submit your query. You'll notice that the LLM rejects it, saying it doesn't have the necessary information.

Let's try a web search instead. Define a utility called do_web_search, which accepts a user query and the number of results to limit your search to. Within the tool, you will query the search endpoint of a web search API, sending a payload that wraps your API key as well as the user query. Submit a POST request, collate all of the content from the responses into a single string, and return that string. Similar to earlier, you will define a function calling prompt that Raven will use. Define the function annotation for the web search utility you defined earlier, which specifies the function signature as well as the function's docstring. You also provide a one-shot example, a sample user query and the function call you want, that Raven can use as a reference when understanding the tool you built. You will then provide the user query that you've been using since the first cell, and you will get back a Raven function call, which you can execute to get the list of information that your tool returns.

You will then provide the information returned by your tool back to the LLM, along with the user query you've been using since the first cell. Provide this prompt to the LLM to get back a grounded response. Taking a look at the response, notice that it's far richer and contains far more detail, such as the product's dimensions as well as its capabilities. This is information that the LLM had no idea about, since the product was released well past its training date. However, because you provided access to the internet via the search tool, the LLM was able to find this information, digest the results, and provide you with a very concrete answer. Please try this yourself with your own queries.
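The code itself isn't shown in this transcript, so here is a minimal sketch of what the do_web_search utility and its Raven annotation might look like. The search endpoint URL, the SEARCH_API_KEY environment variable, the shape of the JSON response, and the query_raven helper are assumptions for illustration; adapt them to the web search API and helper functions used in your notebook.

```python
import os
import requests

def do_web_search(full_user_prompt: str, num_results: int = 5) -> str:
    """Query a web search API and collate the result snippets into one string.

    The endpoint URL, API key variable, and response fields below are
    illustrative assumptions; swap in the search API from your notebook.
    """
    payload = {
        "api_key": os.environ["SEARCH_API_KEY"],   # assumed env var name
        "query": full_user_prompt,
        "num_results": num_results,
    }
    response = requests.post("https://api.example-search.com/search", json=payload)
    response.raise_for_status()

    # Collate the content of every result into a single string for the LLM.
    results = response.json().get("results", [])
    return "\n".join(item.get("content", "") for item in results)

# Function annotation plus a one-shot example for Raven: the signature,
# the docstring, and a sample query/call pair it can use as a reference.
raven_prompt = '''
Function:
def do_web_search(full_user_prompt: str, num_results: int = 5):
    """
    Searches the web for the user query and returns the collated results.
    """

Example:
User Query: What is the weather in Seattle right now?
Call: do_web_search(full_user_prompt="current weather in Seattle")

User Query: {query}<human_end>
'''

# Assuming a query_raven helper like the one used in earlier lessons:
# call = query_raven(raven_prompt.format(query=question))
# print(call)                  # e.g. do_web_search(full_user_prompt="R1 device announcement")
# search_results = eval(call)  # execute the returned call to gather context
```

The one-shot example is what shows Raven how to fold the user's question into the full_user_prompt argument, which is the "reference for understanding the tool" that the transcript mentions.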
Next, let's take a look at chatting with your SQL database. Oftentimes, for a lot of companies, there are insights that are locked behind company-internal databases and knowledge bases. Because this is data that public portals do not have access to, a lot of the public open-source language models will not be able to give you any meaningful answers to questions that depend on data locked behind these sources. A good way of resolving this is to give your LLM access to your database using function calling. Let's make this more concrete.

First, create a random database. In the utils.py file found within the same folder, you will find a utility called create_random_database. This utility creates a database called toy_database.db, populated with random toy names and random toy prices. It creates a table called toys, which contains the name of each toy as well as its price. You will define another utility called execute_sql within the same utils.py file. This will simply take in some SQL code and execute it against toy_database.db, the database created by the previous utility.

You will import the create_random_database utility and run it. You will then pose a question such as "What is the most expensive item that your company currently sells?" Answering this question depends on data that's locked behind the database you created earlier, so let's try running it and gathering some information. Since the LLM doesn't yet understand the schema you've defined, let's make it concrete and provide the schema to the LLM. This schema simply tells the LLM that you created a table called toys, which contains the name of the toy and the price of the toy. You provide this schema in your Raven prompt along with your function annotation. You also provide the user question, or user query, that you had earlier, and you then run the model to get the output.

Great. You see that the model has returned a SQL call that selects the name and the price from the toys table, orders the rows by price in descending order, and limits the result to one row, which retrieves the most expensive item currently defined in your table. That is exactly the query we want answered. You provide the results drawn from your database back to your LLM, along with the question you had earlier, and you get back a response saying that the most expensive item your company currently sells is the Wonder Robot, which costs nearly $20, which directly answers the question you posed earlier.

But you'll notice that you had to allow the LLM full access to your database: you allowed the LLM to generate raw SQL code directly. What if you didn't want to do that? What if, for security reasons, you didn't want the LLM to generate raw SQL code that you then execute against your database? There is a more gated version that you can use for added security. Instead of asking the LLM to generate native SQL, which could be problematic, we can guard access to our database more carefully and allow for safer interactions. Let's define a few functions to make this a reality; you will define functions that wrap around the operations you have in mind. First, define a function to connect to your database. Then define a function to list all toys in your database, implementing the SQL behind the scenes. Similarly, define functions to find toys by prefix, to find toys in a price range, to get random toys, to get the most expensive toy, and to get the cheapest toy.
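The transcript describes these utilities and the gated helpers without showing their code, so here is a minimal sketch of a few of them, assuming a SQLite database named toy_database.db with the two-column toys table described above. The random toy values, default arguments, and exact helper signatures are illustrative assumptions.

```python
import random
import sqlite3

def create_random_database(db_path: str = "toy_database.db", n_toys: int = 100):
    """Create a 'toys' table and fill it with random toy names and prices."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS toys (name TEXT, price REAL)")
    conn.execute("DELETE FROM toys")  # start from a clean table each run
    toys = [(f"Toy_{i}", round(random.uniform(1, 20), 2)) for i in range(n_toys)]
    conn.executemany("INSERT INTO toys (name, price) VALUES (?, ?)", toys)
    conn.commit()
    conn.close()

def execute_sql(sql: str, db_path: str = "toy_database.db"):
    """Run raw SQL against the toy database and return all rows."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(sql).fetchall()
    conn.close()
    return rows

# --- Gated alternative: fixed templates instead of model-written SQL ---

def list_all_toys(db_path: str = "toy_database.db"):
    return execute_sql("SELECT name, price FROM toys", db_path)

def find_toys_in_price_range(low: float, high: float, db_path: str = "toy_database.db"):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT name, price FROM toys WHERE price BETWEEN ? AND ?", (low, high)
    ).fetchall()
    conn.close()
    return rows

def get_most_expensive_toy(db_path: str = "toy_database.db"):
    return execute_sql("SELECT name, price FROM toys ORDER BY price DESC LIMIT 1", db_path)

def get_cheapest_toy(db_path: str = "toy_database.db"):
    return execute_sql("SELECT name, price FROM toys ORDER BY price ASC LIMIT 1", db_path)
```

The design difference between the two approaches is in what reaches the database: with execute_sql, whatever SQL the model writes is run directly, while the gated helpers only ever run fixed, parameterized statements, so the model chooses which query to run but never writes the SQL itself.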
And finally, you will provide all of the functions you defined earlier to Raven using the function annotation format (a sketch of this format appears at the end of this lesson). Since Raven doesn't have direct access to your database, you need not provide Raven with the database schema you designed earlier; rather, Raven will just use the templates you've provided to answer the user query. Let's run this and see the output. Great! Raven was able to interface with your database and pull out the information necessary to answer the user query, and you can provide the results back to Raven to get the answer to your original query: the Wonder Robot, which costs nearly $20.

Great. Try creating a query that uses one of the other functions; we've defined several, so please try queries that leverage some of the other functions we defined earlier. Now you've seen how you can use Raven and other function calling LLMs, grounded with access to the internet via web search, or to your database via native SQL or the safer templated approach, to get concrete answers to user queries that might depend on company-internal data or the most up-to-date happenings.

In the next lesson, we will start pushing the capabilities of function calling LLMs further by taking a look at structured extraction. We've learned a lot through the entire progression of this course. In our next lesson, we'll put everything we've learned together to create a course project. See you there!
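As a final sketch to close out the templated approach, here is one way the gated helpers could be presented to Raven in the same function annotation format used throughout this lesson. The docstrings, the sample question, and the query_raven helper are assumptions for illustration.

```python
# Annotations for the gated helpers. No database schema is needed here,
# because Raven only ever sees (and calls) these fixed templates.
raven_prompt = '''
Function:
def list_all_toys():
    """
    Returns the name and price of every toy the company sells.
    """

Function:
def find_toys_in_price_range(low: float, high: float):
    """
    Returns all toys whose price falls between low and high.
    """

Function:
def get_most_expensive_toy():
    """
    Returns the single most expensive toy and its price.
    """

Function:
def get_cheapest_toy():
    """
    Returns the single cheapest toy and its price.
    """

User Query: {question}<human_end>
'''

question = "What is the most expensive item that your company currently sells?"

# Assuming a query_raven helper like the one used earlier in the course:
# call = query_raven(raven_prompt.format(question=question))
# print(call)          # e.g. get_most_expensive_toy()
# results = eval(call) # execute the template, then hand the rows back to the LLM
```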