In this lesson, you'll be taking what you've learned so far about getting structured responses from an LLM using Pydantic data models, and you'll combine that with a new ability to use Pydantic models in the definition of tools that you can pass in your LLM API call. When it comes to the ways in which Pydantic gets used in LLM workflows, these are the two big ones: structured responses and tool calling. So what you'll do in this lesson is combine these to implement reliable structured output and tool calling in your application. Let's jump into the code. There's going to be a lot of code flying by in this lesson, so I first want to give you a high-level visual picture of where we're going, and hopefully that'll make it a little easier to follow along. The starting point is the same: you're getting some user input that includes a name, an email, and some kind of message like "what's my order status?" You'll first take that user input and validate it using your UserInput model. Then you'll pass your validated user input into an LLM call where you're asking the LLM to return a valid instance of your CustomerQuery model. And then you'll pass that to another LLM API call with some tool definitions. The way you're going to code that up is to start with two new Pydantic data models: FAQLookupArgs and CheckOrderStatusArgs. You'll see how those get implemented in the notebook, but basically these define the parameters that you'll use to make a tool call. Those parameters go into two new functions that you'll define: lookup_faq_answer, which takes FAQLookupArgs as input, and check_order_status, which takes CheckOrderStatusArgs as input. You'll see how those functions get defined in the notebook as well. After that, you can create some tool definitions, where each tool includes a name that matches the Python function for that tool call, a description, and the parameters, and for the parameters you're going to pass in the model_json_schema of the Pydantic data models you defined above. You'll then pass those tool definitions in the API call. That lets the LLM know which tools are available, and the job of the LLM is to return parameters for a tool call if that seems like a good next step. You'll take that response, first use the Pydantic data models you defined, FAQLookupArgs or CheckOrderStatusArgs, to validate that the tool parameters returned by the LLM are what they need to be in order to call those functions, and then go ahead and call those tools and collect the outputs. Finally, you'll take all of that and bundle it up into one more LLM call, where you're passing in your customer query and your tool outputs, and you've defined another Pydantic data model called SupportTicket, which will be the final output from that last LLM call. So you're going to be putting together a lot of different pieces, and this is the big picture. All right, in this notebook there's going to be a fair amount of setup and a fair amount of code flying by, so bear with me. I'll try to move quickly through the parts that you've seen before and pause to explain each time there's something new. The first thing is just a whole bunch of imports; this is basically everything you've seen in the previous lessons, and now you're going to bring it all together. Next, you're going to define your UserInput data model.
But in this case, you're making it a little bit more complex by changing the order_id field. So order_id, which used to be an integer, is now going to be an optional string field, where the default is still None, and the description says there's a new format: three capital letters, then a dash, and then five numbers. The reason this is an interesting example is that oftentimes you'll have fields like this where validation is not straightforward, and that's where you need to bring in the field_validator tool from Pydantic. With field_validator, you can set your validator on the order_id field and then define a function that, in this case, just uses some regular expression parsing to see if the pattern in the input matches the expectation. If it doesn't match, you can raise a ValueError, letting the user know what the problem is and what the expected format is. This is just one example of how you might handle custom validation directly within your model, and it's a way in which you can enforce security on input data. For example, if you had input coming in and you wanted to make sure it's not some kind of SQL injection or some other malicious input, you can validate whatever you've got coming into your data model with completely custom validation logic like this. So you can define that model, and then you're going to define your CustomerQuery data model, just as you did before, inheriting from UserInput and with these four additional fields. Next, you're going to define a validate_user_input function, just as you did before, so I won't say much about that. Then you'll define a create_customer_query function that takes in validated user data in the form of a JSON string and calls an LLM to populate an instance of CustomerQuery. This is just a slight twist on what you were doing before, essentially the same thing: you're calling an LLM using, in this case, an output_type of CustomerQuery. Here we're using the Pydantic AI framework and Google's Gemini model, but this is the same thing you were doing before, calling an LLM to take user input and populate an instance of your CustomerQuery model. In this case you're returning response.output, which, when you're calling Pydantic AI, is the validated instance of your CustomerQuery data model. Then you can try out what you've got so far. Here you have some user input in the form of a JSON string; you can validate the user input, then create a customer query and print out what you've got. All right: user input validated, CustomerQuery generated, and it looks like you got a valid instance of your CustomerQuery data model out the other end. After that, you're going to define some new Pydantic models. First, you'll define a model called FAQLookupArgs that inherits from BaseModel and has two fields, query and tags. These two fields are what will be expected by an FAQ lookup function that you'll define in just a moment. Then you'll define another model called CheckOrderStatusArgs that also inherits from BaseModel and has an order_id field, just like what you had before with the field_validator on the order_id field, and an email. These are going to be the parameters that get passed into a check_order_status function, so let's define that.
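To make that concrete, here's a minimal sketch of how these models might look in Pydantic. The regex pattern follows the format described above, but the specific field names, types, and descriptions (especially the four extra fields on CustomerQuery) are assumptions for illustration and may differ from the notebook's code.

```python
import re
from typing import List, Literal, Optional

from pydantic import BaseModel, EmailStr, Field, field_validator  # EmailStr needs the pydantic[email] extra


class UserInput(BaseModel):
    name: str
    email: EmailStr
    query: str
    order_id: Optional[str] = Field(
        default=None,
        description="Order ID in the format ABC-12345: three capital letters, a dash, five digits",
    )

    @field_validator("order_id")
    @classmethod
    def validate_order_id(cls, value: Optional[str]) -> Optional[str]:
        # Allow the default None; otherwise enforce the expected pattern with a regex
        if value is None:
            return value
        if not re.fullmatch(r"[A-Z]{3}-\d{5}", value):
            raise ValueError(
                "order_id must match the format ABC-12345: three capital letters, a dash, five digits"
            )
        return value


class CustomerQuery(UserInput):
    # Four additional fields -- names and types here are illustrative
    priority: str
    category: Literal["refund_request", "information_request", "other"]
    is_complaint: bool
    tags: List[str]


class FAQLookupArgs(BaseModel):
    query: str = Field(..., description="Customer question to look up in the FAQ database")
    tags: List[str] = Field(default_factory=list, description="Keywords to help match an FAQ entry")


class CheckOrderStatusArgs(BaseModel):
    order_id: str = Field(..., description="Order ID in the format ABC-12345")
    email: EmailStr

    @field_validator("order_id")
    @classmethod
    def validate_order_id(cls, value: str) -> str:
        # Reuse the same pattern check as UserInput.order_id
        if not re.fullmatch(r"[A-Z]{3}-\d{5}", value):
            raise ValueError("order_id must match the format ABC-12345")
        return value
```

Any value that fails the regex check raises a ValidationError when the model is constructed, which is exactly the behavior the two args models will rely on later when they're used to vet tool-call parameters.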
And then you're also going to define some fake database data: in this case, just some data that you can actually look up with an FAQ lookup function or an order status lookup function. Here you have a tiny fake FAQ database with a question, answer, and keywords for each FAQ, and down here you have a little order database with three different orders, each with a status, estimated delivery, purchase date, and email. You don't need to worry about the details of what's in here; it's just there to complete the example. Then you can define the lookup_faq_answer function, which takes as input an instance of the FAQLookupArgs model that you created above. This is just a function that goes through that fake database of FAQs and searches based on keywords in the user input. If it finds an answer, it returns the answer; if not, it says, sorry, I couldn't find an FAQ answer for your question. Then you can define a check_order_status function, which takes as input a valid instance of the CheckOrderStatusArgs Pydantic model and attempts to look up an order in that little fake order database using the order ID, with the email used to confirm a matching order. Again, the details here are not something you need to worry about; this is all just in service of building up an example of tool calling in your customer support system. And then finally we get to the interesting part. You can define the tools that you'll use in your API call to an LLM like this. Up here you have a first tool with the name lookup_faq_answer, which is the function that you'll call if in fact this tool is invoked, and the parameters that the function expects are the FAQLookupArgs. You're passing this information in your API call to an LLM, and this is where your Pydantic data model comes in, defining the parameters that the tool expects. Then you have another tool for check_order_status, where the parameters are the schema of that CheckOrderStatusArgs model. As a final step before getting into the LLM calls, you're going to define a couple more Pydantic models. This is now going to be your SupportTicket model, which is the ultimate output of the system. First you've got an OrderDetails model that just has a few fields: status, estimated_delivery, and note. Then you're going to use OrderDetails inside of SupportTicket. So SupportTicket is a model that inherits from CustomerQuery, so it has all the attributes of CustomerQuery, and then you're adding some new things: a recommended_next_action, which is another Literal with a certain set of acceptable values (escalate_to_agent, send_faq_response, send_order_status, or no_action_needed); an order_details field, where you're going to pull in that OrderDetails model, which is an example of how you can construct a field inside of a Pydantic data model using another Pydantic data model; an faq_response field that can hold the response if you looked up an FAQ answer; and a creation_date. So let's define those.
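Here's a hedged sketch of what those tool definitions and the nested SupportTicket model might look like, building on the models sketched above and assuming the OpenAI function-calling format for the tools list; the descriptions and the optional/default choices are placeholders rather than the notebook's exact code.

```python
from datetime import datetime
from typing import Literal, Optional

from pydantic import BaseModel

# Tool definitions in the OpenAI function-calling format. The parameters schema
# for each tool comes straight from the Pydantic args models via model_json_schema().
tool_definitions = [
    {
        "type": "function",
        "function": {
            "name": "lookup_faq_answer",
            "description": "Look up an answer to a general customer question in the FAQ database.",
            "parameters": FAQLookupArgs.model_json_schema(),
        },
    },
    {
        "type": "function",
        "function": {
            "name": "check_order_status",
            "description": "Check the status of an order using its order_id and the customer's email.",
            "parameters": CheckOrderStatusArgs.model_json_schema(),
        },
    },
]


class OrderDetails(BaseModel):
    status: str
    estimated_delivery: str
    note: str


class SupportTicket(CustomerQuery):
    # One Pydantic model (OrderDetails) used as the type of a field in another
    recommended_next_action: Literal[
        "escalate_to_agent", "send_faq_response", "send_order_status", "no_action_needed"
    ]
    order_details: Optional[OrderDetails] = None
    faq_response: Optional[str] = None
    creation_date: Optional[datetime] = None
```

The key idea is that the same Pydantic model does double duty: its JSON schema describes the tool's parameters to the LLM, and the model itself will validate whatever parameters the LLM sends back.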
And now, with all that infrastructure built up, you're ready to define your LLM call. What you're going to do is define this decide_next_action_with_tools function, which takes as input an instance of your CustomerQuery data model. The first thing to do is define a system prompt. To do that, you'll first extract the model_json_schema from the SupportTicket model and then construct a prompt that says: you're a helpful customer support agent, and your job is to determine what support action should be taken for the customer, based on the query and the expected fields in the SupportTicket schema below. The idea here is that there's input from the user, it gets converted via a first LLM call into a CustomerQuery data model, and then this is passed on to another LLM call to decide what to do: call a tool, or construct a support ticket of a different form. The prompt goes on to say that if more information on a particular order_id, or an FAQ response, would be helpful in responding to the user query and can be obtained by calling a tool, then the appropriate tool should be called to get that information. And here at the bottom you're passing in the support_ticket_schema as the expectation for where you're ultimately trying to go. Now, this is not saying "give me back this structure in the response," because what this LLM call is going to do is decide whether or not to call tools; but it helps the LLM decide whether a tool call is a good idea in this case. So your system_prompt is all this stuff up here, and your user prompt is the model_dump of the customer_query model that you passed in, which is all the information that was in the user input as well as the additional information from the first LLM call when you populated the CustomerQuery model. Then you're going to call OpenAI's gpt-4o, passing in those tool definitions in the tools parameter, which lets the LLM know what tools are available to call as a next step. So let's go ahead and define that. And now you're ready for action. You can call the decide_next_action_with_tools function and pass in customer_query. Remember, what you're going to get back is the message response from the LLM, the tool_calls, if any were extracted from that message, and the input prompts as well. The print statements here just print out those tool_calls if in fact the LLM is suggesting a tool call. So you can run that. What you get back in this case is that the content of the message, the place where you were previously getting a free-form text response from the LLM, is just null, but down here in tool_calls you can see that there's an indication to call the check_order_status tool. You can print that out a little more cleanly down here: the tool call wants to invoke the check_order_status function, passing in an order_id and an email. So the LLM is saying, go ahead and call the check_order_status function with these parameters.
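As a point of reference, here's a rough sketch of what that call might look like with the OpenAI Python client. The prompt wording is paraphrased from the lesson, and the notebook's version also returns the input prompts, so treat the exact signature and return values here as assumptions.

```python
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def decide_next_action_with_tools(customer_query: CustomerQuery):
    # Put the SupportTicket schema into the system prompt so the model knows what
    # the final ticket needs, then let it decide whether calling a tool would help.
    support_ticket_schema = json.dumps(SupportTicket.model_json_schema(), indent=2)
    system_prompt = (
        "You are a helpful customer support agent. Your job is to determine what "
        "support action should be taken for the customer, based on the query and "
        "the expected fields in the SupportTicket schema below. If more information "
        "on a particular order_id or an FAQ response would help respond to the user "
        "query and can be obtained by calling a tool, call the appropriate tool.\n\n"
        f"SupportTicket schema:\n{support_ticket_schema}"
    )
    user_prompt = str(customer_query.model_dump())
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        tools=tool_definitions,  # tells the model which tools it may call
    )
    message = response.choices[0].message
    return message, message.tool_calls
```

Because tools are offered but no structured output is forced here, the model is free to reply with plain text, propose one tool call, or propose several.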
The next thing you're going to do is set up a function to call tools if in fact the LLM wants to call one. You can define a function called get_tool_outputs, passing in the tool_calls from the LLM response, and then, if there was a tool call, go ahead and call the lookup_faq_answer function if that's the tool indicated, or the check_order_status function if that's the one the LLM indicated. As a first step in the event of a tool call, you pass the tool call's function arguments into your FAQLookupArgs Pydantic data model, in this case, and use the model_validate_json method to check whether the parameters passed by the LLM are in fact valid for that tool call. So your Pydantic data model came into play in the API call itself, as part of the tool definition that tells the LLM what parameters are valid, and now the first thing you do when you get parameters back from the LLM is use that same model to validate that those parameters are going to work. The next step is to go ahead and call that function, passing in those arguments, and return the results. And then the same thing for check_order_status: first validate that the parameters returned by the LLM match the expectations of your model, then call the function. In either case, you return the outputs of the function. So you can run this cell to define the get_tool_outputs function and use it on the tool calls that you got back in the last LLM response. You can see that the agent requested a call to the Check Order Status tool, and the Check Order Status tool returned this order ID and status from the database and noted that the order ID and email match. Great. So the tool_outputs are this, and you're able to pass those outputs on to the next step in the system.
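Here's a minimal sketch of how get_tool_outputs might look, assuming the lookup_faq_answer and check_order_status functions from the notebook and the tool_calls object shape returned by the OpenAI client; the output format is illustrative.

```python
def get_tool_outputs(tool_calls):
    # For each tool call the LLM proposed, validate its JSON arguments with the
    # same Pydantic model that defined the tool's schema, then call the function.
    outputs = []
    if not tool_calls:
        return outputs
    for tool_call in tool_calls:
        name = tool_call.function.name
        raw_args = tool_call.function.arguments  # JSON string produced by the LLM
        if name == "lookup_faq_answer":
            # Raises pydantic.ValidationError if the LLM's arguments don't fit the model
            args = FAQLookupArgs.model_validate_json(raw_args)
            outputs.append({"tool": name, "output": lookup_faq_answer(args)})
        elif name == "check_order_status":
            args = CheckOrderStatusArgs.model_validate_json(raw_args)
            outputs.append({"tool": name, "output": check_order_status(args)})
    return outputs
```

What to do when that validation fails is a design question the lesson raises at the end; one common option, not shown here, is to catch the ValidationError and feed it back to the LLM for a retry.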
The next and final step in your system, and I know this has been a lot, so just stick with me here, we're almost there, is to define a generate_structured_support_ticket function that takes in your customer_query, the message that was the response from the previous LLM call, and those tool_outputs, if in fact any tools were called. In this case, just for fun, since you used Pydantic AI and Gemini in the first LLM call to create the customer query, and OpenAI in the second LLM call to decide what tools to call, here you're going to use Anthropic with instructor for the third and final LLM call in your system to generate a structured support ticket. Of course, there's no reason you need to use three different frameworks and three different LLM providers to get all this done; it's just to demonstrate that when you're using Pydantic, it really doesn't matter which framework or LLM provider you're working with, because the way you pass your Pydantic models and get things done is essentially the same. So here in generate_structured_support_ticket, you first unpack those tool results, or note that no tool calls were made if that's the case, and construct a prompt that says: you're a support agent; use all the information below to generate a support ticket as a validated Pydantic model; the customer query is this, the LLM message from the previous call was this, and the tool results are this. Then you set this up to get a response from Anthropic using Claude 3.7, passing in the prompt and setting the response_model to the Pydantic model you defined for SupportTicket. Before you return the response, you add one more field called creation_date, which is just datetime.now(). This is to demonstrate that you can populate the fields in your Pydantic model any way you like; in this case, you're grabbing the current datetime and sticking it onto the creation_date field in your Pydantic data model before returning it. So let's define that (there's a sketch of what this call might look like at the end of the lesson). Now you're ready to create a support ticket by passing the customer_query, the message from the previous LLM response, and the tool_outputs into your generate_structured_support_ticket function, and then you'll print that out. And it worked. Now you have a valid instance of your SupportTicket data model, where you have all the information from the user input, you have the information from the CustomerQuery model, and then you have additional information, in this case from a tool call to the check_order_status function. You have a recommended_next_action of send_order_status, no FAQ response in this case because that's not what this ticket was about, and you have a creation date. So now you're ready to put the whole pipeline together and try it out with some brand new user input data. Here we have Joe User again, and Joe says, I'm really not happy with the product I bought, gives an order ID, and so we'll define that as some new user input data. Then you can run this user input through everything you've defined so far: first you validate the user JSON, create a customer query, then decide on the next action with tools, grab those tool outputs if in fact any tool calls were recommended, and then generate a structured support ticket. So let's take a look at what happens when you run this whole thing end to end. Okay, great: user input validated, CustomerQuery generated, the agent requested a call to the Check Order Status tool, the Check Order Status tool returned some information about an order status, and you got out a validated instance of your SupportTicket model, where you have a recommended next action to escalate to an agent and some information about the order. So that's pretty cool; it all worked. And you can change the user input and see what happens. Maybe something like, how do I reset my password? When you run that: user input validated, customer query generated, and now the agent requested a call to the Lookup FAQ tool and got something back about a forgotten password and how to get that sorted out. You also have the agent requesting a call to the Check Order Status tool, and this is because there's an order ID in the user input, and the prompt says to always use that tool if there's an order ID. So in this case you called both tools and then generated a support ticket with all that information. I'd encourage you to play around and try this out with different user input, valid and invalid data, and as you're probably starting to sense by now, there's a whole lot more to the story here. Like, what happens if the LLM returns invalid parameters for your tool call? How would you handle that? In these lessons we've certainly not covered all the possibilities when it comes to how you can use Pydantic in your LLM workflows, but I hope at this point you're feeling like you've got some ideas and you're ready to explore further with building out Pydantic models for your own LLM workflows. What you have now is a system where you've built in data validation with Pydantic models at every stage: from user input, to the validation of a customer query, to defining the parameters for tool calling, and in the final output. You've seen nested models and field validation, and then, just for fun on top of it all, you built this pipeline using three different LLM providers and three different frameworks for calling your LLM APIs. At this point, I hope you're feeling like you've got an interesting set of tools for building LLM workflows using Pydantic models. Nice work.
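As that final reference, here's a hedged sketch of how generate_structured_support_ticket might look using instructor with the Anthropic client, followed by the end-to-end chaining. The model identifier, max_tokens value, prompt wording, and the exact signatures of the earlier pipeline functions are assumptions based on the lesson's description, not the notebook's code.

```python
from datetime import datetime

import instructor
from anthropic import Anthropic

# instructor patches the Anthropic client so response_model can be a Pydantic model
anthropic_client = instructor.from_anthropic(Anthropic())  # assumes ANTHROPIC_API_KEY is set


def generate_structured_support_ticket(customer_query: CustomerQuery, message, tool_outputs):
    tool_results = (
        "\n".join(str(output) for output in tool_outputs) if tool_outputs else "No tool calls were made."
    )
    prompt = (
        "You are a support agent. Use all the information below to generate a "
        "support ticket as a validated Pydantic model.\n\n"
        f"Customer query: {customer_query.model_dump_json()}\n"
        f"LLM message from the previous call: {message.content}\n"
        f"Tool results: {tool_results}"
    )
    ticket = anthropic_client.messages.create(
        model="claude-3-7-sonnet-latest",  # assumed model identifier
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
        response_model=SupportTicket,  # instructor validates the output against this model
    )
    ticket.creation_date = datetime.now()  # populate a field directly before returning
    return ticket


# Chaining the whole pipeline, with function signatures assumed from the lesson.
# user_input_json is the JSON string of user input defined earlier in the notebook.
valid_user = validate_user_input(user_input_json)
customer_query = create_customer_query(valid_user.model_dump_json())
message, tool_calls = decide_next_action_with_tools(customer_query)
tool_outputs = get_tool_outputs(tool_calls)
support_ticket = generate_structured_support_ticket(customer_query, message, tool_outputs)
print(support_ticket.model_dump_json(indent=2))
```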