You saw in the previous lesson how to get real-time weather data using an API. But APIs can actually do more than just get data. They can also help you access online AI tools like OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, and many more. For example, you've been using get_llm_response, which uses OpenAI's ChatGPT API. OpenAI's large language model is running on compute servers on the internet, and you can use an API to ask ChatGPT questions and get an answer back. In this lesson, you'll see the guts of the get_llm_response function that you've been using throughout these past few courses. Let's take a look.

So here's how you can use OpenAI's API. I've already installed the openai package. If it's not yet on your computer, you might need to run pip install openai, but I've already done that, so I won't do it here. And the OpenAI function from the openai package is what actually powers the get_llm_response function that you've been using in the helper functions, or in the aisetup package. Let's take a look at what get_llm_response actually does. I know this looks like a lot of code, and it's not important for you to understand every single line of it, but I want to quickly walk through it, just to give you a sense of what modern, cutting-edge API usage might look like.

So client.chat.completions.create is a function provided by OpenAI, and this line here selects the large language model that we want to use. Here we're using the GPT-4o mini model. And when you use a large language model, one of the things we often do is tell the LLM how to respond. This is called the system message, and here we told the language model we want it to act like an AI assistant. We'll see an example later of what happens if you change this system message. And then we also specify the prompt, which can be a question like "What is the capital of France?" It turns out large language models have a parameter called the temperature parameter, which controls how random the response is. I often set it to zero in my code if I don't want it to be too random; that's the lowest possible temperature you can use with the large language model. And then this line on top gets the result from the large language model, which we sometimes call a completion. Then you extract the text of the response and, lastly, return the text of the response.

Again, please don't worry if you don't understand every single line of this code. I just want you to get a sense of what it looks like. If you actually were to use this yourself, it'd be fine to take this chunk of code, copy-paste it into your own code, and just run it that way. In fact, if you go to the OpenAI website, which has online documentation for its API, you'll probably find a code sample that looks a lot like this. And so you can just take the code sample from OpenAI's documentation, or from Anthropic's Claude documentation or Google's Gemini documentation or whatever you're using, and get it to work in your own code without needing to worry about what every single line is doing. And if you want to understand what each line of this does, as usual, you can ask an AI language model, and it'll walk you through it line by line. Just one note: AI language models have learned by reading text off the internet, and so they will be better at understanding well-known APIs and older APIs that have been around on the internet for a while.
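To make that walkthrough concrete, here's a minimal sketch of what a get_llm_response-style function can look like with OpenAI's current Python SDK. The exact code inside the aisetup helper may differ slightly; treat this as an illustration rather than the package's actual source.

```python
# A minimal sketch of a get_llm_response-style helper, based on the
# walkthrough above; the actual code in the aisetup helper may differ.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable by default

def get_llm_response(prompt):
    # Ask the model for a chat completion
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # select the large language model
        messages=[
            # The system message tells the LLM how to respond
            {"role": "system", "content": "You are a helpful AI assistant."},
            # The user message carries the actual prompt
            {"role": "user", "content": prompt},
        ],
        temperature=0.0,  # lowest temperature: the least random responses
    )
    # Extract and return just the text of the response
    return completion.choices[0].message.content

print(get_llm_response("What is the capital of France?"))
```

If you run this with a valid API key, the call should return something like "The capital of France is Paris."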
They may not be as good at understanding less popular APIs, or APIs that have just been created and released by someone on the internet.

Now it turns out that to use the OpenAI API, you need a secret API key from OpenAI's website, and I'm going to use the load_dotenv function to get this API key securely. And then this is a line of code that you can also get from the OpenAI documentation to initialize the OpenAI service, or the OpenAI client. So let me just run that. And I'll now define the get_llm_response function that we saw just now. And now if I send this prompt, "What is the capital of France?", it will generate a response. And because I was using my API key here, this will charge a fraction of a cent to my account.

Just to show you something fun: if you say "You are a sarcastic AI assistant," let me redefine this, and if I run it now, let's see what it says. Oh boy, that is pretty sarcastic. Yes, it's Paris, but with plenty of attitude thrown in alongside the answer. And that's what the system message does: it tells the language model how you want it to behave. And maybe to show you something else fun: if I were to run this a few times, you get, you know, usually the same response, because temperature is zero, and so this makes it not very random. If I were to set the temperature to a higher number like 1.0, which is a pretty high temperature, that makes the response more random, and every time I run it, I end up with, in this case, a different sarcastic response. This temperature, which can vary from 0 to 2, lets you control the degree of randomness you want in the responses. Many people use a value around 0.7, which is a common choice, and this will give you a little bit of randomness, maybe a little bit of the appearance of creativity, without excessive randomness. So I encourage you to play with this. Try out different temperature values and run this multiple times, or try different system messages. Maybe try to build an AI assistant that always responds in rhymes, or that only speaks a certain language like Spanish or Japanese or something else, and see what results you get.

Lastly, if you want to run all this locally on your own computer, I want to share just a few details on how to get the API key into the code. So, after pip installing aisetup (remember, the command would be !pip install aisetup), you can import from aisetup the functions authenticate, print_llm_response, and get_llm_response. The authenticate function will need to be passed an API key that you can get from the OpenAI website. And this is a paid service, so it may ask you for a credit card, and other large language model providers will also usually ask you for a credit card to get an API key to access their services. But if you do this, this then authenticates your program to OpenAI's API service using your secret API key, so you can use it to generate large language model responses.

Having your API key stored inside your code isn't the most secure way to do this, though. So a somewhat better, more recommended way to get the API key would be like this. These are the three new lines of code: you store the API key in a file called .env, this line loads the key from that file, and then it authenticates with the API key that was more securely loaded from the .env file. Oh, and you also need to import load_dotenv from the dotenv package in order to run this code, as well as one more library called os.
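Here's a small sketch of the system message and temperature experiment described above, reusing the client from the earlier sketch:

```python
# A sketch of the system-message and temperature experiment described
# above, reusing the `client` from the earlier sketch.
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # A different system message changes the model's behavior
        {"role": "system", "content": "You are a sarcastic AI assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    temperature=1.0,  # higher temperature (valid range 0-2): more random output
)
print(completion.choices[0].message.content)  # likely a different reply each run
```

And here is roughly what the more secure local setup can look like. The aisetup function names (authenticate, print_llm_response, get_llm_response) come from this lesson, but the exact authenticate signature is an assumption on my part, so check the package's documentation if it behaves differently:

```python
# A sketch of the more secure local setup: the API key lives in a .env
# file (containing a line like OPENAI_API_KEY=sk-...) instead of in your code.
import os
from dotenv import load_dotenv  # install with: pip install python-dotenv
from aisetup import authenticate, print_llm_response

load_dotenv()  # read variables from the .env file into the environment
authenticate(os.getenv("OPENAI_API_KEY"))  # pass the securely loaded key
print_llm_response("What is the capital of France?")
```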
I don't want to go into all the details of this right now, but if you ask a language model, it will be able to walk you through all of it. And if you try this out and get any error messages, I would copy-paste the error messages into the AI chatbot to get it to help you debug them. So if you had Python set up on your own computer, rather than running over the internet on a DeepLearning.AI website, this is the code you would use.

Right after this video there's an optional reading item that will show you some options for how to install Python and Jupyter Notebooks locally on your computer. There are different ways of doing so, and it can be a little bit different on a Mac versus a Windows machine, so the reading lays out a few options so that, if you want, you can install these things and have them run on your own computer. It's completely optional for you to read this item, and also to install these things, but I do enjoy having Jupyter Notebooks and Python running on my own laptop, and I think you might too, because it's actually really cool to just run things on your computer, even without internet access, like if you're on an airplane or something. Whether you go through that item or not, we'll then wrap up with one last video where we'll talk about where you might go next after finishing this course.