In this lesson, you will learn how to quickly get started with Llama 4 using Meta's official API. Let's dive in. The Llama 4 API is an easy way to use Llama 4 without worrying about running your own infrastructure. You can use it either through a REST-style API or via Meta's Python client. It's also compatible with OpenAI client libraries, which makes switching between APIs very smooth. And of course, you get access to the most up-to-date Llama models, like Scout and Maverick, right away. In the notebook, you will have your first interaction with the Llama API. You will learn how to set up the API client, send a prompt, and view the results using different text and image prompt examples. You will also build a translator chatbot that works across all 12 languages Llama 4 supports. All right, let's get started.

Let's begin by importing our API keys and libraries. In this lesson, you're going to use the Llama API, which requires a Llama API key and a Llama base URL. On the DeepLearning.AI platform, all the keys are set for you, so you don't need to do anything. You also need to import the Llama API client.

You will call the Llama API several times in this lesson, so it helps to have a reusable helper function. This llama4 function receives your prompt, the URLs of any images, and the model, which defaults to Llama 4 Scout. Based on the prompt and the image URLs you pass in, the content is formed; the client is created using the Llama API client; the message built from that content is sent to the client; and finally, the response is received and returned. Let's now call the function and ask it to give us a brief history of AI in three sentences. Here is the response.

The Llama API is also compatible with popular libraries such as the OpenAI library. Let's create the same llama4 function, this time using the OpenAI-compatible client. For this, we import OpenAI and, instead of the Llama API client, create an OpenAI client, passing it the Llama API key and the base URL. The rest is the same. Let's call this function with the same prompt, and we get the same response, because the temperature is set to zero.

You can also pass images to the llama4 function and ask Llama about them. Let's see this in an example. First, we define a display_image function that takes an image URL and displays it. Now let's prompt Llama about an image: we pass the image URL and the prompt "What's in this image?" to the llama4 function and get a response saying the image depicts three llamas, along with more detail about the image's content. Unlike Llama 3.2, Llama 4 can natively accept multiple images, with good eval results on up to five images. Let's take a second image and, before passing it to Llama, display both images: we have two different pictures of llamas. Now you can ask Llama to compare the two images by passing their URLs as a list. Here is the response: both images depict llamas, and these are the key differences and some similarities. You will use Llama on more image use cases in the next lesson. The sketches below put the pieces we just walked through into code.
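Putting the steps above together, here is a minimal sketch of the llama4 helper, assuming the llama-api-client package, LLAMA_API_KEY and LLAMA_BASE_URL environment variables, and a Scout model id of "Llama-4-Scout-17B-16E-Instruct-FP8"; the exact model string, message format, and response fields in your notebook may differ.

```python
import os
from llama_api_client import LlamaAPIClient

def llama4(prompt, image_urls=None, model="Llama-4-Scout-17B-16E-Instruct-FP8"):
    """Send a text prompt, with optional image URLs, to the Llama API."""
    # Build the message content: the text prompt first, then any images.
    content = [{"type": "text", "text": prompt}]
    for url in image_urls or []:
        content.append({"type": "image_url", "image_url": {"url": url}})

    # The key and base URL are read from the environment; on the
    # DeepLearning.AI platform they are already set for you.
    client = LlamaAPIClient(
        api_key=os.environ["LLAMA_API_KEY"],
        base_url=os.environ["LLAMA_BASE_URL"],
    )

    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": content}],
        temperature=0,  # deterministic output, so repeated calls match
    )
    return response.completion_message.content.text

print(llama4("Give me a brief history of AI in three sentences."))
```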
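And here is the same helper over the OpenAI-compatible interface, again as a sketch; it reuses the same key and base URL variables, though the compatibility endpoint may use a slightly different base URL in your setup.

```python
import os
from openai import OpenAI

def llama4_openai(prompt, image_urls=None, model="Llama-4-Scout-17B-16E-Instruct-FP8"):
    """The same helper, but using the OpenAI client library."""
    content = [{"type": "text", "text": prompt}]
    for url in image_urls or []:
        content.append({"type": "image_url", "image_url": {"url": url}})

    # Point the OpenAI client at the Llama API instead of api.openai.com.
    client = OpenAI(
        api_key=os.environ["LLAMA_API_KEY"],
        base_url=os.environ["LLAMA_BASE_URL"],
    )

    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": content}],
        temperature=0,
    )
    # The OpenAI library exposes the reply under choices, not completion_message.
    return response.choices[0].message.content
```

With the temperature fixed at zero, both helpers return the same answer for the same prompt.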
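The image examples then reduce to calls like the following. The display_image helper uses IPython, and the two URLs are hypothetical placeholders, since the notebook's actual image links are not given in the transcript.

```python
from IPython.display import Image, display

def display_image(url):
    # Render the image inline in the notebook.
    display(Image(url=url))

# Placeholder URLs; substitute the image links from your notebook.
url_1 = "https://example.com/llama_photo_1.jpg"
url_2 = "https://example.com/llama_photo_2.jpg"

# Single image: show it, then ask Llama about it.
display_image(url_1)
print(llama4("What's in this image?", image_urls=[url_1]))

# Multiple images: Llama 4 accepts them natively, so pass the URLs as a list.
display_image(url_1)
display_image(url_2)
print(llama4("Compare these two images.", image_urls=[url_1, url_2]))
```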
Llama 4 Maverick and Scout support up to 1 million and 10 million tokens of context length, respectively, a large jump from previous Llama models. Let's see this with the free e-book A Tale of Two Cities, which runs to about 193,000 tokens. Here is a question: what is the last utterance at the end of the book, and also the paragraph before that? Let's use Llama 4 Maverick for this question, passing in the last 300,000 characters of the book, and here's the response. You will work on more long-context use cases in later lessons.

Another major capability of Llama 4 is its text understanding across 12 languages. Let's ask Llama, "How many languages do you understand?", and have it answer in all the languages it speaks. Here is the list of 12 languages Llama 4 can understand. Now let's build a quick chatbot that acts like a real-time translator. Our polyglot chatbot takes the source and target languages and forms the system message you see here. We use the OpenAI client library, pass the Llama API key and the base URL, and get the client. The message is then formed using the content of the system prompt, the response is received, and it is returned. Let's create this translator with English as the source language and French as the target.

Let's start the chat by saying hello. As you can see, the language is recognized as English, "hello" is translated into the target language, French, and then, in response to the first person's hello, the second person's answer comes back in English. Let's call this again with a second question. Again, the language is recognized, the question is translated into the target language, French, and the response to the question is given in the recognized language. Please feel free to try the translator chatbot with other languages.

This sums up our quick overview of the Llama API; sketches of the long-context call and the translator chatbot follow below for reference. In the next lesson, you will work on several cool image understanding and image grounding use cases. All right. See you there.
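For reference, here is a minimal sketch of the long-context call described above, reusing the llama4 helper; the Project Gutenberg URL and the Maverick model id are assumptions, so swap in the book file and model string from your notebook.

```python
import urllib.request

# "A Tale of Two Cities" from Project Gutenberg (assumed URL); the full
# text is roughly 193,000 tokens.
url = "https://www.gutenberg.org/cache/epub/98/pg98.txt"
book = urllib.request.urlopen(url).read().decode("utf-8")

question = (
    "What is the last utterance at the end of the book, "
    "and also the paragraph before that?"
)

# Send the last 300,000 characters of the book plus the question to
# Llama 4 Maverick.
print(llama4(
    book[-300_000:] + "\n\n" + question,
    model="Llama-4-Maverick-17B-128E-Instruct-FP8",  # assumed model id
))
```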
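And here is a minimal sketch of the polyglot translator chatbot, assuming the same environment variables; the exact system prompt in the notebook may be worded differently.

```python
import os
from openai import OpenAI

def translator_chatbot(user_message, source_language="English", target_language="French"):
    """Real-time translator: detect the language, translate, and reply."""
    # A sketch of the system message the lesson describes.
    system_prompt = (
        f"You are a real-time translator between {source_language} and "
        f"{target_language}. For each message: identify its language, "
        f"translate it into {target_language}, and then reply to it in "
        f"the language it was written in."
    )

    client = OpenAI(
        api_key=os.environ["LLAMA_API_KEY"],
        base_url=os.environ["LLAMA_BASE_URL"],
    )
    response = client.chat.completions.create(
        model="Llama-4-Scout-17B-16E-Instruct-FP8",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(translator_chatbot("Hello!"))
```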