Welcome to Building with Llama 4, built in partnership with Meta and taught by Amit Sangani, who is Director of Partner Engineering for Meta's AI team. The Llama family of open models has already enabled many developers around the world to build AI applications. Now, with the Llama 4 Mixture-of-Experts models, you'll find deployment easier than before, and you can achieve more advanced multimodal understanding by prompting over multiple images, and even carry out image grounding. The new Llama 4 models also have a much larger context window: one million tokens for the Maverick model, and up to a massive 10 million tokens for Scout, which is useful, for example, for analyzing even fairly large code bases. Finally, you'll learn about the new software releases that came with Llama 4, namely tools to optimize your prompts and to generate synthetic data.

That's right, Andrew. With Llama 4, we now have models that are natively multimodal and support a truly long context of up to 10 million tokens in our Scout model. In this course, you will get hands-on experience using Llama 4 through Meta's official API as well as other inference providers. You will build applications that reason over visual content, detect objects, and answer image-grounding questions with precision. Then you will learn how to use long contexts to process entire books and research papers without needing to chunk the data.

Meta has also introduced Llama Tools, a growing collection of open-source utilities designed to help developers build much more powerful applications with the Llama models. In this course, you will also build with two of the newest Llama tools. First is the Llama Prompt Optimization tool, which automatically improves your prompts. It uses the DSPy optimizer under the hood: you specify an evaluation metric and a dataset of questions and desired responses, and the optimizer then automatically improves your prompt based on that metric. Second is the Llama Synthetic Data Kit, which lets you ingest, create, curate, and save high-quality training data in multiple formats, and removes much of the otherwise manual work needed to create datasets for fine-tuning.

Many people have worked to create this course. I'd like to thank Jeff Tang, Justin Lee, Sanyam Bhutani, and Kshitiz Malik from Meta, as well as Esmaeil Gargari and Brandon Brown from DeepLearning.AI.

The next lesson is an overview of Llama 4 and the Llama API. Kshitiz Malik from Meta's AI research team will join us in parts of this lesson to explain the architecture of Llama 4. In particular, the next lesson gives a technical description of the mixture-of-experts architecture of Llama 4, and explains why only a small subset of weights is active for any input, and thus why these models are so efficient to serve. Please go learn about this in the next video.
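As a preview of the kind of hands-on work the course covers, here is a minimal sketch of prompting Llama 4 over multiple images through an OpenAI-compatible chat completions endpoint. The base URL, model id, environment variable, and image URLs below are illustrative assumptions, not the course's exact setup; substitute the values for whichever inference provider you use.

```python
# Minimal sketch: a multi-image prompt to Llama 4 via an OpenAI-compatible
# endpoint. base_url, model id, env var, and image URLs are assumptions;
# replace them with your provider's actual values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llama.com/compat/v1/",  # assumption: provider-specific
    api_key=os.environ["LLAMA_API_KEY"],          # assumption: your key's env var
)

response = client.chat.completions.create(
    model="Llama-4-Maverick-17B-128E-Instruct-FP8",  # assumption: model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Compare these two images and describe what changed."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/before.jpg"}},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/after.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

The same client also exercises the long-context capability mentioned above: because Scout accepts up to 10 million tokens, you can pass an entire book or code base as one message rather than chunking it first.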
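And here is a rough sketch of the metric-driven loop that the Llama Prompt Optimization tool automates, written directly against the open-source DSPy library it uses under the hood: define a metric over a dataset of questions and desired responses, then let an optimizer improve the prompt. The model id, the two-example toy dataset, and the choice of the BootstrapFewShot optimizer (which improves the prompt by bootstrapping few-shot demonstrations) are illustrative assumptions.

```python
# Rough sketch of optimizing a prompt against an evaluation metric with DSPy,
# the library the Llama Prompt Optimization tool builds on. The model id and
# toy dataset are assumptions for illustration only.
import dspy

# Point DSPy at a Llama 4 model (any LiteLLM-supported provider works).
lm = dspy.LM("openrouter/meta-llama/llama-4-scout", api_key="...")  # assumption
dspy.configure(lm=lm)

# A simple question-answering program whose prompt we want to improve.
qa = dspy.Predict("question -> answer")

# A tiny dataset of questions and desired responses (normally much larger).
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question"),
]

# The evaluation metric: here, a simple containment check on the desired answer.
def exact_match(example, prediction, trace=None):
    return example.answer.lower() in prediction.answer.lower()

# The optimizer augments the prompt with bootstrapped demonstrations
# that maximize the metric over the training set.
optimizer = dspy.BootstrapFewShot(metric=exact_match)
optimized_qa = optimizer.compile(qa, trainset=trainset)

print(optimized_qa(question="What is 3 + 3?").answer)
```

You will work through this workflow, and the Synthetic Data Kit's ingest, create, curate, and save steps, in the tool-focused lessons later in the course.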