Get hands-on with the Llama 4 family of models, understand its Mixture-of-Experts (MoE) architecture, and learn how to build applications with its official API.
Apply Llama 4’s capabilities to multi-image reasoning, image grounding that identifies objects and their bounding boxes, and querying over long-context texts of up to 1 million tokens.
Use Llama 4’s prompt optimization tool to automatically refine system prompts and its synthetic data kit to create high-quality datasets for fine-tuning.
Introducing Building with Llama 4, a short course created in collaboration with Meta and taught by Amit Sangani, Director of Partner Engineering for Meta’s AI team.
Meta’s new Llama 4 release adds three models to the Llama family and introduces a Mixture-of-Experts (MoE) architecture, making the models more efficient to serve.
In this course, you’ll work with two of the three new models introduced in Llama 4. The first is “Maverick,” a 400-billion-parameter model with 128 experts and 17 billion active parameters. The second is “Scout,” a 109-billion-parameter model with 16 experts and 17 billion active parameters. Both Maverick and Scout support long context windows, of up to one million and 10 million tokens respectively; the latter is enough to analyze very large GitHub repositories.
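To give a flavor of what calling these models looks like, here is a minimal quickstart sketch. It assumes an OpenAI-compatible endpoint; the base URL, environment variable, and model ID below are illustrative placeholders, and the official Llama API client used in the course may differ.

```python
# A minimal quickstart sketch, assuming an OpenAI-compatible endpoint.
# The base URL, environment variable, and model ID are illustrative
# placeholders, not necessarily the official Llama API's actual values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llama.example/v1",  # hypothetical endpoint
    api_key=os.environ["LLAMA_API_KEY"],      # hypothetical env var name
)

response = client.chat.completions.create(
    model="Llama-4-Maverick-17B-128E-Instruct",  # assumed model ID; Scout works the same way
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In one sentence, what is a Mixture-of-Experts model?"},
    ],
)
print(response.choices[0].message.content)
```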
In hands-on lessons, you’ll build apps using Llama 4’s long-context and new multimodal capabilities, including reasoning across multiple images and “image grounding,” in which you identify elements and reason within specific image regions. You’ll also learn about Llama’s newest tools: the prompt optimization tool, which automatically improves system prompts, and the synthetic data kit, which generates high-quality datasets for fine-tuning your model.
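As a rough illustration of multi-image reasoning and grounding, the sketch below sends two images alongside a prompt asking for objects and approximate bounding boxes. The OpenAI-style "image_url" message format, model ID, and image URLs are assumptions for this sketch; the course notebooks show the official API's own format.

```python
# A sketch of multi-image reasoning and image grounding, assuming an
# OpenAI-compatible endpoint. Model ID, URLs, and message format are
# illustrative assumptions, not the course's exact code.
import os
from openai import OpenAI

client = OpenAI(base_url="https://api.llama.example/v1",  # hypothetical endpoint
                api_key=os.environ["LLAMA_API_KEY"])      # hypothetical env var name

response = client.chat.completions.create(
    model="Llama-4-Scout-17B-16E-Instruct",  # assumed model ID
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Compare these two photos, then list the objects in the first "
                     "one with approximate bounding boxes as [x1, y1, x2, y2]."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo1.jpg"}},  # placeholder
            {"type": "image_url", "image_url": {"url": "https://example.com/photo2.jpg"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)
```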
In detail, you’ll:
Understand Llama 4’s Mixture-of-Experts (MoE) architecture and how Maverick and Scout differ.
Get started with the official Llama API and Llama 4’s prompt format.
Reason across multiple images and use image grounding to identify objects and their bounding boxes.
Query long-context texts of up to 1 million tokens, including large GitHub repositories.
Refine system prompts automatically with the prompt optimization tool and generate high-quality fine-tuning datasets with the synthetic data kit.
By the end of the course, you’ll confidently choose and call the right Llama 4 model and build production-ready features that span text, images, and massive context.
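For a taste of the long-context work described above, here is a minimal sketch that stuffs a repository's Python files into a single prompt for Scout. The endpoint, paths, and model ID are illustrative assumptions; the course's own notebooks may structure this differently.

```python
# A hand-rolled sketch of long-context querying over a code repository,
# assuming an OpenAI-compatible endpoint. Paths and model ID are placeholders.
import os
import pathlib
from openai import OpenAI

client = OpenAI(base_url="https://api.llama.example/v1",  # hypothetical endpoint
                api_key=os.environ["LLAMA_API_KEY"])      # hypothetical env var name

repo = pathlib.Path("./my-repo")  # placeholder path to a cloned repository
corpus = "\n\n".join(
    f"# FILE: {p}\n{p.read_text(errors='ignore')}"
    for p in sorted(repo.rglob("*.py"))
)

response = client.chat.completions.create(
    model="Llama-4-Scout-17B-16E-Instruct",  # assumed ID; Scout's window reaches 10M tokens
    messages=[
        {"role": "user",
         "content": f"{corpus}\n\nQuestion: Summarize what this codebase does "
                    "and list its main entry points."},
    ],
)
print(response.choices[0].message.content)
```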
The open Llama 4 family of models is an important component of any GenAI developer’s toolkit. If you need an open model to extend, fine-tune, and customize, Llama is a top option, and this course will help you learn what you can build with it.
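For a rough sense of how fine-tuning data can be produced, the sketch below asks the model to draft question-answer pairs from a document. This is a hand-rolled illustration of the idea only, not Meta's synthetic data kit, whose actual workflow is covered in the course; file names, model ID, and the output format are assumptions.

```python
# A rough, hand-rolled illustration of generating fine-tuning data with the
# model itself. This is NOT Meta's synthetic data kit; file names, model ID,
# and the JSONL format are assumptions for the sketch.
import json
import os
from openai import OpenAI

client = OpenAI(base_url="https://api.llama.example/v1",  # hypothetical endpoint
                api_key=os.environ["LLAMA_API_KEY"])      # hypothetical env var name

source_text = open("docs/user_guide.md").read()  # placeholder source document

response = client.chat.completions.create(
    model="Llama-4-Maverick-17B-128E-Instruct",  # assumed model ID
    messages=[{
        "role": "user",
        "content": "From the document below, write 5 question-answer pairs as a "
                   'JSON list of {"question": ..., "answer": ...} objects. '
                   "Return only the JSON.\n\n" + source_text,
    }],
)

# In practice you would validate the output and strip any code fences first.
pairs = json.loads(response.choices[0].message.content)
with open("finetune_data.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps({"prompt": pair["question"], "completion": pair["answer"]}) + "\n")
```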
Anyone who wants hands-on experience building with the Llama 4 family of models.
Introduction
Overview of Llama 4
Quickstart with Llama 4 and API
Image Grounding
Llama 4 Prompt Format
Long-Context Understanding
Prompt Optimization Tool
Synthetic Data Kit
Conclusion
Quiz
Graded Quiz · 10 mins

Course access is free for a limited time during the DeepLearning.AI learning platform beta!