Welcome to DSPy: Build and Optimize Agentic Apps, built in partnership with Databricks. In this course, you'll learn how to build and optimize your genAI application using DSPy. When you start building complex AI apps, one big challenge is writing good prompts for an LLM. I'll often try dozens of prompts, tweaking words and changing formats, hoping to get better results. But this process takes a lot of time, and the prompts often break when you change the underlying LLM. DSPy streamlines and optimizes this whole process. You define what inputs your model needs and what outputs it returns, and also provide a dataset of inputs and desired outputs. DSPy can then optimize your AI programs to get better performance with much less manual work. I'm delighted to introduce the instructor, Chen Qian, who is a software engineer at Databricks and co-leads the development of DSPy.

Thanks, Andrew. In this course, you will learn how to build AI programs with DSPy. DSPy has two main building blocks: signatures and modules. When you are building a component of an application, the signature tells the system what inputs and outputs to expect from the LLM component. For example, a sentiment analysis program would have a sentence as an input and an integer representing the sentiment as the output. A module then uses that signature to actually call a language model and get results.

Sometimes an app doesn't work, and we are unsure why. In this course, you will also learn to use MLflow tracing to see exactly what is going on at each step of the application: what data was used, what tools were called, what the model returned, and where things broke. With just one line of code, you can turn on this tracing feature for your app. One neat use of DSPy is in optimizing agentic workflows.
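To make the signature/module split concrete, here is a toy sketch in plain Python. This is NOT the real DSPy API; the names `ToySignature`, `ToyPredict`, and `stub_lm` are invented purely for illustration, and the language model is stubbed out so the example is self-contained. The idea it illustrates is the one described above: a signature declares what goes in and what comes out, and a module uses that declaration to drive an LM call and parse the result.

```python
# Toy illustration of DSPy's signature/module idea (NOT the real DSPy API).
# A signature declares inputs and outputs; a module uses the signature to
# build a prompt, call a (here, stubbed) language model, and parse the reply.

from dataclasses import dataclass


@dataclass
class ToySignature:
    """Declares the LM component's input and output fields."""
    inputs: dict   # field name -> type, e.g. {"sentence": str}
    outputs: dict  # field name -> type, e.g. {"sentiment": int}


class ToyPredict:
    """A 'module': turns a signature into a prompt and parses the reply."""

    def __init__(self, signature, lm):
        self.signature = signature
        self.lm = lm  # any callable: prompt string -> completion string

    def __call__(self, **kwargs):
        prompt = (
            f"Given inputs {kwargs}, produce fields "
            f"{list(self.signature.outputs)} as 'name: value' lines."
        )
        completion = self.lm(prompt)
        # Parse "name: value" lines back into typed output fields.
        result = {}
        for line in completion.splitlines():
            name, _, value = line.partition(":")
            name = name.strip()
            if name in self.signature.outputs:
                result[name] = self.signature.outputs[name](value.strip())
        return result


# A stub LM that always calls the input positive (sentiment = 1).
def stub_lm(prompt: str) -> str:
    return "sentiment: 1"


sentiment_sig = ToySignature(inputs={"sentence": str}, outputs={"sentiment": int})
classify = ToyPredict(sentiment_sig, lm=stub_lm)
print(classify(sentence="I loved this movie."))  # -> {'sentiment': 1}
```

Note how the signature carries no prompt text at all; in DSPy, that separation is what lets the framework rewrite and optimize the actual prompt wording for you.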
If you have a complex workflow that takes an input and uses multiple steps of processing to generate the outputs, maybe using an LLM in multiple of these steps, then DSPy's optimizer can take your agent, along with an evaluation dataset and metric, and based on that automatically search for better prompts for all those steps. I've seen it sometimes build really high-quality few-shot prompts from your data, in a way that is far better than I, as a human, could likely have achieved by hand. In this course, you'll use DSPy's optimizers to optimize the prompts in a RAG app that answers Wikipedia questions.

Many people have worked to create this course. I'd like to thank Omar Khattab, Cathy Yin, Krista Opsahl-Ong, and Tomu Hirata from Databricks. From DeepLearning.AI, Esmaeil Gargari and Brendan Brown also contributed to this course. The first lesson will be an introduction to DSPy. One surprising thing about DSPy is how few lines of code it takes to implement, and that it essentially automates the prompt engineering process. Please go on to the next video to see how this works.
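The optimization loop described above can be sketched in miniature. The following is NOT DSPy's actual optimizer (which bootstraps few-shot examples and proposes instructions far more cleverly); it is a hedged toy in plain Python, with an invented `optimize_prompt` helper and a stub program, showing only the core idea: score candidate prompts against a labeled dataset with a metric, and keep the best one.

```python
# Toy sketch of prompt optimization (NOT DSPy's real optimizer): evaluate
# candidate instructions against a labeled dataset and a metric, and keep
# the highest-scoring candidate.

def optimize_prompt(candidates, dataset, metric, run_program):
    """Return (best_prompt, best_score) over the candidate instructions."""
    best_prompt, best_score = None, float("-inf")
    for prompt in candidates:
        # Average metric score of this prompt over the whole dataset.
        score = sum(
            metric(run_program(prompt, x), gold) for x, gold in dataset
        ) / len(dataset)
        if score > best_score:
            best_prompt, best_score = prompt, score
    return best_prompt, best_score


# A stub "program" standing in for an LLM call: it only behaves correctly
# when the instruction mentions uppercasing (purely to keep this runnable).
def run_program(prompt, x):
    return x.upper() if "uppercase" in prompt else x


dataset = [("cat", "CAT"), ("dog", "DOG")]          # (input, desired output)
exact_match = lambda pred, gold: float(pred == gold)  # the metric
candidates = ["Echo the input.", "Echo the input in uppercase."]

best, score = optimize_prompt(candidates, dataset, exact_match, run_program)
print(best, score)  # -> Echo the input in uppercase. 1.0
```

In real DSPy, you would instead hand the optimizer your whole multi-step program, and it would search for better prompts for every LLM step at once, guided by the same ingredients: a dataset and a metric.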