I'm delighted to introduce the instructor for this course, Daniel Beutel, who is one of the creators of the open source Flower framework. Thanks, Andrew. I'm excited to be here. In this course, you'll explore federated learning using Flower, a popular open source framework with a large community of AI researchers and developers. Flower will enable you to build a federated learning system and run distributed machine learning training jobs in a privacy-enhancing way.

Let's say you want to train a model on medical images, but those images are distributed across different hospitals. Due to privacy regulations, there may be no way to centrally collect all of those images in one place. With federated learning, you can train on distributed data sources without having to collect all the data centrally. Instead of moving the data to the training, you move the training to the data: you run distributed training jobs in all the hospitals, and only after that centralize the model parameters, never the raw data itself. Through this, you end up with a model that benefits from all the data across all the hospitals, without the raw data ever needing to leave any hospital.

In this course, you'll explore this using the MNIST digits dataset, in a setting where your dataset is missing some of the digits and others hold different datasets with different digits missing. With federated learning, you train a model on the handwritten digit data you have, while others train on their own data. Then everyone sends their updated model parameters to a central server, and the server aggregates the updates from all sources, improving a global model without accessing the individual data sources. This improved global model can then be shared with everyone.

That's what really excites me about federated learning. It lets us build powerful, accurate models while keeping data under the control of the users and organizations that own it. By training models locally on individual devices or servers, we can use a wide range of data without needing to share the actual data centrally. This approach is great for fields like healthcare and finance, where data is sensitive and needs to be protected. Federated learning enables us to train models for tasks that previously didn't have a sufficient amount or diversity of training data. Hopefully, some of the examples we'll go through will inspire you to try federated learning in your own projects and bring the advances of AI to more domains.

In this course, you'll learn how the federated training process works and how to tune a federated learning system. You'll also learn how to think about data privacy in federated learning, and how to manage bandwidth usage in a federated learning process. You will also learn about differential privacy, often referred to as DP, a technique that protects individual data points like messages or images. In this course, we'll describe a technique where you add a little noise to the model weights to obscure any potentially private, sensitive details that might have been in the training set, while still allowing the model to learn. You will get an overview of the different components of federated learning systems. You will learn how to customize and tune them, and how to orchestrate the training process to build better models. Get ready to dive into federated learning with Flower.
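To make the aggregation step described above concrete before diving in, here is a minimal sketch of federated averaging in plain Python with NumPy. The function name `federated_average` and the toy two-client setup are my own illustration; Flower implements this aggregation for you through its built-in strategies, so you won't write it by hand in the course.

```python
import numpy as np

def federated_average(client_updates):
    """Aggregate client model parameters, weighted by local dataset size.

    client_updates: list of (weights, num_examples) tuples, where
    `weights` is a list of NumPy arrays (one array per model layer).
    """
    total_examples = sum(n for _, n in client_updates)
    num_layers = len(client_updates[0][0])
    aggregated = []
    for layer in range(num_layers):
        # Weighted sum of this layer's parameters across all clients,
        # normalized by the total number of training examples.
        layer_sum = sum(w[layer] * n for w, n in client_updates)
        aggregated.append(layer_sum / total_examples)
    return aggregated

# Example: two simulated "hospitals" with different amounts of local data.
client_a = ([np.ones((2, 2)), np.zeros(2)], 100)   # 100 local examples
client_b = ([np.zeros((2, 2)), np.ones(2)], 300)   # 300 local examples
global_weights = federated_average([client_a, client_b])
print(global_weights[0])  # pulled toward client_b, which has 3x the data
```

Weighting by the number of local examples means clients with more data influence the global model proportionally more, which is the standard FedAvg design choice.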
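And as a preview of the differential privacy idea mentioned above, here is a rough sketch of clipping a model update and adding Gaussian noise before it is shared. The function `privatize_update` and the specific clipping and noise values are assumptions chosen for illustration, not the exact mechanism the course or Flower uses.

```python
import numpy as np

def privatize_update(weights, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Clip an update's global L2 norm, then add Gaussian noise.

    Bounding each client's contribution and adding calibrated noise is
    what makes it hard to recover any single training example from the
    shared model parameters.
    """
    rng = np.random.default_rng() if rng is None else rng
    flat = np.concatenate([w.ravel() for w in weights])
    norm = np.linalg.norm(flat)
    scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if too large
    noisy = []
    for w in weights:
        clipped = w * scale
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
        noisy.append(clipped + noise)
    return noisy

# A client would privatize its update before sending it to the server.
update = [np.full((2, 2), 3.0), np.full(2, -3.0)]
private_update = privatize_update(update, rng=np.random.default_rng(0))
```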
Many people have worked to create this course. I'd like to thank Mohammad Naseri, Ruth Galindo, and Javier Fernandez-Marques from Flower Labs, as well as Diala Ezzeddine and Geoff Ladwig from DeepLearning.AI. In the first lesson, you'll start with the motivation behind using federated learning. You'll explore the challenges of traditional centralized machine learning, where data has to be collected in one place, and you'll see how federated learning solves this by distributing the training. That sounds great. Let's go on to the next video and get started.