Learn how multimodality works by implementing contrastive learning, and see how it can be used to build modality-independent embeddings for seamless any-to-any retrieval.
Instructor: Sebastian Witalec
Build multimodal RAG systems that retrieve multimodal context and reason over it to generate more relevant answers.
Implement industry applications of multimodal search and build multi-vector recommender systems.
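Contrastive learning, mentioned above as the basis for modality-independent embeddings, trains encoders so that matching image–text pairs land close together while mismatched pairs are pushed apart. A minimal NumPy sketch of a CLIP-style symmetric contrastive loss (the batch size, dimension, and temperature here are illustrative, not the course's exact implementation):

```python
import numpy as np

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of paired embeddings.

    Row i of image_emb and row i of text_emb are a matching pair; every
    other pairing in the batch serves as a negative. Shapes: (batch, dim).
    """
    # L2-normalize so dot products become cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = image_emb @ text_emb.T / temperature  # (batch, batch) similarity matrix
    labels = np.arange(len(logits))                # correct match sits on the diagonal

    def cross_entropy(lg, lb):
        # log-softmax over each row, then pick out the true-pair entries
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lb)), lb].mean()

    # Average the image->text and text->image directions
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2
```

Because both encoders are trained against the same similarity objective, the resulting embedding space is shared across modalities, which is what makes any-to-any retrieval (text query, image result, or vice versa) possible.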
Learn how to build multimodal search and RAG systems. RAG systems enhance an LLM by incorporating proprietary data into the prompt context. Typically, RAG applications use text documents, but what if the desired context includes multimedia like images, audio, and video? This course covers the technical aspects of implementing RAG with multimodal data to accomplish this.
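The retrieve-then-prompt flow described above can be sketched in a few lines. This toy example assumes embeddings have already been produced by a multimodal model; the knowledge-base items, file names, and hand-made 3-d vectors are purely illustrative, not part of the course materials:

```python
import numpy as np

# Toy knowledge base: each item carries a precomputed modality-independent
# embedding. In practice these vectors would come from a multimodal encoder;
# the items and 3-d vectors here are made up for illustration.
knowledge_base = [
    {"modality": "image", "ref": "diagram.png", "caption": "architecture diagram",
     "embedding": np.array([0.9, 0.1, 0.0])},
    {"modality": "text",  "ref": "notes.md",    "caption": "deployment notes",
     "embedding": np.array([0.1, 0.9, 0.0])},
    {"modality": "audio", "ref": "meeting.mp3", "caption": "planning meeting",
     "embedding": np.array([0.0, 0.2, 0.9])},
]

def retrieve(query_embedding, k=2):
    """Return the top-k items by cosine similarity to the query embedding."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(knowledge_base,
                    key=lambda item: cosine(query_embedding, item["embedding"]),
                    reverse=True)
    return ranked[:k]

def build_prompt(question, query_embedding):
    """Assemble an LLM prompt whose context cites the retrieved media."""
    context = "\n".join(
        f"- [{item['modality']}] {item['ref']}: {item['caption']}"
        for item in retrieve(query_embedding)
    )
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

Because the retrieved items can be images, audio, or text, the generation step pairs naturally with a large multimodal model that can reason over the referenced media, which is the MM-RAG pattern the course builds toward.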
As AI systems increasingly need to process and reason over multiple data modalities, learning how to build such systems is an important skill for AI developers.
This course equips you with the key skills to embed, retrieve, and generate across different modalities. By gaining a strong foundation in multimodal AI, you’ll be prepared to build smarter search, RAG, and recommender systems.
This course is for anyone who wants to start building their own multimodal applications. Basic Python knowledge, as well as familiarity with RAG, is recommended to get the most out of this course.
Introduction
Overview of Multimodality
Multimodal Search
Large Multimodal Models (LMMs)
Multimodal RAG (MM-RAG)
Industry Applications
Multimodal Recommender System
Conclusion