DeepLearning.AI
AI is the new electricity and will transform and improve nearly all areas of human life.

Quick Guide & Tips

💻  Accessing Utils File and Helper Functions

In each notebook on the top menu:

1: Click on "File"

2: Then, click on "Open"

You will be able to see all the notebook files for the lesson, including any helper functions used in the notebook, in the left sidebar.


💻  Downloading Notebooks

In each notebook on the top menu:

1: Click on "File"

2: Then, click on "Download as"

3: Then, click on "Notebook (.ipynb)"


💻  Uploading Your Files

After following the steps shown in the previous section ("File" => "Open"), click the "Upload" button to upload your files.


📗  See Your Progress

Once you enroll in this course (or any other short course on the DeepLearning.AI platform) and open it, you can click on "My Learning" at the top right corner of the desktop view. There, you will be able to see all the short courses you have enrolled in and your progress in each one.

Additionally, your progress in each short course is displayed at the bottom-left corner of the learning page for each course (desktop view).


📱  Features to Use

🎞  Adjust Video Speed: Click the gear icon (⚙) on the video, then choose your desired speed from the Speed option.

🗣  Captions (English and Spanish): Click the gear icon (⚙) on the video, then choose to see captions in either English or Spanish from the Captions option.

🔅  Video Quality: If you do not have access to high-speed internet, click the gear icon (⚙) on the video and, under Quality, choose the quality that works best for your internet speed.

🖥  Picture in Picture (PiP): This feature allows you to continue watching the video when you switch to another browser tab or window. Click the small rectangle shape on the video to enter PiP mode.

√  Hide and Unhide the Lesson Navigation Menu: If you do not have a large screen, you can click the small hamburger icon beside the course title to hide the left-side navigation menu, and click the same icon again to unhide it.


🧑  Efficient Learning Tips

The following tips can help you have an efficient learning experience with this short course and other courses.

🧑  Create a Dedicated Study Space: Establish a quiet, organized workspace free from distractions. A dedicated learning environment can significantly improve concentration and overall learning efficiency.

📅  Develop a Consistent Learning Schedule: Consistency is key to learning. Set aside specific times in your day for study and make it a routine. Consistent study times help build a habit and improve information retention.

Tip: Set a recurring event and reminder in your calendar, with clear action items, to get regular notifications about your study plans and goals.

☕  Take Regular Breaks: Include short breaks in your study sessions. The Pomodoro Technique, which involves studying for 25 minutes followed by a 5-minute break, can be particularly effective.

💬  Engage with the Community: Participate in forums, discussions, and group activities. Engaging with peers can provide additional insights, create a sense of community, and make learning more enjoyable.

✍  Practice Active Learning: Don't just read the material, run the notebooks, or watch the videos passively. Engage actively by taking notes, summarizing what you learn, teaching the concepts to someone else, or applying the knowledge in your own projects.


📚  Enroll in Other Short Courses

Keep learning by enrolling in other short courses. We add new short courses regularly. Visit the DeepLearning.AI Short Courses page to see our latest courses and begin learning new topics. 👇

👉 🔗 DeepLearning.AI – All Short Courses


🙂  Let Us Know What You Think

Your feedback helps us know what you liked and didn't like about the course. We read all your feedback and use it to improve this course and future courses. Please submit your feedback by clicking the "Course Feedback" option at the bottom of the lessons list menu (desktop view).

Also, you are more than welcome to join our community: 👉 🔗 DeepLearning.AI Forum


🎥  Course Introduction (Transcript)

Welcome to this short course, Building Multimodal Search and RAG, built in partnership with Weaviate. RAG, or Retrieval Augmented Generation, systems provide an LLM with context that includes information about your proprietary data and ask the LLM to use that context when generating its response. A common way to build RAG applications is to use a vector database to store your text along with its embeddings. Then, given a query, you retrieve relevant information from the vector database and add that as text context to your prompt. But what if the context you want includes an image from a presentation, an audio clip, or maybe even a video? This course teaches you the technical details behind implementing RAG with such multimodal data.

The first step is to find a way to compute embeddings so that data on related topics embeds similarly, independently of modality. For example, a text about a lion, an image that shows a lion, and a video or audio clip of a lion roaring should all be embedded close to each other, so that a query about lions can retrieve all of this data. In other words, we want the embeddings of concepts to be modality independent. You will learn how this is done, through a process called contrastive learning, in the next video. After building such a multimodal retrieval model, you will use it to retrieve the context related to a user's query, so you can build a multimodal search app where the image of a lion can be used to retrieve video, audio, and text related to that image. And if you have a generative model that supports multimodal inputs, you can take the retrieved results as context and provide them to the model, asking it to respond to the query based on the relevant multimodal contextual information.

I am thrilled that the instructor for this course, Sebastian, is here to explain how multimodal apps work under the hood. Sebastian is Head of Developer Relations at Weaviate, and he is an expert in vector databases who has worked in developer relations for over a decade. In fact, his full-time job is to help developers like you build successfully with vector databases.

Thanks, Andrew. I'm really excited to work with you on this course. In this course, you first learn how to teach a computer to understand multimodal data. Then, you build text-to-any as well as any-to-any search. In the next step, you learn how to combine language and multimodal models into language vision models that understand images as well as text. Next, you focus on multimodal RAG, mixing multimodal search together with multimodal generation and reasoning. And as a final step, you will learn how multimodality is used in industry by implementing different real-life examples, including analyzing invoices and flowcharts.

Many people have worked to create this course. I'd like to thank Zain Hasan from Weaviate, as well as Esmaeil Gargari from DeepLearning.AI, who contributed to this course. That's a lot of exciting topics. Let's go on to the next video to get started.
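The retrieval step described above can be illustrated with a small, self-contained sketch. The `embed()` function and the file names below are hypothetical stand-ins, not part of the course material: a real multimodal embedding model would place related text, images, and audio near each other, and a vector database such as Weaviate would handle storage and nearest-neighbor search instead of the brute-force NumPy comparison shown here.

```python
# Minimal sketch of the multimodal retrieval step of RAG.
# NOTE: embed() is a HYPOTHETICAL placeholder. It returns arbitrary unit vectors,
# so related items will NOT actually land near each other here; a trained
# multimodal model (via contrastive learning) is what makes that happen.
import numpy as np

def embed(item: str, modality: str) -> np.ndarray:
    """Placeholder for a model that maps text/image/audio/video into one shared space."""
    rng = np.random.default_rng(abs(hash((item, modality))) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)  # unit length, so a dot product is cosine similarity

# "Knowledge base": items of different modalities, all embedded into the same space.
corpus = [
    ("lion_photo.jpg",   "image"),   # hypothetical file names, for illustration only
    ("lion_roar.wav",    "audio"),
    ("lion_habitat.txt", "text"),
    ("tax_invoice.pdf",  "image"),
]
vectors = np.stack([embed(item, mod) for item, mod in corpus])

# A text query is embedded into the same space; similarity ranks items of ALL modalities.
query_vec = embed("a lion roaring in the savanna", "text")
scores = vectors @ query_vec
top_k = np.argsort(scores)[::-1][:3]
retrieved = [corpus[i][0] for i in top_k]
print("Retrieved multimodal context:", retrieved)

# Generation step (not shown): pass the retrieved items together with the query
# to a generative model that accepts multimodal inputs and ask it to answer
# using that context.
```

In practice, the brute-force comparison above is replaced by a vector database query, and retrieval quality depends entirely on how well the embedding model aligns the different modalities.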
Building Multimodal Search and RAG
  • Introduction · Video · 3 mins
  • Overview of Multimodality · Video with Code Example · 23 mins
  • Multimodal Search · Video with Code Example · 15 mins
  • Large Multimodal Models (LMMs) · Video with Code Example · 9 mins
  • Multimodal RAG (MM-RAG) · Video with Code Example · 9 mins
  • Industry Applications · Video with Code Example · 7 mins
  • Multimodal Recommender System · Video with Code Example · 14 mins
  • Conclusion · Video · 1 min
  • Course Feedback
  • Community