DeepLearning.AI
AI is the new electricity and will transform and improve nearly all areas of human life.

💻   Accessing Utils File and Helper Functions

In each notebook, on the top menu:

1: Click on "File"

2: Then, click on "Open"

In the left sidebar, you will see all the notebook files for the lesson, including any helper functions used in the notebook. See the following image for the steps above.


💻   Downloading Notebooks

In each notebook, on the top menu:

1: Click on "File"

2: Then, click on "Download as"

3: Then, click on "Notebook (.ipynb)"


💻   Uploading Your Files

After following the steps in the previous section ("File" => "Open"), click on the "Upload" button to upload your files.


📗   See Your Progress

Once you enroll in this course, or any other short course on the DeepLearning.AI platform, and open it, you can click on "My Learning" at the top right corner of the desktop view. There, you will see all the short courses you have enrolled in and your progress in each one.

Additionally, your progress in each short course is displayed at the bottom-left corner of the learning page for each course (desktop view).


📱   Features to Use

🎞   Adjust Video Speed: Click on the gear icon (⚙) on the video, then choose your desired speed from the Speed option.

🗣   Captions (English and Spanish): Click on the gear icon (⚙) on the video, then choose English or Spanish captions from the Captions option.

🔅   Video Quality: If you do not have access to high-speed internet, click on the gear icon (⚙) on the video, then from the Quality option, choose the quality that works best for your internet speed.

🖥   Picture in Picture (PiP): This feature allows you to continue watching the video when you switch to another browser tab or window. Click on the small rectangle shape on the video to go to PiP mode.

√   Hide and Unhide Lesson Navigation Menu: If you do not have a large screen, you can click on the small hamburger icon beside the title of the course to hide the left-side navigation menu. Click the same icon again to unhide it.


🧑   Efficient Learning Tips

The following tips can help you have an efficient learning experience with this short course and other courses.

🧑   Create a Dedicated Study Space: Establish a quiet, organized workspace free from distractions. A dedicated learning environment can significantly improve concentration and overall learning efficiency.

📅   Develop a Consistent Learning Schedule: Consistency is key to learning. Set aside specific times in your day for study and make it a routine. Consistent study times help build a habit and improve information retention.

Tip: Set a recurring event and reminder in your calendar, with clear action items, to get regular notifications about your study plans and goals.

☕   Take Regular Breaks: Include short breaks in your study sessions. The Pomodoro Technique, which involves studying for 25 minutes followed by a 5-minute break, can be particularly effective.

💬   Engage with the Community: Participate in forums, discussions, and group activities. Engaging with peers can provide additional insights, create a sense of community, and make learning more enjoyable.

✍   Practice Active Learning: Don't just read the material, run the notebooks, or watch the videos. Engage actively by taking notes, summarizing what you learn, teaching the concepts to someone else, or applying the knowledge in your own projects.


📚   Enroll in Other Short Courses

Keep learning by enrolling in other short courses. We add new short courses regularly. Visit the DeepLearning.AI Short Courses page to see our latest courses and begin learning new topics. 👇

👉👉 🔗 DeepLearning.AI – All Short Courses


🙂   Let Us Know What You Think

Your feedback helps us know what you liked and didn't like about the course. We read all of your feedback and use it to improve this course and future courses. Please submit your feedback by clicking on the "Course Feedback" option at the bottom of the lessons list menu (desktop view).

Also, you are more than welcome to join our community 👉👉 🔗 DeepLearning.AI Forum


Welcome to this short course, Open Source Models with Hugging Face 🤗, built in partnership with Hugging Face. Thanks to open source software, if you want to build an AI application, you might be able to grab an image recognition component here, an automatic speech recognition model there, and an LLM somewhere else, and string them together very quickly to build a new application. Hugging Face has been transformative for the AI community by making many open source models easily accessible, so that anyone can do this. This has been a huge accelerator for how people build AI applications.

In this course, you'll learn directly from the Hugging Face team how to do this and build cool applications yourself, possibly faster than you might have previously imagined possible. For example, you'll use models to perform automatic speech recognition, or ASR, to transcribe speech into text 🔊📝. Then there are text-to-speech models, or TTS, that go the other way and convert text into audio. 📝🔊 These models, combined with an LLM, give you the building blocks you can use to build your own voice assistant.

You'll also see how to use Hugging Face's Transformers library to quickly pre-process inputs as well as post-process outputs of machine learning models: for example, pre-processing audio, like controlling the audio sampling rate in the ASR or TTS examples I just mentioned, as well as pre-processing or post-processing data such as images and text. The notion of grabbing open source components to build something quickly has been a paradigm shift in how AI applications are built. In this course, you'll get a feel for how to do this yourself. I'm delighted to introduce our instructors for this course.
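The idea of stringing components together to make a voice assistant can be sketched in plain Python: each stage is just a callable, and the assistant is their composition. The stub functions below are hypothetical stand-ins for real ASR, LLM, and TTS models (in practice, each would be a Transformers pipeline); only the chaining pattern is the point.

```python
def asr(audio: bytes) -> str:
    """Stub: a real ASR model would transcribe speech to text."""
    return "what is the weather tomorrow"

def llm(prompt: str) -> str:
    """Stub: a real LLM would generate a reply to the request."""
    return f"Here is an answer to: {prompt!r}"

def tts(text: str) -> bytes:
    """Stub: a real TTS model would synthesize audio from text."""
    return text.encode("utf-8")

def voice_assistant(audio: bytes) -> bytes:
    # speech -> text -> reply -> audio
    return tts(llm(asr(audio)))

reply_audio = voice_assistant(b"...wav bytes...")
```

Swapping any stub for a real model leaves the chain unchanged, which is exactly why open source building blocks compose so quickly.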
Younes Belkada, a machine learning engineer at Hugging Face 🤗, works on the open source team at the intersection of many open source tools developed by Hugging Face, such as Transformers, parameter-efficient fine-tuning or PEFT, and TRL, which stands for Transformers Reinforcement Learning. Marc Sun, also a machine learning engineer at Hugging Face 🤗, is part of the open source team, where he contributes to libraries such as the Transformers library and the Accelerate library. Maria Khalusova is a member of technical staff at Hugging Face 🤗; she leads the educational projects at Hugging Face and contributes to cross-library efforts to make state-of-the-art machine learning more accessible to everyone. Thanks Andrew, we're excited to work with you and your team on this 😃.

First, you will create your own chatbot with open source LLMs. 💬 You will use an open source LLM from Meta; the same code can apply to more powerful open source LLMs when you have access to more powerful hardware. You will also use open source models to translate text from one language to another and to measure the similarity between two sentences.

Next, you'll use Transformers for processing audio. 🔊 What audio tasks do you think a voice assistant might be performing when you ask it for, say, a weather forecast? It knows to wake up when you say its name: that's audio classification. It converts your speech to text to look up your request: that's automatic speech recognition. And it replies to you: that's text-to-speech. In this course, you'll classify arbitrary sounds, transcribe speech recordings, and generate speech from text.

The computer vision applications of Transformers are plentiful. 🔍🖼️ You'll learn how to detect objects in images and to segment images into regions, a task called image segmentation. For example, you can apply this code to detect that a puppy exists in an image, and also to segment the part of the puppy that makes up its ears.
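As a concrete illustration of loading an open source model for one of these tasks, here is a minimal, hedged sketch using the `pipeline` helper from the Transformers library. The checkpoint name is an illustrative assumption, not necessarily the one used in the course, and the import is guarded so the sketch reads cleanly even where the library is not installed.

```python
# Minimal sketch: loading a translation model via transformers.pipeline.
# The model name below is an illustrative assumption.
try:
    from transformers import pipeline
except ImportError:  # transformers not installed in this environment
    pipeline = None

def build_translator(model="facebook/nllb-200-distilled-600M"):
    """Return a translation pipeline (downloads weights on first use)."""
    if pipeline is None:
        raise RuntimeError("Install the library first: pip install transformers")
    return pipeline("translation", model=model)

# Usage (commented out because it downloads model weights):
# translator = build_translator()
# translator("Hello", src_lang="eng_Latn", tgt_lang="fra_Latn")
```

The same one-line `pipeline(task, model=...)` pattern applies to the other tasks in this course, such as "automatic-speech-recognition" or "object-detection".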
After you've learned to handle text, audio, and image tasks, you can combine these models in a sequence to handle more complex tasks. For example, if you want your app to help someone with a visual impairment by describing an image to them, how could you implement that? In this course, you'll apply object detection to identify the objects, image classification to describe those objects in text, and then speech generation to narrate the names of those objects.

You'll also use a model that can describe an image in text, and models that can take in more than one data type as input; these are called multimodal models. 📝🔊🖼️ For example, you'll build a visual question answering application in which you can send a model an image as well as a question about that image, and your application can then return an answer to that question based on the image. You'll also use the Gradio library to deploy an AI application to Hugging Face Spaces, so that anyone can use your application by making API calls over the internet.

Of course, the goal of all of these examples isn't just for you to be able to build these specific examples; it's for you to learn about all these building blocks so that you'll be able to combine them yourself into your own unique applications.

Many people have worked to create this course. I'd like to thank, on the Hugging Face side, the entire Hugging Face team for their review of the course content, 🌟 as well as the Hugging Face community for their contributions to the open source models. ✨ From DeepLearning.AI, Eddy Shyu also contributed to this course 😀.

In the first lesson, you'll learn how to navigate thousands of models on the Hugging Face Hub to find the right one for your task, and how to use the pipeline object from the Transformers library to start building your applications. That sounds super exciting. Let's go on to the next video and get started! 😃
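Deploying such an application with Gradio can be sketched as follows. `describe_image` here is a hypothetical placeholder for the real detection/captioning/speech chain, and the import is guarded; the `gr.Interface(fn, inputs, outputs)` pattern is the standard Gradio entry point.

```python
# Sketch: wrapping a model function in a Gradio web UI.
try:
    import gradio as gr
except ImportError:  # gradio not installed in this environment
    gr = None

def describe_image(image):
    """Hypothetical placeholder for the real image-description chain."""
    return "An image description would be generated here."

def build_demo():
    """Build a web interface around describe_image."""
    if gr is None:
        raise RuntimeError("Install the library first: pip install gradio")
    return gr.Interface(fn=describe_image, inputs="image", outputs="text")

# Usage: build_demo().launch()
# Pushing the same app to a Hugging Face Space makes it callable
# by anyone over the internet.
```
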
Open Source Models with Hugging Face
  • Introduction (Video, 5 mins)
  • Selecting models (Video, 5 mins)
  • Natural Language Processing (NLP) (Video with Code Example, 9 mins)
  • Translation and Summarization (Video with Code Example, 5 mins)
  • Sentence Embeddings (Video with Code Example, 5 mins)
  • Zero-Shot Audio Classification (Video with Code Example, 9 mins)
  • Automatic Speech Recognition (Video with Code Example, 15 mins)
  • Text to Speech (Video with Code Example, 2 mins)
  • Object Detection (Video with Code Example, 11 mins)
  • Image Segmentation (Video with Code Example, 16 mins)
  • Image Retrieval (Video with Code Example, 7 mins)
  • Image Captioning (Video with Code Example, 5 mins)
  • Multimodal Visual Question Answering (Video with Code Example, 4 mins)
  • Zero-Shot Image Classification (Video with Code Example, 6 mins)
  • Deployment (Video with Code Example, 11 mins)
  • Conclusion (Video, 1 min)
  • Course Feedback
  • Community