Learn how to prompt and customize your LLM responses using Amazon Bedrock.
Instructor: Mike Chambers
Summarize audio conversations by first transcribing an audio file and passing the transcription to an LLM.
Deploy an event-driven audio summarizer that runs as new audio files are uploaded using a serverless architecture.
In this course, you’ll learn how to deploy a large language model-based application into production using serverless technology.
A serverless architecture enables you to deploy your applications quickly, without having to manage or scale the infrastructure they run on.
You’ll learn to summarize audio files by pairing an LLM with an automatic speech recognition (ASR) model. Through hands-on exercises, you’ll build an event-driven system that automatically detects incoming customer inquiries, transcribes them with ASR, and summarizes them with an LLM using Amazon Bedrock.
After completing the course, you’ll know how to transcribe and summarize audio files with an ASR model and an LLM, and how to deploy that pipeline with a serverless, event-driven architecture. You’ll work with the Amazon Titan model, though in practice Amazon Bedrock lets you use any model you prefer.
Start building serverless LLM applications with Amazon Bedrock and deploy your apps in just days.
Anyone who is familiar with Python and AWS services and wants to learn to quickly deploy LLM apps with Amazon Bedrock.
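The event-driven flow described above can be sketched as an AWS Lambda handler: an S3 upload triggers the function, the audio is transcribed, and the transcript is summarized by invoking a Titan model through Amazon Bedrock. This is a minimal illustration, not the course's lab code; the function names, prompt wording, and model ID are assumptions, and the transcription step is elided.

```python
import json


def build_titan_request(transcript: str, max_tokens: int = 512) -> str:
    """Build the JSON request body for an Amazon Titan Text invocation.

    Hypothetical helper for illustration; the prompt wording and
    generation config values are assumptions, not course code.
    """
    prompt = (
        "Summarize the following customer call transcript "
        "in a few sentences:\n\n" + transcript
    )
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": 0.0,
        },
    })


def object_from_s3_event(event: dict) -> tuple[str, str]:
    """Extract (bucket, key) from the S3 event that triggers the Lambda."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]


def lambda_handler(event, context):
    """Triggered when a new audio file lands in the S3 bucket."""
    # boto3 ships with the AWS Lambda Python runtime.
    import boto3

    bucket, key = object_from_s3_event(event)

    # Transcription step elided: in the course, the audio at
    # s3://{bucket}/{key} is first transcribed with an ASR model,
    # and the resulting transcript text is loaded here.
    transcript = "..."

    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",  # assumed Titan model ID
        body=build_titan_request(transcript),
    )
    result = json.loads(response["body"].read())
    return {"summary": result["results"][0]["outputText"]}
```

Splitting the request-building and event-parsing into pure helpers keeps the AWS calls isolated in the handler, which makes the rest of the pipeline easy to test locally.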
Special thanks to Vocareum for powering the labs provided in this course!
Introduction
Your first generations with Amazon Bedrock
Summarize an audio file
Enable logging
Deploy an AWS Lambda function
Event-driven generation
Conclusion
Developer Advocate for Generative AI at AWS, Co-instructor of Generative AI with Large Language Models
Course access is free for a limited time during the DeepLearning.AI learning platform beta!