Practical Multi AI Agents and Advanced Use Cases with crewAI (crewAI): Build agents that collaborate to solve complex business tasks.
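For a sense of what a collaborating crew looks like in code, here is a minimal sketch using the crewai package; the roles, goals, and task text are illustrative, and an LLM API key (e.g. OPENAI_API_KEY) is assumed to be configured in the environment.

```python
# Minimal two-agent crew sketch (roles and tasks are illustrative).
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Market Researcher",
    goal="Summarize trends relevant to a product launch",
    backstory="An analyst who digs up concise, sourced findings.",
)
writer = Agent(
    role="Report Writer",
    goal="Turn research notes into a short executive brief",
    backstory="A writer who favors clear, structured prose.",
)

research = Task(
    description="List three notable trends in the smart-home market.",
    expected_output="Three bullet points with one-line explanations.",
    agent=researcher,
)
brief = Task(
    description="Write a five-sentence brief based on the research.",
    expected_output="A short paragraph suitable for executives.",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research, brief])
result = crew.kickoff()  # runs the tasks in order, passing context along
print(result)
```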
Prompt Engineering with Llama 2 & 3 (Meta): Learn best practices for prompting and selecting among Meta Llama 2 & 3 models. Interact with Meta Llama 2 Chat, Code Llama, and Llama Guard models.
Introducing Multimodal Llama 3.2 (Meta): Try out the features of the new Llama 3.2 models to build AI applications with multimodality.
Evaluating and Debugging Generative AI (Weights & Biases): Learn MLOps tools for managing, versioning, debugging, and experimenting in your ML workflow.
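As a quick illustration of the experiment-tracking workflow the course covers, a minimal Weights & Biases logging sketch; the project name and metric values are placeholders.

```python
# Log training metrics to Weights & Biases (project name is illustrative).
import wandb

wandb.init(project="llm-debugging-demo", config={"lr": 1e-4, "epochs": 3})
for epoch in range(3):
    loss = 1.0 / (epoch + 1)          # placeholder metric
    wandb.log({"epoch": epoch, "loss": loss})
wandb.finish()
```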
Functions, Tools and Agents with LangChain (LangChain): Learn about the latest advancements in LLM APIs and use LangChain Expression Language (LCEL) to compose and customize chains and agents.
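A small LCEL sketch of the prompt-model-parser composition the course teaches, assuming the langchain-core and langchain-openai packages and an OpenAI API key in the environment; the model name is illustrative.

```python
# Compose a prompt, model, and parser with LCEL's pipe operator.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Give three facts about {topic}.")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()
print(chain.invoke({"topic": "tokenizers"}))
```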
Quantization Fundamentals with Hugging Face (Hugging Face): Learn how to quantize any open-source model. Learn to compress models with the Hugging Face Transformers library and the Quanto library.
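A hedged sketch of 8-bit weight quantization with Quanto; it assumes the optimum-quanto package (earlier quanto releases use a slightly different import path), and the model name is just a small example.

```python
# 8-bit weight quantization of a small Hugging Face model
# (imports assume optimum-quanto; check your installed version).
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.quanto import quantize, freeze, qint8

name = "EleutherAI/pythia-410m"          # any small causal LM works here
model = AutoModelForCausalLM.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

quantize(model, weights=qint8)           # replace linear weights with int8
freeze(model)                            # materialize the quantized weights

inputs = tokenizer("Quantization reduces", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=10)[0]))
```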
Getting Started with Mistral (Mistral AI): Explore Mistral's open-source and commercial models, and leverage Mistral's JSON mode to generate structured LLM responses. Use Mistral's API to call user-defined functions for enhanced LLM capabilities.
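A rough sketch of JSON mode, assuming a recent mistralai Python SDK; method names differ across SDK versions, so treat this as illustrative rather than canonical.

```python
# Ask for structured output with Mistral's JSON mode
# (client surface assumes a recent mistralai SDK; older versions differ).
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
resp = client.chat.complete(
    model="mistral-small-latest",
    messages=[{"role": "user",
               "content": "Return a JSON object with fields 'city' and 'population' for Paris."}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)   # a JSON string you can json.loads()
```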
Prompt Engineering for Vision Models (Comet): Learn prompt engineering for vision models using Stable Diffusion, and advanced techniques like object detection and in-painting.
Building Generative AI Applications with Gradio (Hugging Face): Create and demo machine learning applications quickly. Share your app with teammates and beta testers on Hugging Face Spaces.
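A minimal Gradio app to show the pattern; gr.Interface wraps any Python function in a shareable web UI, and the greeting function here is just a stand-in for a model call.

```python
# A tiny Gradio demo; gr.Interface turns a Python function into a web UI.
import gradio as gr

def greet(name):
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()   # add share=True for a temporary public link
```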
Large Multimodal Model Prompting with Gemini (Google Cloud): Learn best practices for multimodal prompting using Google's Gemini model.
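One way to send an image plus text to Gemini, sketched with the google-generativeai package (the course itself may use the Vertex AI SDK); the model name and file path are illustrative.

```python
# Send an image plus a text prompt to Gemini
# (uses the google-generativeai package; model name is illustrative).
import os
import PIL.Image
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

image = PIL.Image.open("receipt.jpg")          # hypothetical local file
response = model.generate_content([image, "List the line items and the total."])
print(response.text)
```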
Building an AI-Powered Game (Together AI, AI Dungeon): Learn to build with LLMs by creating a fun interactive game from scratch.
Reinforcement Learning From Human Feedback (Google Cloud): Get an introduction to tuning and evaluating LLMs using Reinforcement Learning from Human Feedback (RLHF) and fine-tune the Llama 2 model.
AI Agents in LangGraph (LangChain, Tavily): Build agentic AI workflows using LangChain's LangGraph and Tavily's agentic search.
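A toy LangGraph graph to show the shape of the API; real agent workflows add tool nodes and conditional edges, and the node logic below is a placeholder for an LLM call.

```python
# A one-node LangGraph workflow; agent graphs extend this with tool nodes
# and edges that loop until the model stops calling tools.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    answer: str

def respond(state: State) -> dict:
    # Placeholder for an LLM call (e.g., a LangChain chat model).
    return {"answer": f"You asked: {state['question']}"}

graph = StateGraph(State)
graph.add_node("respond", respond)
graph.set_entry_point("respond")
graph.add_edge("respond", END)

app = graph.compile()
print(app.invoke({"question": "What is agentic search?"}))
```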
How Diffusion Models Work: Learn and build diffusion models from the ground up, understanding each step. Learn about diffusion models in use today and implement algorithms to speed up sampling.
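To make the sampling step concrete, here is a hedged sketch of one DDPM-style denoising update; eps_model stands in for a trained noise-prediction network, and the schedule tensors are assumed precomputed.

```python
# One DDPM-style denoising step, written out to show the sampling math.
# alphas, alphas_cumprod, betas: 1-D tensors from the noise schedule
# (betas = 1 - alphas); eps_model is a stand-in for a trained network.
import torch

def denoise_step(x_t, t, eps_model, alphas, alphas_cumprod, betas):
    """Sample x_{t-1} from x_t using the predicted noise."""
    eps = eps_model(x_t, t)                               # predicted noise
    alpha_t = alphas[t]
    alpha_bar_t = alphas_cumprod[t]
    mean = (x_t - (1 - alpha_t) / torch.sqrt(1 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
    if t == 0:
        return mean                                       # no noise at the final step
    z = torch.randn_like(x_t)
    return mean + torch.sqrt(betas[t]) * z                # add scaled fresh noise
```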
Building Systems with the ChatGPT API (OpenAI): Learn to break down complex tasks, automate workflows, chain LLM calls, and get better outputs from LLMs. Evaluate LLM inputs and outputs for safety and relevance.
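A small sketch of chaining two LLM calls with the OpenAI Python client: classify the request first, then answer with a role-specific system prompt. The model name and prompts are illustrative, and OPENAI_API_KEY is assumed to be set.

```python
# Chain two calls: classify the request, then answer with the right system prompt.
from openai import OpenAI

client = OpenAI()

def ask(system, user, model="gpt-4o-mini"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

question = "My order arrived broken. What should I do?"
category = ask("Classify the message as 'billing', 'shipping', or 'other'. Reply with one word.",
               question)
answer = ask(f"You are a support agent handling a {category.strip().lower()} issue. Be concise.",
             question)
print(answer)
```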
AI Python for Beginners: Basics of AI Python Coding (DeepLearning.AI): Learn Python programming with AI assistance. Gain skills writing, testing, and debugging code efficiently, and create real-world AI applications.
Retrieval Optimization: Tokenization to Vector Quantization (Qdrant): Build faster and more relevant vector search for your LLM applications.
Open Source Models with Hugging Face (Hugging Face): Learn how to easily build AI applications using open-source models and Hugging Face tools. Find and filter open-source models on Hugging Face Hub.
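The pattern the Hugging Face course builds on is the transformers pipeline API, which loads an open model from the Hub and runs it in a couple of lines; the model checkpoint here is just one common example.

```python
# Run an open model from the Hugging Face Hub with the pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("Open models make prototyping fast."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```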
Efficiently Serving LLMs (Predibase): Understand how LLMs predict the next token and how techniques like KV caching can speed up text generation. Write code to serve LLM applications efficiently to multiple users.
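A hedged sketch of what KV caching buys you: after the first forward pass, only the newest token is fed in while cached keys and values are reused, instead of re-encoding the whole prefix at every step. The model choice is illustrative.

```python
# Token-by-token generation that reuses the KV cache on each step.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

input_ids = tokenizer("KV caching speeds up", return_tensors="pt").input_ids
past_key_values = None
for _ in range(10):
    out = model(input_ids, past_key_values=past_key_values, use_cache=True)
    past_key_values = out.past_key_values          # cached keys/values per layer
    next_id = out.logits[:, -1].argmax(dim=-1, keepdim=True)
    print(tokenizer.decode(next_id[0]), end="")
    input_ids = next_id                            # feed only the new token next time
print()
```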
Red Teaming LLM Applications (Giskard): Learn how to make safer LLM apps through red teaming. Learn to identify and evaluate vulnerabilities in large language model (LLM) applications.
Serverless Agentic Workflows with Amazon Bedrock (AWS): Efficiently handle time-varying workloads with serverless agentic workflows and responsible agents built on Amazon Bedrock.
Build Long-Context AI Apps with Jamba (AI21 Labs): Build LLM apps that can process very long documents using the Jamba model.
LangChain for LLM Application Development (LangChain): Use the powerful and extensible LangChain framework, covering prompts, parsing, memory, chains, question answering, and agents.