LLMs as Operating Systems: Agent Memory. Build systems with MemGPT agents that can autonomously manage their memory. (Letta)
Evaluating and Debugging Generative AI. Learn MLOps tools for managing, versioning, debugging, and experimenting in your ML workflow. (Weights & Biases)
Safe and Reliable AI via Guardrails. Move your LLM-powered applications beyond proof of concept and into production with the added control of guardrails. (GuardrailsAI)
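For illustration, a minimal plain-Python sketch of the guardrail idea: validate and correct an LLM's output before it reaches the user. This is not the Guardrails AI library API, and the patterns and length policy below are assumptions made up for the example.

```python
# Toy output guardrail: redact sensitive-looking patterns and enforce a length policy.
# Illustrative only; the course uses the Guardrails AI library rather than hand-rolled checks.
import re

BLOCKED_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g. strings shaped like a US SSN (assumed policy)

def apply_guardrails(llm_output: str) -> str:
    for pattern in BLOCKED_PATTERNS:
        llm_output = re.sub(pattern, "[REDACTED]", llm_output)
    if len(llm_output) > 500:
        llm_output = llm_output[:500]  # assumed maximum-length policy
    return llm_output

print(apply_guardrails("The customer's SSN is 123-45-6789."))
```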
Building and Evaluating Advanced RAG. Learn advanced RAG retrieval methods like sentence-window and auto-merging that outperform baselines, and evaluate and iterate on your pipeline's performance. (TruEra, LlamaIndex)
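A minimal sketch of the sentence-window idea in plain Python: match the query against individual sentences, then return the matched sentence plus its neighbors so the LLM sees wider context. The toy similarity function and names here are assumptions; the course builds this with LlamaIndex components instead.

```python
# Sentence-window retrieval, sketched without any RAG framework.
from typing import List

def split_sentences(text: str) -> List[str]:
    return [s.strip() for s in text.split(".") if s.strip()]

def similarity(query: str, sentence: str) -> float:
    # Toy lexical overlap as a stand-in for embedding similarity.
    q, s = set(query.lower().split()), set(sentence.lower().split())
    return len(q & s) / (len(q | s) or 1)

def sentence_window_retrieve(query: str, text: str, window: int = 2) -> str:
    sentences = split_sentences(text)
    best = max(range(len(sentences)), key=lambda i: similarity(query, sentences[i]))
    lo, hi = max(0, best - window), min(len(sentences), best + window + 1)
    return ". ".join(sentences[lo:hi]) + "."

doc = ("Retrieval augmented generation grounds answers in documents. "
       "Sentence-window retrieval indexes individual sentences. "
       "At query time it returns the matched sentence plus its neighbors. "
       "This gives the LLM more context than the sentence alone.")
print(sentence_window_retrieve("What does sentence-window retrieval return?", doc))
```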
Carbon Aware Computing for GenAI Developers. Train your machine learning models using cleaner energy sources. (Google Cloud)
Efficiently Serving LLMs. Understand how LLMs predict the next token and how techniques like KV caching can speed up text generation. Write code to serve LLM applications efficiently to multiple users. (Predibase)
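As a sketch of the KV-caching idea mentioned above: after the first forward pass, the attention keys and values for earlier tokens are cached, so each new step only processes the newest token. The snippet assumes the Hugging Face transformers library with GPT-2 as a stand-in model; the course may use different models and serving tools.

```python
# Greedy decoding with a reused KV cache (illustrative, not course code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The quick brown fox", return_tensors="pt").input_ids
past_key_values = None  # the KV cache, populated by the first forward pass

with torch.no_grad():
    for _ in range(20):
        if past_key_values is None:
            out = model(input_ids, use_cache=True)            # full prompt pass
        else:
            out = model(input_ids[:, -1:],                    # only the newest token
                        past_key_values=past_key_values,
                        use_cache=True)
        past_key_values = out.past_key_values                 # reuse cached keys/values
        next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))
```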
Red Teaming LLM Applications. Learn how to make safer LLM apps through red teaming: identify and evaluate vulnerabilities in large language model (LLM) applications. (Giskard)
Building an AI-Powered Game. Learn to build with LLMs by creating a fun interactive game from scratch. (Together AI, AI Dungeon)
Reinforcement Learning from Human Feedback. Get an introduction to tuning and evaluating LLMs using Reinforcement Learning from Human Feedback (RLHF) and fine-tune the Llama 2 model. (Google Cloud)
LLMOps. Learn LLMOps best practices as you design and automate steps to fine-tune and deploy an LLM for a specific task. (Google Cloud)
Automated Testing for LLMOps. Learn how to create an automated CI pipeline to evaluate your LLM applications on every change, for faster and safer development. (CircleCI)
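A minimal sketch of the kind of evaluation script a CI job might run on every change: score the application's answers against expected terms and fail the build below a threshold. The app_response function, cases, and threshold are hypothetical placeholders, not the course's actual pipeline.

```python
# Toy LLM evaluation gate for a CI pipeline: exit non-zero to fail the build.
import sys

def app_response(question: str) -> str:
    # Placeholder for a call into your LLM application.
    return "A KV cache stores attention keys and values so they are not recomputed."

EVAL_CASES = [
    {"question": "What does a KV cache store?", "must_contain": ["keys", "values"]},
]

def run_evals() -> float:
    passed = 0
    for case in EVAL_CASES:
        answer = app_response(case["question"]).lower()
        if all(term in answer for term in case["must_contain"]):
            passed += 1
    return passed / len(EVAL_CASES)

if __name__ == "__main__":
    score = run_evals()
    print(f"eval pass rate: {score:.0%}")
    sys.exit(0 if score >= 0.9 else 1)  # a non-zero exit code fails the CI job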
Serverless LLM Apps with Amazon Bedrock. Learn how to deploy an LLM-based application into production using serverless technology, and how to prompt and customize LLM responses with Amazon Bedrock. (AWS)
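A minimal sketch of calling an LLM on Amazon Bedrock from Python with boto3. The model ID and request-body schema vary by model, so the Titan-style body below is an assumption, and a deployed serverless app would typically wrap this call in an AWS Lambda handler.

```python
# Invoke a Bedrock-hosted model (illustrative; model choice and body schema are assumptions).
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(prompt: str) -> str:
    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",   # placeholder model choice
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
        }),
    )
    payload = json.loads(response["body"].read())
    return payload["results"][0]["outputText"]

if __name__ == "__main__":
    print(ask("Summarize what a serverless LLM app is in one sentence."))
```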