Next, we're going to hear from Weaviate. Weaviate is an open source vector database. It lets you not only store information, but also lets your agents tap into that information themselves, and its search capabilities can power amazing use cases whenever you need vector search. I want to make sure we hear their perspective as one of the leading databases being adopted out there. So let's jump into that.

Sebastian, thank you so much for being with us today. I really appreciate it. It's so cool that we get to have people from Weaviate joining us on this course. I have a few questions for you. As the agentic space continues to evolve, how important is it to have this broader ecosystem of tools and resources, including vector databases like Weaviate?

Yeah, I think this is absolutely critical. It's very much like having a 10x developer who knows about absolutely everything: you're relying on a single point of failure, and how far can that one developer really go? Following the same analogy, if you have a well-rounded team, maybe even with some overlapping skills, you can achieve a lot more. I think that translates really nicely into the AI ecosystem. We can do so much more together, and if we tried to do absolutely everything by ourselves, we would fail. That's the best part: each of us can focus on the thing we do best while always thinking, how can I integrate, how can I connect, how can I sync with all these other tools? How can we work as an ecosystem? We can work and play as a team because we all belong together. We're not independent teams trying to win by ourselves.

I love that. We all belong together. I'm going to use that one a little more often; it's a good one. So specifically on the vector space, there's so much people have been talking about, and everyone is doing RAG nowadays. How do you see the role of vector databases and knowledge retrieval evolving, especially around AI agents as they become more and more autonomous?

Vector databases and vector embeddings play a pretty critical role in all of this. If you have an agent that relies on just an LLM, you're basically relying on whatever knowledge was baked in at training time. If you work with a vector database, you get access to data that is dynamic, that can change within a millisecond, so you always get information that is up to date. The information is not just fresh but also very relevant: you get only the parts that matter, and that payload is a lot smaller. You're not sending a 100-page PDF to your LLM, only the relevant parts.

The other thing is that you can get information that is relevant to a specific user. Maybe your agentic workflow is serving hundreds of thousands of people, each with their own data. You wouldn't run hundreds of thousands of LLMs, one per user; that would be absolutely crazy, and it would take forever to update each of them. Instead, you can partition the datasets into separate tenants, so I would never query your part of the data and you would never query mine (see the sketch below). Databases can enhance agents with the protections and security guarantees you'd expect from a mature database. There's the data you want your agent to tap into, and you want to make sure that data is fresh.
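As a rough illustration of the multi-tenancy idea Sebastian describes, here is a minimal sketch using the Weaviate Python client (v4). The collection and tenant names are made up, and the connection and embedding setup will depend on your own deployment:

```python
import weaviate
from weaviate.classes.config import Configure
from weaviate.classes.tenants import Tenant

client = weaviate.connect_to_local()  # adjust for your deployment

# Hypothetical collection; multi-tenancy keeps each user's data
# in its own isolated shard.
docs = client.collections.create(
    name="UserDocs",
    multi_tenancy_config=Configure.multi_tenancy(enabled=True),
    # a vectorizer/embedding config would also go here if Weaviate
    # should generate the embeddings for you
)

# One tenant per end user (tenant names here are illustrative).
docs.tenants.create([Tenant(name="user-a"), Tenant(name="user-b")])

# Reads and writes are scoped to a single tenant, so an agent serving
# user-a can never touch user-b's data.
user_a_docs = docs.with_tenant("user-a")
results = user_a_docs.query.fetch_objects(limit=3)

client.close()
```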
There are a few ways to keep that data fresh, but I don't think anything gives you the level of control that a vector database like Weaviate does, because it's not only about having the most recent data; you can control everything.

That is definitely exciting. When you think about the new classes of applications, the use cases now possible by tightly unifying your agent orchestration with vector retrieval, is that something you hear a lot about: specific use cases where agents and vector retrieval sit close together? What would be some interesting ones?

I often think about what we can stop the database from handing back to the LLM. An example: more than a year ago we rewrote some of our language clients and completely changed the APIs in Python, TypeScript, and other languages. The struggle we have today is that a lot of LLMs were trained on the old APIs, so when somebody tries to vibe code with Weaviate, they get the old code. We don't want people using the old code, because the old code wasn't great; the new one is a huge improvement. By doing it through a database, we can say: let's delete all the records that talk about the old API and only keep the new examples (see the sketch below). So not only can we add more information, we can also restrict information, as in "never show this part," because we can delete it or filter it out.

To me, agents are almost like people who are out there to help you with the most accurate information that is also relevant to your data. You don't want something generic like, "here's an example of a query in Weaviate." You want, "you have these three collections; here's how you query this specific collection with these specific properties." That's really powerful.

Right now it feels like people have started understanding prompting, how it works and why it's important. But especially as people move to agents, where you have many API calls going to LLMs, context engineering has become super important. We talk about that throughout the entire course, this idea that you need to optimize as much as you can. How should developers think about the role of a vector database in that context engineering?

Honestly, think of it like this: again, agents are like people to me. If every single time you send them a 400-page PDF and say, "hey, I have this question," and sometimes it's something as simple as "how do I connect to the database," which might be answered on the first page, the LLM will still try to read the whole context. What you end up with is something that is super slow and not very efficient; it's absolute overkill. So for me it's always about: what does the agent need, what does the LLM need, to deliver on the task?

That's a good point. Now let's talk about the challenges. What are the common pitfalls or anti-patterns when people try to bring vectors and vector search into these agents? What do people usually get wrong, and what should they watch for?

I think one of them is the classic data problem of garbage in, garbage out.
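To illustrate the point above about deleting or filtering out stale examples, here is a minimal sketch with the Weaviate Python client (v4). The "CodeExamples" collection and its "api_version" property are assumptions made for this example, and the near_text query presumes the collection has a vectorizer configured:

```python
import weaviate
from weaviate.classes.query import Filter

client = weaviate.connect_to_local()  # adjust for your deployment
examples = client.collections.get("CodeExamples")  # hypothetical collection

# Option 1: delete the stale records outright. "api_version" is an assumed
# property marking which client API each snippet targets.
examples.data.delete_many(
    where=Filter.by_property("api_version").equal("v3")
)

# Option 2: keep them, but filter them out at query time so the agent
# only ever retrieves current examples.
results = examples.query.near_text(
    query="connect to Weaviate and run a query",
    filters=Filter.by_property("api_version").equal("v4"),
    limit=3,
)

client.close()
```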
Quite often, yes, embedding models are really good with unstructured data, but that doesn't mean the unstructured data can be garbage or a complete mess. We often don't think enough about this kind of thing: what if I create my vector embeddings based only on specific fields? Do you really need the author's name, or emails, or all this other information? Probably not. So again, it's the problem of less is more.

The other thing I notice quite often is that vector embeddings are huge. What if you want to scale up to a billion objects and a billion of those vectors? You can't get that much RAM easily or cheaply; it's going to be expensive. So another common pitfall is not assessing the resources you need for the kinds of queries you need to run. There are a few things you can do about that. You can reduce the size of your vectors, either by reducing the number of dimensions or by changing each dimension to something the size of one byte, or even a single bit. With that, you can cut the RAM you need to a quarter or less. We have that built into Weaviate: it takes about two lines of code to enable quantization, and it's super powerful (see the sketch below). Recently we released one called rotational quantization, RQ for short. We expected quantization to slow queries down, but actually the opposite is true: queries are faster, and when we measured recall against uncompressed vectors, the difference was around 0.01%. You'd only notice the difference in a lab-like environment. So we thought, why not make it the default? Which is incredible.

So those are probably the two common pitfalls: sending too much into the vector embeddings and not being precise about the use case you're trying to serve, and not thinking about how big those vectors will get when you run the whole thing at scale.

That's an interesting perspective. As a vector database company, you're in the middle of this whole revolution. You're watching all the different sources, all the different companies, all the different frameworks, all the different agents. What's your take on Crew? What are you seeing out there? How does Crew work with Weaviate? Share more about that with us.

What I like about Crew, and this is where our teams have real synergy, is that you really think about developer experience. I'm super passionate about developer experience, so for me that already shows you care about the developers out there trying to build something. And that's the same feedback we get when we've tried Crew at workshops and so on: I could follow the first few steps, I already knew what to do. You seem to have a very similar idea of speed to value: I went through this half-hour quick start, I read through some examples, but there's also a very clear path for how I go to production.
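Here is a minimal sketch of the two-line quantization setup and the field-selection point Sebastian mentions above, again using the Weaviate Python client (v4). The collection name and properties are illustrative; bq() is binary quantization, and recent client and server releases also expose an rq() helper for rotational quantization, so check the versions you're running:

```python
import weaviate
from weaviate.classes.config import Configure, DataType, Property

client = weaviate.connect_to_local()  # adjust for your deployment

client.collections.create(
    name="Docs",  # hypothetical collection
    properties=[
        # Embed only the text that matters for search...
        Property(name="content", data_type=DataType.TEXT),
        # ...and keep incidental metadata out of the vector entirely
        # (this only takes effect when a vectorizer is configured).
        Property(name="author_email", data_type=DataType.TEXT, skip_vectorization=True),
    ],
    # Quantization lives on the vector index config: bq() packs each dimension
    # into a single bit; newer releases also offer rq() for the rotational
    # quantization mentioned above.
    vector_index_config=Configure.VectorIndex.hnsw(
        quantizer=Configure.VectorIndex.Quantizer.bq(),
    ),
)

client.close()
```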
And when something goes wrong, you get an error message you can actually do something about. There are frameworks out there that change their syntax and APIs every three months or so, and that kills me every time: I want to show what we did with this or that framework, and suddenly my demo doesn't work. What did you do? Oh, I shouldn't have updated my npm packages or Python libraries. With Crew I haven't had that issue yet. You're pretty responsible about it. I love a good breaking change when there's a really good reason for it, but don't ship a breaking change that destroys everything so I can't even connect. I think that's the incredible thing: you care about us developers using your framework and your APIs, and I can actually read the code.

Nice. I appreciate the kind words. Thank you, Sebastian. Is there anything you want to leave our audience with? We talked about a lot of things, so I don't know if there are any parting thoughts you want to leave the learners with.

For me, the biggest performance metric I care about is actually the performance of the developer: if I give you this set of tools, how fast can you go and deliver the solution? I think that's absolutely critical in this AI ecosystem we're working in. So always think of speed to value: how fast can you get to the POC, how fast can you go to production, and how fast can you grow and scale? Some solutions could be enterprise, some could be open source projects, some could be something you build just for yourself. It should be the same skill set; there shouldn't be a completely separate solution for enterprise and a completely separate one for your pet projects. Your set of skills and the solutions you build should come together, like I said, as one thing and one skill set, and that will make you super powerful. And then pick the right tools. You already have two of them, CrewAI and Weaviate; pick the others from those that won't lock you in. They should keep you because you love them, not because you have no other choice. So yeah, that's my point.

I like that. That's interesting advice. Sebastian, thank you so much for joining us for the course. I really appreciate it, and I'm sure all the students do as well.

Perfect, thanks for having me. And good luck, everyone; go build some crazy good stuff.

Love that. Have a good one. Take care.