Next, we're going to hear from Snyk. Snyk is a software company focused on security, a platform that is serving over 2.5 million developers on making sure they are building secure code. And they are using CrewAI to drive multi-agent automations of security processes. And they're also integrating their own platform with CrewAI agents so that people that are using Crews can make sure that their code is secure and ready for production. They are making major strides and investments in multi-agent systems and how to track vulnerabilities and security concerns on that new trend that is now powering the software industry. I'm very excited to hear from them. So let's jump right into that. Thank you so much for joining us. Very excited about this. We're trying to wrap things up and talk with experts in the industry like ourselves about what are some of the things that you're seeing from our perspective. And I think you have a unique perspective in here because you worked with Snyk basically thinking about security for years. So I'd love to ask you a few questions. And specifically, as the agentic space continues to evolve, how important security is to the broader ecosystem in your take? Well, yeah, first off, thanks for having me. Excited to be here. And yeah, I mean, at Snyk, our entire business is really built around helping people build secure applications. That's kind of what we do. And I think it's more important than ever now that we have agentic systems like those built with Crew, which we are a huge fan of and me personally use Crew all the time. It's great. But as you can probably imagine, for all of you students who are going through the course right now, when you're building agentic systems, there's a lot of things you have to keep in mind, right? So you might use a framework like Crew, which is fantastic for organizing your agents and having them communicate with each other, give them access to different tools and things like that, which is fantastic. But it does open up all new types of risks to consider. So for example, when you have an agent doing something and passing that information to another agent, you have to think through things that before in the deterministic world were a little simpler. Like are there any things this agent could do when combined with another agent that might cause problems? In our space, we actually call this toxic flows. So one of the things we do at Snyk is we have tools that will actually analyze your agents, your MCP servers, and all the tools that you have hooked up into your applications. Look at the tool definitions and figure out what each tool is doing, and then use generative AI on the backend to actually apply a bunch of guardrails and filters to help better understand what type of risks could happen when certain tools are exposed to certain agents. So that's just like one of the new types of security things to consider. But obviously it's a very big field. There's a lot of things that go into it. No, but that is so interesting because again, I think that's where the experience comes up. Like I imagine like what your folks are seeing out there in order to even think about all these edge cases, right? What are some of the general kind of like key challenges that your team is seeing when it comes to rolling out this workflow securely like that? Yeah, there's a ton. So from our perspective as a security developer, security company, we basically, the way that we kind of work, if you will, and I think this will help everyone really understand the field a little bit better. We basically take a look at a few different things to build out a security framework. So first of all, we take a look at the actual code of your application. So for years and years now, we've had tools that do static code analysis. So first of all, we'll actually analyze custom code you're building, and we'll look for really obvious security issues, right? Like everybody is aware of things like SQL injection attacks or cross-site scripting attacks. Those things can be easily detected using deterministic logic, let's just say. So you don't actually need an LLM to detect that. And they can be fixed in a very deterministic way because there's like 100% crystal clear way of how to fix SQL injections. So those are like kind of solved problems. So static analysis is like one part of the basic building blocks of modern application security. But then on top of that, when you're talking about building more complex systems, things that work autonomously, obviously, there's still a lot of unknowns. And so the way that we work internally is we do two things. Like first of all, we stay on the lookout and we actually have like a threat intelligence feed where it keeps us up to date when other companies have interesting breaches and security incidents that involve agents. So every time there's an incident like that becomes public, our security research team basically gets a notification and it might link off to some GitHub article or some security researcher from a different company. And we actually have a security researcher go through it and analyze it and say, oh, this is actually a very interesting novel type of attack. Like, are there any things we can learn from this as a security company that we can then help apply in our product or in our own research to help customers build applications in a more safe way? That's actually how we've done this for quite a while. So a lot of our products that we have out today, like our MCP scanning tool, which looks for toxic flows and things like that, those were initially inspired by novel breaches that we came across. We did a lot of research on them and then found out, hey, this is actually something new that hasn't really been talked about before. Let's build products around this, like let's build safeguards and educational materials to make it easier for people to prevent this from happening in the future. That's so interesting. And I think like one of those breaches was the big GitHub breach that one of your teams actually was able to detect itself, right? Yep, exactly. One of our researchers published some very interesting research earlier in the year, which essentially found that GitHub was using large language models and doing some autonomous stuff with GitHub issues. And basically by doing some fancy prompt injection type stuff through a GitHub issue, attackers were able to leak sensitive information about a repository. And so, you know, having like these kind of threat models in your mind as you're going through and building stuff is just super useful. So what I would tell students who are learning this stuff and building their first autonomous systems is as you're building these things, in the back of your mind, it doesn't have to be something you're afraid of, right? But like, just keep in mind, what are some of the things that could go wrong? And then how would I address those things? And as long as you're proactively thinking about those things, it puts you like, in the top 1% of development professionals, because thinking about the risks and just keeping them in your mind and making sure you kind of architect things in a nice way, is really one of the best things you can do, not only for your application, but for your career, right? Like, having good security practices and a good security mindset is just a critical part of being like a great engineer in, you know, modern circles. I like how you put it. It's like, it's a security mindset. If like, if you approach data from the get-go, just as you're building, you're just asking yourself on every feature that you do, what are the factors of attack that I might be exposing myself here and how I'm trying to safeguard that. That alone's already like moves the needle, like most people don't. So the fact that you as an engineer can actually think that way, I think that could actually be a huge differentiator. So you brought up a few good points, right? Like AI generated code, MCPs, another big one as well. How are some of the challenges that you're seeing on people that are actually trying to plug this agents into existing DevOps or SecOps pipelines? Is that something that people are struggling with? Or like, are people already cracking that? That's a very good question. I feel split in terms of answering this because on one hand, I feel like it's never been easier to add agentic tools and functionality into your application. So at Snyk, for example, I mean, our company has been around for quite a while and we have lots of older projects that maybe internal projects haven't been maintained in a while. And sometimes I'll literally just go in and like rewrite them using AI or add in a bunch of new features into the pipeline. And I mean, with the tools we have available today, like you're able to do it at a speed that I wouldn't even have thought was possible like four years ago, you know, it's completely mind blowing. But at the same time, it can also be really scary for people, which I think is the biggest concern, right? Especially people in larger companies. I think the biggest concern that people have when introducing these tools and these practices into their modern workplace is there's a little bit of fear and uncertainty. You know, people are like, well, hey, if I put this agent here, is it safe to use? Will this expose customer information somehow, right? And there's lots of components that people are worried about. Even take a very simple one. So from the very outset of generative AI becoming popular, one of the most common things you heard in articles and internet posts and everything was this fear of prompt injection. And prompt injection is still a big issue, and it's not necessarily solved. Like there's no 100% foolproof way to fully solve prompt injection because generative models, non-deterministic, and there's a lot of mystique in what happens behind the scenes there. And so for each model, there might be different best practice ways to avoid it, to remediate it. So maybe that means having other generative AI review prompts and then try to see if there's anything suspicious in it. Maybe it means having some deterministic logic in a scanning tool or security application that looks through it. Or maybe it means having good runtime tooling and auditability so that when your agents and tools are actually doing things, you can then have a human or an agent look back through the runtime history of what these things are doing and look for suspicious patterns. So there's lots of ways to address these things, but fundamentally, I think it goes back to just having a good security mindset and being proactive with your engineering work. That's such a good point. I agree with you. Yeah. Prompt injection, it's great that you bring that back because a lot of people, again, it was a big thing where everyone was talking about it. And now people are not talking about it as much, but the problem is not so. It's very much still out there. You can find your LLMs to pick them up. You can try to proactively prevent it. But at the end of the day, there's no 100% this is a guarantee. Enable this flag and everything is taken care of for you. That makes me wonder, and I'd love to hear your take on this. How do you see the relationship between autonomous AI and security? And I know this is a hard one because the market's moving so fast, right? But how do you see that combination evolving over the next three years? I think a couple of things are really interesting. First of all, the pace of AI models just generally improving is like astoundingly good. A lot of the software you get back from even a very basic prompt without a lot of structure and things like that can be very good. So how does security play into it? Well, I think we're getting close to a point where security can become a fully autonomous thing. And what I mean by that is taking LLMs and agents and non-deterministic applications and having them pair up with a deterministic security scanning tool and something that can, with 100% certainty, tell you if there is an issue here or if something bad has happened and using that as feedback for the agents to help it improve and guide it. I think that is quickly becoming the winning strategy in our space. So if you've ever played around with any of Snyk's tools, which you can use for free, you can just go play around with it. But one of the things you'll notice is that if you're, let's say, writing software using Cursor or something, right? You can take Snyk's tools, you can plug them into your Cursor rules. And what will happen is as you are prompting Cursor and asking it to build a new crew or build a new crew flow or something, right? What will end up happening is each time Cursor adds some code to the project, it will use Snyk behind the scenes to automatically scan and look for lots of common types of issues. Take the context that Snyk provides and then have Cursor itself actually issue fixes and then retest to make sure these issues are fixed. That creates this really interesting autonomous loop, which we've seen really tremendous results from. And so in my mind, the way the security space is kind of evolving in this autonomous future is it's becoming just an iterative part of using generative AI in general. And my personal hope is that in the next three years, we'll get to a point where your typical engineer won't even need to think about security ever, right? Like, they'll just be able to say, hey, Cursor or Claude or whatever tool you're using, build me this application. Security will kind of be handled behind the scenes with this iterative process. And the end result is that us as users will have way better software and we won't have to worry about things as much. And that's kind of the dream, I think. I like that. Yeah, I got to say, yeah, you're absolutely right. If like these LLMs are getting better and better in writing code, they should be getting better and better and also write code to prevent and to fact check that code. So that's an interesting take. Before we go, I know that you mentioned that you use CrewAI yourself. And I have been kind of like talking about CrewAI this entire course, teaching people on how to use it. What is your take on the role that CrewAI is playing on this ecosystem, how it's integrating? And I know that we have some partnerships going on as well. What is your take on that? When you all first came out, I think I discovered you through Hacker News at the time. And I was following along with your tutorials, building some systems and doing some orchestration just in Python applications and really liked it. I think one of the strengths you all have is a really crystal clear framework on the engineering side to make it very repeatable and reliable how to build the agents. You know, like having the nice like agent definitions in the YAML files, having the flows and the crews and the orchestration components are really something that I think we take for granted now, which you all kind of mainstreamed. So I'm a big fan of that. Like internally, it's neat. I actually build and run a ton of projects, literally using crews in production. And they're fantastic. We use it from everything for helping automate our employees' social media to helping build product messaging frameworks and reference documentation for new products that we're launching. There is truly a massive amount of automation that is powered by these tools internally. And they feel rock solid, which is, I think, mainly due to the crew framework. So thanks for that. Come on. Thank you. Thank you for using crew. If anything, I'm excited about how things are going to look like moving forward. I think there's so much going on with agents. And hey, I guess we're tagging along for the ride. Thank you so much, Random. I really appreciate the time. Very excited to have you folks here and hopefully all the learners as well. I don't know if you have any parting thoughts for all the learnings, but those are all the questions I had. No, the only thing I want to say is for those of you taking the course, you're amazing. Keep building. Have fun. That's what this is all about. Use AI to make your life more enjoyable, and it certainly does that for me. So hopefully it does it for you as well. I love that. Thank you so much, Randall. I'll catch you later. Have a good one. Thank you.