AI in AppSec: Hype, Layoffs and What’s Actually Real
Artificial intelligence is dominating headlines in cybersecurity, but how much of it holds up under scrutiny? In this solo episode of Secured, Cole Cornford, founder and CEO of Galah Cyber, shares his unfiltered take on three of the biggest AI narratives making waves in the AppSec space right now.
Cole breaks down the Claude Code security announcement and why the market reaction dramatically overstated its real-world impact, arguing that the most meaningful security vulnerabilities have never been the ones static analysis tools can easily catch. He then examines Aikido’s continuous penetration testing proposition, raising serious questions around noise, cost, resilience, and whether most organisations are even architected to support it.
Finally, Cole tackles the AI job displacement narrative head-on, making the case that most high-profile tech layoffs are less about AI capability and more about mismanaged businesses using automation as convenient cover for decisions driven by poor performance and investor pressure.
00:00 – Intro & Cole’s hot take on AI hype
01:30 – Claude Code Security: what it is and why markets overreacted
03:30 – Why meaningful vulnerabilities need context, not static analysis
05:30 – Autofix, token waste, and who’s actually using Claude Code
08:00 – Aikido Infinite: the continuous pen testing promise
10:00 – Cost, resilience, and noise concerns with Aikido
12:49 – The AI jobs narrative: Cole’s verdict
14:30 – WiseTech, Block, and the smokescreen theory
16:00 – Jobs shift, not job loss
17:03 – Closing thoughts and solo format feedback
Cole Cornford:
I think it’s all bullshit. I haven’t seen many situations where AI has been the absolute root cause behind why somebody has had their job replaced. We’ve done that kind of stuff before with filter sets. Ultimately, what I find is it just creates a hell of a lot of noise and very little signal for people, as opposed to manual interrogation. So look, jury’s out. Let’s see how it goes.
I’m Cole Cornford, founder and CEO of Galah Cyber, and you’re listening to Secured, the podcast where I catch up with developers, security leaders, and innovators to talk about the real world of AppSec.
Open source now powers over 90% of the software we build, but it’s also where attackers increasingly strike. Chainguard closes that trust gap with hardened, secure, production-ready open source builds so teams can build faster, stay compliant, and eliminate risk. Get your free CVE reduction report at day1.fm/chainguard and start shipping software with confidence.
Hey, everybody. It’s Cole. I’ve decided to do a solo episode today to talk about a couple of interesting things I’ve been seeing in the application security space. The first is that there’s just so much noise and craziness when people talk about the potential of artificial intelligence. And as someone who’s been spending a lot of time both with companies seeking to secure artificial intelligence and with companies building products around it, I’m uniquely positioned to talk about where I’m seeing it actually used, and where there’s a lot of hype and marketing fluff and things are not going as well as they appear in the broader media.
So the first thing I’ll talk about is Claude Code Security. I’m recording this episode on Friday the 27th of February, and about a week ago, Claude Code Security was released. For those who don’t know, Claude is basically a copilot that can sit in your IDE or be used as an agent to go and produce code, and it can either suggest things or go off and build things for you reasonably well.
Claude is a darling of many, many people. So when Anthropic came out and said, “Well, now Claude has the ability to do secure code review as well. You can write your code and then run the agent, or just ask Claude to make sure it’s secure,” one of the first things that happened is that cybersecurity companies had a massive drop in valuation, because the market saw Claude coming and AI disrupting that whole field, to the point where a bunch of my friends who are building AppSec companies and product businesses are now struggling to raise even seed or Series A funding because of that announcement.
And look, I understand. I actually think it’s a good thing, because we want products that make it easier for us to secure our software. But the sheer scale of the market reaction doesn’t make much sense to me. Just because something doesn’t actually meet that goal doesn’t mean the hype around it can’t be exceptionally harmful.
If you’re writing code, the first thing you’re going to ask is, “Well, why wouldn’t Claude just write secure code by default? Why do I need something in an adversarial relationship with it?” And that’s a pretty interesting point. But there are actually many circumstances where security is not the objective, not the end goal. And while we try to do what we can to write secure-by-default code, it’s not always achievable when we’re trying to meet different business objectives, right? That context really matters.
So when you write code locally and then get an agent to look at it, the context it has is the static code, whatever is available within the repository, maybe seeded with some information about your company, the product you’re building, or the types of systems it’s interacting with, or whatever it can ascertain from talking to a few endpoints of its own. But that’s where it falls over. The vast majority of meaningful, impactful security issues are not because somebody left out a parameterized query, hard-coded a credential, or used something like dangerouslySetInnerHTML. Those do occur, but they’re found quite easily with existing security tools.
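To make that distinction concrete, here’s a minimal sketch (in Python, not from the episode; the table and function names are illustrative) of exactly the kind of bug pattern-matching tools catch reliably, alongside the parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Classic SQL injection: user input concatenated straight into the
    # query string. Static tools flag this shape easily and reliably.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value, so the input
    # can never change the structure of the SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A payload like `"x' OR '1'='1"` dumps every row through the unsafe version and matches nothing through the safe one. That’s exactly Cole’s point: this class of bug is mechanical to find, and it’s not where the meaningful risk lives.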
The meaningful problems are always when somebody puts something in and makes assumptions about how it operates in the broader environment, and it’s usually a chain of lots of different bits and pieces in some kind of production ecosystem that leads to the problem. It’s why penetration testing has never gone away: people have very little confidence that these tools can get context about your business and about how the application actually works. So I don’t really see Claude Code doing as much as people seem to think.
The other thing is that we already have a lot of existing tools to manage and identify security vulnerabilities. The biggest problem has never been finding bugs. We’re really good at finding bugs; we’ve been finding them since the ’90s, either through manual interrogation of source code or by asking tools like FindSecBugs or Fortify to statically look at code, find bad patterns, and traverse the AST. Finding things is never the issue. It’s choosing what to fix, making sure people understand why they’re fixing it, and making it hard to reintroduce those things in the future.
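As a rough illustration of what “traverse the AST and find bad patterns” means in practice, here’s a toy sketch using Python’s `ast` module; real tools like FindSecBugs work on Java bytecode and are far more sophisticated, so this is only a shape, not their implementation:

```python
import ast

def find_hardcoded_secrets(source: str) -> list[int]:
    """Return line numbers where a string literal is assigned to a
    suspicious-looking variable name: a toy version of the AST
    pattern-matching that static analysis tools have done since the '90s."""
    suspicious = ("password", "secret", "api_key", "token")
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name)
                        and any(s in target.id.lower() for s in suspicious)
                        and isinstance(node.value, ast.Constant)
                        and isinstance(node.value.value, str)):
                    findings.append(node.lineno)
    return findings
```

Thirty lines of tree-walking catches the hard-coded credential; what it can’t tell you is whether that secret was rotated last week, which is exactly the triage problem Cole is describing.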
I don’t want to be spending a bunch of tokens bumping dependencies, fixing secrets that aren’t relevant because they’ve already been rotated, or updating the configuration of assets that are configured that way for a specific reason. Autofix just doesn’t work as well as you’d think, because you’ll solve security problems but create reliability ones.
Lastly, there’s the question of whether everyone is even using Claude Code. In my experience, the vast fucking majority of people I speak to use it as a bit of a hobbyist thing. I’ve seen almost no organizations, except for really small places, say, “Yep, everybody uses this, they have to use it, and if they’re not using it, they’re going to get performance-managed and pushed out.” There are a couple of places doing that, but in 99% of institutions you have a couple of hobbyists who use it very well, a couple of people who use it incredibly poorly, and a vast majority who don’t care either way and are just doing what they’ve always been doing.
So if you’re telling me you’re only going to target that 1% to 5% of elite power users who really want to do the best thing possible, well, that’s also the group of people least likely to introduce security vulnerabilities, because they’re the best at their fucking jobs. So I’m not too worried about this replacing AppSec the way people seem to think. Honestly, I think it’s kind of embarrassing; it shows a distinct lack of maturity in people if they think it’s going to do that.
Anyway, the next thing I wanted to talk through is the Aikido Infinite release, which came out yesterday, and I see almost exactly the same kinds of issues coming up there. They’ve effectively said: any type of cybersecurity professional service, we’re going to move to a continuous basis, because everything else is a point-in-time assessment. What we can do is continuously assess your code and your assets, do continuous penetration testing and reporting, and help you test things, catch things, and patch things.
Again, I really worry about a few aspects here. One, the vast majority of software assets just aren’t operating in an environment where you can do that. They’re not SaaS businesses. If you set it up in, say, an integration environment to continuously test and find things, great. But most places don’t have an architecture where production has parity with dev, test, and staging. And there are many production assets that are simply never going to be exposed or accessible on the internet, like OT infrastructure, or citizen-developed applications on people’s workstations. I struggle to see how this gets much market penetration outside of those top-end environments.
I worry about cost. It’s expensive to be running things on Bedrock, $2,000 to $3,000 a month or more. If you’re willing to pay that for continuous scanning, wouldn’t it be better to look at hosting your own server and running your own models? I just worry about whether you can say it’s worthwhile. And resilience is a huge conversation here too. Do you want AI agents continuously penetration testing things? You don’t necessarily know what the outcome is, and if it breaks production, you’re in a bit of trouble, right?
And then it goes back to, “Oh, replace human pen testers or human analysts.” On novelty: we’ve been training these things on clearly defined patterns, and while they do occasionally come up with novel things, most of the time any AI pen testing tool, or AI-related security product, I’ve seen uses existing patterns to go and find stuff.
We have a massive backlog of things that we need to identify and patch, so I think it’ll be really effective and helpful for that. But as adversaries start using these things to create novel and sophisticated techniques to break into organizations, I don’t see how that becomes training data for these AI pen testing systems, or how they stay remotely representative of what’s actually happening.
So I think it’s a cool idea. I like the idea of continuously assuring your environment; daily scans, nightly scans, or continuous scans have always been something we’ve looked at, as opposed to just running scans in a DevSecOps pipeline. But noise is probably the other thing I’d be really concerned about. Even if you have AI systems triaging these findings and saying, “We have high confidence in these vulnerabilities and low confidence in those,” we’ve done that kind of stuff before with filter sets, with reachability and exploitability analysis, and with ASPM tools that combine findings from different scanners.
Ultimately, what I find is it creates a hell of a lot of noise and very little signal for people, as opposed to manual interrogation. So look, the jury’s out. Let’s see how it goes, and I wish Aikido the best of luck.
The last one I want to talk through is the whole “AI taking jobs” narrative. I think it’s all bullshit. I haven’t seen many situations, many at all, where AI has been the absolute root cause of somebody’s job being replaced. Sure, you could say, “If you needed 20 people to do something, now you only need 15 because AI has automated part of it.” But I think what you’ll find in the not-too-distant future is that it scales back up to 20 to deal with the editing, reviewing, and changing of whatever the AI is doing, or that new jobs get created around managing and governing it. So it’s a shift of jobs. I don’t see it as a reduction.
What I really see is a smokescreen to hide a lot of scared investors doing stupid things. Take the two recent announcements, WiseTech and Block. All of the coverage on both has been, “They’ve cut thousands of developers and customer service agents, and it’s going to be terrible because AI is taking over development. We don’t need more developers.” We saw this at Klarna in the past, and recently with Woolworths in Australia, where they had to bring customer service back, basically, because people couldn’t get what they wanted out of the bot and were leaving the business. With Woolworths, they had an agent telling customers about its supposed mother when it’s a robot. It doesn’t make any sense to me.
So yeah, going back to those two companies, I think they both had terrible business performance this year. WiseTech is a logistics firm, and its founder has had a lot of controversy around decisions that are not very ethically minded. It’s up to him to choose what he wants to do with his life, but it’s had a significant impact on the governance of his company. If you’re too busy dealing with constant PR problems, you’re going to struggle to attract good talent and to focus on your business. On top of that, very recently there was the whole SaaS sell-off. Most SaaS businesses, whether in America or Australia, got really, really heavily sold off by retirement funds, investors, and so on, because there’s a feeling that they’re, maybe not replaceable, but just a little bit too expensive for how much value they really provide.
And we saw that with Atlassian. Atlassian’s 60% down year to date, and that’s tremendous wealth depreciation. I’m confident that in the next month or two we’ll be hearing about Atlassian making redundancies from artificial intelligence as well. Then go to Block. What’s Block done? Old mate at Block was very into cryptocurrency and he bought a metric fuck-ton of Bitcoin. If you’ve got a billion dollars of Bitcoin and that billion dollars suddenly becomes, I don’t know, half a billion dollars, I’d be very concerned. That’s a lot of wealth to lose. So I don’t think these AI layoffs are really as big as people seem to think. I just think there are businesses that aren’t doing as well as they should be, that have been mismanaged, and they’re using AI as a reason to get rid of people while giving a good news story to the market.
And there are lots of people theorizing about economic headwinds, inflation, capital debt, and all of that. I’m not sophisticated about those kinds of concepts. I just think it’s people managing the messaging and using AI as easy cloud cover, a smokescreen, for something they’ve wanted to do for a long time: make themselves more profitable and cut headcount in a way that won’t tank their share price.
But anyway, those are the three topics I wanted to talk a little bit about today. If this format’s interesting to you, let me know. I’d love to start doing this weekly, in conjunction with my normal interviews, and talk through my thoughts on artificial intelligence and software security concepts. Anyway, thank you all for listening, and I’ll see you next time.
Thanks a lot for listening to this episode of Secured. If you’ve got any feedback at all, feel free to hit us up and let us know. If you’d like to learn more about how Galah Cyber can help keep your business secured, go to galahcyber.com.au.