SECURED

Fix the Flag: Rethinking Secure Code Training with Pedram Hayati

CTFs are fun, but do they actually make developers write more secure code? In this episode of Secured, Cole Cornford is joined by Pedram Hayati (Founder of SecDim & SecTalks) to explore why most developer security training fails, and how SecDim’s “Fix the Flag” approach is changing the game.

From contrived WebGoat-style examples to frameworks that quietly eradicate entire bug classes, Cole and Pedram dive deep into the intersection of AppSec and software engineering. They unpack why developer experience is non-negotiable, why security needs to borrow design patterns from engineering, and how real-world incidents (like GitHub’s mass assignment bug or the Optus breach) make concepts stick far better than acronyms like “XSS” or “SSTI.”

This is a technical, opinionated episode for anyone who’s ever struggled to get developers engaged with security.

01:10 – Why Pedram built SecDim, the problem with pen test reports, and why CTFs don’t train developers
04:42 – From “Capture the Flag” to “Fix the Flag”: making training realistic and Git-first
06:30 – Training inside developer workflows and why contrived examples fail
10:28 – Using modern stacks, AI-tailored labs, and real-world incidents to make concepts stick
12:35 – Why security names suck (XSS vs. “content injection”) and the Optus hack as a teaching moment
17:37 – Secure design patterns vs. vague slogans, and why secure defaults beat secure by design
21:15 – Frameworks like React, Rails, and Angular that kill entire bug classes
23:23 – Engineering by-products: reproducibility, immutability, and orthogonality in secure coding
30:36 – PHP’s bad reputation, language quirks, and what’s actually most popular in security training today
33:41 – Why AppSec pros need to build and deploy apps (not just know vulnerability classes)
37:44 – Getting started with SecDim and hands-on secure coding

Cole Cornford
Hey, I’m Cole Cornford and you’re listening to Secured. Each episode, I bring on developers, security leaders, and innovators to discuss the reality of AppSec. We cover the wins, the slip-ups, and the stories that make this industry a little more interesting. Stick around and you’ll enjoy this one. Today, I’m joined by Pedram Hayati, founder of SecDim and SecTalks. He’s an industry legend who’s been building a secure code-training platform with an emphasis on using sharp feedback cycles and iterative Git commits as a way to reinforce those concepts. We have a great chat about nitty-gritty concepts, so techies and code reviewers, this is an episode for you. Pedram is switched on. If you enjoy this episode, you should absolutely trial SecDim and see what he’s been building. I’m sure he would love to hear your feedback and personally, so would I.


So Pedram, did you want to give everyone a bit of an outline about what SecDim is? Because I’ve known about the product for quite a long time and I was always a bit confused about the name as well. I think it’s security dimension, but it’s just always confused me.

Pedram Hayati
The name was just about finding a short abbreviation to register a domain, because who is going to type in Security Dimension, right? So SecDim, clear and short. As for where the idea came from, it had been there for quite a while. Even before SecDim, when I was one of the founding members of elttam, we were always thinking about this. I can actually go further back, to when I was doing pen testing for BAE Systems. I always wanted our pen test reports to actually be used by people, especially the engineers. And as you probably know, you go year after year, you provide findings, and it's hard to move the needle. You see the same repeat findings coming up.


So there was always this question: what is happening? Is it because we don't write good pen test reports? Is it because we don't communicate well? Or is it because there's a lack of awareness on the dev side? So we started with the usual secure coding workshops for devs, and that goes back eight or nine years. In every single secure coding workshop we ran, through BAE and later elttam and others, there was always this part where you'd think, "I just want a hands-on component." You'd be very tempted to use a CTF, and I tried a lot of them, but at the end of the day you realize, no, CTFs are not really something for devs. They're engaging, they're interesting, they can show them, wow, how XSS or CSRF works, but does that really result in them understanding the right patch or the best practice? That was always lacking.


And if you look at the industry, or just open source, a lot of it focuses on the offensive side of things. Exploitation, hacking, it's all about doing this. Then when it comes to the recommendation, suddenly everything is very generic: boilerplate code or code snippets that try to show an example of a security patch, but no one in the universe would write the patch that way in a production or enterprise [inaudible :39]. So I find the problem is on both sides: we don't know the [inaudible :46] engineering world, and at the same time the engineers don't know our world. The whole idea of SecDim was to bridge that gap, but in a cool, fun, more natural way rather than, "Hey, this is a mandatory training that you have to do, and if you don't do it, your KPI is going to go down or your manager is going to be angry at you."

Cole Cornford
What do you mean? Don't I just open the training video, skip to the end, and then I've got compliance, right? It's all good.

Pedram Hayati
That sucks, right?

Cole Cornford
The CTF point is interesting for me, because we do secure code training quite a lot, and we recently lost a reasonably sized deal to a bigger business than my small little bird one because we lacked a CTF. And I remember saying to the person buying the product, when they were interviewing us, "CTFs are not how you train developers. It's ineffective." So it's good to hear my viewpoint validated by you, even though I lost the deal.

Pedram Hayati
Exactly, Cole. That was exactly the point. And I saw it through actually running CTFs many times, for enterprises, for private setups. There was all this excitement and engagement, and people think the learning happened, but in reality it didn't. Now, I've always believed in CTFs. Even in the university courses I've taught over the years, I've always encouraged people to do CTFs to learn security; it's one of many ways to show your hands-on skill. The question was: can we have some sort of CTF idea, but in a dev or engineering setting, a CTF that's friendly for devs? And the idea came around: what if instead of capture the flag, we have fix the flag? So you still go and find and detect, but you only get a score when you fix it.


So we played around with this idea for a while. We had a proof of concept, we ran it in some trainings as an MVP, as they called it back in the day, using Git repositories. Because another thing was that in order to learn something, you shouldn't have to learn a platform just to start learning something new. In some of the trainings I see, just to start your learning journey, you first need to learn how to operate the platform: where to click, where not to click. For me that raises the question, why are we doing it? What if, instead of inviting developers to come to us, we go to them and offer the content in the language and the methodology they're already familiar with?

Cole Cornford
I think that's a really good point, because the user experience of moving people out into a different ecosystem and then bringing them back, that context switching is a killer. I've seen it a lot with traditional AppSec products. More recently, AI has changed it a bit, so in your IDE you can get constant feedback, written in natural language, while you're working. But maybe four to five years ago we had a proliferation of ecosystem-based security products, where, say, you have GitLab and the security runners are built directly into the pipelines, so developers coding in GitLab get security findings there.


But another five years before that, you'd have a product like Checkmarx or Fortify: developers would scan the code, the results would get shipped off to an ecosystem over here, completely unrelated to their workflow. Oftentimes people wouldn't even go over there to look at the results. It just wouldn't happen. And I can see it's exactly the same for training, because if you're asking people to first learn how to use the training environment before they get any value out of it, they're probably going to stop.

Pedram Hayati
Exactly, 100%. In security we have this tendency to create, as you said, our own ecosystem, and I'm glad it's changing over time. You mentioned CI/CD and the whole concept of DevSecOps. I was checking the other day: the concept of DevOps was coined back around 2009. And when did we start adopting it in security? Maybe five, six years ago. So we're a little bit behind. And part of this is that developers are pretty picky. They want a smooth experience. Beyond the content they consume, they look at any technology platform and ask, how have these guys built this? Is it using some old-school, weird code snippets or environments that nobody even uses these days? That can completely put them off. So they're very picky about how you communicate with them.

Cole Cornford
Yeah, I've seen a lot of pen testing firms, not elttam by the way, elttam did well. I remember doing my secure programming in Java course back in 2018, so big thumbs up there. But other testing firms, large ones, what they often do is go through the [inaudible :55] top 10 and then run through every single vulnerability class showing you how to hack. And the examples they use are always either contrived, or they use something like WebGoat, and I'm like, "You cannot take WebGoat or Juice Shop seriously as a developer," because you look at it and you think it's a joke. You understand it's a training ecosystem. But are people actually writing Java Spring applications as much anymore? And especially not with the kind of interface that's there. It's very Web 1.0 at this point.


So it's one of the things I thought about very heavily, because in my company I've got a product called Birdhouse that we use for our training, and it's a Dockerized Node.js and React application. One of the things I thought was really important for developers is to see examples that aren't contrived, like every single field having cross-site scripting, because that's not going to happen anymore; people use React. But I've put situations into the app where you'd say, "Okay, we have a field where we want to give people the ability, like MySpace back in the day, to run some HTML for styling on the About Me page. So let's allow that, we'll block script tags, it'll be fine." And it's naive. The developer thinks they're doing the right thing, but we know there are ways around it. So I wanted to show situations where you're balancing a business requirement against a security one, in a language and framework that's reasonably modern. It sounds like you've been doing that with SecDim as well.
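A minimal sketch of the kind of naive filter Cole describes, in TypeScript; the function name and payload are illustrative, not taken from Birdhouse:

```typescript
// A naive "block script tags" filter: stripping <script> feels like the
// right thing, but HTML can execute JavaScript without a script tag.
function naiveSanitise(aboutMeHtml: string): string {
  return aboutMeHtml.replace(/<\/?script[^>]*>/gi, "");
}

// Bypass: an event handler fires without any <script> element.
const payload = '<img src="x" onerror="alert(document.cookie)">';
console.log(naiveSanitise(payload)); // passes through unchanged

// Safer: an allow-list sanitiser (e.g. DOMPurify) or a framework that
// escapes output by default, rather than a home-grown blocklist.
```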

Pedram Hayati
Yeah, exactly, Cole. As I mentioned, they're quite picky. I'd say there are three angles you should always focus on if you want to deal with devs. First of all, use the latest and greatest tech that devs are actually using. Whatever content is delivered in SecDim is based on Git, so we have this developer-first, Git-based experience. In fact, they can even clone the whole content and lab locally. You don't want to limit them: they can use their own IDE, and these days their own agentic AI, to go through it, because that's the experience devs would like to have. If you look at any modern dev tooling these days, it doesn't limit them; it lets them extend it and expand it. In fact, very recently we also introduced our own AI MCP, which creates a customized, hyper-tailored learning path based on the vulnerabilities identified in your code or based on your GitHub profile.


So it tells you, "Hey, I know you've been working with files or databases. You probably want to know about this and this and this. You want to do this lab in [inaudible :40], you want to know about log injection because I've seen you writing logs to files." Something very specific to them, and they like those things. Now they're getting it immediately, as you said, in their IDE, when they want additional resources. That's especially important today, because AI is now fixing things without them understanding what the fix is about.


So we're providing and supporting them with additional resources so devs can continue on their learning journey. That's very, very key. The other thing is, as you mentioned, the examples or content you take them through need to resemble something real, because that's another thing devs are picky about. And if you think about it, a lot of security vulnerabilities, and I complain about this all the time, have terrible names, horrible names. Just pick any of them; let's take the most obvious one, cross-site scripting.

Cole Cornford
I hate it. I say content injection, I like content injection. But then everyone's like, "What is this?" I'm like, "Well, it could be JavaScript, it could be…" They've heard of XSS, but I think it's a lot clearer when you say content injection. No one's caught on to that apparently, so XSS is here to stay.

Pedram Hayati
Right, what do you mean by cross? What is site? We don't have any concept of "site" anymore. Or server-side template injection, and you abbreviate it to SSTI. From an engineering perspective, they're like, "What is this guy talking about?" But if you drill down, there are terminologies in the engineering world that are the equivalent of these insecure behaviors that we can use.


And another thing is real-world incidents. Almost all of our content attaches an insecure concept to something real: this thing happened to a company, or it happened in a CVE. And that goes a long way. I remember developers coming to me and saying, "Hey, I think in this code I have that issue, that GitHub hack issue." I'm like, "What are you talking about, the GitHub hack issue?" And he was referring to an incident he worked through in one of the labs, which was attached to the GitHub mass assignment issue from, I think it was 2010, 2011, where someone was able to add his own public key to any Git repository on GitHub. So it was mass assignment.


So he didn't remember what we call mass assignment, but he remembered the GitHub hack. And for teaching something to someone, I think this is the key: I don't care if they remember the title, but if they remember that insecure pattern, then when they write the code they remember, "Okay, I should watch out for this."
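A short sketch of the mass assignment pattern behind that kind of incident, assuming a generic Express app in TypeScript; the route and field names are hypothetical:

```typescript
import express from "express";

interface UserProfile {
  displayName: string;
  bio: string;
  isAdmin: boolean; // never meant to be user-editable
}

const profiles: Record<string, UserProfile> = {
  alice: { displayName: "Alice", bio: "", isAdmin: false },
};

const app = express();
app.use(express.json());

// Mass assignment: every client-supplied field is copied onto the model,
// so a request body of {"isAdmin": true} silently escalates privileges.
app.post("/profile/:id", (req, res) => {
  profiles[req.params.id] = { ...profiles[req.params.id], ...req.body };
  res.json(profiles[req.params.id]);
});

// Safer: explicitly pick the fields a user is allowed to update.
app.post("/profile/:id/safe", (req, res) => {
  const { displayName, bio } = req.body;
  profiles[req.params.id] = { ...profiles[req.params.id], displayName, bio };
  res.json(profiles[req.params.id]);
});

app.listen(3000);
```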

Cole Cornford
Yeah, I like using examples a lot as well. I pick known incidents or assurance activities. We have this AppSec-as-a-service thing where we do all sorts of different things to deliver AppSec out of the box, and part of it is regularly refreshing secure coding training with people. The way we tend to do that is to take, say, pen test or code review results, and those inform which concept we're going to reinforce, so people can see what it relates to in their own code base, and then we cross that with some kind of incident we've seen in the past.


The one that always comes up, and I love using it all the time, again with a stupid name, is the OMIGOD one in Azure, which was in the Open Management Infrastructure agent in the management plane. The reason I like that one is it covers a variety of bug classes where, taken on their own, people go, "Oh, I don't see what the issue is," like constructors providing default values without anyone considering what that default value actually is.


Another one is to do with not front-loading. With defensive programming, you want guard conditions, so if something is wrong, you exit the program gracefully at the beginning, before going down into execution. What they did was have all the validation steps intermingled with the logic of the program rather than out front. So I say to people, "Well, if you have sensible constructors, it makes sense." In this case the constructor defaulted to zero, and for a number [inaudible :05] zero, you can figure out what happened there.
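A small TypeScript sketch of that front-loading idea, guard clauses first and business logic after; the names and the zero-default contrast are illustrative, not the actual OMI code:

```typescript
interface PaymentInput {
  authUserId?: number;
  amount?: number;
}

function processPayment(input: PaymentInput): string {
  // Front-loaded guard conditions: validate and bail out before any work.
  if (input.authUserId === undefined) {
    throw new Error("unauthenticated request");
  }
  if (input.amount === undefined || !Number.isFinite(input.amount) || input.amount <= 0) {
    throw new Error("invalid amount");
  }

  // Only fully validated input reaches the logic below.
  return `charged user ${input.authUserId} amount ${input.amount}`;
}

// Contrast: a parser or constructor that silently defaults a missing
// user id to 0 can turn "no credentials" into "privileged user".
```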

Pedram Hayati
No, no, exactly. Yeah.

Cole Cornford
So I like those kinds of examples. And also pointing to an actual issue: we did a pen test and found this kind of thing, and it would've been solved if you had authentication and authorization checks at the top of every single function call. Then they can see how realistic it is and how they're making those mistakes. So I think it's really good that you're using those examples and pointing to them. Because, oh, the Optus hack, mate, how did the Optus hack happen? Because no one did any authentication or authorization checks on the front end.

Pedram Hayati
And they relate to it, right? At the end of the day we're human, and the more analogies and keywords we have, the more engaged people are and the more memorable it becomes. The other thing I think we're not doing well in the security industry, and I don't think we're well positioned to do, is coming up with design patterns, secure design patterns. Almost every security person is talking about, "Oh yeah, defense in depth. Least privilege." I'm like, "These are so vague and high level." They're great, but you need to talk about specifics. You need to talk about which design pattern people can follow or go and check. There are actually amazing design patterns in the engineering industry that devs already know. If you emphasize that a particular design pattern is good, it can also help them with security.


The example you mentioned, the whole idea of constructors and default values: we have a concept in engineering called value objects. If you say it to an engineer, they'll know what it is, right? So just say, "Hey, if you make your domain models value objects, they're safe." Oh, okay, that's it. They know what to do. You don't need to talk about validation, where to put it, where to enforce it. Value object is a pattern they know and can follow. I think we in the security industry should make ourselves more familiar with these sorts of things. I also feel we need a taxonomy for mapping the concepts and concerns we have in the security industry to the engineering industry, and that helps bridge more and more of the gap.
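A minimal value object sketch in TypeScript; the EmailAddress type and regex are illustrative:

```typescript
// Validation lives in the constructor, so an invalid value simply
// cannot exist anywhere else in the program.
class EmailAddress {
  private constructor(public readonly value: string) {}

  static create(raw: string): EmailAddress {
    const normalised = raw.trim().toLowerCase();
    if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(normalised)) {
      throw new Error(`invalid email address: ${raw}`);
    }
    return new EmailAddress(normalised);
  }
}

// Downstream code accepts EmailAddress, not string, so "forgot to
// validate" stops being a reachable state.
function sendWelcomeMail(to: EmailAddress): void {
  console.log(`sending welcome mail to ${to.value}`);
}

sendWelcomeMail(EmailAddress.create("dev@example.com"));
```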

Cole Cornford
Maybe that's something I should look at creating: some kind of reference architecture for secure coding patterns, right? Because I see these things all the time. Things like not having default cases so everything falls over, not having a way to escape something, not front-loading exit conditions, or doing string concatenation instead of interpolation.


Short-circuiting of conditionals is another one I keep seeing. But then there's a bunch of esoteric rules you need to remember, and when is the right rule to [inaudible :07] bring into this situation, and that usually gets thrown out and people forget about it. So I think there's value in apps that [inaudible :14] those kinds of things, but where I see the true value is when these concepts are put into some popular framework or library that people consume and use, and then there's abstraction and people don't need to think about it anymore. React's probably the most well-known one, but even before React there was Ruby on Rails and MVC-style architectures. The fact is, if you use Rails, there's a good chance you're eradicating a bunch of bug classes by default.

Pedram Hayati
Automatically. Yeah.

Cole Cornford
Because you'll have ActiveRecord, and at the time SQL injection was a big thing. Everyone will try SQL payloads; yeah, there are ActiveRecord payloads, but whatever. There's ERB, so that's output encoding people don't need to think about. Every route has an authn and authz check applied to it. But people haven't adopted Ruby at the same scale as, say, Spring or Express or Django. So I think if we get people picking up and building secure defaults into these technologies, that's how we get wide-ranging improvements.
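The same "kill the bug class in the framework" idea outside Rails, sketched in TypeScript with the node-postgres driver; the table and queries are hypothetical:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from environment variables

// String concatenation: attacker-controlled input becomes part of the SQL.
async function findUserUnsafe(name: string) {
  return pool.query(`SELECT * FROM users WHERE name = '${name}'`);
}

// Parameterized query: the driver keeps SQL structure and data separate,
// which is the property that lets ORMs like ActiveRecord make SQL
// injection the exception rather than the default.
async function findUserSafe(name: string) {
  return pool.query("SELECT * FROM users WHERE name = $1", [name]);
}
```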

Pedram Hayati
Exactly. Everyone talks about secure by design, but I always say secure by default is even better. To get there, though, I think there needs to be a marriage between the engineering people and the security people, because there are things we know, but there are a lot of things we don't. React is a great example of it. If you look at the whole history of cross-site scripting, which was originally coined by a Microsoft engineer, and all the effort the security people put in, there was HTML Purifier, there was DOMPurify, there were all these ideas: okay, this is how we should fix cross-site scripting. And almost none of them were adopted by the engineering side, because they're like, "What is this library?"

Cole Cornford
I fundamentally hate Content Security Policy, because I've seen engineering teams waste weeks, months, if not years having to maintain a dynamic, nonce-generated policy that… And they say, "What's the point of this?" And then someone in security will say, "Defense in depth." And I'll just say, "Sounds like you're adding lots and lots of overhead to your engineering function." Browsers nowadays have features like Trusted Types built in, which you can add to your CSP; I don't think I've seen a single company adopt that. But what I have seen is lots of people choosing Angular, lots of people choosing Laravel, lots of people using React. So use those, they're more secure. Don't fuck around with Content Security Policy.

Pedram Hayati
Exactly. Because it works, it's not just about security. Where I was going with that is, when we have something like Angular or React… Angular came earlier than React, with the concept of the Angular sandbox, which was built purely to stop cross-site scripting. They pushed it up until around 2016 and then found out, "Oh, this is not going to scale," because everybody was talking about the latest Angular sandbox bypass. Eventually it got addressed, but there were a lot of lessons learned about how to design a front-end library so that by default, and that's the key, by default it's secure against these content injection type attacks unless you really shoot yourself in the foot.


And React is beautiful, because if you really want to render raw HTML, there's a prop called dangerouslySetInnerHTML. That naming is beautiful. And again, React wasn't the first to do it; they adopted lessons learned from other programming languages back in the day. If you want to call something that can potentially result in a security issue, it should have such an obvious name. It shouldn't be called eval. Like eval [inaudible :17] should be called "the most dangerous function you can call," and then you give it the input.
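A tiny TSX sketch of the default Pedram is describing; the component and prop names other than dangerouslySetInnerHTML are illustrative:

```tsx
import React from "react";

// Safe by default: interpolated values are escaped, so any <script> or
// event handler inside `bio` renders as text instead of executing.
function AboutMe({ bio }: { bio: string }) {
  return <p>{bio}</p>;
}

// Opting out is loud and greppable, the "dangerous" is in the name.
function AboutMeRaw({ bioHtml }: { bioHtml: string }) {
  return <p dangerouslySetInnerHTML={{ __html: bioHtml }} />;
}
```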

Cole Cornford
A lot of these things only came about because engineers moved forward with how they were doing things. Back in those days, the vogue was still, let's use Bootstrap and some front-end library like jQuery to design stuff. Then, I think it was called SSR, server-side rendering, you started to compile JavaScript server-side and turn that into the HTML you sent down, rather than directly modifying the DOM in the browser. So that's why React started to do well. The other thing is that bandwidth was expensive for social media networks at that point, so constant Ajax requests back and forth, that pattern started to fall over, and people wanted to say, "Well, I want to make it look like we're still doing that." Which is how we ended up with React: to solve a network and performance challenge.


And then they said, "Well, can't we use this to also do security?" So I like it when we can take an existing concept. There's this framework people have been using for a while called Twelve-Factor, where they say a Twelve-Factor app is resilient for all of these different reasons. One of those I like is build reproducibility. Such a simple concept: that from start to finish, I can push a button and my software application gets built. Many places have manual steps, or they require checklists, or there's some change control process or whatever, but they can't reproduce builds from scratch every single time.


But when you get engineers doing that, you can have a conversation about how, if you have reproducible builds and you have an outage, you can recover really quickly, and you can release software a lot faster because it's quicker to deploy into production environments. And then a security person like me says, "The by-product is that if you have a security incident, you can recover very quickly. If you have a security issue, you can patch really fast." So as a by-product of doing good engineering, you get security [inaudible :32]. I marvel at all of that kind of stuff. I wish I could spend more time just reading my software engineering books, but instead I have to drink water from my flaming Galah glass.

Pedram Hayati
You'd be surprised. There are so many amazing coding patterns, design patterns, and concepts in the software engineering industry that help us in security communicate in the right language to devs, and we don't need to reinvent the wheel. The concept of immutability is similar to what you just described. It's been in the engineering industry since, I don't know, the '80s, because they knew that if you build something mutable, if something changes [inaudible :15], you're probably going to have bugs. So they were always advocating: write a program in a way where you don't have the concept of a variable, everything is immutable. Then the whole concept of immutable infrastructure came around, which is now stock standard with Kubernetes and cloud infrastructure. Immutability goes hand in hand with security, because it is what it is, and if something happens, we can, as you said, reproduce the whole infrastructure, push it, refresh it, and done.
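A small TypeScript sketch of the application-level version of that idea; the config shape is hypothetical:

```typescript
interface AppConfig {
  readonly dbHost: string;
  readonly allowDebugEndpoints: boolean;
}

// Frozen at startup: nothing can quietly flip a flag at runtime, so any
// change has to go through a rebuild and redeploy, the code-level cousin
// of immutable infrastructure.
const config: AppConfig = Object.freeze({
  dbHost: "db.internal",
  allowDebugEndpoints: false,
});

// config.allowDebugEndpoints = true;
// ^ rejected by the compiler (readonly) and ignored or thrown at runtime
//   (Object.freeze).
```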


It's just for us to expand on those. For a while, I was looking at different language communities, comparing what the JavaScript people were talking about for the next version of JavaScript, sorry, the compiler [inaudible :06], and then what the Rust people and the Go people were talking about. You could clearly see which language community was focused on what. One of the things, Python was actually a good example of it, was a big discussion around: if you have a switch statement in your code, should it have a default case by default, or should we keep it open? Because in a language like Rust, this is non-negotiable: the match has to cover every case.


It goes back to what you talked about with constructors having default values. You build a match in Rust, it's called something different, but you know that if none of the cases happen, the catch-all case will kick in, or the code won't compile. In languages like Python and JavaScript, it's up to you, and that "up to you" can result in an undefined state and a security issue.
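TypeScript can approximate Rust's exhaustiveness with a `never` check; a rough sketch with made-up roles:

```typescript
type Role = "admin" | "editor" | "viewer";

function permissionsFor(role: Role): string[] {
  switch (role) {
    case "admin":
      return ["read", "write", "delete"];
    case "editor":
      return ["read", "write"];
    case "viewer":
      return ["read"];
    default: {
      // If Role grows and this switch isn't updated, this line stops
      // compiling instead of silently falling through at runtime.
      const unhandled: never = role;
      throw new Error(`unhandled role: ${unhandled}`);
    }
  }
}
```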

Cole Cornford
There's a concept, I think it's called orthogonality, where you have many ways to achieve the same outcome. I'm seeing that more languages are being restrictive and trying to steer people toward a preferred or correct way to do something, as opposed to having a lot of ways of doing the same thing.


To give people an easier example: if you want to increment a number, you might write i+1, i++, i.add(1), or i=i+1. So you can see there are four or five different ways, and there are probably quite a few nuances in how those calculations are performed, but you don't really know unless you go behind the scenes into the weeds. Two languages I know reasonably well are Golang and PHP, not that I ever get to see any Golang, which makes me sad because I spent so long with it. It's a beautiful language that I just don't come across all that much.


But every time I do get to see it, I see people with `if err != nil` statements over and over again, and I'm like, "Guys, this is not how you're supposed to write the program." Whatever, let them learn, they're just junior engineers. They haven't figured out the concept of errors as values, right? So that's all right. But Go's very opinionated about making sure you do the right thing every single time. They don't have the concept of a while loop, for example, because they say that for [inaudible :41] you can say for forever, right? There's no while. But other languages will have a for loop, a for-each, and a while, and suddenly there are slight nuances between each of these. So I like it when they reduce that orthogonality.


And security people, you're reading all of these forums and documents to work out what's happening in the future. Most security people I've spoken to, I'll say, "Hey, PHP, what do you think?" And they'll be like, "PHP? That's the worst. There are so many vulnerabilities and it's horrific." And I'm just like, "How long has it been since you've actually written PHP? Because they've basically backported all of the nice things from other languages into PHP 8. So if you still think PHP 5 is the bee's knees, your knowledge is a little bit out of date, right?"

Pedram Hayati
100%. A lot of attacks I remember, like register_globals. There are still people who think that's enabled by default. That was really horrible, because you could effectively mass-assign everything in PHP. But it has come a long way. In fact, the default password hashing algorithm in PHP is actually top-notch compared to… for example, in Python it's still [inaudible :04] you're using it, but in PHP they've moved on to a more recent one.

Cole Cornford
To the [inaudible :11] stuff. I don't know. If Scott Contini was here, he'd tell me about all the crypto things because he's smart like that, but I just read things, see if it says [inaudible :21], and cry.

Pedram Hayati
But in terms of adoption, what we see, at least, is that sadly JavaScript is still the top one, followed by Python these days, especially with the hype around ML and AI. A lot of those easy-to-use frameworks, as you say, are in Python and JavaScript. TypeScript is kind of finding its way in. Ruby is a surprise to see… This is just what we actually see on the security side, because we run these contests and competitions, and we offer people different vulnerabilities in different languages. JavaScript and Python, everyone takes those. Maybe followed by Go, which is good. Java and C# in enterprise settings. But PHP, not that many.

Cole Cornford
Maybe I've just landed in a weird spot where I've had a bunch of PHP customers [inaudible :24]. I'm like, "Why is everyone writing in Symfony and Laravel? What is going on?" It's better than them coming to me and saying, "Hey, Cole, do you like CICS terminals and COBOL and some Perl scripts wrapping it all together?" And I'd be like, "Ooh, guys, let's talk about the paragraphs and sentences."

Pedram Hayati
Yeah. Yeah, yeah.

Cole Cornford
So I guess if people wanted to get a bit better at learning about this kind of stuff, other than just going and registering for an account at SecDim, where would you start?

Pedram Hayati
In terms of secure coding, secure code learning, or…

Cole Cornford
Let's go a bit beyond that, because most of the people who listen to this podcast are application security professionals, so they're already aware of concepts like input validation, output encoding, and so on, but they may not understand reference architectures, design patterns, orthogonality, memoization, all of these software engineering things. What would you do, or where would you suggest people go, to build up that kind of skill base? Because I know lots of AppSec people right now are so oriented towards products that products define their knowledge base, which is usually vulnerability classes rather than good architecture patterns or secure coding principles.

Pedram Hayati
This was even my number one recommendation to our pen testers back in the day. I was always saying, "Go and learn how to build one app from scratch, all the way from dev to staging to prod. Go through all the steps." Because that's the number one skill I see lacking in people in offensive security roles. They can come up with amazing ideas, these edge cases that result in a vulnerability, but they don't really understand the inner workings of today's cloud-native apps, the applications that are actually being deployed. So pick any language, I'd suggest starting with JavaScript or Python, and start by building one app, one API, expose a few things, maybe these days offer some agentic interfaces there. But take it all the way and learn, for example, how this thing needs to be configured in CI/CD.


So you need to learn, if you're on GitHub, GitHub Actions, and learn how to pass secrets to that CI/CD. Learn it, see it, don't just conceptualize it as, "Okay, yeah, I know how CI/CD works." That's very different from actually writing the GitHub workflow. Then learn Kubernetes, or some of these CloudFormation scripts, or Terraform. These are the tools devs are working with. What you'll get at the end of this is sudden exposure to a whole different territory of bug classes, whole different vulnerabilities that nobody even knows about. Starting in the IDE, you'll think, "Oh, what if?", because you have this hacker mindset. What if one of these plugins, extensions, or these days these MCP things I have, is malicious? Then the code I'm writing from now on is already flawed.


Then you'll think about CI/CD, wow, there's so much room for malicious activity to happen. Kubernetes is a whole other story. Personally, I learned a lot just by going and learning how engineers do their thing and actually doing it. And I replicated it across different languages. You mentioned Go; see how Go, for example, caters for some situations by default. Take the same code, the same kind of framework, to JavaScript, and it doesn't happen. There you go: you've identified a vulnerability straight away just because Go handles it and JavaScript doesn't. A lot of these don't even have a name; we don't really have a title for them. There are amazing materials out there to understand this. I feel every pen tester needs to understand code. Today, if you don't understand code, you're very limited in what you can find.

Cole Cornford
Well, don’t worry. I’m sure that every pen tester is just going to say, “Well, AI is going to write my code for me, so what do I need to know, right? It’ll be fine.”

Pedram Hayati
All right, let’s leave it there.

Cole Cornford
Thank you so much for coming on the podcast. It’s been an absolute pleasure. Do you have any parting words or would you like to just do a quick pitch about SecDim to the audience?

Pedram Hayati
Well, thanks for having me. If you're passionate about AppSec, DevSecOps, or secure coding, SecDim offers an environment for that. We're open to the open-source community and offer them free access. We also have quite a large number of community challenges people can use, and they can even contribute challenges. And I'll say the best way to learn is by doing rather than reading.

Cole Cornford
All right. Well, thank you so much, everyone. Go check out SecDim and I’ll see you next time. Thanks a lot for listening to this episode of Secured. If you’ve got any feedback at all, feel free to hit us up and let us know. If you’d like to learn more about how Galah Cyber can help keep your business secure, go to galahcyber.com.au.