PSPF Changes Explained for Security Leaders
The Protective Security Policy Framework is meant to guide how government manages security risk, but constant updates make it harder to implement than to understand. In this episode of Secured, Cole Cornford is joined by Toby Amodio, Practice Lead at Fujitsu Cybersecurity Services and former senior cybersecurity leader across Australian government, to break down what actually changed in the latest PSPF update and why it matters in practice.
They examine the growing focus on personnel security and foreign interference risk, the inclusion of AI guidance that adds little beyond basic risk assessment, and the long overdue recognition of Secure Service Edge and SASE as compliant gateways. The conversation also explores why deny lists and centralised risk sharing sound sensible on paper but are far harder to enforce in reality, and why most security failures still come down to behaviour, accountability, and how technology is actually used rather than what policy says.
00:00 – Intro
01:18 – What the PSPF is and why it exists
02:49 – Annual updates, directives, and policy advisories
04:19 – What actually changed in the 2025 PSPF update
05:36 – AI in the PSPF and why it adds little value
08:14 – Tool hype vs implementation risk
10:32 – The AI policy advisory and trusted vendors
14:25 – Directive 3 and clearance disclosure risks
17:21 – Personnel security and enforcement reality
19:41 – Secure Service Edge and SASE recognition
23:39 – Commonwealth Technology Management directive
25:28 – Deny lists, transparency, and security through obscurity
28:05 – Centralised risk sharing and assessment overload
29:52 – Policy wonk or policy gronk
31:12 – Final takeaways and closing
Cole Cornford
Hey, everybody. Today I’m joined by Toby Amodio, who is the practice lead at Fujitsu Cybersecurity Services and has previously held government cybersecurity leadership positions at the ATO and the Australian Parliament House. Hey, Toby, how are you doing on this fine episode of Policy Wonks and Gronks?
Toby Amodio
I’m great, thank you, and it’s good to be back. I always appreciate having a chat with you, sir.
Cole Cornford
Thank you. So in this one we’re talking a little bit about the PSPF updates, which are rather confusing for me as the gronk in the room. So would you be able to explain to my audience what the PSPF is and the difference between ad hoc directives, policy advisories, and annual updates?
Toby Amodio
Yeah, perfect. So the PSPF, the Protective Security Policy Framework, is effectively the overarching north star for every government agency within Australia around security. And it’s not just cybersecurity, it’s personnel, physical, cyber, and cybersecurity governance writ large. The cyber in that is actually technology and risk, but I’ll just call it cyber for the purposes of the pod. Now, the great thing about the PSPF is it’s mandated that every agency follow it as a part of getting their money. And so every agency then has to really follow the updates to it. So I’m glad that we have a chat about it. And it’s also a dynamic document because inevitably the controls aren’t static, and so it’s constantly being updated. They have at least one annual update.
And so today we’ll talk through that update that occurred in the middle of the year in July. And then we’ll talk through some of the other minor updates that occur. They have things called PSPF directives, which are emergency ad hoc updates to the PSPF and which, shockingly given the name, include specific directives and the actions required for agencies to implement them. And then those get rolled into the next annual update as it’s released.
But we’ve also got a new one this last three months, which was a policy advisory. And whilst I’m the wonk for the policy section here, I don’t actually know how an advisory differs from a directive. I think it’s just a hot take from Home Affairs, but we will talk through that as well. It’s not confusing at all that we’ve got multiple different ways that it updates, but it always keeps us on our toes. And I’ll say as well, to complement that update, I really do appreciate that the Australian government leans in on this and that Home Affairs are so diligent in their updates and are good at what they do, even if that makes our lives hard as we’re constantly chasing that moving bar.
Cole Cornford
Yeah, I was going to say hot takes. We should actually appreciate the government agencies that put the effort into doing these things. So good work, guys. I’m excited to see what future updates bring for future editions of policy gronks and wonks. But anyway, let’s start with the annual update. What’s the main thing they’ve changed in it? There’s a couple of things like adding a governance section and a few extra directives. What are the highlights for you?
Toby Amodio
Yeah, so the highlight for me was it’s a pretty sensible update where they incorporated the directives from over the last financial year, which is fine. The overarching theme, if I had to summarize it, was three things. The first is that the PERSEC and foreign intelligence risk is increasing, and because of that we have to be more aware of it. So there’s a greater focus on personnel security controls, on foreign control and ownership threats, and on how we assess and manage those as entities, both from a technology perspective and for the personnel that manage that technology.
They’ve introduced an AI section because AI is sexy and everything has to be AI at the moment. We’ll talk a little bit more about that in a second. And they’ve also finally relented, and it’s been one of my bugbears, but they’ve finally allowed Secure Service Edge or SASE products, virtual gateways, to be counted as a gateway. And I’ve been pushing for that for years because I helped roll out the Secure Service Edge at one of the agencies I was at recently. So whilst there are other core themes and small minor updates, they’re the three things that I would take away that people need to focus on if you’re implementing the PSPF. Do you want to talk about AI?
Cole Cornford
No, I never want to talk about AI, but unfortunately I have to because it’s just constant, it’s everywhere. And unfortunately for me, as an application security company, everybody is investing so heavily in just AI anything. I was talking to people today and they were like, “Oh, do you do AI security?” And I think I’ve come around to realizing that AI security is just white-labeled application security. So of course I do it. Yeah, why not? It’s fine. You want to talk about your MCP server? That’s just API security. You want to talk about producing heaps of really low quality vibe-coded applications? Oh, let’s just get some static analysis tools.
It just annoys me that that’s where a lot of capital is going and people are so hyped on it. And I have still yet to see all that much benefit out of it. I just see people talking about it a lot. And these AI updates are a bit of a meme in the PSPF, actually. What was it? All it said is the same thing that you would apply to literally every other technology out there, which is that, hey, we are adopting a technology. Maybe you should consider where the technology is hosted, the type of data going into the technology, and maybe have compensating controls in place. Oh, shock horror. It’s like I’ve done a threat model before.
Toby Amodio
Correct. It’s one of those things where it’s like when they first released cloud computing guidance and had a whole section dedicated to cloud computing, and then they realized cloud is just another technology and you assess it the same way you would risk assess any technology. And I feel like that’s what they’ve done, where they’re like, “Hey, it’s AI, but you’ve just got to risk assess it the same way you’d risk assess any technology.” And I get really frustrated with it.
And as you mentioned, I feel like the technology itself is less interesting to me than the authorization of the technology. Inevitably, they say things like, make sure the data stays onshore, which with the internet becomes extremely hard, and with processing for AI is almost nonexistent. They say things like, “Hey, you have to make sure about the personnel working on it.” Validating that for a heap of these vendors is nearly impossible.
The thing for me is it doesn’t say how you’re going to handle the outputs to make sure they’re trustworthy, or point at things like the OWASP generative AI top 10, the kind of elements that actually give you trust in the output and the quality of the output, and validate that the logic that got there is reliable, so you can make a decision or take an action based on that output. And so for me, they’re fine controls, but they don’t really add any value as I see it. The real value will come down the line as we start to work with people to be more intelligent about the outputs of AI and how we handle them, so it doesn’t just scale your job into infinity as people vibe code their way into oblivion.
Cole Cornford
Look, I’ll be honest, if people scale my job into infinity, that’s really good as one of the sole sovereign application security providers in the country. So please make as many rubbish applications as possible and then hit me up for pen tests, code reviews. I promise I won’t gouge you, I’ll just gouge the AI, okay?
Toby Amodio
That’s so funny.
Cole Cornford
It’s so bad, man. I’m just so disappointed that there’s so many people who just put all of their eggs into this basket and think it’s going to be a transformative technology and that you can’t put those security controls in place without really hampering innovation. And so-
Toby Amodio
Yeah, it’s a balance.
Cole Cornford
I think the government’s quite reluctant to come in and stop things from occurring, but at the same time, we have effectively no governance, it’s wild west. I don’t know if it’s going to go the way of crypto where it took 10 to 15 years before we decided that, hey, maybe it’s a good idea to treat crypto exchanges like, I don’t know, a place that people store-
Toby Amodio
Banks.
Cole Cornford
Yeah, like banks and have traceability of funds.
Toby Amodio
Know your clients and all that. Yeah.
Cole Cornford
Know your customer. What a concept. Imagine being able to track where money’s going and seeing it all ends up in North Korea. It’s like, hang on a second. Isn’t part of the point of the PSPF directives to not ship money to nations that we’re actively hostile against? It’s a bit silly to me.
Toby Amodio
Yeah, I couldn’t agree more. And it’s made even more of a farce… not a farce, it’s made even funnier by the fact that the PSPF team at Home Affairs then released a policy advisory. And I mentioned at the start, I don’t really know where a policy advisory sits. I presume it’s somewhere between a PSPF update and a directive, more as a vibe feel. But after their release of the PSPF update, they’ve released this policy advisory on AI, which, and here’s an oversimplification, basically says that if an entity uses a generative AI product from OpenAI or Anthropic running on their infrastructure, then they don’t need to do an assessment of the foreign control interference, because Home Affairs has already done one, so trust us, but also you should still authorize it yourself.
And it’s this weird juxtaposition where it’s like, “Hey, these two specifically, they’re good, but still check it yourself, lol.” And it’s jarring, I won’t lie, because I go, “Hey, again, you’ve said generically you should just assess them like any other thing, but these two we like.” And I know that every business exec is only going to read the first dot point that says these two are good, not the second dot point that says, “But make sure you check it yourself.”
Cole Cornford
Make sure it’s good for the context that you’re operating within. Check yourself before you wreck yourself. No.
Toby Amodio
Correct.
Cole Cornford
And the other thing is also that accountable authorities have the ability to override and just choose what they want anyway. And that’s going to be a real big issue because even if you’ve listed the two safe technologies, which I guess they’re safe, there’s going to be people who say, “I can’t use those for various reasons, so I’m going to go off and just try this other stuff.” And there’s probably very little governance around it because there’s so much hype around innovating and trying to do stuff.
And the other thing that irritates me a bit about AI security is that so many folk are focused on the wrong thing. There’s conversations about polluting training data, about bias in the model, about supply chain risks, and almost all of the issues that I see are just people using it incorrectly. That’s it. It’s implementation issues. The same with cryptography. It’s like, oh yeah, AES is very secure, but if you use a key that’s literally all zeros, it’s probably not going to help you encrypt things, even if you get a checkbox on your ISM Control 922.
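[Editor’s note: a minimal sketch of the “sound algorithm, broken usage” point above. This uses a toy XOR stream as a stand-in for AES, so the failure is easy to see; it is not real cryptography, and the key values and messages are purely illustrative.]

```python
# Toy illustration: a cipher is only as good as how it's used.
# XOR stream stands in for AES here -- NOT real cryptography, just a sketch.

def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the repeating key (toy cipher)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

secret = b"clearance details"

# An all-zeros key: XOR with 0 is the identity, so "encryption" does nothing.
weak = xor_encrypt(secret, key=b"\x00" * 16)
assert weak == secret  # the "ciphertext" IS the plaintext

# A non-trivial key at least transforms the bytes (still a weak toy cipher;
# the point is that the control's value depends entirely on implementation).
key = b"\x5a\x13\x9c\x2e"
better = xor_encrypt(secret, key)
assert better != secret
assert xor_encrypt(better, key) == secret  # XOR is its own inverse
```

In other words, the compliance checkbox ("we encrypt") is satisfied in both cases; only the second actually changes anything on the wire.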
Toby Amodio
Yeah, I couldn’t agree more, because it’s basically like saying a car is safe, but it depends on who drives it and what they’re driving it to do. Saying Anthropic and OpenAI are safe is saying they’re okay to be used for this purpose. But then for me, as you said, the use case is the more interesting piece. And agencies really need to think about not just the tool, but the use case the business is going to use it for and how they put controls around that use case. Because it’s very different if you want your AI to generate a funny meme that’s going to be your logo for the next month, or if it’s going to choose who the next clients are for [inaudible :51].
And so those use cases have very different assurance requirements. And so whilst I’d say to agencies, whilst this is in there, make sure you’re really clearly capturing from the business teams what they want to use it for and the guardrails they’ll put around it, because that’s going to be the thing that will bite you, as you said. It’s not going to be the tool per se, it’s going to be the way that you’re using the tool.
The last thing I’ll say is it’s absolute whack-a-mole to stop the use of these pieces, as you would know. So having a way to go, “Hey, use this one because we like this one and use it for this reason,” really helps the users and empowers them. Too often just saying the security says no will just lead to them going around you like the rock in the ocean. And so make sure you’ve got the pathway for them to get onto a solution that meets their needs, if that makes sense.
Cole Cornford
Yeah, I’ve got a simple rule at Galah, which is: if it involves people, be a person. It’s an extremely simple policy, and it’s very straightforward. If you’re sending an email to someone, it’s involving a person. A report’s being given to someone, it’s involving a person. You’re having a sales conversation, it’s involving a person. If you outsource that to a robot, then you’re immediately creating a horrific customer experience for that person. And I think that you should… I know people talk about human in the loop and I think that’s a bit overused as a term, but to me, it’s exactly the same. Yeah, we can be using artificial intelligence to help us process lots of information and try to understand things better and make some decisions, but ultimately the accountability can’t rest with the machine, it has to rest with an individual.
Toby Amodio
Yeah, no, I completely agree. And that focus on people is a perfect segue into the first of the PSPF directives for this financial year. And that’s the PSPF directive 3, which is requiring entities to manage the rising risk from personnel disclosing information that identifies their clearance. And so inevitably we have someone who put top secret in their LinkedIn profile and then got a date from a whole heap of interesting people from interesting countries to thank for this requirement. It’s one of those great pieces where they basically have a problem of people behavior and they’re trying to policy it out, which I always find is challenging.
And one of my challenges with this is, with a PSPF directive, as an agency you get asked how you’ve implemented it. And pieces like this, which are a mix of policy and enforcement, become really interesting to me. And we’ll talk about it more in the next directive as well, where enforcement becomes extremely hard. It can be very jarring for the people trying to implement that. What did you think of the concept?
Cole Cornford
I just thought it was really good for my mates, John, Rob, and Sam, who’ve just had nothing but amazing dates for the last six months. It’s just, to me, the world’s going so much better. It’s great for them. I’ve never seen them happier in their lives disclosing as much information as possible. But jokes aside, I think that there’s not really all that much you can do because to me, if you’re working in, say, defense industry or intelligence, there’s a very good chance that people can identify that you work in one of those sectors, either from the fact that you have to probably apply via a recruitment agency or some kind of way, like go to some kind of event. So if anyone’s doing any genuine [inaudible :12], they can just sit in a carpark outside of Belco and then probably have a good chance of working out that you work for Home Affairs.
I don’t think that… We definitely have people spying domestically, living in Canberra, and that’s okay. If you don’t think that’s the case, then I don’t know, you’ve got rocks in your head, mate. So I don’t see what the… And if you have to apply for positions, they have to advertise that the position requires a clearance, and you have to demonstrate that you’ve got the clearance as part of the application process. I don’t see how a recruitment agency is going to be able to hire people otherwise.
Toby Amodio
Yeah, I agree. This just feels like we had an incident, so we have to make sure we write something to remind people of their obligation and make sure every agency explicitly has the policy. And as you said, it should be common sense, but the problem with common sense is it’s not common and it doesn’t make sense to some people. And so if you have a high level security clearance, please don’t plaster it all over LinkedIn or any social media. That is, as you said, just a great way to attract attention from people we definitely don’t want you to have attention from.
But it’s a wicked problem in the sense that it’s impossible to not have some level of linkage, because roles require it, and then people apply for those roles and then say that they’ve won those roles, and you can do that correlation. And so it’s worth just recognizing: if you’re in a high-risk position within the government, or you’re in the private sector but still hold a high-level clearance, please make sure that you are aware of the people interacting with you and what their motives and intentions might be.
And if you’re concerned, you can always reach out to our colleagues at AGSVA or ASIO and seek advice, and they recommend you do that if you have anything unusual. They call it the SOUP where it’s like suspicious, ongoing, unusual, persistent. So if you have anyone interact with you in that way, it’s reportable, but also just use the baseline of don’t be dumb. But I always love to think about, hey, we have to tell everyone that they can’t put it on any online web services, but I find it impossible to understand how any agency is going to monitor all of that to ensure compliance.
And that’s where I land on that directive. No one’s going to monitor LinkedIn with keyword monitoring for all their staff, for all of their clearances and all the rest of it. It’ll be responsive when someone gets dobbed in. But yeah, this feels to me like it’ll be impossible to properly enforce apart from writing some milquetoast policies, but it’ll be the stick that you get hit with should you not comply. So at its core, make sure you’re aware of it if you’re in government and you’ve got a clearance.
Cole Cornford
Simplest thing you can do is probably just turn your LinkedIn profile off while you have that government engagement. There was a person who works at a telecommunications place now that previously worked in intelligence, and I think he’s got 30 connections on LinkedIn, but before that, he spent 16 years at one of our intelligence agencies, right? So guess what he wasn’t doing over 16 years? Connecting with random people from Ethiopia. If you’re doing that, you’re probably not doing the best thing by the Commonwealth, okay? So props to that bloke. I don’t want to name him.
Toby Amodio
Yeah.
Cole Cornford
Moving on to SSEs, mate, tell me why you like SSEs. I don’t actually really get it because I’m not a NetSec person, I’m an AppSec person.
Toby Amodio
I mentioned up top, but one of the changes they made to the core PSPF is adapting the definition of the word gateway to include Secure Service Edge or Secure Access Service Edge, which are basically virtual gateways. And the reason why I’m so interested in this is I helped implement one of the SASEs at one of my previous entities, and to me, aside from Windows Hello for Business, it was the best possible increase to security and productivity. Because it allowed split tunneling in a secure way from people’s edge devices: when they were working from home, they could access their Microsoft services directly in a secure way rather than having to trombone back through on-prem through a gateway. And it’s basically the gateway drug to zero trust for me.
And so whilst it’s not a sexy thing, and it’s definitely not for the gronks, the general gronks, it’s one of those pieces that I recommend if you’re in an entity, whether it’s private or public, and you haven’t gone down the pathway of a virtual gateway to manage your access to your cloud services or just the internet writ large, then you’re really missing a trick and it’s now finally properly endorsed by the government and so you can do it with their blessing and that’ll lead to good things in the long run.
So yeah, that’s where I sit on the SASE piece. It inevitably gives us that additional layer of security, where previously, if you had implemented SSE, it wasn’t in line with the government requirements because it wasn’t a traditional gateway. And so this shift is finally realizing where we should be and catching us up with the 2010s.
Cole Cornford
Yeah, it’s really challenging to get policy to reflect technology because technology changes so quickly. And ultimately the PSPF is not legislation, it’s government policy, but even policy changes take so long because it’s death by committee, because there’s a million people arguing over what needs to be there, what’s the language, what’s the terminology. Whereas tech changes really quickly.
I mean, as much as I dislike artificial intelligence, over the last two years it’s gone from suggesting people eat rocks to being able to do complete transcriptions of meetings, to summarizing random texts that are completely [inaudible :59], to having agents do things. I wouldn’t say they’re doing smart things. I had a sales agent call me up the other day. It was beautiful, mate. It told me I needed executive coaching and I said, “Oh, my stroke length is not quite long enough and I need to focus on my breathing technique, so I’m really interested in coaching. Can you tell me how to do it?” And they’re like, “Well, mindset coaching for executives is in fact the best way to get better at swimming. So how about I book a meeting with you and my boss?”
Toby Amodio
At least they focused on the-
Cole Cornford
Needless to say, the meeting never went ahead.
Toby Amodio
At least they focused on the core directive. I would’ve thought that they’d just full pivot and give you swimming coaching, so…
Cole Cornford
No, no, no. They just very much kept going back to it, because I kept asking, “Can I have more swimming coaching?” And they’re like, “No, do you want mindset coaching? Because mindsets make you better at swimming.” And I’m like, “I think just doing swimming makes you better at swimming, but that’s okay. Maybe it’s just my mindset that’s the problem.”
Toby Amodio
Actually, 100%. Yeah, no. So yeah, it’s one of those beasts. And it is hard, as you said, for the policy directions to keep up with technology. So I’m glad that they make those adaptations to keep pace. The last one I wanted to chat on is that technology change piece, and it’s the fourth directive for the updates in 2025 and the latest so far. It’s named Commonwealth Technology Management, which is appropriately obscure. And at its core, it requires all departments to remove products, applications, and web services that are on the deny list of the Commonwealth Technology Standard.
And I’ll talk about the other pieces in a second, but it goes on to say some other interesting things that are tangential. Two fun facts about that: the Commonwealth Technology Standard doesn’t have a deny list at the moment, and the document doesn’t specifically articulate when it’ll be provided or how it’s managed. And I believe the intention here is really that they got a lot of heat for their previous directions on TikTok and DeepSeek, and instead they just want a running list that they can update in the backend without having to release new directives, which makes sense from a Home Affairs perspective, but not necessarily from a transparency perspective.
Cole Cornford
Yeah, because it’s good to have in policy that you must follow the standard, and in the standard list all the things you need to do, and just update the standard regularly rather than having an extremely long explicit policy. Because the PSPF specifically had a section called TikTok, which I felt was a bit dumb (it was “TikTok application”, actually). But anyway, I don’t know what to think about having a private deny list. And shouldn’t it be quite trivial to identify applications that are on the deny list anyway? Because all you really need is for some yahoo who’s working at any of these places to just start willy-nilly trying to install or run or interact with these kinds of applications and see if they get blocked, and then any nation-state actor can ascertain what the deny list is. So I don’t really see how it’s… what’s that security term? Security through obscurity. Yeah. I don’t see it being particularly useful in that way. So to me, it’s not a wonk decision, it’s more of a gronk. But what do you think?
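[Editor’s note: a minimal sketch of the probing argument above. The gateway, deny-list contents, and candidate domains here are all hypothetical; the point is that block/allow behaviour itself leaks the “private” list to anyone who can probe it.]

```python
# Sketch: a "private" deny list is recoverable by simply probing the gateway,
# because whether each request is blocked is observable by construction.
# All names below are made up for illustration.

PRIVATE_DENY_LIST = {"blocked-app.example", "banned-ai.example"}  # kept secret

def gateway_allows(domain: str) -> bool:
    """Simulated gateway: blocks anything on the (secret) deny list."""
    return domain not in PRIVATE_DENY_LIST

def recover_deny_list(candidates: list[str]) -> set[str]:
    """An observer probes each candidate service and records what is blocked."""
    return {d for d in candidates if not gateway_allows(d)}

candidates = [
    "blocked-app.example",
    "banned-ai.example",
    "example.gov.au",
    "github.com",
]
assert recover_deny_list(candidates) == PRIVATE_DENY_LIST
```

The obscurity buys nothing against an observer inside the network; the enforcement is the control, and the enforcement is visible.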
Toby Amodio
I agree. And I get the intention, it’s just fraught with danger in implementation. And as you said, if that information then becomes non-releasable in some way, you get into these weird games of implementing a control without talking about the system that you’re blocking in the control, which becomes hard for enforcement. I’d also just say that whenever we have to block web services, for agencies that’s non-trivial. And I think it gets treated as trivial, if that makes sense, because a lot of agencies allow users to have BYOD devices and they don’t necessarily manage the internet on those BYOD devices.
So, shameless plug back to SSE, but it means that most agencies will now have to buy SSE licenses deployed to all their mobile devices to enforce mobile access to those web services, which will significantly increase licensing costs and control implementation complexity. And it’s one of those things that’s easy to say but hard to do on two fronts, because you don’t want the information to get out at large, and practically implementing controls like that is really hard. If it was easy, everyone would already be doing it.
Cole Cornford
So what you’re telling me is that having a completely homogenous IT asset estate that is on the perfect versions and just never changes and that every staff member is fully aware of their obligations under the PSPF is the way we get to security utopia?
Toby Amodio
Correct. Yeah, just do that. Just do that. Get rid of all of your legacy and just make sure it’s all up to date. The other hilarious-
Cole Cornford
I have a bridge. Do you want to buy this nice bridge I have over here? It’s very good.
Toby Amodio
The other thing I love about it is, of course it says, “Hey, you need to block everything on this list.” But if you’ve got a legitimate business reason, you can still use it, and then it lists what the legitimate business reasons are. And I just always feel like it’s, “Hey, you should do this, but if you don’t want to, that’s cool too, as long as you tell us why.” Which I find jarring as well. If it’s important enough to block holistically, then it’s important enough to block holistically. I just feel like it all comes down to risk assess and make your own choices, similar to, not the directive, the policy advisory around AI, where it’s like, use these two, but also make sure you do your own assessment. It’s like, “Hey, you should block these things, but make sure you do your own assessment.”
Cole Cornford
Could just give everybody a get out of jail free card and just tell them, “It’s okay. Just do whatever you want, mate. It’s all right. You only get one of these, though.” So I imagine that with AI, make your choice, pick your product, and it’s going to be fine. And if you choose Kaspersky or whatever, then that’s on you. You are the accountable authority.
Toby Amodio
And I will say, and I jest about this, but there is one absolute gem in this random directive, which is that all entities must implement a policy to consider sharing risk assessments through the Department of Home Affairs centralized risk sharing capability. And there’s a couple of pieces just in that sentence. I really like this idea. I think every agency should centralize their assessment of tools so that we don’t have 199 different assessments of tools. That’s silly the way that we currently do it. We need to have a common risk language, common sharing of that so that we assess it once, use many across the Commonwealth.
But the fact that, in a directive, they’ve made it optional by saying consider sharing the risk assessments is a little bit hilarious. I would have thought you would just mandate it and then deal with it. But also, I would hate to be the Home Affairs centralized risk sharing capability, because you’re about to get wrecked by a lot of risk assessments.
Cole Cornford
I was just going to say-
Toby Amodio
And then you’re going to be comparing apples with oranges with grapefruits. It’s going to be mental. I’m there for it, but better them than me.
Cole Cornford
I was going to say, don’t we already have this whole Digital Transformation Agency thing that should have centralized capabilities that we lease out to other agencies? Wouldn’t that be a good idea, for them to own technology? I don’t know. And I feel like there is some kind of government policy standard thing that tells people what to do. It’s the FPSSPM thing, IRAP, right? So if we just use that, then we’ll be okay. So I don’t know, man, I’m feeling that it’s a little bit gronky today. So I guess to sum up our episode, let’s start with the overall PSPF update. Are you thinking it’s policy wonk or gronk?
Toby Amodio
Look, I think it’s very much a policy wonk. It’s a very sensible update that has a few key inclusions, even if the AI inclusion is a bit gronky.
Cole Cornford
Yeah, I’ll say AI is gronky, and I think unless I see the deny list and see Galah Cyber on there, I’ll say that’s wonky until I get cut off. Okay?
Toby Amodio
Indeed. And we said it at the start, but there’s a lot of really good intention in here, and the direction is pretty clear that entities in Australia are at an increased threat posture, so we should be considering the tools that we use, both from a foreign interference risk perspective and from the nature of the tool risk perspective. The pieces I would tell people to take away are: we should share common risks between government agencies, so make sure you’ve got those community practices in place.
If your agency, or your private sector organisation, is deploying AI, really focus on how the business is using it, because thar be dragons. And lastly, if you’re going to move down the path of trying to enforce blocks of web services, you’re going to have to go down the journey of SSE and SASE and have management on your mobile devices, and that is a big task. So please make sure you don’t trip into that lightly, and reach out to people like Cole if you need assistance with deploying. Shameless tag. Or I’m more than happy to have a chat.
Cole Cornford
Or Toby.
Toby Amodio
Exactly. Exactly.
Cole Cornford
All right, Toby, absolute pleasure to have you on today and I’ll speak to you next time.
Toby Amodio
Indeed. Thanks, Cole. Be safe.
Cole Cornford
Thanks a lot for listening to this episode of Secured. If you’ve got any feedback at all, feel free to hit us up and let us know. If you’d like to learn more about how Galah Cyber can help keep your business secured, go to galahcyber.com.au.