My question is about AI and is in two parts: how and where do you see AI being an integral part of a cyber defence strategy, and how and where do you see AI being a security risk, and how would that be mitigated?

I’m really looking for some good examples.

I think there are a couple of places AI could usefully contribute to the defensive side of cybersecurity. The first is relatively simple: you feed an NLM (niche rather than large language model) a bunch of information about the organisation and how it works now, e.g. remote/office/hybrid working, multi/single site, national/international, any planned changes to that, whether it's B2B or B2C, product or service orientated, what sorts of data are being held, and so on. There's another batch of information it can probably extract from being fed the company policies.
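As a rough sketch of what "feeding it information about the organisation" might look like in practice, the profile could be captured as structured data and combined with the policy text into a single prompt. The field names and prompt wording here are purely illustrative assumptions, not any standard schema:

```python
import json

# Hypothetical organisational profile of the kind described above.
# Field names are illustrative only, not an established format.
org_profile = {
    "working_model": "hybrid",                  # remote/office/hybrid
    "sites": {"count": 3, "scope": "national"}, # multi/single site, national/international
    "market": ["B2B"],                          # B2B and/or B2C
    "orientation": "service",                   # product or service orientated
    "data_held": ["customer PII", "payment records", "internal HR data"],
    "planned_changes": ["consolidate to two sites next year"],
}

def build_prompt(profile: dict, policies: list[str]) -> str:
    """Combine the structured profile with company policy text into one prompt."""
    return (
        "Organisation profile:\n"
        + json.dumps(profile, indent=2)
        + "\n\nCompany policies:\n"
        + "\n---\n".join(policies)
        + "\n\nReturn clarifying questions and suggested security improvements."
    )

prompt = build_prompt(
    org_profile,
    ["Password policy: minimum 12 characters, rotated annually."],
)
print(prompt.splitlines()[0])  # → Organisation profile:
```

The point of the structured half is that answers like "hybrid working, three sites" constrain the model's suggestions far more reliably than free text alone would.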

The AI then comes back with a series of questions and suggestions for improvements you might consider, whether that's a change in password policy or an additional authentication layer for some staff, and might include technology recommendations. The latter could be of particular value, as the AI will have learnt actual capabilities rather than being hype/marketing driven in its recommendations. Tying in a generative LLM would also mean considerable help making the business case for any changes recommended.

On the flip side (although versions of these defences would likely be among the things the system above recommends you deploy as part of your cybersecurity suite), there's a multitude of ways an AI could be deployed to seek out security holes: automated probing of your security or of commonly deployed software, generating less obvious phishing campaigns, or supporting/inventing elaborate frauds. Some of these things are already going on and are getting better; the phishing one could get very clever very fast.