I’m really looking for some good examples.
I think there are a couple of places AI could usefully contribute to the defensive aspects of cybersecurity. The first would be relatively simple: you feed an NLM (niche rather than large language model) a bunch of information about the organisation: how it works now (remote/office/hybrid working, multi/single site, national/international, etc.), any planned changes to that, whether it's B2B or B2C, product or service orientated, what sorts of data are being held, and so on. There's another batch of information it can probably glean from being fed the company policies.
The AI then comes back with a series of questions and suggestions for improvements you might consider, whether that's a change in password policies or an additional security layer for some staff, and might include technology recommendations. The latter could be particularly valuable, as the AI will have learnt actual capabilities rather than being hype- or marketing-driven in its recommendations. Tying in a generative LLM would also provide considerable help in making the business case for any recommended changes.
On the flip side (although a version of these tools would likely be something the above would recommend you deploy as part of your cybersecurity suite), there's a multitude of ways an AI could be deployed to seek out security holes, whether that's an automated probe of your security or of commonly deployed software, generating less obvious phishing campaigns, or supporting/inventing elaborate frauds. Some of those things are already going on and are getting better; the phishing one could get very clever very fast.
I’m currently advising on LLM security and working on an LLM course outline, in addition to the article series (see my recent post).
On defense:
AI/LLM capabilities are currently being built into enterprise cyber tooling in much the same way that GitHub Copilot is built into VSCode. Beyond this, lots of new startups are building AI into fresh tooling.
However, you don’t need to wait to buy new tooling or for your current tools to implement it. In many cases your team, with some good prompt engineering, can copy/paste data directly into an LLM for analysis.
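As a minimal sketch of what "good prompt engineering" can look like here (the helper name and prompt wording below are my own illustration, not from any particular tool), your team might wrap pasted data in a structured template before handing it to whatever LLM they use:

```python
def build_analysis_prompt(data: str, context: str) -> str:
    """Wrap pasted security data in a structured prompt for an LLM.

    The wording is illustrative; tune it for your own model and data.
    """
    return (
        "You are assisting a security analyst.\n"
        f"Context: {context}\n"
        "Analyse the data below. List anything suspicious, "
        "explain why, and rate your confidence.\n"
        "--- DATA START ---\n"
        f"{data}\n"
        "--- DATA END ---"
    )

# Example: paste a web server log line into the template.
prompt = build_analysis_prompt(
    '203.0.113.7 - - [10/Oct/2023:13:55:36] "GET /admin.php HTTP/1.1" 404',
    "Apache access log from a public web server",
)
print(prompt)
```

The delimiters around the pasted data also help keep untrusted log content from being interpreted as instructions, though that's a mitigation, not a guarantee.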
This is going to affect every job in the cyber department.
Use cases are:
- For policy document owners, it can assess existing documents and help write or edit new ones. Assess for compliance with your favourite frameworks, etc.
- For threat hunting, it can assess logs from SIEMs, web server logs, etc to identify attacks or false positives.
- For AppSec, it can assess insecure source-code, e.g. triaging SAST findings from legacy non-AI tools. This one will scale in difficulty depending on source complexity and the sinks involved.
- For malware analysis, it can do analysis of disassembly and decompiled code.
- For network traffic analysis, it can analyse Wireshark captures for attack patterns, as well as explain output for network troubleshooting.
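For the log-analysis use cases above, a cheap pre-filter can cut down what you paste into the model. A minimal sketch, with patterns that are illustrative rather than a complete ruleset:

```python
import re

# A few well-known attack signatures; a real ruleset would be far larger.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)union[+\s]+select"),  # SQL injection
    re.compile(r"\.\./\.\./"),             # path traversal
    re.compile(r"(?i)<script"),            # reflected XSS attempt
]

def flag_suspicious(log_lines):
    """Return only the lines matching a known-bad pattern,
    ready to be pasted into an LLM for deeper triage."""
    return [line for line in log_lines
            if any(p.search(line) for p in SUSPICIOUS_PATTERNS)]

logs = [
    '198.51.100.4 "GET /index.html HTTP/1.1" 200',
    '198.51.100.9 "GET /search?q=1+UNION+SELECT+password HTTP/1.1" 500',
    '203.0.113.2 "GET /../../etc/passwd HTTP/1.1" 403',
]
print(flag_suspicious(logs))  # the second and third lines are flagged
```

This keeps token costs (and data exposure) down by only sending the model lines that already look interesting.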
On risk:
The challenges for a CISO are:
- Recognizing the enterprise solutions that are just basic LLM capability with a nice UI/UX, sold for $$$$$ when you could have it for $. There's a bit of snake oil in this space right now too.
- Working with Data Sovereignty - keeping your data on-shore.
- Dealing with Shadow IT. You might try to block AI use at work but people will use it anyway. It's going to be like Dropbox all over again, because people want/need this capability, but it will be worse: existing apps, browser plugins, etc. will start using LLMs through APIs and users won't even know.
- You need to choose which LLM partners to work with. Are you going to give your data to Anthropic, OpenAI, Cohere, Meta, Google, Grok or Amazon? Even the big names like OpenAI are having data leaks.
- Maybe you should go self-hosted.
- Deciding what data you’ll let the LLMs see knowing that it might leak. Are you going to give an LLM access to PII?
- Selecting the right consultants to help. At ThreatCanary I’m already an expert in penetration testing (among other things), and I know how to find the LLM vulnerabilities too.
This is just off the top of my head. Matt Flannery and I will have more to share in the StrategyMix briefing session on Friday, March 8th.