
AI and cybercrime: how small businesses can stay ahead
The evolving risks of AI-powered crime and how SMEs can stay protected
Reading Time 5 minutes
‘It’s an arms race,’ warns Vasilij Nevlev, director of UK data consultancy Analytium. ‘AI is being used on both sides.’ As artificial intelligence reshapes the business landscape, it is also transforming the darker side of digital life.
For SMEs, AI offers sharper insights, faster automation, and smarter defences. At the same time, it is empowering a new generation of cybercriminals. The challenge, say Nevlev and other experts, is learning to use AI without becoming a victim of it. ‘Many small businesses still overlook basic protections such as email authentication,’ he says. ‘These are straightforward to set up and can now be implemented with guidance from tools like ChatGPT.’
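In practice, email authentication means publishing three DNS records for your domain: SPF, DKIM, and DMARC. A rough illustration is shown below; the domain and provider names are placeholders, and the exact values come from your own email provider:

    example.com.                       TXT  "v=spf1 include:_spf.mail-provider.example ~all"
    selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key supplied by your provider>"
    _dmarc.example.com.                TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

Together these tell receiving mail servers which senders may use your domain and what to do with messages that fail the check, making it harder for criminals to spoof your business’s email address.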
The AI-enabled threat landscape
AI has lowered the barrier to entry for both innovation and cybercrime, says Professor Vladlena Benson, director of the Cyber Security Innovation Centre at Aston Business School. While small businesses benefit from AI-driven security monitoring and authentication, she notes, cybercriminals are gaining just as much.
Attacks that once required specialist skills can now be launched by anyone with an AI toolkit. ‘AI allows them to scale attacks, tailor phishing messages to specific individuals, and probe systems for vulnerabilities far more quickly,’ she says. The result? ‘Attacks are becoming faster, more convincing, and more difficult to spot using traditional defences.’
Among the most urgent threats she identifies are highly personalised phishing campaigns, deepfake impersonation, and automated attacks that continuously scan thousands of small businesses for weaknesses. Generative AI, Benson says, can now ‘analyse publicly available data about a business, its suppliers, staff roles, and communication style to generate messages that appear completely authentic.’ This kind of sophistication means an employee may unknowingly approve payments or share credentials, making those traditional warnings about spotting typos in phishing emails obsolete.
She also points to rising risks from AI-driven attacks on internet-of-things (IoT) devices, such as the smart cameras, card readers, and factory sensors that many SMEs now rely on. ‘Each device becomes a potential entry point,’ she warns, ‘and automated tools can scan for weak or default passwords at scale.’ In the worst cases, attackers can manipulate operational devices or shut down physical processes, creating business disruption well beyond data loss.
Common mistakes
AI itself can become a liability when misused, Nevlev cautions. ‘A common mistake is assuming that data shared with free AI tools such as ChatGPT is private,’ he says. ‘If a tool is free, user data is often used for model training or analytics.’ Even with paid platforms, businesses need to ensure data processing and confidentiality agreements are in place and clearly understood. His rule of thumb: ‘Avoid including sensitive data in prompts.’
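As a simple illustration of that rule of thumb, a few lines of code can strip obvious identifiers before a prompt ever leaves the business. The patterns below are illustrative examples rather than a complete list:

    import re

    # Illustrative patterns for data that should never leave the business:
    # email addresses, UK National Insurance numbers and payment card numbers.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text: str) -> str:
        """Replace anything matching a sensitive pattern with a placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} removed]", text)
        return text

    prompt = "Summarise this complaint from jane@example.com about card 4111 1111 1111 1111."
    print(redact(prompt))  # the redacted version is what goes to the external AI tool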
He advises SMEs to ask tough questions of any third-party AI provider. Do they hold certifications such as ISO 27001 or SOC 2 Type II? Do they encrypt data both at rest and in transit, and manage encryption keys through a key management service? SMEs should also check whether their data or prompts are used for model training, and insist on restricted network access so that leaked credentials alone cannot be exploited.
Building security into AI projects
Nevlev recommends starting with a quick threat model and data-protection impact assessment (DPIA), then writing a one-page AI risk policy defining what data can be used, which tools are permitted, and who approves exceptions.
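What goes into that one-pager will vary by business, but a sketch of the kind of headings it might contain looks something like this (the entries are illustrative, not a template to copy verbatim):

    AI use policy (v0.1)
      Permitted tools:   the paid AI services the business has contracts with
      Permitted data:    published marketing copy, anonymised sales figures
      Prohibited data:   customer personal data, payroll, credentials, unreleased financials
      Exceptions:        approved in writing by a named owner (for example, the operations director)
      Review:            quarterly, or after any incident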
For practical frameworks and worked examples of AI threat modelling and DPIAs, see the UK Information Commissioner’s Office guidance on AI and data protection and its AI risk toolkit, which include example DPIA triggers and templates. The UK’s National Cyber Security Centre also publishes threat-modelling guidance for AI.
Segmentation is vital. ‘Having separate development, staging and production environments, keys, and data stores will limit the impact if something is compromised,’ says Nevlev, who also advocates placing a gateway in front of large language models to enforce automated redaction, rate limits, and logging. If this all sounds a bit intense for your small business, Nevlev suggests working towards the government-backed Cyber Essentials Plus certification first, then progressing to ISO 27001 as maturity grows.
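To make the gateway idea concrete, here is a stripped-down sketch in Python. The redact helper is the kind of pattern-based filter shown earlier, and call_model stands in for whichever large language model API the business actually uses; both are assumptions rather than references to a particular product:

    import logging
    import time
    from collections import deque

    logging.basicConfig(filename="llm_gateway.log", level=logging.INFO)

    class LLMGateway:
        """Sits between staff and an external model: redacts, rate-limits and logs."""

        def __init__(self, call_model, redact, max_requests=30, window_seconds=60):
            self.call_model = call_model      # function that sends a prompt to the model
            self.redact = redact              # function that strips sensitive data
            self.max_requests = max_requests  # allowed requests per window
            self.window = window_seconds
            self.recent = deque()             # timestamps of recent requests

        def ask(self, user: str, prompt: str) -> str:
            now = time.time()
            # Forget requests older than the window, then enforce the rate limit.
            while self.recent and now - self.recent[0] > self.window:
                self.recent.popleft()
            if len(self.recent) >= self.max_requests:
                raise RuntimeError("Rate limit reached - please try again shortly")
            self.recent.append(now)

            clean_prompt = self.redact(prompt)  # automated redaction before anything leaves
            logging.info("user=%s prompt=%s", user, clean_prompt)  # audit trail
            return self.call_model(clean_prompt)

Every prompt passes through the same choke point, so sensitive data is stripped, usage is capped, and there is an audit trail to review if something goes wrong.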
Practical steps towards cyber-resilience
SMEs need not aim for ‘enterprise-level defences out of the box’, agrees Benson, but should focus on smart prioritisation and continuous improvement. First, take leadership seriously: ‘Senior leaders must recognise cyber risk as a business risk, not just an IT issue,’ she says. Regular staff training on AI-enabled scams, such as deepfake calls or realistic invoice fraud, can be one of the most cost-effective defences.
Secondly, get the fundamentals right. ‘Even in the age of AI, many successful attacks exploit basic failings such as weak passwords, unpatched systems and lack of segmentation,’ says Benson. Her advice: enable multi-factor authentication, patch software promptly, enforce least-privilege access and back up key data offline.
And finally, adopt simplified frameworks such as the Small Business Guide, a free, plain-language set of recommendations published by the UK’s National Cyber Security Centre. If your company does not have in-house expertise, Benson suggests using managed security service providers or cyber-insurance-linked offerings that include 24/7 monitoring.