ChatGPT, deepfakes, and increasingly sophisticated phishing scams are the biggest dangers for companies, which must learn to master AI in order to defeat AI-powered attacks.
With GPT-4, the world has discovered the potential of generative artificial intelligence, a technology destined to change the tech sector and many other industries profoundly. Like any new tool, GenAI offers advantages both to companies and to hackers, who are always searching for new ways to hone attacks on corporate cybersecurity systems. Hacking, after all, is an activity that knows no pauses, obstacles or limits: according to the ‘Top 10 Cybersecurity Predictions And Statistics For 2023’ compiled by the research firm Cybersecurity Ventures, by the end of the year the damage generated by cyber-attacks globally will reach $8 trillion, rising to $10.5 trillion within the following two years. Even if the forecasts get the exact figures wrong, the trend is clear. It outlines a battleground that will force companies to invest more in the countermeasures needed to ward off potential data breaches.
GenAI is perceived as a cybersecurity threat
GenAI further complicates a landscape that was already difficult to navigate. Companies are well aware of this: 60% of the 650 board members of organisations with more than 5,000 employees in Europe, the United States and other world regions surveyed by security firm Proofpoint for its report ‘Cybersecurity: The 2023 Board Perspective’ are convinced that ChatGPT and similar software should be considered a corporate security risk.
Among other possibilities, GenAI allows cybercriminals to create new malware, automate phishing, search for holes in security systems, and attempt breaches via deepfakes and malicious bots that impersonate those at the top of the corporate hierarchy. The two frontiers that break with the past are the ability to replicate someone’s voice and image through purpose-built audio and video generation. Artificial intelligence can duplicate a voice well enough to fool an employee who believes they are speaking with their boss: it has already happened that an imitated voice of a CEO was exploited to request the transfer of funds to an offshore bank account. At the same time, AI is well suited to producing convincing clips in which a figure resembling the CEO requests certain actions, such as sending money or sharing confidential information.
These are the two new options, but in terms of sheer numbers, the main threats to cybersecurity remain ransomware and phishing. According to one of Darktrace’s latest reports, there was a 52% increase in account takeover attempts in the May-July quarter of 2023, while attacks impersonating a member of the corporate IT team rose by 19%. “Although it is common for attackers to change and adapt their techniques when effectiveness declines, GenAI, particularly deepfakes, has the potential to disrupt this pattern in favour of attackers,” reads the cybersecurity firm’s report.
Generative AI has therefore raised the quality and sophistication of phishing scams, enabling hackers to multiply their attack schemes. “In the past, consumers were attacked rather abruptly, whereas now, thanks to GenAI, it is possible to be very targeted and reach significant numbers of victims,” explains Greg Johnson, CEO of McAfee. This is why generative AI turns the tables. “It brings so many benefits to consumers in their lives but, paradoxically, the biggest benefits to consumers also benefit fraudsters,” Johnson adds.
How to counter the new pitfalls
We therefore have to be even more careful than before, but the scenario is not entirely negative. AI also helps companies detect threats by automating processes, which is useful in part to remedy the shortage of cybersecurity professionals, a gap that, according to some surveys, left some 3.5 million positions unfilled in 2023. Besides detecting attacks and better assessing their possible impact, one example of how GenAI can be useful is helping analysts filter breach reports more efficiently, discarding false positives.
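The idea of filtering breach reports can be sketched with a minimal triage pass over incoming alerts. This is a hypothetical illustration only: the field names, sources and confidence threshold are invented for the example and do not come from any specific product.

```python
# Hypothetical sketch: triaging security alerts so analysts see likely real
# incidents first, while low-confidence ones are set aside as probable false
# positives. Field names and thresholds here are illustrative assumptions.

ALERTS = [
    {"id": 1, "severity": "high", "source": "edr", "confidence": 0.92},
    {"id": 2, "severity": "low", "source": "ids", "confidence": 0.20},
    {"id": 3, "severity": "medium", "source": "ids", "confidence": 0.75},
]

def triage(alerts, min_confidence=0.5):
    """Drop alerts below the confidence threshold, then sort by severity."""
    rank = {"high": 0, "medium": 1, "low": 2}
    kept = [a for a in alerts if a["confidence"] >= min_confidence]
    return sorted(kept, key=lambda a: rank[a["severity"]])

for alert in triage(ALERTS):
    print(alert["id"], alert["severity"])  # alert 2 is filtered out
```

In a real deployment the confidence score might come from a model rather than a fixed field, but the principle is the same: automate the first pass so human analysts spend their time on the alerts that matter.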
As for how to counter criminals, a simple solution that harks back to the past but remains effective is to rely on a keyword or code phrase as a recognition element before executing commands involving sensitive or financial data. In this way, even the most convincing video or audio that perfectly replicates the CEO’s voice proves harmless if the attackers do not know the magic word. Another essential measure is training employees (everyone, from the boss to the intern) so that they know how to use GenAI and recognise the dangers built on it. The easiest way to get there is to have everyone try ChatGPT, Bing AI and similar software so that they understand how these tools work, what they suggest and how they behave in the face of certain requests.
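The code-phrase check described above can be implemented with nothing more than the Python standard library. This is a minimal sketch under stated assumptions: the phrase, salt and function names are invented for illustration. The key points are that only a derived hash of the phrase is stored, and that the comparison is done in constant time.

```python
import hashlib
import hmac

# Hypothetical sketch of a code-phrase check before acting on a sensitive
# request (e.g. a funds transfer asked for over a voice or video call).
# The phrase itself is never stored; only a salted PBKDF2 hash is kept.

def hash_phrase(phrase: str, salt: bytes) -> bytes:
    """Derive a salted hash of the code phrase with PBKDF2 (stdlib)."""
    return hashlib.pbkdf2_hmac("sha256", phrase.encode("utf-8"), salt, 100_000)

def verify_phrase(candidate: str, salt: bytes, stored_hash: bytes) -> bool:
    """Constant-time comparison, so timing reveals nothing about the phrase."""
    return hmac.compare_digest(hash_phrase(candidate, salt), stored_hash)

# Setup: the CEO and finance team agree on a phrase out of band.
salt = b"example-static-salt"  # in practice, generate with os.urandom(16)
stored = hash_phrase("correct horse battery staple", salt)

# At request time: even a perfect voice deepfake fails without the phrase.
print(verify_phrase("correct horse battery staple", salt, stored))  # True
print(verify_phrase("please wire the funds now", salt, stored))     # False
```

The design choice worth noting is `hmac.compare_digest`, which avoids the subtle timing leak of a plain `==` comparison; the rest is just ordinary password-style credential handling applied to a spoken phrase.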
Overall, generative AI can be an ally or the most insidious of adversaries. Much depends on how one prepares to deal with it, knowing that the best choice is to learn how it works in order to recognise it and, thus, stop it.