Ferrari deepfake: A Ferrari executive foiled a hacker attack thanks to his quick thinking. The story offers food for thought because it puts deepfakes, one of the most sophisticated weapons in the hands of cybercriminals, at the centre of the debate. Here is what happened at the Maranello-based brand, near Modena. An executive received several WhatsApp messages signed by Ferrari CEO Benedetto Vigna but sent from an unknown number, different from Vigna's personal one. The messages referred to an important acquisition that required the executive's signature.
In another message, the self-styled CEO explained that the executive would soon receive a non-disclosure agreement to sign, adding that the market regulator and the Milan Stock Exchange had already been informed of the pending acquisition. Notably, the profile photo differed from Vigna's usual one, although it showed him in a suit and tie in front of the Ferrari logo.
Scepticism saved Ferrari
Later, the Ferrari executive received a call from the apparent CEO, again from an anonymous number, which the caller justified by the sensitivity of the matter. The voice sounded like Vigna's, but the slightly metallic artefacts between words made the executive suspicious. To dispel his doubts, he said he needed to verify the caller's identity in compliance with internal regulations.
The fateful question concerned the book Vigna himself had recommended to him days earlier: the 'Decalogue of Complexity. Act, learn and adapt in the relentless becoming of the world', written by Felice de Toni. Unable to answer, the fraudster ended the call, and the audio-deepfake attack failed. The imitation had been made possible by AI models trained on recordings of the Ferrari CEO's voice taken from interviews, conferences and public speeches.
Ferrari deepfake
After Bloomberg revealed the story, Ferrari announced that it would open an internal investigation to understand what had happened and identify any weaknesses that eased the criminals' path, which was blocked only by one executive's intuition. Beyond the Ferrari case and the financial and reputational damage it averted, deepfakes raise a pressing question: it will become increasingly difficult for companies, and even more so for individuals, to defend themselves against this type of scam.
Although still imperfect, audio and video deepfakes are already easy for cybercriminals to fabricate, helped by the low cost of the tools needed to create fake voices and videos that replicate the tone, timbre and physical characteristics of specific people.
A future full of traps
Of great concern is the expectation that deepfakes will become a weapon as widespread as ransomware, or even more so, because the ratio of cost to potential gain is highly lucrative for attackers. Fraudsters are further helped by people's low level of preparedness and weak defensive habits. In companies there is some awareness of the danger: training courses are being set up, along with identity-verification methods like the one Ferrari used. Companies must nevertheless stay vigilant because, as AI improves, the voices and images crafted to persuade executives will become far more convincing, with audio, photos and video leaving listeners and viewers ever less room for doubt.
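Ferrari's check relied on a shared personal fact. A company could formalise the same idea as a challenge-response check over a pre-shared secret. The sketch below is a minimal illustration of that principle, not Ferrari's actual procedure: the function names and the secret-distribution step are assumptions made for the example.

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    """Random nonce the callee reads aloud to the caller."""
    return secrets.token_hex(4)

def response_code(shared_secret: bytes, challenge: str) -> str:
    """Short code both parties can derive from a pre-shared secret."""
    digest = hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()
    return digest[:6]

def verify(shared_secret: bytes, challenge: str, claimed_code: str) -> bool:
    """Constant-time comparison of the claimed code against the expected one."""
    expected = response_code(shared_secret, challenge)
    return hmac.compare_digest(expected, claimed_code)

# Hypothetical usage: the secret is distributed out of band, never over the call itself.
secret = b"distributed-out-of-band"
challenge = make_challenge()
print(verify(secret, challenge, response_code(secret, challenge)))  # genuine caller passes
print(verify(secret, challenge, "000000"))  # impostor guessing fails
```

A voice clone can imitate timbre, but it cannot produce the correct code without the secret; this is the same property Ferrari's book question exploited.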
Things will be much worse for ordinary people, especially the elderly and those less familiar with technology and its evolution. Lacking knowledge and defences, they may be all too easy for fraudsters to mislead, particularly those who already take blatantly AI-edited photos and videos at face value. Spreading awareness of these dangers is incumbent on us all, yet it may prove useless if less knowledgeable people do not begin to familiarise themselves with the techniques used to fabricate false rumours, photographs and videos.