Manipulation of voice commands, identity theft and data privacy violations: these are the three main threats identified by Trend Micro among the risks associated with the use of AI digital assistants. The American-Japanese cybersecurity company's analysis comes at a critical time because, with the surge in generative artificial intelligence, more and more people are adopting AI-powered devices.
After all, in recent years intelligent digital assistants have become a widely used everyday tool, not only at home and in the office but also in more personal areas such as wellness and travel. And as the recent history of computer security teaches us, the widespread adoption of a device brings with it an increase in cyber threats: AI digital assistants have now become a target for cybercriminals, as they are a potential access point to users' sensitive data.
Inside AI wellness assistants
One of the most affected categories is wellness, because AI coaches are valuable sources of biometric data: from heart rate to sleep habits, from personal physiological parameters to stress levels, this is sensitive information whose value is too often underestimated. Access to this data allows attackers to falsify medical records, generate incorrect diagnoses or sell personal information on the dark web for fraudulent purposes.
To complicate matters, wellness devices can also be integrated with third-party health applications and connected IoT devices. This is convenient for users, but dangerous when the data ends up in the wrong hands: a flaw or human error can allow attackers to infiltrate healthcare networks, compromise medical data and take control of personal devices.
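One way to reduce the blast radius of such integrations is the principle of least privilege: a third-party app should be granted only the narrow data scope it actually needs. The sketch below illustrates the idea with a hypothetical OAuth 2.0 token request in Python; the endpoint, client identifiers and scope names are invented for the example and do not belong to any real assistant's API.

```python
import requests

# Hypothetical OAuth 2.0 client-credentials request for linking a
# wellness device to a third-party app. Asking only for the narrow
# "heart_rate:read" scope, rather than a blanket "health:full" grant,
# limits what an attacker can reach if the third party is compromised.
TOKEN_URL = "https://auth.wellness.example/oauth/token"  # hypothetical endpoint

response = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "third-party-coach-app",  # hypothetical client
        "client_secret": "<loaded-from-a-secrets-manager>",
        "scope": "heart_rate:read",  # least privilege: no sleep, stress or location data
    },
    timeout=10,
)
response.raise_for_status()
access_token = response.json()["access_token"]
```

A compromise of the third-party app then exposes only heart-rate readings, not the full biometric profile the device holds.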
The limits of AI travel assistants
Similarly, although they handle different data, digital travel assistants can backfire in the event of errors or lapses in attention. Sensitive information, such as hotel payment details and records of activities that take place while you are away from home, is transmitted through these devices. Reservations, travel preferences and itineraries, together with data on digital transactions, can be exploited for financial fraud, identity theft and unauthorised access to bank accounts.
In addition, given its high success rate, the danger of phishing should not be underestimated. Phishing can use advanced techniques, such as exploiting stolen information to send fake emails and messages to friends and colleagues in the user's name, tricking the recipients into handing over sensitive data or money. A particularly malicious attacker could also exploit vulnerabilities in the devices to modify or cancel hotel and restaurant reservations, or to change flights and itineraries; it is all too easy to imagine the inconvenience this would cause the unwitting victim.
The deepfake nightmare
Among the most insidious risks associated with AI assistants is the danger of deepfakes created by manipulating voice commands. If encryption standards are insufficient or file storage is not adequately protected, it takes only a few steps today to turn a stolen recording into a convincing fake video.
In this case, the only limit is the attacker's imagination: a clip could be staged with synthetic or pre-recorded voices, or could show compromising behaviour (which never happened) by the person targeted. From identity theft to scams against relatives, friends or companies (one example is the use of voice recordings to defeat the voice-authentication systems of banks and businesses), cybercriminals have plenty of viable options.
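The unprotected file storage mentioned above is one of the easier gaps to close. As a minimal sketch, assuming Python and the widely used cryptography library, this is what encrypting a voice recording at rest could look like; the file name and key handling are illustrative only, and a real deployment would keep the key in a key-management service.

```python
from cryptography.fernet import Fernet

# Illustrative only: encrypting a stored voice recording at rest.
# In production the key would live in a key-management service or a
# hardware-backed keystore, never next to the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("voice_sample.wav", "rb") as f:  # hypothetical recording
    plaintext = f.read()

with open("voice_sample.wav.enc", "wb") as f:
    f.write(cipher.encrypt(plaintext))

# Without the key, a stolen .enc file is just opaque bytes; with it,
# the recording round-trips intact:
with open("voice_sample.wav.enc", "rb") as f:
    assert cipher.decrypt(f.read()) == plaintext
```

An attacker who exfiltrates the encrypted file alone has no raw voice sample to feed into a deepfake pipeline.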
How to remedy the threats
Despite the evolution of AI digital assistants, the security measures adopted so far have not kept pace with the growing complexity of the threats. Among the most common weaknesses Trend Micro identified across implementations are fragmented security, with isolated rather than integrated protections; the lack of strong encryption for the sensitive data collected; and the absence of anomaly-detection mechanisms that could identify attacks before they cause damage.
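To illustrate the third gap, here is a minimal anomaly-detection sketch in Python: a simple z-score test that flags voice-command activity deviating sharply from a user's baseline. The feature, sample data and threshold are invented for the example; production systems would draw on far richer signals.

```python
from statistics import mean, stdev

def is_anomalous(hourly_counts: list[int], latest: int,
                 z_threshold: float = 3.0) -> bool:
    """Flag a burst of voice commands that deviates sharply from the
    user's historical baseline, using a simple z-score test."""
    mu = mean(hourly_counts)
    sigma = stdev(hourly_counts)
    if sigma == 0:
        return latest != mu  # any deviation from a perfectly flat baseline
    return abs(latest - mu) / sigma > z_threshold

# Invented sample data: a quiet household suddenly issuing 40 commands
# in a single hour is well outside its normal range.
baseline = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
print(is_anomalous(baseline, latest=40))  # True: worth investigating
```

Even a crude baseline like this would surface the kind of sudden, automated command bursts that precede many device takeovers.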
One potential remedy to mitigate the risks is for developers to adopt proactive security approaches, relying on adaptive and resilient protection systems. Much also depends on users acquiring greater awareness of the threats associated with AI digital assistants; many do not realise that these devices can be a gateway for cybercriminals. Personalised guidance, such as a training programme, would be an ideal way to raise the overall bar and make it harder for cybercriminals to sneak into people's homes.