Honor deepfake killer will reach markets in April

Phonemaker Honor has announced that its Artificial Intelligence (AI)-powered deepfake detection tool, which performs real-time analysis of video and image content, will be available worldwide starting next April. The technology company first introduced the feature in June at the Mobile World Congress (MWC) in Shanghai and later showcased it at the IFA 2024 consumer electronics fair in Berlin, Germany, in September.

The Shenzhen-based company says that the deepfake detector analyzes subtle inconsistencies that are often invisible to the human eye. The business has also revealed that deepfake detection is part of its new AI strategy, the Honor Alpha Plan, under which the company is prepared to invest US$10 billion in AI over the next five years. Additionally, the brand announced that AI Deepfake Detection will “soon arrive” in its latest flagship bar phones and foldable phones in international markets. One of the phones that will feature the technology is the recently released Honor Magic7 Pro, whose European sales in the 48 hours after launch were 76% higher than those of last year’s Magic6 Pro.

Pixel-level deepfake detection

Specifically, AI Deepfake Detection, which will arrive next month, examines synthetic imperfections at the pixel level, edge-composition artefacts, frame-to-frame continuity, and the consistency of hairstyles and facial features. The detection system has been trained on a large dataset of videos and images related to online scams, enabling the AI to perform identification, screening, and comparison within three seconds, according to Honor. If the feature detects synthetic or altered content, it immediately issues a risk warning to the user, discouraging further engagement with potential scammers.

When manipulated content is identified, an immediate warning is issued to protect users from the potential risks of deepfakes, as the Chinese technology company explained in a press release. The company also recalled that, according to the cybersecurity institute Entrust, a deepfake attack occurs every five minutes. Honor added that the feature responds to the need to address the “hidden challenges” brought by the rise of AI, including manipulated videos that deceive users into believing they are watching a specific person making false statements or performing actions that never happened.

Detecting deepfakes, a real challenge

The issue of deepfakes has become a global problem, as only 0.1% of users can accurately distinguish a deepfake from real content, according to a recent study by biometric solutions provider iProov. Although nearly half of companies (49%) experienced voice and audio deepfakes between November 2023 and November 2024, 61% of executives admit that they have not established protocols to minimize the risks their companies face, according to data provided by Honor. Moreover, deepfakes could also put global elections at risk, with AI-powered manipulations being employed to sway public opinion and shape electoral outcomes in countries such as the United States, Pakistan, and Indonesia.

Marc Cervera is a freelance journalist based in Barcelona, Spain, with over four years of experience contributing to leading Spanish and international media outlets. He holds a double degree in Journalism and Political Science from Universitat Abat Oliba and an MA in Political Science from the University of Essex. Marc has lived in the US, UK, Spain, and the Netherlands, and his work primarily explores economics, innovation, and politics.