Deepfakes featuring two of India’s top Bollywood actors have stirred controversy online, showing them criticizing Prime Minister Narendra Modi and endorsing the opposition Congress party in the nation’s ongoing general election.
In a 30-second clip allegedly featuring Aamir Khan and a 41-second clip purportedly showing Ranveer Singh, the actors appear to lambast Modi for failing to fulfil campaign pledges and address pressing economic concerns over his two terms as prime minister.
Both AI-generated videos close with the Congress party’s symbol and slogan, “Vote for Justice, Vote for Congress,” and have gained significant traction on social media. According to a Reuters analysis, the videos garnered over half a million views within a week. The episode highlights growing concern about the influence of AI-generated content on India’s expansive election process, which commenced in April and extends through June.
The proliferation of AI, and of deepfakes in particular, is not unique to India’s electoral landscape. Similar instances have been observed in other countries, including the United States, Pakistan, and Indonesia, where AI-powered manipulations have been employed to sway public opinion and shape electoral outcomes.
Over 30% of Indians believe deepfakes can influence elections
McAfee’s recent survey findings show a noticeable uptick in concern among Indians regarding deepfake content, especially during pivotal political and sporting occasions. The study reveals that a staggering 75% of Indians have encountered deepfake content, signalling widespread exposure to this AI-driven phenomenon. Particularly worrisome is its impact during elections, with 31% of respondents recognizing its potential to influence electoral outcomes.
“The ease with which AI can manipulate voices and visuals raises critical questions about the authenticity of content, particularly during a critical election year,” said Pratim Mukherjee, Senior Director of Engineering at McAfee. “We encourage consumers to maintain a healthy sense of skepticism. Seeing is no longer believing, and it is increasingly becoming important to take a step back and question the veracity of the content one is viewing.”
The advanced capabilities of this technology in convincingly replicating public figures and manipulating audio and video content raise profound concerns about misinformation and its impact on democratic processes. Beyond political implications, worries extend to cyberbullying, with 55% expressing concern over its potential to generate harmful or misleading content, while 52% are alarmed by the prospect of fake pornographic material.
As AI technologies grow more sophisticated, distinguishing between genuine and fabricated content becomes increasingly challenging. Alarmingly, about 22% of survey participants admitted to encountering a political deepfake that initially appeared authentic. This difficulty is exacerbated by the prevalent dissemination of such content on social media platforms like WhatsApp and Telegram without proper verification processes in place.
The survey emphasizes the pivotal role of misinformation and disinformation, exacerbated by the proliferation of deepfakes, in skewing public perception. Notably, high-profile instances involving celebrities like Sachin Tendulkar and Virat Kohli underscore the extensive reach and profound impact of deepfakes. These occurrences serve as stark reminders of the far-reaching implications of misinformation within society at large.
A global threat
The menace of political deepfakes extends well beyond India’s electoral landscape. With over half of the global population slated to vote in elections in 2024, AI-driven manipulation of audio, images, and videos casts a shadow of confusion over political discourse worldwide.
“I am seeing more [political deepfakes] this year than last year and the ones I am seeing are more sophisticated and compelling,” Hany Farid, a computer science professor at the University of California at Berkeley, told The Washington Post.
As policymakers and regulators worldwide scramble to draft legislation curbing the use of AI-generated audio, images, and video in political campaigns, a regulatory void looms. The groundbreaking AI Act won’t take effect in the European Union until after the June parliamentary elections. Similarly, bipartisan efforts in the U.S. Congress to outlaw the falsification of federal candidates using AI face an uncertain fate ahead of the November elections. While a handful of U.S. states have implemented laws penalizing the creation of deceptive videos about politicians, a disjointed policy landscape prevails across the nation.
Amid this regulatory vacuum, few deterrents prevent politicians and their allies from exploiting AI to deceive voters, and enforcers often struggle to keep pace with the rapid spread of deepfakes across social media and private group chats. With AI tools now widely accessible and regulators lagging behind, the responsibility falls on individuals like Jadoun to make ethical decisions that mitigate the potential chaos of AI-driven election manipulation.