
The risks of bringing ChatGPT to every iPhone: too much, too soon?

Apple’s recent announcement that it will integrate OpenAI’s ChatGPT into its iPhones marks a significant moment in the ongoing evolution of artificial intelligence (AI) technology. For tech enthusiasts, this development is exciting, promising new levels of convenience and innovation. However, the broader public remains cautious, with many questioning whether the time is right for such a pervasive deployment of AI. As Apple steps into this new frontier, there are several reasons to believe that the integration of ChatGPT into iPhones might be happening prematurely.

ChatGPT is still maturing

While AI has made remarkable strides, it is important to recognize that the technology is far from perfect. OpenAI’s GPT-4o model, which underpins ChatGPT, represents a significant leap forward, particularly with its multimodal capabilities, allowing it to process voice, video, and text inputs. However, as impressive as these demonstrations are, they often highlight the technology’s limitations as much as its potential. During OpenAI’s demonstration of GPT-4o, for instance, the model gave verbose responses that occasionally required the human presenter to intervene and steer the conversation. This raises concerns about how seamlessly the AI can function in real-world applications, where brevity and accuracy often matter more than the capacity to generate elaborate responses.

Moreover, if AI is expected to replace or augment human interaction in various scenarios, it must be ready for real-time use, and it is not there yet. Incorporating such technology into devices as ubiquitous as iPhones could lead to widespread frustration if users find the AI’s responses unhelpful or, worse, disruptive. The potential for the AI to misunderstand or misinterpret user commands could diminish its perceived value, especially in scenarios where quick and accurate responses are essential.

AI is not true intelligence

Another significant issue with deploying ChatGPT on iPhones at scale is the misconception that these AI systems possess genuine intelligence. AI models like ChatGPT are, at their core, sophisticated pattern-matching machines. They do not understand context or meaning like humans do; they simply generate responses based on patterns in the data they were trained on. This fundamental difference between human cognition and AI can lead to problematic outcomes. For example, users might assume that because an AI system is sophisticated, it is also reliable. However, AI models are prone to errors, sometimes producing “hallucinations” — outputs that are factually incorrect or nonsensical.

A well-known incident involved ChatGPT incorrectly asserting that no African countries start with the letter “K,” which even influenced Google search results, showing how AI errors can propagate misinformation. This risk becomes more pronounced as AI is integrated into everyday devices like iPhones, where millions of users might rely on it for information and decision-making. If the public starts to trust AI’s often shaky judgments, we could see a rise in misinformation, leading to a potentially dangerous overreliance on AI tools that are not equipped to handle the nuances of human knowledge and reasoning.

Photo by Solen Feyissa on Unsplash

Persistent bias in ChatGPT

The bias embedded in AI models is another critical issue that has yet to be adequately addressed. AI systems are only as good as the data they are trained on, and much of this data comes from the internet, a space rife with biases related to race, gender, and language. Despite efforts to mitigate these biases, they persist, and sometimes the attempts at correction introduce new, unintended errors. Google’s Gemini AI, for instance, generated historically inaccurate images that depicted Black soldiers as part of World War II-era German forces. Such biases are particularly troubling when AI output is perceived as impartial or objective simply because it is computer-generated. Integrating ChatGPT into iPhones, used by millions, could amplify the spread of biased or inaccurate information on a vast scale.

This concern is compounded by the fact that users might not be fully aware of these biases. Many people might not scrutinize the information provided by AI as critically as they should, especially when the AI is embedded in a trusted device like an iPhone. This could lead to biased or incorrect information being accepted as truth, further entrenching societal biases.

Is there a demand for this?

The final question that needs to be addressed is whether there is a genuine demand for AI integration at this level. When ChatGPT was launched, it quickly became the fastest-growing app in history, reaching 100 million monthly users within two months. However, subsequent data suggests that this initial surge has not translated into sustained, widespread usage. According to a recent survey by the University of Oxford, four in ten Britons had not even heard of ChatGPT, and only 9% used it weekly or more frequently.

This suggests that while there is interest in AI, it may not be as pervasive as tech companies would like to believe. Apple’s decision to integrate ChatGPT into its devices could be seen as a push to create demand rather than respond to existing consumer needs. The company’s efforts to secure user data and ensure privacy, while commendable, might not address the underlying issue: a limited consumer appetite for AI-driven tools.

Moreover, integrating AI into iPhones might be more about generating new revenue streams in a maturing smartphone market than meeting consumer needs. With smartphone sales growth slowing post-pandemic, the push to embed AI into devices could be a strategy to reignite consumer interest and drive new sales. However, if consumers are not genuinely interested in or ready for AI on this scale, the strategy could backfire, leading to a lukewarm reception and potentially harming Apple’s brand.

For now, integrating AI into iPhones might feel more like having a well-meaning but untested intern at your side—eager to help but not yet fully capable of handling the job. While AI will undoubtedly become more integral to our lives in the future, companies like Apple must proceed with caution, ensuring that the technology is truly ready for widespread use before unleashing it on millions of unsuspecting users.

George Mavridis is a journalist currently conducting his doctoral research at the Department of Journalism and Mass Media at Aristotle University of Thessaloniki (AUTH). He holds a degree from the same department, as well as a Master’s degree in Media and Communication Studies from Malmö University, Sweden, and a second Master’s degree in Digital Humanities from Linnaeus University, Sweden. In 2024, he completed his third Master’s degree in Information and Communication Technologies: Law and Policy at AUTH. Since 2010, he has been professionally involved in journalism and communication, and in recent years, he has also turned to book writing.