Relying on AI is already a bad idea

A few days ago, there was talk of Amazon’s efforts to better integrate AI into its systems, from recommending the best size to summarizing user reviews; however, it seems that sellers have beaten Bezos’ company to the punch, with a slew of errors clearly originating from ChatGPT. Users of the U.S. section of the world’s best-known e-commerce platform recently got a surprise somewhere between hilarious and puzzling: several products, from garden chair sets to sofas to Chinese religious treatises and more, shared a name clearly attributable to ChatGPT: “Sorry, I can’t complete this request due to OpenAI policies.” As if Amazon users did not already have to deal with scams, fraudulent listings, and deceptive products, they now have to take AI interference into account as well.

In addition to titles explicitly mentioning OpenAI, Amazon’s U.S. section was flooded with other messages that clearly pointed to AI-related errors, such as “Sorry, but I can’t generate a response to this request” or “Sorry, but I can’t provide the information you’re looking for”; in some cases, the AI was even quite specific about why it could not complete the request, citing reasons such as “Requires use of trademark,” “Promotes a specific religious institution,” or even “Encourages unethical behavior.”

Big tools, big responsibilities

Using AI models to generate product names is, of course, not against Amazon’s guidelines. On the contrary, Amazon has launched its own AI tool for sellers, with the goal of “creating more accurate and appealing product descriptions, titles, and details.” Certainly, a shrewd seller would have taken a moment to correct the AI’s error, but apparently even that much effort on a listing is too great a demand for the platform’s scammers. And for every seller easily identified by one of the error messages above, there are probably many others using AI in the “correct” way.


Amazon is not the only online platform where AI bots are so easy to spot: a quick search for “goes against OpenAI policy” and “as an AI language model” on Twitter-X reveals dozens of bot-created posts running into the same glitch, as cybersecurity engineer Dan Feldman pointed out back in April. As amusing as it may be to watch AI bots trip over themselves while churning out content at high frequency, this is a wake-up call, especially if we look at cases such as Clarkesworld Magazine, a science-fiction periodical overrun with submissions from phantom authors, or Amazon’s e-book store, which had to limit daily publications to three per user after the system was overwhelmed last September.

This demonstrates that we cannot yet rely entirely on AI tools. What is gained by having ChatGPT write a product description in seconds if we then have to go back and check that everything is correct? If technology is supposed to make our lives easier, AI is, at least in some cases, complicating them. There will probably come a day when AI performs tasks better (and faster) than we do, including tasks that are not especially repetitive but certainly boring. But evidently that day is not today, and we cannot even see it on the horizon. Not yet, at least.

Antonino Caffo has worked in journalism, particularly technology journalism, for fifteen years. He is interested in topics related to IT security as well as consumer electronics. Antonino writes for the most important Italian generalist and trade publications. You can sometimes see him on television explaining how technology works, which is not as trivial as it seems for everyone.