Generative AI software: the priority is to find shared rules before it is too late

Pope Francis in an unusual pose, wearing a flashy white puffer jacket over his cassock of the same colour; Vladimir Putin under arrest for war crimes. These images circulated on social media and in hundreds of newspapers, and millions of people saw them, but in reality they come from another world, because someone invented them. Nothing is real, yet everything looks real: nothing seems out of place when you examine the pictures. They were produced by generative AI software, systems trained on huge amounts of data to respond to any prompt, the initial request expressed in text or images. The more specific the prompt, the more detailed the result the software produces.

Dall-E

On the one hand, there is ChatGPT-4 for text; on the other, Midjourney and Dall-E 2 for images. These are the best-known and most widely used generative AIs, but they are certainly not the only ones: giants like Google, as well as small startups financed with hundreds of millions of dollars from investment funds, are developing similar software capable of producing precise and convincing answers to any prompt. However, the improvement of these systems must go hand in hand with the search for shared rules on their use. Otherwise, we are trapped.

The images of Pope Francis and of Putin's arrest cited above show how easy it is to generate fake content that most people could believe to be true. Hence the priority of finding rules of use on a global scale that can curb the spread of texts and images invented, intentionally or not, with generative artificial intelligence systems.

The fear of seeing ad hoc content multiply and go viral on social media, eventually imposing itself on the daily debate in newspapers, radio and TV, is turning the initial enthusiasm into apprehension. These are valuable and effective tools, but if used for propaganda, military or political purposes (and the list could be much longer), they are perfect for feeding fake news to those who lack the cognitive and technological means to analyse what they are facing.

This is why a group of (many) entrepreneurs and (a few) computer scientists signed an open letter inviting all laboratories training artificial intelligence software more powerful than ChatGPT-4 to suspend development immediately for at least six months.

Open AI

According to the signatories – who include Elon Musk (one of the early backers of OpenAI, the company that developed ChatGPT, Dall-E and many other generative AI systems) and experts on the subject such as Yoshua Bengio and Stuart Russell – there is a risk of "losing control of our civilisation". This is why "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable". Hence the need for a pause for reflection, which "should be public and verifiable and include all key players. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium".

Beyond the entrepreneurial aims of some of the signatories, it is clear that standards must be identified to set a perimeter around the currently undefined scope of generative AI software. Not doing so would leave the field open to anyone wishing to exploit these tools for profit, in terms of both income and power over others.

Intervening now is crucial, because texts and images are only part of what the large language models behind generative AI can produce: programming code, 3D models and videos are also on the way. If an image can induce people to believe whatever its creator wants, one can only imagine what might happen with a video simulating a murder, a robbery, a sexual encounter or any other act aimed at accusing or discrediting someone innocent and unaware of what has been orchestrated against them, while the "evidence" created with artificial intelligence seems to leave no room for doubt.

Alessio Caprodossi is a technology, sports, and lifestyle journalist. He moves between these three areas of expertise, telling stories of experiences and innovations to understand how the world is shifting. You can follow him on Twitter (@alecap23) and Instagram (Alessio Caprodossi), where he reports on projects and initiatives involving startups, sustainability, digital nomads, and web3.