Artificial intelligence and ethics are two sides of the same coin. It was not long ago that experts in the field framed control over advanced AI systems as the fundamental question. The risk was not, and still is not, that of watching helplessly as the machines rise up, Hollywood-blockbuster style, but that such systems end up complicating everyday operations instead of simplifying them.
A few examples? One among many: the self-driving car. Ever since the subject hit the headlines alongside the first real advances in the technology, the question has been who bears responsibility, and how, in the event of an accident. And beyond that: how will the on-board computer behave in specific situations? If a pedestrian crosses against a red light, will the AI be able, and more importantly willing, to stop? The answer is obviously yes, yet the scenario still invites different interpretations, both philosophical and factual.
In this climate of accelerating AI adoption, ethics has become a real concern. What risk does a set of simple mathematical instructions carry? The answer is that artificial intelligence expands the scope, breadth and speed of the actions an individual can take, putting enormous, sometimes devastating, power in their hands. Databases, computing power and the ability to learn and act are characteristics that make AI not only a game-changer but a vector of global transformation, because it can greatly accelerate the pace and impact of its effects on society, both positive and negative.
The more we rely on AI to make informed decisions, the greater the concern about how a model will deliver its results, and whether or not those results align with our intentions and the surrounding cultural and social norms. One instructive case dates back to 2016, when Microsoft launched its Tay bot on Twitter. Targeted by 'extremist' users with openly racist and sexist posts, it began to behave accordingly, adapting to its surroundings. One of AI's strengths, adaptability, thus became a serious problem for the Redmond giant, which had to withdraw the experiment after a few days.
The biggest limitation pervading AI models is that they cannot adjust their stance to suit the situation. According to SAS, a company that has studied innovation in the AI field for years, there are three levels of intervention, each corresponding to a stage of 'growth' in advanced models. The first is the replication of behaviour: if we train an algorithm inside a racist society, it will continue to operate according to those assumptions and beliefs.
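The replication effect can be made concrete with a toy sketch. The data and the frequency-based "model" below are entirely hypothetical, invented for illustration: a naive system that simply learns historical approval rates per group will faithfully reproduce whatever bias those records contain.

```python
# Hypothetical biased historical data: applicants from group "B" were
# approved far less often than group "A", regardless of merit.
# Each record is (group, outcome), outcome 1 = approved, 0 = rejected.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + \
          [("B", 1)] * 30 + [("B", 0)] * 70

def train(records):
    """A naive 'model': learn the historical approval rate per group."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [y for g, y in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)
# The model has learned nothing but the bias itself:
print(model["A"], model["B"])  # 0.8 0.3
```

Nothing in the code is malicious; the skew comes entirely from the training records, which is exactly the first level of intervention SAS points to.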
The second level is inherent in the very essence of the algorithm. The issue lies in understanding what logic the model follows when making decisions. Today we can have an algorithm that performs very well but is hard to explain, or the opposite, one that is explainable but has limited performance. Consider the mechanics behind car insurance. Here the algorithm is deliberately transparent: the fewer accidents or claims the driver causes, the lower the premium to be paid, which also depends on where one lives and one's merit class, among other parameters. The point is: can we build a neural network that creates better, personalised connections, one that responds more precisely to the variables of each individual?
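The transparent end of that trade-off can be sketched as a simple pricing rule. The function and its coefficients below are hypothetical, invented for illustration, not an actual insurer's formula: the point is that every parameter's effect on the price is visible and explainable to the customer, unlike the inner weights of a neural network.

```python
BASE_PREMIUM = 500.0  # hypothetical base price, for illustration only

def transparent_premium(accidents: int, merit_class: int,
                        region_factor: float) -> float:
    """Fully explainable premium: each factor's contribution is explicit.

    - each past accident raises the price by 15%
    - a better (lower) merit class lowers the multiplier
    - region_factor captures the local risk context
    """
    return round(BASE_PREMIUM
                 * (1 + 0.15 * accidents)
                 * (0.9 + 0.02 * merit_class)
                 * region_factor, 2)

print(transparent_premium(accidents=0, merit_class=1, region_factor=1.0))   # 460.0
print(transparent_premium(accidents=2, merit_class=10, region_factor=1.2))  # 858.0
```

A black-box model might price each driver more accurately, but it could not justify either figure line by line the way this rule can.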
This brings us to the third level: deployment. It is a key point. The model must not be the end of research and applicability, but a useful element in making a reasoned decision. In this way a certain ethicality can be achieved, with a flow that is automated yet still includes human awareness. Think of the override in forecasting processes, where advanced models support decisions that are ultimately taken by forecasters, rather than serving as the final, unchallengeable word.
The path that led to today's broad debate on the ethics of artificial intelligence is cultural, but also technological. Over the years we have moved from writing raw code to working with documented analytical tools, a great achievement in terms of standardisation and usefulness. By putting new capabilities into the hands of business, technology has been democratised to an unprecedented degree. The risk, if anything, is discovering how that openness also encourages the misuse of innovation. There is therefore an urgent need to expand the regulations governing the application of solutions such as machine learning.

For companies, the central point is planning. The open-source ethos, free to access, is perfect for those that cannot or do not want to commit a long-term IT budget, even over three or five years. They opt for free software, which is certainly valid, but whose operation is delegated to untrained or poorly trained staff, precisely to meet the need for savings. Innovation does take place, of course, but only through the addition of external flows that complement the internal ones, without the latter being modelled or improved. If those internal processes do not change, no level of 'intelligence' is achieved in the business. That is the real issue to be addressed in the coming years.