
Banning social media for children under 16: the UK considers how to do it

The UK is considering whether and how to ban the use of social media by under-16s. The news was reported by Bloomberg, which explained that Rishi Sunak’s government is weighing its options, starting from the premise that the relationship between young people and social media needs to change.

I agree, because I believe it is necessary to protect the youngest from the false myths peddled on social media. These are a source of anxiety and emotional crises for children, who often develop psychological complexes that ruin their adolescence because of the dominant role models and the abuse that circulate on digital platforms. However, a ban is also very complicated to put into practice, for several reasons.

First of all, one has to ask whether it is right to deprive children of access to social platforms, or better to correct the distortions of these digital spaces, which are now hugely influential in the daily lives of adults and children alike.

The idea taking shape within the British government is not new: last spring, France discussed introducing a law that would keep under-15s away from social media. That law also provides for regular checks to ensure that young people do not break the rules by lying about their age to access the platforms.

The French proposal would allow access to social media with parental consent; the crucial point is that, until now, there has been no effective system for verifying a member’s age. Beyond the differences in minimum age between countries (15 in some cases, 13 or 16 in others), it has always been easy for those concerned to circumvent the methods platforms use to verify age. In most cases, it is still enough to lie about one’s date of birth to clear the hurdle.
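To see how low that bar is, here is a minimal sketch (in Python, with hypothetical function and variable names, not any platform’s actual code) of the self-declared date-of-birth gate most registration forms still rely on. The check is only as honest as the date the user types in:

```python
from datetime import date

MINIMUM_AGE = 16  # the threshold under discussion in the UK; France debated 15

def is_old_enough(date_of_birth: date, today: date | None = None) -> bool:
    """Return True if the self-declared date of birth meets the minimum age."""
    today = today or date.today()
    # Subtract one year if this year's birthday has not happened yet.
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= MINIMUM_AGE

# The whole gate rests on whatever the user claims at sign-up:
claimed_dob = date(2000, 1, 1)  # an under-age user simply types an earlier year
print(is_old_enough(claimed_dob))  # True -- the hurdle is cleared with a lie
```

Nothing in this flow ties the declared date to a real identity, which is exactly the gap the French proposal and the UK’s deliberations are trying to close.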

The simplest and most effective way to verify users’ ages is to require them to submit official documents, such as identity cards, to the platform. But this means handing the company behind the social network yet more personal data, on top of what is already entered during registration, raising questions about privacy and the handling of sensitive data by companies whose earnings depend on advertisers interested in members’ data.

Implementing a system that certifies age is, moreover, also the solution floated by several parties to keep minors away from the pornographic content that is now widespread on the web. As with platform access, this would mean turning to third-party companies that provide systems capable of verifying user information, such as Yoti, which we have discussed here.
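In practice, delegating the check to an external verifier usually means the platform never sees the document itself, only a signed yes/no answer. The sketch below illustrates that pattern; the endpoint, field names and response format are invented for the example and are not Yoti’s or anyone else’s actual API:

```python
import requests

VERIFIER_URL = "https://verifier.example.com/v1/age-check"  # hypothetical endpoint

def check_minimum_age(session_token: str, minimum_age: int = 16) -> bool:
    """Ask an external verifier whether the user behind session_token meets
    the minimum age; the platform never handles the ID document itself."""
    response = requests.post(
        VERIFIER_URL,
        json={"session": session_token, "minimum_age": minimum_age},
        timeout=10,
    )
    response.raise_for_status()
    # The verifier answers with a bare attestation, e.g. {"over_minimum_age": true},
    # so no date of birth or document scan is stored by the social platform.
    return bool(response.json().get("over_minimum_age", False))
```

Note that the privacy trade-off the article raises does not disappear in this design; it moves: the verifier, rather than the platform, ends up holding the sensitive document.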

The priority is to understand how to improve social media

Returning to the UK, it is known that in 2024 the government intends to launch a series of studies to analyse the risks faced by very young people active on Instagram, TikTok, Snapchat, X, Facebook and other platforms. Asked by journalists, a government spokesperson replied that ‘it’s about finding ways to empower parents’ and that currently ‘nothing has been approved by ministers’.

That something needs to be done, however, is a given, since ‘we have realised that there is a gap in the research and we will therefore evaluate the studies to be conducted,’ the spokesperson said. These words hint at a possible extension of the Online Safety Act, a law already in place in the UK to protect minors from harmful content, which provides for fines of up to 10% of global turnover for companies that do not comply.

‘Most social media is designed in a way that is addictive and polarising, and presents unrealistic lifestyles, so it ends up being a very bad place to spend adolescence. However, I am concerned that the immediate response is to exclude children from digital spaces instead of designing them to foster their growth.’ These words from the influential online child-safety activist Beeban Kidron hit the nail on the head. On the one hand, it is undeniable that social media is a dangerous place for minors; on the other, it seems complicated today, and perhaps not even right, to exclude children a priori from digital spaces that have become communities, especially for the youngest.

Action is therefore needed to improve these spaces and rid them of hostile actors who threaten children. That requires collaboration between governments, organisations specialising in the protection of minors, and the platforms themselves, which must invest part of their revenues in improving content moderation systems, because the figures on the dangers facing minors are alarming.

The children’s charity NSPCC has stated that since 2017, UK police forces have recorded 34,400 online grooming offences against children. The National Crime Agency, for its part, has highlighted the consequences of Meta’s introduction of encrypted messaging on Facebook: with the company no longer able to view content shared between users, a breeding ground could emerge for those who exploit these channels to groom minors or to share material depicting those minors with other criminals. This is an example of what should not happen on social media: although Meta introduced encryption for other reasons, minors, and the threats they face, should be considered before any new feature is launched.

Alessio Caprodossi is a technology, sports, and lifestyle journalist. He moves between these three areas of expertise, telling stories of experiences and innovations to understand how the world is shifting. You can follow him on Twitter (@alecap23) and Instagram (Alessio Caprodossi) to share projects and initiatives on startups, sustainability, digital nomads, and web3.