How to regulate social media platforms effectively has become the quest of liberal and illiberal governments alike: the former to eradicate hate speech, the latter to control rebellious populations.
The UN defines hate speech as “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.”
While this definition is broader than “incitement to discrimination, hostility or violence”, which is prohibited under international human rights law, the UN notes that it has three important attributes. It covers:
- Hate speech conveyed through any form of expression, including images, cartoons, memes, objects, gestures and symbols, whether disseminated offline or online.
- Hate speech that is “discriminatory” (biased, bigoted or intolerant) or “pejorative” (prejudiced, contemptuous or demeaning) of an individual or group.
- Hate speech that denigrates real or perceived “identity factors” of an individual or a group, including: “religion, ethnicity, nationality, race, colour, descent, gender,” but also characteristics such as language, economic or social origin, disability, health status, or sexual orientation, among many others.
Identifying memes intended to misinform and/or incite violence depends on algorithm engineers. Those engineers and/or their companies must therefore carry responsibility for whatever escapes their oversight. At present, algorithm engineers are required to be qualified in computer science, mathematics, programming and software engineering – but not in social ethics.
Commenting on the recent spate of riots in the United Kingdom, Nobel Prize-winning journalist and critic Maria Ressa noted that everything about the violence that erupted on Southport’s streets, and then in towns across the country, was fuelled by wild rumours and anti-immigrant rhetoric on social media (The Guardian, 3 August 2024):
“You see this chain reaction in these alternative news channels, where disinformation can spread so quickly and can mobilise people to take to the streets – who are then prone to using violence because there’s this anger and these really deep emotions that are, of course, being amplified. And then, from these alternative outlets, it’s carried on to X or on to the mainstream social media platforms.”
In 2023, UNESCO put forward an Action Plan to regulate social media. In 2024, UN member states and governments will be asked to consider how to implement it.
The Plan is based on seven key principles:
- The impact on human rights becomes the compass for all decision-making, at every stage and by every stakeholder.
- Independent, public regulators are set up everywhere in the world, with clearly defined roles and sufficient resources to carry out their mission.
- These independent regulators work in close coordination as part of a wider network, to prevent digital companies from taking advantage of disparities between national regulations.
- Content moderation is feasible and effective at scale, in all regions and in all languages.
- Accountability and transparency are established in these platforms’ algorithms, which are too often geared towards maximizing engagement at the cost of reliable information.
- Platforms take more initiative to educate and train users to think critically.
- Regulators and platforms take stronger measures during particularly sensitive moments like elections and crises.
TikTok and Telegram were the prime vehicles for instigating the UK riots. Effective regulation is vital and urgent. But it is clear that without long-term cooperation between governments, regulatory authorities, civil society and the platforms themselves, social media will continue to be used by racist thugs to disrupt and hurt society.