Social Media: Accountability In The Face of Abuse
My childhood icon, inspiring sports professional and sixth-highest goalscorer in Premier League history, Thierry Henry, has announced he is quitting social media because of online abuse.
This is hardly a surprising move.
On the decision, Thierry Henry has said: “The sheer volume of racism, bullying and resulting mental torture to individuals is too toxic to ignore. There HAS to be some accountability. It is far too easy to create an account, use it to bully and harass without any consequence and remain anonymous. Until this changes, I will be disabling my accounts across all social platforms. I’m hoping this happens soon.”
Social media has huge upsides: human connection, entertainment, the spread of knowledge, shared moments of joy and sadness. I work at a Social Transformation agency and have seen social media’s power to build brands, fuel grassroots advocacy and generate groundswell among consumers.
Social Media as a power for good
The inspiring story of Captain Sir Tom Moore in 2020 was a great example of how everything can tie together. With people stuck at home, social media channelled a shared sense of community into his challenge, and by its end the total raised sat close to £40 million.
The result was underpinned by timely messaging and, most importantly, the purely positive sentiment that poured through social media in support of such an important cause.
This is but one example, among countless others, of how social can connect, inspire, create and cultivate community: ultimately speaking to society’s greatest attribute, its power to come together.
The ugly side of social media
However, Henry’s experience paints a more realistic picture of social media. The list of high-profile personalities and minority groups on the receiving end of racist, sexist or otherwise offensive abuse is telling enough.
Superstar footballer Marcus Rashford MBE received a barrage of racist abuse despite his incredible efforts to feed the nation’s school children; famed Arsenal legend Ian Wright failed to gain a ruling in his favour in a case against racist abuse he received over social media. So far, hate speech carries minimal consequences when it takes place online.
In a world where Black and Asian female MPs received almost half (41%) of the social media abuse sent to MPs (despite there being almost eight times as many white MPs included in the study), where online violence against women is flourishing and where 25% of British adults have experienced cyberbullying, we cannot ignore Henry’s insistence that there “HAS to be some accountability.”
The role of the law
The police have begun to take small steps towards combating online abuse by appointing a hate crime officer concentrating on social media. It’s a small change but one in the right direction.
However, despite this newfound focus by law enforcement, police often encounter roadblocks from the technology companies themselves. Detective Chief Constable Mark Roberts explains that “the response of social media companies in assisting the police to identify abusers has been woeful”, adding that in some cases they have been a key blocker to investigations.
The responsibility of the social platforms
Platforms are, slowly but surely, beginning to take action.
Asked what it is doing to combat online abuse, Facebook said: “We took action on 6.6 million pieces of hate speech content on Instagram, 95% of which we found before anyone reported it to us.”
This is a good start for public posts and forums. However, the platforms do not restrict users from sending words considered “hate speech” in the first place, which means those sending such messages are neither reprimanded nor made to face any consequences.
The good news is that the technology to regulate user output currently exists.
For example, this type of AI is already used to seek out copyright infringement. The companies can scan their platforms for a piece of music from a verified artist to which a user does not have rights. On YouTube, content from a paid-for subscription service such as Netflix lasts a mere 30 minutes before being taken down.
TikTok has banned certain words from being displayed in order to “minimize the spread of egregiously offensive terms”. It says it “remove[s] all slurs” from the platform “unless the terms are reappropriated, used self-referentially (e.g., in a song), or do not disparage.”
Policing of content is nothing new, then; it is something we live with and accept as part of the user agreement we enter into with the platforms in exchange for using them for free. We should question, however, why the urgency and focus applied to copyright is not applied to fundamental human rights.
Verification vs Anonymity
Another route of accountability would be in the form of authentication and verification of users of social media platforms, to safeguard the communities that use them. An understandably controversial idea, but again, not a new one and not something which technology companies are averse to introducing:
- Car-sharing app hiyacar requires you to upload all relevant driving documents, which are cross-referenced against the DVLA database, and asks for bank card or payment verification to confirm your identity. The system also verifies users via facial recognition and only grants access to the vehicle, via Bluetooth, once they are authenticated.
- Tinder similarly verifies profiles by asking users to upload selfies performing a series of specific ‘tasks’ to ensure authenticity. This protects users of the platform, who may meet up with someone they have never seen in person.
There are very valid concerns about privacy and the ability to remain anonymous on social media. After all, the freedom to present versions of yourself that are exaggerated or specific to a niche part of your personality holds great appeal for the billions who rely on social for daily connection. However, experiences like Henry’s are also part of the collective experience, and they have been normalised and protected to a degree that demands analysis and intervention.
Social media platforms need to take responsibility, and action, for the behaviour that takes place in their spaces, to create environments where everyone is welcome. Without the policing of hate speech, many more personalities and individuals will remove themselves from these vibrant, exciting, stimulating breeding grounds of ideas and inspiration. The outcome? A homogenous pastiche of ‘free speech’ with very little to offer.
If you’d like to have a conversation about how we can evolve the social media strategy for your brand, and help you create environments where everyone is welcome, drop us a line at info@1000heads.com.