The world wide web is not, nor has it ever been, a safe space.
From the earliest days of commercial availability, users embraced online platforms as a means to vent the most hideous and frightful aspects of their personalities. A digital Jekyll and Hyde syndrome, if you will. It's as synonymous with the internet as porn and viruses, and it continues to this day. It turned Twitter into a presidential platform, mIRC into the golden land of piracy, and Tumblr into… well, Tumblr.
But companies are fighting the overall toxicity of the internet as best they can, with limited, and sometimes embarrassing, results.
Facebook announced this week that it would be adding new measures aimed at suicide prevention. Whether this is an act of public interest or a reaction to users broadcasting suicides on Facebook Live is not something I'm here to debate. Suffice it to say, the company has reason enough to take such measures.
Users can now message the Facebook pages of support organizations (National Eating Disorder Association, National Suicide Prevention Lifeline, and Crisis Text Line). Facebook is also extending these features to people watching Facebook Live.
Twitter's efforts attempt to block internet toxicity at the source. Its first move was to stifle the creation of abusive accounts and to filter content from blocked accounts out of search results.
Since then, Twitter has revealed that users will be able to mute various items: specific words, usernames, and accounts still using the default "egg" avatar. Muting also extends to accounts that haven't verified an e-mail address or phone number, and mutes can last anywhere from a day to indefinitely. These measures come alongside a push for greater transparency in handling reports filed by users.
Google's attempt at fighting the era of trolling came through its offshoot Jigsaw, with which it promised to find a way to deal with toxic comments. Together they developed an API called Perspective, which monitors comments and calculates a toxicity score for them. On February 23 the project went open-source.
Of course, once it did, people found that it suffers from a critical vulnerability: typos.
Yes, it seems that a toxic or trolling statement containing typos can slip past the AI's assessment entirely. So unless hate speech online suddenly becomes well-crafted prose, it's here to stay.
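To see why typos are such an effective loophole, consider a toy sketch. This is my own illustration, not Perspective's actual model: a naive scorer that checks comments against a hypothetical list of toxic words. One swapped character is enough to dodge it, and machine-learning classifiers trained on correctly spelled text can stumble the same way.

```python
# Toy illustration of the typo loophole (not Perspective itself):
# a naive word-list scorer misses misspelled variants entirely.

TOXIC_WORDS = {"idiot", "moron", "trash"}  # hypothetical word list


def toxicity_score(comment: str) -> float:
    """Return the fraction of words in the comment that match the toxic list."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in TOXIC_WORDS)
    return hits / len(words)


print(toxicity_score("you are an idiot"))  # 0.25 -- flagged
print(toxicity_score("you are an idi0t"))  # 0.0  -- typo slips past
```

The real Perspective model is far more sophisticated than a word list, but the failure mode reported by testers is the same in spirit: the system learned from text as it is usually written, so deliberately misspelled abuse falls outside what it recognizes.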
Each company is taking on one of the oldest cornerstones of the internet in a different way. Some are drawing accusations of censorship, while others are hearing "too little, too late." Others still maintain that this is an unwinnable battle. What do you think?