Despite its efforts, Facebook has become the subject of recent controversy. For those who are otherwise unaware, since the release of its streaming video feature, Facebook Live, there has been a growing number of incidents in which people have used the platform for self-destructive purposes.
Without going into detail, there have been several instances of users streaming themselves causing harm to themselves, to others, or both. These incidents have become a clear and visible symbol of what some experts are calling a growing problem. According to the American Foundation for Suicide Prevention, suicide is the 10th leading cause of death in the United States, with more than 40,000 Americans dying by it each year. And by every available measure, that number is rising.
So it comes as no surprise that Facebook would take action in response, especially as it closes in on a user base of nearly 2 billion after reporting 1.86 billion users this past February. When the announcement came, it arrived via Zuckerberg’s own Facebook profile.
https://www.facebook.com/zuck/posts/10103695315624661
It’s a surprising move: in this age of AI algorithms and automation, the company chose to hire more personnel rather than build software to do the work, or acquire an existing startup whose technology could be adapted to the problem.
At this point, two questions need to be asked. First, will this measure prove effective? By Zuckerberg’s own admission in the post, the company receives “millions of reports” each week about content that users flag. As it stands, his boast about the team catching someone on Facebook Live who was considering suicide feels hollow at such an early stage, when the new systems meant to combat this problem haven’t even been implemented.
The second question is whether other major companies will adopt a similar approach. For those unaware, Google recently suffered a loss of advertisers after ads appeared alongside extremist videos on YouTube. Its catch-all solution was to block ads on channels with under 10,000 views, a heavily criticized decision, since we’re quite sure the problem with extremism isn’t low viewership.
For now, we must wait for the outcome: a public commitment of this size has to prove viable for Facebook to justify the expense. If it succeeds, other companies will likely adopt the same strategy, which would mean more hiring of human moderators.
The problem remains, though: more people are using social media to broadcast harmful and illegal acts. Which methods will work, and thus become universal, as we move forward? People or machines?