By Tanish Pradhan
Facebook has announced that it is using new “proactive detection” software to prevent suicide broadcasts. The software scans posts for signs that a user may be at risk of suicide. Artificial-intelligence-based algorithms will help improve response times, so that broadcasts can be shut off and help provided sooner.
Suicide prevention tools have been available on Facebook for over a decade. These allow users to report posts and suggest ways to approach friends in distress. There has, however, been an alarming rise in the number of broadcasted suicides, and exceptionally gruesome cases have received widespread media attention. One such case, in which a Chicago man killed his toddler and then himself on a live video broadcast, stands out. These incidents have prompted the need for a quicker and more effective response, the kind that AI can provide. The software is currently undergoing testing in the United States and will soon be available worldwide. Due to its stricter privacy laws, however, the European Union has been excluded.
The need for AI
The social media giant has grown into one of the largest content-generating and content-consuming websites over the last decade, with more than 2 billion active users. With approximately 30 billion pieces of content created each month, curating it becomes a mammoth task. In May, the company announced that it would be adding 3,000 employees to its 4,500-strong “community operations” team, which is responsible for reviewing content reported for violent or disturbing material.
No matter how many workers it has on board, the company could never manually sift through all of that material. This is where the AI-based software comes in: it uses algorithms to scan each post for keywords that indicate the user might be in need of help.
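To picture what such keyword-based scanning might look like, here is a minimal sketch in Python. It is purely illustrative: Facebook's actual classifiers are proprietary and far more sophisticated, and the keyword list, weights and function names below are assumptions made for the sake of example.

```python
# Toy illustration of keyword-based risk flagging. This is NOT Facebook's
# system: the phrases, weights, threshold and names are assumptions.

RISK_KEYWORDS = {
    "kill myself": 3,
    "want to die": 3,
    "end it all": 2,
    "no reason to live": 2,
    "hopeless": 1,
}

def risk_score(post_text: str) -> int:
    """Return a crude risk score based on which phrases appear in a post."""
    text = post_text.lower()
    return sum(weight for phrase, weight in RISK_KEYWORDS.items() if phrase in text)

def flag_for_review(post_text: str, threshold: int = 2) -> bool:
    """Flag a post for human review when its score meets the threshold."""
    return risk_score(post_text) >= threshold

print(flag_for_review("I feel hopeless, there is no reason to live"))  # True
```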
Providing resources to users at risk
Once flagged, these posts are prioritised for review by the community operations team, which then decides whether to take down posts or contact the necessary first responders. The team also helps distressed users by connecting them with friends or with organisations that can provide help. Facebook has also tied up with 80 local partners in the US, including Save.org, the National Suicide Prevention Lifeline and Forefront, to provide resources to users who are at risk.
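One way to imagine the prioritisation step, again as a rough sketch rather than a description of Facebook's actual pipeline, is a simple priority queue in which the posts with the highest risk scores are surfaced to reviewers first. The post IDs and scores below are invented.

```python
import heapq

# Illustrative sketch only: flagged posts ordered so that the highest-risk
# post reaches a human reviewer first. IDs and scores are made up.

review_queue = []  # heap of (negated risk score, post id); heapq pops the smallest

def enqueue(post_id: str, risk: int) -> None:
    heapq.heappush(review_queue, (-risk, post_id))

def next_for_review() -> str:
    _, post_id = heapq.heappop(review_queue)
    return post_id

enqueue("post-17", risk=1)
enqueue("post-42", risk=3)
print(next_for_review())  # post-42: the most urgent post is reviewed first
```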
Privacy or saving lives?
The idea of Facebook scanning posts and using AI for profiling may spark fears about privacy. Its Vice President of product management brushed these concerns aside, saying, “We have an opportunity to help here so we’re going to invest in that”. What is more, the site does not allow users to opt out of the scans. While the feature is designed to enhance user safety, one cannot dismiss the possibility of the software being used to detect instances of petty crime or political dissent.
Had the suicide-prevention aspect been omitted, the idea that a platform as large as Facebook would use AI to read all user-generated content and profile its users would have seemed outrageous. Seen in that light, the technology might do wonders in giving users a more comfortable and safer experience on the site, while at the same time proactively participating in community welfare and saving lives.
While such technology might make one a tad suspicious today, it could become commonplace in the future. In an age when most people own a smartphone, we have become accustomed to sacrificing privacy for convenience. With the growth and spread of AI-based profiling, society could become a place where technology is used to ensure the safety and security of all individuals at all times, and the giant corporations of tomorrow may play a larger and more active role in addressing basic social issues. It is also possible, however, that privacy becomes nothing more than a legend. This raises the question: is our privacy worth more than the lives saved?