Recently, Facebook announced that it is rolling out an artificial-intelligence tool aimed at preventing suicide among its users. The tool monitors posts, videos, and livestreams, looking for signals from friends such as "Are you OK?" and "Can I help?"
Facebook's community operations team then reviews the content and contacts the user (and their friends) via Facebook Messenger with links to relevant resources, including the National Suicide Prevention Lifeline and the Crisis Text Line.
The AI component is a step beyond the platform's existing suicide prevention efforts. Facebook already allows users to report friends who they think might be at risk.
But at Facebook's annual Social Good Forum, CEO Mark Zuckerberg said AI could help spot suicidal tendencies even more quickly.
If the AI (or a friend) identifies a user in immediate danger of suicide, Facebook will alert local first responders — police, fire departments, or EMTs — who can aid the person on the ground.
"When you're trying to keep people safe, speed is really important," Zuckerberg said. "In the last month in the US, the [AI] tool has helped first responders reach out and help more than 100 people who needed that support quickly."
Reaching out
The company is also working to improve the technology to avoid false positives before the community operations team reviews incidents. In recent months, the company has made it easier for the team to find contact information for the correct first responders.
Facebook is rolling out the suicide prevention tool globally, except in the European Union, which has strict data-privacy laws. The tool is one of several humanitarian initiatives that Facebook has recently launched.
Source: The Economic Times, 11 December 2017