Menlo Park, CA — Flying in the face of Sarah Connor and Elon Musk’s worst fears, artificial intelligence is saving lives worldwide. Facebook recently made news by expanding its suicide prevention efforts to include AI-powered intervention. The company rolled out a proof-of-concept in the US back in March, and thanks to early successes, the program has already been globalized.

For years, the company has funded a community operations team that scans user posts and alerts local law enforcement to intervene when individuals at high risk of self-harm are identified. By adding AI muscle to the approach, both text-based posts and live video can now be scanned for flags that point to suicidal intent. In the past month alone, the social network says more than 100 interventions have taken place worldwide thanks to the AI engine and operations team. The new tech is also shortening response times, flagging at-risk posts earlier and bringing in support twice as fast as user-reported cases.
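To make the flag-then-review flow concrete, here is a deliberately simplified sketch of how text scanning can route posts to human reviewers. This is not Facebook’s actual system (its production models and signals are not public); the phrase list, weights, and threshold below are all invented for illustration.

```python
# Toy risk flagger: illustrative only, NOT Facebook's production model.
# Phrases, weights, and the threshold are hypothetical.

RISK_PHRASES = {
    "no reason to live": 3,
    "want to disappear": 3,
    "goodbye everyone": 2,
    "so hopeless": 2,
}

REVIEW_THRESHOLD = 3  # score at which a post is queued for human review


def risk_score(post: str) -> int:
    """Sum the weights of any flagged phrases found in the post text."""
    text = post.lower()
    return sum(w for phrase, w in RISK_PHRASES.items() if phrase in text)


def needs_review(post: str) -> bool:
    """True if the post's score meets the human-review threshold."""
    return risk_score(post) >= REVIEW_THRESHOLD


# A flagged post goes to the operations team; the AI never acts alone.
print(needs_review("I feel like there's no reason to live"))  # True
print(needs_review("Great game last night!"))                 # False
```

The key design point this sketch preserves from the article is that the AI only prioritizes posts; trained humans on the operations team make the intervention decision.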

Despite this system’s high-tech nature, the human element of these prevention efforts is more crucial than ever. Over the past decade, Facebook has partnered with dozens of suicide-prevention organizations to truly understand the warning signs of suicide and develop better intervention methods. The company is seeking to better protect its communities and to leverage its significant role in billions of people’s lives for the greater good.

Why This Matters—

In this instance, the header above doesn’t quite do the topic justice; using technology to save human lives is, of course, of immeasurable importance. But what can we learn from Facebook’s approach?

First, it’s worth noting that even a young tech giant recognizes the importance of real human insight. Rather than relying on AI for everything, Facebook honed its approach through partnerships with more than 80 suicide-prevention organizations, which helped establish the logic behind its digital tools.

Second, it’s clear that artificial intelligence is finally reaching a new level of sophistication: it’s beginning to grasp, or at least register, human intent. This AI engine is doing more than crunching ones and zeroes; it’s interpreting both text and video to understand what’s happening. What diagnostic doors will technology like this open in mental health, and in healthcare as a whole?

About the Author:

Drew Beck brings more than a decade of broad healthcare experience to GSW in his role as the Director of Innovation. He has enjoyed working for big healthcare names including Eli Lilly & Co. and GlaxoSmithKline in Global Marketing and Pharmaceutical Sales roles, but his start came from hands-on work in patient care in Emergency Medicine. This foundation has given him a deep understanding of both patients and healthcare professionals. In his current role, he combines all he has learned from this background with insights into current market trends to help clients drive the future of their brands.