Zuckerberg Announces “We’re Working On” Hate Speech AI For Facebook Pre-Crime Censorship
In a post announcing the launch of Facebook’s new suicide-prevention Artificial Intelligence (AI) designed to “identify when someone is expressing thoughts about suicide on Facebook,” Facebook Founder Mark Zuckerberg also revealed that Facebook is actively working on a version of the AI that would identify “bullying and hate.”
The post on the suicide AI spoke of the potential expansion of the “pre-crime” tool to other arenas:
In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.
Then, in the comments section of the post, Zuckerberg moved the “in the future” agenda of the “hate AI” directly into the present. A Facebook user drilled down specifically on the “bullying and hate” comment and asked for more details:
You should be tearing down hate pages and bully pages know of three right now that have destroyed a lady. Run by a few people that have multiple accounts and torment this lady. Fake people saying awful things start there and work up. Smh.
To which Zuckerberg replied (emphasis added):
I agree. **We’re working on it.** Identifying bullying and hate speech are in many cases harder AI problems since they often use subtler and more nuanced language. We’ll keep working on the technology until we can do that well, but it might be several years until we can do that automatically at the quality level I’d like. For now, I’m glad these AI tools are good enough to help people with suicide prevention.
This is an open admission by Zuckerberg that Facebook is currently “working on” extending the AI tool far beyond suicide prevention, into the censorship of ideas that run counter to the Facebook ethos. And since Zuckerberg has established a zero-tolerance policy for “hate” (“There is no place for hate in our community”), those who speak on Facebook’s platform in a way that falls outside that ethos can soon expect to be censored by Facebook.
And Zuckerberg won’t even need humans to censor you, just an AI review of your “subtle” and “nuanced” language indicating that you don’t conform.
The Minority Report pre-crime division is born.