The age of personalized troll filtering is here and it’s awesome

by Ryan Seamons

Personalization is the future: the future of education, the future of manufacturing (3D printing), and now, the future of conversation and troll filtering.

Both Google and Facebook made announcements in 2017 about the future of filtering.

Google’s Perspective

Perspective is a project created by Jigsaw and Google’s Counter Abuse Technology team. Their demonstration site lets you see how statements rate on a toxicity scale.


Perspective will be most powerful as an aid to existing curation methods.

“The New York Times is planning to use [Perspective] as a first pass of all its comments, automatically flagging abusive ones for its team of human moderators.”

This improves on existing moderation models. Current options include blacklisting certain words or users, or shutting off comments completely. Perspective lets sites better prioritize which comments human moderators review, and it can even allow individuals to set their own preferences for what they see.
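To make that last idea concrete, here is a minimal sketch of how a site (or an individual reader) might score comments and apply a personal threshold. The endpoint and the request/response fields follow Perspective’s public v1alpha1 API, but the helper names, the placeholder key, and the threshold logic are my own illustrative assumptions, not a published implementation.

```python
import requests

# Hypothetical placeholder; a real key comes from Google's API console.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Ask Perspective how likely a comment is to be toxic (0.0 to 1.0)."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    body = response.json()
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def visible_comments(comments: list[str], threshold: float = 0.8) -> list[str]:
    """Hide anything scoring at or above the reader's chosen threshold."""
    return [c for c in comments if toxicity_score(c) < threshold]
```

The interesting design choice is that the threshold belongs to the reader, not the platform: a thick-skinned user could set it near 1.0 and see everything, while someone else could dial it down and read a calmer thread.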

Mark Zuckerberg and Facebook’s Global Ambitions

In the “Mark Manifesto” (his 2017 letter about the future ambitions of Facebook), Zuckerberg shares some current and future plans for filtering on Facebook.

His thoughts on pairing automated filtering with human judgment:

Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community.

Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review. This is still very early in development, but in initial testing, it already generates about one-third of all reports to the team that reviews content for our community.

Facebook’s future plans include far more sophisticated models of personal filtering.

Additionally, sensationalism in media and commenting makes it harder to have real discussion:

Polarization exists in all areas of discourse, not just social media. It occurs in all groups and communities, including companies, classrooms, and juries, and it’s usually unrelated to politics. In the tech community, for example, the discussion around AI has been oversimplified to existential fear-mongering. The harm is that sensationalism moves people away from balanced nuanced opinions towards polarized extremes.

If this continues and we lose common understanding, then even if we eliminated all misinformation, people would just emphasize different sets of facts to fit their polarized opinions. That’s why I’m so worried about sensationalism in media.

While Facebook has its issues, the idea of a platform that creates and supports meaningful communities is noble. Again, this comes back to the question of who sets the filters. Letting communities influence their own filters is a great improvement on past filtering attempts.

VidAngel is already enabling personalized filtering

One other example of a company at the forefront of filtering is VidAngel. VidAngel already provides individuals with personalized filtering for movies, or is trying to, amid a lawsuit from studios that have continually shown animosity toward personalized filtering. VidAngel lets studios make movies how they want and individuals consume them how they want. It’s a brilliant way to allow freedom both to create and to consume. I’m very much rooting for them in their court case.

VidAngel has filtered better than others, and they make some convincing arguments for why they should be allowed to do what they do. For example, they argue that their service is exactly what the Family Movie Act of 2005 permits, and they have pursued many avenues to partner with studios to enable this kind of filtering.

Governments and companies haven’t done a great job of enabling or setting filters. Individuals and communities know best how to filter for themselves, and technology is the tool that lets them do it.

The future of filtering is bright

The types of personalization that Google, Facebook, and VidAngel envision will help more people stay in the conversation, and more people listening and participating in constructive conversation benefits everyone. The worst thing we could do is give up hope and disengage from the discussion. That is the biggest problem with trolling: just as terrorism causes people to change their public behavior out of fear, trolling causes people to stay out of conversations out of fear.

We all have different levels of comfort with language, and we should each be able to control how inappropriate or intense what we see gets. Having personal control over what you read and see is an amazing prospect. There is still a lot to figure out, but I like the direction we’re taking.

ABOUT THE AUTHOR

Ryan Seamons writes about more human approaches to modern management.

Join Patterns for weekly ideas about making work better.

Also check out Manager School to become a better manager.