Content warning: This story contains descriptions of violent acts against people and animals, accounts of sexual harassment and post-traumatic stress disorder, and other potentially disturbing content.
Be advised: the CW on this article is accurate. The descriptions of the content these people are asked to moderate - again and again - are horrific.
Interviews with Facebook content moderators
So my visceral response to this is pretty strong, but I'm struggling with what the right answer is. Shutting down Facebook (if it were possible) would, IMHO, do nothing but rearrange the deck chairs (although it might send a useful message). The social media genie is out of the bottle - and Twitter, where I found this article, is not exactly in a position to point fingers at anybody else.
This bit at the end struck me:
If you believe moderation is a high-skilled, high-stakes job that presents unique psychological risks to your workforce, you might hire all of those workers as full-time employees. But if you believe that it is a low-skill job that will someday be done primarily by algorithms, you probably would not.
Instead, you would do what Facebook, Google, YouTube, and Twitter have done, and hire companies like Accenture, Genpact, and Cognizant to do the work for you. Leave to them the messy work of finding and training human beings, and of laying them all off when the contract ends. Ask the vendors to hit some just-out-of-reach metric, and let them figure out how to get there.
Which is part of the continued upending of what capitalism is supposed to be about. These people are doing extremely hazardous work that's important to their employer (or in this case, their employer's employer), but they're paid crap because the tech companies are still convinced they're going to be able to solve the problem with machines. They're expendable seat-warmers, employed only until the "real" solution shows up.
This kind of thing isn't my area of expertise, but I'm pretty sure we're quite a few years - maybe decades - away from a reliable, fully automated content analysis system. (A purely automated one, I'd say, is impossible.) Treating these people like they're disposable because they may, someday, in the amorphous future, be replaced by software is immoral and unconscionable.
The other thing that hits me is all the hoops Facebook and Google (and all the others) jump through to "protect" speech. They are not the government. They can absolutely have all the content guidelines they want. They can say no videos, or no swearing, or no using the word "azure." Will this impact their business model? Sure it will. They'll have a few billion less to redistribute to the C-suite, but I'm not weeping for any of them.
For myself...I'm still on Facebook and Twitter. I interact with friends and family on Facebook. I interact with the writing community on Twitter. They're useful tools, but I'm uncomfortable with my continued participation. (Facebook is the one thoroughly dragged in this article, but Twitter's run by someone whose political beliefs are just about 180 degrees from mine.) Then again, I sold three books to News Corp, so it's a little late for me to be getting a conscience.
But it's never too late to recognize that there's something really, really wrong with how things are working. The question is, how do we go about repairing this?