Disclaimer: I am not a sociologist or a psychologist; I am a writer and marketer. As such, I study how humans think and interact in order to make my articles and stories more believable and relatable. So take my following analysis of human behavior with a grain of salt.
The human in the above picture is not representative of the entire population. No robots or animals were harmed in the creation of this blog.
In recent weeks, the leak of Twitter’s Project Lightning has left many questioning whether or not humans are up to the task of curating content on Twitter feeds. Are humans better than robots at curating and presenting content on social media?
There is something to be said for relying more on human ability when it comes to curating certain kinds of content for other humans to enjoy.
The war between humans and robots has already begun. Unfortunately, it’s not nearly as exciting as movies like The Terminator would have you believe. Instead, the war is being waged by humans about robots—specifically whether or not computers should be given the ability to learn.
I’m not here to proclaim the end of the world with this blog, but I will say that our total reliance on algorithms for social media reflects a greater problem for the internet and the economy in general.
Human Moderators Are Nothing New
For those of you freaking out that Twitter is making some grand, groundbreaking experiment out of your feed, just calm down. Remember that Reddit, YouTube, and other platforms have used human moderators with varying degrees of success for years. Content that appeals to the most people (usually shown by “upvoting” or “liking”) is moved to the top, while content that is not appealing usually fades away beneath the good content. In short, human moderators have been improving the social media experience for a long time.
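If you want a feel for how simple that mechanism really is, here’s a minimal Python sketch of vote-driven ranking. The posts and the scoring rule are made up for illustration; real platforms layer time decay and many other signals on top of something like this.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int

def rank_posts(posts):
    # The crowd's verdict is the score: appealing content floats to
    # the top, unappealing content sinks beneath it.
    return sorted(posts, key=lambda p: p.upvotes - p.downvotes, reverse=True)

# Invented example posts, for illustration only.
feed = rank_posts([
    Post("Cat learns to high-five", 412, 10),
    Post("My 40-slide tax policy deck", 35, 30),
    Post("Sunset over the bay", 120, 4),
])
for post in feed:
    print(post.title)
```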
Robots Are Not Always Better at Jobs than Humans
I think we—especially the younger generations—are naturally predisposed to think that computers can do a job better than a human. But curation is its own specialty that lends itself to the human eye.
Moving from the Bad Stuff to the Good Stuff
For years, human moderators have been used by every social media site to sift through the bad stuff—porn, violence, racism—because computers were unable to recognize the difference “between Game of Thrones and ISIS video clips” or between porn and a mother nursing her baby. That cultural knowledge required human insight into what was offensive to certain groups and what wasn’t. However, constantly looking at these horrible images took a huge psychological toll on human moderators, and now Twitter has created an artificial intelligence that recognizes porn.
This kind of technology—the kind that helps people live happier, healthier lives—is what we need, not the kind that uselessly takes jobs that humans can fill (possibly more effectively than robots). While humans were perfectly capable of picking out harmful information, they shouldn’t have to.
But who’s to say that the cultural knowledge humans used to find the bad, harmful stuff isn’t just as useful for finding the good, interesting stuff?
Knowing What’s “Good” versus What’s “Popular”
Think about it like a museum: art pieces are carefully chosen and arranged by a human curator to create a unique and satisfying experience. While an algorithm may be able to choose a good photo based on certain qualifiers (symmetry, popularity, facial recognition, etc.), it takes a human’s understanding of what is culturally relevant and interesting to choose the image that will pique other humans’ interest above the rest. A curator can tell what stands out in a way that will make the viewer connect, question their assumptions, or stand in awe.
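To make that contrast concrete, here’s a toy Python sketch of the algorithmic side. The qualifiers and weights are invented for illustration, and the point is what’s missing: there is no feature for cultural relevance or for making a viewer stand in awe.

```python
# Invented weights for the measurable qualifiers mentioned above. A real
# system would compute these features with image-analysis models.
QUALIFIER_WEIGHTS = {
    "symmetry": 0.3,        # compositional balance, scored 0-1
    "popularity": 0.5,      # normalized likes/shares, scored 0-1
    "faces_detected": 0.2,  # did facial recognition find a subject, 0-1
}

def score_photo(features):
    # Combine measurable qualifiers into a single ranking score.
    return sum(QUALIFIER_WEIGHTS[name] * value for name, value in features.items())

# Both photos get tidy scores, but nothing here measures whether an
# image will make a viewer connect or question their assumptions.
print(score_photo({"symmetry": 0.9, "popularity": 0.4, "faces_detected": 1.0}))
print(score_photo({"symmetry": 0.2, "popularity": 0.9, "faces_detected": 0.0}))
```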
That’s the good stuff.
Notice here that I say “good stuff”…not “popular stuff.” Hopefully you’re not one of those people who think they’re synonymous, but even the people who read this comparison as “highest rated” versus “most viewed” are missing the point. I’m talking about finding the diamond in the rough. Organizations like TED Talks and Upworthy, for instance, bring attention to good content and then make it popular, which I believe is the end goal of Project Lightning, albeit with more of a “news source” twist than its innovation-centered predecessors.
Robots Share Biased Content, Too—and They’re Better at Keeping It That Way
I think one of the most honest and prominent problems people have with humans moderating the news is that they will certainly let their own political biases affect what they see as worthy to highlight for other people’s feeds. After all, that’s why Fox News and CNN have such strong stigmas attached to them.
But here’s the rub: robots share biased content, too. In fact, that’s one of the main reasons social media sites are so popular. Since keeping users engaged and on their pages is a main goal of social media sites, they’ve designed algorithms that track how you react to certain issues and use those reactions to determine what to show you in your feed. This “filter bubble” makes you more likely to share the content you come across and spend more time on the site.
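As a rough sketch of that feedback loop (not any site’s actual algorithm—the topics and numbers are invented), imagine the site logging your reactions by topic and then ranking candidate posts by how much you’ve engaged with each topic before:

```python
from collections import defaultdict

# Hypothetical engagement log: how strongly this user has reacted to
# posts about each topic so far. Real systems track far more signals.
engagement = defaultdict(float)

def record_reaction(topic, strength):
    # Every click, like, or lingering view feeds the profile.
    engagement[topic] += strength

def rank_feed(candidates):
    # Show the user more of whatever they already engage with; this
    # self-reinforcing loop is the "filter bubble."
    return sorted(candidates, key=lambda post: engagement[post["topic"]], reverse=True)

record_reaction("politics_left", 3.0)
record_reaction("cooking", 1.0)

feed = rank_feed([
    {"title": "Op-ed you already agree with", "topic": "politics_left"},
    {"title": "Op-ed that challenges you", "topic": "politics_right"},
    {"title": "Weeknight pasta recipe", "topic": "cooking"},
])
print([post["title"] for post in feed])
```

Run it and the op-ed you already agree with lands on top while the challenging one sinks to the bottom—no editorial malice required, just engagement math.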
However, even people with biases are capable of taking a step back and rethinking their opinions when someone challenges their beliefs in a well-presented, interesting, or respectful way. In fact, this kind of media can often lead to good online conversation. Like cultural knowledge, the subtlety that separates constructive conversation from blatant arguing is easier for humans to identify than for robots.
Humans Are “Just Afraid” of What They Don’t Understand…With Good Reason
The human race has committed horrible atrocities out of fear and hate. When we meet someone who is obviously smarter than us, one of two things happens: either we look up to that person as a leader or resource, or we see them as a threat. I think these strong reactions are the root of the problem when it comes to the idea of sentient computers. For so long, humans have been the dominant force on the planet because of our intelligence, but now there seems to be a growing trend toward computers that can think and do everything better than we can. Our competitive instincts kick in.
But being cautious is different from being afraid, so let’s look at the realistic consequences of robots taking over everything.
Danger #1: Intellectual Automation May Crash the Economy (Again)
We aren’t prepared for what will happen in a capitalistic society when “nearly half the working population is out of work through no fault of their own.” When computers can do jobs as well as or better than humans at a fraction of the price, human workers will quickly be replaced. The problem is that the number of available jobs is shrinking while the human population is growing, and no one has proposed how to fix that problem other than “kill the robots.”
Human moderators may be part of the solution. Since humans provide unique value to curation (their cultural knowledge, as explained above), they deserve to be part of the strategy. On a larger scale, humans who provide unique value in an industry should be recognized as useful. That approach may help employment rates as artificial intelligence progresses.
Danger #2: The Automation Domino Effect
This argument is a modification of “eventually we’ll be so reliant we won’t know what to do when something goes wrong.” Currently, humans are creating the algorithms, such as the one that determines what you see in your social media feeds. However, it is likely that machine-learning robots will soon be creating their own algorithms with little-to-no input from actual people.
As Moz’s Rand Fishkin explained in a recent video, humans will know less and less about what goes into an algorithm. The result: humans will become less and less able to create content that satisfies these algorithms. Soon, computers may be the only ones suited to write content that can satisfy them.
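To see how an algorithm ends up with no human author, here’s a small Python sketch (fake data; the hidden engagement rule is invented for the demo). A model fits its own ranking rule from engagement logs, and what comes out is a handful of opaque weights rather than anything a person wrote down or can easily explain.

```python
import math
import random

random.seed(0)

# Fake engagement logs: each post has five anonymous feature values, and
# (unknown to the model) users in this demo engage when feature 2 is high.
posts = [[random.random() for _ in range(5)] for _ in range(500)]
engaged = [features[2] > 0.6 for features in posts]

# Fit a logistic model by plain gradient descent. No human writes a rule;
# the "algorithm" comes out encoded as five inscrutable numbers.
weights = [0.0] * 5
bias = 0.0
for _ in range(200):
    for features, label in zip(posts, engaged):
        z = sum(w * x for w, x in zip(weights, features)) + bias
        pred = 1 / (1 + math.exp(-z))
        error = pred - (1.0 if label else 0.0)
        weights = [w - 0.05 * error * x for w, x in zip(weights, features)]
        bias -= 0.05 * error

print([round(w, 2) for w in weights])  # the learned, opaque ranking rule
```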
You don’t have to stretch your imagination too far to see this cycle getting out of hand. If the user becomes less important (something we may not know one way or the other in the future), have we really fixed the problem of “finding relevant content” for users?
Human moderators may help reduce the automation domino effect. Since they rely less on algorithmic tracking and determine for themselves what is valuable, they are more capable both of telling people how they choose pieces and of keeping the human audience in mind at all times.
Maybe Forcing a Combination of the Two Is the Best Case Scenario
Of course, neither Alanna nor I believe humans or robots should have complete and total control over the internet. However, the issues that robot-dominated moderation brings up can be partially—if not totally—prevented by supporting human moderators in the online world. We can draw on the strengths of each and offset the weaknesses of each to make a better world, both on the internet and in the real world.