Disclaimer: I am not a computer genius, nor am I a programmer of any kind. I don’t claim to be an expert. I am a writer. The following views are based on years of reading and analyzing not only news stories, but classic and contemporary science fiction, how people have reacted to technological advances in light of these stories, and the resulting cultural changes.
In the following blog, I will be using the term “robot” as an overarching term to refer to:
- Robots (Hey, C-3PO!)
- AI (HAL, looking good.)
- Computers (I need a new one.)
- Algorithms (Thanks for the stats!)
A specific term appears on its own only when it refers to that application and excludes the others (e.g., algorithms).
The robot in the above picture is not representative of the entire population. No humans were harmed in the creation of this blog.
Twitter’s Project Lightning is a new way for users to have content curated for them on a new “events” feed. Namely, content that is curated by humans, not the robots that have taken on the job since the 1990s.
Twitter’s move to involve human moderators is being called “revolutionary,” which to me is just silly. Having a robot that can recognize content and its relation to what’s happening in the world is revolutionary. It’s new. It’s exciting. It satisfies my need for science fiction miracles to come true.
Robots: The Ultimate Frenemy
There has always been big pushback against automation. People don’t want to be tracked and have what they searched for 10 minutes ago appear on their Facebook feed when they next log in. And it’s true that robots and computer programs can potentially take over everyone’s jobs, but I don’t dispute their necessity. I don’t think anyone would, other than the fiercest Luddite.
There are some jobs humans simply cannot do all by themselves, like rank sites for searches. Google and Yahoo used to have humans rank sites back in ye olde times, but they were soon phased out as the internet grew larger and bots were made to do the job much faster and more accurately. Google still uses humans to help rank sites, but the majority of it is done by Googlebot.
Robots make our lives more convenient and connected to others, so why do people dislike robots so much, yet love them at the same time? Two words: “Science Fiction.” Science fiction drives innovation, but it can also hinder it. To understand, we have to step back and understand the stories that almost all of us grew up with.
The Day the Friendship Died
I am a huge fan of science fiction. As an avid reader of Isaac Asimov and other Golden Age sci-fi, I can tell you how optimistic people were about robots before the 1960s. Robots would help us with our chores and protect us in harsh environments, among a slew of other things.
Things shifted when fiction began to leak into reality. Even if a real robot was created with no malicious intent, it could still be used for evil, if you want the dramatic term. The first computer virus, for example, was written by a kid who simply used it to prank his friends before it got out of control. Once these threats became real, they were finally taken seriously, and authorities often couldn’t tell the difference between fiction and reality.
These incidents made stories of computers going rogue and taking over the world truly prominent. I concede this isn’t a recent phenomenon; even in the more optimistic Golden Age, stories explored the dark side of technological advances. (Not to mention throughout history, where magic and technology often intertwined.)
Now, thanks to the change in culture, we are conditioned to primarily think of the evil robot archetype. I challenge you to think of a recent science fiction movie or book that had an optimistic message other than “we will beat the robots.” Even Stephen Hawking and a number of other scientists have proclaimed that advanced, intelligent computers will be the doom of humanity.
These pop culture patterns put most of us on the defensive when we think about robots having more control in our day-to-day lives. Even when it comes to robots deciding what we see on social media based on our own activity, we get apprehensive. These fears are only reinforced when something goes wrong with no fault to the robot itself.
Placing Blame When Something Goes Wrong
Let’s stop and take a deep, calming breath. The robots aren’t rising.
First of all, robots are controlled by humans, meaning the human should be to blame if the machine malfunctions. Yet when an accident happens, it’s the robot that gets held responsible, not the human.
Computers do what they are programmed to do, but that doesn’t mean they can’t learn. They will make mistakes, especially when trying to discern what is considered inappropriate content. It is our responsibility to teach them.
The same goes for the many algorithms and programs that control a majority of our daily lives. If something goes wrong, say, a command doesn’t work correctly, the computer is blamed. Forget that the programmer may have written a piece of code incorrectly. Even data corruption is ultimately human error, intentional or otherwise.
When done correctly, robots can be much better curators of social media content because of their ability to learn and quickly determine content that matters.
What Are You Watching? Robots Don’t Judge
Algorithms allow users to see what they want to see based on their history on a site. It can get clinical, lacking a human touch, but on sites like Netflix, I’m not too worried about a human touch. I’m happy with only a string of numbers knowing my viewing habits. A robot doesn’t care that I just watched two seasons of Supernatural in one night, though it will stop mid-view to ask me if I really, truly want to continue watching. (“Seriously, that can’t be healthy.”)
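To make the “string of numbers knowing my viewing habits” concrete, here is a toy sketch of history-based recommendation. Everything in it is invented for illustration: the genre labels, the catalog, and the function name have nothing to do with Netflix’s actual system, which is far more sophisticated.

```python
from collections import Counter

def recommend(history, catalog, n=2):
    """Suggest unwatched titles from the user's most-watched genre.

    history: list of (title, genre) pairs the user has watched.
    catalog: dict mapping genre -> list of available titles.
    """
    # Find the genre that dominates the viewing history.
    top_genre = Counter(genre for _, genre in history).most_common(1)[0][0]
    watched = {title for title, _ in history}
    # Recommend titles in that genre the user hasn't seen yet.
    return [t for t in catalog.get(top_genre, []) if t not in watched][:n]

history = [("Supernatural S1", "fantasy"), ("Supernatural S2", "fantasy"),
           ("Planet Earth", "documentary")]
catalog = {"fantasy": ["Supernatural S1", "Supernatural S3", "Grimm"],
           "documentary": ["Cosmos"]}
print(recommend(history, catalog))  # ['Supernatural S3', 'Grimm']
```

Binge two seasons of a fantasy show and the robot, without judgment, simply hands you more fantasy.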
Human or No, Robots Are Still the Most Essential Part of Curation
Project Lightning will only be curating content based on what searches and hashtags are trending. The way the human editors know what topics are trending, and thus, what to curate, is through an algorithm telling them the statistics.
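The statistic the algorithm hands to those editors can be as simple as a frequency count. This is a hypothetical sketch, not Twitter’s actual trending system, which weighs far more signals than raw counts:

```python
from collections import Counter

def trending(tweets, n=3):
    """Return the n most frequent hashtags in a batch of recent tweets."""
    # Pull out every word that looks like a hashtag, case-folded.
    tags = [word.lower() for tweet in tweets
            for word in tweet.split() if word.startswith("#")]
    return [tag for tag, _ in Counter(tags).most_common(n)]

tweets = ["Watching the match #WorldCup", "#WorldCup is wild tonight",
          "New episode! #Supernatural", "#worldcup final whistle"]
print(trending(tweets))  # ['#worldcup', '#supernatural']
```

The human editor never reads your individual tweets; they read the tally sheet the robot produces and curate from there.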
Of course, there is no evidence that a human will be looking through your data to curate content for you. For one, it’s an arguably impossible task for anyone to undertake given the number of users a site like Twitter has. Sites like BuzzFeed are famous for their human-curated content, though BuzzFeed does have a bad reputation due to its clickbait titles.
How Do Human Editors Decide What’s Important?
Human editors often decide what content is important based on:
- Their own subjective opinions
- What will get the most clicks
- What is trending at the time
In newsrooms and on curation sites like BuzzFeed, the volume of content put out and viewer engagement are what keep human curators employed. For example, a liberal or conservative news site will post what it knows its audience will agree with. The same goes for anyone on social media with a strong opinion. As a result, the content spreads successfully among that audience.
Not So “Soulless” After All: Robots Are Just as Creative as Humans
Historically, a curator has always been a human who chose which pieces of art would be displayed to the public. It’s here that I believe curation should remain in human hands. I don’t think I would appreciate art chosen by a computer, even though we can’t tell the difference between a human composer and a robotic one.
If taught correctly, there should be no discernible difference between human and robot curated content, much like our robotic composers.
Removing the Stress of Curation
Having robots curate content also takes a lot of stress off of the humans who are responsible for sifting through content to keep graphic images like pornography and beheadings off of your feed. In the Philippines, people are paid to go through a site’s online content and remove graphic material. I cannot imagine the emotional and mental toll that being exposed to such content must take on someone. Not to mention the eventual desensitization. I wouldn’t wish it on anyone.
That is why Twitter’s new AI is being geared up to take on the job. By teaching it what is considered appropriate and what is not, it will be much more efficient at recognizing explicit content and quickly removing it. However, humans do need to teach it what to look for, because a sexual education video or chart could be accidentally flagged as porn.
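That human-taught exception is the crux, and even a toy rule shows why. The keyword lists and function below are entirely invented; real moderation models are statistical, not lookup tables, but the need for a human to supply the “educational context” exception is the same:

```python
# Invented keyword lists for illustration only; no resemblance to any
# real moderation system's vocabulary.
EXPLICIT = {"porn", "gore"}
EDUCATIONAL = {"anatomy", "health", "education"}

def should_flag(text):
    """Flag explicit posts, but let educational context pass."""
    words = set(text.lower().split())
    if not words & EXPLICIT:
        return False  # nothing explicit at all
    # The human-taught exception: explicit terms in an educational
    # context are allowed through rather than flagged.
    return not (words & EDUCATIONAL)

print(should_flag("free porn links"))         # True
print(should_flag("anatomy diagram not porn"))  # False
```

Without that second rule, the robot dutifully flags the anatomy diagram too, which is exactly the mistake the paragraph above warns about.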
What Sides of a Debate Are They Going to Show?
As mentioned previously, sites with human editors tend to show the sides of a debate they agree with. Robots, by contrast, can be programmed to be unbiased, taking bits from both sides and showing them to you. A little respectful challenge to opinion is healthy and allows us to grow. If we are constantly shown content we agree with, we will always believe our way is correct.
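One simple way to program that lack of bias is to interleave the two sides so neither dominates the feed. A minimal sketch, with invented story labels, assuming the two viewpoints have already been sorted into separate lists:

```python
def balanced_feed(side_a, side_b):
    """Alternate items from two opposing viewpoints into one feed."""
    feed = []
    for a, b in zip(side_a, side_b):
        feed.extend([a, b])  # one from each side, in turn
    return feed

pro = ["pro op-ed 1", "pro op-ed 2"]
con = ["con op-ed 1", "con op-ed 2"]
print(balanced_feed(pro, con))
# ['pro op-ed 1', 'con op-ed 1', 'pro op-ed 2', 'con op-ed 2']
```

A human editor might skip the side they disagree with; this loop, by construction, cannot.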
Good Robot, Bad Robot: The Moral Implications
I do not dispute the fact that robots can be misused, intentionally or otherwise, as evidenced here, here and here. But whether we like it or not, robots are here to stay, and we shouldn’t ignore the moral implications of having robots curate social media content.
Most importantly, is there really any difference between a human going through your data to curate content and a robot doing it?
Now, I cannot answer this question because it is a whole ‘nother can of worms with no single, definite answer. Not to mention it would take up several more blogs, if not a book or two.
Nevertheless, we can’t let the fear of technology stop innovation and go through another technological “winter.” Used responsibly, robots can be a boon to humanity, providing us with better ways to connect through social media and ultimately teaching us what it means to be human.