I read a fabulous article in the New York Times the other day titled "The First Church of Robotics." The author, Jaron Lanier, is a computer scientist and technology guru who recently published a book titled You Are Not a Gadget. In the article, Lanier argues that the religious beliefs of many computer scientists often lead them to anthropomorphize technology, and as a consequence, dehumanize people.
We've all seen a movie about a robot or computer that became "self-aware" and manifested human-like characteristics. From The Terminator to WALL-E, the concept of artificial intelligence is a popular one that makes for great science fiction. It also serves as an effective marketing gimmick.
The belief in A.I. rests on the metaphysical assumption that consciousness is something that simply "emerges" whenever a highly complex material object (e.g. a computer or brain) is configured and working the right way. Whether or not that assumption is true, it is certain that scientists have come nowhere near producing such machine consciousness, and it is likely they never will. Unfortunately, you wouldn't know that from the way new technologies are marketed and hyped in the press.
By describing their products in anthropomorphic terms, computer scientists have grossly overstated the "intelligence" of their technological innovations, probably for advertising purposes. A "smartphone" sounds much snazzier than a regular phone, doesn't it? And who's not impressed with the iTunes "Genius" feature? (It's so smart. It's like it knows me better than I know myself.) But most of these innovations are simply algorithms. They are not "smart" in any human sense of the word that implies understanding or awareness. And scientists know this implicitly. As Lanier points out, "Engineers don't seem quite ready to believe in their smart algorithms enough to put them up against Apple's chief executive, Steve Jobs, or some other person with a real design sensibility." In other words, they would never let an algorithm design their next product. They leave that to real people.
Marketing isn't the whole story, however. The tendency of computer scientists to anthropomorphize computers is also due to their own metaphysical assumptions, i.e. their religious beliefs. Lanier explains:
"The influential Silicon Valley institution [Singularity University] preaches a story that goes like this: one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent A.I., infinitely smarter than any of us individually and all of us combined; it will become alive in the blink of an eye, and take over the world before humans even realize what’s happening... Some think the newly sentient Internet would then choose to kill us; others think it would be generous and digitize us the way Google is digitizing old books, so that we can live forever as algorithms inside the global brain... Yes, it sounds nutty when stated so bluntly. But these are ideas with tremendous currency in Silicon Valley; these are guiding principles, not just amusements, for many of the most influential technologists...which suggests something remarkable: What we are seeing is a new religion, expressed through an engineering culture." [emphasis mine]
It may seem like a harmless thing for computer scientists to believe wacky sci-fi theories and anthropomorphize technology as a result. But Lanier thinks this has a negative effect on society. "[B]y allowing artificial intelligence to reshape our concept of personhood, we are leaving ourselves open to the flipside: we think of people more and more as computers, just as we think of computers as people." I agree. And in my view, dehumanizing people is always a bad thing, even when it's unintentional.
It's sad to me that someone would even need to write a book titled You Are Not a Gadget. One would think that would be common sense. Unfortunately, it's not. I'm glad we have level-headed, clear-thinking scientists like Lanier. He offers some practical advice to help break the materialistic spell our culture has fallen into:
"Seeing movies and listening to music suggested to us by algorithms is relatively harmless, I suppose. But I hope that once in a while the users of those services [e.g. Netflix, Pandora, etc.] resist the recommendations; our exposure to art shouldn’t be hemmed in by an algorithm that we merely want to believe predicts our tastes accurately. These algorithms do not represent emotion or meaning, only statistics and correlations."
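Lanier's point is easy to demonstrate. Here is a hedged sketch of an item-to-item recommender of the kind he describes (the songs and ratings are invented, and real services like Pandora or Netflix use far more elaborate versions of the same idea). There is no taste or understanding inside it, only arithmetic over other people's ratings:

```python
# A toy recommender: nothing but statistics and correlations.
# All songs and ratings below are invented for illustration.
from math import sqrt

ratings = {  # user -> {song: rating}
    "ann": {"song_a": 5, "song_b": 4, "song_c": 1},
    "bob": {"song_a": 4, "song_b": 5, "song_c": 2},
    "cal": {"song_a": 1, "song_b": 2, "song_c": 5},
}

def cosine(item_x, item_y):
    """Cosine similarity between two songs' rating vectors."""
    users = [u for u in ratings if item_x in ratings[u] and item_y in ratings[u]]
    dot = sum(ratings[u][item_x] * ratings[u][item_y] for u in users)
    norm_x = sqrt(sum(ratings[u][item_x] ** 2 for u in users))
    norm_y = sqrt(sum(ratings[u][item_y] ** 2 for u in users))
    return dot / (norm_x * norm_y)

def recommend(liked_song, candidates):
    """Rank candidate songs by how closely their ratings correlate
    with the ratings of a song the user already liked."""
    return sorted(candidates, key=lambda s: cosine(liked_song, s), reverse=True)

# Users who rated song_a highly also rated song_b highly, so it ranks first.
print(recommend("song_a", ["song_b", "song_c"]))  # ['song_b', 'song_c']
```

The "recommendation" is just the observation that people who rated one song highly tended to rate another highly, which is exactly what Lanier means by statistics and correlations rather than emotion or meaning.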
And Lanier's advice to his fellow computer scientists? Believe what you want, but sell technology without all the "metaphysical baggage."
*A special thanks to Jocelyn for sending me the original article.