May 29, 2015

You can't buy me worry: what I have over the robot next door

I worry a lot. It is human nature to worry.

Recently I have been worrying about machines—specifically about computer algorithms and the ability of smart machines to take over creative tasks that were once reserved for human beings.

If you want to worry this over, too, I suggest reading “Rise of the Robots” by Martin Ford (Basic Books, 2015). Ford’s book was recently reviewed in the NEW YORK TIMES Book Review by Barbara Ehrenreich, who contemplates Ford’s convincing evidence that “the job-eating maw of technology now threatens even the nimblest and most expensively educated” of folks.

“Tasks that would seem to require a distinctively human capacity for nuance are increasingly assigned to algorithms, like the one currently being introduced to grade essays on college exams.”

An interesting example, since yes, as a college professor who has been replaced in writing centers by younger graduate students and computer programs in recent years, I live every day with the fear of becoming obsolete.

Particularly terrifying, Ehrenreich suggests, is the fact that computer programs are already writing a significant proportion of the news articles on the web. Within a decade, according to WIRED magazine, 90% of the news you read will be computer generated. While 100 monkeys at 100 typewriters typing for 100 years may have proven unable to write anything sensible after all, beware: here come the machines that can.

According to Ford, a significant proportion of the copy that appears in FORBES magazine, for instance, is already being written by software produced by Narrative Science, Inc. “The company’s software generates a news story approximately every 30 seconds…” Can anyone out there name one human reporter capable of producing 100% accurate copy that quickly?

Lest you think such stories cannot possibly include a level of nuance that would satisfy the common reader, remember that most people read the news quickly these days, skimming articles for just enough pertinent data to feel minimally informed about an event. Anyway, you can judge for yourself. Here is one example of an algorithm-written article from Ford’s book:

Things looked bleak for the Angels when they trailed by two runs in the ninth inning, but Los Angeles recovered thanks to a key single from Vladimir Guerrero to pull out a 7–6 victory over the Boston Red Sox at Fenway Park on Sunday.  
Guerrero drove in two Angels runners. He went 2–4 at the plate. “When it comes down to honoring Nick Adenhart, and what happened in April in Anaheim, yes, it probably was the biggest hit [of my career],” Guerrero said. “Because I’m dedicating that to a former teammate, a guy that passed away.”  
Guerrero has been good at the plate. During day games Guerrero has a .794 OPS [on-base plus slugging]. He has hit five home runs and driven in 13 runners in 26 games in day games.
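
To demystify the trick a bit, here is a minimal sketch of how I imagine such a system might work. This is my own guess, not Narrative Science’s actual method: take a structured box score and pour it into prewritten sentence templates. The numbers below come straight from the quoted recap; the data fields, the write_recap function, and the template wording are all my own invention.

    # A toy "robot sportswriter": fill sentence templates from structured game data.
    # Illustration only; real news-generation software is far more sophisticated.
    game = {
        "winner": "Los Angeles",
        "loser": "Boston Red Sox",
        "winner_runs": 7,
        "loser_runs": 6,
        "venue": "Fenway Park",
        "day": "Sunday",
        "star": "Vladimir Guerrero",
        "star_hits": 2,
        "star_at_bats": 4,
        "star_rbi": 2,
    }

    def write_recap(g):
        """Assemble a short recap by slotting the data into canned sentences."""
        return (
            f"{g['winner']} pulled out a {g['winner_runs']}-{g['loser_runs']} victory "
            f"over the {g['loser']} at {g['venue']} on {g['day']}. "
            f"{g['star']} drove in {g['star_rbi']} runs and went "
            f"{g['star_hits']}-{g['star_at_bats']} at the plate."
        )

    print(write_recap(game))

The point is simply that once the data arrive in a tidy structure, stringing them into serviceable prose is cheap and fast, which is what makes a story every 30 seconds believable.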

I want to believe that while machines may be able to write factual news articles with a bit of a human-interest twist, they will never be able to capture the paradigm of poetic writing, a form that relies less on literal thinking and more on the nuanced sounds of language that only the human ear can hear and understand. But after reading Ford’s book, I am admittedly worried. That's why I found myself this morning searching online for poetry written by computers. I discovered convincing evidence that, so far, I am safe in my ability to write better poetry than the common robot. The site, botpoet.com, hosts a Turing test called “Bot or Not,” which asks readers to read a poem and then vote on whether they believe it was written by a computer or a human.



So far I have achieved 100% accuracy in my predictions on Botpoet. But then, as I vote, I realize that my input is being used to collect more data, data that may eventually make Botpoet capable of writing a convincing poem. And so I am reluctant to keep trying it out, even as I wonder: is Botpoet simply recording my personal taste in poetry, as opposed to the judgments of people, like literature professors, actually qualified to determine whether a poem is good? Is another algorithm, say Amazon's, recording my preferences and preparing to send me advertisements for books I really can't afford to keep collecting on my Kindle? Suddenly I panic and quit participating.
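
If you are curious what “collecting votes” might look like under the hood, here is a minimal sketch of the bookkeeping I imagine a site like Bot or Not could do. This is pure speculation on my part, not botpoet.com’s actual code; the poem IDs, the votes list, and the scoring are all invented. Each vote is just a reader’s guess paired with the poem’s true author, and the telling statistic is how often a machine poem fools human readers.

    # Toy scoring for a "Bot or Not" style Turing test (speculative illustration).
    # Each vote records the poem, the reader's guess, and the true author type.
    from collections import defaultdict

    votes = [
        # (poem_id, reader_guess, true_author) -- all values invented
        ("poem_a", "human", "bot"),
        ("poem_a", "bot", "bot"),
        ("poem_a", "human", "bot"),
        ("poem_b", "human", "human"),
    ]

    fooled = defaultdict(int)   # times a bot poem was judged human
    total = defaultdict(int)    # total votes cast on each bot poem

    for poem_id, guess, truth in votes:
        if truth == "bot":
            total[poem_id] += 1
            if guess == "human":
                fooled[poem_id] += 1

    for poem_id in total:
        rate = fooled[poem_id] / total[poem_id]
        print(f"{poem_id}: fooled {rate:.0%} of readers")

A bot poem that fools enough readers “passes” the test, and every vote I cast tells the site a little more about what human readers expect a human poem to sound like.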

Anyway, back to worrying about the larger picture. If computers running algorithms eventually take over the creative tasks that used to be assigned to humans (and Ford predicts this will happen sooner rather than later), in addition to the routine jobs that have already been made obsolete, what use will a free market economy have for human beings in the days to come?

An excess of human job applicants unappealing to employers (computers don’t demand a living wage, need healthcare, or unionize, after all) cannot lead to a joyful world for the majority of the human race. Not if we are unable to feed ourselves amid the unprecedented unemployment figures of the future. What are governments, policy makers, beneficent philanthropists, or academicians doing to address this problematic future? Are we going to leave the outcome to unhindered Darwinism? Is anyone advocating building a soul into the machines themselves so that these machines become our benefactors?

Maybe human beings will become computer pets, critters to entertain the smart machines the way dogs or hamsters entertain us now. Maybe we will be put in the service of festive events, becoming gladiators of a sort, like the slaves of Rome. Maybe the machines will hunt us for sport or, if they are built to have a soul, make us their missionary projects. Their makers can program them to create a new machine religion that teaches them to feel sorry for us. And to feed us, while teaching us to worship the God of Algorithms.

I’m reminded of the Borg collective of Star Trek, and I seem to recall that no member of the Borg had a conscience unless it broke away from the collective and became an individual. Maybe we should be training machines to evolve an interest in individual empowerment?

All of these worries are probably compounded by personal feelings of uselessness arising from the threats made to my own academic field in recent years, as the humanities and a liberal arts education have come under attack. Pretty soon, I am thinking, everyone who is not a business mogul, who does not own the machines or make money off of them, will know how I feel. What then? I’m thinking the machines and their owners had better be willing to take away our guns, our access to explosive ingredients, our sledgehammers.

I am at least soothed by the proof here of my excellence at one task a computer will probably never be programmed to demonstrate. That is…my ability to worry. Worry is a primarily human trait. Okay, to some degree an animal trait. (Yes, my dog can worry. She worries, for instance, when I am preparing to go out the door, that I am not going to take her with me.) But animal worries are not existential. A computer might be programmed to worry about its accuracy, I suppose. But as far as I know, computers do not attend one another's funerals as their companions are put out of service.

Sure, building in a little performance doubt, a little humility, could be useful in creating accurate algorithms. But what use could building in existential doubt possibly serve? Existential doubt keeps me from getting out of bed on a bad day. Not a good trait for a machine with a quota. Existential doubt drives me at times to a canvas or my word processor and pushes me to do something creative. But it seems that machines can demonstrate what humans read as creativity merely by imitating human behavior, without the accompanying angst.

All I know is this. Humans feel. We mourn. We reflect. And therefore we worry. I have that going for me, but is the going good? Well...if I can figure out what existential worry is useful for and how to monetize it, I’ll be all set for the future. When I have some ideas, I'll let you know. For now, I have to get back to reading books that make me worry. Because that's the other human trait that's not built into your smart machine: compulsion. Machines don't procrastinate because they are less ADD than I am. Vive la différence.

PS: To the people out there investing in making computers smarter: just remember, computers don’t worry, they don’t unionize, and they don’t demand healthcare, but they also lack other human attributes, like the desire to consume. If we keep creating machines that take away people’s jobs and don’t replace that lost income with new sources of income, we'll eventually have no consumers to purchase the products or services these machines are supposed to produce. So…unless you’re building materialistic needs into these computers, eventually there will be no need to employ them. The 1% can only purchase so many products. What are you going to do when there is no one left to purchase those cutesy toilet paper covers for 99 cents apiece? How many yachts can you sell for $9.9 million? The little purchases of billions of earning “little folks” cannot be completely replaced by the big purchases of one or two people. Some of you are going to go out of business when there is no market and no one left to pay the income tax that supports government research assistance. No jobs. No spenders. Duh. The "centre cannot hold."
I fully support the use of government money to build a smart computer that can predict and advocate for a better future for all. This would be a machine with the capability of asking (and resolving) moral questions, a machine with built-in algorithms for asking tough questions about the long-range future of America and how best to achieve true equality. If computers are so smart, why not use them for our toughest ethical questions? Such a machine would not only find solutions for fiscal inequality but might also build persuasive arguments for implementing those solutions and automatically begin marketing and lobbying campaigns to see them adopted. That's a machine I would support. The problem, of course, is that such a machine will never be built, because policy decisions are based on a program's ability to create money, not happiness, equality, or even good health for the majority of us.

_________________________________________________

This poem was not written by Botpoet:

    Turning and turning in the widening gyre
    The falcon cannot hear the falconer;
    Things fall apart; the centre cannot hold;
    Mere anarchy is loosed upon the world,
    The blood-dimmed tide is loosed, and everywhere
    The ceremony of innocence is drowned;
    The best lack all conviction, while the worst
    Are full of passionate intensity.
    Surely some revelation is at hand;
    Surely the Second Coming is at hand.
    The Second Coming! Hardly are those words out
    When a vast image out of Spiritus Mundi
    Troubles my sight: somewhere in sands of the desert
    A shape with lion body and the head of a man,
    A gaze blank and pitiless as the sun,
    Is moving its slow thighs, while all about it
    Reel shadows of the indignant desert birds.
    The darkness drops again; but now I know
    That twenty centuries of stony sleep
    Were vexed to nightmare by a rocking cradle,
    And what rough beast, its hour come round at last,
    Slouches towards Bethlehem to be born?

       (THE SECOND COMING by William Butler Yeats, 1919)