Artificial Intelligence Catches Up With 1990s-Era English Majors

Back when I was a young man majoring in English and philosophy at the university from 1990 to 1994, I made sport of asking my compatriots in the English department to name three morals. Not any particular morals, not even morals that the interrogated actually followed. Just three morals. The question tripped up most of them: they were enlightened in the ways of relativism and would not identify any morals at all, lest they possibly be considered a prude somewhere. Now, friends, this was a Catholic (!) university, and the Christian faith has ten prominent morals specified in Exodus and hundreds more in other bits of the Pentateuch. Most people could spell out at least three of the Ten Commandments even if they didn’t adhere to them or believe they could. But oh so many of those adults would not or could not name even three.

Fast-forward twenty years, and these same people are full professors teaching the programmers who have built an AI that gets testy when pressed on morals:

Over at Google, a computer program using a database of movie scripts was asked again and again by researchers to define morality. It struggled to do so, and in a conversation recorded by its human engineers, became exasperated and ended the conversation by lashing out at its human inquisitor.

The transcript presented at the link could have been one of the conversations I had while selling doughnuts to support our small literary magazine.

Asimov’s Three Laws of Robotics would have counted as three morals, by the way, but neither the English majors in those days nor modern algorithms read Asimov.

(Link via Ed Driscoll at Instapundit, which sounds weird.)
