Language technology and society: ethics of artificial intelligence

Questions based on the Weizenbaum reading from ``Computer Power and Human Reason: From Judgment to Calculation''

    • Understanding technology: Should people aim to understand all technology that they use? Would it make a difference? (If the secretary understood perfectly well how Eliza was made -- see the sketch below -- would she still converse with Eliza as if it were a real psychotherapist?)
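
      As a point of reference, Eliza's core mechanism was keyword spotting plus template substitution. The following minimal Python sketch (hypothetical rules, not Weizenbaum's original DOCTOR script) illustrates roughly how little machinery is involved:

          import re

          # Tiny ELIZA-style rule table: a keyword pattern mapped to a
          # response template. Illustrative rules only, not the original script.
          RULES = [
              (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
              (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
              (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me about your {0}."),
          ]

          def respond(utterance):
              # Return the first matching template; otherwise a content-free prompt.
              for pattern, template in RULES:
                  match = pattern.search(utterance)
                  if match:
                      return template.format(*match.groups())
              return "Please go on."

          print(respond("I am unhappy"))     # Why do you say you are unhappy?
          print(respond("It rained today"))  # Please go on.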

    • How can we make machine learning systems more explainable, technically?
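
      On the technical side of this question, one widely used post-hoc technique is permutation feature importance: shuffle one input feature at a time on held-out data and measure how much the model's accuracy drops. A minimal sketch using scikit-learn (an illustrative choice, not implied by the reading):

          from sklearn.datasets import load_iris
          from sklearn.ensemble import RandomForestClassifier
          from sklearn.inspection import permutation_importance
          from sklearn.model_selection import train_test_split

          # Train an opaque model, then ask which inputs its predictions rely on.
          data = load_iris()
          X_train, X_test, y_train, y_test = train_test_split(
              data.data, data.target, random_state=0)
          model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

          # Shuffle one feature at a time on held-out data and record how much
          # accuracy drops; a large drop means the model leans on that feature.
          result = permutation_importance(model, X_test, y_test,
                                          n_repeats=10, random_state=0)
          for name, score in zip(data.feature_names, result.importances_mean):
              print(f"{name}: {score:.3f}")

      Attribution methods like this say which inputs mattered, not why -- which is part of what keeps the question open.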

    • Can a computer ever understand language? ``I had tried to say that no general solution to that problem was possible, i.e., that language is understood only in contextual frameworks, that even these can be shared by people to only a limited extent, and that consequently even people are not embodiments of any such general solution.''

      • Can a computer ever understand language, and if so, why?

      • Do people not understand language?

    • Weizenbaum argues against a view of the world, and of people, that is ``logic-based'', basically against a rule-based view of humans.

    • Back then, artificial intelligence systems were rule-based. Nowadays, they are based on machine learning.

      • Is there a new view of humans that is still based on a ``humans are like computers'' metaphor, but more similar to machine learning systems?

      • Do you find an ``intelligent'' system more plausible when it is based on machine learning than when it is completely rule-based?

    • On human cognition as following rules: We earlier discussed the homunculus argument, summed up here:

      • Assume a human is just following rules, like the rules of a computer program. We can compare this to the rules of chess: it is not the rules that play chess, but the player using the rules. If humans, too, are just following giant sets of rules, who reads and executes those rules? The simplest idea: a homunculus inside the brain (like Searle in the Chinese room). But then the homunculus has cognitive processes as well. Are they rule-based? If so, we are in an infinite regress, because there must then be another homunculus inside the homunculus. (The sketch after the next question separates rules from their reader.)

      • Do you see a way out of this dilemma?
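
      To make the rule-versus-reader distinction concrete, here is a minimal sketch with hypothetical chess rules: the rule table does nothing by itself; a separate reader applies it.

          # Rules are inert data: a condition mapped to an action. Nothing
          # happens until something reads them. (Hypothetical rules, not
          # from the reading.)
          RULES = {
              "opponent threatens queen": "move queen to safety",
              "checkmate available": "deliver checkmate",
          }

          def player(situation):
              # The "reader" that looks up and executes the rules.
              return RULES.get(situation, "make any legal move")

          print(player("opponent threatens queen"))  # move queen to safety

      The reader is itself only more rules -- Python bytecode run by the interpreter, in turn run by the CPU's circuitry -- so in a computer the regress bottoms out in mechanism rather than in a further reader. Whether the same move is available for minds is exactly the question above.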

    • Weizenbaum states that computers cannot judge, only compute; they do not have compassion or intuition. Would you agree?

    • Weizenbaum states that there are some things that computers ought not to do, like take on the role of psychiatrists. Would you agree?

    • ``we should never allow computers to make important decisions because computers will always lack human qualities such as compassion and wisdom.''

    • Weizenbaum describes choosing as a genuinely human activity, and choice as the product of judgment, not calculation. He writes that Winograd's SHRDLU program will, when repeatedly asked ``why did you do this,'' at some point have to answer ``because I was programmed to do so''. People, on the other hand, can say ``Because I chose to.''

    • But is it that easy? The concept of free will has been called into question.

    • What we can do, versus what we ought to do: Weizenbaum lists speech recognition as something we should not do, because it also involves natural language understanding. He asks what the use of such technology could be, and the most reasonable application that occurs to him is the dictation of medical notes. But he warns that the government may use such technology to spy on us all.

    • What do you think of this argument? Weizenbaum's reasoning seems a bit quaint now -- but let us ask anyway: is natural language understanding something we should not attempt to do?

    • Is there more to the human mind than just the brain? Weizenbaum believed that even a replication of a brain would not achieve realistic human behavior.

    • Does the body play an important role in the development of the mind?