The Ethics and Epistemology of AI
This is a very interesting article.
I agree with Véliz that Socrates provides important guidance on the ethics of AI, that AI technology presents us with several moral and philosophical problems, and that bullshitting is dangerous. However, I’m inclined to modify a couple of points.
First, Véliz writes: “In contrast to Socrates, large language models don’t know what they don’t know.” I would put the point differently: arguably, large language models don’t know anything and can’t know anything. It is not merely that such models don’t know what they don’t know; they don’t know anything at all, because they lack the capacity for knowledge in the first place.
The possession of knowledge entails the possession of consciousness and intentionality. But AI has neither characteristic. Hence, AI cannot know anything. AI is a tool for information processing and delivery. It is not aware of the information it processes and delivers.
Second, Véliz writes: “Large language models are the ultimate bullshitters because they are designed to be plausible (and therefore convincing) with no regard for the truth.”
As a metaphor, this claim is fine. However, if meant literally, it presents another problem concerning consciousness. Although bullshit is speech detached from concern for the truth, as Véliz astutely notes, the bullshitter is conscious and consciously concerned with something other than the truth, such as persuasion or power. To bullshit is to be conscious.
Now, if AI is not conscious, then AI cannot engage in literal bullshit.
Yet Véliz’s deeper point about truth is right: we should be concerned that AI programs might deliver plausible falsehoods. Given that human beings are sometimes attracted to alluring falsehoods, significant intellectual caution is in order.