Notes for 3/13/2026
[Philosophy Club every Monday, 4-5 pm, in the Buchtel College of Arts and Sciences room 436 ("The Cave")]
[Bioethics Club: Mondays from 5:30pm-6:30pm in Leigh Hall 408]
Do you believe a machine could ever really think?
Early AI research held that to think just is to give the “right” sorts of outputs in response to the “right” sorts of inputs.
If the mind is not observable, then what criteria determine whether or not someone or something is thinking?
Behaviorist approaches say it can only be behavior. If something acts like it is thinking, we have to say that it is.
Of course, “acts like” must be construed broadly.
Turing Test: If a computer gives the same kinds of outputs in response to inputs as a human being, then the same reasons I say the human is intelligent should make me concede that the computer is intelligent.
Multiple realizability (the same function can be performed by different things – sometimes radically different)
Input-output (or “black box”) functionalism (standard view in traditional AI):
To be intelligent is to give the “right kinds” of outputs in response to the right kinds of inputs
(Intelligence/thought/cognition must be defined operationally) (relates to “the problem of other minds”)
Gilbert Ryle: “Ghost in the Machine”
The idea that you need something over and above the machine’s functions is a “category mistake”.
Example: a visitor who has toured the colleges, libraries, and offices and then asks, “But where is the University?” mistakenly treats the University as one more thing alongside its parts.
Example 2: fear is not a ghostly inner episode hiding behind behavior, but a disposition to behave in certain ways (to flinch, flee, avoid).
If a machine, animal, or other system behaves in operationally similar ways to me when I am thinking, then it must also be thinking. (Cognitive equivalence = behavioral equivalence)
The internal details are irrelevant because there can be different ways of doing the same thing. (multiple realizability)
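The multiple realizability point can be made concrete with a small sketch (my illustration, not part of the notes): two routines with radically different internals that a purely black-box observer could never tell apart by inputs and outputs alone.

```python
# Multiple realizability: the same input-output function, realized
# by two internally very different "systems".

def sort_by_comparison(xs):
    """Sorts by repeatedly comparing and swapping neighbors (bubble sort)."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def sort_by_counting(xs):
    """Sorts by tallying how many times each value occurs (counting sort)."""
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return [x for x in sorted(counts) for _ in range(counts[x])]

# From the outside, the two systems are behaviorally equivalent:
for data in ([3, 1, 2], [5, 5, 0], []):
    assert sort_by_comparison(data) == sort_by_counting(data)
```

On the black-box functionalist view, there is no further fact about "which one is really sorting": internal mechanism differs, function is the same.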
Is artificial intelligence real intelligence?