PhilosophyOfArtificialIntelligence
Q : But Phil, do you really believe that the AI "understands" and "reasons" about things?
A : Who cares? It's like the question "do aeroplanes really fly?". What aeroplanes do is different from what birds and bees and bats do. So you might think that this form of locomotion deserves a different name. Or you might decide to just extend the use of the word "fly" to cover what aeroplanes do too. It's up to us.
Q : But come on! There has to be some reality about "understanding" and "reasoning".
A : Not really. This is what Wittgenstein means by "language going on holiday". Words like "understanding" and "reasoning" evolved in a particular situation of human to human communication. To deal with questions about whether a pupil has acquired a competency, or to judge the soundness of an opponent's argument. Take these words "on holiday", out of those contexts where they have that utility, and, in a sense, all bets are off. The AI demonstrates competencies sufficiently like those which, were a human to demonstrate them, would unquestionably be called "reasoning" and "understanding". But many people don't WANT to say the data-centre (or the Chinese Room) really understands, and so they look for something deeper in the meaning of those words themselves, something that can help demarcate the real "understanding" that humans have from the unreal "understanding" of AI. But as Wittgenstein notes, this is the kind of thing that conjures "insoluble" philosophical problems out of thin air.
It's just a fancier way of us "deciding" whether or not to extend the meaning of the word "reason" to cover the case of what the machine does. But the philosophical questions are insoluble. No-one is ever going to give us the definitive answer. Because ultimately the problem is rooted in assuming that the words have a meaning outside the niche they were evolved for.
Q : But words DO expand beyond their niche. And sometimes it's philosophers who do the expanding. And that enriches our understanding of the world.
A : Agreed. But philosophy isn't in the business of pinning down how words ought to be used in other contexts and by other people. I mean, are we expecting philosophers of mind to adjudicate on whether calling something a "reasoning model" is dishonest false advertising, a useful analogy, or a dead metaphor?
Quora Answer : Has anyone created a way to represent a thought on a computer?
Humanity invented written text to represent its thoughts. That's what books are about.
And computers have been holding and manipulating text since before ASCII.
So in that sense, we represent thoughts on computers all the time.
Neural networks are an experimental model of holding and manipulating knowledge in a form that's inspired by our understanding of how the brain works.
Today we are finding neural networks increasingly practical, and using them more and more. But the cost of that is that these neural networks are not highly detailed models of exactly what goes on in the brain. They are abstractions, borrowing a few of the basic ideas.
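To make "borrowing a few of the basic ideas" concrete, here's a minimal sketch in Python of the artificial "neuron" these networks are built from: a weighted sum of inputs passed through a squashing function. All the numbers are purely illustrative, and real biological neurons involve spiking, timing, and chemistry that this abstraction drops entirely.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs,
    squashed through a sigmoid nonlinearity.
    This is roughly all that's borrowed from the brain."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashing

# Illustrative only: three inputs, hypothetical weights and bias
print(neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))
```

Stack thousands of these in layers, adjust the weights by trial and error against data, and you have a neural network. Nothing in it models neurotransmitters, spike trains, or the brain's actual wiring.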
We do not, so far, have an accurate way to read a thought from a brain electronically, copy it into a computer, or copy it directly back into a brain.
I suspect we might get closer to this than most people imagine within the next 50 years. I think mind-reading computers are going to be with us disturbingly soon. And we'll be freaked out when we see them.
But there's nothing like this yet.