from the blog of friend and über-educator, Steve Sagarin…
Thursday, December 18, 2014
Can Computers Think? Magical Thinking and Computer “Thinking”
I asked my students if computers would eventually be able to think, assuming they can’t already. Some said yes. Some said no. Some said it depends on what you mean by thinking. Are neurons digital? Even if thinking isn’t digital, couldn’t something digital, perhaps, think? Software can already learn, some said. That’s not learning, others said. Round and round.
And then we learned about ENIAC, the Electronic Numerical Integrator And Computer, built at the University of Pennsylvania during World War II. ENIAC was the first “universal” electronic computer, general-purpose in Turing’s sense of a “complete” machine: one built to do whatever it was programmed to do, and not simply a giant calculator or tabulator. Famously room-sized, made with vacuum tubes instead of transistors, and programmed by rewiring patch cables and setting switches (a single early calculation chewed through roughly a million punch cards of data), it had almost no internal memory and could solve between 300 and 400 multiplication problems per second. It was created to compute artillery firing tables, but its first real job was a calculation for the hydrogen bomb.
After learning about ENIAC, I asked the students, again, if they believed computers could think. Imagine a larger ENIAC, I said, as large as you like. Imagine a program of millions and millions of punch cards, as many as you like. Imagine it works more quickly, as quickly as you like. Imagine I attach some sort of conveyor belt to it so that its program will output new punch cards that can feed into the input stack and alter the existing program, I said. That’s what you mean when you tell me computers can learn, I told them. Nope, they said, not learning, not thinking. They all agreed that ENIAC would never be able to think.
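The conveyor-belt thought experiment above can be sketched in a few lines of code. This is a hypothetical toy, not anything ENIAC actually ran: the “program” is a stack of cards, and a PUNCH card drops a brand-new card onto the end of the input stack, so the machine ends up executing instructions that were never in its original deck. That feedback loop is all that “learning,” in this mechanical sense, amounts to.

```python
from collections import deque

def run(cards, max_steps=50):
    """Run a toy punch-card machine.

    Each card is an (opcode, argument) pair. The conveyor belt is the
    deque itself: PUNCH appends a new ADD card to the input stack, so
    the program alters itself as it runs.
    """
    stack = deque(cards)
    acc = 0          # the machine's single accumulator
    trace = []       # accumulator value after each card, for inspection
    while stack and max_steps > 0:
        op, arg = stack.popleft()
        max_steps -= 1
        if op == "ADD":
            acc += arg
        elif op == "PUNCH":
            # Punch a new card and feed it back into the input stack.
            stack.append(("ADD", arg))
        trace.append(acc)
    return acc, trace

# The original deck contains only one ADD; the other two ADDs are
# punched by the program itself while it runs.
program = [("ADD", 1), ("PUNCH", 2), ("PUNCH", 3)]
result, trace = run(program)
print(result)  # 6
```

The point of the sketch is the students’ own objection made concrete: nothing here is mysterious. The machine “learned” two new instructions, yet every step is rule-following a clerk could do by hand, which is exactly why a bigger, faster ENIAC is the same kind of thing as your phone.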
But that’s all a computer is, I told them. Your smartest phone, your fastest laptop, the bestest supercomputer.
Is there something about the hidden, electrical, solid-state nature of computers built on microprocessors connected to glowing screens that seems, well, magical? Harry Potter magical? Something that convinces rational students of an irrational impossibility? Isn’t this the definition of superstition?
Humans can think, still. Computers clearly cannot—if you think about it clearly.
But that’s not the end of the story.
Every technological advance brings us power and control. And every technological advance robs something from us.
Clocks bring order and regularity to a conception of time, but, having invented the clock, we can… forget about time. The clock will remind us. Invent a printing press, and we can reproduce texts by the millions. And we can forget stories. We don’t need to remember, because the story is always waiting for us in the book. (How many stories did your children know by heart before they learned to read? And how many after?) Invent a light bulb, and we can forget about natural light. Invent GPS, and we can ignore where we are. And on and on.
I’m not arguing against technology, just pointing out one way and one direction in which it changes us, always.
So, what about the computer? Calculator, library, messenger, entertainer. The computer isn’t one thing. The computer is whatever we want it to be, within the limits of its digital existence.
Of what does it threaten to rob us? Of thinking itself. Not because the computer can think, but because, in using it, too often, we don’t have to.
And if we forget how to think, then it won’t matter much if computers can think or not, will it?