Tuesday, January 5, 2010

Artificial INTELLIGENCE vs ARTIFICIAL intelligence

Quasi-religious ponderings, part 2...

I was reading Seth Shostak's book (which is quite good, really, if you ever had an interest in the science of the SETI program) and one thing he brought up is something I've heard before... that we will soon surpass the power of the human brain with a single computer.

The implication is that the human brain ("a slow speed computer operating in salt water") is nothing special, and since we are foreseeably going to surpass its raw power using computers, we will soon replace ourselves with artificially intelligent machines. Silicon-based lifeforms, essentially, that out-think us will end up being our replacement. Homo sapiens go extinct and thinking machines take over (whether this involves Arnold Schwarzenegger or Keanu Reeves is not specified).

The problem I have with this concept is best described as the difference between "artificial intelligence" and "artificial intelligence".


In days of yore, when we computer geeks talked about "artificial intelligence", what we meant was that we can come up with something that appears to be intelligent, but isn't. We can make a game of Pong and get the computer to display a level of intelligence in knocking the ball back to you but it is not, in any way, actually intelligent. It's running a very specific set of instructions. The intelligence is artificial, as in, it is not real.
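To make the point concrete, here is a minimal sketch (my own illustration, not from any actual Pong implementation) of the entire "intelligence" behind a Pong opponent: a single fixed rule that nudges the paddle toward the ball.

```python
# The whole "AI" of a Pong opponent: chase the ball's vertical position.
# There is no thinking here, just one hard-coded rule run every frame.
def paddle_move(paddle_y: float, ball_y: float, speed: float = 1.0) -> float:
    """Move the paddle toward the ball's y position, capped at `speed` per frame."""
    delta = ball_y - paddle_y
    if delta > speed:
        return paddle_y + speed   # ball is above: move up
    if delta < -speed:
        return paddle_y - speed   # ball is below: move down
    return ball_y                 # close enough: line up with the ball

print(paddle_move(10.0, 14.0))    # paddle chases the ball upward -> 11.0
```

That's the entire program. It looks purposeful on screen, but every behavior it will ever exhibit is spelled out above.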

Shostak uses Deep Blue -- that's the IBM chess machine that played chess champion Kasparov -- as an example of artificial intelligence, albeit one limited to playing chess. Kasparov commented that Deep Blue seemed to exhibit a sort of intelligence. However, Deep Blue's intelligence was artificial. That is to say, it was running a very specific set of instructions. Deep Blue was no more intelligent than my hand calculator, but it had a lot of computing power and a program designed to let it search an enormous number of possible continuations of the board and thus decide which move would take the game in a direction most likely to result in a win. It was a very nice machine but it was not intelligent.
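The core of that kind of program is a mechanical game-tree search. This is a toy sketch of the idea (hugely simplified -- the real Deep Blue used specialized hardware, alpha-beta pruning, and hand-tuned evaluation; all the names and the demo "game" here are illustrative):

```python
# Minimax: score a game state by mechanically exploring every move sequence
# to a fixed depth. No understanding involved -- just exhaustive lookahead.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return the best achievable score from `state`, looking `depth` plies ahead."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)          # leaf: fall back to a static score
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate)
              for m in legal]
    return max(scores) if maximizing else min(scores)

# Tiny demo "game": each move either adds 1 or doubles the number.
# The maximizer wants a big number; the minimizer wants a small one.
moves = lambda s: [s + 1, s * 2] if s < 100 else []
apply_move = lambda s, m: m             # a "move" is just the successor state
score = minimax(3, 2, True, moves, apply_move, lambda s: s)
print(score)                            # -> 7
```

The program "plays well" purely because it grinds through the tree faster than a human can; the intelligence is in the programmers who wrote the rule.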


I believe Shostak (and certainly lots of other people) are using the term "artificial intelligence" to mean "intelligence which is artificially constructed" -- that is, the intelligence is real but the platform is circuitry rather than a squishy brain.

We are nowhere near having this. We haven't a clue on how to start. You might as well be discussing faeries and elves as artificial intelligence of this nature. Even though we will be able to pack more power than a human brain into a single machine, we don't know how to make it think. We don't know how to make it intelligent. We can't create artificial intelligence -- something which is genuinely intelligent but lives on hardware -- but we CAN create artificial intelligence -- something which seems smart (in a very limited, intentionally designed way) but isn't.

The evidence for this is already in front of us.

If we could create true intelligence, then we should already be able to do so, on today's computers -- it would just run a little slower than we'd like. The beauty of software is that it can simulate whatever you like, given enough time. It would be like creating software for a 32-bit machine which makes it act like a 64-bit machine. It's doable. It would obviously be slower than a real 64-bit machine but you could do it. Similarly, if human intelligence requires a computer 100x more powerful than the ones we have today, we should be able to build it right now, with software, but it would just be 100x slower than it should be.
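The 32-bit/64-bit analogy is a real, everyday technique. Here is a small sketch of it (my own illustration of the principle, not anyone's actual emulator): 64-bit addition built entirely out of 32-bit operations, slower than native hardware but producing the same answers.

```python
# Emulating wider hardware in software: add two 64-bit numbers using only
# 32-bit-sized operations plus a carry. Slower than a real 64-bit adder,
# but functionally identical -- the post's point about simulation.
MASK32 = 0xFFFFFFFF

def add64_using_32bit(a: int, b: int) -> int:
    """Add two 64-bit values, wrapping on overflow, via 32-bit halves."""
    lo = (a & MASK32) + (b & MASK32)                      # add the low halves
    carry = lo >> 32                                      # did the low half overflow?
    hi = ((a >> 32) & MASK32) + ((b >> 32) & MASK32) + carry
    return ((hi & MASK32) << 32) | (lo & MASK32)          # stitch halves together
```

The catch, as the post says, is that you can only simulate what you fully understand. A 64-bit adder is completely specified; a brain is not.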

In other words, we should, right now, with today's technology, be able to create a working human brain -- in every way an intelligent, learning being just as smart as any of us -- that simply runs a bit slow, as if time were passing more slowly for it, because the software has to simulate better hardware than it actually has access to.

But we don't know how.

We have no idea.

If we had more powerful computers, we would still have no idea how to make them alive.

In fact, you'd think we could at least create a reasonable artificial, say, pigeon. Perhaps an artificial dog? Surely we have the computing power for that. Surely your desktop machine has more power than a gerbil brain and surely we could do the AI research into creating "the artificial gerbil" which would be indistinguishable from the real thing. In fact, if we could work out the nervous system connections, we should be able to hook it up to a gerbil body and you wouldn't even know the difference, aside from perhaps the cable running from the gerbil's head to your machine.

But we can't. We aren't anywhere close to that. We don't have the artificial gerbil. We don't know where to begin. Actual intelligence -- actual life -- is still the realm of faeries and elves to us -- it's completely incomprehensible magic which we are entirely unable to reproduce, no matter how powerful computers become.


Oh sure, given enough time we might be able to create an artificial intelligence which could pass for a person (or at least a gerbil), but I predict it will take thousands, if not millions, of times more processing power than a human brain requires to do its work. This is because it would not actually be intelligent. It would simply be a collection of a lot of smaller works -- it knows how to play chess because it borrowed Deep Blue's code. It knows how to drive a car because we already created a program for that. It can perhaps even carry on a conversation. But it's not alive, and the evidence for this is the staggering power and programming required to create what will be a rather poor stand-in for a human brain. It will not have true creativity or self-awareness. It will still just be a machine. It will still just be a puppet with programmers pulling the strings.


And this (to bring us back around) is why I'm not an atheist. Among other reasons. Because there is something in us that's not just machinery -- not just programming. We must ask questions like "What is consciousness?" "Who am I, and what is this 'I' that's doing the asking?" "How do we create software which is alive?" "What does 'alive' really mean?"

In order to program a machine, we must fully understand it. We cannot program a consciousness because we do not fully understand it. Consciousness, I believe, defies science. Perhaps it always will. I believe that if we did create an actual, intelligent machine which is truly alive and self-aware, we would not be able to understand how it works any more than we can understand how we work. It would simply "become alive" and no matter how much digging we did through the circuitry and the programming, we would not be able to find the spark that is "life" or understand where it comes from or how it works.

I believe that life is a mystery.

I believe that "souls", if you will, are a distinct possibility. Are you more than your hardware? Is there something going on which cannot be explained by the number of neurons and connections in our heads?

I believe so, because when it comes to artificially replicating it, we don't even know where to start.

1 comment:

Alan said...

Science:
1) Observe
2) Theorize
3) Test

Rinse and repeat. The science of AI is no different. As computers continue to grow faster, the time to complete a cycle will get smaller. We will be able to learn more about what intelligence is in a shorter amount of time.

Attempting to simulate an artificial human today is a pointless exercise: you will certainly get it wrong, but it will take you years to figure that out.

Progress is being made in AI and will continue to be made.