Thursday, December 2, 2010

Mini-heroes

Superheroes...

They leap tall buildings in a single bound, deflect bullets with their eyeballs, etc, etc. They make great action movies.

The problem with superheroes is that they are expected to be heroic. When you can deflect bullets with your skin and shoot lasers from your eyeballs, it's fairly expected that you can maybe stop a murder in progress. When your character is a karate champion, it's expected that you will kick the ass of all the bad guys. Kicking the ass of bad guys is to a superhero as eating a cheese sandwich is to normal people. It's just something you do sometimes. It's an event that's hard to applaud.

So I've always preferred what I tend to think of as the "mini-hero". Sub-hero? Small hero? These are usually the background characters who tend to die for the sake of showing that the main character might be in some sort of danger. They don't have superpowers. They aren't especially good shots. They don't have laser eyeballs or armored skin or the power of teleportation. They woke up, maybe took a shower, put their pants on one leg at a time and went out and did the best they could.

I love mini-heroes. Probably to an odd extent.

One of my favorite scenes from the new Star Wars movies was when Padme fell from some troop transport. It took a hit or something, listed to the side, and dumped Padme and some anonymous clone trooper. Typically, in this sort of situation, the main character is fine and the background character manages to break his neck or get eaten by a sandworm or something. In this case, though, the clone trooper actually got up and helped. I nearly stood and applauded the cinematic genius of this, but I supposed that would be rude. I love it when background characters, who are ostensibly supposed to be competent, actually get to be competent. I realize the focus of the movie is the main character and all that, but I really like it when a background character can do well.

It's almost to the point where I get disappointed to read a new book and discover that the main character has a secret power of summoning dragons or throwing fireballs or shooting people in the eye while diving sideways from 50 meters. Ah well, yes, latent superpower and all that. I suppose he'll be kicking ass then. Probably got a ring that makes him invisible, yes? Turns out he's a wizard and so forth, am-i-right? Got a magic horn and sword and shield and so forth, perhaps?

Bah.

Who's the biggest hero of Lord of the Rings?

Boromir.

Why?

Cause Boromir ain't got shit.

The hobbits all have a natural resistance to the ring, which is why Bilbo got to hang onto it for so long. Gandalf is a fucking wizard who kills Balrogs. Aragorn is a closet king with a super special sword and lives for like 200 years or something because he has in him the blood of the High Men. Legolas apparently rides shields down steps like they are surfboards. Boromir ain't got shit.  He's just a guy with a horn, a long way from home, who went to talk to the elves because he saw the world going to shit and wanted to see what could be done to stop it.

It's the sort of story I love.

Not a superhero. Not a master ninja karate expert mutant wizard. Just someone who does the best they can, knowing that they aren't good enough, and that they'll just have to pull through somehow. Maybe they're not perfect.  Maybe they fuck up.  There's no prophecy in their favor.   No awesome technology or magic protects them. They're just there, doing the best they can.

It's always a terrible disappointment to me when they get mowed down somewhere in the background, because that was the only role I could really identify with.

Thursday, July 22, 2010

Letter to Someone Who Will Never Read It

I don't like most people.

One day, I saw you.  You had what I would call "classic American beauty and style" -- the kind not seen so much these days.  The name "Bettie Page" sprang to mind.  Your hair was in a short bob style, with thick bangs in the front -- something most people can't pull off.  I liked it.  And that was interesting.

I heard you call in your order.  Distinct accent.  I liked that, too.  And that was interesting.

I had to get to know you.  But you were working, and you seemed to take your work seriously.  Not someone who would want to be sidetracked by a random dude.  (And that was interesting.)

On another day, I saw you again -- I don't know if you would remember -- I was walking down the hall, you were standing at the end.  You smiled at me.  I hadn't seen you smile at anyone before.  I smiled back, but kept walking -- you were on the job again (and I am kind of dumb).  But still... a smile.  And that was interesting.

Eventually, I saw you at a party.  You were dressed head to toe in black.  And that was interesting.  You came up to the bar to order a drink.  I'm never good at breaking the ice with people, but there's a time and a place for everything, and this was my time.  So I went over and we had the following conversation:
Me: "Can I buy you a drink?"
You: "No."
Me: "Oh-kay."

And then I went back and sat back down with my friends and pretended that the last 15 seconds had occurred in some alternate universe that nobody ever need know about.

Later, we were all finishing our drinks, paying our tabs and getting ready to leave. I stood up to leave and turned around.  You were standing right there.

You: "Are you leaving?"
Me: "Yes."
You: "You can stay and have a drink with me."

The tone of this statement suggested that it was a command. I would stay and have a drink with you.

Me: "Oh-kay."

I have long since suspected that my answer of "Oh-kay" to your original "no" was, in fact, the correct response. During the remainder of the evening I learned that your life consists of someone hitting on you roughly every 15 minutes and not taking "no" for an answer. I have never before seen someone get down on one knee just to ask a girl out. It must get annoying after a while.

So we hung out.  You told me about how you didn't understand American fashion.  You were at a store, looking at clothes, and two other girls were there, staring at you and whispering to each other, and you felt like an outsider.  And that was interesting --

Let us detour a moment and discuss "strong" versus "tough".  Tough people simply do not feel the barbs and stings of day to day life.  Maybe they are oblivious.  Maybe they just don't care.  Whatever the reason, they don't deal with social pressure because the thorns of life don't reach them.  Tough people are hard for me to relate to.

Strong people feel all the thorns, but push through them by force of internal will.  From your story, I gleaned that you were not tough; you were strong.  Things bother you, but you are strong and can get through them.  And that was interesting.

Everything I learned about you made me like you that much more.

Anyway, we hung out and chatted a bit and arranged another day to hang out some more.

At the beginning of that day, you told me that you were not interested in a relationship. You were here on a temporary work visa and the status of an extension was in question and you just didn't need to get involved in a relationship. You said there was no real option for staying in America, and that there was no way an American would ever marry you, and you quickly changed the subject.  I was too baffled at the time to respond.  Later, of course, I thought of a number of very good and thoughtful and sincere responses to this.  Why wouldn't an American marry you?  I'm pretty sure you were wrong about that.  Alas, I did not think of these things to say right there on the spot (did I mention I am dumb?). At any rate, I had a very good time and a great conversation.

There are a number of things I would have said if I was a clever person and had thought of them on the spot, and not weeks later.

I wish I was a clever person.


Tuesday, January 5, 2010

Artificial INTELLIGENCE vs ARTIFICIAL intelligence

Quasi-religious ponderings, part 2...

I was reading Seth Shostak's book (which is quite good, really, if you ever had an interest in the science of the SETI program) and one thing he brought up is something I've heard before... that we will soon surpass the power of the human brain with a single computer.

The implication is that the human brain ("a slow speed computer operating in salt water") is nothing special, and since we are going to foreseeably surpass its raw power using computers, we will soon replace ourselves with artificially intelligent machines. Silicon-based lifeforms, essentially, that will out-think us and end up being our replacement. Homo sapiens go extinct and thinking machines take over (whether this involves Arnold Schwarzenegger or Keanu Reeves is not specified).

The problem I have with this concept is one which I would say can best be described as the difference between "ARTIFICIAL intelligence" and "artificial INTELLIGENCE".


In days of yore, when we computer geeks talked about "artificial intelligence", what we meant was that we can come up with something that appears to be intelligent, but isn't. We can make a game of Pong and get the computer to display a level of intelligence in knocking the ball back to you but it is not, in any way, actually intelligent. It's running a very specific set of instructions. The intelligence is artificial, as in, it is not real.
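To make the Pong point concrete, here is a minimal sketch of what that "very specific set of instructions" might look like (a hypothetical illustration, not code from any actual Pong game): the paddle simply chases the ball's vertical position, one step at a time.

```python
# A "Pong AI" that looks clever in play but is just one fixed rule:
# move the paddle one step toward the ball's vertical position.
def paddle_move(paddle_y: float, ball_y: float, speed: float = 1.0) -> float:
    """Return the paddle's new position after one step toward the ball."""
    if ball_y > paddle_y:
        return paddle_y + min(speed, ball_y - paddle_y)
    if ball_y < paddle_y:
        return paddle_y - min(speed, paddle_y - ball_y)
    return paddle_y  # already lined up

# The paddle "tracks" the ball frame after frame -- no understanding required.
y = 0.0
for ball_y in [3.0, 3.0, 3.0, 1.0]:
    y = paddle_move(y, ball_y)
```

That is the whole "intelligence": a comparison and an increment, repeated every frame.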

Shostak uses Deep Blue -- that's the IBM chess machine that played chess champion Kasparov -- as an example of artificial intelligence, albeit one limited to playing chess. Kasparov made a comment that Deep Blue seemed to exhibit a sort of intelligence. However, Deep Blue's intelligence was artificial. That is to say, it was running a very specific set of instructions. Deep Blue was no more intelligent than my hand calculator, but it had a lot of computing power and a program designed to search enormous numbers of possible continuations of the board and decide which move would take the game in the direction most likely to result in a win. It was a very nice machine, but it was not intelligent.
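The core of that kind of chess program is minimax search, which can be sketched in a few lines. This is only a toy (Deep Blue added specialized hardware, pruning, and a hand-tuned evaluation function), but it shows the idea: exhaustively score every line of play and pick the branch whose worst case is best.

```python
# Minimax over a hand-built game tree: leaves are final scores for the
# maximizing player; inner nodes are lists of child subtrees. The program
# mechanically scores every branch -- a fixed procedure, not understanding.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: the score of a final position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply tree: within each branch the opponent picks the move that
# is worst for us, so we choose the branch whose guaranteed outcome is best.
tree = [[3, 5], [2, 9]]
best = minimax(tree)  # branch one guarantees 3; branch two only guarantees 2
```

Scale that loop up by many orders of magnitude and you have, in spirit, Deep Blue: still just a very fast calculator.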


I believe Shostak, and certainly lots of other people, are using the term "artificial intelligence" to mean "intelligence which is artificially constructed" -- that is, the intelligence is real but the platform is circuitry rather than a squishy brain.

We are nowhere near having this. We haven't a clue on how to start. You might as well be discussing faeries and elves as artificial intelligence of this nature. Even though we will be able to pack more power than a human brain into a single machine, we don't know how to make it think. We don't know how to make it intelligent. We can't create artificial INTELLIGENCE -- something which is genuinely intelligent but lives on hardware -- but we CAN create ARTIFICIAL intelligence -- something which seems smart (in a very limited, intentionally designed way) but isn't.

The evidence for this is already in front of us.

If we could create true intelligence, then we should already be able to do so, on today's computers -- it would just run a little slower than we'd like. The beauty of software is that it can simulate whatever you like, given enough time. It would be like creating software for a 32-bit machine which makes it act like a 64-bit machine. It's doable. It would obviously be slower than a real 64-bit machine but you could do it. Similarly, if human intelligence requires a computer 100x more powerful than the ones we have today, we should be able to build it right now, with software, but it would just be 100x slower than it should be.
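The 32-bit/64-bit analogy can be made concrete. Here is a sketch (purely illustrative, not any real emulator) of 64-bit addition built from 32-bit halves, the way a narrower machine would have to do it: the answer comes out right, it just takes more steps, which is the whole "doable but slower" point.

```python
# Emulating wider hardware in software: 64-bit addition performed with
# 32-bit pieces. Split each operand into high and low halves, add the low
# halves, propagate the carry into the high halves, and reassemble.
MASK32 = 0xFFFFFFFF

def add64_via_32(a: int, b: int) -> int:
    lo = (a & MASK32) + (b & MASK32)      # add the low 32-bit halves
    carry = lo >> 32                      # carry out of the low halves
    hi = (a >> 32) + (b >> 32) + carry    # add high halves plus the carry
    return ((hi << 32) | (lo & MASK32)) & ((1 << 64) - 1)  # wrap at 64 bits
```

Several instructions where real 64-bit hardware needs one: the capability is there, only the speed is missing. The argument in the post is that if we knew how to build a mind at all, we could do the same trick -- run it slowly on today's machines.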

In other words, we should, right now, with today's technology, be able to create a working human brain, which is, in every way, an intelligent learning being just as smart as any of us, but it might just be a bit slow, as if time was running slower for it, on account of the software having to simulate a better hardware environment than it really has access to.

But we don't know how.

We have no idea.

If we had more powerful computers, we would still have no idea how to make them alive.

In fact, you'd think we could at least create a reasonable artificial, say, pigeon. Perhaps an artificial dog? Surely we have the computing power for that. Surely your desktop machine has more power than a gerbil brain, and surely we could do the AI research into creating "the artificial gerbil" which would be indistinguishable from the real thing. In fact, if we could work out the nervous system connections, we should be able to hook it up to a gerbil body and you wouldn't even know the difference, aside from perhaps the cable running from the gerbil's head to your machine.

But we can't. We aren't anywhere close to that. We don't have the artificial gerbil. We don't know where to begin. Actual intelligence -- actual life -- is still the realm of faeries and elves to us -- it's completely incomprehensible magic which we are entirely unable to reproduce, no matter how powerful computers become.


Oh sure, given enough time we might be able to create an artificial intelligence which could pass for a person (or at least a gerbil), but I predict it will take thousands, if not millions, of times more processing power than a human brain requires to do its work. This is because it would not actually be intelligent. It would simply be a collection of a lot of smaller works -- it knows how to play chess because it incorporates Deep Blue's code. It knows how to drive a car because we created a program for that already. It can perhaps even carry on a conversation. But it's not alive, and the evidence for this is in the staggering power and programming required to create what will be a rather poor stand-in for a human brain. It will not have true creativity or self-awareness. It will still just be a machine. It will still just be a puppet with programmers pulling the strings.


And this (to bring us back around) is why I'm not an atheist. Among other reasons. Because there is something in us that's not just machinery -- not just programming. We must ask questions like "What is consciousness?" and "Who am I, and what is this 'I' that's doing the asking?" How do we create software which is alive? What does "alive" really mean?

In order to program a machine, we must fully understand it. We cannot program a consciousness because we do not fully understand it. Consciousness, I believe, defies science. Perhaps it always will. I believe that if we did create an actual, intelligent machine which is truly alive and self-aware, we would not be able to understand how it works any more than we can understand how we work. It would simply "become alive," and no matter how much digging we did through the circuitry and the programming, we would not be able to find the spark that is "life" or understand where it comes from or how it works.

I believe that life is a mystery.

I believe that "souls", if you will, are a distinct possibility. Are you more than your hardware? Is there something going on which cannot be explained by the number of neurons and connections in our heads?

I believe so, because when it comes to artificially replicating it, we don't even know where to start.