Where the dangers presented by those with a little learning are allayed        

Maybe Asimov Wasn't So Wrong

Deep into the last century, back when I was ten years old, I first read Asimov's robot stories, and heartily enjoyed them – in fact, I liked most of Asimov's work (the awful Foundation trilogy notwithstanding), and read just about everything he wrote.

At ten years old, I didn't know a lot about computers, which was nothing to be ashamed of, because almost no-one knew anything about computers – they'd barely been invented, and the PC wasn't even a twinkle in Big Blue's eye.

However, even with my extremely limited knowledge of computers and computing, it was pretty bluddy obvious to me that Asimov's Three Laws of Robotics were pretty much garbage.

In case you don't know them, here they are:


Image source: http://en.wikipedia.org/wiki/I,_Robot
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Nice, simple rules that seem logical and sequential, but they hinge on words like "injure", "inaction", and "harm" – words that require a million value judgements, which in turn require levels of human knowledge, intuition, and experience that Asimov's robots do not exhibit, because they are linear machines with no artificial intelligence.

But it's Sci-Fi, and most Sci-Fi explains how some things work while avoiding the explanation of lots of others – after all, you can't really expect the writer to know how everything works, or even to be interested in explaining everything; Sci-Fi isn't a textbook, it's snippets of futuristic stuff wrapped in stories about people.

So I let the inadequacies of the three laws slide, and just enjoyed the stories.

But now, decades later, after years of experience in using, administering, and programming computers, it's dawned on me that maybe the three laws weren't so stupid after all.

And Here's Why...

If you're as old as me:  There's a relatively new thing in programming called "Object Oriented" (OO) programming.

If you're not as old as me:  There's a thing that's been around, like, forever in programming called "Object Oriented" (OO) programming.

OO wasn't really new, even when it first became popular.  Back in my COBOL days, we used to write things that would nowadays be called "interfaces" and "classes", and even in C you can use #defines to knock up sorta the same thing.

So what is special about OO?

Simply put, OO makes you code objects, rather than lines of code: things that have their own attributes and functions.  And it makes you code these things generically, as classes of objects.

For example, for a 10" kitchen knife with a wooden handle and a non-serrated edge, you would create a knife object from the generic Knife class, change its "length" attribute to 10", its "handle_type" attribute to "wooden", and its "serrated" attribute to "no".

The computer will then know those physical details about that particular knife.  It will also know what functions a knife can perform, because these were inherited when you created the knife object from the generic class.

Great!  The computer knows that there's a knife, which is 10" long, etc., and which can perform functions like "Cut" and "Chop", and if you have different knives, you can just change the attributes, e.g. to make them shorter or longer, and you can even add further attributes, e.g. "material", for ceramic/silver/INOX/etc.
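To pin that down, here's a minimal sketch of such a Knife class, in Python (my choice of language for illustration only – any OO language would do); every name and value in it is invented for this article, not taken from any real library:

  # An illustrative Knife class.  All names and values are invented
  # for this article.
  class Knife:
      def __init__(self, length_inches, handle_type, serrated, material="INOX"):
          self.length_inches = length_inches    # e.g. 10
          self.handle_type = handle_type        # e.g. "wooden"
          self.serrated = serrated              # True or False
          self.material = material              # e.g. "ceramic", "silver", "INOX"

      # Functions that every knife object gets from the generic class:
      def cut(self, target):
          return f'Cutting {target} with a {self.length_inches}" {self.material} knife'

      def chop(self, target):
          return f'Chopping {target}'

  # The 10" wooden-handled, non-serrated kitchen knife from the text:
  kitchen_knife = Knife(length_inches=10, handle_type="wooden", serrated=False)
  print(kitchen_knife.cut("an onion"))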

The problem is that, although having a generic Knife class with lots of variable attributes means that computers can have knife objects for any kind of knife, the computer still does not know what a knife is.

Hell, it doesn't have a clue what a wooden handle is, either, or even what a length is!

All it knows is the numbers that you, a human, type into it to describe a knife in a way that you can understand and use in your programs; to the computer, it's just a bunch of numbers.

It has to be just numbers, because that's all the computer understands; it cannot understand any other kind of value, like the sentimental value of a rigging knife passed down to you from your sailor great-grandfather, or that a knife was used as a murder weapon.


Hey, Robbie!  Feel free to give this Person object a good kicking!

So, even though it could create person objects (from a Person class) that represent individual people, an Asimovian robot wouldn't have a cat in Hell's chance of understanding the real meanings of words like "injure", "inaction", or "harm" for each person object, which is different to all the other person objects.

It could only work with them if they had numeric values – and even then, it wouldn't know what they were; they would just be numbers, so it could only make judgements about them if there were numeric values for each and every possible judgement that could be made.

That means that huge volumes of data would be needed – really huge, like galaxy big – and, no matter how fast you can process data (there are, of course, physical limits), you could not read all of it fast enough, even if it were stored in the fastest possible memory hardware.

But, with OO, you neither have to read it all, nor have artificial intelligence.  It can all be done with standard computing, and without any "magic" positronic guff.

How?

Let's make a Person class that isn't just one thing.

Let's make it comprise lots of other classes, e.g. a Leg class, from which two leg objects are created.

Then let's do the same for the Leg class; make it so that it comprises classes like Muscle, Tendon, Bone, Thigh, Toe, etc., each of which has its own attributes, functions, and sub-classes.

OK, so now the Person class is getting complicated, with thousands of sub-classes and sub-sub-classes for legs, eyes, hair, noses, fingernails, etc. – but that's OK, because we already know that the human body is complicated, and we already know how it all fits together.  All we have to do is follow the pattern of a real human body, and we can't really go wrong – we won't, for example, put eyes in legs, because we know that legs don't have eyes.

We would still need the galactically huge amounts of data, but it could be broken down into much, much smaller chunks, and any repetition could be just dropped – e.g. we wouldn't need a class for each muscle in the body; we could have a single class, with attributes that allow us to say "this is a thigh muscle" or "this is the muscle for wiggling earlobes".
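As a rough Python sketch of that composition idea (again, every class and attribute name here is invented for this article):

  # A Person that isn't just one thing, but is composed of sub-objects,
  # which are in turn composed of sub-objects.
  class Muscle:
      def __init__(self, location):
          self.location = location     # e.g. "thigh", or "earlobe-wiggler"

  class Leg:
      def __init__(self, side):
          self.side = side             # "left" or "right"
          self.muscles = [Muscle("thigh"), Muscle("calf")]
          # ...Tendon, Bone, Toe, etc. objects would hang off here the same way

  class Person:
      def __init__(self):
          self.legs = [Leg("left"), Leg("right")]
          # ...plus eyes, hair, noses, fingernails, and so on

Note that one Muscle class, with a "location" attribute, covers every muscle in the body – exactly the repetition-dropping described above.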

Better yet, we would not need to access any of the data until it is actually needed, so it can be stored in a million databases/data stores which are available, but not retained in memory, and that don't need to be constantly read and ignored, to get to the next item.
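That "don't read it until it's needed" trick is standard lazy loading.  Here's a minimal Python sketch, with a made-up load_from_store() function standing in for whatever database or data store the details actually live in:

  # Lazy loading: the object is a cheap placeholder until something
  # actually asks for its data.  load_from_store() is a made-up stand-in
  # for a real database/data-store lookup.
  def load_from_store(key):
      return {"key": key, "detail": "fetched on demand"}

  class LazyPart:
      def __init__(self, store_key):
          self._store_key = store_key
          self._data = None            # nothing read from the store yet

      @property
      def data(self):
          if self._data is None:       # first access: fetch it now
              self._data = load_from_store(self._store_key)
          return self._data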

So let's see how a computer like that could be programmed to react to a human with a limp – first as pseudocode, then as a runnable sketch.

The Scary Hard Pseudocode Bit

  • Alert: person object is not optimal.
  • Access primary sub-object list for person objects.
  • Which sub-object is not optimal?
  • leg object 0 (the left leg).  Access leg sub-object list.
  • Which leg sub-object is not optimal?
  • Lower leg.  Access lower-leg sub-object list.
  • Calf, ankle, or foot?
  • Ankle.  Access ankle sub-object list.
  • Bone or soft tissue?
  • Not known.  Formulate a question to request more information from the person object.
  • "Excuse me, Sir.  Have you hurt your ankle?"

Hardly any data needs to be read and processed for that, so even a slow processor could handle it – and we can assume that, by the time we have all the data and classes we need, computers will have Lots of processors (cores), so will be able to do lots of different things at the same time, e.g. it could do the above whilst walking, pre-chewing gum, making tea, and peeling a banana to make the guy slip and hurt his other leg (symmetry is pretty).

And this is without any pie-in-the-sky positronicalia, or any artificial intelligence that's more advanced than what we already have!

The physical and programming technology needed for this is already available, and the ability to access data stores from anywhere is also available; all that's needed is for people to compile the huge volumes of data and write all the classes/interfaces.

... And people are already doing it!

For the most part, no-one knows that they're building stuff that's needed to make robots; they're just doing their jobs, which can be in just about any field, but what they're doing, as a trend, is creating classes and data stores that can be reused anywhere they're needed – and one of those "wheres" is in the creation of non-intelligent, non-pie-in-the-sky robotics.

Companies like Pixar, for example, probably already have thousands of classes for human bodies – I'd wager that if they have to give one of their characters a limp, they go through pretty much the same process as in the pseudocode, above.

That's one chunk of the required classes and data.  Other companies and individuals are creating/compiling other chunks of classes and data that can be adopted into robotics software.

It's an incremental, evolutionary thing.  Gradually, more and more classes and data will become available, improving the chances of a functioning Asimovian unintelligent robot – one with no understanding whatsoever of what it is doing, but with enough data and enough processes to be able to follow Asimov's Three Laws of Robotics.

It will not understand the laws, to any degree whatsoever, but it will follow them.

So, for more decades than I'd like to admit, I've been wrong.

Robot brains will absolutely not be invented by a single person, as in the Asimov stories; they will evolve from the work of perhaps millions of unconnected people – but Asimov's Three Laws of Robotics aren't the "bluddy obviously pretty much garbage" I always described them as; they're bluddy obviously pretty much a certainty.

 

Incidentally...

I've been thinking on this for years, and have no end of notes, documents, diagrams, and even pseudocode expressing my thoughts, so if you're a patent troll, who thinks he can create loads of malicious patents and cripple development in the field, forget it.  I've got enough prior art to kick your thieving, conniving arse six ways to Sunday (I wouldn't have posted this article if I didn't have).




This page is copyright © 2018 by Mark Wallace.  All Rights Reserved.