When I wrote a post a few days ago (Spot the Tenacious Robot Dog), I didn’t realize that what I was describing, and what I was asking you to go see for yourself, might well be the very aspect of AI that is most troubling to many of us: its competence.

Many years ago, for reasons I can’t possibly imagine now, I spent a most thrilling day wearing a canine bite suit. A trained police dog would run me down, tackle me, pin me to the ground with very sharp teeth, and shake me viciously, gripping the suit (as opposed to my arm or leg) relentlessly until his handler gave the order to release me. Thank goodness for the extra protection, or I never would have volunteered.

It was quite the adrenaline rush to be hit hard from behind by an animal that could easily have inflicted terrible bodily harm, perhaps even killed me, had that been his unchecked intention. Fortunately, this dog training took place in a highly controlled environment. I’m happy to report that I walked away without a scratch that day. Well, almost: I had been forewarned that the dog, when muzzled with a heavy-gauge metal contraption, had a habit of walking around and muzzle-butting participants in the groin. Not so happy to report that I was one of the dog’s victims, and…ouch!

So, for purposes of this illustration, let us say that this dog was quite competent at his goal: apprehending me, the suspect, and then relentlessly restricting my movement until the order to let me go had been issued. The police department would consider this dog a good investment.

Now, let’s move the same situation to a real-world scenario where the suspect is unprotected by a bite suit, the dog has been given the order to attack, and, at the last minute, it turns out that the police have identified the wrong guy. The supposed suspect is actually an innocent bystander now running for his life while the real bad guy gets away. It’s still soon enough to call off the attack, but the dog is running on sheer adrenaline in a highly charged situation. Wanting terribly to please his handler, he doesn’t hear the counter-command clearly, has a hard time stopping himself, and, in his confusion, fails to do so. The wrong suspect gets bitten, and hard.

In the end, an innocent person was still attacked, blood was shed, and much physical harm was done before the handler could intervene and pull the dog off someone we can now unquestionably consider a victim of circumstance. (This may not be an accurate depiction of how things would most likely have panned out; it is only one possible example, entirely dependent on the quality of the dog, the confidence of the handler, the level of training that came before, and the quickness and accuracy of decisions made under enormously stressful conditions. Let’s just say this scenario is not entirely out of the realm of possibility; the likelihood of its actual occurrence is not the point of the example.)

Now let’s pivot once more and talk about the robot dog in the video, intent on getting through that door despite the heckler’s (undoubtedly pre-scripted) attempts to prevent it from doing so. What many viewers see in this footage is what we are being told we should potentially fear most about Artificial General Intelligence (AGI): competency. This is not the same as malevolence, or evil, or bad intent, or anything similar. It’s about the machine, the robot, achieving the goals it has been programmed to achieve. Let’s face it, people don’t build things just to build them, at least not when there are investors involved. There are functions, missions, objectives, goals to be met, always. Building AI is no exception.

If a product is supposed to drive itself down the road and get its passenger safely to a pre-appointed destination, but instead accidentally runs off a cliff, then its competency in fulfilling its mission has failed, and the project’s funding plug is pulled.

Perhaps we can achieve a better kind of AI, one that, first and foremost, treats human-centric goals as the most important part of its competency.

What the experts in AI are suggesting is that some uncertainty should be built into an intelligent machine’s competency. Let’s say the dog we see in the video (SpotMini is actually Boston Dynamics’ assigned name for this particular make and model) gets the message late in the game that going through the door is perhaps not what it’s supposed to do after all. Does the dog ignore the unexpected alerts coming in and charge forward at all costs, or does it go into some sort of uncertainty mode where it questions its original goals and waits for further instructions, or clarifications, or confirmation?
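To make that “uncertainty mode” a little more concrete, here is a minimal sketch in Python. Everything in it (the mode names, the confidence numbers, the thresholds) is hypothetical and invented for illustration; it does not reflect how SpotMini or any real robot is actually programmed.

```python
from enum import Enum, auto

class Mode(Enum):
    PURSUE = auto()     # keep working toward the original goal
    UNCERTAIN = auto()  # pause, hold position, ask for clarification
    ABORT = auto()      # stand down entirely

def next_mode(goal_confidence: float, conflicting_signal: bool) -> Mode:
    """Decide what to do when the mission may have changed.

    goal_confidence: 0.0 to 1.0, how sure the machine still is that the
    original goal (say, "go through the door") is what its operators want.
    conflicting_signal: True if a counter-command or unexpected alert arrived.
    """
    if conflicting_signal:
        # Don't charge forward at all costs, but don't blindly obey
        # either: reduce confidence and let the thresholds below decide.
        goal_confidence *= 0.5
    if goal_confidence > 0.8:
        return Mode.PURSUE
    if goal_confidence > 0.3:
        return Mode.UNCERTAIN  # wait for further instructions
    return Mode.ABORT

# Example: the robot was 90% sure about the door until a heckler-like
# counter-signal arrived; it now pauses instead of pressing on.
print(next_mode(0.9, conflicting_signal=True))  # Mode.UNCERTAIN
```

The hard part, of course, is choosing the thresholds: set them too high and the machine ignores its handler; set them too low and any heckler with a hockey stick can shut it down.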

That’s a hard call to make even when it’s strictly humans involved in the decision-making. Harder still, perhaps, when programming a robot with a mission to complete, but with the caveat that the mission might change at any point, based on changes occurring in the situation.

Who’s giving the mixed signals? Why? Maybe they are geared toward a better outcome because new information has been introduced (there’s a fire on the other side of the door, and SpotMini will surely be destroyed if it makes it through). Or perhaps the mixed signals are coming in via a hacked system because real malevolence has injected itself (a bad actor knows there are innocent people on the other side of that door waiting to be rescued by the robot dog, but he actually wants those people, his enemy, to perish).
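For the hacked-system case specifically, one standard mitigation is to require that any mid-mission counter-command be cryptographically authenticated. The sketch below (again Python, with a made-up key and message purely for illustration) uses an HMAC signature so a forged “abort” can be rejected; it doesn’t solve the deeper problem of deciding whether a genuine operator’s new information should change the mission.

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned to the robot and its operators.
SECRET_KEY = b"not-a-real-key"

def is_authentic(command: bytes, signature: str) -> bool:
    """Accept a mid-mission counter-command only if it carries a valid
    HMAC-SHA256 signature, so a spoofed 'abort' can be rejected."""
    expected = hmac.new(SECRET_KEY, command, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# A legitimate operator signs the new instruction...
command = b"ABORT: fire on the other side of the door"
good_sig = hmac.new(SECRET_KEY, command, hashlib.sha256).hexdigest()
assert is_authentic(command, good_sig)

# ...while a forged signal from a bad actor fails verification.
assert not is_authentic(command, "deadbeef" * 8)
```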

What does the robot do, and how is that determination made? This, then, may be the hardest of the human brain’s amazing capabilities to replicate in a learning machine that doesn’t yet possess the equivalent of what we call consciousness.

The burning question for the world of AI, before we get much further down the road, before our dream of giving birth to a super-intelligent mind is actually realized, is simply this: how competent do we really want our AGI to be?

In the meantime, the pursuit of AGI charges on, unabated, even though the debate over whether it’s possible at all has yet to be settled. If and when we find out that it is, let’s hope we are fully prepared to deal with the power we will have unleashed into the wild. Let’s hope we’ve already donned a bite suit.

Links

YouTube videos

Spot the Tenacious Robot Dog