Flooded Planet

Exploring the Future...until we get there

Category: Robotics

Humble Uncertainty or Unchecked Competence?

When I wrote a post a few days ago (Spot the Tenacious Robot Dog), I didn’t realize that what I was describing, and what I was asking you to go see for yourself, might well be the very component of AI that is most troubling to many of us: its competence.

Many years ago, for reasons I can’t possibly imagine now, I spent one thrilling day wearing a canine bite suit. A trained police dog ran me down, tackled me, pinned me to the ground with very sharp teeth, and shook me viciously, gripping the suit (as opposed to my arm or leg) relentlessly until his handler gave the order to release me. Thank goodness for the extra protection or I never would have volunteered.

Quite the adrenaline rush to be hit hard from behind by an animal that could easily have inflicted terrible bodily harm, perhaps even killed me, had that been his unchecked intention. Fortunately, this dog training took place in a highly controlled environment. Happy to report that I walked away without a scratch that day. (Well, almost. I had been forewarned that the dog, when muzzled with a heavy-gauge metal contraption, had a habit of walking around and muzzle-butting participants in the groin. Not so happy to report that I was one of the dog’s victims. Ouch!)

So, for purposes of this illustration, let us say that this dog was quite competent in his goal of apprehending me, the suspect, and then relentlessly restricting my movement until the order to let me go had been issued. The police department would consider this dog a good investment.

Now, let’s move the same situation to a real-world scenario where the suspect is unprotected by a bite suit, the dog has been given the order to attack, and, at the last minute, it turns out that the police have identified the wrong guy. The supposed suspect is actually an innocent bystander now running for his life while the real bad guy gets away. It’s still soon enough to call off the attack, but the dog is running on sheer adrenaline in a highly charged situation. Wanting terribly to please his handler, he doesn’t hear the counter-command clearly and, in his confusion, cannot stop himself. The wrong suspect gets bitten, and hard.

In the end, an innocent person was still attacked, blood was shed, and much physical harm was done to a human before the handler could intervene and pull off the dog, whom we can now unquestionably consider a victim of circumstances. (This may not be an accurate depiction of how things would most likely have panned out; it is only one possible example, entirely dependent on the quality of the dog, the confidence of the handler, the level of training that came before, and the quickness and accuracy of decisions made under enormously stressful conditions. Let’s just say this scenario is not entirely out of the realm of possibility; the likelihood of its actual occurrence is not the point of the example.)

Now let’s pivot once more and talk about the robot dog in the video, intent on getting through that door despite the heckler’s (undoubtedly pre-scripted) attempts to prevent it from doing so. What many viewers see in this footage is what we are being told we should potentially fear most about Artificial General Intelligence (AGI): competency. This is not the same as malevolence, or evil, or bad intent, or anything similar. It’s about the machine, the robot, achieving the goals it has been programmed to achieve. Let’s face it, people don’t build things just to build them, at least not when there are investors involved. There are always functions, missions, objectives, and goals to be met. Building AI is no exception.

If a product is supposed to drive itself down the road and deliver its passenger safely to a pre-appointed destination, but instead accidentally runs itself off a cliff, then its competency in fulfilling its mission has failed and the project’s funding plug is pulled.

Perhaps we can achieve a better kind of AI, one that, first and foremost, treats human-centric goals as the most important part of its competency.

What the experts in AI are suggesting is that there should be some uncertainty built into an intelligent machine’s competency. Let’s say the dog we see in the video (SpotMini is actually Boston Dynamics’ assigned name for this particular make and model) gets the message late in the game that going through the door is perhaps not what it’s supposed to do, after all. Does the dog ignore the unexpected alerts coming in and charge forward at all costs, or does it go into some sort of uncertainty mode where it questions its original goals and waits for further instructions, clarifications, or confirmation?

That’s a hard call to make even when it’s strictly humans involved in the decision-making, and perhaps much harder when programming a robot with a mission to complete, but with the caveat that the mission might change at any point based on changes in the situation.

Who’s giving the mixed signals? Why? Maybe they are geared toward a better outcome because new information has been introduced (there’s a fire on the other side of the door, and SpotMini will surely be destroyed if it makes it through). Or perhaps the mixed signals are coming in via a hacked system because real malevolence has injected itself (a bad actor knows there are innocent people on the other side of that door waiting to be rescued by the robot dog, but the bad guy actually wants those people, his enemy, to perish).

What does the robot do, and how is that determination made? This, then, may be the hardest part of the human brain’s amazing capabilities to replicate in a learning machine that doesn’t yet possess the equivalent of what we call consciousness.
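To make the shape of that “uncertainty mode” a little more concrete, here is a toy sketch in Python. It is emphatically not how SpotMini or any real robot is programmed; every name, weight, and threshold below is my own invention. The only point is the control flow: a conflicting command, weighted by how much the machine trusts its source (the “who’s giving the mixed signals?” question above), erodes confidence in the original goal, and once confidence drops far enough the machine halts and asks for clarification instead of charging the door.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    EXECUTING = auto()               # confidently pursuing the current goal
    AWAITING_CLARIFICATION = auto()  # paused, asking the operator to confirm


@dataclass
class Command:
    goal: str         # e.g. "open_door" or "stand_down"
    authority: float  # invented trust score for the command's source, 0..1


class UncertainAgent:
    """Toy agent that treats its goal as revisable rather than fixed.

    Everything here (names, weights, thresholds) is an illustrative
    assumption, not a real robot API. The point is only the control flow:
    conflicting signals lower confidence instead of being ignored.
    """

    def __init__(self, goal, confidence=0.9, halt_threshold=0.5):
        self.goal = goal
        self.confidence = confidence
        self.halt_threshold = halt_threshold
        self.mode = Mode.EXECUTING

    def receive(self, cmd):
        if cmd.goal == self.goal:
            # Reinforcing signal: confidence recovers toward 1.0.
            self.confidence = min(1.0, self.confidence + 0.2 * cmd.authority)
        else:
            # Conflicting signal: confidence erodes in proportion to how
            # much we trust the source of the counter-command.
            self.confidence -= 0.4 * cmd.authority
        if self.confidence < self.halt_threshold:
            self.mode = Mode.AWAITING_CLARIFICATION

    def step(self):
        if self.mode is Mode.AWAITING_CLARIFICATION:
            return f"holding position; please confirm goal '{self.goal}'"
        return f"continuing toward goal '{self.goal}'"


agent = UncertainAgent(goal="open_door")
print(agent.step())                                  # continuing toward goal
agent.receive(Command("stand_down", authority=0.8))  # confidence: 0.9 -> 0.58
print(agent.step())                                  # still above threshold
agent.receive(Command("stand_down", authority=0.8))  # confidence: 0.58 -> 0.26
print(agent.step())                                  # holding, awaiting clarification
```

Even in this cartoon version, the hard questions survive intact: who assigns the trust scores, who picks the threshold, and what should the robot do while it stands there waiting?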

The burning question for the world of AI, before we get much further down the road, before our dream of giving birth to a super-intelligent mind is actually achieved, is simply this…how competent do we really want our AGI to be?

In the meantime, the pursuit of AGI charges on, unabated, even though the debate as to whether it’s possible at all has yet to be settled. When and if we find out that it is, let’s hope we are fully prepared to deal with the power we will have unleashed into the wild. Let’s hope we’ve already donned a bite suit.

Links

YouTube videos

Spot the Tenacious Robot Dog

Vandalizing Robots

Just finished reading an interesting article by Sara Harrison in Wired titled Of Course Citizens Should Be Allowed to Kick Robots (link below). The title alone was a real eye-grabber, and I thought the article was terrific.

Basically, it endorses the idea that we, as citizens, should be able to physically abuse robots when we see them out “in the wild,” as Harrison puts it. Not all robots, necessarily, just the intimidating ones, like those that provide security for their clients, recording data such as license plate numbers and the comings and goings of people in the areas they patrol.

Harrison’s beef with this type of robot seems to be that it is imposing itself on us, the citizens, making us self-conscious about our actions, normal and law-abiding though they may be. The data the robot collects is provided to the client who pays for this mobile camera service, and can be used by that client in any way it sees fit.

I understand Harrison’s thoughts on this matter, although I’m not sure if her angst is only concerned with security robots, or with many other elements of modern society that seem equally intrusive, like law enforcement being able to read my license plate in the parking lot where I’m shopping.

Or security cameras tracking my every move as I progress through the shopping mall, or the city, and probably even the country.

Or Google tracking my whereabouts even when I think I’ve expressly asked them not to.

Or face recognition software reading my features and identifying me and those I’m with.

Or maybe my smart speaker assistant listening to my private conversations when it’s not supposed to (I don’t have a Siri or Alexa, and no plans to get one anytime soon).

I never knew I was so interesting to so many people. In fact, I’m not. I’m simply part of a landscape where everyone everywhere is being monitored all the time, supposedly for our own safety and security. Such explanations have never worked in my privacy-centric mind; they are really just an end-run around the real goal of governments and private enterprises across the planet, which is monitoring their citizens all the time for no other purpose than to control them. The more you know about me, the more you can manipulate me.

That being said, how is punching the robot different from, say, punching the police officer who is hassling me for no apparent reason? I would go to jail. What about shooting out a security cam? Again, if caught, I would go to jail. The robot belongs to somebody and was, no doubt, expensive to make. If you get caught vandalizing the robot, you will…again…go to jail.

So is recommending that we punch the robot because we find it intrusive the same thing as recommending that we punch the police officer because we also find him intrusive? Obviously not. What’s the key difference? One is an inanimate object, the other a human.

But then why do so many among us already feel a certain kind of empathy for the robot being kicked, pushed off balance, or otherwise abused by humans? Enter the rise of robot ethics (yeah, it’s already a thing). As robots are designed to look more and more like us (or at least something that functions as a biped and has a head and appendages), and as their AI advances at an astonishing pace, not only will we have to review the laws on the books with regard to causing harm, one human to another, but we will also have to create new laws that deal with the harm that can be caused by a robot to a human, or a human to a robot.

Perhaps a whole new branch of law will develop to address this very new and unfamiliar territory.

In the meantime, you won’t catch me vandalizing any robots. I’d be the guy who gets caught in the act on tape. No, if anything, I’d be the one defending the stupid thing, knowing in my gut that the future holds something more surprising than anything we could ever expect. Basic rights for the bots. Citizenship will not be far behind.

Didn’t Sophia already get that?

Links

Wired Magazine Article

Wikipedia Article about Sophia

Spot the Tenacious Robot Dog

If you ever want to make a tenuous connection between seemingly disparate elements in a world of loosely connected AI ideas, do this:

Watch the Boston Dynamics videos on YouTube of the robot dog (or dogs tag-teaming the task, depending on which video you watch) opening a door without assistance. Admire the tenacity of these guys (yes, I’m guilty of anthropomorphism, but it’s all I can do when reacting to anything robotic that has been purposefully designed to evoke emotional responses from human beings by mimicking physiology familiar to me).

Their focus, their determination, is impressive, even more so when forcibly opposed by a human “handler” (no doubt a developer or engineer testing the integrity of the software). Closely study the complexity of moves involved in achieving the goal: the robot first grabs the doorknob, then turns it, then pulls on it, then parks a foreleg in front of the door so that it will not close again while the mouth-like claw swivels around and pushes the door further open so that the rest of the robot’s body can make its way through. If you’re not astounded, maybe a bit mortified, you should be.
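That chain of moves reads almost like a task plan, so, purely to make the structure visible, here is a toy sketch of it in Python. The step names and the retry logic are entirely my own invention (Boston Dynamics has not published SpotMini’s actual controller); the sketch only illustrates why the dog looks so tenacious: a step that gets interfered with is simply attempted again until it succeeds or the retry budget runs out.

```python
import random

# Hypothetical step names for the door-opening routine described above.
# This is not Boston Dynamics' real software; it is only a sketch of
# "ordered plan plus retries," which is what the tenacity looks like.
DOOR_PLAN = [
    "grasp_doorknob",
    "turn_doorknob",
    "pull_door_ajar",
    "brace_foreleg_in_gap",     # keeps the door from swinging shut again
    "swivel_claw_to_push_door",
    "walk_body_through",
]


def attempt(step: str) -> bool:
    """Stand-in for real perception and actuation: pretend a heckling
    handler can make any individual step fail about 30% of the time."""
    return random.random() > 0.3


def open_door(plan, max_retries=5):
    for step in plan:
        for attempt_no in range(1, max_retries + 1):
            if attempt(step):
                print(f"{step}: done (attempt {attempt_no})")
                break
            print(f"{step}: interfered with, retrying")
        else:
            # Retries exhausted on this step, so the whole plan fails.
            return False
    return True


if __name__ == "__main__":
    print("door opened" if open_door(DOOR_PLAN) else "gave up")
```

The real controller is vastly more sophisticated, of course, but that loop-until-done shape is a big part of what makes the footage so unsettling.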

Next, after watching these disturbing and intriguing videos, read through the comments…all of them…and see what your responses are to the responses. How many comments sympathize with the robot, how many with the human? Are you surprised? Terrified by what you read? Fascinated? Is it a mixed bag of dread and fear that a robot apocalypse may be coming coupled with longing and hope that maybe everything will turn out okay?

Now go to Wikipedia’s article entitled Existential risk from artificial general intelligence and carefully read through the warnings. There are many. As with the comments that accompany the YouTube videos, comb through the references the article uses to build its content. That list is impressive, too.

It’s not just you. Many are kept awake at night by the scenarios that abound, some of which will surely come to pass.

Now, go find an engineer involved with AI (any engineer will do) and have them look you in the eye while reassuring you that there’s nothing for us to worry about. Make sure they don’t blink in their response.

Links

YouTube videos

Wikipedia Article
