Flooded Planet

Exploring the Future...until we get there

Author: G2

Whisper in the CRISPR

It has only been a few short years since the advent of CRISPR, and already one instance of ethical impropriety (the one that we know of, and wow, what a doozy) has occurred in the use of this strangely simplified gene-editing process. There will be more. Perhaps many more.

Such overstepping is the whisper that comes along with the use of this technology. It’s too tempting for some. Some of us cannot resist trespassing where we dare not tread. But with a new technology that is cheap to manufacture, small enough to ship without much fuss, and apparently easy enough to learn, how do we legislate our way out of the desires, the yearnings to explore, the inevitable urge to answer questions we never should have asked? We won’t.

This particular researcher revealed his actions only after the deed was done, and apparently only because the news leaked before he could mount his justifications for changing the human germline in a way that simply defies logical explanation. Well intentioned or not, you can’t undo that. What’s done is done. Those children are altered irrevocably. Some experts have stated that they may well suffer in unexpected ways as a result of this one individual’s tinkering. I would have to agree.

We, as a species, will go about addressing the possibility of other such scary prospects in the manner we always do: with big organizations full of big brains documenting all kinds of rules and regulations designed to prevent labs around the world from engaging in the kind of risky business this researcher undertook, doing his work in plain sight at the university that employed him.

It should work…mostly. Most people don’t want to break rules on purpose and will go out of their way to stay within the bounds of the law. But then there are those scientists, those laboratories in countries around the world, who will skim up close to the edge, feeling the excitement of daring to do something taboo. Will somebody step over the line in a big-time way within the next three to five years? I’d put money on it.

I equate such daredevilish behavior to the driving habits of those among us on any given highway on any given day, essentially flouting not only the law but good judgment as well. All the laws on the books don’t stop them from driving like idiots, as though they were the only ones on the road who counted, and as though everyone else being placed in danger as a result were not their problem.

So we’ll listen to that whisper in the CRISPR, so full of promise, so bursting with risk. How will the future play out? We won’t know until we get there, and we can only hope Pandora hasn’t peeked inside her box.

Original Recipe or Extra CRISPR

The idea—the dream and the nightmare—of designer babies has been with us for longer than most of us probably suspect. But the ability to act on the impulse has never been easier, thanks to a new technology known, rather comically, as CRISPR. If you haven’t encountered a news story or two regarding this cutting-edge tech, I’d be quite surprised (unless you’re someone who closely follows the Kardashians, in which case you are excused…no, seriously, please go…no, wait, maybe the Kardashians will use CRISPR to further enhance themselves, so…at least stay tuned).

CRISPR, in a nutshell, offers a revolutionary new tool for gene modification: it brings far more precision to the manner in which a particular strand of DNA is sliced, as well as to the manner in which it may then be spliced, or altered, with new bits of inserted DNA.
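Purely as an analogy, and emphatically not as biology, you can picture that cut-and-splice idea as string surgery: locate a short target sequence, cut at that site, and insert a replacement. The little Python sketch below is a toy model only; the sequences, the splice function, and the cut rule are all made up for illustration, and real CRISPR chemistry is vastly more involved than string editing.

```python
# Toy cut-and-splice analogy: find a short "guide" target in a DNA string,
# cut just downstream of it, and splice in a new fragment.
# Sequences and logic are invented for illustration only.

def splice(dna: str, target: str, insert: str) -> str:
    cut = dna.find(target)
    if cut == -1:
        return dna                 # no precise match, no edit
    cut += len(target)             # cut just downstream of the target site
    return dna[:cut] + insert + dna[cut:]

genome = "ATGCGTACCGGATTACGA"
edited = splice(genome, target="CCGGA", insert="TTT")
print(edited)                      # ATGCGTACCGGATTTTTACGA
```

The only point of the toy is the precision part: the edit happens exactly where the target matches, and nowhere else.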

Let’s not go into the technicalities of the process, fascinating though they may be (genuinely). Let’s instead state that, aside from the obvious benefits to be realized by such a powerful addition to our arsenal of disease fighting tools, the allure of using it for more commercialized (profitable) purposes is just as obvious. Enter stage right…the designer baby.

Imaginings of designer babies (some would say the products of eugenics by a more subtle, less onerous name) have been around for at least as long as they’ve been making Ken and Barbie, and possibly as far back as, say, Mengele, or the ancient Greeks. Wherever the actual truth may lie in there (I’m gonna go with the Greeks, although I’m quite sure ancient Chinese culture probably beat them to the punch somehow), the human dream of changing each new generation in ways that evolution is simply not capable of has been hanging around since time immemorial.

Of course, as would be expected, the main focus for CRISPR is to introduce new protocols that address existing human diseases, such as sickle cell anemia, Huntington’s, or HIV. No doubt, this is noble and worthy work that should unquestionably be pursued.

But what about the day that surely must come, and probably sooner rather than later, when the conversations you have with your primary care physician, your gynecologist, your obstetrician take on a different character? What happens when somebody floats the idea that there are now technologies that can greatly increase (perhaps even guarantee?) the chances that your child will be that little girl you always wanted, and that her eyes will be blue, her blonde hair slightly wavy, her height exemplary, and her IQ well above average? And the price for such desirable traits? Not nearly as high as you might have thought, and you can pay it off the way you do your car.

So you find yourself, referral in hand, walking through the pretty stainless-steel-and-glass doorway, surrounded on all sides by beautiful people of all shapes and sizes, soaring heights, and lighter skin tones. They sure look pretty/handsome (godlike in proportion and structure), their complexions flawless, their crystalline eyes staring down on you, their full lips all but accusing you of taking too long to get to this point, their posted IQs all but insinuating that the decision is a no-brainer. My Gawd, you utter. What’s there to think about? You can hardly believe you’ve been granted this choice…of turning yourself around and high-tailing it back through those doors before it’s too late. Original recipe is just fine by you.

Slippery slopes don’t always appear as such until a deluge has darkened the skies and the dry, cracked earth has turned to slick, bone-breaking treachery. Such is the case with technologies that blow onto the scene so quickly, with so much promise, so much potential, so much profit to be had.

Such abruptness (I think this is what they call “disruptive technologies” nowadays) prompts some concerned groups to demand that we all collectively stomp on the brakes until we can have a better look at the future. Others will tamp down any such admonishments, much preferring something that tends not to be stopped easily once set in motion, like a jet, or maybe even a rocket.

Whatever may come, it’s coming soon, and the definition of the privileged versus the deprived will take on a whole different meaning. One that accounts not only for what you have, but also for what you have become.

Robots, Robots Everywhere (and Not a One Can Think)

I was just watching a news story about a humanoid robot named FEDOR that the Russians included as part of the payload on their recent mission to the International Space Station. FEDOR was launched on a Russian Soyuz rocket bound for the ISS on August 22 of this year.

The six-foot-tall FEDOR can be remotely controlled, while also being capable of some autonomous movement. FEDOR was originally designed for rescue missions, but now apparently has a role to play in space when a mission includes tasks that might be too dangerous for humans to perform. FEDOR came back to Earth earlier today (September 7). I doubt his initial trek up to space was his last.

Of course, more and more robots that look increasingly like you and me are populating mostly specialized spaces, for now. But that’s changing, and it’s only a matter of time before I encounter one. Possibly on the back of a garbage truck, or maybe on the docks of a warehouse unloading trucks. I’m quite interested to know when a humanoid robot and I will cross paths directly, and what my reactions will be.

I’m not talking about the run-of-the-mill robot encounter, like the one I experienced in Walmart several months ago. The clunky contraption was supposedly scanning the shelves for inventory purposes, but I later observed it wandering about aimlessly, getting in the way of shoppers.

Its architecture was neither humanoid nor pet-like. Trundling slowly along in the performance of its duties, it seemed uninspired…and uninspiring. Most onlookers gave it a quick glance, then immediately returned to the business at hand. It wasn’t their idea of a proper robot anymore. They wanted to be dazzled, and if they couldn’t have that, then they surely wanted less.

I could tell that many of them just wanted the hulking piece of plastic to get out of their way. Meanwhile, it seemed to be causing stress for the employee who was handling it. In answer to my query, she said the investment was more trouble than it was worth.

What appeals most to the human eye is a robot that mimics us humans in both appearance and motion. Whether we asked for it or not, the world of humanoid robots is here. If that’s the case, these inventions will have to deliver in order to gain our tolerance and acceptance, if not our respect.

How will I feel when I first encounter the package deal, something close to me in size, with eyes that seem to carry life, and movement I could casually walk down the street beside without feeling slightly embarrassed? I sense that, before I ever witness this moment for myself, it will have already happened to many others around me, likely including my own friends and family. Why do I say this? Because, whereas I have not accepted the speaker assistant into my home, nor sport wearable tech on my wrist, nor push wireless headphones into my ears all day, an increasing number of those around me have.

So, it is likely that someone in my circle of friends will relay to me, after the uncomfortable transition period has passed, what it is like to have a personal robot assistant serving the family in their home. I’m certain that the news headlines will become increasingly interesting.

All this tech sits uncomfortably with me. Always has. It is for this reason that I explore it. Moving too fast into the future goes against my grain, my intuition. Have I ever once said that electronic switchboards have served me well, and are way more efficient than a human on the other end of the line? Never. Same answer would apply to a dozen other areas of everyday life I can think of.

So, the idea of robots invading my life by an increasing number of degrees, potentially smarter and stronger, and probably more intrusive and more manipulative than any human I can think of, that’s disturbing.

But here we are, already snuggled up close with Artificial Narrow Intelligence, the kind that’s all over our smartphones, and soon (possibly) to be flirting with Artificial General Intelligence, the variety that will start on equal footing with humans but surpass us in intelligence with breathtaking speed. God knows what it will be contemplating at that point. Some big brains on the planet (the human ones) warn that we should definitely be concerned. I know I am.

In the meantime, I’ll be waiting for that first encounter. Maybe he’ll be my pizza delivery guy. As I think about the poor bastard whose job he took, I’ll contemplate the decision of whether or not to tip him. He won’t care either way, briskly walking back in his corporate-branded skin to the self-driving car that will propel him to his next deluded customer. I’m pretty sure the pizza won’t taste as good, either.

Humble Uncertainty or Unchecked Competence?

When I wrote a post a few days ago (Spot the Tenacious Robot Dog), I didn’t realize that what I was describing, and what I was asking you to go see for yourself, might well be the very component of AI that is most troubling to many of us: its competence.

Many years ago, for reasons I can’t possibly imagine now, I spent one thrilling day donning a canine bite suit. A trained police dog would run me down, tackle me, pin me to the ground with very sharp teeth, and shake me viciously, gripping that suit (as opposed to my arm or leg) relentlessly until his handler gave the order to release me. Thank goodness for the extra protection or I never would have volunteered.

Quite the adrenaline rush, being hit hard from behind by an animal that could easily have inflicted terrible bodily harm, perhaps even killed me, were that his unchecked intention. Fortunately, this dog training took place in a highly controlled environment. Happy to report that I walked away without a scratch that day (well, almost…I had been forewarned that the dog, when muzzled with a heavy-gauge metal contraption, had the habit of walking around and muzzle-butting participants in the groin. Not so happy to report that I was one of the dog’s victims and…ouch!).

So, for purposes of this illustration, let us say that this dog was quite competent in his goal of apprehending me, the suspect, and then relentlessly restricting my movement until the order to let me go had been issued. The police department would consider this dog a good investment.

Now, let’s move the same situation to a real-world scenario where the suspect is unprotected by a bite suit, the dog has been given the order to attack, and, at the last minute, it turns out that the police have identified the wrong guy. The supposed suspect is actually an innocent bystander now running for his life while the real bad guy gets away. It’s still soon enough to call off the attack, but the dog is running on sheer adrenaline in a highly charged situation. Wanting terribly to please his handler, he doesn’t hear the counter-command clearly, has a hard time stopping himself, and, in his confusion, is unable to do so. The wrong suspect gets bitten, and hard.

In the end, an innocent person was still attacked, blood was shed, and much physical harm was done to a human before the handler could intervene and pull the dog off someone we can now unquestionably consider a victim of circumstance. (This may not be an accurate depiction of how things would most likely have panned out, and is only one possible example, entirely dependent on the quality of the dog, the confidence of the handler, the level of training that had come before, and the quickness and accuracy of decisions made under enormously stressful conditions. Let’s just say this scenario is not entirely outside the realm of possibility; the likelihood of its actual occurrence is not the point of the example.)

Now let’s pivot once more and talk about the robot dog in the video that is intent on getting through that door, despite the heckler’s (undoubtedly pre-scripted) attempts to prevent it from doing so. What many viewers see in this footage is what we are being told we should potentially fear most about Artificial General Intelligence (AGI): competence. This is not the same as malevolence, or evil, or bad intent, or anything similar. It’s about the machine, the robot, achieving the goals it has been programmed to achieve. Let’s face it, people don’t build things just to build them, at least not when there are investors involved. There are functions, missions, objectives, goals to be met. Always. Building AI is no exception.

If a product is supposed to drive itself down the road and get its passenger safely to a pre-appointed destination, but instead accidentally runs itself off a cliff, then it has failed at its mission and the project’s funding plug gets pulled.

Perhaps we can achieve a better kind of AI, one that, first and foremost, considers human-centric goals as the most important part of its competency.

What the experts in AI are suggesting is that there should be some uncertainty built into an intelligent machine’s competence. Let’s say the dog we see in the video (SpotMini is actually Boston Dynamics’ assigned name for this particular make and model) gets the message late in the game that going through the door is perhaps not what it’s supposed to do after all. Does the dog ignore the unexpected alerts coming in and charge forward at all costs, or does it go into some sort of uncertainty mode where it questions its original goals and waits for further instructions, or clarifications, or confirmation?
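As a thought experiment only, here is a minimal sketch of what such an uncertainty mode might look like in code. Everything in it is invented for illustration (the confidence threshold, the simulated sensor signals, the ask_operator stand-in); it is not how Boston Dynamics, or anyone else, actually programs a robot.

```python
# A toy sketch of "humble uncertainty": the robot pursues its goal, but conflicting
# signals erode its confidence until it pauses and asks for confirmation.
# All names, signals, and thresholds are hypothetical, for illustration only.

import random

CONFIDENCE_THRESHOLD = 0.6

def ask_operator(step, signal):
    """Stand-in for a human handler; here it just answers 'proceed' or 'abort' at random."""
    return random.choice(["proceed", "abort"])

def open_door(steps=5):
    confidence = 1.0                                            # fully committed to the original goal
    for step in range(steps):
        signal = random.choice(["clear", "clear", "conflict"])  # simulated sensor input
        if signal == "conflict":
            confidence *= 0.5                                   # each conflicting cue erodes certainty
        if confidence < CONFIDENCE_THRESHOLD:
            # Uncertainty mode: stop charging forward, ask, and wait.
            decision = ask_operator(step, signal)
            if decision == "abort":
                return f"aborted at step {step}"
            confidence = 1.0                                    # confirmation restores commitment
        print(f"step {step}: signal={signal}, confidence={confidence:.2f}")
    return "door opened"

print(open_door())
```

The point of the sketch is only the shape of the loop: competence as charging ahead, humility as a built-in willingness to stop and ask.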

That’s a hard call to make even when it’s only humans involved in the decision-making. Perhaps much harder when programming a robot with a mission to complete, but with the caveat that the mission might change at any point, based on changes occurring in the situation.

Who’s giving the mixed signals? Why? Maybe they are geared toward a better outcome because new information has been introduced (there’s a fire on the other side of the door, and SpotMini will surely be destroyed if it makes it through). Or perhaps the mixed signals are coming in via a hacked system because real malevolence has injected itself (a bad actor knows there are innocent people on the other side of that door waiting to be rescued by the robot dog, but the bad guy actually wants those people, his enemy, to perish).

What does the robot do, and how is that determination made? This, then, may be the hardest part of the human brain’s amazing capabilities to replicate in a learning machine that doesn’t yet possess the equivalent of what we call consciousness.

The burning question for the world of AI, before we get much further down the road, before our dream of giving birth to a super-intelligent mind is actually achieved, is simply this…how competent do we really want our AGI to be?

In the meantime, the pursuit of achieving AGI charges on, unabated, with not even the debate as to whether or not it’s possible yet settled. When and if we find out that it is, let’s hope we are fully prepared to deal with the power that we have unleashed into the wild. Let’s hope we’ve already donned a bite suit.

Links

YouTube videos

Spot the Tenacious Robot Dog

Vandalizing Robots

I just finished reading an interesting article by Sara Harrison in Wired titled Of Course Citizens Should Be Allowed to Kick Robots (link below). The title alone was a real eye-grabber, and I thought the article was terrific.

Basically, it endorses the idea that we, as citizens, should be able to physically abuse robots when we see them out “in the wild,” as Harrison puts it. Not all robots, necessarily, just the ones that are intimidating, like those that provide security for their clients, recording such data as license plate numbers and the comings and goings of people in the vicinity they patrol.

Harrison’s beef with this type of robot seems to be that it is imposing itself on us, the citizens, making us self-conscious about our actions, normal and law-abiding though they may be. The data the robot collects is provided to the client who pays for this mobile camera service, and can be used by that client in any way it sees fit.

I understand Harrison’s thoughts on this matter, although I’m not sure if her angst is only concerned with security robots, or with many other elements of modern society that seem equally intrusive, like law enforcement being able to read my license plate in the parking lot where I’m shopping.

Or security cameras tracking my every move as I progress through the shopping mall, or the city, and probably even the country.

Or Google tracking my whereabouts even when I think I’ve expressly asked them not to.

Or face recognition software reading my features and identifying me and those I’m with.

Or maybe my smart speaker assistant listening to my private conversations when it’s not supposed to (I don’t have a Siri or Alexa, and no plans to get one anytime soon).

I never knew I was so interesting to so many people. In fact, I’m not. I’m simply part of the landscape where everyone everywhere is being monitored all the time for our own safety and security. Such explanations have never worked in my privacy-centric mind, and are really just an end-run around the real goal of governments and private enterprises across the planet: monitoring their citizens all the time for no other purpose than control. The more you know about me, the more you can manipulate me.

That being said, how is punching the robot different from, say, punching the police officer who is hassling me for no apparent reason? I would go to jail. What about shooting out a security cam? Again, if caught, I would go to jail. The robot belongs to somebody, and was, no doubt, expensive to make. If you get caught vandalizing the robot, you will…again…go to jail.

So is recommending that we punch the robot because we find it intrusive the same thing as recommending that we punch the police officer because we also find him intrusive? Obviously not. What’s the key difference? One is an inanimate object, the other a human.

But then why do so many others among us already feel a certain kind of empathy for the robot that is being kicked, or pushed off balance, or otherwise abused by humans? Enter the rise of robot ethics (yeah, it’s already a thing). As robots are designed to look more and more like us (or at least like something that functions as a biped and has a head and appendages), and as their AI advances at an astonishing pace, not only will we have to review the laws on the books regarding harm caused by one human to another, but we will also have to create new laws that deal with the harm that can be caused by a robot to a human, or by a human to a robot.

Perhaps a whole new branch of law will develop to address this very new and unfamiliar territory.

In the meantime, you won’t catch me doing any vandalizing of any robots. I’d be the guy who gets caught in the act on tape. No, if anything, I’d be the one to defend the stupid thing, knowing in my gut that the future holds something more surprising than anything we could ever expect. Basic rights for the bots. Citizenship will not be far behind.

Didn’t Sophia already get that?

Links

Wired Magazine Article

Wikipedia Article about Sophia

471 – Hadrian

Relating to post 470 – The Art of Memory, I chose Hadrian as my final historical figure to add to my memory list as I transition to an entirely different approach to my bizarre hobby.

Whereas before I had decided to include no one born after my own birth year (since they would be considered my contemporaries, not historical personalities), I will now flip-flop that idea and include persons who are alive or only recently deceased as well. Nor will they be ordered by birth date, but, rather, in whatever manner makes the most sense for memorization.

Spot the Tenacious Robot Dog

If you ever want to make a tenuous connection between seemingly disparate elements in a world of loosely connected AI ideas, do this:

Watch the Boston Dynamics videos of the robot dog (or dogs tag-teaming the task, depending on which video you watch) opening a door without assistance on YouTube. Admire the tenacity of these guys (yes, I’m guilty of anthropomorphism, but it’s all I can do when it comes to reacting to anything robotic that has been purposefully designed to evoke emotional responses from human beings by mimicking physiology that is familiar to me).

Their focus, their determination, is impressive, even more so when they are being forcibly opposed by a human “handler” (no doubt a developer or engineer testing the integrity of the software). Closely study the complexity of moves involved in the robot achieving its goal: first grabbing the doorknob, then turning it, then pulling on it, then parking a foreleg in front of the door so it will not close again while the dog’s mouth-like claw swivels around and opens the door wider so the rest of its robot body can make its way through. If you’re not astounded, maybe even a bit mortified, you should be.

Next, after watching these disturbing and intriguing videos, read through the comments…all of them…and see what your responses are to the responses. How many comments sympathize with the robot, and how many with the human? Are you surprised? Terrified by what you read? Fascinated? Is it a mixed bag of dread and fear that a robot apocalypse may be coming, coupled with longing and hope that maybe everything will turn out okay?

Now go to Wikipedia’s article entitled Existential risk from artificial general intelligence and carefully read through the warnings. There are many. As with the comments that accompany the YouTube videos, also comb through the references the article has used to build its content. That list is also impressive.

It’s not just you. Many are kept awake at night by the scenarios that abound, some of which will surely come to pass.

Now, go find an engineer involved with AI (any engineer will do) and have them look you in the eye while reassuring you that there’s nothing for us to worry about. Make sure they don’t blink in their response.

Links:

YouTube videos

Wikipedia Article

Elon Musk

It’s an understatement to say that Elon Musk has numerous irons in the fire at any given time. What we want to explore in this post is the one that has resulted from, or is at least based on, Mr. Musk’s provocative statements regarding Artificial Intelligence (AI) and the need for humans to interface and integrate with it at the physiological level. According to Musk, brain implants are a logical route to such integration, and the way to get there is the iron in the fire called Neuralink.

Musk, along with several other investors, founded the company Neuralink in 2016, with the organization’s short-term goals revolving around the idea of treating diseases of the brain. The longer-term goals involve creating brain implants that will result in a symbiotic relationship between the human brain and AI. If we are to survive in a world where AI has become superior to Homo sapiens in most, if not all, areas of intelligence, Musk insists that we obsolescing humans must begin exploring the idea of brain implants. The very idea is the stuff of dystopian science fiction.

Although there are certainly many aspects of this most serious of pursuits to consider, one quite intriguing component of brain-AI interface is the idea that, as we begin talking about introducing changes to the inherent physiological landscape of the human brain, we are possibly confronted with an event in human evolution from which we cannot return.

What does it mean, after all, in terms of evolution and the future of the human species, to say that we are embarking on the very alteration of the human brain’s fabric for the express purpose of preventing annihilation by AI? We can’t possibly be fully prepared for such pursuits. Although, to be fair, neither are we in any way prepared for the potential of AI to surpass human civilization, or, at the very least, to control every aspect of it.

Musk operates in the world as a bit of a lightning rod when it comes to disruptive technologies, such as electric automobiles (Tesla), rockets (SpaceX), tunnel construction (The Boring Company), and the (so far) ill-fated idea of the hyperloop.

When visionaries are well-stocked with funding dollars, as Musk most certainly is, amazing things can happen. With regard to Neuralink, we’ll have to wait and see. In some ways, however, it’s not so important whether or not the company succeeds; what matters is that the effort is being made in the first place. Surely an inevitable result will be more startups formed with this and other terrifying and tantalizing ideas in mind. Unintended consequences will undoubtedly occur. I, for one, can’t wait.

Transhuman Meets Climate Change

Interesting times we’re living in, right? One camp constantly reminding us that we’re running out of time, that Climate Change is coming for us all, that our rebellious little planet will soon kick us clean off her surface for causing such chaos and upheaval to her delicately balanced systems.

Cut to another camp, equally vehement in its insistence that the technological wonders now upon us will soon revolutionize the human species as we know it. “Don’t worry,” members of this camp scream. All is well, all is fixable; technology can and will conquer all problems, even those presented by Climate Change and its enormous threats to our continued tenancy within Mother Earth’s domain.

Who’s right, who’s wrong? Who’s to say? In the meantime, as a guy who is quite interested in how this is all going to play out, the avenues that we can explore together on this blog are just about endless.

What’s most interesting to consider is how the participants in this breathless race to the finish line are influenced by one another. On the one hand, we have Mr. and Mrs. Transhuman, participants who fully endorse the idea that the human genome is outdated, that it can and should be altered, transformed, enhanced with something superior to it in every way. Bigger, faster, stronger, smarter. Humans have worn out their welcome. It’s time for an upgrade, and maybe even an outright replacement.

On the other hand, we have Mr. and Mrs. Climate Change, advocates of the idea that the human genome is never going to get the opportunity to be changed to any meaningful degree, much less altered such that it might be made more capable of coping with the existential threats posed by Climate Change.

Is it possible that neither of these participants in the race is going to take the ribbon? Could an unexpected hybrid participant, something not quite human, but not quite what we would describe as superhuman either, cross the finish line first?

If Climate Change truly does have the potential to wipe us out, then the time available for technological advancements to offer us alternatives is severely limited. Some of the biggest brains on the planet warn us that we are quickly running out of time to do anything of substance, anything that might wholly avert, or at least soften, a direct punch to the face delivered by this heavyweight contender called Climate Change.

We should think about where we are as a species, as a civilization, where we’re trying to get to, and how much time we realistically have to get there. So much to explore.

Let’s start with an obvious target – Elon Musk. He will be starring in our next post.

470 – The Art of Memory

Several years ago, I began to dabble in a hobby that involves memorizing lists of items. There is an approach called the Method of Loci that grabbed my attention (I won’t go into much explanation about it, since the method’s details can readily be found in numerous places online). I realized quickly that I was pretty good at remembering lists of historical figures (you know…like Galileo, Einstein, Eleanor of Aquitaine), so I decided to really give it a go and see how many I could remember. When I got distracted by the busyness of life, as we all do, and stopped adding additional characters (you know…like Rembrandt, Kepler, Lincoln), I was up to 470.

It’s now some 15 months later, and I have caught the bug again. Having revisited my previous list, I was initially a bit discouraged by how many holes and gaps had replaced the people who used to reside there. Now, about two weeks after that first revisit, I’m quite encouraged by the fact that the entire list has been firmly re-established in my gray matter. I proved to myself just today (August 9, 2019) that I am once again able to recite all 470 personages with no external memory aids, whatsoever. So, it’s time to start expanding the list again, and it’s always fun to consider who that next person will be.

Hmm, maybe…
