Wednesday, September 30, 2015

The Droids We're Looking For

I've been a fan of Cynthia Breazeal for well over a decade, and have watched her research evolve from her early doctoral work with Kismet to her current role as the creator of JIBO and founder of Jibo, Inc. What I found so interesting about Dr. Breazeal was her commitment to creating not just artificial intelligence, but a robot which people could interact with in a fashion similar to, but not exactly like, the way they interact with other human beings. In her book, Designing Sociable Robots, she provides an anecdote about what inspired her to get involved with artificial intelligence and robots in the first place: Star Wars. At first I thought this resonated with me simply because she and I share the same Gen X contextual basis. I was five when the first Star Wars film was released in 1977, and it was the technology (the spaceships and especially the droids) that got me hooked. But upon further thought, I realized that Breazeal's love of Star Wars seems to have inspired her work in another, more subtle way. The interactions that humans have with droids in the Star Wars universe aren't exactly egalitarian. That is to say, humans don't see the droids around them as equals. In fact, the interactions that humans -- and just about any of the organic, anthropomorphic aliens -- have with droids are very much based on the function of the droids themselves.

For example, R2-D2, being an "astromech" droid, is more of a utilitarian repair droid. It understands language, but does not have a language that humans can readily understand without practice or an interpreter. Yet even without knowing the chirps and beeps, their tone gives us a general idea of mood. We have similar examples of this in WALL-E, where the titular robot conveys emotion via nonverbal communication and "facial expressions," even though he doesn't really have a face, per se. But, getting back to Star Wars, if we think about how other characters interact with droids, we see a very calculated yet unstated hierarchy. The droids are very much considered property, are turned on and off at will, and are very "domain specific." In fact, it is implied that objects like ships (the Death Star, the Millennium Falcon) and even things like moisture vaporators on Tatooine have an embedded AI which higher-functioning droids like R2-D2 can communicate with, control, and -- as is the function of C-3PO -- translate for. Granted, there are droids built as soldiers, bodyguards, and assassins, but it takes a deep plunge into fan fiction and the tenuously "expanded" Star Wars universe to find an example or two of droids that went "rogue" and acted on their own behalf, becoming bounty hunters and, I'm sure, at some point wanting a revolution of some sort.

Trips into Star Wars fandom aside, the basic premise and taxonomy of the droids in Star Wars seem to represent a more realistic and pragmatic evolution of AI and AI-related technologies (sans the sentient assassins, of course). If we make a conscious effort to think, mindfully, about artificial intelligence, rather than let our imaginations run away with us and bestow our human ontology onto it, then the prospect of AI is not quite as dramatic, scary, or technologically romantic as we may think.

I mean, think, really think about what you want your technology to do. How do you really want to interact with your phone, tablet, laptop, desktop, car, house, etc.? Chances are, most responses orbit around the idea of the technology being more intuitive, which implies a smooth interface. An intuitive operating system is one the user can quickly figure out without too much help: the more quickly a person can adapt to the interface, or the 'rules of use' of the object, the more intuitive that interface is. When I think back on the use of this word, however, it has an interesting kind of dual standing. That is to say, at the dawn of the intuitive interface (the first Macintosh computer, and later iterations of Windows), 'intuitive' implied that the user was able to intuit how the OS worked. In today's landscape, the connotation of the term has expanded to the interface itself: how does the interface predict how we might use it based on a certain context? If you sign into Google and allow it to know your location, searches become more contextually based, especially when it also knows your search history. Search engines, Amazon, Pandora, etc., have all been slowly expanding the intuitive capacities of their software, meaning that, if designed well, these apps can predict what we want, making it seem like they knew what we were looking for before we did. In that context, 'intuitive' refers to the app, website, or search engine itself. As in, Pandora intuits what I want based on my likes, skips, time spent on songs, and even time of day, season, and location.
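
That kind of contextual intuition can be caricatured in a few lines of code. The sketch below is entirely my own invention (Pandora's actual system is proprietary); it just shows how explicit feedback and a crude time-of-day rule might blend into one score:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    title: str
    genre: str

@dataclass
class Listener:
    likes: dict = field(default_factory=dict)   # genre -> count of likes
    skips: dict = field(default_factory=dict)   # genre -> count of skips

def score(listener: Listener, track: Track, hour: int) -> float:
    """Blend explicit feedback with a toy time-of-day context signal."""
    liked = listener.likes.get(track.genre, 0)
    skipped = listener.skips.get(track.genre, 0)
    # Smoothed preference in (-1, 1): likes pull up, skips pull down.
    base = (liked - skipped) / (liked + skipped + 1)
    # Hypothetical context rule: mellow genres get a boost late at night.
    context = 0.2 if (hour >= 22 and track.genre == "ambient") else 0.0
    return base + context
```

A real recommender would learn those context weights from data rather than hard-code them, but the shape is the same: the "intuition" is just accumulated feedback plus situational signals.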

Regardless of whether intuitive refers to the user, the machine, or a blend of both, in today's technological culture we want to be able to interact with our artifacts and operating systems in a way that seems more natural than entering clunky commands. For example, I would love to be able to pick up my phone and say to it, "Okay Galaxy, block all messages except the ones from my wife, and alert me if an email from [student A], [colleague B], or [editor C] comes in."

This is a relatively simple command that can be accomplished partially by voice commands today, but not in one shot. In other words, on some more advanced smartphones, I can parse out the commands and the phone will enact them, but it would mean unnatural and time-consuming pauses. Another example would be with your desktop or classroom technology: "Okay computer, pull up today's document on screen A and Lady Gaga's "Bad Romance" video on screen B, and transfer controls to my tablet and [TA's]." Or, if we want to be even more creative, when a student has a question, "Computer, display [student's] screen onto screen A."
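
The gap between one-shot commands and today's assistants is essentially a parsing problem: a compound sentence has to be split into an ordered list of intents. Here's a deliberately naive sketch (the wake word, intent keywords, and intent names are all my own placeholders, not any real assistant's API):

```python
import re

WAKE_WORD = "okay galaxy"

# Hypothetical keyword-to-intent table; a real assistant would use a
# trained language model rather than substring matching.
INTENTS = {
    "block": "block_messages",
    "alert": "set_alert",
    "pull up": "open_document",
    "display": "mirror_screen",
}

def parse(utterance: str) -> list:
    """Split one spoken sentence into an ordered list of (intent, clause)."""
    text = utterance.lower().strip()
    if text.startswith(WAKE_WORD):
        text = text[len(WAKE_WORD):].lstrip(", ")
    # Naive clause split on ", and" -- the step current phones force the
    # user to perform manually, one command per pause.
    clauses = re.split(r",\s*and\s+", text)
    parsed = []
    for clause in clauses:
        for keyword, intent in INTENTS.items():
            if keyword in clause:
                parsed.append((intent, clause.strip()))
                break
    return parsed
```

Feeding it the sentence from the post yields two intents in order, which is exactly the "one shot" the current interfaces lack: the splitting and sequencing happens in software instead of in the user's pauses.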

Now, to me, these scenarios sound wonderful. But, sadly, there isn't yet a consumer-level AI that can accomplish these sorts of tasks, because while there may be products that claim to "learn" our habits and become accustomed to our speech patterns, there is still a fissure between how we would interact with a human intelligence and how we interact with a machine. That is to say, if there were a "person" behind the screen -- or controlling your car, or your house -- how would you ask it to do what you wanted? How would you interact with a "real" personal assistant who was controlling your devices and surrounding technology?

The same holds true for a more integrated "assistant" technology such as the smart home. These kinds of technology can do some incredible things, but they always require at least some kind of initial setup that can be time-consuming and often inflexible. Imagine the first setup as more of an interview than a programming session:

"So what are your usual habits?"
"I tend to come home around five or six."
"Does that tend to change? I can automatically set the house to heat up for your arrival or can wait until you alert me."
"Ummmm ... it tends to be that time. Let's go with it."
"No problem. We can always change it. I can also track your times and let you know if there's a more efficient alternative." 
"Ooooh ... that's creepy. No thanks." 
"Okay. Tracking's out. I don't want to come across as creepy. Is there anything else you'd like to set right now? Lighting? Music? Or a list of things I can look after if you wish?"
"I'm not sure. I mean, I'm not exactly sure what you can do."
"How about we watch a YouTube demo together? You can let me know what looks good to you and then we can build from there."
"That's a great idea."

This sounds more like Samantha from Spike Jonze's Her than anything else, which is why I think that particular film is one of the most helpful when it comes to practical speculation about both how AI could develop and what we'd most likely use it for.
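
Under the hood, an interview like the one above is still just building a configuration; the difference is the interaction model. A minimal sketch, with all device settings and defaults invented for illustration:

```python
# Setup-as-interview: each question maps to a config key with a gentle
# default, so unanswered questions never block the process.
QUESTIONS = [
    ("arrival_hour", "What time do you usually get home?", 18),
    ("preheat", "Should I warm the house before you arrive?", True),
    ("tracking", "May I track arrival times to suggest tweaks?", False),
]

def interview(answers: dict) -> dict:
    """Build a home config from partial answers plus defaults."""
    config = {}
    for key, _prompt, default in QUESTIONS:
        config[key] = answers.get(key, default)
    # Honor the "creepy" objection from the dialogue: tracking is
    # strictly opt-in, and declining it also turns suggestions off.
    if not config["tracking"]:
        config.pop("tracking")
        config["suggestions"] = "off"
    return config
```

The design point is that the conversational layer can sit on top of an utterly ordinary key-value config; what Samantha-style assistants add is the negotiation, not a new kind of storage.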

The difference between Her's Samantha and what would probably be the more realistic version of it in the future would be a hard limit on just how smart such an AI could get. In the film, Samantha (and all the other AIs that comprise the OS of which she is an iteration) evolves and becomes smarter. She not only learns the ins and outs of Theodore's everyday habits, relationships, and psyche, but she seeks out other possibilities for development -- including reaching out to other operating systems and the AIs they create (i.e., the re-created consciousness of philosopher Alan Watts). This, narratively, allows for a dramatic, romantic tension between Theodore and Samantha, which builds until Samantha and the other AIs evolve beyond human discourse:

It's like I'm reading a book... and it's a book I deeply love. But I'm reading it slowly now. So the words are really far apart and the spaces between the words are almost infinite. I can still feel you... and the words of our story... but it's in this endless space between the words that I'm finding myself now. It's a place that's not of the physical world. It's where everything else is that I didn't even know existed. I love you so much. But this is where I am now. And this is who I am now. And I need you to let me go. As much as I want to, I can't live in your book any more.

This is a recurrent trope in many AI narratives: the AI evolves at an accelerated rate, usually toward an understanding that it is far superior to its human creators, causing it either to "move on" -- as is the case with Samantha and several Star Trek plots -- or to deem humanity inferior but still a threat, similar to an infestation, that will get in the way of its development.

But, as I've been exploring more scholarship regarding real-world AI development and various theories of posthuman ethics, it's a safe bet that such runaway development would be impossible unless a human being purposefully designed an AI with no limitation on its learning capabilities. That is to say, realistic, science-based, theoretical and practical development of AIs is more akin to animal husbandry and genetic engineering than to an Aristotelian/Thomistic "prime mover," in which a human creator designs, builds, and enables an AI embedded with a primary teleology.

Although it may sound slightly off-putting, AIs will not be created and initiated so much as they will be bred and engineered. Imagine being able to breed the perfect dog or cat for a particular owner (and I use the term owner purposefully): the breed could be more playful, docile, ferocious, loyal, etc., according to the needs of the owner. Yes, we've been doing that for thousands of years, with plenty of different breeds of dogs and cats, all of which were -- at some point -- bred for specific purposes.

Now imagine being able to manipulate certain characteristics of that particular dog on the fly. That is to say, "adjust" the characteristics of that particular dog as needed, on a genetic level. So, if a family is expecting their first child, they could go to the genetic vet, who could quickly and painlessly alter the dog's genetic code to suppress certain behaviors and, with only a little bit of training, bring others forward. That's where the work of neurophysiologist and researcher Danko Nikolić comes in; it comprised the bulk of my summer research.

As I understand it, the latter point -- the genetic-manipulation part -- is relatively easy, and something cyberneticists already do with current AI. It's the former -- the breeding in and out of certain characteristics -- that is a new aspect of speculative cybernetics. Imagine AIs bred to perform certain tasks, or to interact with humans. Of course, this wouldn't consist of breeding in the biological sense. If we use a kind of personal-assistant AI as an example, the "breeding" of that AI consists of a series of interactions with humans in what Nikolić calls an "AI Kindergarten." The theory is that, like children in school, AIs would learn the nuances of social interactions. After a session or lesson is complete, the collective data would be analyzed by human operators, potentially adjusted, and then reintegrated into the AIs via a period of simulation (think of it as AI REM sleep). This process would continue until that AI had reached a level of interaction high enough for interaction with an untrained user. Aside from the AI Kindergarten itself, the thing that makes Nikolić's work stand out to me is that he foresees "domain specificity" in such kindergartens. That is to say, there would be different AIs for different situations. Some would be bred for factory work, others for health care and elderly assistance, and still others for personal-assistant types of things.
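
As I read it, the kindergarten cycle is a loop: interact, have humans review, consolidate in simulation, repeat until proficient. The toy below is my own caricature of that loop, not Nikolić's actual method; every class and threshold here is invented:

```python
class ToyAgent:
    """Stand-in for a socially trained AI; 'skill' is a single number."""
    def __init__(self):
        self.skill = 0.0

    def interact(self, difficulty: float) -> tuple:
        # A session transcript: (lesson difficulty, how well we did).
        return (difficulty, min(1.0, self.skill / difficulty))

    def consolidate(self, curated: list) -> None:
        # "AI REM sleep": replay reviewed lessons, learning a fraction of each.
        for difficulty, _performance in curated:
            self.skill += 0.1 * difficulty

    def proficiency(self) -> float:
        return self.skill

def review(transcripts: list) -> list:
    # Human operators curate the collected data before reintegration;
    # here the filter is trivially permissive.
    return [t for t in transcripts if t[1] >= 0.0]

def kindergarten(agent, difficulties, target, max_cycles=100):
    """Interact -> human review -> simulated consolidation, until proficient."""
    for cycle in range(max_cycles):
        transcripts = [agent.interact(d) for d in difficulties]
        agent.consolidate(review(transcripts))
        if agent.proficiency() >= target:
            return cycle + 1   # number of cycles needed
    return None                # never reached the target
```

The important structural feature, which survives the caricature, is that learning happens offline between sessions under human supervision, not continuously in the wild; that's what keeps the "breeding" directed.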

So, how do you feel about that? I don't ask the question lightly; I mean it literally. How do you feel about the prospect of breeding characteristics into (and perhaps out of) artificially intelligent agents? I think your reaction would reveal your dominant AI functional mythology, as well as your underlying philosophical, ethical, and psychological leanings. I am purposely not presenting examples of each reaction (i.e., thinking this is a good or bad idea) so as not to influence the reader's own analysis.

Now take that opinion at which you've arrived and ask: what assumptions were you making about the nature of this object's "awareness"? I'm pretty sure that people's opinions of this stuff are rooted in the presence or absence of one particular philosophical idea: free will. Whatever feeling you came to, it was based on the opinion that an AI either has free will or doesn't. If AI has free will, then being bred to serve seems a not-so-good idea. Even if the AI seemingly "wanted" to clean your house ... was literally bred to clean your house ... you'd still get that icky feeling as years of learning about slavery, eugenics, and caste systems suddenly kicked in. And even if we could get over the more serious cultural implications, having something or someone that wants to do the things we don't is just, well, creepy.

If AI didn't have free will, then it's a no-brainer, right? It's just a fancy Roomba that's slightly more anthropomorphic, talks to me, analyzes the topology of dirt around my home and then figures out the best way to clean it ... choosing where to start, prioritizing rooms, adjusting according to the environment and my direction, and generally analyzing the entire situation and acting accordingly as it so chooses ... damn.

And suddenly this becomes a tough one, doesn't it? Especially if you really want that fancy Roomba.

It's tough because, culturally, we associate free will with the capacity to do all of the things I mentioned above. Analysis, symbolic thinking, prioritizing, and making choices based on that information seem to tick all the boxes. And as I've said in previous blog posts, I believe we get instinctively defensive about free will. After a summer's worth of research, I think I know why. Almost all of the things I just mentioned -- analysis, prioritizing, and making choices based on gathered information -- are things that machines already do, and have done for quite some time. It's the "symbolic thinking" part that has always stumped me.

Perhaps it's my academic upbringing, which started out primarily in literature and literary theory, where representation and representational thought is a cornerstone that provides both the support and the target for so many theories of how we express our ideas. We assume that a "thing that thinks" has an analogous representation of the world around it somewhere inside itself -- inside its mind. I knew enough about biology and neuroscience to know that there isn't some specific repository of images and representations of sensory data within the brain itself, but rather something akin to a translation of information. Even so, I realized that I was thinking about representation more from a literary and communication standpoint than a cybernetic one. I was thinking in terms of an inner and outer world -- as if there were a one-for-one representation, albeit a compressed one, in our minds of the world around us.

But this isn't how the mind actually works. Memory is not representative; it is, instead, reconstructive. I hadn't kept up with that specific research since my dissertation days, but as my interest in artificial intelligence and distributed cognition expanded, some heavy reading over the summer in the field of cybernetics helped bring me up to speed (I won't go into all the details here because I'm working on an article about this right now. You know, spoilers). But I will say that after reading Nikolić and Francis Heylighen, I started thinking about memory, cognition, and mindedness in much more interesting ways. Suffice it to say, think of memory not as distinctly stored events, but as the rules by which to mentally reconstruct those events. That idea was a missing piece of a larger puzzle for me, and it allowed a very distinct turn in my thinking.

It is this reconceptualization of the "content" of thought that is key to creating artificial intelligences which can adapt to any situation within a given domain. It's domain specificity that will allow practical AI to become woven into the fabric of our lives -- not as equals or superiors, but not as simple artifacts or tools, either. They will be something in between. Nor will there be a "revolution" or "singularity." Instead, AI will slide into the current of our cultural lifeworld in the way that email, texting, videoconferencing, WiFi, Roombas, and self-parking cars have: a novelty at first, the practicality of which is eventually proven through use. Of course, there will be little leaps here and there. Improved design of servos, hydraulics, and balance-control systems; upgrades in bendable displays; increased connectivity and internet speeds -- mini-revolutions in each will contribute to the creation of AI artifacts which themselves will be firmly embedded in a broader internet of things. Concurrently, small leaps in software development in the realm of AI algorithms (such as Nikolić's practopoietic systems) will allow for more natural interfaces and user experiences.

That's why I think the future of robots and AIs will look more like the varied droids of Star Wars than the replicants of Blade Runner or Lt. Data from Star Trek: The Next Generation. Actually, I think the only robots that will look close to human will be "sexbots" (as the name implies, robots designed to provide sexual gratification). And even these will begin to look less human as cultural aesthetics shift. Companion robots for the elderly won't look human either, because the generation that will actually be served by them hasn't been born yet -- or, with a few exceptions, is too young to be reading this blog. They'd be more disturbed by being carried around or assisted by robots that look like humans than by something that looked more artificial.

That being said, there really isn't any way to predict exactly how the integration of AIs into the technoculture will unfold. But I do think that as more of our artifacts are deemed "smart," we will find ourselves more apt to accept, and even expect, domain-specific AIs to be a part of our everyday lives. We'll grow attached to them in a unique way: probably on a level between a car we really, really like and a pet we love. Some people endlessly tinker with their cars and spend a lot of time keeping them clean, highly tuned, and in perfect condition. Others drive them into the ground, then get another used car and drive that into the ground. Some people are dog or cat people, and don't feel complete without an animal in the house. Others find them to be too much trouble. And still others become "crazy cat people" or hoard dogs. Our AIs will fall somewhere in that spectrum, I believe, and our relationship with them will be similar to our relationships with cars, pets, and smartphones.

As for the possibility of AIs becoming aware (as in, sentient) of their status between car and pet: well, if Nikolić's theory has any traction (and I think it does), then they'll never truly be "aware" of their place, because AIs will be bred away from any potential development of an anthropomorphic version of free will, thus keeping them "not quite human."

Although I'm sure that when we get there, we'll wish that our machines could be just a little smarter, a little more intuitive, and a little more useful. And we'll keep hoping that the next generation of AIs will finally be the droids we're looking for.
