Tuesday, May 10, 2016

End-of-the-semester update!

Things have been quiet at Posthuman Being, but for good reason. I've been working on a couple of articles for publication, both of which were due within a month-and-a-half of each other.  Add to that the usual end-of-semester grading, and time for writing does get squeezed out. But the semester is over at Western and my grades are in. I'll be traveling for a couple of weeks to recharge and re-center. But I have a few post ideas brewing on some interesting topics, and hopefully some things for IEET.

Thanks to everyone for their patience. I'm looking forward to exploring some interesting territory!

Tuesday, January 19, 2016

Mythic Singularities: Or How I Learned To Stop Worrying and (kind of) Love Transhumanism

... knowing the force and action of fire, water, air, the stars, the heavens, and all the other bodies that surround us, as distinctly as we know the various crafts of our artisans, we might also apply them in the same way to all the uses to which they are adapted, and thus render ourselves the lords and possessors of nature.  And this is a result to be desired, not only in order to the invention of an infinity of arts, by which we might be enabled to enjoy without any trouble the fruits of the earth, and all its comforts, but also and especially for the preservation of health, which is without doubt, of all the blessings of this life, the first and fundamental one; for the mind is so intimately dependent upon the condition and relation of the organs of the body, that if any means can ever be found to render men wiser and more ingenious than hitherto, I believe that it is in medicine they must be sought for. It is true that the science of medicine, as it now exists, contains few things whose utility is very remarkable: but without any wish to depreciate it, I am confident that there is no one, even among those whose profession it is, who does not admit that all at present known in it is almost nothing in comparison of what remains to be discovered; and that we could free ourselves from an infinity of maladies of body as well as of mind, and perhaps also even from the debility of age, if we had sufficiently ample knowledge of their causes, and of all the remedies provided for us by nature.
- Rene Descartes, Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences, 1637

As a critical posthumanist (with speculative leanings), I have always found myself a little leery of transhumanism in general. Much has been written on the difference between the two, and one of the best and most succinct explanations can be found in John Danaher's "Humanism, Transhumanism, and Speculative Posthumanism." But very briefly, I believe it boils down to a question of attention: a posthumanist, whether critical or speculative, focuses his or her attention on subjectivity -- investigating, critiquing, and sometimes even rejecting the notion of a homuncular self or consciousness, and the assumption that the self is some kind of modular component of our embodiment. Being a critical posthumanist does make me hyper-aware of the implications of Descartes' ideas presented above in relation to transhumanism. Admittedly, Danaher's statement that "Critical posthumanists often scoff at certain transhumanist projects, like mind uploading, on the grounds that such projects implicitly assume the false Cartesian view" hit close to home, because I am guilty of the occasional scoff.

But there really is much more to transhumanism than sci-fi iterations of mind uploading and AIs taking over the world, just as there is more to Descartes than his elevation, reification, and privileging of consciousness. From my critical posthumanist perspective, the hardest pill to swallow with Descartes has never been the model of consciousness he proposed; it is the way that model has been taken so literally -- as a fundamental fact -- that has been one of the deeper issues driving me philosophically. But, as I've often told my students, there's more to Descartes than that. Examining Descartes's model as the metaphor it is gives us a more culturally based context for his work, and a better understanding of its underlying ethics. I think a similar approach can be applied to transhumanism, especially in light of some of the different positions articulated in Pellissier's "Transhumanism: There Are [at Least] Ten Different Philosophical Categories; Which One(s) Are You?"

Rene Descartes's faith in the ability of human reason to render us "lords and possessors of nature" through an "invention of an infinity of arts" is, to my mind, one of the foundational philosophical beliefs of transhumanism. And his later statement that "all at present known in it is almost nothing in comparison of what remains to be discovered" becomes its driving conceit: the promise that answers could be found which could, potentially, free humanity from "an infinity of maladies of body as well as of mind, and perhaps also even from the debility of age." It follows that whatever humanity can create to help us unlock those secrets is itself a product of human reason. We create the things we need to help us uncover "what remains to be discovered."

But this ode to human endeavor eclipses the point of those discoveries: "the preservation of health," which is "first and fundamental ... for the mind is so intimately dependent on the organs of the body, that if any means can ever be found to render men wiser and more ingenious ... I believe that it is in medicine they must be sought for."

Descartes sees an easing of human suffering as one of the main objectives of scientific endeavor. But this aspect of his philosophy is often eclipsed by the seemingly infinite "secrets of nature" that science might uncover. As is the case with certain interpretations of the transhumanist movement, the promise of what can be learned often eclipses the reasons we want to learn it. And that promise can take on mythic properties. Even though progress is its own promise, a transhuman progress can become an eschatological one, caught between a Scylla of extreme interpretations of "singularitarian" messianism and a Charybdis of similarly extreme interpretations of "survivalist transhuman" immortality. Each is characterized by a governing mythos -- a set of beliefs -- that is technoprogressive by nature but risks fundamentalism in practice, especially if we lose sight of a very important aspect of technoprogressivism itself: "an insistence that technological progress needs to be wedded to, and depends on, political progress, and that neither are inevitable" (Hughes 2010, emphasis added). Critical awareness of the limits of transhumanism is similar to having a critical awareness of any functional myth. One does not have to take the Santa Claus or religious myths literally to celebrate Christmas; instead one can understand the very man-made meaning behind the holiday and the metaphors therein, and choose to express or follow that particular ethical framework accordingly, very much aware that it is an ethical framework that can be adjusted or rejected as needed.

Transhuman fundamentalism occurs when the critical awareness that progress is not inevitable is replaced by an absolute faith and/or literal interpretation that -- either by human endeavor or via artificial intelligence -- technology will advance to a point where all of humanity's problems, including death, will be solved. Hughes points out this tension: "Today transhumanists are torn between their Enlightenment faith in inevitable progress toward posthuman transcension and utopian Singularities, and their rational awareness of the possibility that each new technology may have as many risks as benefits and that humanity may not have a future" (2010). Transhuman fundamentalism characterized by uncritical inevitabilism would interpret progress as "fact" -- that is to say, that progress will happen and is immanent. By reifying (and eventually deifying) progress, transhuman fundamentalism would actually forfeit any claim to progress by severing it from its human origins. Like a god created by humans out of a very human need, but whose origins are then forgotten, progress stands as an entity separate from humanity, taking on a multitude of characteristics rendering it ubiquitous and omnipotent: progress can and will take place. It has and it always will, regardless of human existence; humanity can choose to unite with it, or find itself doomed.

Evidence for the inevitability of progress comes by way of pointing out specific scientific advancements and then falling back on speculation that x advancement will lead to y development, as outlined in Verdoux's "historical" critique of faith in progress, which targets the "'progressionist illusion' that history is in fact a record of improvement" (2009). Kevin Warwick has used rat neurons as CPUs for his little rolling robots: clearly, we will be able to upload our minds. I think of this as a not-so-distant cousin of the intelligent design argument for the existence of God. Proponents point to the complexity of various organic (and non-organic) systems as evidence that a designer of some kind must exist. Transhuman fundamentalist positions point to small (but significant) technological advancements as evidence that an AI will rise (Singularitarianism) or that death itself will be vanquished (Survivalist Transhumanism). It is important to note that neither position is in itself fundamentalist in nature. But I do think that these two particular frameworks lend themselves more easily to a fundamentalist interpretation, due to their more entrenched reliance on Cartesian subjectivity, Enlightenment teleologies, and eschatological religious overtones.

Singularitarianism, according to Pellissier, "believes the transition to a posthuman will be a sudden event in the 'medium future' -- a Technological Singularity created by runaway machine superintelligence." Pushed to a fundamentalist extreme, the question for the singularitarian is: when the posthuman rapture happens, will we be saved by a techno-messiah, or burned by a technological antichrist? Both arise by the force of their own wills. But if we look behind the curtain of the great and powerful singularity, we see a very human teleology. The technology from which the singularity is born is the product of human effort. Subconsciously, the singularity is not so much a warning as it is a speculative indulgence of the power of human progress: the creation of consciousness in a machine. And though singularitarianism may call it "machine consciousness," the implication that such an intelligence would "choose" to either help or hinder humanity always already implies a very anthropomorphic consciousness. Furthermore, we will arrive at this moment via some major scientific advancement that always seems to be between 20 and 100 years away, such as "computronium," or programmable matter. This molecularly-engineered material, according to more Kurzweilian perspectives, will allow us to convert parts of the universe into cosmic supercomputers which will solve our problems for us and unlock even more secrets of the universe. While the idea of programmable matter is not necessarily unrealistic, its mythical qualities (somewhere between a kind of "singularity adamantium" and "philosopher's techno-stone") promise the transubstantiation of matter toward unlimited, cosmic computing, thus opening up even more possibilities for progress. The "promise" is for progress itself: that unlocking certain mysteries will provide an infinite number of new mysteries to be solved.

Survivalist Transhumanism can take a similar path in terms of technological inevitabilism, but pushed toward a fundamentalist extreme, it awaits a more Nietzschean posthuman rapture. According to Pellissier, Survivalist Transhumanism "espouses radical life extension as the most important goal of transhumanism." In general, the movement seems to be awaiting advancements in human augmentation which are always already just out of reach but will (eventually) overcome death and allow the self (whether bioengineered or uploaded to a new material -- or immaterial -- substrate) to survive indefinitely. Survivalist transhumanism with a more fundamentalist flavor would push to bring the Nietzschean Ubermensch into being -- literally -- despite the fact that Nietzsche's Ubermensch functions as an ideal toward which humans should strive. He functions as a metaphor for living one's life fully, not subject to a "slave morality" that is governed by fear and by placing one's trust in mythological constructions treated as real artifacts. Even more ironic is the fact that the Ubermensch is not immortal and is at peace with his immanent death. Literal interpretations of the Ubermensch would characterize the master-morality human as overcoming mortality itself, since death is the ultimate check on the individual's development. Living forever, from a more fundamentalist perspective, would provide infinite time to uncover infinite possibilities and thus make infinite progress. Think of all the things we could do, build, and discover, some might say. I agree. Immortality would give us time -- literally. Without the horizon of death as a parameter of our lives, we would -- eventually -- overcome a way of looking at the universe that has been a defining characteristic of humanity since the first species of hominids with the capacity to speculate pondered death.

But in that speculation is also a promise. The promise that conquering death would allow us to reap the fruits of the inevitable and inexorable progression of technology. Like a child who really wants to "stay up late," there is a curiosity about what happens after humanity's bedtime. Is the darkness outside her window any different after bedtime than it is at 9pm? What lies beyond the boundaries of late-night broadcast television? How far beyond can she push until she reaches the loops of infomercials, or the re-runs of the shows that were on hours prior?  And years later, when she pulls her first all-nighter, and she sees the darkness ebb and the dawn slowly but surely rise just barely within her perception, what will she have learned?

It's not that the darkness holds unknown things. To her, it promises things to be known. She doesn't know what she will discover there until she goes through it. Immortality and death metaphorically function in the same way: Those who believe that immortality is possible via radical life extension believe that the real benefits of immortality will show themselves once immortality is reached and we have the proper perspective from which to know the world differently. To me, this sounds a lot like Heaven: We don't know what's there but we know it's really, really good. In the words of Laurie Anderson: "Paradise is exactly like where you are right now, only much, much better." A survivalist transhuman fundamentalist version might read something like "Being immortal is exactly like being mortal, only much, much better."

Does this mean we should scoff at the idea of radical life extension? At the singularity and its computronium wonderfulness? Absolutely not. But the technoprogressivism at the heart of  transhumanism need not be so literal. When one understands a myth as that -- a set of governing beliefs -- transhumanism itself can stay true to the often-eclipsed aspect of its Cartesian, enlightenment roots: the easing of human suffering. If we look at transhumanism as a functional myth, adhering to its core technoprogressive foundations, not only do we have a potential model for human progress, but we also have an ethical structure by which to advance that movement. The diversity of transhuman views provides several different paths of progress.

Transhumanism has at its core a technoprogressivism that even a critical posthumanist like me can get behind. If I am a technoprogressivist, then I do believe in certain aspects of the promise of technology. I do believe that humanity has the capacity to better itself and do incredible things through technological means. Furthermore, I do feel that we are in the infancy of our knowledge of how technological systems are to be responsibly used. It is a technoprogressivist's responsibility to mitigate myopic visions of the future -- including those visions that uncritically mythologize the singularity or immortality itself as an inevitability.

To me it becomes a question of exactly what the transhumanist him- or herself is looking for from technology, and how he or she conceptualizes the "human" in those scenarios. The reason I still call myself a posthumanist is that I think we have yet to truly free ourselves of antiquated notions of subjectivity itself. The singularity, to me, seems as if it will always be a Cartesian one: a "thing that thinks," aware of itself thinking, and therefore sentient. Perhaps the reason we have not reached a singularity yet is that we're approaching the subject and volition from the wrong direction.

To a lesser extent, I think that immortality narratives are mired in re-hashed religious eschatologies where "heaven" is simply replaced with "immortality." As for radical life extension, what are we trying to extend? Are we tying "life" simply to the ability to be aware of ourselves being aware that we are alive? Or are we looking at the quality of the extended life we might achieve? I do think that we may extend the human lifespan to well over a century. What will be the costs? And what will be the benefits? Life extension is not the same as life enrichment. Overcoming death is not the same as overcoming suffering. If we can combat disease and mitigate the physical and mental degradation that characterize aging, thus leading to an extended lifespan free of pain and mental deterioration, then so be it. However, easing suffering and living forever are two very different things. Some might say that the easing of suffering is simply "understood" within the overall goals of immortality, but I don't think it is.

Given all of the different positions outlined in Pellissier's article, "cosmopolitan transhumanism" seems to make the most sense to me. Coined by Steven Umbrello, this category combines the philosophical movement of cosmopolitanism with transhumanism, creating a technoprogressive philosophy that can "increase empathy, compassion, and the unified progress of humanity to become something greater than it currently is. The exponential advancement of technology is relentless, it can prove to be either destructive or beneficial to the human race." This advancement can only be achieved, Umbrello maintains, via an abandonment of "nationalistic, patriotic, and geopolitical allegiances in favor [of] global citizenship that fosters cooperation and mutually beneficial progress."

Under that classification, I can call myself a transhumanist. A commitment to enriching life rather than simply creating it (as an AI) or extending it (via radical life extension) should ethically shape the leading edge of a technoprogressive movement, if only to break a potential cycle of polemics and politicization internal and external to transhumanism itself. Perhaps I've read too many comic books and have too much of a love for superheroes, but in today's political and cultural climate, a radical position on one side can unfortunately create an equally radical opposite. If technoprogressivism rises under fundamentalist singularitarian or survivalist transhuman banners, equally passionate Luddite, anti-technological positions could rise and do real damage. Speaking as a US citizen, I am constantly aghast at the overall ignorance that people have toward science and the ways in which the very concept of "scientific theory" and the very definition of what a "fact" is have been skewed and distorted. If we have groups of the population who still believe that vaccines cause autism or don't believe in evolution, do we really think that a movement toward an artificial general intelligence will be taken well?

Transhumanism, specifically the cosmopolitan kind, provides a needed balance of progress and awareness. We can and should strive toward aspects of singularitarianism and survivalist transhumanism, but as the metaphors and ideals they actually are.


References:

Anderson, Laurie. 1986. "Language Is a Virus." Home of the Brave.

Descartes, Rene. 1637. Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences.

Hughes, James. 2010. "Problems of Transhumanism: Belief in Progress vs. Rational Uncertainty." (IEET.org).

Pellissier, Hank. 2015. "Transhumanism: There Are [at Least] Ten Different Philosophical Categories; Which One(s) Are You?" (IEET.org)

Verdoux, Philippe. 2009. "Transhumanism, Progress and the Future."  Journal of Evolution and Technology 20(2):49-69.

Saturday, January 2, 2016

New Developments and Working with IEET

Thanks to a link by Danko Nikolic, a few weeks ago IEET (the Institute for Ethics and Emerging Technologies) reached out to me to repost some of my entries. I'm really excited to work with them and hopefully produce some original content for them as well. My first entry is now live: "The Droids We're Looking For."

Since Posthuman Being will probably be getting a few more hits than usual, I wanted to take the opportunity to quickly summarize the overall purpose of my blog, as opposed to original pieces I may write for other sites or chapters/articles in other publications. 

I've always viewed Posthuman Being as an informal -- but still somewhat academic -- "sandbox" for my ideas in relation to the classes I teach at Western State Colorado University and the more formal academic writing in which I am engaged. I am currently working on a few projects which have their roots in several of the posts which have appeared here. 

As you can see, there are usually some large gaps in time between posts. This is due to my teaching schedule as well as the other projects in which I'm involved. However, as things evolve, I hope to post shorter, more regular entries. 

I have also established a public Facebook page for regular updates and announcements, and as always I will be updating on  my Google+ page as well.

I look forward to this next stage of my research and hope that these past (and future) entries are interesting, informative, and spark more discussion!  

Thanks, 

Anthony 

Wednesday, September 30, 2015

The Droids We're Looking For

I've been a fan of Cynthia Breazeal for well over a decade, and have watched her research evolve from her early doctoral work with Kismet to her current work as the creator of JIBO and the founder of JIBO, Inc. What I found so interesting about Dr. Breazeal was her commitment to creating not just artificial intelligence, but a robot which people could interact with in a fashion similar to human beings, but not exactly like human beings. In her book, Designing Sociable Robots, she provides an anecdote as to what inspired her to get involved with artificial intelligence and robots in the first place: Star Wars. At first I thought this resonated with me simply because she and I had the same Gen X contextual basis. I was five when the first Star Wars film was released in 1977, and it was the technology (the spaceships and especially the droids) that got me hooked. But upon further thought, I realized that Breazeal's love of Star Wars seems to have inspired her work in another, more subtle way. The interactions that humans have with droids in the Star Wars universe aren't exactly egalitarian. That is to say, humans don't see the droids around them as equals. In fact, the humans' -- and just about any of the organic, anthropomorphic aliens' -- interactions with droids are very much based on the function of the droids themselves.

For example, R2D2, being an "astromech" droid, is more of a utilitarian repair droid. It understands language, but does not have a language that humans can readily understand without practice or an interpreter. But even without knowing the chirps and beeps, their tone gives us a general idea of its mood. We have similar examples of this in WALL-E, where the titular robot conveys emotion via nonverbal communication and "facial expressions," even though he really doesn't have a face, per se. But, getting back to Star Wars, if we think about how other characters interact with droids, we see a very calculated yet unstated hierarchy. The droids are very much considered property, are turned on and off at will, and are very "domain specific." In fact, it is implied that objects like ships (the Death Star, the Millennium Falcon), and even things like moisture evaporators on Tatooine, have an embedded AI which higher-functioning droids like R2D2 can communicate with, control, and -- as is the function of C3PO -- translate. Granted, there are droids built as soldiers, bodyguards, and assassins, but it takes a deep plunge into fan fiction and the tenuously "expanded" Star Wars universe to find an example or two of droids that went "rogue" and acted on their own behalf, becoming bounty hunters and, I'm sure at some point, wanting a revolution of some sort.

Trips into Star Wars fandom aside, the basic premise and taxonomy of the droids in Star Wars seems to represent a more realistic and pragmatic evolution of AI and AI-related technologies (sans the sentient assassins, of course). If we make a conscious effort to think, mindfully, about artificial intelligence, rather than letting our imaginations run away with us and bestowing our human ontology onto these machines, then the prospect of AI is not quite as dramatic, scary, or technologically romantic as we may think.

I mean, think -- really think -- about what you want your technology to do. How do you really want to interact with your phone, tablet, laptop, desktop, car, house, etc.? Chances are, most responses orbit around the idea of the technology being more intuitive. In that context, it implies a smooth interface. An intuitive operating system implies that the user can quickly figure out how it works without too much help. The more quickly a person can adapt to the interface or the 'rules of use' of the object, the more intuitive that interface is. When I think back to the use of this word, however, it has an interesting kind of dual standing. That is to say, at the dawn of the intuitive interface (the first Macintosh computer, and then later iterations of Windows), intuitive implied that the user was able to intuit how the OS worked. In today's landscape, the connotation of the term has expanded to the interface itself: how does the interface predict how we might use it based on a certain context? If you sign into Google and allow it to know your location, the searches become more contextually based, especially when it also knows your search history. Search engines, Amazon, Pandora, etc., have all been slowly expanding the intuitive capacities of their software, meaning that, if designed well, these apps can predict what we want, making it seem like they knew what we were looking for before we did. In that context, 'intuitive' refers to the app, website, or search engine itself. As in, Pandora intuits what I want based on my likes, skips, time spent on songs, and even time of day, season, and location.
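To make that second sense of "intuition" concrete, here is a minimal, hypothetical sketch in Python of the kind of context-aware scoring a service like Pandora might use. Everything in it -- the feature names, the weights, the toy listening history -- is my own invention for illustration, not anyone's actual algorithm; the point is simply that this "intuition" is a weighted guess built from past behavior plus current context.

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    genre: str
    tempo: str  # "mellow" or "upbeat"

# Hypothetical listening history: (genre, tempo, hour_of_day, liked?)
history = [
    ("ambient", "mellow", 22, True),
    ("ambient", "mellow", 23, True),
    ("electronic", "upbeat", 9, True),
    ("electronic", "upbeat", 22, False),   # skipped late at night
]

def score(track: Track, hour: int) -> float:
    """Score a candidate track against past likes/skips in a similar context."""
    s = 0.0
    for genre, tempo, past_hour, liked in history:
        similarity = (track.genre == genre) + (track.tempo == tempo)
        # Weight evidence from a similar time of day more heavily.
        time_weight = 2.0 if abs(past_hour - hour) <= 2 else 1.0
        s += (1 if liked else -1) * similarity * time_weight
    return s

candidates = [Track("Night Drift", "ambient", "mellow"),
              Track("Morning Run", "electronic", "upbeat")]

# At 11 pm, the mellow ambient track should outscore the upbeat one.
for t in sorted(candidates, key=lambda t: score(t, hour=23), reverse=True):
    print(t.title, round(score(t, hour=23), 1))
```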

Regardless, whether intuitive refers to the user, the machine, or a blend of both, in today's technological culture we want to be able to interact with our artifacts and operating systems in a way that seems more natural than entering clunky commands. For example, I would love to be able to pick up my phone and say to it, "Okay Galaxy, block all messages except the ones from my wife, and alert me if an email from [student A], [colleague b], or [editor c] comes in." 

This is a relatively simple command that can be accomplished partially by voice commands today, but not in one shot. In other words, on some more advanced smartphones, I can parse out the commands and the phone will enact them, but it would mean unnatural and time-consuming pauses. Another example would be with your desktop or classroom technology: "Okay computer, pull up today's document on screen A and Lady Gaga's "Bad Romance" video on screen B, and transfer controls to my tablet and [TA's]." Or, if we want to be even more creative, when a student has a question, "Computer, display [student's] screen onto screen A." 
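What makes the "one shot" version hard isn't any single step; it's decomposing one natural sentence into several machine-actionable intents. A toy sketch of that decomposition might look like the Python below. None of this corresponds to a real assistant's API -- the intent names and rules are invented -- but it shows why today's assistants make you pause between commands: each clause has to be mapped to a separate, well-defined action.

```python
import re

def parse_compound_command(utterance: str) -> list[dict]:
    """Split one spoken sentence into a list of machine-actionable intents.

    A deliberately naive, rule-based sketch: real assistants use statistical
    language models, but the decomposition problem is the same.
    """
    intents = []
    # Split on coordinating conjunctions to get candidate clauses.
    clauses = re.split(r",?\s+\band\b\s+", utterance.lower())
    for clause in clauses:
        if "block all messages" in clause:
            exceptions = re.findall(r"except the ones from ([\w\s]+)", clause)
            intents.append({"action": "block_messages",
                            "exceptions": exceptions})
        elif "alert me" in clause and "email" in clause:
            senders = re.findall(r"from ([\w\s,]+?) comes? in", clause)
            intents.append({"action": "email_alert", "senders": senders})
        else:
            intents.append({"action": "unknown", "text": clause})
    return intents

print(parse_compound_command(
    "Okay Galaxy, block all messages except the ones from my wife, "
    "and alert me if an email from my editor comes in."))
```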

Now, to me, these scenarios sound wonderful. But, sadly, there isn't yet a consumer-level AI that can accomplish these sorts of tasks, because while there may be products that claim to "learn" our habits and become accustomed to our speech patterns, there is still a fissure between how we would interact with a human intelligence and a machine. That is to say, if there was a "person" behind the screen -- or controlling your car, or your house -- how would you ask it to do what you wanted? How would you interact with a "real" personal assistant who was controlling your devices and surrounding technology? 

The same holds true for a more integrated "assistant" technology such as smart homes. These kinds of technologies can do some incredible things, but they always require at least some kind of initial setup that can be time-consuming and often not very flexible. Imagine the first setup as more of an interview than a programming session:

"So what are your usual habits?"
"I tend to come home around five or six."
"Does that tend to change? I can automatically set the house to heat up for your arrival or can wait until you alert me."
"Ummmm ... it tends to be that time. Let's go with it."
"No problem. We can always change it. I can also track your times and let you know if there's a more efficient alternative." 
"Ooooh ... that's creepy. No thanks." 
"Okay. Tracking's out. I don't want to come across as creepy. Is there anything else you'd like to set right now? Lighting? Music? Or a list of things I can look after if you wish?"
"I'm not sure. I mean, I'm not exactly sure what you can do."
"How about we watch a YouTube demo together? You can let me know what looks good to you and then we can build from there."
"That's a great idea."

This sounds more like Samantha from Spike Jonze's Her than anything else, which is why I think that particular film is one of the most helpful when it comes to practical speculation about both how AI could develop and what we'd most likely use it for.

The difference between Her's Samantha and what would probably be the more realistic version of it in the future would be a hard limit on just how smart such an AI could get. In the film, Samantha (and all the other AIs that comprise the OS of which she is an iteration) evolves and becomes smarter. She not only learns the ins and outs of Theodore's everyday habits, relationships, and psyche, but she seeks out other possibilities for development -- including reaching out to other operating systems and the AIs they create (e.g. the re-created consciousness of philosopher Alan Watts). This, narratively, allows for a dramatic, romantic tension between Theodore and Samantha, which builds until Samantha and the other AIs evolve beyond human discourse:

It's like I'm reading a book... and it's a book I deeply love. But I'm reading it slowly now. So the words are really far apart and the spaces between the words are almost infinite. I can still feel you... and the words of our story... but it's in this endless space between the words that I'm finding myself now. It's a place that's not of the physical world. It's where everything else is that I didn't even know existed. I love you so much. But this is where I am now. And this is who I am now. And I need you to let me go. As much as I want to, I can't live in your book any more.

This is a recurrent trope in many AI narratives: the AI evolves at an accelerated rate, usually toward an understanding that it is far superior to its human creators, causing it either to "move on" -- as is the case with Samantha and several Star Trek plots -- or to deem humanity inferior but still a threat -- similar to an infestation -- that will get in the way of its development.

But, as I've been exploring more scholarship regarding real-world AI development, and various theories of posthuman ethics, it's a safe bet to say that such development would be impossible unless a human being purposefully designed an AI without a limitation on its learning capabilities. That is to say, realistic, science-based, theoretical and practical development of AIs is more akin to animal husbandry and genetic engineering than to a more Aristotelian/Thomistic "prime mover," in which a human creator designs, builds, and enables an AI embedded with a primary teleology.

Although it may sound slightly off-putting, AIs will not be created and initiated as much as they will be bred and engineered. Imagine being able to breed the perfect dog or cat for a particular owner (and I use the term owner purposefully): the breed could be more playful, docile, ferocious, loyal, etc., according to the needs of the owner. Yes, we've been doing that for thousands of years, with plenty of different breeds of dogs and cats, all of which were -- at some point -- bred for specific purposes.

Now imagine being able to manipulate certain characteristics of that particular dog on the fly. That is to say, "adjust" the characteristics of that particular dog as needed, on a genetic level. So, if a family is expecting their first child, one could go to the genetic vet who could quickly and painlessly alter the dog's genetic code to suppress certain behaviors and bring forth others. With only a little bit of training, those characteristics could then be brought forward. That's where the work of neurophysiologist and researcher Danko Nikolić comes in, and it comprised the bulk of my summer research.

As I understand it, the latter point -- the genetic manipulation part -- is relatively easy and something cyberneticists already do with current AI. It's the former -- the breeding in and out of certain characteristics -- that is a new aspect in speculative cybernetics. Imagine AIs who were bred to perform certain tasks, or to interact with humans. Of course, this wouldn't consist of breeding in the biological sense. If we use a kind of personal assistant AI as an example, the "breeding" of that AI consists of a series of interactions with humans in what Nikolić calls an "AI Kindergarten." Like children in school, the theory is that AIs would learn the nuances of social interactions. After a session or lesson is complete, the collective data would be analyzed by human operators, potentially adjusted, and then reintegrated into the AIs via a period of simulation (think of it as AI REM sleep). This process would continue until that AI had reached a level of competence high enough for interaction with an untrained user. Aside from the AI Kindergarten itself, the thing that makes Nikolić's work stand out to me is that he foresees "domain-specificity" in such AI Kindergartens. That is to say, there would be different AIs for different situations. Some would be bred for factory work, others for health care and elderly assistance, and still others for personal assistant types of things.
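As a rough illustration of that cycle -- and only an illustration; the class names, thresholds, and data structures below are mine, not Nikolić's -- the "kindergarten" loop might be sketched in Python like this: interact, let human operators review and adjust the collected episodes, consolidate them during an offline "sleep" phase, and repeat until the agent is competent enough for untrained users.

```python
import random

class SocialAgent:
    """A toy agent whose 'competence' improves as curated episodes accumulate."""
    def __init__(self):
        self.knowledge = []          # consolidated interaction rules
        self.competence = 0.0        # crude proxy for social skill

    def interact(self, n_sessions: int) -> list[str]:
        # Each supervised session yields a raw behavioral episode (here, a label).
        return [f"episode_{random.randint(0, 999)}" for _ in range(n_sessions)]

    def sleep_consolidate(self, episodes: list[str]) -> None:
        # Offline "REM" phase: fold the curated episodes back into the agent.
        self.knowledge.extend(episodes)
        self.competence = min(1.0, len(self.knowledge) / 50)

def human_review(episodes: list[str]) -> list[str]:
    # Operators discard or adjust episodes before reintegration.
    return [e for e in episodes if random.random() > 0.2]  # drop roughly 20%

def kindergarten(agent: SocialAgent, target: float = 0.8) -> SocialAgent:
    while agent.competence < target:
        raw = agent.interact(n_sessions=10)       # supervised play sessions
        curated = human_review(raw)               # human-in-the-loop curation
        agent.sleep_consolidate(curated)          # simulated "sleep" reintegration
    return agent

trained = kindergarten(SocialAgent())
print(f"Ready for untrained users at competence {trained.competence:.2f}")
```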

So, how do you feel about that? I don't ask the question lightly. I mean it literally. How do you feel about the prospect of breeding characteristics into (and perhaps out of) artificially intelligent agents? I think your reaction would show your dominant AI functional mythology. It would also evidence your underlying philosophical, ethical, and psychological leanings. I am purposely not presenting examples of each reaction (i.e. thinking this was a good or bad idea) so as to not influence the reader's own analysis.

Now take that opinion at which you've arrived and ask: what assumptions were you making about the nature of this object's "awareness"? Because I'm pretty sure that people's opinions of this stuff will be rooted in the presence or absence of one particular philosophical idea: free will. Whatever feeling you came to, it would be based on the presence or absence of the opinion that an AI either has free will or doesn't. If AI has free will, then being bred to serve seems to be a not-so-good idea. Even IF the AI seemingly "wanted" to clean your house ... was literally bred to clean your house ... you'd still get that icky feeling as years of learning about slavery, eugenics, and caste systems suddenly kicked in. And even if we could get over the more serious cultural implications, having something or someone that wants to do the things we don't is just, well, creepy.

If AI didn't have free will, then it's a no-brainer, right? It's just a fancy Roomba that's slightly more anthropomorphic, talks to me, analyzes the topology of dirt around my home and then figures out the best way to clean it ... choosing where to start, prioritizing rooms, adjusting according to the environment and my direction, and generally analyzing the entire situation and acting accordingly as it so chooses ... damn.

And suddenly this becomes a tough one, doesn't it? Especially if you really want that fancy Roomba.

It's tough because, culturally, we associate free will with the capacity to do all of the things I mentioned above. Analysis, symbolic thinking, prioritizing, and making choices based on that information seem to tick all the boxes. And as I've said in my previous blog posts, I believe that we get instinctively defensive about free will. After a summer's worth of research, I think I know why. Almost all of the things I just mentioned -- analysis, prioritizing, and making choices based on gathered information -- are things that machines already do, and have done for quite some time. It's the "symbolic thinking" thing that has always gotten me stumped.

Perhaps it's my academic upbringing, which started out primarily in literature and literary theory, where representation and representative thought are a cornerstone that provides both the support AND the target for so many theories of how we express our ideas. We assume that a "thing that thinks" has an analogous representation of the world around it somewhere inside of itself -- inside its mind. I knew enough about biology and neuroscience to know that there isn't some kind of specific repository of images and representations of sensory data within the brain itself, but rather something akin to a translation of information. Even so, I realized that I was thinking about representation more from a literary and communication standpoint than a cybernetic one. I was thinking in terms of an inner and outer world -- that there was a one-for-one representation, albeit a compressed one, in our minds of the world around us.

But this isn't how the mind actually works. Memory is not representative. It is, instead, reconstructive. I hadn't kept up with that specific research since my dissertation days, but as my interest in artificial intelligence and distributed cognition expanded, some heavy reading over the summer in the field of cybernetics helped to bring me up to speed (I won't go into all the details here because I'm working on an article about this right now. You know, spoilers). But I will say that after reading Nikolić and Francis Heylighen, I started thinking about memory, cognition, and mindedness in much more interesting ways. Suffice it to say, think of memory not as distinctly stored events, but as the rules by which to mentally reconstruct those events. That idea was a missing piece of a larger puzzle for me, which allowed a very distinct turn in my thinking.
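A crude way to see the difference in code -- and this is only a metaphor in Python, not a claim about how Nikolić or Heylighen would model it -- is to contrast storing an event verbatim with storing a small set of rules that can regenerate a plausible version of it on demand:

```python
# "Representational" memory: keep the full record of the event.
stored_event = {
    "place": "kitchen", "time": "morning", "people": ["A", "B"],
    "every_word_said": ["...", "...", "..."],  # expensive and brittle to store
}

# "Reconstructive" memory: keep only compact rules, rebuild the rest each time.
memory_rules = {
    "schema": "breakfast conversation",
    "anchors": {"place": "kitchen", "people": ["A", "B"]},
    "gist": "argued about the news, made up over coffee",
}

def reconstruct(rules: dict, current_mood: str) -> dict:
    """Rebuild an approximation of the event; the result varies with context."""
    tone = "tense" if current_mood == "anxious" else "warm"
    return {
        **rules["anchors"],
        "remembered_as": f"a {tone} {rules['schema']}: {rules['gist']}",
    }

# The same rules yield different "memories" depending on the rememberer's state.
print(reconstruct(memory_rules, current_mood="anxious"))
print(reconstruct(memory_rules, current_mood="content"))
```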

It is this reconceptualization of the "content" of thought that is key in creating artificial intelligences which can adapt to any situation within a given domain. It's domain specificity that will allow for practical AI to become woven into the fabric of our lives, not as equals or superiors, but not as simple artifacts or tools, either. They will be something in between. Nor will it be a "revolution" or "singularity." Instead, it will slide into the current of our cultural lifeworld in the way that email, texting, videoconferencing, WiFi, Roombas, and self-parking cars have: a novelty at first, the practicality of which is eventually proven through use. Of course, there will be little leaps here and there. Improved design of servos, hydraulics, and balance control systems; upgrades in bendable displays; increased connectivity and internet speeds -- mini-revolutions in each will all contribute to the creation of AI artifacts which themselves will be firmly embedded in a broader internet of things. Concurrently, small leaps in software development in the realm of AI algorithms (such as Nikolić's practopoietic systems) will allow for more natural interfaces and user experiences.

That's why I think the future of robots and AIs will look more like the varied droids of Star Wars than the replicants of Blade Runner or Lt. Data from Star Trek: The Next Generation. Actually, I think the only robots that will look close to human will be "sexbots" (as the name implies, robots provided to give sexual gratification). And even these will begin to look less human as cultural aesthetics shift. Companion robots at home for the elderly will not look human either, because the generation that will actually be served by them hasn't been born yet, or, with a few exceptions, is at least too young to be reading this blog. They'd be more disturbed by being carried around or assisted by robots that look like humans than they would be by something that looked more artificial.

That being said, there really isn't any way to predict exactly how the integration of AIs into the technoculture will unfold. But I do think that as more of our artifacts are deemed "smart," we will find ourselves more apt to accept, and even expect, domain-specific AIs to be a part of our everyday lives. We'll grow attached to them in a unique way: probably on a level between a car we really, really like and a pet we love. Some people endlessly tinker with their cars and spend a lot of time keeping them clean, highly-tuned, and in perfect condition. Others drive them into the ground and then get another used car and drive that into the ground. Some people are dog or cat people, and don't feel complete without an animal in the house. Others find them to be too much trouble. And still others become "crazy cat people" or hoard dogs. Our AIs will be somewhere in that spectrum, I believe, and our relationship with them will be similar to our relationships with cars, pets, and smartphones.

As for the possibility of AIs becoming aware (as in, sentient) of their status between car and pet, well, if Nikolić's theory has any traction (and I think it does), then they'll never be truly "aware" of their place, because AIs will be bred away from any potential development of an anthropomorphic version of free will, thus keeping them "not quite human."

Although I'm sure that when we get there, we'll wish that our machines could be just a little smarter, a little more intuitive, and a little more useful. And we'll keep hoping that the next generation of AIs will finally be the droids we're looking for.



Saturday, July 11, 2015

The Posthuman Superman: The Rise of the Trinity

"Thus,  existentialism's first move is to make every man aware of what he is and to make the full responsibility of his existence rest on him. And when we say that a man is responsible for himself, we do not only mean that he is responsible for his own individuality, but he is responsible for all men."
-- Sartre, Existentialism is a Humanism

[Apologies for any format issues or citation irregularities. I'll be out of town for the next few days and wanted to get this up before I left!]

Upon the release of the trailer for Batman v Superman: Dawn of Justice, a few people contacted me, asking if the trailer seemed to be in keeping with the ideas I presented in my Man of Steel review. In that review, I concluded that the film presented a "Posthuman Superman," because, like iterations of technological protagonists and antagonists in other sci-fi films, Kal-El is striving toward humanity; that "Superman is a hero because he unceasingly and unapologetically strives for an idea that is, for him, ultimately impossible to achieve: humanity." That quest is a reinforcement of our own humanity in our constant striving for improvement (of course, take a look at the full review for more context).

This is a very quick response, mostly due to the fact that I'm not really comfortable speculating about a film that hasn't been released yet. And we all know that trailers can be disappointingly deceiving. But given what I know about various plot details, and the trajectory of the trailer itself, it does very much look like Zack Snyder is using the destruction that Metropolis suffered in Man of Steel, and Superman's resulting choice to kill General Zod, as the catalyst of this film, where a seasoned (and somewhat jaded) Batman must determine who represents the biggest threat to humanity: Superman or Lex Luthor.

What has activated my inner fanboy about this film is that, for me, it represents why I have always preferred DC heroes over Marvel heroes: core DC heroes (Superman, Batman, Wonder Woman, Green Lantern, etc.) rarely, if ever, lament their powers or the responsibilities they have. Instead, they struggle with the choice as to how to use the power they possess. In my opinion, while Marvel has always -- very successfully -- leaned on the "with great power comes great responsibility" idea, DC takes that a step further, with characters who understand the responsibility they have and struggle not with the burden of power, but with the choice as to how to use it. Again, this is just one DC fan's opinion.

And here I think that the brief snippet of Martha Kent's advice to her son is really the key to where the film may be going:

"People hate what they don't understand. Be their hero, Clark. Be their angel. Be their monument. Be anything they need you to be. Or be none of it. You don't owe this world a thing. You never did."

Whereas Man of Steel hit a very Nietzschean note, I'm speculating here that Batman v Superman will hit a Sartrean one. If Kal-El is to be Clark Kent, and embrace a human morality, then he must carry the burden of his choices, completely, and realize that his choices do not only affect him, but also implicate all of humanity itself.

As Sartre tells us in Existentialism is a Humanism:


"... I am responsible for myself and for everyone else. I am creating a certain image of man of my own choosing. In choosing myself, I choose man."

And if we take into account the messianic imagery in both the teaser and the current trailer, it's clear that Snyder is playing with the idea of gods and idolatry. Nietzsche may dismiss God by declaring him dead, but it's Sartre who wrestles with the existentialist implications of a non-existent God:

"That is the very starting point of existentialism, Indeed, everything is permissible of God does not exist, and as a result, man is forlorn, because neither within him nor without does he find anything to cling to.  He can't start making excuses for himself."

Martha Kent's declaration that Clark "doesn't owe the world a thing" places the degree of Kal-El's humanity on Superman's shoulders. Clark is the human, Kal is the alien. What then is Superman? I am curious as to whether or not this trinity aspect will be brought out in the film. Regardless, what is clear is that the Alien/Human/hybrid trinity is not a divine one. It is one where humanity is at the center. And when one puts humanity at the center of morality (rather than a non-existent God), then we are faced with the true burden of our choices:

"If existence really does precede essence, there is no explaining things away by reference to a fixed and given human nature,. In other words, there is no determinism, man is free, man is freedom. On the other hand, if God does not exist we find no commands to turn to which legitimize our conduct. So in the bright realm of values, we have no excuse behind us, nor justification before us. We are alone, with no excuses."

For Sartre, "human nature" is as much of a construct as God. And Clark is faced with the reality of this situation in his mother's advice to be a hero, an angel, a monument, and/or whatever humanity needs him to be ... or not. The choice is Clark's. If Clark is to be human, then he must face the same burden as all humans: freedom. Sartre continues:

"That is the idea I shall try to convey when I say that man is condemned to be free. Condemned, because he did not create himself, yet, in other respects is free; because, once thrown in to the world, he is responsible for everything he does. the existentialist does not believe in the power of passion. He will never agree that a sweeping passion is a ravaging torrent which fatally leads a man to certain acts and is therefore an excuses. He thinks that man is responsible for his passion."

If Clark is to be the top of the Clark/Kal/Superman trinity, then he cannot fall back on passion to excuse his snapping of Zod's neck, nor can he rely on it to excuse him from the deaths of thousands that resulted from the battle in Man of Steel. Perhaps the anguish of his tripartite nature will be somehow reflected in the classic "DC Trinity" of Superman/Batman/Wonder Woman found in the comics and graphic novels, in which Batman provides a compass for Superman's humanity, while Wonder Woman tends to encourage Superman to embrace his god-like status.

And the fanboy in me begins to eclipse the philosopher. But before it completely takes over and I watch the trailer another dozen times, I can say that I still stand behind my thoughts from my original review of Man of Steel: this is a posthuman superhero film. Superman will still struggle to be human (even though he isn't), and the addition of an authentic human in Batman, as well as an authentic god in Wonder Woman, will only serve to highlight his anguish at realizing that his choices are his own ... just as Sartre tells us. And in that agony, we as an audience watch Superman suffer with us human beings.

Now we'll see if all of this holds up when the film is actually released, at which point I will -- of course -- write a full review.




Thursday, May 28, 2015

Update: Semester Breaks, New Technology, New Territory

This is more of an update post than a theory/philosophy one.

The semester ended a couple of weeks ago and I am acclimating to my new routine and schedule. I am also acclimating to two new key pieces of technology: my new phone, which is a Galaxy Note 4; and my new tablet, which is a Nexus 9. I attempted a slightly different approach to my upgrades, especially for my tablet: stop thinking about what I could do with them and start thinking about what I will do with them. One could also translate that as: get what you need, not what you want. This was also a pricey upgrade all around; I had been preparing for it, but still, having to spend wisely was an issue as well.

The Galaxy Note 4 upgrade was simple for me. I loved my Note 2. I use the stylus/note taking feature on it almost daily. The size was never an issue. So while I momentarily considered the Galaxy S6 edge, I stuck with exactly what I knew I needed and would use.

As for the tablet, that was more difficult. My old Galaxy Note 10.1 was showing its age. I thought -- or rather, hoped ... speculated -- that a tablet with a stylus would replace the need for paper notes. After a full academic year of trying to do all of my research and class note-taking exclusively on my tablet, it was time for me to admit that it wasn't cutting it. I need a full sheet of paper, and the freedom to easily erase, annotate, flip back and forth, and see multiple pages in their actual size. While the Note tablet can do most of that, it takes too many extra steps, and those steps are completely counter-intuitive compared to using pen and paper.

When I thought about how and why I used my tablet (and resurrected chromebook), I realized that I didn't need something huge. I was also very aware that I am a power-user of sorts of various Google applications. So -- long story short -- I went for the most ... 'Googley' ... of kit and sprang for a Nexus 9, with the Nexus keyboard/folio option. I was a little nervous at the smaller size -- especially of the keyboard. But luckily my hands are on the smallish side and I'm very, very pleased with it. The bare-bones Android interface is quick and responsive; and the fact that all Android updates come to me immediately without dealing with manufacturer or provider interference was very attractive. I've had the Nexus for a week and am loving it.

This process, however, especially coming at the end of the academic year, made me deeply introspective about my own -- very personal -- use of these types of technological artifacts. It may sound dramatic, but there was definitely some soul-searching happening as I researched different tablets and really examined the ways in which I use technological artifacts. It was absolutely a rewarding experience, however. Freeing myself from unrealistic expectations and really drawing the line between practical use and speculative use was rather liberating. I was definitely influenced by my Google Glass experience.

From a broader perspective, the experience also helped me to focus on very specific philosophical issues in posthumanism and our relationship to technological artifacts. I've been reading voraciously, and taking in a great deal of information. During the whole upgrade process, I was reading Sapiens, A Brief History of Humankind by Yuval Noah Harari. This was a catalyst in my mini 'reboot.' And I know it was a good reboot because I keep thinking back to my "Posthuman Topologies: Thinking Through the Hoard" chapter in Design, Mediation, and the Posthuman, and saying to myself "oh wait, I can explain that even better now ..."

So I am now delving into both old and new territory, downloading new articles, and familiarizing myself even more deeply with neuroscience and psychology. It's exciting stuff, but a little frustrating because there's only so much I can read through and retain in a day. There's also that nagging voice that says "better get it done now, in August you'll be teaching four classes again." It can be frustrating sometimes. Actually, that's a lie. It's frustrating all the time. But I do what I can.

Anyway, that's where I'm at right now and I'm sure I'll have some interesting blog entries as I situate myself amidst the new research. My introspection here isn't just academic; what I've been working on comes from a deeper place, and that's how I know the results will be good.

Onward and upward.





Monday, March 30, 2015

Posthuman Desire (Part 2 of 2): The Loneliness of Transcendence

In my previous post, I discussed desire through the Buddhist concept of dukkha, looking at the dissatisfaction that accompanies human self-awareness and how our representations of AIs follow a mythic pattern. The final examples I used (Her, Transcendence, etc.) pointed to representations of AIs that wanted to be acknowledged or even to love us. Each of these examples hints at a desire for unification with humanity, or at least some kind of peaceful coexistence. So then, as myths, what are we hoping to learn from them? Are they, like religious myths of the past, a way to work through a deeper existential angst? Or is this an advanced step in our myth-making abilities, where we're laying out the blueprints for our own self-engineered evolution, one which can only occur through a unification with technology itself?

It really depends upon how we define "unification" itself. Merging the machine with the human in a physical way is already a reality, although we are constantly trying to find better and more seamless ways to do so. However, if we look broadly at the history of the whole "cyborg" idea, I think that it actually reflects a more mythic structure. Early versions of the cyborg reflect the cultural and philosophical assumptions of what "human" was at the time, meaning that volition remained intact, and that any technological supplements were augmentations or replacements of the original parts of the body.* I think that, culturally, the high point of this idea came in the 1974-1978 TV series, The Six Million Dollar Man (based upon the 1972 Martin Caidin novel, Cyborg), and its 1976-78 spin-off, The Bionic Woman. In each, the bionic implants were completely undetectable with the naked eye, and seamlessly integrated into the bodies of Steve Austin and Jaime Sommers. Other versions of enhanced humanity, however, show a growing awareness of the power of computers via Michael Crichton's 1972 novel, The Terminal Man, in which prosthetic neural enhancements bring out a latent psychosis in the novel's main character, Harry Benson. If we look at this collective hyper-mythos holistically, I have a feeling that it would follow a pattern and spread similar to the development of more ancient myths, where the human/god (or human/angel, or human/alien) hybrids are sometimes superhuman and heroic, other times evil and monstrous.

The monstrous ones, however, tend to share similar characteristics, and I think the most prominent is the fact that in those representations, the enhancements seem to mess with the will. On the spectrum of cyborgs here, we're talking about the "Cybermen" of Doctor Who (who made their first appearance in 1966) and the infamous "Borg," who first appeared in Star Trek: The Next Generation in 1989. In varying degrees, each has a hive mentality and a suppression or removal of emotion, and each is "integrated" into the collective in violent, invasive, and gruesome ways. The Borg from Star Trek and the Cybermen from the modern Doctor Who era represent that dark side of unification with a technological other. The joining of machine to human is not seamless. Even with the sleek armor of the contemporary iterations of the Cybermen, it's made clear that the "upgrade" process is painful, bloody, and terrifying, and that it's best that what's left of the human inside remains unseen. As for the Borg, the "assimilation" process is initially violent but less explicitly invasive (at least as of Star Trek: First Contact): it seems to be more of an injection of nanotechnology that converts a person from the inside out, making them more compatible with the external additions to the body. Regardless of how it's done, the cyborg that remains is cold, unemotional, and relentlessly logical.

So what's the moral of the cyborg fairy tale? And what does it have to do with suffering? Technology is good, and the use of it is something we should do, as long as we are using it and not the other way around (since in each case it's always a human use of technology itself which beats the cyborgs). When the technology overshadows our humanity, then we're in for trouble. And if we're really not careful, it threatens us on what I believe to be a very human, instinctual level: that of the will. As per the final entry of my last blog series, the instinct to keep the concept of the will intact evolves with the intellectual capacity of the human species itself. The cyborg mythology grows out of a warning that if the will is tampered with (giving up one's will to the collective), then humanity is lost.

The most important aspect of cyborg mythologies is that the few cyborgs for whom we show pathos are the ones who have come to realize that they are cyborgs and are cognizant that they have lost an aspect of their humanity. In the 2006 Doctor Who arc, "Rise of the Cybermen/The Age of Steel," the Doctor reveals that Cybermen can feel pain (both physical and emotional), but that the pain is artificially suppressed. He defeats them by sending a signal that deactivates that ability, eventually causing all the Cybermen to collapse into what can only be called screaming heaps of existential crises as they recognize that they have been violated and transformed. They feel the physical and psychological pain that their cyborg existence entails. In various Star Trek TV shows and films, we gain many insights into the Borg collective via characters who are separated from the hive, and begin to regain their human characteristics -- most notably, the ability to choose for themselves, and even name themselves (i.e. "Hugh," from the Star Trek: The Next Generation episode "I, Borg").

I know that there are many, many other examples of this in sci-fi. For the most part and from a mythological standpoint, however, cyborgs are inhuman when they do not have an awareness of their suffering. They are either defeated or "re-humanized" not just by separating them from the collective, but by making them aware that as a part of the collective, they were actually suffering, but couldn't realize it. Especially in the Star Trek mythos, newly separated Borg describe missing the sounds of the thoughts of others, and must now deal with feeling vulnerable, ineffective, and, most importantly to the mythos, alone. This realization then vindicates and legitimizes our human suffering. The moral of the story is that we all feel alone and vulnerable. That's what makes us human. We should embrace this existential angst, privilege it, and even worship and venerate it.

If Nietzsche were alive today, I believe he would see an amorphous "technology" as the bastard stepchild of the union of the institutions of science and religion. Technology would be yet another mythical iteration of our Apollonian desire to structure and order that which we do not know or understand. I would take this a step further, however. AIs, cyborgs, and singularities are narratives, and they are products of our human survival instinct: to protect the self-aware, self-reflexive, thinking self -- and all of the 'flaws' that characterize it.

Like any religion, then, anything with this techno-mythic flavor will have its adherents and its detractors. The more popular and accepted human enhancements become, the more entrenched anti-technology/enhancement groups will become. Any major leaps in either human enhancement or AI development will create proportionately passionate anti-technology fanaticism. The inevitability of these developments, however, is clear: not because some 'rule' of technological progression exists, but because suffering exists. The byproduct of our advanced cognition and its ability to create a self/other dichotomy (which itself is the basis of representational thought) is an ability to objectify ourselves. As long as we can do that, we will always be able to see ourselves as individual entities. Knowing oneself as an entity is contingent upon knowing that which is not oneself. To be cognizant of an other then necessitates an awareness of the space between the knower and what is known. And in that space is absence.

Absence will always hold the promise (or the hope) of connection. Thus, humanity will always create something in that absence to which it can connect, whether that object is something made in the phenomenal world, or an imagined idea or presence within it. Simply through our ability to think representationally, and without any type of technological singularity or enhancement, we transcend ourselves every day.

And if our myths are any indication, transcendence is a lonely business.





* See Edgar Allan Poe's short story from 1843, "The Man That Was Used Up." French writer Jean de la Hire's 1908 character, the "Nyctalope," was also a cyborg, and appeared in the novel L'Homme Qui Peut Vivre Dans L'Eau (The Man Who Can Live in Water).