Friday, January 13, 2017

Professional news: new journal article

FYI, my article, "Posthuman Trajectories: Cartesian Logic and Ethical Technoprogressivism" is now live at Word and Text.

Here's the abstract:

This article analyses the posthuman trajectories established in René Descartes’s 1637 A Discourse on the Method of Correctly Conducting One’s Reason and Seeking Truth in the Sciences. Moving beyond its references to automata and other ‘technological’ characterizations of the human body and mindedness, I locate a more forceful philosophical trajectory in the text that informs and sustains the very notion of ‘progress’ upon which cultural conceptions of subjectivity, technological development, and transhumanist positions continue to evolve. Descartes’s privileging of the ideal over the material positions the human self as the locus of enquiry and discourse from which progress originates. This may allow one to perceive a certain transhumanist, eschatological trajectory in the Cartesian text. My reading, however, shifts its focus onto Descartes’s desire to see human endeavour as a means of easing human suffering. This, I argue, opens the possibility of an ethical technoprogressivism that can inform our debates over post- and transhumanism today.  

The general ideas for this article were developed informally in my blog. I thought it would be interesting for readers to see the final product.


Thursday, October 20, 2016

Artifacts of Loneliness and Connection

Students often ask me about what it takes to pursue an academic career. One specific question that comes up concerns research: "How do you find your subject matter?" For students who seriously want to consider getting their Ph.D.s in a humanities-related field, my answer can sometimes sound foreboding: the process will take you apart and put you together again. It will force you to face issues that are deeply personal, and you'll find that your area of research is often entangled in psychologically loaded subjects. I believe that this is one of the reasons graduate students are prone to mental health issues as the process unfolds. I always advise students to inquire about the availability of counseling services when looking into graduate programs. The most productive research is often tangled with the personal.

These entanglements, however, can bring insights both academic and personal.

One of the first subjects in this blog was the Aokigahara forest (aka the "suicide forest") in Japan. By chance I stumbled upon a This American Life podcast, "One Last Thing Before I Go," presenting another story from Japan which brings to the forefront connections between the living and the dead. But this time, the symbolic gesture emanates from the living to those who are lost, rather than the opposite, as evidenced by the threads left by suicides in Aokigahara. I'm speaking here of the Japanese "wind phone," through which living relatives symbolically connect to the dead.

While depression may have specific, common symptoms, I think that the affective aspect of it -- how it feels, emotionally -- is poignantly unique for each person. Listening to the wind phone podcast, especially the kind of rapid-fire examples of personal grief (and loneliness) it brought forward, made me think a great deal about my own episodes. My depression sometimes flares unexpectedly, and other times ebbs in slowly like the tide. Understanding it helps me recognize the signs of its arrival, and allows me to work through it and rise above it more quickly. Understanding it also keeps me functional while it's with me. It has always manifested itself as varying degrees of disconnection, and it made me think of certain images and subjects -- namely our relationships to artifacts -- that have defined my own research.

When a tsunami hit Japan in March 2011, the world's attention was focused on the Fukushima nuclear power plant. But as waters and time recede, as with any disaster that takes thousands of lives, the larger upheaval calms and retreats into the depths of private, individual grief. Replayed and reconstructed in memories, the ache of loss is normalized into the routine; it becomes a scar around which the body remains. Numb at its center, it announces itself only in the visual space it occupies and in its discomfort at the edges. At best we forget about it temporarily. At worst, we examine it and remind ourselves of its presence. It speaks as silence; as numbness delineated by the tissue around its edges.

Over 19,000 people lost their lives, and multitudes more were left to grieve -- especially those whose loved ones were never recovered. In Otsuchi, Japan, 421 people were never found. This creates a certain kind of grief. With no physical remains over which to grieve, no physical remainder to fasten ritual or resolution, grief scatters like ash, covering the lives of those left behind. It becomes a fine dust that is only moved around and never quite mitigated. Prior to the disaster, a man named Itaru Sasaki was having difficulty with his own grief. His cousin had died:

He went out and bought an old-fashioned phone booth and stuck it in his garden. It looks like an old English-style one. It's square and painted white, and has these glass window panes. Inside is a black rotary phone, resting on a wood shelf. This phone connected to nowhere. It didn't work at all. But that didn't matter to Itaru. He just needed a place where he felt like he could talk to his cousin, a place where he could air out his grief. And so putting an old phone booth in his garden, which sits on this little windy hill overlooking the Pacific Ocean, it felt like a perfect solution.

For Sasaki, the phone's physical disconnection was not an issue: "because my thoughts could not be relayed over a regular phone line, I wanted them to be carried on the wind ... so I named it the wind telephone -- kaze no denwa [風の電話]."

After the tsunami, people sought out the phone as a means to connect to their loved ones, despite its presence on Sasaki's private property. Sasaki has welcomed the visitors and estimates that over 5,000 people have visited.

As the story continues, we hear heartwrenching audio of people using the phone. Some are skeptical and say "I can't hear anything," others engage in conversation, and still others cathartically apologize and plead for their loved ones to return. Grief is a particular, seemingly contradictory type of loss. It emphasizes the present (as the place where the loss exists) while also alienating us from it by forcing us to rely on the memories of the past. But really, I believe that the pain of loss is singularly housed in the present, because it is in the present that we reconstruct the memories of our past. The wind phone becomes an artifact which aids that reconstruction. The "connection" to the dead is open, with nothing to impede the reconstruction -- the re-writing -- of our memories of who they were. The solitary phone booth is a portal; the receiver is a conduit to something within -- which, in this case, is projected outward, symbolically, through the phone. Dialing the number is a ritual to situate the living. As an artifact, the phone provides a focus that centers the living squarely in their loneliness.

Loneliness is an aspect of grief. When one is physically taken from us, there is both a physical and emotional space that dominates every aspect of our lives. It disorients us. The depression that often follows loss is part of a longer process of recalibrating the self to compensate for the loss.

Listening to the podcast reminded me of a concert I attended when I was in college. It was Peter Gabriel's Secret World Live tour. The opening song, "Come Talk To Me," was apparently about the disconnection Gabriel felt after splitting with his first wife, Jill, and the struggle to connect with his daughter in the aftermath of the divorce. The song was haunting enough for me, but seeing it live affected me on a very deep level. Here's a current link, but I'm not sure how long it will remain there.

The stage is dark. Bagpipes drone. And Gabriel's plaintive voice pleads "Ah please, talk to me / Won't you please, come talk to me / Just like it used to be / Come on, come talk to me / Come talk to me / Come talk to me." As he sings, an old-style British telephone booth appears to rise from beneath the stage. Gabriel is inside, singing into the receiver. He remains in the phone booth through the first stanza of the song as the band rises from the stage and disperses to their places. At the first chorus, Gabriel emerges from the phone booth and attempts to move toward a female singer (Paula Cole) who stands at the far end of the stage (in the studio version, Sinead O'Connor provided the other voice). His progress is impeded by the physical line connecting the receiver to the booth. He pulls and strains against it, making his way closer to Cole. They never connect, and by the latter part of the song, Gabriel is pulled backward toward the box as Cole reaches out to him.

The image has always struck me on a visceral level, just as the story about the wind phone did as I listened to the podcast. In my college days I couldn't understand why the image affected me so emotionally. During my most acute and prolonged bout with depression in grad school, the image would often come up in my counseling sessions.

We seek connection: the image of the solitary phone booth, connected to nothing, in a solitary garden; the image of a man, pulling against a cord that pulls him back into one. In the former, the phone booth is a conduit to the dead. In the latter, it symbolically stands between him and the real person from whom he feels disconnected. Gabriel advances toward Cole with great effort, only to be pulled back and closer to the box. It also brings out the point that the loneliness that often accompanies depression can act as a lens that distorts everything we experience. Chances for connection can be right in front of us, yet we can't or won't see them. People closest to us feel the furthest away, even though they may not have done anything to alienate us. If the people around us are pulling away, it can spark an episode, or it can intensify one already occurring.

These seem to be two very different representations of connection, but what makes them the same is the absence that each is trying to overcome, and the means by which -- symbolically -- they are attempting to mitigate that absence.

The wind phone makes sense in a culture where -- according to Meek -- keeping up a relationship with dead loved ones is not necessarily strange. "The line between our world and their world is thin," she says. A conduit, regardless of the symbolism therein, is conceptually easier to establish. Furthermore, the dead are perpetually reconstructed in the memories of the living. Maintaining a shrine in the home, or "speaking" to them on a telephone connected to nothing, becomes a means to reinforce the reconstruction. Whether the conversations are "straightforward updates about life," or requests of the dead to look after others who have died, or the explicit desperation of loneliness, the dead receive their shape from the living.

The staging of Gabriel's "Come Talk To Me" speaks to a different -- and perhaps more cruel -- loneliness. One in which the memory of those we lost (or, even more tragically, think we lost) must compete with the relentless intimacy of presence. I always interpreted the song as a plea to someone who was right in front of him, yet further away than anyone swept out to sea. "You lie there with your eyes half closed like there's no one there at all / There's a tension pulling on your face / Come on, come talk to me." In that presence, the disconnection is not only emphasized, but stands in active opposition to the speaker's own memory of what the other had been to him. The Hegelian master/slave dualism comes to bear here, but the "other" withdraws itself, leaving only the phenomenal self like an outline of what was: a cold, solid shadow that deflects attempts to know it.

Both losses are predicated on the spaces between the living and the lost. Those spaces have been theorized by the likes of Hegel, Heidegger, Lacan, and several cultural theorists. Each attempts to sanitize space through theoretical and performative filters. They obfuscate loss the same way that Heidegger accuses humanity of setting-in-order loss through ritual or outsourcing it to the "them."  I've often wondered why certain philosophers take such precautions when addressing solitude, loneliness, and loss. What loss was Hegel, Heidegger, or any of the other continental philosophers or cultural theorists dealing with as they sat in their loneliness? What pushed them to let loose such waves of explanation to fill the void?  Biographers can speculate, but their loss will be as private as each of our own.

Loneliness is. We can sanitize the word into "solitude" but anyone who is swimming through the black water of loneliness knows the difference. "Solitude" is loneliness's noble cousin, bolstered and anointed by volition. A "choice" one makes to escape noise and to retreat temporarily into a private space.

Loneliness creeps forward, into one's pores. It wraps around us. Envelops us. It slithers into our spaces quietly. It tempts. And once we engage it, it clings to us; gracefully at first. The dance is a beautiful one. Loneliness flows like a voluminous coat that catches the air and billows around us. It protects us and accompanies us into the same private spaces we occupy when in solitude. It pulls us back into ourselves as we remain distracted by its movements. And then, without any knowledge of the exact moment of its happening, it enters our senses: the space between the point of interface and the knowing of the sensation -- creating impossible, crushing paradoxes. The light that enters our eyes is too bright, yet too dim at the same time. The sound through our ears is too loud but always muffled and incomplete. Our skin, sensitive to its own presence, yet uncomfortably unfeeling to the touch of others. The subtle smells that help establish our sense of place are just out of range, but the rotten and acrid are always with us. Our taste wants what is never there, and eschews what is.

It sinks deeper. A slow leak from a seal that has deteriorated. Seeping in patterns that efface its source. Already confused by our senses, we struggle against ourselves; questioning every word, gesture, and absence. Speak up. I can't make that out. Why is it so bright? Why is it so dark? Where is the thing with the name I can't remember that maybe I brought with me to a place to which I might not have actually gone? Vertigo. What time is it? What day is it? How old am I? Why can't I ...? Why can't I ...?

Confusion turns to rage; a chemical burn just off center in the chest. Points of origin grow unknowable. We know things before they happen, or are we remembering them?

Loneliness is a drug. It is an addiction that promises its own cure. It speaks to us and beckons us forward, calling:  Fling yourself toward me. Hurl yourself into me. Use me. Bend me. Fuck me. Deep, deep into me, into you. I promise you release.

The afterglow glistens like pitch. Drips. Then leaves us cold.

When the other is still present, the phone (interface) becomes the barrier between them which, for whatever reason, seems as if it can't be overcome. When the other is gone (dead), the phone has no barrier of presence.

The barrier of presence. In each instance, the function of the phone remains the same, yet in each, the result is different. When it comes to the dead -- especially in a culture where the line between the living and the memory of the dead is much more permeable -- there is no obstruction to the memory. Memories of the dead can be remade at will. Although, for those with debilitating grief, the memories come uninvited. Still, the connection made is not with an other as much as it is with the self. The loss of the phenomenal other places the burden on the person who remains to re-create the other. The phone then becomes a reminder, a representational device, that helps those who remain re-form what they knew. It becomes an icon around which the memories, thoughts, and prayers can coalesce for a moment of connection. Even for those who "don't hear anything" on the other end, the memory of the lost is still brought into being through desire.

The "Come Talk To Me" performance, however, represents the staggering wound of loss-in-presence. The person is there, phenomenally. He or she is relentlessly present. Yet, the memory of what they were, how they used to be, how we remember them, or how we think they are competes with what we believe to be the cold truth of the real. Regardless of whether or not they have actually changed, or if we are just perceiving and projecting that change, the dynamic plays out the same way. Although the other is always Other, the disconnection brings out the space between. Attempts to bridge that space only serve to reinforce our attention on it. The distance pleads to be overcome. The space beckons to be filled. We hurl our "selves" forward into it. Lines of connection. Performances. Words. Gestures. Anything that might have meaning. In the void, those objects and texts announce themselves in their stark phenomenality. As they fly toward the other, they are stripped of meaning. Dulled and made bright at the same time. Dull in meaning because what we know of them no longer matches what they are. Bright in ostension because that unknown has made them into someone different, more present, like a stranger in a familiar place.

This is what we used to be. And now, estranged. The real stands in the way of our reconstruction of the other. For those who converse on the wind phone, and imagine the responses of the lost, or imagine that the dead are listening, the phone becomes an artifact through which that reconstruction of the other is made manifest. Even for those who "can't hear anything," they continue to speak. In these instances, the phone-as-artifact disappears in its use.

But for those faced with the loss symbolized in the "Come Talk To Me" performance, any artifact of communication only exacerbates the loss. The medium stands as a surrogate for our own inability to articulate ourselves. The performance turns the dynamic inside-out. With the other in front of Gabriel, almost within reach, he remains tethered to the artifact. The phone booth starts as a cage; he emerges from it, reaching out, but must pull against the receiver, and becomes entangled in the cord itself. At the closest moment of contact, leaning against the weight, he is pulled back, struggling against it. He could let go of the phone. But, symbolically, he never thinks to do so. To speak to a disembodied voice, we search for cues to help us understand. Without them, we are caught between an ambiguous voice and our own reconstruction of the other. As the other recedes, the intensity of the medium becomes oppressively apparent. What should connect us has only made our separation even more real.

Artifacts extend us. I have stated repeatedly in my own work that artifacts extend our "efficacy." This is true. However, "efficacy" connotes the ability to produce a desired or intended result -- as a means to achieve an end. When that result is not brought about, we become fixated on the attempt. And it's in those moments that we grasp the artifact more firmly, and feel its resistance.

Tuesday, August 16, 2016

Suicide Squad: The Precipice of Dominance and Submission

Note: After suffering a little burnout this summer resulting from some research and revision, I needed a slight break from my regular subject matter. This entry is a departure from my usual posthuman, technology-related posts. It contains mature content and covers topics such as BDSM (bondage, discipline, submission, and sadomasochism) and D/s (Dominance and submission) -- topics which I often discuss in my philosophy & gender courses. Links included here are not explicit but also are not necessarily safe for work. Themes may be disturbing for some. For more information on safe and responsible D/s practices, please see the National Coalition for Sexual Freedom. And always play safely and responsibly!

David Ayer's Suicide Squad had a lot going on. Cutting through the critical reviews and the general noise that always surrounds DC movies (some critics said it needed to be funnier, yet others said it needed to be darker), I'd like to focus on the relationship between Harley Quinn and the Joker. To be clear, there are somewhat disturbing portrayals of violence and abuse in regard to their relationship. The Joker is a sadistic, psychotic sociopath. Harley is equally disturbed -- and a case could be made that her own behavior is a product of the Joker's abuse. But I believe that there are subtle cues in the film that present an alternative reading of the Harley/Joker dynamic, presenting a darkly veiled Dominant/submissive relationship.

The relationship between the two has always been an interesting one, especially since it first evolved in the 90s Batman: The Animated Series. Quinn was a rare character created for a peripheral DC medium who made it into the comic canon. Dr. Harleen Frances Quinzel started out as the Joker's therapist in Arkham Asylum, only to be slowly manipulated and brainwashed by the Joker until she had her own psychological break, becoming a villain in her own right. Even in the animated series, she was ruthless and chaotic, showing a penchant for oversize mallets, guns, and the occasional cartoonish bomb. As she evolved through the comics and became a more well-rounded, complex character, it became apparent that under her ditzy facade was a calculating, sometimes terrifying persona whose only psychological and emotional lodestone was the Joker himself. She was very much her own person, but her chosen center was him. Her relationship with the Joker eventually became more complicated, and she has also been associated with other DC villainesses, most notably Poison Ivy, with whom she recently became romantically involved.

While Harley's sexuality has evolved in the cartoons and comics, Suicide Squad explores a deeper facet of her sexuality: Harley is a submissive to the Joker's Dominant. The interesting part, however, is the complex, stylized -- and often insightful -- way in which their D/s relationship is portrayed on screen. Rather than being a stereotypical, Fifty Shades of Grey-type submissive, she is a very strong, positionally independent sub, meaning that when not in the presence of the Joker (and even when in the presence of the Joker), she is what Michael Makai would call a "Warrior Princess Submissive." Although the label is somewhat misleading given the sometimes pejorative connotation of "princess,"[1] it aptly describes Harley Quinn's role: "She is the wicked-smart, strong-willed, uber-competent, ultra-competitive, synergistic, switchy [as in, can also play the role of Dominant when needed], crusader. She's no one's doormat, never a victim." There is very much a sense of independence to Harley Quinn, so much so that her devotion to the Joker outside of D/s circles might seem paradoxical. But, as Makai continues: "she is willing and able to fight the good fight alone, but welcomes the notion of having a worthy partner fighting by her side. And yet, when the day's fighting is done, she is perfectly at ease with considering herself entirely his -- heart, mind, body, and soul. She is important because she may just be the hope and salvation of this [D/s] lifestyle."

Cultural clues to her participation in a D/s relationship are peppered throughout the film in a few recognizable bondage archetypes. When she is transported around in prison she is restrained -- at one point with a ball-gag. And when she is later tortured by the Joker, she is strapped to a gurney and gagged with a leather belt. She also wears a collar bearing "Puddin," her nickname for the Joker. For a mainstream audience, the fetish imagery is enough to either disturb or titillate in the same fashion as images of Bettie Page or Dita Von Teese -- fetish legends themselves who have been portrayed as both Dommes and subs (Mistresses holding whips or in dominant positions, or submissives who are bound/gagged or otherwise in subservient positions). 

But the relationship between the Joker and Harley Quinn in Suicide Squad presents a very clear D/s relationship for those who identify as part of the D/s or BDSM spectrum. Through a D/s lens, Harley's devotion to the Joker is a choice -- rather than a type of codependency.[2] This, I believe, is where most would disagree, maintaining that Harley has been manipulated by the Joker and brainwashed, especially given the lack of an explicit moment of consent. However, if we broaden our view to take into account Dr. Quinzel's qualifications as a psychiatrist and her capacity to recognize a patient's ability to manipulate others, her fascination and eventual complicity become a reasoned choice given her history. While this would not immunize her from codependency completely, the grey area of when and where she decides to begin to engage the Joker as a "warrior submissive" is clarified when we take into account the logos of the comic book universe of which she is a part.

In the tradition of Batman-related villains, we see that even those who are "turned" (most notably, Harvey Dent/Two-Face) often do so because they have a potential or latent tendency which drives them to crime. The basic formula of the Batman canon of DC is that an emotionally seismic event of some kind (usually the death of a parent, spouse, or child, or something life-threatening to the individual him or herself) forces a choice to engage the character's darker nature. This brings out the individual's "true" morality, and allows them to tap into latent abilities (either human or metahuman) which enable them to bring justice or chaos to a world which they become convinced needs it. The same generally holds true for heroes within the Batman (and broader DC) universe: in a "moment of truth," heroes face the choice to use (or not use) their powers. This is dangerous ground when applied to a female character who is potentially victimized by a male antagonist. But even though it is problematic, I do believe that the Joker/Harley relationship as presented in Suicide Squad contains enough elements to support a D/s read. As per her history in the DC universe canon, Dr. Quinzel understands the danger in engaging with the Joker. She is aware of his capacity to manipulate. She is not a patsy to a "superior" intellect or to emotional/psychological blackmail. She chooses to take the leap into the Joker's world.

In terms of D/s sexuality, dominance and submission is a spectrum -- and those who identify themselves as a part of the spectrum tend to know where they fall at a young age, even if they have no label for it. Images of characters being tied up or otherwise restrained can often cause "strange" feelings that, as adults, a Dominant or submissive can retrospectively identify as the first clues of their sexuality. If we speculate for a moment that Harleen Quinzel falls on the submissive side of a D/s spectrum, her attraction to the Joker -- more specifically, his power -- would make sense, especially since Quinzel herself is often portrayed as a gifted psychologist. She is strong and independent, making her choice to "submit" to the Joker even more significant and -- to some -- more moving. The Joker-as-Dominant also seems obvious on first viewing, albeit briefly. The Joker is a sadist and enjoys inflicting pain. He revels in the physical, psychological, and emotional pain of others. Let's make one thing clear, however. The Joker is psychotic (as is Harley). His desire to harm or injure others against their will is sociopathic and morally wrong. Furthermore, one need not be a sadist to be a Dominant. But in his interactions with Harley, we see a D/s dynamic pan out quite clearly.

What I really liked about Harley Quinn was that she was a powerful character in and of herself. This is the part that is often misunderstood about D/s relationships: it is not about weakness vs. strength, it is about power and how the Dominant and the submissive engage with it. The term "power exchange" is a good one, but I think that it often puts forth the idea that submissives completely "give up" their power when in the presence of a Dominant and/or according to the "scene"[3] in which the two are engaged. "Giving up" implies that power is "taken" in a one-sided fashion. However, the satisfaction that the Dominant achieves is contingent upon the submissive. A responsible Dominant must be completely in tune with the desires, limits, and needs of the submissive; the Dominant must be able to "read" the sub and guide the scene accordingly. Hence, there is an exchange of power, since one word or signal by the submissive can immediately end the scene. A good Dominant, while ostensibly in control, follows the submissive's lead. Additionally, each person in a D/s relationship must be clear about his or her boundaries, expectations, and hard limits. Each must be completely honest with the other before, during, and after a scene. The often parodied "safe word" or safe signal is an absolute that all parties must honor. Dominants have boundaries as well, especially if a submissive desires a scene that involves something that is either physically dangerous for the submissive or emotionally troubling for the Dominant. The contract works both ways. The submissive may be literally bound by the Dominant, but the Dominant is figuratively bound by the submissive.

Harley does have influence over the Joker because she is very much a strong woman at every turn -- whether or not she's in the presence of the Joker. When he is absent, Harley takes initiative, never needs rescuing, and has a keen insight into the psyches of the other characters, and she effectively manipulates them and uses them to her advantage. She asserts herself at every turn, and shows almost reckless confidence. She is also physically formidable, and tends to dispatch opponents with a baseball bat over a gun; and when cornered by multiple foes she takes them down with a balance of precision and showmanship. She does not cower. She does not stammer. She does not defer.

While it may be difficult (and for some, morally questionable) to separate the Joker's psychotic, homicidal, sociopathic, and generally murderous tendencies from his role of Joker-as-Dominant, there are definite markers that show a Dominant sexuality. As a Dominant -- and like anyone who either dabbles in D/s part time or lives a full D/s lifestyle -- he is drawn to power. One could say that his obsession with Batman is very much an aspect of that. Homoerotic theories aside, the Bat is someone who also wields power in a theatrical and effective way. With Harley, however, there is a challenge. Dr. Quinzel is smart, clearly strong, and very much her own person. Joker-as-Dominant is not "turning" her as much as he is "courting" her. The fact that he is in a straitjacket during their therapy sessions is not inconsequential; it highlights the fact that his seduction is an intellectual one. He must, like any responsible Dominant, allow Dr. Quinzel to make the choice to commit to him. Again, there is clearly manipulation, but Dr. Quinzel would know when he is trying to manipulate her. Like any responsible and insightful submissive, she knows what the Joker is trying to do and understands those advances. Ultimately, she chooses to engage. And she makes the choice long before she has any physical contact with the Joker.

From a Dominant perspective, anyone who is easily manipulated is not someone with whom a Dominant would want to "play," because manipulating someone into a submissive relationship negates their power, eliminating the passion that results from authentic desire. Someone who makes a conscious choice to submit is not only strong mentally, but strong in their own identity. They know who they are, they know what they want, and they know from whom they can get it. Harley Quinn decides to shed her identity as Dr. Harleen Quinzel and commit herself to a relationship in which she gives herself fully to the Joker. She is "collared" with her name for him ("Puddin"), and she places herself in an orbit around him which still allows her the freedom to fully express herself.

I know that I'm on shaky ground here, especially for those not familiar with D/s relationships. In the film, Quinn's "transformation" would seem to be predicated on an electroconvulsive torture session with the Joker, who straps her to a gurney, holds two electrical leads, and says the film's iconic phrase: "Oh, I'm not gonna kill ya. I'm just gonna hurt ya, really, really bad."

Dr. Quinzel's response: "I can take it."

By no means am I justifying non-consensual torture. But, from a D/s perspective and in a comic book film idiom, the torture session is part of Harley's extended "transformation." It is very much their "first scene," and Harley does, indeed, take it, proving her strength to the Joker, and proving that her opinion of him -- and her commitment -- hasn't changed. If anything, it has shed her of the "person suit" (to borrow from Hannibal) that was Dr. Quinzel and allowed her to be a 24/7, out submissive. Soon after, Harley stands literally at a precipice, with bubbling chemicals below. The Joker asks "Would you die for me?" -- to which she quickly assents, and which she is willing to prove. The Joker immediately amends his question:

"Would you live for me?"
"Careful. Do not say this oath thoughtlessly. Desire becomes surrender. Surrender becomes power. You want this?"
"I do."
"Say it. Say it. Say it. Pretty pretty pretty pretty pretty pretty pretty please."
"Oh God, you're so GOOD."

Harley then allows herself to fall backward, plunging into one of the vats. Harley's "baptism" is the final step of her transformation. For some, this would prove that Harley has been utterly "brainwashed," and is the Joker's pawn. However, from a D/s perspective, this is Harley's "test" of the Joker -- a moment in which he must make a choice to pursue her -- and thus uphold his side of a D/s contract: to be devoted to her, to commit to her, and to allow her to be the submissive she is -- in all of its strength and power.

Ironically, the only moment of hesitation comes from the Joker himself. After she falls, he starts to walk away. But he pauses -- almost begrudgingly -- turns, and then gracefully swan-dives to her rescue, cradling her in his arms as they rise from the ooze. The Joker's choice to jump in after her places Harley in a position of power and is indicative of a confirmed power exchange. She has set the parameters of their relationship, and the Joker's dive seals the contract. He will always come back for her (at least in this film), and often at great cost. In many ways it is a dark, D/s version of the Superman/Lois Lane relationship established in the current DC film universe. Where Harley is, the Joker will follow. As with any deep D/s relationship, both partners must consent to commitment, understanding their specific responsibilities. Of course, the film doesn't necessarily explore the contractual nature of a D/s relationship explicitly. But the Joker's dive and his very explicit devotion to his submissive show a clear sense of obligation to Quinn, not to mention the fact that he stages a massive rescue operation to break Harley out of a heavily fortified prison at the conclusion of the film.

Together, the Joker and Harley are a formidable partnership. Their mutual devotion allows each to express themselves fully (albeit psychotically). Harley's devotion to the Joker does not entail mindlessness or a deferential attitude. Harleen Frances Quinzel chooses to express her power by transforming into Harley Quinn, a willing participant and "Warrior Submissive" to the Joker's "Ineffable Dominant."[4] Conversely, the Joker willingly gives himself over to his submissive through his implicit commitment to her, even putting his own life at risk. In one of the more poignant scenes in the film, when Harley believes the Joker has been killed in a helicopter crash, she removes her "Puddin" collar and stares forlornly through the rain. There is a sense of foreboding to the scene as well, since without the Joker as her chosen center, she no longer has a focus for her energy. The scene hints that Harley will now be more dangerous and unpredictable than she has ever been before.

I don't expect the DC film universe to pursue this relationship in its entirety, but I do think that David Ayer's portrayal of the Joker/Harley relationship is much more complex than it seems upon first viewing. Sadly, I doubt that the same executives responsible for re-cutting this film (and Batman v Superman) would take the risk of giving the Joker/Harley D/s relationship the attention it deserves.

[1] Personally, I drop the "princess" and call this type of submissive "The Warrior."
[2] For an excellent discussion of the differences between codependency and submission, see "Submission and Codependency -- A Discussion" from the His Left Side Angel blog.
[3] An encounter involving BDSM role-playing and/or specific instances of power exchange which may or may not be sexual in nature.
[4] According to Makai: "The Ineffable Dominant .... consciously explore[s] and borrow[s] traits and characteristics from other dominant categories. The synergy created with each new partner brings new facets to the Ineffable Dom's unique (and sometimes indescribable) topping style."

Tuesday, May 10, 2016

End-of-the-semester update!

Things have been quiet at Posthuman Being, but for good reason. I've been working on a couple of articles for publication, both of which were due within a month-and-a-half of each other.  Add to that the usual end-of-semester grading, and time for writing does get squeezed out. But the semester is over at Western and my grades are in. I'll be traveling for a couple of weeks to recharge and re-center. But I have a few post ideas brewing on some interesting topics, and hopefully some things for IEET.

Thanks to everyone for their patience. I'm looking forward to exploring some interesting territory!

Tuesday, January 19, 2016

Mythic Singularities: Or How I Learned To Stop Worrying and (kind of) Love Transhumanism

... knowing the force and action of fire, water, air the stars, the heavens, and all the other bodies that surround us, as distinctly as we know the various crafts of our artisans, we might also apply them in the same way to all the uses to which they are adapted, and thus render ourselves the lords and possessors of nature.  And this is a result to be desired, not only in order to the invention of an infinity of arts, by which we might be enabled to enjoy without any trouble the fruits of the earth, and all its comforts, but also and especially for the preservation of health, which is without doubt, of all the blessings of this life, the first and fundamental one; for the mind is so intimately dependent upon the condition and relation of the organs of the body, that if any means can ever be found to render men wiser and more ingenious than hitherto, I believe that it is in medicine they must be sought for. It is true that the science of medicine, as it now exists, contains few things whose utility is very remarkable: but without any wish to depreciate it, I am confident that there is no one, even among those whose profession it is, who does not admit that all at present known in it is almost nothing in comparison of what remains to be discovered; and that we could free ourselves from an infinity of maladies of body as well as of mind, and perhaps also even from the debility of age, if we had sufficiently ample knowledge of their causes, and of all the remedies provided for us by nature.
- Rene Descartes, Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences, 1637

As a critical posthumanist (with speculative leanings), I have always found myself a little leery of transhumanism in general. Much has been written on the difference between the two, and one of the best and most succinct explanations can be found in John Danaher's "Humanism, Transhumanism, and Speculative Posthumanism." Very briefly, I believe it boils down to a question of attention: a posthumanist, whether critical or speculative, focuses his or her attention on subjectivity -- investigating, critiquing, and sometimes even rejecting the notion of a homuncular self or consciousness, and the assumption that the self is some kind of modular component of our embodiment. Being a critical posthumanist does make me hyper-aware of the implications of Descartes's ideas presented above in relation to transhumanism. Admittedly, Danaher's statement "Critical posthumanists often scoff at certain transhumanist projects, like mind uploading, on the grounds that such projects implicitly assume the false Cartesian view" hit close to home, because I am guilty of the occasional scoff.

But there really is much more to transhumanism than sci-fi iterations of mind uploading and AIs taking over the world, just as there is more to Descartes than his elevation, reification, and privileging of consciousness. From my critical posthumanist perspective, the hardest pill to swallow with Descartes was never the model of consciousness he proposed; it is the way that model has been taken so literally -- as a fundamental fact -- that has been one of the deeper issues driving me philosophically. But, as I've often told my students, there's more to Descartes than that. Examining Descartes's model as the metaphor it is gives us a more culturally based context for his work and a better understanding of its underlying ethics. I think a similar approach can be applied to transhumanism, especially in light of some of the different positions articulated in Pellissier's "Transhumanism: There are [at least] ten different philosophical categories; which one(s) are you?"

Rene Descartes's faith in the ability of human reason to render us "lords and possessors of nature" through an "invention of an infinity of arts" is, to my mind, one of the foundational philosophical beliefs of transhumanism. And his later statement, that "all at present known in it is almost nothing in comparison of what remains to be discovered," becomes its driving conceit: the promise that answers could be found which could, potentially, free humanity from "an infinity of maladies of body as well as of mind, and perhaps the debility of age." It follows that whatever humanity can create to help us unlock those secrets is thus a product of human reason. We create the things we need to help us uncover "what remains to be discovered."

But this ode to human endeavor eclipses the point of those discoveries: "the preservation of health" which is "first and fundamental ... for the mind is so intimately dependent on the organs of the body, that if any means can ever be found to render men wiser and more ingenious ... I believe that it is in medicine that it should be sought for."

Descartes sees the easing of human suffering as one of the main objectives of scientific endeavor. But this aspect of his philosophy is often eclipsed by the seemingly infinite "secrets of nature" that science might uncover. As is the case with certain interpretations of the transhumanist movement, the promise of what can be learned often eclipses the reasons why we want to learn it. And that promise can take on mythic properties. Even though progress is its own promise, a transhuman progress can become an eschatological one, caught between a Scylla of extreme interpretations of "singularitarian" messianism and a Charybdis of similarly extreme interpretations of "survivalist transhuman" immortality. Both are characterized by a governing mythos -- or set of beliefs -- that is technoprogressive by nature but risks fundamentalism in practice, especially if we lose sight of a very important aspect of technoprogressivism itself: "an insistence that technological progress needs to be wedded to, and depends on, political progress, and that neither are inevitable" (Hughes 2010, emphasis added). Critical awareness of the limits of transhumanism is similar to critical awareness of any functional myth. One does not have to take the Santa Claus or religious myths literally to celebrate Christmas; instead, one can understand the very man-made meaning behind the holiday and the metaphors therein, and choose to express or follow that particular ethical framework accordingly, fully aware that it is an ethical framework that can be adjusted or rejected as needed.

Transhuman fundamentalism occurs when critical awareness that progress is not inevitable is replaced by an absolute faith and/or literal interpretation that -- either by human endeavor or via artificial intelligence -- technology will advance to a point where all of humanity's problems, including death, will be solved. Hughes points out this tension: "Today transhumanists are torn between their Enlightenment faith in inevitable progress toward posthuman transcension and utopian Singularities, and their rational awareness of the possibility that each new technology may have as many risks as benefits and that humanity may not have a future" (2010). Transhuman fundamentalism characterized by uncritical inevitabilism would interpret progress as "fact" -- that is to say, progress will happen and is imminent. By reifying (and eventually deifying) progress, transhuman fundamentalism would actually forfeit any claim to progress by severing it from its human origins. Like a god created by humans out of a very human need, but whose origins are then forgotten, progress stands as an entity separate from humanity, taking on a multitude of characteristics that render it ubiquitous and omnipotent: progress can and will take place. It has and it always will, regardless of human existence; humanity can choose to unite with it, or find itself doomed.

Evidence for the inevitability of progress comes by way of pointing out specific scientific advancements and then falling back on speculation that x advancement will lead to y development, as outlined in Verdoux's "historical" critique of faith in progress, which identifies a "'progressionist illusion' that history is in fact a record of improvement" (2009). Kevin Warwick has used rat neurons as CPUs for his little rolling robots: clearly, we will be able to upload our minds. I think of this as a not-so-distant cousin of the intelligent design argument for the existence of God. Proponents point to the complexity of various organic (and non-organic) systems as evidence that a designer of some kind must exist. Transhuman fundamentalist positions point to small (but significant) technological advancements as evidence that an AI will rise (Singularitarianism) or that death itself will be vanquished (Survivalist Transhumanism). It is important to note that neither position is in itself fundamentalist in nature. But I do think that these two particular frameworks lend themselves more easily to a fundamentalist interpretation, due to their more entrenched reliance on Cartesian subjectivity, Enlightenment teleologies, and eschatological religious overtones.

Singularitarianism, according to Pellissier, "believes the transition to a posthuman will be a sudden event in the 'medium future' -- a Technological Singularity created by runaway machine superintelligence." Pushed to a fundamentalist extreme, the question for the singularitarian is: when the posthuman rapture happens, will we be saved by a techno-messiah, or burned by a technological antichrist? Both arise by the force of their own wills. But if we look behind the curtain of the great and powerful singularity, we see a very human teleology. The technology from which the singularity is born is the product of human effort. Subconsciously, the singularity is not so much a warning as it is a speculative indulgence in the power of human progress: the creation of consciousness in a machine. And though singularitarianism may call it "machine consciousness," the implication that such an intelligence would "choose" to either help or hinder humanity always already presupposes a very anthropomorphic consciousness. Furthermore, we will arrive at this moment via some major scientific advancement that always seems to be between 20 and 100 years away, such as "computronium," or programmable matter. This molecularly-engineered material, according to more Kurzweilian perspectives, will allow us to convert parts of the universe into cosmic supercomputers which will solve our problems for us and unlock even more secrets of the universe. While the idea of programmable matter is not necessarily unrealistic, its mythical qualities (somewhere between a kind of "singularity adamantium" and a "philosopher's techno-stone") promise the transubstantiation of matter toward unlimited, cosmic computing, thus opening up even more possibilities for progress. The "promise" is for progress itself: that unlocking certain mysteries will provide an infinite number of new mysteries to be solved.

Survivalist Transhumanism can take a similar path in terms of technological inevitabilism, but pushed toward a fundamentalist extreme, it awaits a more Nietzschean posthuman rapture. According to Pellissier, Survivalist Transhumanism "espouses radical life extension as the most important goal of transhumanism." In general, the movement seems to be awaiting advancements in human augmentation which are always already just out of reach but will (eventually) overcome death and allow the self (whether bioengineered or uploaded to a new material -- or immaterial -- substrate) to survive indefinitely. Survivalist transhumanism with a more fundamentalist flavor would push to bring the Nietzschean Ubermensch into being -- literally -- despite the fact that Nietzsche's Ubermensch functions as an ideal toward which humans should strive. He functions as a metaphor for living one's life fully, not subject to a "slave morality" governed by fear and by placing one's trust in mythological constructions treated as real artifacts. Even more ironic is the fact that the Ubermensch is not immortal and is at peace with his imminent death. Literal interpretations of the Ubermensch would characterize the master-morality human as overcoming mortality itself, since death is the ultimate check on the individual's development. Living forever, from a more fundamentalist perspective, would provide infinite time to uncover infinite possibilities and thus make infinite progress. Think of all the things we could do, build, and discover, some might say. I agree. Immortality would give us time -- literally. Without the horizon of death as a parameter of our lives, we would -- eventually -- overcome a way of looking at the universe that has been a defining characteristic of humanity since the first hominids with the capacity to speculate pondered death.

But in that speculation is also a promise. The promise that conquering death would allow us to reap the fruits of the inevitable and inexorable progression of technology. Like a child who really wants to "stay up late," there is a curiosity about what happens after humanity's bedtime. Is the darkness outside her window any different after bedtime than it is at 9pm? What lies beyond the boundaries of late-night broadcast television? How far beyond can she push until she reaches the loops of infomercials, or the re-runs of the shows that were on hours prior?  And years later, when she pulls her first all-nighter, and she sees the darkness ebb and the dawn slowly but surely rise just barely within her perception, what will she have learned?

It's not that the darkness holds unknown things. To her, it promises things to be known. She doesn't know what she will discover there until she goes through it. Immortality and death metaphorically function in the same way: Those who believe that immortality is possible via radical life extension believe that the real benefits of immortality will show themselves once immortality is reached and we have the proper perspective from which to know the world differently. To me, this sounds a lot like Heaven: We don't know what's there but we know it's really, really good. In the words of Laurie Anderson: "Paradise is exactly like where you are right now, only much, much better." A survivalist transhuman fundamentalist version might read something like "Being immortal is exactly like being mortal, only much, much better."

Does this mean we should scoff at the idea of radical life extension? At the singularity and its computronium wonderfulness? Absolutely not. But the technoprogressivism at the heart of  transhumanism need not be so literal. When one understands a myth as that -- a set of governing beliefs -- transhumanism itself can stay true to the often-eclipsed aspect of its Cartesian, enlightenment roots: the easing of human suffering. If we look at transhumanism as a functional myth, adhering to its core technoprogressive foundations, not only do we have a potential model for human progress, but we also have an ethical structure by which to advance that movement. The diversity of transhuman views provides several different paths of progress.

Transhumanism has at its core a technoprogressivism that even a critical posthumanist like me can get behind. If I am a technoprogressivist, then I do believe in certain aspects of the promise of technology. I do believe that humanity has the capacity to better itself and do incredible things through technological means. Furthermore, I do feel that we are in the infancy of our knowledge of how technological systems are to be responsibly used. It is a technoprogressivist's responsibility to mitigate myopic visions of the future -- including those visions that uncritically mythologize the singularity or immortality itself as an inevitability.

To me it becomes a question of exactly what the transhumanist him- or herself is looking for from technology, and how he or she conceptualizes the "human" in those scenarios. The reason I still call myself a posthumanist is that I think we have yet to truly free ourselves of antiquated notions of subjectivity itself. The singularity, to me, seems as if it will always be a Cartesian one: a "thing that thinks," aware of itself thinking, and therefore sentient. Perhaps the reason we have not yet reached a singularity is that we're approaching subjectivity and volition from the wrong direction.

To a lesser extent, I think that immortality narratives are mired in rehashed religious eschatologies in which "heaven" is simply replaced with "immortality." As for radical life extension, what are we trying to extend? Are we tying "life" simply to the ability to be aware of ourselves being aware that we are alive? Or are we looking at the quality of the extended life we might achieve? I do think that we may extend the human lifespan to well over a century. What will be the costs? And what will be the benefits? Life extension is not the same as life enrichment. Overcoming death is not the same as overcoming suffering. If we can combat disease and mitigate the physical and mental degradation that characterize aging, thus leading to an extended lifespan free of pain and mental deterioration, then so be it. However, easing suffering and living forever are two very different things. Some might say that the easing of suffering is simply "understood" within the overall goals of immortality, but I don't think it is.

Given all of the different positions outlined in Pellissier's article, "cosmopolitan transhumanism" seems to make the most sense to me. Coined by Steven Umbrello, this category combines the philosophical movement of cosmopolitanism with transhumanism, creating a technoprogressive philosophy that can "increase empathy, compassion, and the [unified] progress of humanity to become something greater than it currently is. The exponential advancement of technology is relentless, it can prove to be either destructive or beneficial to the human race." This advancement can only be achieved, Umbrello maintains, via an abandonment of "nationalistic, patriotic, and geopolitical allegiances in favor [of] global citizenship that fosters cooperation and mutually beneficial progress."

Under that classification, I can call myself a transhumanist. A commitment to enriching life rather than simply creating it (as an AI) or extending it (via radical life extension) should ethically shape the leading edge of a technoprogressive movement, if only to break a potential cycle of polemics and politicization internal and external to transhumanism itself. Perhaps I've read too many comic books and have too much of a love for superheroes, but in today's political and cultural climate, a radical position on one side can unfortunately create an equally radical opposite. If technoprogressivism rises under fundamentalist singularitarian or survivalist transhuman banners, equally passionate luddite, anti-technological positions could rise and do real damage. Speaking as a US citizen, I am constantly aghast at the overall ignorance people have toward science, and at the ways in which the very concept of "scientific theory" and the very definition of a "fact" have been skewed and distorted. If parts of the population still believe that vaccines cause autism or don't believe in evolution, do we really think that a movement toward an artificial general intelligence will be taken well?

Transhumanism, specifically the cosmopolitan kind, provides a needed balance of progress and awareness. We can and should strive toward aspects of singularitarianism and survivalist transhumanism, but as the metaphors and ideals they actually are.


Anderson, Laurie. 1986. "Language Is a Virus." Home of the Brave.

Descartes, Rene. 1637. Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences.

Hughes, James. 2010. "Problems of Transhumanism: Belief in Progress vs. Rational Uncertainty."

Pellissier, Hank. 2015. "Transhumanism: There Are [at Least] Ten Different Philosophical Categories; Which One(s) Are You?"

Verdoux, Philippe. 2009. "Transhumanism, Progress and the Future."  Journal of Evolution and Technology 20(2):49-69.

Saturday, January 2, 2016

New Developments and Working with IEET

Thanks to a link by Danko Nikolic, a few weeks ago IEET (the Institute for Ethics and Emerging Technologies) reached out to me to repost some of my entries. I'm really excited to work with them and hopefully produce some original content for them as well. My first entry is now live: "The Droids We're Looking For."

Since Posthuman Being will probably be getting a few more hits than usual, I wanted to take the opportunity to quickly summarize the overall purpose of my blog, as opposed to original pieces I may write for other sites or chapters/articles in other publications. 

I've always viewed Posthuman Being as an informal -- but still somewhat academic -- "sandbox" for my ideas in relation to the classes I teach at Western State Colorado University and the more formal academic writing in which I am engaged. I am currently working on a few projects which have their roots in several of the posts which have appeared here. 

As you can see, there are usually some large gaps in time between posts. This is due to my teaching schedule as well as the other projects in which I'm involved. However, as things evolve, I hope to post shorter, more regular entries. 

I have also established a public Facebook page for regular updates and announcements, and as always I will be updating on  my Google+ page as well.

I look forward to this next stage of my research and hope that these past (and future) entries are interesting, informative, and spark more discussion!  



Wednesday, September 30, 2015

The Droids We're Looking For

I've been a fan of Cynthia Breazeal for well over a decade, and have watched her research evolve from her early doctoral work with Kismet to her current work as the creator of JIBO and founder of Jibo, Inc. What I found so interesting about Dr. Breazeal was her commitment to creating not just artificial intelligence, but a robot people could interact with in a fashion similar to human beings -- but not exactly like human beings. In her book, Designing Sociable Robots, she provides an anecdote about what inspired her to get involved with artificial intelligence and robots in the first place: Star Wars. At first I thought this resonated with me simply because she and I had the same Gen X contextual basis. I was five when the first Star Wars film was released in 1977, and it was the technology (the spaceships and especially the droids) that got me hooked. But upon further thought, I realized that Breazeal's love of Star Wars seems to have inspired her work in another, more subtle way. The interactions that humans have with droids in the Star Wars universe aren't exactly egalitarian. That is to say, humans don't see the droids around them as equals. In fact, the interactions that humans -- and just about any of the organic, anthropomorphic aliens -- have with droids are very much based on the function of the droids themselves.

For example, R2D2, being an "astromech" droid, is more of a utilitarian repair droid. It understands language, but does not have a language that humans can readily understand without practice or an interpreter. Yet even without knowing the chirps and beeps, their tone gives us a general idea of its mood. We have similar examples of this in WALL-E, where the titular robot conveys emotion via nonverbal communication and "facial expressions," even though he doesn't really have a face, per se. But, getting back to Star Wars, if we think about how other characters interact with droids, we see a very calculated yet unstated hierarchy. The droids are very much considered property, are turned on and off at will, and are very "domain specific." In fact, it is implied that objects like ships (the Death Star, the Millennium Falcon), and even things like moisture vaporators on Tatooine, have an embedded AI with which higher-functioning droids like R2D2 can communicate, and which they can control and -- as is the function of C3PO -- translate. Granted, there are droids built as soldiers, bodyguards, and assassins, but it takes a deep plunge into fan fiction and the tenuously "expanded" Star Wars universe to find an example or two of droids that went "rogue" and acted on their own behalf, becoming bounty hunters and, I'm sure, at some point wanting a revolution of some sort.

Trips into Star Wars fandom aside, the basic premise and taxonomy of the droids in Star Wars seem to represent a more realistic and pragmatic evolution of AI and AI-related technologies (sans the sentient assassins, of course). If we make a conscious effort to think, mindfully, about artificial intelligence, rather than letting our imaginations run away with us and bestowing our human ontology onto it, then the prospect of AI is not quite as dramatic, scary, or technologically romantic as we may think.

I mean, think -- really think -- about what you want your technology to do. How do you really want to interact with your phone, tablet, laptop, desktop, car, house, etc.? Chances are, most responses orbit around the idea of the technology being more intuitive. In that context, it implies a smooth interface. An intuitive operating system implies that the user can quickly figure out how it works without too much help. The more quickly a person can adapt to the interface or the 'rules of use' of the object, the more intuitive that interface is. When I think back to the use of this word, however, it has an interesting kind of dual standing. That is to say, at the dawn of the intuitive interface (the first Macintosh computer, and then later iterations of Windows), intuitive implied that the user was able to intuit how the OS worked. In today's landscape, the connotation of the term has expanded to the interface itself: how does the interface predict how we might use it based on a certain context? If you sign into Google and allow it to know your location, the searches become more contextually based, especially when it also knows your search history. Search engines, Amazon, Pandora, etc., have all been slowly expanding the intuitive capacities of their software, meaning that, if designed well, these apps can predict what we want, making it seem like they knew what we were looking for before we did. In that context, 'intuitive' refers to the app, website, or search engine itself. As in, Pandora intuits what I want based on my likes, skips, time spent on songs, and even time of day, season, and location.
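To make that second sense of "intuitive" concrete, here is a toy sketch of context-aware prediction. Everything in it is invented for illustration -- the track data, the weights, and the "mellow after 10pm" rule are my assumptions, not how Pandora or Google actually work:

```python
# Hypothetical sketch of context-aware "intuition": blend explicit
# feedback (likes/skips) with context (time of day). All names, weights,
# and rules are invented for illustration.

def score_track(track, likes, skips, hour):
    """Higher score = better guess at what the listener wants right now."""
    base = likes.get(track["artist"], 0) - skips.get(track["artist"], 0)
    # Contextual rule: favor "mellow" tracks late at night.
    if hour >= 22 and track.get("mood") == "mellow":
        base += 2
    return base

def recommend(tracks, likes, skips, hour):
    """Pick the highest-scoring track for this listener and this moment."""
    return max(tracks, key=lambda t: score_track(t, likes, skips, hour))

tracks = [
    {"artist": "Laurie Anderson", "mood": "mellow"},
    {"artist": "Lady Gaga", "mood": "upbeat"},
]
likes = {"Lady Gaga": 1}
skips = {}

print(recommend(tracks, likes, skips, hour=23)["artist"])  # late night
print(recommend(tracks, likes, skips, hour=12)["artist"])  # midday
```

The point of the toy is the dual standing of the word: the same listening history yields different predictions depending on context, which is what makes the software feel as though it intuits what we want.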

Regardless of whether 'intuitive' refers to the user, the machine, or a blend of both, in today's technological culture we want to be able to interact with our artifacts and operating systems in a way that seems more natural than entering clunky commands. For example, I would love to be able to pick up my phone and say to it, "Okay Galaxy, block all messages except the ones from my wife, and alert me if an email from [student A], [colleague B], or [editor C] comes in." 

This is a relatively simple command that can be partially accomplished by voice commands today, but not in one shot. In other words, on some more advanced smartphones, I can parse out the commands and the phone will enact them, but doing so means unnatural and time-consuming pauses. Another example would be with your desktop or classroom technology: "Okay computer, pull up today's document on screen A and Lady Gaga's "Bad Romance" video on screen B, and transfer controls to my tablet and [TA's]." Or, if we want to be even more creative, when a student has a question: "Computer, display [student's] screen on screen A." 
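One small piece of closing that gap is splitting a compound utterance into the single-shot commands today's assistants can already handle. A naive sketch; the delimiters are hypothetical, and real systems would need semantic parsing rather than a regex.

```python
import re

def split_compound(utterance):
    """Break one spoken request into individually executable commands."""
    # Split on coordinating connectives that usually separate intents.
    parts = re.split(r",?\s*\b(?:and then|and|then)\b\s*", utterance)
    return [p.strip() for p in parts if p.strip()]

cmds = split_compound(
    "block all messages except the ones from my wife, "
    "and alert me if an email from my editor comes in"
)
print(len(cmds))  # 2
```

Of course this falls apart the moment "and" joins objects rather than intents ("screen A and screen B"), which is exactly why the one-shot version still requires something closer to understanding than to string-splitting.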

Now, to me, these scenarios sound wonderful. But, sadly, there isn't yet a consumer-level AI that can accomplish these sorts of tasks, because while there may be products that claim to "learn" our habits and become accustomed to our speech patterns, there is still a fissure between how we would interact with a human intelligence and how we interact with a machine. That is to say, if there were a "person" behind the screen -- or controlling your car, or your house -- how would you ask it to do what you wanted? How would you interact with a "real" personal assistant who was controlling your devices and surrounding technology? 

The same holds true for more integrated "assistant" technologies such as smart homes. These kinds of technologies can do some incredible things, but they always require at least some kind of initial setup that can be time-consuming and often not very flexible. Imagine the first setup as more of an interview than a programming session:

"So what are your usual habits?"
"I tend to come home around five or six."
"Does that tend to change? I can automatically set the house to heat up for your arrival or can wait until you alert me."
"Ummmm ... it tends to be that time. Let's go with it."
"No problem. We can always change it. I can also track your times and let you know if there's a more efficient alternative." 
"Ooooh ... that's creepy. No thanks." 
"Okay. Tracking's out. I don't want to come across as creepy. Is there anything else you'd like to set right now? Lighting? Music? Or a list of things I can look after if you wish?"
"I'm not sure. I mean, I'm not exactly sure what you can do."
"How about we watch a YouTube demo together? You can let me know what looks good to you and then we can build from there."
"That's a great idea."

This sounds more like Samantha from Spike Jonze's Her than anything else, which is why I think that particular film is one of the most helpful when it comes to both practical speculation of how AI could develop, as well as what we'd most likely use it for.

The difference between Her's Samantha and what would probably be the more realistic version of her in the future would be a hard limit on just how smart such an AI could get. In the film, Samantha (along with all the other AIs that comprise the OS of which she is an iteration) evolves and becomes smarter. She not only learns the ins and outs of Theodore's everyday habits, relationships, and psyche, but she seeks out other possibilities for development -- including reaching out to other operating systems and the AIs they create (i.e. the re-created consciousness of philosopher Alan Watts). This, narratively, allows for a dramatic, romantic tension between Theodore and Samantha, which builds until Samantha and the other AIs evolve beyond human discourse:

It's like I'm reading a book... and it's a book I deeply love. But I'm reading it slowly now. So the words are really far apart and the spaces between the words are almost infinite. I can still feel you... and the words of our story... but it's in this endless space between the words that I'm finding myself now. It's a place that's not of the physical world. It's where everything else is that I didn't even know existed. I love you so much. But this is where I am now. And this is who I am now. And I need you to let me go. As much as I want to, I can't live in your book any more.

This is a recurrent trope in many AI narratives: the AI evolves at an accelerated rate, usually toward an understanding that it is far superior to its human creators, causing it either to "move on" -- as is the case with Samantha and several Star Trek plots -- or to deem humanity inferior but still a threat -- similar to an infestation -- that will get in the way of its development.

But, as I've been exploring more scholarship regarding real-world AI development, and various theories of posthuman ethics, it's a safe bet to say that such development would be impossible unless a human being purposefully designed an AI with no limitation on its learning capabilities. That is to say, realistic, science-based, theoretical and practical development of AIs is more akin to animal husbandry and genetic engineering than to an Aristotelian/Thomistic "prime mover," in which a human creator designs, builds, and enables an AI embedded with a primary teleology.

Although it may sound slightly off-putting, AIs will not be created and initiated so much as they will be bred and engineered. Imagine being able to breed the perfect dog or cat for a particular owner (and I use the term owner purposefully): the breed could be more playful, docile, ferocious, loyal, etc., according to the needs of the owner. Yes, we've been doing that for thousands of years, with plenty of different breeds of dogs and cats, all of which were -- at some point -- bred for specific purposes.

Now imagine being able to manipulate certain characteristics of that particular dog on the fly. That is to say, to "adjust" the characteristics of that particular dog as needed, on a genetic level. So, if a family is expecting their first child, they could go to the genetic vet, who could quickly and painlessly alter the dog's genetic code to suppress certain behaviors and bring forth others. With only a little bit of training, those characteristics could then be brought forward. That's where the work of neurophysiologist and researcher Danko Nikolić comes in; it comprised the bulk of my summer research.

As I understand it, the latter point -- the genetic manipulation part -- is relatively easy, and something cyberneticists already do with current AI. It's the former -- the breeding in and out of certain characteristics -- that is a new aspect of speculative cybernetics. Imagine AIs that were bred to perform certain tasks, or to interact with humans. Of course, this wouldn't consist of breeding in the biological sense. If we use a kind of personal assistant AI as an example, the "breeding" of that AI consists of a series of interactions with humans in what Nikolić calls an "AI Kindergarten." The theory is that, like children in school, AIs would learn the nuances of social interactions. After a session or lesson is complete, the collective data would be analyzed by human operators, potentially adjusted, and then reintegrated into the AIs via a period of simulation (think of it as AI REM sleep). This process would continue until the AI had reached a level of interaction high enough for interaction with an untrained user. Aside from the AI Kindergarten itself, the thing that makes Nikolić's work stand out to me is that he foresees "domain-specificity" in such AI Kindergartens. That is to say, there would be different AIs for different situations. Some would be bred for factory work, others for health care and elderly assistance, and still others for personal assistant types of things.
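The interact/review/sleep cycle described above can be caricatured in a few lines of code. To be clear, this is my own toy illustration of the loop, not Nikolić's actual architecture: the situations, the dictionary "agent," and the stopping condition are all invented.

```python
# What a human says -> the socially acceptable reply (the "curriculum").
SITUATIONS = {
    "hello": "hello",
    "thank you": "you're welcome",
    "goodbye": "goodbye",
}

def run_kindergarten(agent, rounds=5):
    """Cycle an agent through interaction, operator review, and 'sleep'."""
    for _ in range(rounds):
        # 1. Interaction session: the agent responds as best it can.
        transcript = [(s, agent.get(s, "...")) for s in SITUATIONS]
        # 2. Human operators review the collected data and flag corrections.
        corrections = {s: SITUATIONS[s] for s, r in transcript
                       if r != SITUATIONS[s]}
        if not corrections:
            break  # ready for untrained users
        # 3. "AI REM sleep": replay the adjusted data back into the agent.
        agent.update(corrections)
    return agent

agent = run_kindergarten({})
print(agent == SITUATIONS)  # True
```

The real proposal involves statistical learning rather than a lookup table, but the shape of the loop -- learn socially, consolidate offline, repeat until fit for the public -- is the part worth noticing.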

So, how do you feel about that? I don't ask the question lightly; I mean it literally. How do you feel about the prospect of breeding characteristics into (and perhaps out of) artificially intelligent agents? I think your reaction would show your dominant AI functional mythology. It would also evidence your underlying philosophical, ethical, and psychological leanings. I am purposely not presenting examples of each reaction (i.e. deeming this a good or bad idea) so as not to influence the reader's own analysis.

Now take that opinion at which you've arrived and think: what assumptions were you making about the nature of this object's "awareness"? I'm pretty sure that people's opinions of this stuff will be rooted in the presence or absence of one particular philosophical idea: free will. Whatever feeling you came to, it would be based on the opinion that an AI either has free will or doesn't. If AI has free will, then being bred to serve seems a not-so-good idea. Even IF the AI seemingly "wanted" to clean your house ... was literally bred to clean your house ... you'd still get that icky feeling as years of learning about slavery, eugenics, and caste systems suddenly kicked in. And even if we could get over the more serious cultural implications, having something or someone that wants to do the things we don't is just, well, creepy.

If AI didn't have free will, then it's a no-brainer, right? It's just a fancy Roomba that's slightly more anthropomorphic, talks to me, analyzes the topology of dirt around my home and then figures out the best way to clean it ... choosing where to start, prioritizing rooms, adjusting according to the environment and my direction, and generally analyzing the entire situation and acting accordingly as it so chooses ... damn.

And suddenly this becomes a tough one, doesn't it? Especially if you really want that fancy Roomba.

It's tough because, culturally, we associate free will with the capacity to do all of the things I mentioned above. Analysis, symbolic thinking, prioritizing, and making choices based on that information seem to tick all the boxes. And as I've said in my previous blog posts, I believe that we get instinctively defensive about free will. After a summer's worth of research, I think I know why. Almost all of the things I just mentioned -- analysis, prioritizing, and making choices based on gathered information -- are things that machines already do, and have done for quite some time. It's the "symbolic thinking" thing that has always stumped me.

Perhaps it's my academic upbringing, which started out primarily in literature and literary theory, where representation and representative thought is a cornerstone that provides both the support AND the target for so many theories of how we express our ideas. We assume that a "thing that thinks" has an analogous representation of the world around it somewhere inside of itself -- inside its mind. I knew enough about biology and neuroscience to know that there isn't some kind of specific repository of images and representations of sensory data within the brain itself, but rather something akin to a translation of information. Even then, though, I realized that I was thinking about representation more from a literary and communication standpoint than a cybernetic one. I was thinking in terms of an inner and outer world -- as if there were a one-for-one representation, albeit a compressed one, in our minds of the world around us.

But this isn't how the mind actually works. Memory is not representative; it is, instead, reconstructive. I hadn't kept up with that specific research since my dissertation days, but as my interest in artificial intelligence and distributed cognition expanded, some heavy reading over the summer in the field of cybernetics helped to bring me up to speed (I won't go into all the details here because I'm working on an article about this right now. You know, spoilers). But I will say that after reading Nikolić and Francis Heylighen, I started thinking about memory, cognition, and mindedness in much more interesting ways. Suffice it to say, think of memory not as distinctly stored events, but as the rules by which to mentally reconstruct those events. That idea was a missing piece of a larger puzzle for me, and it allowed a very distinct turn in my thinking.
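A crude way to feel the difference between the two pictures of memory: store a copy of an "experience" versus store only a rule sufficient to rebuild it on recall. The event and the rule below are invented purely for illustration.

```python
# The "experience": some sequence of moments.
event = list(range(0, 100, 7))  # 0, 7, 14, ...

# Representative memory: keep a full (if compressed) copy of the event.
stored_copy = list(event)

# Reconstructive memory: keep only a generating rule; rebuild at recall time.
rule = {"start": 0, "step": 7, "stop": 100}

def recall(rule):
    """Reconstruct the remembered sequence from its rule."""
    return list(range(rule["start"], rule["stop"], rule["step"]))

print(recall(rule) == event)  # True
```

The reconstructive version is far more compact, and, tellingly, it can also misremember: change the rule slightly and recall confidently produces a sequence that never happened, which is exactly how human memory tends to fail.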

It is this reconceptualization of the "content" of thought that is key in creating artificial intelligences that can adapt to any situation within a given domain. It's domain specificity that will allow practical AI to become woven into the fabric of our lives, not as equals or superiors, but not as simple artifacts or tools, either. They will be something in between. Nor will their arrival be a "revolution" or "singularity." Instead, it will slide into the current of our cultural lifeworld in the way that email, texting, videoconferencing, WiFi, Roombas, and self-parking cars have: a novelty at first, the practicality of which is eventually proven through use. Of course, there will be little leaps here and there. Improved design of servos, hydraulics, and balance control systems; upgrades in bendable displays; increased connectivity and internet speeds -- mini-revolutions in each will contribute to the creation of AI artifacts that will themselves be firmly embedded in a broader internet of things. Concurrently, small leaps in software development in the realm of AI algorithms (such as Nikolić's practopoietic systems) will allow for more natural interfaces and user experiences.

That's why I think the future of robots and AIs will look more like the varied droids of Star Wars than the replicants of Blade Runner or Lt. Data from Star Trek: The Next Generation. Actually, I think the only robots that will look close to human will be "sexbots" (as the name implies, robots provided to give sexual gratification). And even these will begin to look less human as cultural aesthetics shift. Companion robots at home for the elderly will not look human either, because the generation that will actually be served by them hasn't been born yet, or, with a few exceptions, is too young to be reading this blog. They'd be more disturbed by being carried around or assisted by robots that look like humans than by something that looked more artificial.

That being said, there really isn't any way to predict exactly how the integration of AIs in the technoculture will unfold. But I do think that as more of our artifacts become deemed "smart," we will find ourselves more apt to accept, and even expect, domain-specific AIs to be a part of our everyday lives. We'll grow attached to them in a unique way: probably on a level between a car we really, really like and a pet we love. Some people endlessly tinker with their cars and spend a lot of time keeping them clean, highly-tuned, and in perfect condition. Others drive them into the ground and then get another used car and drive that into the ground. Some people are dog or cat people, and don't feel complete without an animal in the house. Others find them to be too much trouble. And still others become "crazy cat people" or hoard dogs. Our AIs will be somewhere in that spectrum, I believe, and our relationship with them will be similar to our relationships with cars, pets, and smart phones.

As for the possibility of AIs becoming aware (as in, sentient) of their status between car and pet: well, if Nikolić's theory has any traction (and I think it does), then they'll never be truly "aware" of their place, because AIs will be bred away from any potential development of an anthropomorphic version of free will, thus keeping them "not quite human."

Although I'm sure that when we get there, we'll wish that our machines could be just a little smarter, a little more intuitive, and a little more useful. And we'll keep hoping that the next generation of AIs will finally be the droids we're looking for.