Monday, January 14, 2019

Academic Work and Mental Health

I've always said to my students -- especially those thinking of doing master's or Ph.D. programs -- that graduate work (and academic work in general) can psychologically take you apart and put you back together again. It will often bring up deeper issues that have been at play in our day-to-day lives for years.

As I was annotating a book the other day, I felt a familiar, dull ache start to radiate from my neck to my shoulders, shoulder blades, and eventually lower back. I took a moment to think about how I was sitting and oriented in space: I was hunched over -- my shoulders were high up, in an incredibly unnatural position close to my ears. I thought about what my current acupuncturist, ortho-bionomist, and past three physical therapists would say. I stretched, straightened myself out, and paused to figure out why I hunch the way I do when I write.

It’s like I’m under siege, I thought to myself.

And then I realized there was something to that.

If there’s one refrain from my childhood that still haunts me when I work, it’s “You’re lazy.”

My parents had this interesting pretzel logic: The reason I was smart was because I was lazy. I didn’t want to spend as much time on homework as the other kids because I just wanted to watch TV and do nothing. So I’d finish my homework fast and get A’s so “I didn’t have to work.”

No, that doesn’t make sense. But it was what I was told repeatedly when I was in grade school. Then in high school, on top of all of the above, I was accused of being lazy because I didn’t have a job at 14, like my father did.

And then in college, despite being on a full academic scholarship, getting 4.0s most semesters, making the dean’s list (and eventually graduating summa cum laude), I was perpetually admonished by my parents for not getting a job during the four-week winter break, or a “temporary job” in the two or three weeks between the last day of classes and the first day of my summer jobs (lab assistant for a couple of years, and then day camp counselor). Again, according to them, it was because I was “lazy.” My work-study jobs during the school year as an undergraduate didn’t count because they weren’t “real jobs.”

And even though I was doing schoolwork on evenings and weekends, my parents often maintained that I should be working some part-time job on the weekends.

So doing schoolwork (that is to say, the work needed to maintain my GPA, scholarships, and so on) wasn’t “real work.” In retrospect, the biggest mistake of my undergrad days was living at home. But I did so because I got a good scholarship at a good undergrad institution close to home. It was how I afforded college without loans.

But just about every weekend, every break, or every moment I was trying to do work, I was at risk of having to field passive aggressive questions or comments from my mother and father regarding my avoidance of work.

My choice to go to grad school because I wanted to teach was, of course, because I didn’t want a “real job.”

Most confusing, though, was how my parents (my mother in particular) would tout my achievements to family and friends, even telling them “how hard [I] worked.” But when relatives or friends were gone, the criticism, passive aggressive comments, and negativity always came back. It’s no wonder I hunch when I do work. I am in siege mode. It also explains why my dissertation took me so long to write, and why that period of my life was the most difficult in terms of my mental health: the more I achieved, the lazier I thought I was actually being.

Even though I have generally come to terms with the complete irrationality of that logic, I do have to take pains (often literally) to be mindful of how I work, and not build a narrative out of the negative thoughts that do arise as I submerge into extended research. I went back into counseling last summer, mainly because I was starting to feel a sense of dread and depression about my sabbatical, which I knew made no sense. I'm so glad I did.

The things we achieve -- whether academic, professional, personal, etc. -- are things of which we should be proud. Sometimes we have to be a little proactive in reminding ourselves of how to accept our own accomplishments.

And maybe every 30 or 60 minutes, stand up and stretch.






Friday, January 4, 2019

Excavations and Turns

"To take embodiment seriously is simply to embrace a more balanced view of our cognitive (indeed, our human) nature.  We are thinking beings whose nature qua thinking beings is not accidentally but profoundly and continuously informed by our existence as physically embodied, and socially and technologically embedded organisms."
 -- Andy Clark, Supersizing the Mind: Embodiment, Action, and Cognitive Extension (217).

I've reached a point in my field-related research where I've internalized certain ideas to the extent that they have become the conceptual bedrock of my current project. However, as I dug up my annotations of Andy Clark's Supersizing the Mind, I realized that I have taken certain assumptions for granted ... and had briefly forgotten that I didn't always think the way I now do about phenomenology, materialism, and particularly distributed cognition. Apparently, as little as seven years ago, I wasn't convinced of Clark's hypothesis regarding the ways in which our cognition is functionally and essentially contingent upon our phenomenal environments. Now, of course, I am. But reading my sometimes-snarky comments and my critiques and questions about his work gave me valuable insight into my own intellectual development, and pointed at ways to sharpen the arguments of my current project.

Seven years ago, I was still thinking that language was the mediating factor in the qualia of our experience. In fact, I had written a chapter for an anthology around that time, working under that idea. Now I realize why that chapter was rejected and left to literally collect dust in my office. The rejection really affected me, because it was an anthology in which I very much wanted to be included. I knew that something was off with the chapter. It never felt quite "right."

Then, filed next to those notes, was a different set of notes written around eight months later. Those notes represented a complete 180-degree turn in my thinking. Unsurprisingly, the chapter that grew out of them was accepted into a different anthology ("Thinking Through the Hoard," which appeared in Design, Mediation, and the Posthuman). That piece was really the beginning of my current journey. I suppose Clark's ideas had "sunk in" with the help of other authors who pointed out some of the broader implications of his work (like Jane Bennett and Hans Verbeek).

There are a couple of takeaways from this anecdote: 1) as academics/researchers, our ideas are always evolving. Several philosophers, including Heidegger, experienced "turns" in their thinking, marked by a letting go of what seemed to be foundational concepts of their work. My own work in posthumanism has made a couple of turns, from its original literary theory roots, to an emo existential phase, to its current post-phenomenological flavor. 2) Embrace the turns for what they are. There are reasons why we move on intellectually. Remembering why we moved on is helpful when anticipating critiques of our current thinking.







Monday, December 31, 2018

Sabbatical: The True Meaning of Time

So, no apologies or grand statements about the fact that my quiet academic blog is now alive and awake again. No promises as to what it will become, or how often I will update. I'm going to let this evolve on its own. Like all the best things I do, I have a sketch in my head of what I'd like this blog to be while I'm on sabbatical -- as I research and write what will hopefully be another book. But things happen and unfold in interesting and unpredictable ways. I have been doing a great deal of research over the past several months, all in preparation for what will be several months of concentrated work.

For those who may not be familiar with what a sabbatical is or how it works, it's basically a paid leave from one's usual responsibilities on campus in order to do intensive research or writing. Most universities grant year-long sabbaticals, but since Western isn't the most cash-flush or research-oriented university, our sabbaticals are one semester long ... we can take a year if we'd like, but at half-pay. Since I can't afford to live on half of my salary, I opted for the semester-long sabbatical. Sabbaticals are something for which faculty have to apply and be approved. It's a multi-step process that requires a proposal up front and evidence afterward that one actually did research while away. Once you are on the tenure track, you can apply for a sabbatical once every seven years.

This is my first sabbatical. So I have no idea what to expect nor can I wax philosophical on what it's like. 

I can say, however, that this will be the first time I'm not on an academic schedule since I first started going to school. And I don't mean grad school. I mean Pre-K. My years have been portioned by the academic calendar since I was 4. Elementary school. High school. College. Grad school. Teaching. There were no breaks. I have always been in a classroom, either as a student or as an instructor, since I was 4 years old. I am now 46. You do the math. Sure, there are semester breaks, but this was the first time I entered a semester break without having to think about the next semester's classes. It was less disorienting than I thought it would be.

Not many people who aren't teachers understand exactly how much time and energy teaching requires. I normally have a teaching load of 4 classes per semester. I'm physically in the classroom for 3 hours per course per week (spread out over a Monday/Wednesday/Friday or Tuesday/Thursday schedule for each). I am also required to have at least 5 hours per week of "office hours" for students. So that's 17 hours per week of teaching/office hours. That doesn't include class preps, grading, committee work, meetings, and the administrative side of directing the philosophy program. Most days, I arrive on campus by 8:30am and leave after 5pm. Most days before or after that I'm prepping/reading for classes, grading, or doing paperwork. Weekends are the same. When I leave for the day, I bring work with me. 

I get up at 5am on weekdays in order to have a little under 60 minutes to do my own research. Semester breaks are also times when I've been able to do my own research. But a third to a half of those breaks is filled with writing recommendations for students, prepping for the next semester's classes, and dealing with the inevitable committee work that brings me to campus during those breaks.

With a sabbatical, 85%-90% of the above work goes away. 

This is why sabbaticals are precious ... because they give us time.

Time to let the big thoughts develop. Time to sit down and THINK. Time to actually read something that isn't a student paper or a committee report. Time to write through a problem without looking at the clock and thinking about how you're going to make Kant into a remotely interesting class. Time to focus on your own work instead of the at-risk student who has been looking really tired in class and probably isn't eating because they just got dumped by their fiancee or their dog is sick or they flipped their car over for the 3rd time in 2 years. Time to sit in quiet instead of dealing with yet another new directive from administration to fundraise or recruit even though you have zero experience or expertise in doing so. Time to read relevant writing in your field instead of being asked to justify the importance of your field or to report back as to exactly where your students from 7 years ago are working now and how your classes got them that particular job.

There is time. 

Time to recharge myself so that when I do return, Kant will be an interesting class. Time to become re-invested in my field and feel legitimate as an academic again so that I can pay better attention to my students and reach out when I know they're at risk. Time to research so that when I return I have evidence of exactly how important my field is, and exactly why studying it isn't just important, but imperative to making students marketable to employers. 

There is time for me to focus on me, so that I can eventually focus better on my job and doing it well. 

That's what sabbatical is all about, Charlie Brown. 






Wednesday, August 29, 2018

Posthuman Determinism: Possibility through Boundaries

In my "Posthuman Topologies: Thinking Through The Hoard," I end on a somewhat cryptic note about "posthuman determinism." In all honesty, that was one of those terms that just came out as I was writing that I hadn't thought about before. For me, it was a concept that served as a good point of departure for more writing.

As I'm deep into a new project (and heading toward a sabbatical for the Spring semester), the idea has come to the forefront, with the help of a wonderful book called The Incorporeal: Ontology, Ethics, and the Limits of Materialism, by Elizabeth Grosz. As she questions and re-frames the relationship between the ideal and the material in the works of the Stoics, Spinoza, Nietzsche, Deleuze, Simondon, and Ruyer, she provides a thoughtful critique of materialism (and, consequently, "new materialism" -- my own sub-specialty) that reinvigorates certain points of idealism while maintaining the importance of the material substrate of existence. It's a maneuver similar to Kant's critique of the rational/empirical dichotomy in the 1700s.

Thankfully, as I started taking notes on Grosz's book, the idea of "posthuman determinism" kept coming back, and with it, a journey back to the core of my philosophical worldview: how do the artifacts which we use -- and which surround us -- contribute to the self? Note that I'm not saying "contribute to the idea of the self." While we may have ideas of who we are, my position -- as a posthumanist, post-phenomenologist, and new materialist -- is that the objects which surround us and their systems of use are essential and intrinsic parts of the very mechanisms that allow ideas themselves to arise. Ideas may be representations of phenomena or mental processes, but the material of which we are made and that surrounds us makes representation itself possible. This means that -- unlike a Cartesian worldview that puts mind over matter, and privileges thought over the material body which supports it -- I place my emphasis on the material that supports thought. That includes the body as well as the physical environments that body occupies.

In that context, a "posthuman determinism" is a way of saying that the combination of our physical bodies and the physical spaces those bodies occupy creates the boundaries and parameters of experience; and, to a certain extent, creates the boundaries and parameters of the choices we have and our capacity to make those choices. Our experiences are determined -- not predetermined -- by the material of which we are composed. The trick is to think about the difference between "determinism" and "predeterminism." In relation to the human, the former states only that all events are determined by causes which are external (read: material) to the will, while the latter implies that all human action is established in advance. Determinism emphasizes causality while predeterminism emphasizes result. That is to say, ascribing to a deterministic philosophy implies only that human action always has a cause: that specific factors guide how human beings express their will. Predeterminism implies that the specific choices humans make are somehow established in advance and that each of us is moving toward a specific, fixed point. That would mean that our choices are themselves illusory, and that regardless of what we choose, we will arrive at a specific end.

Ascribing to a deterministic worldview does not mean -- despite what people critical of philosophy may tell you -- that nothing matters and that we are not responsible for our choices. In fact, quite the opposite: in a deterministic philosophy everything literally matters. We are responsible for our actions by understanding the causes and conditions that supervene on our decisions. What factors affect the choices I have, and how do those factors contribute to my own decision-making processes? That is to say, what factors instantiate the mechanisms through which I make my choices? From my materialist point of view, I believe that our ability to think and our ability to choose are bounded by the material properties of our bodies and the world around us.*

So although I may ascribe to a certain posthuman determinism, I still believe in "free will," but one that has specific limits and boundaries. To us, there may seem to be infinite choices we can make in any given situation; and, indeed, there may be many choices we can make, but those choices are not unlimited. A person can't imagine a color that isn't a shade, variation, or combination of a color (or colors) that person has already seen. We can't imagine an object that isn't some component, combination, or variation of an object that we've experienced before.

None of the above is new. Both Hume and Descartes say similar things, although Descartes's (and to some extent, Kant's) valorization of the mind's ability to conceive of things like infinity and perfection is meant to prove that the mind can move beyond its physical limitations. For me, however, that's the mind moving within them. Infinity is a concept born of one's learned awareness of time and space.

All in all, there are limits and boundaries to free will. But those boundaries are what make volition itself possible. We can only think and act through our physical bodies and the physical world those bodies occupy. Boundaries are not necessarily prohibitive; they make things possible, and give shape to the specific qualia of experience itself.







*Someday, will our computers be powerful enough to calculate the myriad physical properties around us and predict our behavior based solely on our brain chemistries coupled with the properties of the physical world around us? I think if humans survive long enough to develop that technology, then, yes. At that point, I do think the machines will literally think FOR us, transforming the human species into something very different from what it is now -- something beyond the realm of our imagining ... literally. We can't think of what that thinking would be like because we literally do not have the biological capacity or the material support to allow us to think that way.




Tuesday, March 6, 2018

Research, Sabbaticals, and the Reality of Higher Ed

It has been quite a while since I've posted, and -- for once -- it's for a good reason. I've been working on some new research which is very timely and somewhat sensitive, in that I am hoping it is the start of a new, larger, hopefully book-length, piece. I was recently granted a sabbatical for the Spring semester of 2019. While a year's sabbatical would be more conducive to research, my university only grants year-long sabbaticals at half-pay, which wasn't feasible financially.

I won't get into the details of my current project work here, but I hope to be posting more often, writing what I envision to be "parallel" pieces that indirectly relate to what I'm working on. Apologies for the intrigue, but sometimes when you've got a really good project that you think has legs, you want to keep it under wraps for fear of being distracted or getting "scooped." It's an aspect of posthumanism that hasn't really been explored in any meaningful way, and I'm hoping to be one of the first to do so.

It's an interesting feeling now, post-promotion to full professor, to establish a research agenda that -- while tempered by the demands of my own field -- is my own. As academics, we often find ourselves driven by the desire to land positions that offer some kind of security amid various market pressures and political attacks. And even when we do find those positions, we're faced with internal pressures to engage in research that will ensure tenure and promotion. In most cases, academic freedom allows us to research what we'd like, but we also know that it has to be something publishable. And even then, as economic pressures on higher ed tempt universities to re-create themselves according to certain "identities" (e.g., we are a "destination" or "technical" or "public service" university), we find that rushed and panicked marketing campaigns begin to trickle down into discussions of liberal arts and general education: "perhaps if we taught more of [insert fundraising-magnet field here], then we'd get more money."

It's especially frustrating for me when the perspective and knowledge I've gained from posthuman studies show that the competing and popular fields pushing these discussions forward are doomed given the demands of the coming decades. You can see the paths ahead to create curricula and programs that could make an institution a real force, but you're told -- directly -- that there have to be donors to support those changes. "Show us a donor with eight million dollars and we can talk about it." When those words can be spoken aloud -- to faculty -- at a university, it's hard to engage in a research agenda not affected by those forces (whether the effect is to try to attract money or to purposely entrench in one's own research agenda out of classic academic spite).

Both extremes are destructive.

I'm not going to stand on the perspective of tenure or promotion to justify my position, because tenure and promotion mean nothing when your program is eliminated. But I can and will speak from the perspective of two decades' worth of experience. I know that to be an effective instructor and researcher, I need to engage in the research that speaks to my own passions and interests. I also know professionally that I have to adapt and shape those results into something that is marketable. And if it doesn't fit into the newest identity one's university is trying on for size, it has to be marketable enough to be published, and perhaps get a little attention. Even if professors aren't publishing in the most popular fields, universities will still plaster their pictures on website splash pages to tout their faculty's achievements.

My own research has taken a turn into something that is both meaningful and important to me but could also be timely and popular (well, as popular as academic writing can get). And my upcoming sabbatical is a chance for me to lose myself in it without dealing with the institutional noise and growing list of tasks that are being heaped upon faculty on a daily basis: write the copy for your program for our marketing materials for the 6th time in five years because we've fired the last five marketing people and have no idea where any of that information is; come to this campus discussion about how we're going to revolutionize our curriculum to the point where we're "encouraging" you to add certain content into your own classes; call prospective students to convince them to come.

At a teaching university, all of those are things that take me out of the classroom and interfere with my primary duties as an instructor. All of those are things that directly interfere with my face-time with students. All of those are things that contribute to the fatigue that makes me pass on sitting on committees that could actually make a difference. Some instructors make the transition from professor to fundraiser, although the titles they are given mask that fact: "Director" or "Dean" of something seems much more palatable than "chief fundraiser." The one token course they might teach each year becomes a peg upon which whatever pedagogical integrity they had is precariously hung.

I do, however, understand the need for people who can chase millionaires and billionaires for funds which are desperately needed to keep universities afloat. It's become a sad reality. And I have no problem speaking to parents and prospective students when they visit campus; I do see that as an aspect of what I need to do in order to actually remain employed. But my old mantra which I've said to the multiple marketing people who have come and gone has been "you get them into the classroom and I'll keep them here." That, sadly, is no longer enough.

It's ironic that sabbatical will take me out of the classroom, which I so enjoy -- and have always enjoyed. It's not the classroom or the students from which I need a break; it's literally everything else. I am, in fact, very nervous to be without that classroom energy for a semester, because my students have always sustained and inspired me. But, in the bigger picture, losing myself in research will be a way for me to re-charge my classes and give the students the experience they all deserve.

"Your sabbatical isn't a break," I was told by an administrator at my university, who weeks before had told me that despite my "excellent proposal" I had "about a 50/50 shot" at getting sabbatical due to budget cuts.

But it is a break. A break from the things that distract me from what I do best. When the burdens of non-teaching duties and increased pressure to do the jobs of others encroach on my class preps and time with students, then stepping away from that for even a semester IS a break. And during that time, I'll tap into the excitement of research that was the core of what allowed me to become a professor in the first place. As I said to a student recently, I knew early on that I wanted to be a professor, but my initial problem was that I saw research and the dissertation as a hurdle or impediment to that goal rather than the path to it. That research was a foundation upon which to build a career, a springboard for my passion to teach.

So, after twenty years, it's time to revisit my foundation, inspect it, and shore it up where necessary. I know I'll be a better professor for it.




Monday, September 25, 2017

Alas, Poor Jibo

I recently did a little check on Jibo to see how things were going with the launch of this "revolutionary" robot. I've been interested in Jibo since I first heard about it a few years ago, but when Google and Amazon soon after came forward with less-humanoid voice interfaces, I immediately knew that Jibo was in trouble.

I've written before about Cynthia Breazeal's vision for home robots: her desire to create "companions," rooted in her childhood fascination with the droids of Star Wars, and her incredible, prescient, and visionary work with the robot Kismet.

Jibo's introduction to the world needed two things: the company's ability to change people's expectations of what a home robot could be, and its ability to roll out something intuitive and useful for consumers. However, reading a press release to backers from Jibo's CEO, Steve Chambers, I realized that Breazeal's vision had somehow become obscured by inattention to what consumers want and need, and by what was probably a disconnect in the development team between the creative people and the engineers.

In his letter to backers, Chambers points out a few examples of the problems experienced in beta testing. A couple, like router/WiFi configuration problems, were definitely to be expected, as were various "latency" or system-lag problems. However, two of them were most telling and especially disappointing:

  • "Discoverability: Users had trouble discovering what Jibo could do. This is partially due to the fact that we have an early stage product with limited skill functionality, and partially due to some changes we need to make from a user experience standpoint."
  • "Error mitigation: When users had trouble discovering what to say, Jibo was not helping to mitigate those errors by guiding the user properly. Many times users didn’t know what to say or do and Jibo didn’t know how to help them break the cycle, creating confusion and frustration for the user."
The fact that early adopters -- those most aware of Jibo as an innovative device and thus more likely to be patient in the "discovery" process -- were having difficulty figuring out what Jibo could do was troubling. Jibo was purportedly designed around an evocative interface: one that would intuitively evoke, or build, an awareness of how Jibo could best be used simply through "getting to know" it. That is to say, out of the box, Jibo should have been able to lead users toward an understanding of what it could do and what it had to offer them. Also, the core feature of Jibo was its ability to interact naturally with people, yet it was impeded by its inability not only to understand users, but to guide them in how best to interact. Missing the mark on the foundational elements of an intuitive interface makes me believe that if Jibo ever does roll out, it will be to toy stores, or perhaps next to the massage chairs at Sharper Image-type stores.

But these shortcomings led me to two possible conclusions: either Jibo's engineers and designers expected non-engineers and non-tech people to react to Jibo in a certain way, or they expected users to intuit how Jibo should be used. The "error mitigation" issues make me think it was the former, because in the lab, the engineers and software people knew exactly what to say and do to get Jibo to be "useful."

Technicians and engineers deal with new technologies in a vacuum, surrounded by people who think as they do, who see interaction between humans and machines as a general problem to be solved rather than as a relationship that must be forged from experience. And after reading Breazeal's work, I'm thinking that her vision of what robot interaction could be actually became too steeped in fantasies of human/robot companionship. C3PO was a person playing a role, as was Robbie the Robot, David from AI, Data from Star Trek, etc. Humans in the bodies of robots -- or at least speaking as robots. The general artificial intelligence being sought after here is nothing more than human companionship. In this way, Jibo was doomed to failure before it started, because the underlying goal was to make another human, not to make a new kind of robot.

I have always maintained that the most successful technologies are the ones that become part of the landscape of the human lifeworld without announcing themselves as such. Email, cell phones, appliances, etc. They became woven into our lifeworld without our realizing they had. Google and Amazon were aware of this. They were able to see the best uses of the cell phone and spin those uses off into the home, relying on the known quantity of speech recognition and voice identification technology to create appliances that did just enough to make them useful, and to allow people to forge their own relationships with them -- relationships that weren't exactly the same as relationships with humans, but were more than their relationships with their cell phones.

Where Jibo is failing is in a lack of vision: they weren't trying to create a new relationship; they were trying to re-create a human one.

Personally, I was incredibly disappointed. As a fan of Breazeal, I saw the potential in Jibo. Sure, the animatronics were a gimmick; but I hoped that the vision of the company went beyond Jibo, and saw the little companion as a stepping stone to a truly different technology -- something that forged a new type of human/robot interaction. Clearly, this is not the case. The shortcomings outlined in the CEO's letter reek of engineers thinking like engineers, without a vision for how people would not only actually USE the technology, but also forge a different relationship with it. Jibo could have been so much.

I can't be too hard on Breazeal or Jibo, Inc. My own fantasy scenario -- that this was a company with a true vision of creating a new kind of relationship between user and machine, with Jibo as a stepping stone -- was just that: an optimistic fantasy. On the flip side, though, this reinforces my idea that being aware of the topologies of interface (how an artifact is woven into the spaces in which it will be used) is a key aspect of material design. Jibo was excruciatingly cute. Its movements and gestures were inviting in and of themselves. But I think the main concept-people in the company saw that design as making it more human, rather than making it more "machine." People are more apt to interact with Google Home or Amazon's Echo because those devices announce themselves as technology. Jibo's blurred line makes users think about how they should interact with it, rather than simply interacting with it. There's nothing wrong with creating a new interface, but I think the most successful artifacts (and the companies that create them) will be the ones that are keenly aware that this IS a new interface -- one that is different from what came before, but not human. Jibo was designed without an awareness of domain specificity: if it is to be used in the home, then its intelligence must be designed around the home and all that occurs there.

It's not a question of creating more human-like robots. It's an issue of creating robots with an eye toward the environments in which they will be used -- including the home. A home robot isn't a "companion," it is a facilitator.

I also think that Google and Amazon have merely scratched the surface with their respective Home and Echo devices; and Amazon might have a slight edge in its development of related hardware like the "Dot" and "Show." I also believe that both companies have an edge in collecting data on how those devices are being used, meaning that they are tracking the evolution of users' awareness, skills, and intuitive tendencies and making software changes on the fly to keep up -- and eventually to inform the next versions of their respective hardware flagships. These companies are successfully figuring out how AIs will be woven into the fabric -- and spaces -- of our daily lives. The advances in human-AI interaction will bring about a more natural interaction, but one that isn't quite how we speak to other people. And that's okay. Our language will evolve with these systems of use.

What will put each company (or any others that might arise) ahead is an awareness of how we function with these artifacts in space, topologically. Home and Echo don't use fancy animatronics. They don't coo and flash animated hearts or cartoon eyes: they function within a specific space in a certain way. And people are responding.

Alas, poor Jibo. We never knew it, Dr. Breazeal. It hath borne on its back the failures of discoverability and error mitigation, and now, how non-intuitive to the imagination it is.

(Apologies to both Dr. Breazeal and Shakespeare).