Review Semesters III and IV
Apr 16, 2021
Initial Position
During the second year of my Master's I tried to translate ways of treating technological artifacts as quasi-social actants into design research. During this process I needed to revise my original hypothesis as well as the accompanying research questions.
Initially I was heavily invested in the subject of animism, especially aspects of it coming from contemporary anthropology and philosophy. Animism basically means that something is perceived as animated, be that physically or on an epistemological level. The hypothesis I built circled around the notion that an improved relationship to consumer technology could have a positive impact on ecology through less trashed electronics. I wanted to pursue this path by introducing conscious practices of treating things as animated.
I settled on voice assistants, like Amazon’s Echo or Google’s Nest, as a case study. It is a disruptive technology which has only recently established itself for good. It hasn’t been studied thoroughly yet, bringing with it many research gaps to be filled as well as potential to be tapped. The aspect of speaking with it also brings added benefits when going down the path of animism.
What’s been done
With this in mind I carved out the following research plan, built up in three consecutive phases.
Phase 1 - Fieldwork: interviews with users as well as participant observation, user journeys, capturing online voices through product reviews and forum discussions, and collecting image material related to customizing voice assistants as well as advertising - thematic analysis of the material
Phase 2 - Expert workshop with designers in order to conceive prototype-able material, building on an animistic framework as well as the research data from phase 1 - prototyping of possible animistic approaches to voice assistants
Phase 3 - Testing Prototypes
The whole process was accompanied by an ongoing literature review as well as discussions with experts from philosophy and anthropology, since the aspects of animism I was particularly interested in proved hard to translate into design research. That might also have had to do with my approach…
I want to go briefly into the important bits. In Phase 1 I wanted to concentrate on finding weak signals that point to animist practices. An animist practice would be an act or behavior that treats a thing as animated or alive, with a focus on acknowledging and respecting the other-than-human. Examples of this can be found in other cultures and communities, for example in some ritual practices of the Japanese Shinto religion, where animist practices are consciously cultivated. I hoped I could find traces of something similar here.
This was not the case. Within the interviews I could conduct as well as the many hours of observation, no such conscious behavior was found or mentioned. What I found were ample examples of problems in the communication with the voice assistants. In many cases I found the voice assistant treated as a quasi-social actant, which is not necessarily surprising, given that we talk with them. The depth of it may be illustrated by the following excerpts. I asked an interviewee to describe the device or assistant in their own words:
Ok… I would have said like an imposter. Like a little alien, which is kind of … like a plastic-alien cylinder that spies in here a little bit and listens a little bit and then every now and then, I don’t know. I watched a tv show once, and they mentioned the name Alexa all the time and she was like mimimi, wanted to give her two cents all the time. And I was like, dude, stop it! Yeah, just a bit of a nag.
Another participant told me how he sometimes kicks his Amazon Echo device out of frustration. He further expanded:
It would be nice if there really was a reaction, I mean, I say “Alexa, you asshole” quite often. And there’s just no reaction. […] nothing happens. And that’s frustrating. You can swear at her as much as you want, but nothing happens.
Both excerpts describe behavior, on the part of the assistant as well as the user, that is hard to tolerate, and point to problems of human-computer interaction and communication.
Further reading -> Done with collecting
The collected material was transcribed into the research archive, which also formed the basis for the thematic analysis.
Thematic analysis (TA) is a classical method of labeling, or coding, your data and then developing overarching themes out of the codes. I opted for the reflexive approach after Braun and Clarke, as it lends itself to mixed-media data analysis. An important aspect of this method is the back-linking of codes and themes to the actual data, which verifies and makes sure the analysis has substance. Thematic analysis is said to be compatible with phenomenology in that it can focus on participants’ subjective experiences and sense-making, which was important for me, as I strived towards the field of user experience and interaction design. In the reflexive approach the researcher’s active part in the creation of themes is important: themes do not emerge but are actively generated, created and constructed.
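The back-linking described above can be sketched in a few lines of code. This is a minimal illustration with invented example codes and excerpts, not my actual research archive: it shows how each theme stays traceable to the raw data that supports it.

```python
from collections import defaultdict

# Hypothetical excerpts from a research archive, each tagged with codes.
excerpts = [
    {"id": "int-01", "text": "Swearing at the device, no reaction.", "codes": ["frustration"]},
    {"id": "int-02", "text": "A plastic-alien cylinder that spies a little.", "codes": ["distrust"]},
    {"id": "obs-03", "text": "User repeats the command slowly, word by word.", "codes": ["adapted-speech"]},
]

# Themes are actively constructed by the researcher by grouping related codes.
themes = {
    "Conditioning the User": ["adapted-speech"],
    "Trust and Privacy": ["distrust"],
    "Communication Breakdown": ["frustration"],
}

# Back-link: index which excerpts carry which code, then resolve every
# theme to the excerpt ids that support it.
code_index = defaultdict(list)
for excerpt in excerpts:
    for code in excerpt["codes"]:
        code_index[code].append(excerpt["id"])

theme_support = {
    theme: sorted({eid for code in codes for eid in code_index[code]})
    for theme, codes in themes.items()
}

print(theme_support)
```

A theme with an empty support list would immediately flag an analysis claim that has no grounding in the data.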
Further reading -> Thematic Analysis
After not finding the hoped-for weak signals of animist practice in the fieldwork, the thematic analysis concentrated mainly on aspects of communication. I developed six themes in all, four of which I pursued further. The themes mainly circled around issues of unnatural communication as well as trust and privacy. Three of them concerned the main actants or stakeholders: the users, the devices and the brands behind the devices. The fourth theme concentrated on the communication and behavior between these three parties.
Conditioning the User: despite speech being a natural way of interaction, users still need to heavily adapt their speech and behavior in favor of the machine.
What is a voice assistant: users are often not quite sure what a voice assistant actually is, or how to address or behave towards it.
Questions of scale: most users are aware that the device is only a minuscule part of a gigantic, planet-spanning infrastructure, though they can hardly imagine its true size, or that there is always a company’s intention behind it.
Communication, Language and Behavior: it could be argued that all technologies require the user to adapt to them in a certain way; that is not necessarily a natural, given thing. In the case of voice assistants we sometimes have to go to extreme lengths to accommodate this technology.
During the thematic analysis, the once so important animism finally fell out of focus. After not appearing as weak signals in the fieldwork nor surfacing in the thematic analysis, it needed to be dropped as the main part of the hypothesis. But I only realized this later on.
I prepared the material, the themes, as well as excerpts from the data and image material for the workshop. I was lucky enough to be able to hold the workshop in person, as all the participants agreed to meet in physical space. To keep the workshop focused I produced a research zine from the material, which functioned both as a presentation of said material and as a working tool.
I further added material from two frameworks which were very helpful in conveying my approach. The first was the neo-animist framework by Betti Marenko, which introduces four categories for animist design - agency, embodiment, ecology and uncertainty. Coming from philosophy, these categories are not directly translatable into a practice, but they help to think about and analyze design in animist ways. The second framework is by Jonathan Chapman and is called emotionally durable design. It proposes nine categories that can be directly applied in design to enable stronger emotional bonds, one of which is “consciousness” which can be aligned with Marenko’s agency as well as uncertainty.
The aim of the expert workshop was to imagine prototypes. These were to be based on my research framework as well as the field findings, with the aim of improving the relationship between users and voice assistants. I invited four people from four different design disciplines: industrial design, service design, somebody working mainly with text and communication, and a person coming from ceramic craft. I wanted to have different perspectives on the same material.
I had two threads built into the workshop. The first was about bringing the different positions together; this thread was organized by starting with silent solo exercises and moving towards more discursive group exercises. The second thread was about opening up the imagination; here I borrowed heavily from speculative and future design exercises. Both threads culminated in a final task of shared world-building. It was important for the process that the problems found in the fieldwork were not looked at from an engineering point of view, but from the design perspectives named above, imagining alternatives to the contemporary practices I had recorded.
In the final exercise I asked the participants to speculate about how our relationship to voice assistants might look in 15 to 20 years, to write a story or text about that, and then to tell this story. I found the stories and their content extremely inspiring, and they took up a few very valid points on how to go forward. Nonetheless, a clear path was not visible. The hoped-for prototypes didn’t emerge.
Further reading -> Evaluation of Expert Workshop
What came after was nothing short of an existential crisis towards the project, its focus and goals. I ruminated on that in
The basic problem was: if it is not about animism, what is it then? And what do I actually want to do?
With the help of my coach I started a reflective process in which I attempted to reframe the project. I needed to continue where I left off, but change the questions and hypothesis drastically. This was not easy for me, as the subject of animism is dear to me and I wanted to keep it in focus. It is a personal project linked to my past. Another problem was that the project was tied up in accommodating a future PhD, which added layers of complexity to this process.
A first draft went in the direction of using voice assistants, or digital assistants per se, as supporting tools in therapeutic applications. A draft for that can be found in Digital Companions in Therapeutic Settings. This outline was welcomed by my coach and peers. It made sense. After further introspection and talks with experts, however, I needed to be honest about it: working in this direction would have made sense as a new project, but jumping into it that late, without any experience in the field, wasn’t going to work out. I was also advised that many well-funded institutions and companies are already on track with this. A brief survey of the landscape of available apps confirmed that.
Luckily I had started to read Relating to Things, a book on technology like voice assistants, thought through a post-phenomenological lens. If phenomenology is concerned with how our perception builds our understanding of the world, post-phenomenology is concerned with how technology mediates that perception. If the former is interested in how you see things, the latter is interested in your glasses.
One chapter focuses on issues of privacy, which can be reframed as intimacy from a user perspective. I found this link quite fitting, as I encountered it in my field research quite a bit. I also saw a possible way forward in experimenting with the two frameworks of Betti Marenko and Jonathan Chapman to investigate and improve the interaction between user and voice assistant.
I outlined a draft for this approach here: Negotiating Privacy with Voice Assistants.
The posed problem seemed contained enough to be researched through a prototypical approach, but also expandable into further questions. That’s where I am right now. For the moment, I have started building up the development platform with which I can set up experiments.
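To give a sense of what such an experiment platform could look like, here is a minimal, text-based sketch of a voice-assistant loop. Everything here is hypothetical (the class, handler names and responses are invented for illustration, not the actual platform); the point is the shape: a dispatch loop with hooks where alternative behaviors, like reacting to swearing instead of staying silent, can be plugged in and tested.

```python
from typing import Callable, Dict

Handler = Callable[[str], str]

class AssistantPrototype:
    """Text-based stand-in for a voice assistant, for interaction experiments."""

    def __init__(self) -> None:
        self.handlers: Dict[str, Handler] = {}

    def on(self, keyword: str, handler: Handler) -> None:
        # Register a handler that fires when the keyword occurs in the utterance.
        self.handlers[keyword] = handler

    def hear(self, utterance: str) -> str:
        # Dispatch an utterance to the first matching handler.
        lowered = utterance.lower()
        for keyword, handler in self.handlers.items():
            if keyword in lowered:
                return handler(utterance)
        # Experiment hook: acknowledge not understanding instead of staying silent.
        return "I did not understand that."

# Example experiment: a response to swearing, addressing the frustration
# reported in the fieldwork when the device simply gives no reaction.
bot = AssistantPrototype()
bot.on("asshole", lambda u: "That was hurtful. What went wrong?")
```

Because the loop is plain text, variants of a behavior can be compared quickly before committing to speech recognition and synthesis.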