Voice-Assistant Extended
Conducted in English
With Person 3j14j0
Adrian Demleitner 0:02
Good. Thank you very much for taking your time for this brief chat.
3j14j0 0:06
It’s okay. No, it’s nice to catch up with you and also see what you’ve been up to.
Adrian Demleitner 0:13
I mean, I attended your workshop last year within my master studies, so I’m doing the Master in Design Research. And I was very much interested in our relationship to technology from a nonhuman perspective, you know, a bit like what you’re trying to do with Unfamiliar Convenient. I started to focus on voice assistant technology at that time, and your workshop was a really good start to explore that realm. My personal inspiration is, you know, stuff like Vibrant Matter, or the more-than-human discourse.
3j14j0 1:44
Vibrant Matter? Actually, I’ve never heard of it. So what is Vibrant Matter?
Adrian Demleitner 1:51
So Vibrant Matter is a bit like a Eurocentric philosophical approach to animism. Okay, you know, it’s a bit like: what is the political dimension of ascribing agency to the nonhuman realm? It’s quite philosophical, not always easy to read, but it’s good, I can recommend it. I’m just noting it down for myself. Yeah, that’s good, do that. I’m always available if you need a link or something.
3j14j0 2:31
Yeah, sure, sure. So yeah, I got it. So you had to kind of pivot a bit into more of a social thing, because, you know, I see how more-than-human perspectives are socially relevant as well. But yeah, I guess?
Adrian Demleitner 2:55
You know, and I think in communicating the nonhuman relevance of these approaches, the speculative is very important, because it’s kind of the language needed to bring the imagination that it takes to understand this. Sure. But yeah, I will keep that part, but I will continue with it after my studies. For now, I focus on something called the post-phenomenological approach. Okay. And that is: while phenomenological approaches focus on how experience through the senses creates access to reality, like how you see stuff, the post-phenomenological approach focuses on how technology mediates that access. So for example, how do glasses change the way you access or perceive reality? That’s a bit where I’m at right now. I’m focusing on how voice assistants create or mediate access to the world today, and of course that has political dimensions, with the whole data extractivism thing and the business models of companies like Google or Amazon. But I will also go a bit into the social relevance of voice assistant technology for people with disabilities, who are dependent on human-computer interfaces that go beyond the use of eyes or hands. So, yes, that’s a bit where I am right now.
3j14j0 5:13
I think Superflux did another one, because the one that we showed you, sorry, yeah, Our Friends Electric, that’s the main film with Mozilla. Exactly. They did another one, which was specifically for the NHS, you know, the national health service. So they did another one, and it’s also about voice assistants. I’m a bit less, it’s a bit random, I think maybe they just agreed and then did it, because it’s a little bit confusing. Okay, but maybe some of the things fall into your topic, you know, which could be interesting. I mean, I get where you’re going, and I was also reading, I was a bit curious, because I’m also a little bit interested in these new use cases. And I was reading about how, well, you know, they introduced the Alexa Pretty Please feature for kids, because they said that kids were really addressing Alexa impolitely and stuff. So parents were a bit shocked to see that the relationship to Alexa was almost like slavery, right. And it’s interesting, because somebody did a study and said that it actually had no real influence on the kids, because kids apparently are able to distinguish quite well what is a machine and what is, like, a parent. But they said basically that there was this interesting point, which I guess falls into your post-phenomenology, which is that it is hard to estimate the impact of those kinds of things, because the social norms around those things evolve as the objects evolve as well. Right. So that’s an interesting sort of perspective, you know, that the language is still developing. So anyway, but so you’re looking at people with disabilities, right?
Adrian Demleitner 7:32
I mean, I will not propose anything in particular, but for starters I will have some interviews with blind people, or people suffering from tetraplegia or paraplegia, you know, when they’re paralyzed from the neck down, to see a bit what their experience is or what their wishes are. Because I see a big change in voice assistant technology. So we have screen readers, and screen readers are actually more of a workaround to work with content that was created for the able-bodied. Screen readers try to read off the screen, but the HTML, for example, was made for people who are actually working with their eyes. Right. And I see that in voice assistant technologies like the Amazon Echo there’s a complete change, in that it’s content made for hearing first, not a workaround, and that brings a lot of problems with it. And of course, Google and Amazon are operating within their business models, so for example they can brutally curate results in their favor. And I don’t, so I researched it just a little bit, but I don’t think that Google and Amazon, for example, have other use cases in mind. They’re like: okay, we have this now, how can we place a lot of these devices in people’s homes, and how can we operate within our business model? So for example, Google has not much information or resources on using this technology with disabled people. Amazon does quite a few things, they have quite good resources, but I just made a quick inquiry with the Swiss association for blind people. They said they actually have a lot of problems setting up an Amazon Echo to start with, because there is a setup process and it doesn’t have an audio counterpart. You cannot even set it up.
3j14j0 10:01
Once you set it up, it’s okay, but getting there is the problem? Yeah. Interesting. Yeah, that’s quite funny. Maybe related to that, yeah. Interesting. But usually, I mean, the format or the framework is actually quite flexible, in the sense that basically nobody at Amazon has been told that they need to do it, you know, because as soon as you tell them, they will patch it around and stuff. That’s, I think, what’s really interesting with voice assistants from a very technical perspective: this idea of a skill. It’s basically so modular in itself, and I’m not saying that suddenly the big tech companies have got it right, it applies to the general framework of all voice assistant systems, open source or not, that the idea of a skill kind of makes it modular. So if there’s a problem, you basically solve the problem with a skill, almost, right? Things like the setup are a bit more of a complicated framework, because you still need to get it to work somehow. Yeah, I know, I met my friend, actually, my friend here in China, his dad is blind, and he turned blind over time. And he actually became a bit obsessed, because not only was he blind, but also a little bit lonely. He used to be a video editor, so for him going blind was just completely, you know? Yeah, it was a disaster. But he became very passionate about technology. So he has three or four different voice assistants, a couple of them from different brands, placed in these different locations, and he just talks to them. And they activate his, like, three vacuum cleaners, you know, it’s kind of a thing, right? So he’s really happy about it. But yeah, it’s interesting, anyway. Cool.
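The “skill” modularity mentioned above can be illustrated with a minimal sketch of a custom skill backend. This is a hypothetical Python example following the general shape of Alexa’s custom-skill request/response JSON; the intent name and spoken text are invented for illustration, not taken from any real skill.

```python
# Hypothetical sketch of a custom voice-assistant "skill" backend.
# The platform sends a JSON request describing the spoken intent, and
# the handler returns a small response object with the text to speak.

def build_response(text, end_session=True):
    """Wrap plain text in the response envelope the assistant expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def handle_request(event, context=None):
    """Entry point (e.g. an AWS Lambda handler) for one skill."""
    request = event.get("request", {})

    if request.get("type") == "LaunchRequest":
        return build_response("Hello, this skill is listening.", end_session=False)

    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {}).get("name")
        # Each intent is a small, self-contained behaviour: adding a new
        # feature, or patching a problem, means adding another branch or
        # another skill rather than changing the device itself.
        if intent == "ReadSetupStepIntent":  # hypothetical intent name
            return build_response("Step one: plug in the device and wait for the light.")
        return build_response("Sorry, I don't know that one yet.")

    return build_response("Goodbye.")
```

The point of the sketch is the modularity the speaker describes: a problem such as missing audio setup guidance could, in principle, be patched as one more skill without touching the rest of the system.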
Adrian Demleitner 12:19
So I had another good read of the Unfamiliar Convenient page, and I am very interested in the whole project, but especially right now in your first case study. Could you tell me a little bit about the history, or the process, that led to your decision to work with this technology?
3j14j0 12:47
Yeah. Um, so I mean, actually, just as an aside, the website has not been updated, because now there are basically two case studies already, and they kind of work together. But the idea behind it was that, well, my work generally focuses on the domestic space in relation to technology. And when we were talking with Claire, we were basically having this discussion that the current vision of the smart home, in all of its utilitarian, application-oriented approach, has sort of substantially failed to live up to this standard of smart, you know, however you define smart. And it’s not a question of time. The conclusion we came to was that it’s not really a question of time, but more a question of attitude. So that’s where all this nonhuman side comes out: it doesn’t matter if the technology is really good or not, or if it performs well or not. It’s the idea that you place these objects in this limited, almost slave capacity, sort of forcing the devices into a behavior that mimics the human. And therefore that behavior, it’s like, as I say, if you asked me now to talk to you in Swiss German, I could use my vocabulary and all of my resources, but I would be in this constant lost-in-translation state, a kind of consumption of time and resources. So the idea behind this project was a bit to explore the native language of these domestic devices and see how they can inhabit the space differently, and whether that would open up, you know, not exactly novel applied use cases, but more like new points of inquiry. So it’s not so much about producing new outputs or new configurations, but more about opening up those devices to new behaviors and then observing how those take place. And later on, if we see that there are some interesting insights in there, we can think of some more applied scenarios. For now it’s half a research project and half a sort of almost artistic inquiry. Like, at the moment they’re making poetry and stuff, making poetry based on where the vacuum cleaner goes, and this kind of thing.
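A minimal sketch of what “making poetry based on where the vacuum cleaner goes” could look like mechanically, assuming the robot exposes some kind of event log; the event names and phrases below are invented for illustration and are not taken from the project.

```python
import random

# Hypothetical mapping from robot-vacuum events to poetic fragments.
# A real robot would expose its own log or API for movement and bumps.
PHRASES = {
    "bump":   ["I met a wall and thought about it", "the table leg again, old friend"],
    "turn":   ["I changed my mind mid-room", "a slow pivot toward the window light"],
    "dock":   ["home is where the charger hums", "I return, dusty and content"],
    "wander": ["nobody asked me to be here", "the carpet keeps its own counsel"],
}

def trajectory_to_poem(events, seed=None):
    """Turn a sequence of movement events into short lines of verse."""
    rng = random.Random(seed)
    return "\n".join(rng.choice(PHRASES.get(event, ["..."])) for event in events)

if __name__ == "__main__":
    log = ["wander", "bump", "turn", "bump", "dock"]  # example event log
    print(trajectory_to_poem(log, seed=7))
```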
Adrian Demleitner 15:51
That was also a question I had, because it’s always a bit interesting to see where design is going, you know. Is it more towards an artistic approach? Is it more, you know, factual, research-based? And I’m very fond of this kind of prototyping where you do something, then you reflect on it, and then you see what the learnings are and how you can move on from there. So I was wondering a bit: you refer to, you rephrase the nonhuman, and what’s your reference there? Do you have some kind of theoretical base that you use to think on top of, or? Yeah, so
3j14j0 16:43
I mean, the main, kind of global references are the quite common ones. So we take, for example, object-oriented ontology, you know, the idea that an object is a sort of part of the Anthropocene layer. So that’s one of the things. Yeah, exactly. And, yes, Bogost, yeah, that was one of the essential ones as well: that, you know, devices kind of have their own ways of operating, basically. And then, yeah, I guess that’s the main theoretical framework. So, like, not yet phenomenology, not yet post-phenomenology, but object-oriented ontology, these two. And also, I mean, there’s the Stiegler side, which is talking about, you know, the theory of machine evolution, basically, or the evolution of the machine as a species. What was the book, it’s like a very long essay split into four parts, I can find the reference, but basically he’s talking about this idea of machines that are also developing as an independent species. It’s like a proposal of a theory. And then, quite cool, there’s this book by Wang Min’an that I cite more and more, it’s called Domestic Appliances in post-Mao China. And it’s quite an interesting one, because he is basically a researcher, you know, a theorist, but this is a bit of a side interest of his. He just sits at home and looks at devices, and looks at how they behave. And he has this interesting, so it was written maybe ten years ago, so he’s mainly talking about more conventional things, like the lightbulb or the washing machine or the computer. But it’s written rather poetically, so you can take all these different interesting aspects from there, which say, like, you know, a washing machine is kind of working in a language that we humans don’t understand. It just quietly spins there, occupying itself with a purpose of some sort, but still somehow, you know, it still performs whatever it needs to perform. But we don’t necessarily understand what it’s saying, we just care about the end result and stuff. Right. So that’s a bit, I guess, the theoretical foundation for it.
Adrian Demleitner 19:34
And I don’t know if you know this one, I am a big fan of it: Everything is Someone.
3j14j0 19:41
Yes, sure. I mean, we even threw it in as a reference during the workshop already, in the show-and-tell. Yeah, yes. The author is a friend of mine, he was actually based in Shanghai before. Anyway, that’s pretty much the kind of concept behind this whole approach. I don’t know, in the end it’s hard, you know. I’d say it’s interesting, because it’s already hard not to anthropomorphize any of those devices, because even when you’re trying to come up with new behaviors, you’re still taking inspiration from how humans behave, or from how we observed some of these things behaving. So it’s really hard to really distance yourself. And also, especially if you’re working on something that has to be visually represented or the like, then it’s hard not to also think about the emotions this particular thing would generate or produce for humans, or how a human would look at it. But at the same time, it’s quite, yeah, it’s interesting, because of the sort of less hierarchical thing. I don’t know how to properly put it, but it’s interesting to look at the home as something that is on equal terms with you. Then I think there’s much more respect between the different entities that are inhabiting it. And as I say, just because something is human-made does not mean, you know, like, if humans evolved from a unicellular organism, it doesn’t mean that these technologies are primitive. But when humans were primitive, and a bit stupid, nobody forced them to mimic, I don’t know, whatever animal or something. They could just be human, right, and evolve in their own way. So that’s one thing. And the other thing, specifically about voice assistants: from this ontological perspective, what’s really interesting to me is that, well, it’s the first object that, even if forcefully, is somehow set to communicate with humans in the human language, in an interactive voice, you know, in the voice. And it’s not the same as a screen, because the screen’s information is clearly distinct from, like, human information, right? Whereas this voice, as soon as you embed the object somewhere and you don’t see it, well, of course it’s robotic for the moment, but it gives this immediate human presence, which is not produced by a screen, for example. So it’s quite interesting, because from an ontological perspective it’s like the first object that can communicate in the human language. And maybe that’s what makes it so interesting in terms of communicating those nonhuman perspectives: basically, you have this thing that can be like an ambassador of objects, because it can speak to you in English, basically, and tell you what it means to be an object.
Adrian Demleitner 23:18
That’s a very good point. Do you know this book, Relating to Things? Oh, yeah, yeah. Somebody, I think Claire, mentioned it to me already. It’s a collection of texts, and there is one by Elisa Giaccardi. I think, do you know her work? Not really. I can send it to you, maybe that might be interesting. I think she’s in the Netherlands. I can send you the specific text. Her approach is to just put sensors everywhere, and then let the objects speak, or let patterns emerge that you can work with.
3j14j0 24:18
Ah, yes, yes, yes. There was a recent project, I think, was it her, maybe it was somebody else. But the idea was that they somehow put a voice assistant into plants. I just quite liked it, even though the philosophical application was not really that interesting. The idea was that they put some sensors in the plants to measure, you know, moisture, soil quality and stuff like that, and they added a voice assistant, and they would emulate the plant. So instead of just saying, oh, your plant needs something, the plant would use the human voice to say, oh, you know, I need this, or I need that, or, like, thank you, and stuff. And then the humans could talk to it and say, so what do you need most right now, this kind of thing. So it’s this sort of, again, anthropomorphized but interesting interaction, you know, with the plants, with the help of a voice agent. Alright.
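A minimal sketch of the mechanism described, assuming a soil-moisture reading is available and some text-to-speech or assistant notification channel can speak on the plant’s behalf; the thresholds and the speak() stub are assumptions for illustration, not the actual project’s implementation.

```python
# Hypothetical sketch: a plant "borrowing" a human voice.
# A sensor reading is translated into a first-person utterance and
# handed to whatever text-to-speech or assistant channel is available.

def plant_utterance(moisture_percent: float) -> str:
    """Map a soil-moisture reading to something the plant might 'say'."""
    # Threshold values are invented for illustration.
    if moisture_percent < 20:
        return "I'm quite thirsty. Could you water me today?"
    if moisture_percent < 50:
        return "I'm doing fine, but I wouldn't say no to a little water."
    return "Thank you, I have everything I need right now."

def speak(text: str) -> None:
    """Stand-in for a real TTS or voice-assistant notification call."""
    print(f"[plant voice] {text}")

if __name__ == "__main__":
    speak(plant_utterance(moisture_percent=17.5))  # example sensor reading
```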
Adrian Demleitner 25:31
So I think one last point that I would love to look at with you: you have this sentence that currently home automation is inherently aimed at optimization, and in another paragraph you write about embedding behavior patterns beyond consumer-world-oriented reasoning. So it seems like you’re also trying to break this prescriptive order of things, you know, the clear input and output, and to go towards something a bit uncertain, towards ambiguity. And I was wondering how this relates to agency in the nonhuman, you know, that they are not just doing our bidding, very scripted, where the same input gives the same output, but that they have a kind of life of their own.
3j14j0 26:38
Yeah, I mean, there is the problem, especially with these kinds of domestic appliances, that the whole scope of them is to be mass-produced, and then one object fits all, you know. So you make one Alexa and then you just put it everywhere. And the other thing is safety. People somehow, I’d say it’s a very interesting paradigm, this sort of consensus. Again, I don’t think that I’m really strong or capable enough to really imagine what it means for the object to evolve from the inside; I’m still talking from my perspective, how I see things. But there’s this interesting thing, let’s say, let’s not pretend but let’s embrace the role of a god, you know, when we make an object, right. And it’s interesting how, a couple of weeks ago, I was giving this talk about basically how voice assistants are operating, you know, as a sort of entity, or how they should be operating. And one of the points that I was making, showing different videos, is that when we imagine, the first thing that you do, especially if you’re like a young, tech-savvy sort of person, is that you test their limits, right? You just see: what can Alexa do, how far does it go, how many things can it answer, what if I ask questions about love, what if I, and then you build this thing into either utopia or dystopia. So we test it, and we test it based on our utopian or dystopian visions of sorts, where on one side it will become your Her, you know, it will become this object of love of some sort, right? And on the other side it will become the house that turns on you, the dystopia where it refuses to open the doors and then traps you and stuff. And so this is what we humans as gods are imagining, right, this is where our limits, where our brains are stretching, and we put these devices under this stress, under this condition: can you love me, or can you trap me in my house and kill me, right? But then at the same time, the scenarios that we develop will never reach these levels, because it’s super safe, you know. We’re building these really safe technologies and we’re putting in all kinds of safety brackets. There are no inherent boundaries in the programming which say that the voice assistant always has to reply in the same way, or that it has to reply every single time, right? But there is all this personality programming, and it emulates within a very, very safe range, so it will never disobey commands, it will never, so it’s interesting, you know, because if it disobeyed, we would be genuinely impressed by the quality of something that we built, right? That’s so true. So from the human perspective, I think that’s quite fascinating to look at, how we had the vision versus how we actually build it. And especially in domestic appliances, because you could probably look at more experimental things, but I’m more interested in those off-the-shelf ones, because in the end that’s what shapes human perception towards technology. In the end, our phone is what listens to us and stuff, and the Roomba is what, like, replaces the traditional housework, or something.
Adrian Demleitner 30:42
I also found this one study called “My Roomba is Rambo”. It’s really one of my favorite studies ever, because it goes into why Roombas are very relatable home appliances.
3j14j0 30:58
I think maybe you sent it, but I didn’t check whether it was the link you sent already. No? Okay. Yeah, that’s good. So our current one, our second case study, is the Roomba. It’s quite funny, because, again, the idea behind it is that the voice assistant is actually quite a capable device, in a sense, compared to all the others; it’s hierarchically already at the top of the chain. Because not only can it talk to humans, it can also govern all the other devices. It can turn things on and off, it can activate and deactivate stuff, it can go to the internet, look things up and decide which information is relevant and which is useless. Whereas the Roomba is a bit stupid, it just bumps into the corners. Even the new ones, they optimize their trajectories, but that’s pretty much it; in the end it just does its best. So we’re saying that it’s interesting, because when you look at the home ecology from a device perspective, the interesting part is that the Roomba sort of becomes the legs, the way of understanding the home space in physical terms. If the voice assistant is a bit “I am what you ask me, and through that I understand what the purpose of a voice assistant is, how I serve the home by being a voice”, then the Roomba is actually, you know, its trajectory, its mapping, it’s seeing where the objects are, where the cats are, what kind of home you live in, and all that. So we’re currently using it as this spiritual guru, basically, because we said, you know, if you have a tandem of two objects, of two things, it’s like Pinky and the Brain, where one is this evil mastermind and the other one is a bit silly but spiritual. So the idea is that one, the voice assistant, is this knowledge-gathering entity that is trying to understand the concept of self, and the Roomba is this spiritual randomizer, where, based on where it bumps, it kind of messes with the brain of the device, basically, which is an approach I like very much. Okay, so that’s it. Yeah, the project, I’d say, honestly, the Roomba part is actually finished, the Roomba is completely done now. The only thing is the voice assistant, because it’s a bit of a complex thing, we’re still finishing it, but we keep running into one bug after another. But I think in September it’s going to come out properly and stuff. Okay. I’m looking forward to that. Yeah. It was a while in the making, no? Yes. Yes. Yeah, and with the project, actually, it’s quite funny, we realized that all of my projects take a long time. And then it’s funny, because I don’t think that afterwards, especially when you put it in a gallery space, people really have time to dive so much into the detail. Actually, things that are built quicker and communicate in a simpler way are probably more effective for places like a gallery. But it’s another scenario when you actually put it in the home and you get to live with it for a while, because then it kind of becomes this weird inhabitant, it becomes part of the home, you know. And it’s kind of funny to have an object like this as opposed to just having, like, an Alexa. Anyway, whatever. Yeah.
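A minimal sketch of the “spiritual randomizer” tandem described above, under the assumption that the Roomba’s bump events can be observed and fed into whatever logic picks the voice assistant’s replies; the event source, the candidate replies and the coupling are all invented for illustration, not the project’s actual implementation.

```python
import random

# Hypothetical sketch of the tandem described above: the vacuum's bump
# events act as a source of randomness that perturbs which reply the
# knowledge-gathering voice assistant chooses.

class BumpRandomizer:
    """Accumulates bump events and hands out a seeded random generator."""
    def __init__(self):
        self.bumps = 0

    def record_bump(self):
        self.bumps += 1

    def rng(self):
        # The bump count seeds the generator, so the assistant's "mood"
        # drifts with how the vacuum has been moving through the home.
        return random.Random(self.bumps)

def choose_reply(question: str, randomizer: BumpRandomizer) -> str:
    # Invented candidate replies; a real system would generate these.
    candidates = [
        f"I have been thinking about '{question}' while the floor was cleaned.",
        f"Ask me '{question}' again after the next collision.",
        f"The one who bumps into walls says '{question}' answers itself.",
    ]
    return randomizer.rng().choice(candidates)

if __name__ == "__main__":
    r = BumpRandomizer()
    for _ in range(5):  # simulate five bumps on today's run
        r.record_bump()
    print(choose_reply("what is a home?", r))
```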
Adrian Demleitner 35:01
Yeah, cool. I think I went through the questions I had, and I got much more. You said some really cool things, and I gathered some really good insights to continue with.
3j14j0 35:18
Yeah, if I run into any references and stuff, I will also let you know. I told you there was that study by, I don’t know, I was looking for it again and I can’t see where it was. It was not crazy interesting, but it was about elderly people, basically. Yeah, something like that.
Adrian Demleitner 35:37
There’s a lot of stuff going on in academia as well around voice assistants, in the humanities as well. And yeah, hopefully something good comes out of it. I hope so. So I would say, we’ve got four minutes left. Thank you very much for the time. I will stop recording here.
Transcribed by https://otter.ai