Trust Issues
The privacy concerns center mainly on the data-tracking practices of the companies issuing these devices, such as Google or Amazon. Most of my participants were acutely aware of these practices. They either accepted them as the price they have to pay or were considering switching providers should a privacy-sensitive alternative become available.
Generally, these devices are placed within intimate contexts of their owners' lives. At the same time, voice assistants are invisible interfaces that withdraw from the negotiation of intimacy. The problem boils down to us inviting these devices into our innermost intimacies while they offer nothing comparable in return.
Adding to these two points is the fact that users have little to no control over how they interact with their voice assistants. Users have no way of giving consent to many of the processes that run on the device.
The general problem that arises out of these two themes can then be stated as follows.
Users of voice assistants lack a proper vocabulary for dealing with the other-than-human presence, especially in case of errors, and generally have a hard time bonding with their voice assistants. They are furthermore unable to address issues of trust and can exert only little control over the negotiation of this important aspect.
Neither the user nor the device knows enough, or even can know enough, about the other to enable communication and interaction that would lead to trust and bonding. The interaction with a voice assistant therefore needs to implement design characteristics that enable the user not just to tolerate but to embrace the chaotic nature of voice assistants, while still being able to unlock their full potential.