Principles

I used insights from users to create voice prototypes with Voiceflow and screen prototypes with Origami Studio, then had different kinds of voice assistant users (pro, casual, and non-users) interact with them.

Then I refined the interactions based on their feedback and co-creation suggestions. What you see in these videos is a demo of those prototypes, from which the design principles have been extrapolated.

#1 Set Expectations Upfront

Not just manage them. Users only know what you show them. There’s no hover state for interactions with voice assistants, so it's up to designers not to disappoint users by setting the bar too high.

RECEPTION

Seen as an improvement

People who didn't use VAs thought it was super normal

Frequent users thought it was a good solution and made them more curious as to what else they could do with it

KEY FEEDBACK

“It's good that it tells you that it has trouble and lets you train it. Sometimes it doesn't work for names I say all the time.”

Feasibility

Totally technically feasible! This is a design decision.

What Happened in the Video?

#2 Pull Back the Curtain

These systems are wildly complex, and it's amazing that they work at all. However, the end result is often explainable, and knowing what's happening helps people become less frustrated.

Designing to make AI understandable is a challenge. But not understanding AI can pose real-world problems in much higher-stakes situations. This is a start.

RECEPTION

Created empathy

People were more forgiving of the mistakes because they understood where the assistant was coming from

KEY FEEDBACK

“It's important that it doesn't do this too often. I'm paying money for a top-of-the-line machine, and it’s only okay if it does this in the beginning.”

Feasibility

Totally technically feasible! This is what's going on behind the scenes with NLP decisions anyway.
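As a rough sketch of what "pulling back the curtain" could look like in practice: surface the NLP system's own uncertainty instead of silently acting on a weak guess. The `classify_intent` function and its intent names below are hypothetical stand-ins, not any real assistant's API.

```python
# Sketch: expose low-confidence NLP decisions to the user instead
# of hiding them. `classify_intent` stands in for a real NLU model
# that returns an (intent, confidence) pair.

LOW_CONFIDENCE = 0.5  # assumed threshold for illustration

def classify_intent(utterance):
    # Stand-in lookup; a real model scores intents from speech.
    known = {
        "set a timer": ("timers.set", 0.92),
        "play some jams": ("music.play", 0.41),
    }
    return known.get(utterance, ("unknown", 0.0))

def respond(utterance):
    intent, confidence = classify_intent(utterance)
    if confidence < LOW_CONFIDENCE:
        # Explain the guess rather than acting on it blindly.
        return f"I think you mean '{intent}', but I'm not sure. Did I get that right?"
    return f"Okay, doing '{intent}'."
```

Because the model already produces a confidence score, showing it costs nothing extra at inference time; the design decision is only about when to reveal it.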

What Happened in the Video?

#3 Give Users Agency

Technology can evolve to solve a lot of our problems, but in this case we know how to help ourselves. Virtual assistants should encourage users to train them, in order to build trust over time.

RECEPTION

Another insight into how the system thinks

Allows you to correct it and learn how it works at the same time

Users were relieved to know they could intervene

KEY FEEDBACK

Time to prompt is important

“It needs to prompt me only when something goes wrong, not when/if it actually did understand.”

Feasibility

Possible, but not advisable. Giving users free rein to make the voice assistant say whatever they'd like could be exploited. However, there are workarounds, such as listing the top 3 guesses and asking users to pick the one that's correct (or none at all).
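That workaround can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical recognizer that returns scored transcription candidates; the names and scores are made up.

```python
# Sketch of the "top 3 guesses" workaround: instead of letting users
# type arbitrary corrections, offer the recognizer's best candidates
# and let them pick one, or reject them all.

def top_candidates(candidates, n=3):
    """Return the n highest-confidence guesses."""
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:n]

def confirm_with_user(candidates, user_choice):
    """User picks one of the offered guesses, or None if all are wrong."""
    options = top_candidates(candidates)
    if user_choice is not None and 0 <= user_choice < len(options):
        return options[user_choice][0]
    return None  # none were correct; fall back to asking again

# Example: four scored guesses for a spoken name
guesses = [("Anna", 0.61), ("Ana", 0.58), ("Hannah", 0.33), ("Honda", 0.05)]
confirm_with_user(guesses, user_choice=0)  # the user confirms "Anna"
```

Constraining corrections to the recognizer's own candidates keeps the user in control without opening the door to arbitrary injected responses.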

What Happened in the Video?

#4 Just Human Enough

Virtual assistants only need to be human enough to be easy to interact with, without deceiving or misleading users (through their own speech or through outbound communication and marketing material) about their nature. Anything more only leads to misunderstanding and frustration.

They can do this by eschewing defaults and giving users options that they may not realise exist.

RECEPTION

We'll have to see. Voice assistants are surprisingly young for how ubiquitous they are. It's more of a long term thing.

KEY FEEDBACK

“That’s what a person would say/how I would ask someone if I didn't understand.”

Feasibility

That’s up to the makers of this tech

What Does That Even Mean?

Conclusion →