HCDE 451: Voice Interaction Process Blog
Integrating voice interactions with a fridge
For this week’s assignment, we explored how voice interactions could be incorporated into an IoT device as opposed to on-screen interactions. While it would have been great to continue building on the Joint Pressure Sleeve as I’ve done in previous assignments, I knew that incorporating voice interactions into a knee brace would not be very practical. Instead, this week revolved around a different IoT device: the refrigerator.
Like a lot of people, I tend to open the refrigerator to take a look inside but end up closing it after a few moments without grabbing anything; this actually happens more times a day than I would like to admit. I thought incorporating voice interactions into a refrigerator would (hopefully) prevent the endless stare from occurring again. After some brainstorming, I eventually came up with a list of tasks that could be accomplished by incorporating voice interactions into a refrigerator:
Task 1: Adding food to shopping lists to buy later
Task 2: Suggesting recipes based on what foods are in the refrigerator
Task 3: Checking expiration dates
Task 4: Checking whether there are enough ingredients for a recipe
Task 5: Estimating how long until drinks are cold
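To make these tasks concrete, the five tasks above could each be treated as an "intent" that the fridge's assistant matches a spoken request against. This is only a minimal sketch; the intent names and trigger phrases are illustrative assumptions, not from any actual voice platform.

```python
# Hypothetical intent matcher for the five fridge tasks.
# Intent names and trigger phrases are made-up assumptions for illustration.
INTENTS = {
    "add_to_shopping_list": ["add", "shopping list"],
    "suggest_recipes":      ["recipe", "what can i make"],
    "check_expiration":     ["expire", "expiration"],
    "check_ingredients":    ["enough", "ingredients"],
    "drink_cooling_time":   ["cold", "how long"],
}

def match_intent(utterance: str) -> str:
    """Return the first intent whose trigger phrase appears in the utterance."""
    text = utterance.lower()
    for intent, triggers in INTENTS.items():
        if any(trigger in text for trigger in triggers):
            return intent
    # Nothing matched: hand off to the error-handling path
    return "fallback"
```

A real assistant would use trained language understanding rather than keyword spotting, but even this toy version shows why a fallback intent is needed: any utterance that matches nothing has to route somewhere.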
The hope for this assignment was to explore the viability of using voice interactions in place of actually opening the fridge for everyday users.
Coming into this assignment, I didn’t realize how hard it would be to create the first drafts of the dialogues for each task. I found myself consistently writing out the intended dialogue outcomes instead of exploring error handling. I really hadn’t thought much about how error handling would work in general because, as a user of smart devices with voice interactions, I mostly focused on getting the correct outcome. After a good bit of back and forth in my mind about error handling, I finally drafted some sample dialogues that I felt good about.
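The error-handling branches I kept overlooking amount to a re-prompt loop: if the assistant doesn’t recognize what it heard, it asks again, and gives up gracefully after a few tries. A minimal sketch of that loop for the shopping-list task, where the known items, retry budget, and prompt wording are all assumptions of mine:

```python
# Sketch of an error-handling (re-prompt) loop for the shopping-list task.
# KNOWN_ITEMS, MAX_RETRIES, and the prompt text are illustrative assumptions.
KNOWN_ITEMS = {"milk", "eggs", "butter", "cheese"}
MAX_RETRIES = 2

def add_item_dialogue(heard_items):
    """Walk through successive recognition attempts, re-prompting until an
    item is recognized or the retry budget runs out. Returns the assistant's
    side of the conversation as a list of responses."""
    responses = []
    for attempt, item in enumerate(heard_items):
        if item in KNOWN_ITEMS:
            responses.append(f"Added {item} to your shopping list.")
            break
        if attempt < MAX_RETRIES:
            responses.append("Sorry, I didn't catch that item. Could you repeat it?")
        else:
            responses.append("I'm having trouble understanding. Let's try again later.")
            break
    return responses
```

Capping the retries matters in a voice interface: without it, a misheard word traps the user in an endless "could you repeat that?" loop.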
Testing and Conversation Flows
While the dialogues made sense in my head as I wrote them, I didn’t realize how awkward they were until I did a table reading of them with others. In particular, what made them awkward to read through was that a lot of the voice assistant’s dialogue was wordy and not to the point. When I think about my interactions with Siri or Alexa, their answers to my prompts are more straightforward and succinct. With that mindset, I set out to revise and then expand upon my dialogues with a conversation flow.
Feedback and Analysis
After finishing my conversation flows, I felt pretty good about the way I adjusted the dialogues to reflect the feedback I got during testing. However, it seems I did too good a job of making the dialogues straightforward and succinct, as it was pointed out that I don’t have much, if any, conversational tone in my voice interactions. I guess this reflects how I usually interact with voice assistants; I’m not one to converse with a voice assistant but rather just get the answer I’m looking for. Though, I realize that not everyone interacts with voice assistants the same way I do. If I got a chance to revisit this assignment, I would explore how a voice assistant could fit the conversational preferences of each user instead of honing in on one particular type.