Background
I joined Adept.ai as the Lead Product Designer, working on how humans can best use AI agents to complete tasks for them. Two months later, the team was acquired* by Amazon.
At Amazon, my team of two was initially tasked with aligning the Alexa org with the Adept team. We then decided to design and prototype what we believed was the right interaction model for Adept's workflows.
Breaking away from the turn-by-turn chat
The challenge
It's cumbersome to have a back-and-forth conversation with a voice agent to get real-life tasks done.
How do we minimize back-and-forth steps with the customer?
Inventing a new way to use agents
The first prototype
My small team built a working prototype of a UI component embedded in the chat interface. You can talk naturally, and the interface updates instantly as you speak.
The prototype asks follow-up questions, but as subtle visual nudges, taking advantage of the fact that visual recognition takes only milliseconds.
I named this pattern "Live Notes."
Building a grocery cart
I turned the interface into a full-screen experience outside the chat view (due to some political design-territory pressures). This is me talking naturally to my grocery cart to add items.
Ordering food
Asking questions about the menu while placing an order
Alexa+ Launch Demo
A few months later, our prototype, built by three people including myself, made it onstage for the Alexa+ launch demo.
