This is work for a multimodal project improving an existing user flow that is currently web only. When the user calls the IVR, the system can present a multimodal app (voice and visuals) so the client can continue the interaction with the added benefit of a screen, while still leveraging the voice/phone channel to control the app and enter information.
I analyzed the current user flow and optimized it for efficient multimodal interaction, simplifying the process and reducing the steps needed to complete it.
I used a process of iterative exploration across multiple concepts, presenting information through both voice and visual display while accepting both touch and voice input from the user. Additional exploration considered how the visuals can change in response to voice interaction, and how the IVR can respond to touch input.
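The bidirectional behavior described above can be sketched as a shared session that keeps the voice (IVR) and visual (screen) channels in sync, so input on either channel advances the same flow. This is a minimal illustrative sketch, not the project's implementation; all class, method, and state names here are hypothetical.

```python
class MultimodalSession:
    """Hypothetical sketch: one session state driven by two input channels."""

    def __init__(self):
        self.screen = []    # content rendered on the visual channel
        self.ivr = []       # prompts spoken on the voice/phone channel
        self.state = "menu"

    def on_voice(self, utterance):
        # Voice input from the phone channel also updates the screen.
        if utterance == "check balance":
            self.state = "balance"
            self.screen.append("Showing account balance")
            self.ivr.append("Your balance is now on screen.")

    def on_touch(self, selection):
        # Touch input on the visual app also advances the IVR dialog.
        if selection == "pay_bill":
            self.state = "payment"
            self.screen.append("Enter payment amount")
            self.ivr.append("Okay, let's pay a bill. Say or tap the amount.")


session = MultimodalSession()
session.on_voice("check balance")   # voice drives the visuals
session.on_touch("pay_bill")        # touch drives the IVR
print(session.state)  # payment
```

The key design point is that neither channel owns the flow: both feed a single session state, so the user can switch freely between speaking and tapping at any step.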