What’s This? A Voice and Touch Multimodal Approach for Ambiguity Resolution in Voice Assistants
Jaewook Lee, Sebastian S. Rodriguez, Raahul Natarrajan, Jacqueline Chen, Harsh Deep, Alex Kirlik
To Appear at ICMI 2021
B.S. in Computer Science, Minor in Psychology
GPA: 3.95/4.0, Technical GPA: 3.93/4.0
Dean's List: FA18, SP19, FA19, SP20, FA20, SP21
James Scholar: FA19, SP20, FA20, SP21, FA21
Can we enable mobile devices to extract information (context) from their users or surroundings (e.g., using sensors or computer vision)?
Can we equip mobile devices with the necessary technologies (e.g., machine learning) such that they can process and make sense of contextual data?
Can we improve the way mobile devices display results of processing the data by superimposing them onto the real world?
Leading an effort to create an open-source toolkit with the goal of facilitating remote user studies for XR researchers.
Done in collaboration with Dr. Eyal Ofek at Microsoft Research.
Researching which AR navigation visualizations work best in a given scenario and which contexts pedestrians pay attention to.
Advised by Prof. David Lindlbauer at CMU HCII.
Conducting a field study using Decipher, a data-driven visualization tool that facilitates understanding of large sets of feedback.
Done in collaboration with Joy Kim at Adobe Research.
(* denotes equal contribution)
Wenxuan Wendy Shi, Akshaya Jagannadharao, Jaewook Lee, Brian P. Bailey
To Appear at CSCW 2021
JiWoong Jang*, Jaewook Lee*, Vanessa Echeverria, LuEttaMae Lawrence, Vincent Aleven
Taehyun Kim, Jaewook Lee, Robb Lindgren, Jina Kang
Sebastian S. Rodriguez, Jacqueline Chen*, Harsh Deep*, Jaewook Lee*, Derrik E. Asher, and Erin Zaroukian