The iWalk navigation service for people with visual disabilities
Subject matter expert in dialog systems, natural language generation, speech processing
My main research topic is natural language generation in interactive systems: that is, designing algorithms that enable computer systems to produce more helpful, targeted and informative presentations in the context of an ongoing spoken, textual or multimodal interaction. I work in all stages of the natural language generation pipeline, from content selection (deciding "what to say") through sentence planning and surface realization (deciding "how to say it").
I am also interested in adaptation in dialog systems. For example, I have studied how discourse context can be used to improve natural language generation, how knowledge of the user's location can improve language modeling, and how user models can be used to improve information presentations.
Interactive systems come in many forms, from speech-input, speech-output dialog systems to multimodal systems that support human-human communication. I have also worked on assistive technology, focusing on speech-driven applications for people who are blind or have low vision, and on tools to help people maintain their privacy online.
I am vice president of the Special Interest Group on Discourse and Dialogue (SIGdial) and a rotating editor for the journal Dialogue and Discourse. I am also the scheduler for the DiSpeLL talk series at AT&T Research; if you are in the New York/New Jersey area and would like to visit us to give a talk about dialog, speech, language or learning, please contact me.
Before joining AT&T Research, I was a faculty member at Stony Brook University. Together with a colleague from Stony Brook, I am co-author of the book "The Princess at the Keyboard: Why Girls Should Become Computer Scientists".