We study many areas of speech and language processing, ranging from core statistical techniques to approaches designed for specific tasks. We are interested in classic applications like speech recognition and machine translation, as well as emerging areas like sign language recognition and social media analysis. Specific recent research projects are listed below:
- Multi-view learning of representations for acoustics and text (Livescu & Stoehr, 2009; Arora & Livescu, 2012, 2013, 2014; Andrew et al., 2013)
- New deep network architectures for feature learning
- Sub-word modeling for speech recognition
- Articulatory methods for speech processing (Jyothi et al., 2011, 2012; Tang et al., 2012; Prabhavalkar et al., 2011, 2013; Wang et al., 2014)
- Example-based methods
- Issues of low-resource settings for spoken term detection and speech recognition (Prabhavalkar et al., 2012, 2013; Levin et al., 2013)
- Automatic sign language recognition, especially handshape modeling (Kim et al., 2012, 2013)
- Discriminative segmental (semi-Markov) models (Tang et al., 2014)
- Surface web semantics for natural language processing, including:
  - dependency and constituent parsing (Bansal & Klein, 2011)
  - coreference resolution (Bansal & Klein, 2012)
  - lexical intensity ordering (de Melo & Bansal, 2013)
  - structured taxonomy induction (Bansal et al., 2014)
- Task-specific continuous word representations, e.g., for dependency parsing (Bansal et al., 2014)
- Text-to-image coreference resolution (Kong et al., 2014)
- Neural models of paraphrase, compositionality, and sentiment
- Weakly-supervised NLP (Gimpel & Bansal, 2014)
- Generative models for word sense induction (Wang et al., 2015)
- Coreference, relation extraction, and question answering
- Diversity in linguistic structure prediction (Gimpel et al., 2013)
- New models and formalisms for machine translation (Gimpel & Smith, 2014)
- Social media analysis, including part-of-speech tagging (Owoputi et al., 2013) and forecasting (Sinha et al., 2013)