
Eric Zavesky

Zavesky, Eric
9505 Arboretum Blvd #3S05B
Austin, TX
Subject matter expert in interactive video indexing and retrieval, multimedia content processing, multimodal interfaces, machine learning, biometrics, data mining, and natural language processing.

Eric Zavesky joined AT&T Labs Research in October 2009 as a Principal Member of Technical Staff.  At AT&T, he has collaborated on several projects that bring alternative query and retrieval representations to multimedia indexing systems, including object-based query, biometric representations for personal authentication, and the incorporation of spatio-temporal information into near-duplicate copy detection.  His prior work at Columbia University studied semantic visual representations of content and low-latency, high-accuracy interactive search.

Projects
Assistive Technology, At AT&T Labs - Research, we apply our speech, language and media technologies to give people with disabilities more independence, privacy and autonomy.

Connecting Your World, The need to be connected is greater than ever, and AT&T Researchers are creating new ways for people to connect with one another and with their environments, whether it's their home, office, or car.

Content Augmenting Media (CAM), Leverage multimedia metadata to provide live alerts and intelligent content consumption.

Content-Based Copy Detection, Content-based Copy Detection is an enabling technology to discover repeated content and events in a large-scale content database.

Enhanced Indexing and Representation with Vision-Based Biometrics, Leveraging visual biometrics for indexing and representation of content for retrieval and verification.

iMIRACLE - Content Retrieval on Mobile Devices with Speech, iMIRACLE uses large vocabulary speech recognition for content retrieval with metadata words (titles, genre, channels, etc.) and content words that occur in recorded programs.

MIRACLE and the Content Analysis Engine (CAE), The Multimedia Information Retrieval by Content (MIRACLE) project encompasses the technologies for video indexing, analysis, and retrieval with audio, textual, and visual content information.

VidCat - Simplified Personal Photo and Video Management, VidCat permits simplified personal photo and video management (i.e., a Video Catalog) from a webpage or your favorite mobile device.

Video - Content Delivery and Consumption, A background on the delivery and consumption of video and multimedia and references to projects within the AT&T Video and Multimedia Technologies and Services Research Department.

Video - Indexing and Representation (Metadata), Video and multimedia indexing and representations (i.e. metadata), their production, and use. Links to projects within the AT&T Video and Multimedia Technologies and Services Research Department.

Video and Multimedia Technologies Research, The AT&T Video and Multimedia Technologies Research Department strives to acquire multimedia and video for indexing, retrieval, and consumption with textual, semantic, and visual modalities.

Visual API - Visual Intelligence for your Applications, The Visual API provides Visual Intelligence to applications and developers through REST-based APIs powered by the AT&T Developer Program.

Visual Semantics for Intuitive Mid-Level Representations, Represent content with mid-level visual semantics for retrieval, filtering, and tagging.

Technical Documents

Appearance, Visual and Social Ensembles for Face Recognition in Personal Photo Collections
Eric Zavesky, Raghuraman Gopalan, Archana Sapkota
IEEE International Conference on Biometrics: Theory, Applications and Systems,  2013.  [PDF]  [BIB]

An Augmented Multi-Tiered Classifier for Instantaneous Multi-Modal Voice Activity Detection
Dimitrios Dimitriadis, Eric Zavesky, Matthew Burlick (Stevens Institute of Technology)
Annual Conference of the International Speech Communication Association (Interspeech),  2013.  [PDF]  [BIB]

Large-Scale Analysis for Interactive Media Consumption
David Gibbon, Andrea Basso, Lee Begeja, Zhu Liu, Bernard Renger, Behzad Shahraray, Eric Zavesky
TV Content Analysis,  CRC Press,  2012.  [PDF]  [BIB]

Combining Content Analysis of Television Programs with Audience Measurement
David Gibbon, Zhu Liu, Eric Zavesky, DeDe Paul, Deborah Swayne, Rittwik Jana, Behzad Shahraray
IEEE Consumer Communications and Networking Conference (CCNC),  2012.  [PDF]  [BIB]

AT&T Research at TRECVID 2011
Eric Zavesky, Zhu Liu, Behzad Shahraray, Ning Zhou
TRECVID Workshop,  2011.  [PDF]  [BIB]

AT&T Research at TRECVID 2010
Eric Zavesky, Behzad Shahraray, Zhu Liu, Neela Sawant
TRECVID 2010 Workshop,  2010.  [PDF]  [BIB]

Patents

Relational Display Of Images, May 6, 2014
Dynamic Access To External Media Content Based On Speaker Content, May 6, 2014
Brief And High-Interest Video Summary Generation, June 5, 2012

Publications

TV Content Analysis for Multiscreen Interactive Browsing and Retrieval
David Gibbon, Andrea Basso, Lee Begeja, Zhu Liu, Bernard Renger, Behzad Shahraray, Eric Zavesky
TV Content Analysis,  CRC Press,  2011.  [BIB]

Unsupervised Event Segmentation of News Content with Multimodal Cues
Mattia Broilo, Eric Zavesky, Andrea Basso, Francesco G. B. De Natale
AIEMPro’10 at ACM Multimedia,  2010.  [BIB]

AT&T Research at TRECVID 2010
Zhu Liu, Eric Zavesky, Behzad Shahraray, Neela Sawant
2010.  [BIB]

A Guided, Low-Latency, and Relevance Propagation Framework for Interactive Multimedia Search
Eric Zavesky
Graduate School of Arts and Sciences, Columbia University,  2010.  [BIB]

Visual Islands: Intuitive Browsing of Visual Search Results
Eric Zavesky, Shih-Fu Chang, Cheng-Chih Yang
ACM International Conference on Image and Video Retrieval,  pp 617--626,  2008.  [BIB]

CuZero: Embracing the Frontier of Interactive Visual Search for Informed Users
Eric Zavesky, Shih-Fu Chang
ACM Multimedia Information Retrieval,  2008.  [BIB]

Cross-Domain Learning Methods for High-Level Visual Concept Classification
Wei Jiang, Eric Zavesky, Shih-Fu Chang, Alex Loui
IEEE International Conference on Image Processing,  2008.  [BIB]

Columbia University/VIREO-CityU/IRIT TRECVID2008 High-Level Feature Extraction and Interactive Video Search
Shih-Fu Chang, Junfeng He, Yu-Gang Jiang, Elie El Khoury, Chong-Wah Ngo, Akira Yanagawa, Eric Zavesky
NIST TRECVID Workshop,  2008.  [BIB]

Searching Visual Semantic Spaces with Concept Filters
Eric Zavesky, Zhu Liu, David Gibbon, Behzad Shahraray
IEEE International Conference on Semantic Computing,  2007.  [BIB]

Columbia University's Semantic Video Search Engine
Shih-Fu Chang, Lyndon Kennedy, Eric Zavesky
ACM International Conference on Image and Video Retrieval,  2007.  [BIB]

Columbia University TRECVID2007 High-Level Feature Extraction
Shih-Fu Chang, Wei Jiang, Akira Yanagawa, Eric Zavesky
NIST TRECVID Workshop,  2007.  [BIB]

Columbia University TRECVID-2006 Video Search and High-Level Feature Extraction
Shih-Fu Chang, Winston Hsu, Wei Jiang, Lyndon Kennedy, Dong Xu, Akira Yanagawa, Eric Zavesky
NIST TRECVID Workshop,  2006.  [BIB]

Columbia University TRECVID-2005 Video Search and High-Level Feature Extraction
Shih-Fu Chang, Winston Hsu, Lyndon Kennedy, Lexing Xie, Akira Yanagawa, Eric Zavesky, Dongqing Zhang
NIST TRECVID Workshop,  2005.  [BIB]
