
What if we could capture live video of objects or scenes from several cameras, and process it to generate 3D models and images that could then be displayed and manipulated using 3D technology? For example, we could display the scene over the web on a 3D stereo screen, insert the object into a virtual world, or hold a gaze-corrected video conference.
In 2008–09, we set up a new minilab to explore this area. A video featuring Vinay Vaishampayan, Shankar Krishnan, and Amy Reibman explains some of this work (primarily for a general audience).
Technical Documents
Real-time image deconvolution on the GPU
James Klosowski, Shankar Krishnan
SPIE Conference on Parallel Processing for Imaging Applications, 2011.
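The paper's GPU pipeline is not reproduced on this page. As a rough illustration of what image deconvolution involves, below is a minimal sketch of the classic Richardson–Lucy iteration written in JAX, which dispatches the array operations to a GPU when one is available. This is not the method from the Klosowski–Krishnan paper; the function name `richardson_lucy` and all parameter choices here are illustrative.

```python
import jax
import jax.numpy as jnp
from jax.scipy.signal import convolve2d
from functools import partial

@partial(jax.jit, static_argnames=("num_iters",))
def richardson_lucy(observed, psf, num_iters=30, eps=1e-7):
    """Generic Richardson-Lucy deconvolution of a 2D image (illustrative).

    observed: blurred input image, shape (H, W)
    psf:      point-spread function (blur kernel), shape (kH, kW)
    """
    psf_flipped = psf[::-1, ::-1]                         # adjoint of the blur
    estimate = jnp.full_like(observed, observed.mean())   # flat initial guess

    def step(estimate, _):
        blurred = convolve2d(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)                # guard divide-by-zero
        estimate = estimate * convolve2d(ratio, psf_flipped, mode="same")
        return estimate, None

    estimate, _ = jax.lax.scan(step, estimate, None, length=num_iters)
    return estimate

# Example usage: blur a synthetic image with a 5x5 box kernel, then restore it.
key = jax.random.PRNGKey(0)
sharp = jax.random.uniform(key, (128, 128))
psf = jnp.ones((5, 5)) / 25.0
blurred = convolve2d(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf, num_iters=50)
```

Because the whole iteration is a single jitted function, the per-iteration convolutions stay on the accelerator rather than round-tripping through host memory, which is the general motivation for doing deconvolution on the GPU.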
Related Projects
AT&T Application Resource Optimizer (ARO) - For energy-efficient apps
CHI Scan (Computer Human Interaction Scan)
CoCITe – Coordinating Changes in Text
E4SS - ECharts for SIP Servlets
Scalable Ad Hoc Wireless Geocast
Graphviz System for Network Visualization
Information Visualization Research - Prototypes and Systems
Swift - Visualization of Communication Services at Scale
AT&T Natural Voices™ Text-to-Speech
StratoSIP: SIP at a Very High Level
Content Augmenting Media (CAM)
Content Acquisition Processing, Monitoring, and Forensics for AT&T Services (CONSENT)
MIRACLE and the Content Analysis Engine (CAE)
Social TV - View and Contribute to Public Opinions about Your Content Live
Enhanced Indexing and Representation with Vision-Based Biometrics
Visual Semantics for Intuitive Mid-Level Representations
eClips - Personalized Content Clip Retrieval and Delivery
iMIRACLE - Content Retrieval on Mobile Devices with Speech
AT&T WATSON (SM) Speech Technologies
Wireless Demand Forecasting, Network Capacity Analysis, and Performance Optimization