
Content Augmenting Media (CAM)
As the volume of video information and entertainment content grows, consumers spend more and more time searching for the content they want to see, leaving less time to enjoy it. CAM addresses this problem by providing access to personalized program segments based on each user's preferences.
Content Augmenting Media (CAM) is an AT&T-patented technology that brings users the content they are interested in watching on television or on mobile devices, while minimizing the time they spend searching for it. It accomplishes this by collecting information about the content from content providers, optionally extracting additional information from the content itself using AT&T content analysis technology, and aggregating that information into personalized network streams that are delivered to users' devices as metadata.
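To make the aggregation step concrete, the sketch below models a simplified per-segment metadata record and a personalization filter that selects segments for a user's stream. The names (SegmentMetadata, provider_keywords, extracted_keywords, personalize_stream) and the keyword-overlap rule are illustrative assumptions, not CAM's actual metadata format or interfaces.

```python
# Illustrative sketch only: CAM's real metadata schema and delivery
# protocol are not documented here; these names are assumptions.
from dataclasses import dataclass, field

@dataclass
class SegmentMetadata:
    channel: str                 # source channel of the program segment
    start_time: float            # segment start, in seconds from stream origin
    duration: float              # segment length in seconds
    provider_keywords: set[str] = field(default_factory=set)   # supplied by the content provider
    extracted_keywords: set[str] = field(default_factory=set)  # produced by content analysis (e.g., the CAE)

    def all_keywords(self) -> set[str]:
        """Aggregate provider-supplied and automatically extracted keywords."""
        return {k.lower() for k in self.provider_keywords | self.extracted_keywords}

def personalize_stream(segments, user_keywords):
    """Keep only the segments whose metadata overlaps the user's interests."""
    interests = {k.lower() for k in user_keywords}
    return [s for s in segments if s.all_keywords() & interests]

# Example: a user interested in "soccer" receives only the matching segment.
segments = [
    SegmentMetadata("ESPN", 0.0, 120.0, {"Soccer", "World Cup"}, {"goal"}),
    SegmentMetadata("CNN", 0.0, 90.0, {"Politics"}, {"election"}),
]
print(personalize_stream(segments, {"soccer"}))  # -> the ESPN segment only
```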
CAM changes how you experience TV content. You define a profile in the form of keywords that describe your areas of interest, including names of people, locations, and other topics, and you receive real-time notifications when the content of any channel matches that profile. While you are watching television, suggested programs are displayed based on your keywords, and you can change your profile on the fly from your cell phone or tablet. No more endless searching only to miss what you wanted to watch.
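A minimal sketch of how such a keyword profile and its real-time notifications might behave, assuming a simple Profile class with an update method for on-the-fly changes; the class, method names, and exact-keyword matching rule are illustrative, not CAM's implementation.

```python
# Hypothetical sketch of a keyword profile with real-time notification;
# names and matching rules are assumptions, not CAM's actual design.
class Profile:
    def __init__(self, user, keywords):
        self.user = user
        self.keywords = {k.lower() for k in keywords}

    def update(self, add=(), remove=()):
        """Change the profile on the fly, e.g., from a phone or tablet."""
        self.keywords |= {k.lower() for k in add}
        self.keywords -= {k.lower() for k in remove}

    def notify_if_match(self, channel, segment_keywords):
        """Emit a notification when a channel's current segment matches the profile."""
        hits = self.keywords & {k.lower() for k in segment_keywords}
        if hits:
            print(f"{self.user}: channel {channel} matches your interests {sorted(hits)}")

profile = Profile("alice", {"Yosemite", "Soccer"})
profile.notify_if_match("Travel", {"yosemite", "hiking"})   # notifies
profile.update(remove={"Soccer"}, add={"Jazz"})             # on-the-fly profile change
profile.notify_if_match("Sports", {"soccer"})               # no longer notifies
```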
The stages of CAM's execution are outlined below.
The acquisition of content is a non-trivial but essential part of CAM's operation. By harnessing systems like URSA, the CAM prototype can be scaled quickly to acquire and process an arbitrary number of streams.
After acquisition, CAM uses components of the Content Analysis Engine (CAE) to generate metadata for each incoming video stream. CAM consumes this metadata as a continuous feed, so any content that matches a user's profile is found immediately.
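The sketch below ties the stages together: a continuous feed of per-segment metadata, here simulated by a generator standing in for the CAE output, is scanned against every registered keyword profile as records arrive. The feed format, the per-user keyword sets, and the matching step are assumptions for illustration, not the CAM pipeline's actual interfaces.

```python
# Illustrative wiring of the stages: acquisition -> CAE metadata feed -> profile matching.
# The generator stands in for a real, continuous feed of CAE-produced metadata.
def metadata_feed():
    """Simulated continuous feed of (channel, keywords) metadata records."""
    yield ("News24", {"election", "senate"})
    yield ("Travel", {"yosemite", "hiking"})
    yield ("Sports", {"soccer", "world cup"})

def run_matching(feed, profiles):
    """Scan each arriving record against every user's keyword profile."""
    for channel, keywords in feed:
        for user, interests in profiles.items():
            hits = keywords & interests
            if hits:
                print(f"notify {user}: {channel} matches {sorted(hits)}")

run_matching(metadata_feed(), {"alice": {"yosemite"}, "bob": {"soccer"}})
```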
Related Projects
AT&T Application Resource Optimizer (ARO) - For energy-efficient apps
CHI Scan (Computer Human Interaction Scan)
CoCITe – Coordinating Changes in Text
E4SS - ECharts for SIP Servlets
Scalable Ad Hoc Wireless Geocast
Graphviz System for Network Visualization
Information Visualization Research - Prototypes and Systems
Swift - Visualization of Communication Services at Scale
AT&T Natural Voices™ Text-to-Speech
StratoSIP: SIP at a Very High Level
Content Acquisition Processing, Monitoring, and Forensics for AT&T Services (CONSENT)
MIRACLE and the Content Analysis Engine (CAE)
Social TV - View and Contribute to Public Opinions about Your Content Live
Enhanced Indexing and Representation with Vision-Based Biometrics
Visual Semantics for Intuitive Mid-Level Representations
eClips - Personalized Content Clip Retrieval and Delivery
iMIRACLE - Content Retrieval on Mobile Devices with Speech
AT&T WATSON (SM) Speech Technologies
Wireless Demand Forecasting, Network Capacity Analysis, and Performance Optimization