
180 Park Ave - Building 103
Florham Park, NJ
Subject matter expert in AT&T Natural Voices®, text-to-speech, SSML, text markup, text normalization, and unit selection
Started at Bell Labs in 1986 in the text-to-speech group. Initially handled system administration for the group. Did DSP32 programming for synthesis and TTS software support. Projects included programmable phone-call automation with TTS responses, TTS on a DOS laptop (with no floating point or built-in audio device), and conversion of email to Audix messages with TTS.
In 1996 became part of the new text-to-speech research group at AT&T Labs. Merging old and new technologies, helped create and commercialize AT&T Natural Voices®. Up to NV 3.0, worked with development groups to hand off research improvements. Since NV 4.0 have been responsible for releases and updates. Also work with organizations using NV within AT&T and provide tier-3 support for Wizzard Software, which handles much of the external sales and licensing for NV.
For many years now, have monitored and maintained the heavily used Natural Voices public demo, which demonstrates and promotes our technology and educates visitors about it. On average, more than 6,000 people per day use our site from all over the world.
AT&T Natural Voices™ Text-to-Speech
Natural Voices is AT&T's state-of-the-art text-to-speech product, converting text into natural-sounding synthesized speech in a variety of voices and languages.
Automatic Assessment of American English Lexical Stress using Machine Learning Algorithms
Yeon Kim, Mark Beutnagel
SLaTE-2011 workshop (Speech and Language Technology in Education),
2011.
This paper introduces a method for automatically assessing lexical stress patterns in American English words using machine learning algorithms, which could be used in computer-assisted language learning (CALL) systems. We aim to model human perception of lexical stress patterns by training on stress patterns in a native speaker's utterances and to use the resulting models to detect erroneous stress patterns from a trainee.
In this paper, all possible lexical stress patterns in 3- and 4-syllable American English words are presented, and four machine learning algorithms, CART, AdaBoost+CART, SVM, and MaxEnt, are trained on acoustic measurements from a native speaker's utterances and the corresponding stress patterns. Our experimental results show that MaxEnt classified best, correctly identifying 83.3% of the stress patterns in 3-syllable words and 88.7% in 4-syllable words.
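As a rough illustration of the setup the abstract describes, the sketch below trains a stress-pattern classifier in Python with scikit-learn. It assumes per-syllable acoustic measurements (duration, pitch, energy) have already been extracted; the data here are random placeholders, and MaxEnt is approximated by multinomial logistic regression, its standard formulation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder data standing in for real measurements: 9 features per
# 3-syllable word (duration, mean F0, energy for each syllable), and a
# stress-pattern label such as "100" (primary stress on the 1st syllable).
X = rng.normal(size=(500, 9))
y = rng.choice(["100", "010", "001"], size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Multinomial logistic regression is the usual implementation of a
# maximum-entropy (MaxEnt) classifier, the best performer in the paper.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("stress-pattern accuracy:", clf.score(X_test, y_test))

With real acoustic features in place of the placeholders, the same harness accommodates the paper's other classifiers (CART, AdaBoost+CART, SVM) by swapping the estimator.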
Speech acts and dialog TTS
Ann Syrdal, Alistair Conkie, Yeon Kim, Mark Beutnagel
Seventh ISCA Speech Synthesis Workshop,
2010.
Methods And Apparatus For Rapid Acoustic Unit Selection From A Large Speech Corpus,
November 20, 2012
A speech synthesis system can select recorded speech fragments, or acoustic units, from a very large database of acoustic units to produce artificial speech. The selected acoustic units are chosen to minimize a combination of target and concatenation costs for a given sentence. However, as concatenation costs, which are measures of the mismatch between sequential pairs of acoustic units, are expensive to compute, processing can be greatly reduced by pre-computing and caching the concatenation costs. Unfortunately, the number of possible sequential pairs of acoustic units makes such caching prohibitive. However, statistical experiments reveal that while about 85% of the acoustic units are typically used in common speech, less than 1% of the possible sequential pairs of acoustic units occur in practice. A method for constructing an efficient concatenation cost database is provided by synthesizing a large body of speech, identifying the acoustic unit sequential pairs generated and their respective concatenation costs, and storing those concatenation costs likely to occur. By constructing a concatenation cost database in this fashion, the processing power required at run-time is greatly reduced with negligible effect on speech quality.
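A minimal Python sketch of the cache construction described in this abstract follows. The synthesizer internals are not given here, so synthesize_units (unit selection for a sentence) and concat_cost (the join-mismatch measure) are hypothetical stand-ins.

def build_concat_cost_cache(training_texts, synthesize_units, concat_cost):
    """Synthesize a large body of speech and cache the join cost for every
    sequential unit pair actually observed; per the abstract, under 1% of
    the possible pairs ever occur, so the cache stays manageable."""
    cache = {}
    for text in training_texts:
        units = synthesize_units(text)        # unit IDs selected for this sentence
        for a, b in zip(units, units[1:]):    # sequential pairs
            if (a, b) not in cache:
                cache[(a, b)] = concat_cost(a, b)
    return cache

def lookup_concat_cost(cache, a, b, concat_cost):
    """Runtime path: a cheap cache hit for common pairs; the rare miss
    falls back to computing the cost directly."""
    cost = cache.get((a, b))
    return cost if cost is not None else concat_cost(a, b)

# Toy usage with stand-in components:
fake_select = lambda text: [hash(w) % 10 for w in text.split()]  # pretend unit selection
fake_cost = lambda a, b: abs(a - b)                              # pretend join cost
cache = build_concat_cost_cache(["hello world", "hello there"], fake_select, fake_cost)
print(lookup_concat_cost(cache, 3, 7, fake_cost))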
Methods And Apparatus For Rapid Acoustic Unit Selection From A Large Speech Corpus,
December 27, 2011
A speech synthesis system can select recorded speech fragments, or acoustic units, from a very large database of acoustic units to produce artificial speech. The selected acoustic units are chosen to minimize a combination of target and concatenation costs for a given sentence. However, as concatenation costs, which are measures of the mismatch between sequential pairs of acoustic units, are expensive to compute, processing can be greatly reduced by pre-computing and caching the concatenation costs. Unfortunately, the number of possible sequential pairs of acoustic units makes such caching prohibitive. A method for constructing an efficient concatenation cost database is provided by synthesizing a large body of speech, identifying the acoustic unit sequential pairs generated and their respective concatenation costs. By constructing a concatenation cost database in this fashion, the processing power required at run-time is greatly reduced with negligible effect on speech quality.
Method And System For Aligning Natural And Synthetic Video To Speech Synthesis,
November 30, 2010
According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously--text and Facial Animation Parameters. A Text-To-Speech converter drives the mouth shapes of the face. An encoder sends Facial Animation Parameters to the face. The text input can include codes, or bookmarks, transmitted to the Text-to-Speech converter, which are placed between and inside words. The bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. The Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text. The system reads the bookmark and provides the encoder time stamp and a real-time time stamp. The facial animation system associates the correct facial animation parameter with the real-time time stamp using the encoder time stamp of the bookmark as a reference.
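The sketch below illustrates the alignment mechanism in Python under assumed interfaces: the bookmark syntax (\bm=<counter>) is invented for illustration rather than taken from the MPEG-4 bitstream, and the engine's report of when it reached each counter is supplied as a plain dictionary.

import re

BOOKMARK = re.compile(r"\\bm=(\d+)\s?")   # illustrative bookmark syntax

def split_bookmarks(marked_text):
    """Separate plain text for the Text-to-Speech converter from the
    bookmark counters, remembering where in the text each bookmark sat."""
    plain_parts, marks, pos = [], [], 0
    for m in BOOKMARK.finditer(marked_text):
        plain_parts.append(marked_text[pos:m.start()])
        offset = sum(len(p) for p in plain_parts)
        marks.append((offset, int(m.group(1))))   # (char offset, encoder time stamp)
        pos = m.end()
    plain_parts.append(marked_text[pos:])
    return "".join(plain_parts), marks

def align_faps(fap_stream, ets_to_realtime):
    """Attach a real-time stamp to each Facial Animation Parameter by
    looking up when TTS actually reached the matching encoder time stamp."""
    return [(ets_to_realtime[ets], fap) for ets, fap in fap_stream]

text, marks = split_bookmarks(r"Hello \bm=1 there, \bm=2 world.")
# Suppose the engine reported reaching counters 1 and 2 at these times (s):
print(align_faps([(1, "jaw_open"), (2, "smile")], {1: 0.42, 2: 0.97}))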
Methods And Apparatus For Rapid Acoustic Unit Selection From A Large Speech Corpus,
July 20, 2010
A speech synthesis system can select recorded speech fragments, or acoustic units, from a large database of acoustic units to produce artificial speech. The selected acoustic units are chosen to minimize a combination of target and concatenation costs for a given sentence. Concatenation costs are expensive to compute. Processing is reduced by pre-computing and caching the concatenation costs. The number of possible sequential pairs of acoustic units makes such caching prohibitive. A method for constructing an efficient concatenation cost database is provided by synthesizing a large body of speech, identifying the acoustic unit sequential pairs generated and their respective concatenation costs, and storing those concatenation costs likely to occur.
Methods and apparatus for rapid acoustic unit selection from a large speech corpus,
May 6, 2008
A speech synthesis system can select recorded speech fragments, or acoustic units, from a very large database of acoustic units to produce artificial speech. The selected acoustic units are chosen to minimize a combination of target and concatenation costs for a given sentence. However, as concatenation costs, which are measures of the mismatch between sequential pairs of acoustic units, are expensive to compute, processing can be greatly reduced by pre-computing and caching the concatenation costs. Accordingly, a method is disclosed for constructing an efficient concatenation cost database by synthesizing a large body of speech, identifying the acoustic unit sequential pairs generated and their respective concatenation costs, and storing those concatenation costs likely to occur.
Method and system for aligning natural and synthetic video to speech synthesis,
April 29, 2008
Facial animation in MPEG-4 can be driven by a text stream and a Facial Animation Parameters (FAP) stream. Text input is sent to a TTS converter that drives the mouth shapes of the face. FAPs are sent from an encoder to the face over the communication channel. Disclosed are codes, or bookmarks, in the text string transmitted to the TTS converter. Bookmarks are placed between and inside words and carry an encoder time stamp. The encoder time stamp does not relate to real-world time. The FAP stream carries the same encoder time stamp found in the bookmark of the text. The system reads the bookmark and provides the encoder time stamp as well as a real-time time stamp to the facial animation system. The facial animation system associates the correct facial animation parameter with the real-time time stamp using the encoder time stamp of the bookmark as a reference.
Method and system for aligning natural and synthetic video to speech synthesis,
September 19, 2006
According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously--text, and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text. The system of the present invention reads the bookmark and provides the encoder time stamp as well as a real-time time stamp to the facial animation system. Finally, the facial animation system associates the correct facial animation parameter with the real-time time stamp using the encoder time stamp of the bookmark as a reference.
Methods and apparatus for rapid acoustic unit selection from a large speech corpus,
July 25, 2006
A speech synthesis system can select recorded speech fragments, or acoustic units, from a very large database of acoustic units to produce artificial speech. The selected acoustic units are chosen to minimize a combination of target and concatenation costs for a given sentence. However, as concatenation costs, which are measures of the mismatch between sequential pairs of acoustic units, are expensive to compute, processing can be greatly reduced by pre-computing and caching the concatenation costs. Unfortunately, the number of possible sequential pairs of acoustic units makes such caching prohibitive. However, statistical experiments reveal that while about 85% of the acoustic units are typically used in common speech, less than 1% of the possible sequential pairs of acoustic units occur in practice. A method for constructing an efficient concatenation cost database is provided by synthesizing a large body of speech, identifying the acoustic unit sequential pairs generated and their respective concatenation costs, and storing those concatenation costs likely to occur. By constructing a concatenation cost database in this fashion, the processing power required at run-time is greatly reduced with negligible effect on speech quality.
Advance TTS for facial animation,
July 11, 2006
An enhanced system is achieved by allowing bookmarks which can specify that the stream of bits that follow corresponds to phonemes and a plurality of prosody information, including duration information, that is specified for times within the duration of the phonemes. Illustratively, such a stream comprises a flag to enable a duration flag, a flag to enable a pitch contour flag, a flag to enable an energy contour flag, a specification of the number of phonemes that follow, and, for each phoneme, one or more sets of specific prosody information that relates to the phoneme, such as a set of pitch values and their durations.
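Read as a data structure rather than a bitstream, the layout described above might look like the following Python sketch; the field names are paraphrases of the abstract, not the actual MPEG-4 syntax, and the phoneme count field is implicit in the list length.

from dataclasses import dataclass, field

@dataclass
class ProsodyPoint:
    time_ms: float     # time within the phoneme's duration
    pitch_hz: float    # pitch value holding at that time

@dataclass
class PhonemeSpec:
    symbol: str
    duration_ms: int                                              # used when the duration flag is set
    pitch_contour: list[ProsodyPoint] = field(default_factory=list)
    energy_contour: list[float] = field(default_factory=list)

@dataclass
class ProsodyBookmark:
    duration_flag: bool        # enables per-phoneme durations
    pitch_contour_flag: bool   # enables pitch points within phonemes
    energy_contour_flag: bool  # enables energy points within phonemes
    phonemes: list[PhonemeSpec] = field(default_factory=list)

bm = ProsodyBookmark(
    duration_flag=True, pitch_contour_flag=True, energy_contour_flag=False,
    phonemes=[PhonemeSpec("ah", 80, [ProsodyPoint(0.0, 120.0), ProsodyPoint(40.0, 135.0)])],
)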
Employing Speech Models In Concatenative Speech Synthesis,
September 27, 2005
A text-to-speech synthesizer employs a database that includes units. For each unit there is a collection of unit selection parameters and a plurality of frames. Each frame has a set of model parameters derived from a base speech frame, and a speech frame synthesized from the frame's model parameters. A text to be synthesized is converted to a sequence of desired unit feature sets, and for each such set the database is perused to retrieve a best-matching unit. An assessment is made whether modifications to the frames are needed, because of discontinuities in the model parameters at unit boundaries, or because of differences between the desired and selected unit features. When modifications are necessary, the model parameters of frames that need to be altered are modified, and new frames are synthesized from the modified model parameters and concatenated to the output. Otherwise, the speech frames previously stored in the database are retrieved and concatenated to the output.
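A minimal Python sketch of the concatenation decision described above follows. The discontinuity measure, the smoothing rule, and resynthesize (synthesis of new frames from model parameters) are hypothetical stand-ins for the real model-based components.

import numpy as np

def concatenate_units(units, resynthesize, threshold=1.0):
    """Concatenate selected units, resynthesizing from modified model
    parameters only where the parameter jump at a unit boundary is too
    large; otherwise reuse the speech frames stored in the database."""
    output, prev_params = [], None
    for unit in units:
        params = unit["model_params"]   # one model-parameter vector per frame
        if prev_params is not None and np.linalg.norm(params[0] - prev_params[-1]) > threshold:
            # Discontinuity at the join: pull the boundary frame's parameters
            # toward the previous unit's last frame, then synthesize new frames.
            params = params.copy()
            params[0] = 0.5 * (params[0] + prev_params[-1])
            output.extend(resynthesize(params))
        else:
            output.extend(unit["speech_frames"])   # stored frames, no modification
        prev_params = params
    return output

# Toy usage: the second unit's parameters jump, so it gets resynthesized.
units = [
    {"model_params": np.zeros((3, 2)), "speech_frames": ["u1f1", "u1f2", "u1f3"]},
    {"model_params": np.full((2, 2), 5.0), "speech_frames": ["u2f1", "u2f2"]},
]
print(concatenate_units(units, resynthesize=lambda p: [f"resynth x{len(p)}"]))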
Method and system for aligning natural and synthetic video to speech synthesis,
March 1, 2005
According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously--text, and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text. The system of the present invention reads the bookmark and provides the encoder time stamp as well as a real-time time stamp to the facial animation system. Finally, the facial animation system associates the correct facial animation parameter with the real-time time stamp using the encoder time stamp of the bookmark as a reference.
Integration Of Talking Heads And Text-To-Speech Synthesizers For visual TTS,
January 4, 2005
An enhanced arrangement for a talking head driven by text is achieved by sending FAP information to a rendering arrangement that allows the rendering arrangement to employ the received FAPs in synchronism with the speech that is synthesized. In accordance with one embodiment, FAPs that correspond to visemes which can be developed from phonemes that are generated by a TTS synthesizer in the rendering arrangement are not included in the sent FAPs, to allow the local generation of such FAPs. In a further enhancement, a process is included in the rendering arrangement for creating a smooth transition from one FAP specification to the next FAP specification. This transition can follow any selected function. In accordance with one embodiment, a separate FAP value is evaluated for each of the rendered video frames.
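The per-frame transition described in the last two sentences can be sketched as follows, with linear interpolation standing in for "any selected function" and an illustrative frame rate of 25 fps.

def fap_transition(fap_start, fap_end, t0, t1, fps=25):
    """Yield one interpolated FAP value per rendered video frame between
    the two FAP specifications at times t0 and t1 (seconds)."""
    n_frames = max(1, round((t1 - t0) * fps))
    for i in range(n_frames):
        alpha = i / n_frames                   # linear; any easing function would do
        yield t0 + i / fps, fap_start + alpha * (fap_end - fap_start)

# e.g. move an illustrative FAP value from 0 to 200 over 0.2 s:
for t, v in fap_transition(0.0, 200.0, 0.0, 0.2):
    print(f"{t:.2f}s -> {v:.0f}")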
Methods and apparatus for rapid acoustic unit selection from a large speech corpus,
March 2, 2004
A speech synthesis system can select recorded speech fragments, or acoustic units, from a very large database of acoustic units to produce artificial speech. The selected acoustic units are chosen to minimize a combination of target and concatenation costs for a given sentence. However, as concatenation costs, which are measures of the mismatch between sequential pairs of acoustic units, are expensive to compute, processing can be greatly reduced by pre-computing and caching the concatenation costs. Unfortunately, the number of possible sequential pairs of acoustic units makes such caching prohibitive. However, statistical experiments reveal that while about 85% of the acoustic units are typically used in common speech, less than 1% of the possible sequential pairs of acoustic units occur in practice. A method for constructing an efficient concatenation cost database is provided by synthesizing a large body of speech, identifying the acoustic unit sequential pairs generated and their respective concatenation costs, and storing those concatenation costs likely to occur. By constructing a concatenation cost database in this fashion, the processing power required at run-time is greatly reduced with negligible effect on speech quality.
Method and apparatus for rapid acoustic unit selection from a large speech corpus,
February 24, 2004
A speech synthesis system can select recorded speech fragments, or acoustic units, from a very large database of acoustic units to produce artificial speech. The selected acoustic units are chosen to minimize a combination of target and concatenation costs for a given sentence. However, as concatenation costs, which are measures of the mismatch between sequential pairs of acoustic units, are expensive to compute, processing can be greatly reduced by pre-computing and caching the concatenation costs. Unfortunately, the number of possible sequential pairs of acoustic units makes such caching prohibitive. However, statistical experiments reveal that while about 85% of the acoustic units are typically used in common speech, less than 1% of the possible sequential pairs of acoustic units occur in practice. A method for constructing an efficient concatenation cost database is provided by synthesizing a large body of speech, identifying the acoustic unit sequential pairs generated and their respective concatenation costs, and storing those concatenation costs likely to occur. By constructing a concatenation cost database in this fashion, the processing power required at run-time is greatly reduced with negligible effect on speech quality.
Method And System For Aligning Natural And Synthetic Video To Speech Synthesis,
May 20, 2003
According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously--text, and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text. The system of the present invention reads the bookmark and provides the encoder time stamp as well as a real-time time stamp to the facial animation system. Finally, the facial animation system associates the correct facial animation parameter with the real-time time stamp using the encoder time stamp of the bookmark as a reference.
Verbal, Fully Automatic Dictionary Updates By End-Users Of Speech Synthesis And Recognition Systems,
June 20, 2000
A method and system that allows users, or maintainers, of a speech-based application to revise the phonetic transcription of words in a phonetic dictionary, or to add transcriptions for words not yet present in the dictionary. The application is assumed to communicate with the user or maintainer audibly by means of speech recognition and/or speech synthesis systems, both of which rely on a dictionary of phonetic transcriptions to accurately recognize speech and pronounce a given word. The method automatically determines the phonetic transcription based on the word's spelling and the recorded preferred pronunciation, and updates the dictionary accordingly. Moreover, both speech synthesis and recognition performance are improved through use of the updated dictionary.
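The update loop itself reduces to a small sketch; in Python it might look like the following, where transcribe_from_audio is a hypothetical stand-in for the component that derives a phonetic transcription from the word's spelling plus the recorded preferred pronunciation.

def update_dictionary(dictionary, word, recorded_audio, transcribe_from_audio):
    """Revise or add the phonetic transcription for `word`; afterward both
    the recognizer and the synthesizer read the same updated dictionary."""
    phones = transcribe_from_audio(word, recorded_audio)
    dictionary[word.lower()] = phones
    return dictionary

lexicon = {"hello": ["hh", "ah", "l", "ow"]}
# A user records a preferred pronunciation of "tomato"; the lambda below
# fakes the audio-to-phones step purely for illustration.
update_dictionary(lexicon, "tomato", b"<audio bytes>",
                  lambda w, a: ["t", "ah", "m", "ey", "t", "ow"])
print(lexicon["tomato"])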