Algorithms For Speech Processing

The document discusses algorithms for speech processing, including speech/non-speech detection using log energy and zero-crossing rate; voiced/unvoiced/background classification using a Bayesian approach with 5 speech parameters; pitch detection in the time, frequency, cepstral, and LPC domains; formant estimation; median smoothing to remove pitch-period discontinuities; and a Bayesian classifier that labels frames as voiced, unvoiced, or background using Gaussian models trained on speech parameters from a hand-labeled dataset.
Speech Processing Algorithms
Digital Signal Processing Design - Lecture 9

Speech/non-speech detection
Rule-based method using log energy and zero-crossing rate; assumes a single speech interval in background noise.

Voiced/unvoiced/background classification
Bayesian approach using 5 speech parameters; needs to be trained (mainly to establish statistics for background signals).

Pitch detection
Estimation of the pitch period (or pitch frequency) during regions of voiced speech; implicitly needs classification of the signal as voiced speech. Algorithms operate in the time domain, frequency domain, cepstral domain, or use LPC-based processing methods.

Formant estimation
Estimation of the frequencies of the major resonances during voiced speech regions; implicitly needs classification of the signal as voiced speech. Need to handle birth and death processes as formants appear and disappear depending on spectral intensity.

Median Smoothing and Speech Processing

The problem: pitch-period discontinuities that need to be smoothed for use in speech processing systems:
- individual pitch-period errors
- individual voiced/unvoiced errors (pitch period set to 0)
- regions of pitch-period errors
The solution: a median smoother.

Non-Linear Smoothing

- linear smoothers (filters) are not always appropriate for smoothing parameter estimates because they smear and blur discontinuities; linear smoothing of a pitch-period contour would emphasize errors and distort the contour
- use a combination of a non-linear smoother of running medians and linear smoothing
- linear smoothing => separation of signals based on non-overlapping frequency content
- non-linear smoothing => separation of signals based on their character (smooth or noise-like)

Running Medians

$x[n] = S(x[n]) + R(x[n])$: smooth plus rough components
$y(x[n]) = \mathrm{median}(x[n]) = M_L(x[n])$, where $M_L(x[n])$ is the median of $x[n], \ldots, x[n-L+1]$

(Figure: 5-point median vs. 5-point averaging.)

Properties of Running Medians

Running medians of length L:
1. $M_L(\alpha x[n]) = \alpha M_L(x[n])$
2. Medians will not smear out discontinuities (jumps) in the signal, provided there are no other discontinuities within L/2 samples
3. $M_L(x_1[n] + x_2[n]) \neq M_L(x_1[n]) + M_L(x_2[n])$
4. Median smoothers generally preserve sharp discontinuities in the signal, but fail to adequately smooth the noise-like components

Median Smoothing

(Figures: examples of median smoothing applied to parameter contours.)

Nonlinear Smoother

(Block diagram: the input x[n] is decomposed into a smooth component S[x[n]] and a rough component R[x[n]].)
- $y[n]$ is an approximation to the signal $S(x[n])$
- a second pass of non-linear smoothing improves performance, based on $y[n] \approx S(x[n])$
- the difference signal $z[n]$ is formed as $z[n] = x[n] - y[n] \approx R(x[n])$
- a second pass of non-linear smoothing of $z[n]$ yields a correction term that is added to $y[n]$ to give $w[n]$, a refined approximation to $S(x[n])$: $w[n] = y[n] + S[R(x[n])] \approx S(x[n]) + S[R(x[n])]$
- if $z[n] = R(x[n])$ exactly, i.e., if the non-linear smoother were ideal, then $S[R(x[n])]$ would be identically zero and the correction term would be unnecessary
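A minimal Python sketch of this median-plus-linear smoother with the second-pass correction; the window lengths, the 3-point linear filter, and the toy pitch contour are illustrative choices, not the lecture's exact configuration:

```python
import numpy as np

def running_median(x, L=5):
    """Running median of length L (the window shrinks at the signal edges)."""
    x = np.asarray(x, dtype=float)
    half = L // 2
    return np.array([np.median(x[max(0, n - half):n + half + 1]) for n in range(len(x))])

def linear_smooth(x, w=(0.25, 0.5, 0.25)):
    """Short linear lowpass smoother (3-point Hann-like window)."""
    return np.convolve(x, w, mode="same")

def nonlinear_smooth(x, L=5):
    """Median + linear smoothing with a second-pass correction term."""
    x = np.asarray(x, dtype=float)
    y = linear_smooth(running_median(x, L))         # y[n] ~ S(x[n])
    z = x - y                                       # z[n] ~ R(x[n])
    correction = linear_smooth(running_median(z, L))  # ~ S[R(x[n])]
    return y + correction                           # w[n], refined estimate of S(x[n])

# Example: a pitch-period contour (in samples) with isolated errors and dropouts
pitch = np.array([80, 81, 82, 0, 83, 84, 160, 85, 86, 87, 0, 88, 89, 90], dtype=float)
print(nonlinear_smooth(pitch, L=5))
```

The isolated zeros and the doubled value are removed by the median stage, while the linear stage smooths the remaining small fluctuations.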

Nonlinear Smoother with Delay Compensation

(Block diagram: the median and linear smoothers introduce delay, so the input x[n] is delayed correspondingly before the difference signal z[n] is formed, keeping the correction path time-aligned.)

Speech Detection Issues

A key problem in speech processing is accurately locating the beginning and end of a speech utterance in a noise/background signal.

Algorithm #1: Speech/Non-Speech Detection Using Simple Rules

Need endpoint detection to enable:
- computation reduction (don't have to process the background signal)
- better recognition performance (can't mistake background for speech)
This is a non-trivial problem except for high-SNR recordings.

Ideal Speech/Non-Speech Detection

(Figure: ideal detection of the beginning and ending of a speech interval.)

Speech Detection Examples

- the case of low background noise is the simple case
- the beginning of speech can be found based on knowledge of the sounds (e.g., the /s/ in "six")

Speech Detection Examples

(Figure: a difficult case because of the weak fricative sound, /f/, at the beginning of speech.)

Problems for Reliable Speech Detection

- weak fricatives (/f/, /th/, /h/) at the beginning or end of an utterance
- weak plosive bursts for /p/, /t/, or /k/
- nasals at the end of an utterance (often devoiced and at reduced levels)
- voiced fricatives which become devoiced at the end of an utterance
- trailing off of vowel sounds at the end of an utterance

The good news is that highly reliable endpoint detection is not required for most practical applications; also, we will see how some applications can process background signal/silence in the same way that speech is processed, so endpoint detection becomes a moot issue.

Speech/Non-Speech Detection Algorithm #1

Signal processing:
1. Sampling rate conversion to a standard sampling rate of 10 kHz
2. Highpass filtering to eliminate DC offset and hum, using a 101-point FIR equiripple highpass filter
3. Short-time analysis: frame size of 40 msec, frame shift of 10 msec; compute the short-time log energy and short-time zero-crossing rate (per 10 msec interval)

Detection steps:
1. Detect the beginning and ending of speech intervals using short-time energy and short-time zero crossings
2. Find the major concentration of the signal (guaranteed to be speech) using the region of signal energy around the maximum value of short-time energy => energy normalization
3. Refine the region of concentration of speech using reasonably tight short-time energy thresholds that separate speech from background (but may fail to find weak fricatives, low-level nasals, etc.)
4. Refine the endpoint estimates using zero-crossing information outside the intervals identified from the energy concentrations, based on zero-crossing rates commensurate with unvoiced speech

Speech/Non-Speech Detection: Rule-Based Short-Time Measurements

Algorithm for endpoint detection:
1. Compute the mean and standard deviation of log En and Z100 for the first 100 msec of the signal (assuming no speech in this interval and assuming FS = 10,000 Hz).
2. Determine the maximum value of log En over the entire recording => normalization.
3. Compute log En thresholds based on the results of steps 1 and 2, e.g., take some percentage of the peaks over the entire interval; use a threshold for zero crossings based on the zero-crossing distribution for unvoiced speech.
4. Find an interval of log En that exceeds a high threshold ITU.
5. Find a putative starting point (N1) where log En crosses ITL from above; find a putative ending point (N2) where log En crosses ITL from above.
6. Move backwards from N1, comparing Z100 to IZCT, and find the first point where Z100 exceeds IZCT; similarly, move forward from N2, comparing Z100 to IZCT, and find the last point where Z100 exceeds IZCT.

Log energy separates voiced frames from unvoiced frames and silence; zero crossings separate unvoiced frames from silence and voiced frames.
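A rough Python sketch of this rule-based detector; the specific threshold constants (multiples of the background statistics and the dB offsets) are illustrative assumptions rather than the lecture's exact rules, and the zero-crossing refinement is simplified:

```python
import numpy as np

def endpoint_detect(x, fs=10000, frame_ms=40, shift_ms=10):
    """Rule-based endpoint detection from log energy and zero crossings (sketch)."""
    x = np.asarray(x, dtype=float)
    frame, shift = int(fs * frame_ms / 1000), int(fs * shift_ms / 1000)
    starts = range(0, len(x) - frame, shift)
    log_en = np.array([10 * np.log10(np.sum(x[s:s + frame] ** 2) + 1e-10) for s in starts])
    zc = np.array([np.sum(np.abs(np.diff(np.sign(x[s:s + frame])))) / 2 for s in starts])

    # Statistics of the (assumed speech-free) first 100 msec of the recording
    nbg = int(100 / shift_ms)
    e_mu, e_sd = log_en[:nbg].mean(), log_en[:nbg].std()
    izct = zc[:nbg].mean() + 2 * zc[:nbg].std()      # illustrative zero-crossing threshold
    e_max = log_en.max()
    itu = min(e_mu + 10 * e_sd, e_max - 20)          # conservative (high) energy threshold
    itl = min(e_mu + 3 * e_sd, itu - 10)             # tighter (low) energy threshold

    core = np.where(log_en > itu)[0]                 # interval guaranteed to be speech
    n1, n2 = core[0], core[-1]
    while n1 > 0 and log_en[n1 - 1] > itl:           # refine endpoints with the low threshold
        n1 -= 1
    while n2 < len(log_en) - 1 and log_en[n2 + 1] > itl:
        n2 += 1
    while n1 > 0 and zc[n1 - 1] > izct:              # extend while the frame looks unvoiced
        n1 -= 1
    while n2 < len(log_en) - 1 and zc[n2 + 1] > izct:
        n2 += 1
    return n1 * shift, n2 * shift + frame            # endpoint sample indices
```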

Endpoint Detection Algorithm

1. Find the heart of the signal via a conservative energy threshold => Interval 1
2. Refine the beginning and ending points using a tighter threshold on energy => Interval 2
3. Check outside these regions using zero crossings and an unvoiced threshold => Interval 3

Endpoint Detection Examples

(Figures: endpoint detection for the isolated digits /one/, /six/, and /eight/.)

Algorithm #2: Voiced/Unvoiced/Background Classification

- utilize a Bayesian statistical approach to classify frames as voiced speech, unvoiced speech, or background signal (i.e., a 3-class recognition/classification problem)
- use 5 short-time speech parameters as the basic feature set
- utilize a (hand-)labeled training set to learn the statistics (means and variances for a Gaussian model) of each of the 5 short-time speech parameters for each of the classes

Speech Parameters

$X = [x_1, x_2, x_3, x_4, x_5]$
- $x_1 = \log E_S$: short-time log energy of the signal
- $x_2 = Z_{100}$: short-time zero-crossing rate of the signal for a 100-sample frame
- $x_3 = C_1$: short-time autocorrelation coefficient at unit sample delay
- $x_4 = \alpha_1$: first predictor coefficient of a p-th order linear predictor
- $x_5 = E_p$: normalized energy of the prediction error of a p-th order linear predictor
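A Python sketch of computing this 5-parameter feature vector from one frame; the predictor order p and the sign convention of the predictor coefficient are illustrative assumptions:

```python
import numpy as np

def vus_features(frame, p=12):
    """Feature vector [log energy, zero crossings, C1, first LPC coefficient,
    normalized LPC prediction error] for one (highpass-filtered) frame."""
    x = np.asarray(frame, dtype=float)
    N = len(x)
    log_es = 10.0 * np.log10(np.dot(x, x) / N + 1e-10)       # x1: short-time log energy
    z = np.sum(np.abs(np.diff(np.sign(x)))) / 2.0            # x2: zero crossings in the frame
    r = np.array([np.dot(x[:N - k], x[k:]) for k in range(p + 1)])
    c1 = r[1] / (r[0] + 1e-10)                               # x3: autocorrelation at lag 1
    # Levinson-Durbin recursion for the inverse filter A(z) = 1 + a1 z^-1 + ... + ap z^-p
    a = np.zeros(p + 1); a[0] = 1.0
    E = r[0] + 1e-10
    for i in range(1, p + 1):
        k = -(r[i] + np.dot(a[1:i], r[1:i][::-1])) / E
        a[1:i] = a[1:i] + k * a[1:i][::-1]
        a[i] = k
        E *= (1.0 - k * k)
    alpha1 = -a[1]                                           # x4: first predictor coefficient
    ep = E / (r[0] + 1e-10)                                  # x5: normalized prediction error
    return np.array([log_es, z, c1, alpha1, ep])
```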

Speech Parameter Signal Processing

- frame-based measurements
- frame size of 10 msec; frame shift of 10 msec
- a 200 Hz highpass filter is used to eliminate any residual low-frequency hum or DC offset in the signal

Manual Training

Using a designated training set of sentences, each 10 msec interval is classified manually (based on waveform displays and plots of parameter values) as either:
- voiced speech: clear periodicity seen in the waveform
- unvoiced speech: clear indication of frication or whisper
- background signal: lack of voicing or unvoicing traits
- unclassified: unclear whether low-level voiced, low-level unvoiced, or background signal (usually at speech beginnings and endings); not used as part of the training set

Gaussian Fits to Training Data

Each classified frame is used to train a single Gaussian model for each speech parameter and for each pattern class; i.e., the mean and variance of each speech parameter is measured for each of the 3 classes.

(Figures: Gaussian fits to the training data for each parameter and each class.)

Bayesian Classifier

- Class 1, $\omega_i$, $i = 1$: the background signal class
- Class 2, $\omega_i$, $i = 2$: the unvoiced class
- Class 3, $\omega_i$, $i = 3$: the voiced class

Bayesian Classifier

Maximize the probability
$$p(\omega_i \mid x) = \frac{p(x \mid \omega_i) P(\omega_i)}{p(x)}, \qquad \text{where } p(x) = \sum_{i=1}^{3} p(x \mid \omega_i) P(\omega_i)$$

with class means and covariances
$$m_i = E[x] \text{ for all } x \text{ in class } i, \qquad W_i = E[(x - m_i)(x - m_i)^T] \text{ for all } x \text{ in class } i$$

and Gaussian class-conditional densities
$$p(x \mid \omega_i) = \frac{1}{(2\pi)^{5/2} |W_i|^{1/2}} \, e^{-\frac{1}{2}(x - m_i)^T W_i^{-1} (x - m_i)}$$

Bayesian Classifier

Maximize $p(\omega_i \mid x)$ using the monotonic discriminant function
$$g_i(x) = \ln p(\omega_i \mid x) = \ln[p(x \mid \omega_i) P(\omega_i)] - \ln p(x) = \ln p(x \mid \omega_i) + \ln P(\omega_i) - \ln p(x)$$
Disregard the term $\ln p(x)$, since it is independent of the class $i$, giving
$$g_i(x) = -\frac{1}{2}(x - m_i)^T W_i^{-1}(x - m_i) + \ln P(\omega_i) + c_i, \qquad c_i = -\frac{5}{2}\ln(2\pi) - \frac{1}{2}\ln |W_i|$$

Bayesian Classifier

Ignore the bias term $c_i$ and the a priori class probability $\ln P(\omega_i)$. Then we can convert the maximization to a minimization by reversing the sign, giving the decision rule:
$$\text{Decide class } i \text{ if and only if } d_i(x) = (x - m_i)^T W_i^{-1} (x - m_i) \le d_j(x) \quad \forall j \neq i$$
Utilize a confidence measure, based on the relative decision scores, to enable a no-decision output when no reliable class information is obtained.
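A minimal Python sketch of this minimum-distance Gaussian classifier; the class labels, the no-decision confidence ratio, and the helper names are illustrative assumptions, not the lecture's implementation:

```python
import numpy as np

class GaussianVUSClassifier:
    """One multivariate Gaussian (mean m_i, covariance W_i) per class,
    trained on hand-labeled feature vectors."""

    def fit(self, features, labels):
        self.classes = sorted(set(labels))
        groups = {c: np.array([f for f, l in zip(features, labels) if l == c])
                  for c in self.classes}
        self.means = {c: groups[c].mean(axis=0) for c in self.classes}
        self.covs = {c: np.cov(groups[c], rowvar=False) for c in self.classes}
        return self

    def decide(self, x, ratio_threshold=1.5):
        """Pick the class with the smallest d_i(x) = (x - m_i)^T W_i^{-1} (x - m_i);
        return None (no decision) when the two best scores are too close."""
        d = {}
        for c in self.classes:
            diff = np.asarray(x, dtype=float) - self.means[c]
            d[c] = float(diff @ np.linalg.inv(self.covs[c]) @ diff)
        ranked = sorted(d, key=d.get)
        best, second = ranked[0], ranked[1]
        if d[second] < ratio_threshold * max(d[best], 1e-10):
            return None            # scores too close: no reliable class information
        return best
```

Per the decision rule above, the priors and bias terms are dropped, so this reduces to picking the smallest Mahalanobis distance; the ratio test stands in for the lecture's (unspecified) confidence measure.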

Classification Performance

                           Training Set          Testing Set
  Class                    Count   % Correct     Count   % Correct
  Background (Class 1)       76      96.8%         94      85.5%
  Unvoiced (Class 2)         57      85.4%         82      98.2%
  Voiced (Class 3)          313      98.9%        375      99.0%

Typical Classifications

(Figures: VUS classification and confidence scores (scaled by a factor of 3) for: (a) a synthetic vowel sequence; (b) an all-voiced utterance; (c)-(e) speech utterances with mixtures of voiced, unvoiced, and background regions.)

Pitch Period Estimation

- an essential component of the general synthesis model for speech production
- a major component of the excitation source information (along with the voiced/unvoiced decision and amplitude)
- pitch period estimation involves two problems simultaneously: determining whether the speech is periodic and, if so, estimating the resulting pitch (period or frequency)
- a range of pitch detection methods has been proposed, including several time-domain, frequency-domain, cepstral-domain, and LPC-domain methods

Algorithm #3: Pitch Detection (Pitch Period Estimation Methods)

Periodic Signals: Fundamentals of Pitch Period Estimation

The Ideal Case of Perfectly Periodic Signals

- an analog signal x(t) is periodic with period $T_0$ if $x(t) = x(t + mT_0)$ for all $t$ and $m = \ldots, -1, 0, 1, \ldots$
- the fundamental frequency is $f_0 = 1/T_0$
- a true periodic signal has a line spectrum, i.e., nonzero spectral values exist only at frequencies $f = k f_0$, where $k$ is an integer
- speech is not precisely periodic, hence its spectrum is not strictly a line spectrum; further, the period generally changes slowly with time

The Ideal Pitch Detector

To estimate the pitch period reliably, the ideal input would be either:
- a periodic impulse train at the pitch period, or
- a pure sinusoid at the pitch frequency

Ideal Input to Pitch Detector

(Figure: periodic impulse train with T0 = 50 samples, and its line spectrum with F0 = 200 Hz at a sampling rate of FS = 10 kHz.)

In reality, we can't get either ideal input (although we use signal processing to either try to flatten the signal spectrum, or to eliminate all harmonics but the fundamental).

Ideal Input to Pitch Detector

(Figure: pure sinewave at 200 Hz and its log magnitude spectrum, showing a single harmonic at 200 Hz.)

Ideal Synthetic Signal Input

(Figure: synthetic vowel with 100 Hz pitch; waveform and harmonic line spectrum.)

The Real World

(Figure: natural vowel with varying pitch period; waveform and log magnitude spectrum.)

Pitch Detector: Time Domain Method of Pitch Detection

Time Domain Pitch Detection (Pitch Period Estimation) Algorithm

1. Filter the speech to the 900 Hz region (adequate for all ranges of pitch; eliminates extraneous signal harmonics)
2. Find all positive and negative peaks in the waveform
3. At each positive peak:
   - determine the peak amplitude pulse (positive pulses only)
   - determine the peak-to-valley amplitude pulse (positive pulses only)
   - determine the peak-to-previous-peak amplitude pulse (positive pulses only)
4. At each negative peak:
   - determine the peak amplitude pulse (negative pulses only)
   - determine the peak-to-valley amplitude pulse (negative pulses only)
   - determine the peak-to-previous-peak amplitude pulse (negative pulses only)
5. Filter the pulses with an exponential (peak-detecting) window to eliminate false positives and negatives that are far too short to be pitch pulse estimates
6. Determine the pitch period estimate as the time between the remaining major pulses in each of the six elementary pitch period detectors
7. Vote for the best pitch period estimate by combining the 3 most recent estimates for each of the 6 pitch period detectors
8. Clean up errors using some type of non-linear smoother

Time Domain Pitch Measurements

(Figure: positive and negative peak measurements on the filtered speech waveform.)

Basic Pitch Detection Principles

- use 6 semi-independent, parallel processors to create a number of impulse trains which (hopefully) retain the periodicity of the original signal and discard features which are irrelevant to the pitch detection process (e.g., amplitude variations, spectral shape, etc.)
- very simple pitch detectors are used
- the 6 pitch estimates are logically combined to infer the best estimate of the pitch period for the frame being analyzed
- the frame can also be classified as unvoiced/silence, with zero pitch period

Parallel Processing Pitch Detector

- 10 kHz sampled speech is lowpass filtered to 900 Hz => guarantees 1 or more harmonics, even for high-pitched females and children
- a set of peaks and valleys (local maxima and minima) is located, and from their locations and amplitudes, 6 impulse trains are derived

Pitch Detection Algorithm

The 6 impulse trains:
1. m1(n): an impulse equal to the peak amplitude occurs at the location of each peak
2. m2(n): an impulse equal to the difference between the peak amplitude and the preceding valley amplitude occurs at each peak
3. m3(n): an impulse equal to the difference between the peak amplitude and the preceding peak amplitude occurs at each peak (so long as it is positive)
4. m4(n): an impulse equal to the negative of the amplitude at a valley occurs at each valley
5. m5(n): an impulse equal to the negative of the amplitude at a valley plus the amplitude at the preceding peak occurs at each valley
6. m6(n): an impulse equal to the negative of the amplitude at a valley plus the amplitude at the preceding local minimum occurs at each valley (so long as it is positive)
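A Python sketch of forming these six measurement trains from a lowpass-filtered frame; the simple local-extrema peak picking is an illustrative assumption (the lecture does not specify how peaks and valleys are located):

```python
import numpy as np

def impulse_trains(x):
    """Build the six impulse trains m1..m6 from the peaks and valleys of the signal."""
    x = np.asarray(x, dtype=float)
    peaks = [n for n in range(1, len(x) - 1) if x[n - 1] < x[n] >= x[n + 1]]
    valleys = [n for n in range(1, len(x) - 1) if x[n - 1] > x[n] <= x[n + 1]]
    m = {k: np.zeros(len(x)) for k in range(1, 7)}
    for i, n in enumerate(peaks):
        prev_valley = max([v for v in valleys if v < n], default=None)
        prev_peak = peaks[i - 1] if i > 0 else None
        m[1][n] = x[n]                                   # m1: peak amplitude
        if prev_valley is not None:
            m[2][n] = x[n] - x[prev_valley]              # m2: peak minus preceding valley
        if prev_peak is not None:
            m[3][n] = max(x[n] - x[prev_peak], 0.0)      # m3: peak minus preceding peak, if positive
    for i, n in enumerate(valleys):
        prev_peak = max([p for p in peaks if p < n], default=None)
        prev_valley = valleys[i - 1] if i > 0 else None
        m[4][n] = -x[n]                                  # m4: negative of valley amplitude
        if prev_peak is not None:
            m[5][n] = -x[n] + x[prev_peak]               # m5: valley depth plus preceding peak
        if prev_valley is not None:
            m[6][n] = max(-x[n] + x[prev_valley], 0.0)   # m6: valley minus preceding valley, if positive
    return m
```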

Peak Detection for Sinusoids

(Figure: peak-detecting exponential window applied to a sinusoid.)

Processing of Pulse Trains

- each impulse train is processed by a time-varying non-linear system (called a peak-detecting exponential window)
- when an impulse of sufficient amplitude is detected, the output is reset to the value of the impulse and held for a blanking interval, Tau(n), during which no new pulses can be detected
- after the blanking interval, the detector output decays exponentially, with a rate of decay dependent on the most recent estimate of the pitch period
- the decay continues until an impulse that exceeds the level of the decay is detected
- the output is a quasi-periodic sequence of pulses, and the duration between estimated pulses is an estimate of the pitch period
- the pitch period is estimated periodically, e.g., 100 times/sec
- the same detection is applied to all 6 detectors => 6 estimates of the pitch period every sampling interval

Final Processing for Pitch

- the 6 current estimates are combined with the two most recent estimates for each of the 6 detectors
- the pitch period with the most occurrences (to within some tolerance) is declared the pitch period estimate at that time
- the algorithm works well for voiced speech; there is a lack of pitch period consistency for unvoiced speech or background signal

Pitch Detector Performance

- using synthetic speech gives a measure of the accuracy of the algorithm
- pitch period estimates are generally within 2 samples of the actual pitch period
- the first 10-30 msec of voicing is often classified as unvoiced, since the decision method needs about 3 pitch periods before the consistency check works properly => a delay of 2 pitch periods in detection

Pitch Detector: Autocorrelation Method of Pitch Detection

Autocorrelation Pitch Detection

- basic principle: a periodic function has a periodic autocorrelation; just find the correct peak
- basic problem: the autocorrelation representation of speech is just too rich
  - it contains information that enables you to estimate the vocal tract transfer function (from the first 10 or so values)
  - there are many peaks in the autocorrelation in addition to the pitch periodicity peaks
  - some peaks are due to rapidly changing formants
  - some peaks are due to window size interactions with the speech signal

Autocorrelation of Voiced Speech Frame

(Figures: voiced speech frames x[n], n = 0, ..., 399 and x[n], n = 0, ..., 559, with autocorrelation R[k], k = 0, ..., pmax + 10; the pitch peak location ploc lies between pmin and pmax.)

Need some type of spectrum flattening so that the speech signal more closely approximates a periodic impulse train => a center-clipping spectrum flattener.

Center Clipping

The clipping level CL is set to a percentage of Amax, the maximum signal amplitude in the frame (e.g., 30%).

Center clipper definition:
  y(n) = x(n) - CL   if x(n) > CL
  y(n) = 0           if |x(n)| <= CL
  y(n) = x(n) + CL   if x(n) < -CL

3-Level Center Clipper

  y(n) = +1   if x(n) > CL
  y(n) = -1   if x(n) < -CL
  y(n) = 0    otherwise

- significantly simplified computation (no multiplications)
- the autocorrelation function is very similar to that from a conventional center clipper => most of the extraneous peaks are eliminated and a clear indication of periodicity is retained

(Figure: examples of waveforms and autocorrelation functions: (a) no clipping; (b) center-clipped at 70% of peak; (c) 3-level center clipped.)
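A compact Python sketch of an autocorrelation pitch estimator using the 3-level center clipper; the clipping percentage, pitch search range, and voicing threshold are illustrative choices:

```python
import numpy as np

def autocorr_pitch(frame, fs=10000, fmin=60.0, fmax=400.0, clip_pct=0.3, vthresh=0.3):
    """Pitch frequency estimate (Hz) from the autocorrelation of 3-level clipped speech;
    returns 0.0 when the frame looks unvoiced or silent."""
    x = np.asarray(frame, dtype=float)
    cl = clip_pct * np.max(np.abs(x))
    y = np.where(x > cl, 1.0, np.where(x < -cl, -1.0, 0.0))   # 3-level center clipping
    r = np.correlate(y, y, mode="full")[len(y) - 1:]          # autocorrelation, lags >= 0
    if r[0] <= 0:
        return 0.0                                            # silence: nothing was clipped in
    pmin, pmax = int(fs / fmax), int(fs / fmin)               # pitch-period search range (samples)
    lag = pmin + int(np.argmax(r[pmin:pmax + 1]))
    if r[lag] / r[0] < vthresh:
        return 0.0                                            # weak peak => unvoiced
    return fs / lag                                           # pitch frequency in Hz
```

The frame is assumed long enough to cover the largest lag searched (e.g., 30-40 msec at 10 kHz).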

Center Clipping

(Figure: autocorrelation functions of center-clipped speech using L = 401 sample analysis frames: (a) clipping level set at 90% of max; (b) clipping level at 60% of max; (c) clipping level at 30% of max.)

Doubling Errors Using Autocorrelations

(Figure: second and fourth harmonics much stronger than the first and third harmonics, leading to a potential pitch doubling error.)

(Figure: fourth harmonic strongest; second harmonic stronger than the first; fourth harmonic stronger than the third (or second or first); a potential pitch doubling error results.)

Autocorrelation Pitch Detector

(Figure: autocorrelation pitch detector output for the utterance "This is a test".)

STFT for Pitch Detection

Yet Another Pitch Detector (YAPD): Log Harmonic Product Spectrum Pitch Detector

- from narrowband STFTs we see that the pitch period is manifested in sharp peaks at integer multiples of the fundamental frequency => a good input for designing a pitch detection algorithm
- define a new measure, called the harmonic product spectrum, as
$$P_n(e^{j\omega}) = \prod_{r=1}^{K} |X_n(e^{j\omega r})|^2$$
- the log harmonic product spectrum is thus
$$\hat{P}_n(e^{j\omega}) = 2 \sum_{r=1}^{K} \log |X_n(e^{j\omega r})|$$
- $\hat{P}_n(e^{j\omega})$ is a sum of K frequency-compressed replicas of $\log |X_n(e^{j\omega})|$ => for periodic voiced speech, the harmonics will all align at the fundamental frequency and reinforce each other, producing a sharp peak at F0

(Figures: 15 frames of voiced speech from a male talker, pitch frequency going from 175 Hz down to 130 Hz; 15 frames of voiced speech from a female talker, pitch frequency going from 190 Hz up to 240 Hz.)
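A rough Python sketch of picking the pitch from the log harmonic product spectrum; the FFT size, number of replicas K, window, and search range are illustrative assumptions:

```python
import numpy as np

def hps_pitch(frame, fs=10000, K=5, fmin=60.0, fmax=400.0, nfft=4096):
    """Fundamental frequency estimate (Hz) from the log harmonic product spectrum."""
    x = np.asarray(frame, dtype=float) * np.hamming(len(frame))
    logX = np.log(np.abs(np.fft.rfft(x, nfft)) + 1e-10)
    limit = len(logX) // K
    # Sum K frequency-compressed replicas: P[k] = 2 * sum_r logX[r * k]
    P = 2.0 * sum(logX[::r][:limit] for r in range(1, K + 1))
    kmin = int(fmin * nfft / fs)
    kmax = int(fmax * nfft / fs)
    k0 = kmin + int(np.argmax(P[kmin:kmax + 1]))
    return k0 * fs / nfft        # estimated fundamental frequency in Hz
```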

STFT for Pitch Detection

- no problem with unvoiced speech: no strong peak is manifest in the log harmonic product spectrum
- no problem if the fundamental is missing (e.g., highpass-filtered speech), since the fundamental is found from the higher-order terms that line up at the fundamental frequency but nowhere else
- no problem with additive noise or linear distortion (see the plot at 0 dB SNR)

Yet Another Pitch Detector (YAPD): Cepstral Pitch Detector

Cepstral Pitch Detection

A simple procedure for cepstral pitch detection:
1. compute the cepstrum every 10-20 msec
2. search for a periodicity peak in the expected quefrency range
3. if a peak is found and is above threshold => voiced; the pitch period is the location of the cepstral peak
4. if no such peak is found => unvoiced
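A minimal Python sketch of this procedure; the 0.1 peak threshold follows the value suggested later in these notes, while the window and search range are illustrative assumptions:

```python
import numpy as np

def cepstral_pitch(frame, fs=10000, fmin=60.0, fmax=400.0, threshold=0.1):
    """Cepstral pitch estimate: returns pitch frequency in Hz, or 0.0 if unvoiced.
    The frame is assumed to span at least two pitch periods."""
    x = np.asarray(frame, dtype=float) * np.hamming(len(frame))
    log_spectrum = np.log(np.abs(np.fft.fft(x)) + 1e-10)
    cepstrum = np.fft.ifft(log_spectrum).real
    qmin, qmax = int(fs / fmax), int(fs / fmin)     # expected pitch-period (quefrency) range
    q = qmin + int(np.argmax(cepstrum[qmin:qmax + 1]))
    if cepstrum[q] < threshold:
        return 0.0                                   # no strong cepstral peak => unvoiced
    return fs / q                                    # pitch frequency in Hz
```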

Cepstral Pitch Detection

(Figures: cepstral pitch detection examples for a male talker and a female talker; comparison with the autocorrelation pitch detector.)

Issues in Cepstral Pitch Detection

1. A strong peak in the 3-20 msec range is a strong indication of voiced speech; the absence of such a peak does not guarantee unvoiced speech.
   - the cepstral peak depends on the length of the window and on the formant structure
   - the maximum height of the pitch peak is 1 (rectangular window, unchanging pitch, window containing exactly N periods); the height varies dramatically with a Hamming window, changing pitch, and window interactions with the pitch period => need at least 2 full pitch periods in the window to define the pitch period well in the cepstrum => need a 40 msec window for a low-pitch male, but this is far too long for a high-pitch female
2. Bandlimited speech makes finding the pitch period harder.
   - the extreme case of a single harmonic => a single peak in the log spectrum => no peak in the cepstrum
   - this occurs during voiced stop sounds (b, d, g), where the spectrum is cut off above a few hundred Hz
3. Need a very low threshold (e.g., 0.1) on the cepstral pitch peak, with lots of secondary verifications of the pitch period.

LPC Pitch Detection: SIFT

Yet Another Pitch Detector (YAPD): LPC-Based Pitch Detector

- sampling rate reduced from 10 kHz to 2 kHz
- p = 4 LPC analysis
- inverse filter the signal to give a spectrally flat result
- compute the short-time autocorrelation and find the strongest peak in the estimated pitch region
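A SIFT-style sketch in Python; scipy is used for decimation and filtering, and the order, pitch range, and ridge regularization are illustrative assumptions rather than the original algorithm's exact settings:

```python
import numpy as np
from scipy.signal import decimate, lfilter

def sift_pitch(frame, fs=10000, p=4, fmin=50.0, fmax=250.0):
    """SIFT-style pitch frequency estimate (Hz): decimate to 2 kHz, inverse filter
    with a low-order LPC model to flatten the spectrum, then pick the autocorrelation peak."""
    x = decimate(np.asarray(frame, dtype=float), 5)           # 10 kHz -> 2 kHz
    fs2 = fs // 5
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    R += 1e-6 * np.eye(p)                                     # small ridge for numerical safety
    alpha = np.linalg.solve(R, r[1:p + 1])                    # predictor coefficients
    a = np.concatenate(([1.0], -alpha))                       # inverse filter A(z)
    residual = lfilter(a, [1.0], x)                           # spectrally flattened signal
    rr = np.correlate(residual, residual, mode="full")[len(residual) - 1:]
    lmin, lmax = int(fs2 / fmax), int(fs2 / fmin)
    lag = lmin + int(np.argmax(rr[lmin:lmax + 1]))
    return fs2 / lag                                          # pitch frequency in Hz
```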

LPC Pitch Detection: SIFT

(Figure: (a) section of the input waveform being analyzed; (b) input spectrum and reciprocal of the inverse filter; (c) spectrum of the signal at the output of the inverse filter; (d) time waveform at the output of the inverse filter; (e) normalized autocorrelation of the signal at the output of the inverse filter => an 8 msec pitch period is found here.)

Algorithm #4: Formant Estimation (Cepstral-Based Formant Estimation)

Cepstral Formant Estimation

- the low-time cepstrum corresponds primarily to the combination of vocal tract, glottal pulse, and radiation, while the high-time part corresponds primarily to the excitation => use the lowpass-liftered cepstrum to give smoothed log spectra from which to estimate formants
- want to estimate the time-varying model parameters every 10-20 msec

Procedure:
1. fit peaks in the cepstrum to decide whether the section of speech is voiced or unvoiced
2. if voiced: estimate the pitch period, lowpass lifter the cepstrum, and match the first 3 formant frequencies to the smoothed log magnitude spectrum
3. if unvoiced: set the pole frequency to the highest peak in the smoothed log spectrum; choose the zero to maximize the fit to the smoothed log spectrum
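A Python sketch of the liftered-cepstrum smoothing and peak picking used for the voiced case; the lifter length, FFT size, and the simple local-maximum peak picking are illustrative assumptions, and no voiced/unvoiced logic is included:

```python
import numpy as np

def cepstral_formants(frame, fs=10000, lifter_len=30, nformants=3, nfft=1024):
    """Formant frequency candidates (Hz) from the peaks of a cepstrally smoothed log spectrum."""
    x = np.asarray(frame, dtype=float) * np.hamming(len(frame))
    log_spectrum = np.log(np.abs(np.fft.fft(x, nfft)) + 1e-10)
    cepstrum = np.fft.ifft(log_spectrum).real
    lifter = np.zeros(nfft)
    lifter[:lifter_len] = 1.0
    lifter[-lifter_len + 1:] = 1.0                 # keep the low-quefrency (vocal tract) part
    smooth = np.fft.fft(cepstrum * lifter).real[:nfft // 2]   # smoothed log spectrum
    peaks = [k for k in range(1, len(smooth) - 1)
             if smooth[k - 1] < smooth[k] >= smooth[k + 1]]
    return [k * fs / nfft for k in peaks[:nformants]]         # lowest few peaks as formants
```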

Cepstral Formant Estimation

(Figure: cepstra and corresponding smoothed log spectra used for formant estimation.)

LPC-Based Formant Estimation

- sometimes 2 formants get so close that they merge and there are not 2 distinct peaks in the log magnitude spectrum
- use higher-resolution spectral analysis via the chirp z-transform (CZT): a blown-up region of 0-900 Hz shows 2 peaks where only 1 is seen in the normal spectrum

Formant Analysis Using LPC

(Figure: LPC-based formant analysis of the utterance "This is a test".)

Algorithm #5: Speech Synthesis Methods

Speech Synthesis

Can use cepstrally (or LPC) estimated parameters to control a speech synthesis model. For voiced speech the vocal tract transfer function is modeled as

$$V(z) = \prod_{k=1}^{4} \frac{1 - 2e^{-\sigma_k T}\cos(2\pi F_k T) + e^{-2\sigma_k T}}{1 - 2e^{-\sigma_k T}\cos(2\pi F_k T)\,z^{-1} + e^{-2\sigma_k T} z^{-2}}$$

- a cascade of digital resonators (F1 to F4) with unity gain at f = 0
- estimate F1 to F3 using formant estimation methods; F4 is fixed at 4000 Hz
- the formant bandwidths ($\sigma_1$ to $\sigma_4$) are fixed
- a fixed spectral compensation approximates the glottal pulse shape and radiation:

$$S(z) = \frac{(1 - e^{-aT})(1 + e^{-bT})}{(1 - e^{-aT} z^{-1})(1 + e^{-bT} z^{-1})}, \qquad a = 400\pi, \; b = 5000\pi$$

Speech Synthesis

For unvoiced speech the model is a complex pole and zero of the form

$$V(z) = \frac{\left(1 - 2e^{-\sigma T}\cos(2\pi F_p T) + e^{-2\sigma T}\right)\left(1 - 2e^{-\sigma T}\cos(2\pi F_z T)\,z^{-1} + e^{-2\sigma T} z^{-2}\right)}{\left(1 - 2e^{-\sigma T}\cos(2\pi F_p T)\,z^{-1} + e^{-2\sigma T} z^{-2}\right)\left(1 - 2e^{-\sigma T}\cos(2\pi F_z T) + e^{-2\sigma T}\right)}$$

- $F_p$ = frequency of the largest peak in the smoothed spectrum above 1000 Hz
- $F_z = (0.0065 F_p + 4.5)\,\Delta + (0.014 F_p + 28)$, where $\Delta = 20\log_{10}|H(e^{j 2\pi F_p T})| - 20\log_{10}|H(e^{j 0})|$
- these formulas ensure that spectral amplitudes are preserved
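A Python sketch of the voiced branch of such a formant synthesizer, i.e., an impulse-train excitation driving a cascade of unity-gain digital resonators; the bandwidth-to-sigma mapping, the fixed bandwidth values, the example formants, and the omission of the fixed spectral compensation S(z) are all illustrative assumptions:

```python
import numpy as np
from scipy.signal import lfilter

def resonator_coeffs(F, bw, fs):
    """Second-order digital resonator for formant F (Hz) with bandwidth bw (Hz),
    normalized for unity gain at f = 0 (numerator constant equals 1 + b1 + b2)."""
    T = 1.0 / fs
    c = np.exp(-np.pi * bw * T)              # pole radius e^{-sigma T}, assuming sigma = pi * bw
    b1 = -2.0 * c * np.cos(2 * np.pi * F * T)
    b2 = c * c
    return np.array([1.0 + b1 + b2]), np.array([1.0, b1, b2])

def synthesize_voiced(f0, formants, bandwidths, dur=0.5, fs=10000):
    """Drive a cascade of resonators (F1..F3 plus a fixed F4 = 4000 Hz)
    with a pitch-period impulse train."""
    n = int(dur * fs)
    excitation = np.zeros(n)
    excitation[::int(fs / f0)] = 1.0          # impulse train at the pitch period
    y = excitation
    for F, bw in zip(list(formants) + [4000.0], list(bandwidths) + [200.0]):
        num, den = resonator_coeffs(F, bw, fs)
        y = lfilter(num, den, y)              # cascade of digital resonators
    return y

# Example: a steady /a/-like vowel at 120 Hz pitch (formant and bandwidth values are illustrative)
vowel = synthesize_voiced(120.0, [730.0, 1090.0, 2440.0], [60.0, 100.0, 120.0])
```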

Spectral Matches for Various Sounds

(Figure: spectral matches of the synthesis model for various sounds.)

Analysis and Synthesis of Speech (formant synthesis)

- essential features of the signal are well preserved
- very intelligible synthetic speech
- the speaker is easily identified

Quantization of Synthesizer Parameters

- model parameters are estimated at a 100/sec rate and then lowpass filtered
- the sampling rate is reduced to twice the lowpass cutoff and the parameters are quantized
- the parameters could be filtered to a 16 Hz bandwidth with no noticeable degradation => 33 Hz sampling rate
- formants and pitch are quantized with a linear quantizer; amplitude is quantized with a logarithmic quantizer

    Parameter            Required Bits/Sample
    Pitch period (Tau)           6
    First formant (F1)           3
    Second formant (F2)          4
    Third formant (F3)           3
    Log amplitude (AV)           2

- 600 bps total rate for voiced speech, with 100 bps for the V/UV decisions

Quantization of Synthesizer Parameters

(Figures: formant modifications due to lowpass filtering; formant and pitch contours shown as (a) original, (b) smoothed, (c) quantized and decimated by a 3-to-1 ratio, with little perceptual difference.)

Algorithms for Speech Processing

Based on the various representations of speech, we can create algorithms for measuring features that characterize speech and for estimating properties of the speech signal, e.g.:
- presence or absence of speech (speech/non-speech discrimination)
- classification of a signal frame as voiced/unvoiced/background signal
- estimation of the pitch period (or pitch frequency) for a voiced speech frame
- estimation of the formant frequencies (resonances and antiresonances of the vocal tract) for both voiced and unvoiced speech frames

Based on the model of speech production, we can build a speech synthesizer driven by the speech parameters estimated with the above set of algorithms and synthesize intelligible speech.
