ICE 3251
DIGITAL SIGNAL PROCESSING
Slides by Dr. Chinmay Rajhans
Course by Dr. Anjan Gudigar (A) and Dr. Chinmay Rajhans (B)
Module 5: DSP Applications
Second Year B. Tech. (Electronics and Instrumentation Engineering)
MIT, Manipal, Udupi, Karnataka, India (MAHE)
Jan-May 2024
2023-2024
DSP Syllabus
• Discrete Time Signals and Systems: Standard Discrete time signals,
representation, classification, mathematical operations on discrete time
signals, response of LTI discrete time systems in time domain, classification
of discrete time systems. Linear convolution, cross correlation and
autocorrelations. (10 hrs)
• Discrete Fourier Transform (DFT): DFT, properties of the DFT, use of DFT in
linear filtering, filtering of long data sequences, DFT as linear
transformation, Inverse DFT, FFT Algorithms, Radix 2 DITFFT and DIFFFT. (10
hrs)
• IIR filters: Frequency response of analog and digital filters, characteristics of
Butterworth, Chebyshev and elliptic filters, classical filter design, design of
digital filters by impulse invariance, bilinear transformation and matched Z
transform. (12 hrs)
DSP Syllabus (Contd)
• FIR Filters: Linear phase FIR Filters, characteristics, frequency
response, design using windows, frequency sampling design. (06 hrs)
• Implementation of Discrete Time Systems: Structures for FIR systems
– Direct form, cascade form, Structures for IIR systems – Direct form
(DF I and II), cascade and parallel form structures, lattice ladder
structures. (06 hrs)
• Applications of digital signal processing: Speech processing- speech
coding, recognition, speech synthesis, biomedical signal processing,
Image processing applications. (04 hrs)
Books
1. John G. Proakis and Dimitris G. Manolakis, Digital Signal Processing, PHI,
(4e), 2003.
2. L. R. Rabiner and Bernard Gold, Theory and Applications of Digital
Signal Processing, PHI, 2002.
3. Sanjit K. Mitra, Digital Signal Processing: A Computer-Based
Approach, TMH, (4e), 2013.
4. A. Nagoor Kani, Digital Signal Processing, Tata McGraw Hill
Education, (2e), 2017.
5. Johnny R. Johnson, Introduction to Digital Signal Processing, Prentice
Hall of India, 2003.
COURSE OUTCOMES:
CO1: Analyze signals and LTI systems in discrete time domain and Z
domain.
CO2: Evaluate Discrete Fourier Transform (DFT) for discrete time
signals.
CO3: Analyze digital IIR filters.
CO4: Implement digital FIR filters.
CO5: Apply the principles of digital signal processing to real world
problems.
DSP Module 5 Syllabus
• Applications of digital signal processing: Speech processing- speech
coding, recognition, speech synthesis, biomedical signal processing,
Image processing applications.
Some DSP Applications
Speech Processing
• The speech signal is a slowly time-varying signal.
• The speech signal can be broadly classified into voiced and unvoiced
signals (a simple classification sketch follows after this list).
• Voiced signals are periodic in nature.
• Unvoiced signals are random (noise-like) in nature.
• Over a short segment of 15 to 20 msec, a voiced signal has a
well-defined fundamental frequency, representing a characteristic sound of the speech.
• The frequency components of the sounds in a speech signal lie mostly
within 4 kHz.
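To make the voiced/unvoiced distinction concrete, here is a minimal NumPy sketch that labels 20 ms frames using two classic measures, short-time energy and zero-crossing rate. The frame length, thresholds and test signal are illustrative assumptions, not values from these slides.

```python
import numpy as np

def classify_frames(x, fs=8000, frame_ms=20, energy_thresh=0.01, zcr_thresh=0.25):
    """Label each frame of speech x as 'voiced', 'unvoiced' or 'silence'.

    Voiced frames tend to have high energy and a low zero-crossing rate;
    unvoiced (noise-like) frames have a high zero-crossing rate.
    The thresholds here are illustrative and signal-dependent.
    """
    x = np.asarray(x, dtype=float)
    n = int(fs * frame_ms / 1000)                # samples per 20 ms frame
    labels = []
    for start in range(0, len(x) - n + 1, n):
        frame = x[start:start + n]
        energy = np.mean(frame ** 2)             # short-time energy
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0   # crossings per sample
        if energy < energy_thresh:
            labels.append("silence")
        elif zcr < zcr_thresh:
            labels.append("voiced")
        else:
            labels.append("unvoiced")
    return labels

# Example: a 100 Hz "voiced" tone followed by "unvoiced" noise, at fs = 8 kHz.
fs = 8000
t = np.arange(fs) / fs
voiced = 0.5 * np.sin(2 * np.pi * 100 * t)
unvoiced = 0.2 * np.random.randn(fs)
labels = classify_frames(np.concatenate([voiced, unvoiced]), fs)
print(labels[:3], labels[-3:])    # ['voiced', ...] ... ['unvoiced', ...]
```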
Broad Classification of Speech Processing
• Speech analysis:
In general, the process of extracting the features of speech, or of directly
digitizing the speech and then reducing the bit rate, is called speech
analysis. It is used in speech recognition, speaker verification and speaker
identification.
• Speech synthesis:
In general, the process of decoding a speech signal represented in the form of
codes is called speech synthesis. It is used in the conversion of text to speech.
Speech Coding and Decoding
• Speech coding is the digital representation of speech using the minimum
bit rate without affecting voice quality.
• Speech decoding is the conversion of digital speech data back to analog
speech.
Speech Coding
• The classical method for quality transmission and reception of digital speech
over telephone lines employs a bit rate of 64 kbps (kilobits per
second).
• In Pulse Code Modulation (PCM), the speech signal is sampled at 8
kHz and each sample is quantized to 13 bits and then compressed to 8 bits
using the μ-law or A-law companding standards, giving the 64 kbps
(8000 samples per second × 8 bits per sample = 64,000 bits per second)
needed for transmission (a small numeric sketch follows after this list).
• A number of digital speech coding techniques have been developed to represent
speech at bit rates as low as about 1000 bits per second, in order to use
the transmission channels effectively and to reduce the memory requirements
for storage and retrieval of speech.
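A small numeric sketch of the 64 kbps figure and of μ-law companding (the standard μ = 255 compressor applied to samples normalized to [-1, 1]). This illustrates the companding idea only; it is not the exact ITU-T G.711 bit-level encoder.

```python
import numpy as np

fs = 8000             # samples per second (telephone-quality speech)
bits_per_sample = 8   # after mu-law/A-law companding of the linear samples
print("PCM bit rate:", fs * bits_per_sample, "bits per second")   # 64000

def mu_law_compress(x, mu=255.0):
    """Compress samples normalized to [-1, 1] with the mu-law characteristic."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Inverse of the mu-law compressor."""
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

x = np.linspace(-1, 1, 5)
y = mu_law_compress(x)
# After compression the samples would be uniformly quantized to 8 bits;
# small amplitudes get proportionally more quantization levels.
print(np.round(y, 3))
print(np.allclose(mu_law_expand(y), x))   # True: compress/expand are inverses
```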
Speech Coding
• Speech coding techniques can be broadly classified into
waveform coding techniques and parametric coding techniques.
• Some of the popular waveform coding techniques are Adaptive Pulse
Code Modulation (APCM), Differential Pulse Code Modulation
(DPCM) and Adaptive Differential Pulse Code Modulation (ADPCM).
• Some of the parametric methods of speech coding are Linear
Predictive Coding (LPC), Mel-Frequency Cepstrum Coefficients
(MFCC), Code Excited Linear Prediction (CELP) and Vector Sum
Excited Linear Prediction (VSELP). (A minimal LPC sketch follows below.)
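As an illustration of the parametric (source-filter) view behind the LPC family, the sketch below estimates linear-prediction coefficients for one frame using the autocorrelation method and the Levinson-Durbin recursion. The frame length, model order and synthetic test frame are illustrative choices; a real coder would also quantize these coefficients and model the excitation.

```python
import numpy as np

def lpc(frame, order=10):
    """Linear prediction coefficients a[1..order] of a speech frame
    (autocorrelation method, Levinson-Durbin recursion).

    The frame is modeled as x[n] ~ -sum_k a[k] * x[n-k]; a coder would
    transmit the a[k] plus excitation parameters instead of the waveform."""
    x = np.asarray(frame, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                       # reflection coefficient
        a_prev = a.copy()
        a[1:i] = a_prev[1:i] + k * a_prev[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)                 # residual prediction-error energy
    return a, err

# Example: fit a 10th-order predictor to a short synthetic "voiced" frame.
fs = 8000
t = np.arange(int(0.02 * fs)) / fs                      # one 20 ms frame
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)
coeffs, residual = lpc(frame * np.hamming(len(frame)))
print(np.round(coeffs, 3), residual)
```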
Adaptive Differential Pulse Code Modulation (ADPCM): Encoder
Adaptive Differential Pulse Code Modulation (ADPCM): Decoder
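The encoder and decoder structures appear as block diagrams on these slides; as a companion, here is a toy differential coder with a first-order predictor, a 2-bit quantizer of the prediction error and multiplicative step-size adaptation. It only illustrates the ADPCM idea; the code width, step sizes and adaptation rule are arbitrary choices, and this is not the ITU-T G.726 algorithm.

```python
import numpy as np

def adpcm_encode(x, step0=0.05):
    """Toy ADPCM-style encoder: 2-bit codes for the prediction error."""
    pred, step, codes = 0.0, step0, []
    for s in x:
        err = s - pred                                        # prediction error
        code = int(np.clip(np.round(err / step + 1.5), 0, 3)) # 2-bit quantizer
        dq = (code - 1.5) * step                              # dequantized error
        pred += dq                                # predictor = last reconstruction
        step *= 1.5 if code in (0, 3) else 0.9    # grow step on outer codes
        step = max(step, 1e-4)
        codes.append(code)
    return codes

def adpcm_decode(codes, step0=0.05):
    """Decoder mirrors the encoder's predictor and step-size adaptation."""
    pred, step, out = 0.0, step0, []
    for code in codes:
        dq = (code - 1.5) * step
        pred += dq
        step *= 1.5 if code in (0, 3) else 0.9
        step = max(step, 1e-4)
        out.append(pred)
    return np.array(out)

fs = 8000
t = np.arange(fs // 10) / fs
x = 0.5 * np.sin(2 * np.pi * 200 * t)             # 100 ms test tone
y = adpcm_decode(adpcm_encode(x))
print("2 bits/sample ->", 2 * fs, "bps; reconstruction RMS error:",
      round(float(np.sqrt(np.mean((x - y) ** 2))), 4))
```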
Mel-Frequency Cepstrum Coefficients (MFCC)
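A simplified NumPy sketch of the MFCC front end: pre-emphasis, framing, Hamming windowing, power spectrum, a triangular mel filterbank, log compression and a DCT. The frame sizes, filter count and DCT normalization are common but illustrative choices; production MFCC implementations differ in such details (liftering, energy term, normalization).

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, fs=8000, frame_len=0.025, frame_step=0.010,
         n_fft=512, n_filters=26, n_ceps=13):
    """Return an (n_frames, n_ceps) array of cepstral coefficients."""
    sig = np.asarray(signal, dtype=float)
    sig = np.append(sig[0], sig[1:] - 0.97 * sig[:-1])          # pre-emphasis
    flen, fstep = int(frame_len * fs), int(frame_step * fs)
    n_frames = 1 + (len(sig) - flen) // fstep
    frames = np.stack([sig[i * fstep:i * fstep + flen] for i in range(n_frames)])
    frames *= np.hamming(flen)                                  # taper each frame
    pspec = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft     # power spectrum

    # Triangular filters equally spaced on the mel scale from 0 Hz to fs/2.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fbank[m - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)

    logmel = np.log(pspec @ fbank.T + 1e-10)                    # log filterbank energies
    # DCT-II of each row; keep the first n_ceps cepstral coefficients.
    k = np.arange(n_ceps)[:, None]
    n = np.arange(n_filters)[None, :]
    dct = np.cos(np.pi * k * (2 * n + 1) / (2 * n_filters))
    return logmel @ dct.T

fs = 8000
t = np.arange(fs) / fs
feats = mfcc(np.sin(2 * np.pi * 300 * t), fs)   # 1 s synthetic tone
print(feats.shape)                               # (98, 13)
```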
Speech Recognition
• A speaker-dependent system recognizes a specific speaker’s speech,
while a speaker-independent system can recognize the speech of any
unspecified speaker.
Speech Recognition
• The front-end analysis extracts the acoustic features of the input speech.
• Some popular techniques for extracting the acoustic features of
speech are Linear Predictive Coding (LPC), Mel-Frequency Cepstrum Coefficients
(MFCC) and Perceptual Linear Prediction (PLP).
• The output of front-end analysis is a compact, efficient set of parameters that
represents the acoustic properties observed in the input speech signal, for
subsequent use by acoustic modeling.
• The acoustic models represent the acoustic and phonetic properties,
microphone and environmental variability, as well as gender and dialectal
differences among speakers.
• The language models contain the syntax, semantics and pragmatics knowledge
for the intended recognition task. These models can be dynamically modified,
during the training process, according to the characteristics of the speech
to be recognized.
Speech Recognition
• A speech recognition system has to be trained with known speech before
the system is used for recognition.
• Acoustic pattern recognition measures the similarity between an
input speech and a reference model (obtained during training) and
determines the best match for the input speech.
• Some popular methods of acoustic pattern matching are Dynamic Time
Warping (DTW), Hidden Markov Modeling (HMM), discrete HMM (DHMM),
Continuous-Density HMM (CDHMM) and Vector Quantization (VQ). (A minimal
DTW sketch follows after this list.)
• Language analysis is important in speech recognition, especially for Large
Vocabulary Continuous Speech Recognition (LVCSR) tasks. The speech
decoding process needs to invoke knowledge of pronunciation, lexicon,
syntax and pragmatics in order to produce a satisfactory output text
sequence.
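A minimal NumPy sketch of Dynamic Time Warping as used for template matching: it aligns two feature sequences (for example, MFCC frames of a test word and of a stored reference) and returns the accumulated alignment cost. The Euclidean local cost and the absence of path constraints are simplifying assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Accumulated alignment cost between feature sequences a (n, d) and b (m, d)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]

# Recognition by template matching: pick the reference word with the lowest cost.
templates = {"yes": np.random.randn(40, 13), "no": np.random.randn(30, 13)}  # stored features
test = templates["yes"] + 0.1 * np.random.randn(40, 13)                      # noisy "yes"
print(min(templates, key=lambda w: dtw_distance(test, templates[w])))        # -> "yes"
```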
Speech Synthesis
Speech Synthesis
• The process of transforming text into speech consists of two phases.
• The first phase consists of text analysis and phonetic analysis. The
second phase is the generation of the speech signal, which can be divided
into two sub-phases: the search for speech segments in a database
(or the creation of these segments) and the implementation of the
prosodic features.
• Text analysis includes the tasks of text normalization and linguistic
analysis. In text normalization, numbers and symbols are
converted to words and abbreviations are replaced by their
corresponding whole words or phrases, so that the whole text is
converted into words as they would be uttered. (A toy normalization
sketch follows after this list.)
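A toy text-normalization sketch: it expands a small, purely hypothetical abbreviation lexicon and spells out digits so the text reads as it would be uttered. Real normalizers handle numbers, dates, currencies and context far more carefully.

```python
import re

# Hypothetical, tiny lexicons purely for illustration.
ABBREVIATIONS = {"dr.": "doctor", "st.": "street", "no.": "number"}
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def normalize(text):
    """Very small text normalizer: abbreviations -> words, digits -> spoken digits."""
    words = []
    for token in text.lower().split():
        if token in ABBREVIATIONS:
            words.append(ABBREVIATIONS[token])
        elif re.fullmatch(r"\d+", token):
            words.extend(DIGITS[int(d)] for d in token)   # read digit by digit
        else:
            words.append(token)
    return " ".join(words)

print(normalize("Dr. Smith lives at No. 42 Baker St."))
# -> "doctor smith lives at number four two baker street"
```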
Speech Synthesis
• The linguistic analysis aims at understanding the content of the text
and the exact meaning of each uttered word, and provides prosodic
information such as the position of pauses and the distinction between
interrogative clauses and statements, for subsequent processing.
• Phonetic analysis assigns a phonetic transcription to each word; this
process is called grapheme-to-phoneme conversion. (A toy lookup sketch
follows after this list.)
• A grapheme is the smallest unit of written language, and a phoneme is the
smallest unit of speech.
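A toy grapheme-to-phoneme sketch: lookup in a tiny, hypothetical pronunciation lexicon with a naive letter-by-letter fallback. Real systems use large lexicons plus trained letter-to-sound rules.

```python
# Hypothetical mini pronunciation lexicon (ARPAbet-like symbols), for illustration only.
LEXICON = {
    "speech": ["S", "P", "IY", "CH"],
    "signal": ["S", "IH", "G", "N", "AH", "L"],
}

# Extremely naive single-letter fallback (not real letter-to-sound rules).
LETTER_TO_SOUND = {"a": "AH", "b": "B", "c": "K", "d": "D", "e": "EH",
                   "g": "G", "i": "IH", "l": "L", "n": "N", "o": "OW",
                   "p": "P", "r": "R", "s": "S", "t": "T", "u": "UH"}

def grapheme_to_phoneme(word):
    """Return a phoneme list: lexicon lookup first, letter-by-letter fallback otherwise."""
    word = word.lower()
    if word in LEXICON:
        return LEXICON[word]
    return [LETTER_TO_SOUND.get(ch, ch.upper()) for ch in word]

print(grapheme_to_phoneme("speech"))   # ['S', 'P', 'IY', 'CH']
print(grapheme_to_phoneme("dsp"))      # fallback: ['D', 'S', 'P']
```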
Speech Synthesis
• Prosody refers to the rhythm of speech, stress patterns, pitch,
duration, intonation, etc., and it plays a very important role in the
understandability of speech. In prosodic analysis, prosody
features are added to the synthetic speech so that it resembles natural
speech. Moreover, hierarchical rules have been developed to
control the timing and the fundamental frequency, which makes the flow
of synthesized speech resemble natural-sounding speech.
• The speech synthesis block finally generates the speech signal. This can
be done by selecting a speech unit for every phoneme from a database,
using an appropriate search process. The resulting short units of
speech are joined together to produce the final speech signal. (A minimal
concatenation sketch follows below.)
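A minimal sketch of that final joining step: short units (here synthetic tones standing in for recorded phoneme units) are concatenated with a linear cross-fade so the joins do not click. The fade length and the use of tones are illustrative assumptions.

```python
import numpy as np

def concatenate_units(units, fs=8000, crossfade_ms=10):
    """Join speech units end to end with a linear cross-fade at each boundary."""
    xf = int(fs * crossfade_ms / 1000)         # cross-fade length in samples
    fade_in = np.linspace(0.0, 1.0, xf)
    out = np.asarray(units[0], dtype=float)
    for u in units[1:]:
        u = np.asarray(u, dtype=float)
        out[-xf:] = out[-xf:] * (1.0 - fade_in) + u[:xf] * fade_in   # blend the overlap
        out = np.concatenate([out, u[xf:]])
    return out

# Stand-ins for two selected units: 120 ms tones at different frequencies.
fs = 8000
t = np.arange(int(0.12 * fs)) / fs
units = [0.4 * np.sin(2 * np.pi * f * t) for f in (300.0, 500.0)]
speech = concatenate_units(units, fs)
print(len(speech))   # 2 * 960 - 80 = 1840 samples
```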
Digital Vocoder
Block diagram of speech analysis digital vocoder
Block diagram of speech synthesis digital vocoder
Dual Tone Multi Frequency (DTMF) in Telephone Dialing
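In DTMF dialing, each key is encoded as the sum of one low-group (row) and one high-group (column) sinusoid; the standard row frequencies are 697, 770, 852 and 941 Hz and the column frequencies are 1209, 1336, 1477 and 1633 Hz. The sketch below generates a key's tone and detects it with the Goertzel algorithm; the tone duration and the simple maximum-power decision are illustrative choices.

```python
import numpy as np

ROWS = [697.0, 770.0, 852.0, 941.0]            # low-group (row) frequencies, Hz
COLS = [1209.0, 1336.0, 1477.0, 1633.0]        # high-group (column) frequencies, Hz
KEYS = ["123A", "456B", "789C", "*0#D"]        # keypad layout: KEYS[row][col]

def dtmf_tone(key, fs=8000, duration=0.1):
    """Generate the two-tone signal for one keypad key."""
    r = next(i for i, row in enumerate(KEYS) if key in row)
    c = KEYS[r].index(key)
    t = np.arange(int(fs * duration)) / fs
    return np.sin(2 * np.pi * ROWS[r] * t) + np.sin(2 * np.pi * COLS[c] * t)

def goertzel_power(x, f, fs):
    """Signal power near frequency f via the Goertzel recursion."""
    n = len(x)
    k = int(round(n * f / fs))
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def detect_key(x, fs=8000):
    """Pick the strongest row and column frequency and map back to a key."""
    r = int(np.argmax([goertzel_power(x, f, fs) for f in ROWS]))
    c = int(np.argmax([goertzel_power(x, f, fs) for f in COLS]))
    return KEYS[r][c]

print(detect_key(dtmf_tone("5")))   # expected: '5'
```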
Biomedical Signal Processing
Biomedical signal classification
• Bioelectric signals: Signals generated by nerve cells and muscle cells.
• Biomagnetic signals: The brain, heart and lungs produce extremely weak magnetic fields, which
contain information additional to that obtained from bioelectric signals.
• Bioimpedance signals: The tissue impedance reveals information about tissue composition, blood
volume, blood distribution and more. It is usually obtained as the ratio of the voltage measured at
the desired spot to the current injected through electrodes.
• Bioacoustic signals: Sound (acoustic) signals are created by the flow of blood through the heart, its
valves and vessels, and by the flow of air through the upper and lower airways and lungs. Sound
signals are also produced by the digestive tract, the joints and the contraction of muscles. These
signals can be recorded using microphones.
• Biomechanical signals: Motion and displacement signals, pressure, tension and flow signals.
• Biochemical signals: Chemical measurements from living tissue or from samples analyzed in a
laboratory.
• Biooptical signals: For example, blood oxygenation obtained by measuring transmitted and
backscattered light from tissue, or estimation of cardiac output by dye dilution.
Processing of Biomedical Signals
• The processing of biomedical signals usually consists of at least four
stages (a minimal end-to-end sketch follows after this list):
• Measurement or observation of the signals, also called signal acquisition.
• Transformation and reduction of the signals.
• Computation of signal parameters that are diagnostically significant.
• Interpretation or classification of the signals.
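A minimal end-to-end sketch of these four stages on a synthetic ECG-like trace, assuming SciPy's signal tools are available: acquisition is simulated, a band-pass filter performs the transformation/reduction, the computed parameter is the heart rate from detected R-peaks, and a trivial rule stands in for interpretation. The synthetic signal, filter band and thresholds are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

# 1. Acquisition (simulated): a 72 beats-per-minute ECG-like spike train,
#    plus baseline wander and noise, sampled at 250 Hz for 10 s.
fs, dur, bpm = 250, 10.0, 72
t = np.arange(int(fs * dur)) / fs
ecg = np.zeros_like(t)
for bt in np.arange(0.3, dur, 60.0 / bpm):          # narrow Gaussian "R waves"
    ecg += np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2))
raw = ecg + 0.3 * np.sin(2 * np.pi * 0.4 * t) + 0.05 * np.random.randn(len(t))

# 2. Transformation / reduction: band-pass 0.5-40 Hz removes wander and noise.
sos = butter(3, [0.5, 40.0], btype="band", fs=fs, output="sos")
clean = sosfiltfilt(sos, raw)

# 3. Parameter computation: R-peak detection and heart-rate estimate.
peaks, _ = find_peaks(clean, height=0.5, distance=int(0.4 * fs))
rr = np.diff(peaks) / fs                             # RR intervals in seconds
heart_rate = 60.0 / np.mean(rr)

# 4. Interpretation / classification (toy rule).
label = "normal" if 60 <= heart_rate <= 100 else "abnormal"
print(round(float(heart_rate), 1), "bpm ->", label)
```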
Biomedical Applications Domains
• Information gathering: The measurement of phenomena to
understand the system.
• Diagnosis: Detection of malfunction, pathology or abnormality.
• Monitoring: To obtain continuous or periodic information about the
system.
• Therapy and control: Modify the behaviour of the system and ensure
the result.
• Evaluation: Proof of performance, quality control, effect of treatment.
Thank You
Time for Questions
Dr. Chinmay Rajhans
You can email queries / questions / feedback at
chinmay.rajhans@manipal.edu
with subject: DSP Module 5 Queries
