CONVOLUTIONAL
CODES
Presented By : Abdulaziz Al mabrok Al tagawy
Course : Coding Theory - Feb/2016
Contents:
 Introduction
 Convolutional Codes and Encoders
 Description in the D-Transform Domain
 Convolutional Encoder Representations
 Representation of Connections
 State Diagram Representation
 Trellis Diagram
 Convolutional Codes in Systematic Form
 Minimum Free Distance of a Convolutional Code
 Maximum Likelihood Detection
 Decoding of Convolutional Codes: The Viterbi Algorithm
 Catastrophic Generator Matrix
 Punctured Convolutional Codes
 The Most Widely Used Convolutional Codes
 Practical Examples of Convolutional Codes
 Advantages of Convolutional Codes
Introduction
 Convolutional codes were first discovered by P. Elias in 1955.
 The structure of convolutional codes is quite different from
that of block codes.
 During each unit of time, the input to a convolutional code
encoder is also a k-bit message block and the corresponding
output is also an n-bit coded block with k < n.
 Each coded n-bit output block depends not only on the corresponding k-bit input message block at the same time unit but also on the m previous message blocks.
 Thus the encoder has k input lines, n output lines and a
memory of order m .
Introduction
 Each message (or information) sequence is encoded into a code
sequence.
 The set of all possible code sequences produced by the encoder is
called an (n,k,m) convolutional code.
 The parameters k and n are normally small, say 1 ≤ k ≤ 8 and 2 ≤ n ≤ 9.
 The ratio R=k/n is called the code rate.
 Typical values for code rate are: 1/2, 1/3, 2/3.
 The parameter m is called the memory order of the code.
 Note that the number of redundant (or parity) bits in each coded
block is small. However, more redundant bits are added by
increasing the memory order m of the code while holding k and n
fixed.
An (n,k,m) convolutional code encoder
Convolutional Codes and Encoders
 A convolutional encoder is built from shift registers (D flip-flops) and combinatorial logic that performs modulo-two addition.
Convolutional Codes and Encoders
 We’ll also need an output selector to toggle
between the two modulo-two adders .
 The output selector (SEL A/B block) cycles
through two states; in the first state, it selects and
outputs the output of the upper modulo-two adder;
in the second state, it selects and outputs the output
of the lower modulo-two adder.
Convolutional Codes and Encoders
 For the case of (n,k,m), the encoder has k inputs
and n outputs .
 At the i-th input terminal, the input message
sequence is:
m^(i) = (m_0^(i), m_1^(i), m_2^(i), ..., m_l^(i), ...)
for 1 ≤ i ≤ k.
 At the j-th output terminal, the output code
sequence is
c^(j) = (c_0^(j), c_1^(j), c_2^(j), ..., c_l^(j), ...)
for 1 ≤ j ≤ n.
Convolutional Codes and Encoders
 An (n,k,m) convolutional code is specified by k × n generator sequences:
g_1^(1), g_1^(2), g_1^(3), ..., g_1^(n)
g_2^(1), g_2^(2), g_2^(3), ..., g_2^(n)
   :        :        :              :
g_k^(1), g_k^(2), g_k^(3), ..., g_k^(n)
 Each generator sequence g_i^(j), from the i-th input to the j-th output, is of the following form:
g_i^(j) = (g_{i,0}^(j), g_{i,1}^(j), g_{i,2}^(j), ..., g_{i,m}^(j))
Convolutional Codes and Encoders
 The encoders of convolutional codes can be represented as linear time-invariant (LTI) systems.
 The n output code sequences are then given by:
c^(1) = m^(1) * g_1^(1) + m^(2) * g_2^(1) + ... + m^(k) * g_k^(1)
c^(2) = m^(1) * g_1^(2) + m^(2) * g_2^(2) + ... + m^(k) * g_k^(2)
   :
c^(n) = m^(1) * g_1^(n) + m^(2) * g_2^(n) + ... + m^(k) * g_k^(n)
Convolutional Codes and Encoders
 where * denotes the convolution operation and g_i^(j) is the impulse response of the i-th input sequence with respect to the j-th output.
 g_i^(j) can be found by stimulating the encoder with the discrete impulse (1, 0, 0, . . .) at the i-th input and observing the j-th output when all other inputs are fed the zero sequence (0, 0, 0, . . .).
 The impulse responses are called generator
sequences of the encoder.
Convolutional Codes and Encoders
 Impulse Response for the Binary (2, 1, 2) Convolutional
Code:
𝑔1 = (1, 1, 1, 0, . . .) = (1, 1, 1),
𝑔2 = (1, 0, 1, 0, . . .) = (1, 0, 1)
𝑣1 = (1, 1, 1, 0, 1) ∗ (1, 1, 1) = (1, 0, 1, 0, 0, 1, 1)
𝑣2 = (1, 1, 1, 0, 1) ∗ (1, 0, 1) = (1, 1, 0, 1, 0, 0, 1)
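The impulse-response arithmetic above can be checked with a short sketch (the function name is illustrative, not from the slides): each output stream is the convolution of the message with the corresponding generator sequence, with additions taken modulo 2.

```python
def gf2_convolve(u, g):
    """Convolve message u with generator g over GF(2) (XOR = mod-2 addition)."""
    out = [0] * (len(u) + len(g) - 1)
    for i, ub in enumerate(u):
        for j, gb in enumerate(g):
            out[i + j] ^= ub & gb  # accumulate products modulo 2
    return out

u = [1, 1, 1, 0, 1]
v1 = gf2_convolve(u, [1, 1, 1])  # -> [1, 0, 1, 0, 0, 1, 1]
v2 = gf2_convolve(u, [1, 0, 1])  # -> [1, 1, 0, 1, 0, 0, 1]
```

Both results match the impulse-response example above.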
Convolutional Codes and Encoders
 Generator Matrix in the Time Domain:
 A convolutional code sequence can be generated by multiplying the information sequence by a generator matrix.
 Let 𝑢1, 𝑢2, . . . , 𝑢 𝑘 be the information sequences and 𝑣1, 𝑣2, . . . , 𝑣 𝑛 be the output sequences.
 Arrange the information sequences as :
𝑢 = (𝑢1,0, 𝑢2,0, . . . , 𝑢 𝑘,0, 𝑢1,1, 𝑢2,1, . . . , 𝑢 𝑘,1,
. . . , 𝑢1,𝑙, 𝑢2,𝑙, . . . , 𝑢 𝑘,𝑙, . . .)
= (𝑤0, 𝑤1, . . . , 𝑤𝑙, . . .),
Convolutional Codes and Encoders
 Generator Matrix in the Time Domain:
 Arrange the output sequences as:
𝑣 = (𝑣1,0, 𝑣2,0, . . . , 𝑣 𝑘,0, 𝑣1,1, 𝑣2,1, . . . , 𝑣 𝑘,1,
. . . , 𝑣1,𝑙, 𝑣2,𝑙, . . . , 𝑣 𝑘,𝑙, . . .)
= (𝑧0, 𝑧1, . . . , 𝑧𝑙, . . .),
 𝑣 is called a codeword or code sequence.
 The relation between 𝑣 and 𝑢 can be characterized as
𝑣 = 𝑢 · G
 where G is the generator matrix of the code.
Convolutional Codes and Encoders
 Generator Matrix in the Time Domain:
 The generator matrix is :
 G =
G0  G1  G2  ⋯  Gm
    G0  G1  ⋯  Gm−1  Gm
        G0  ⋯  Gm−2  Gm−1  Gm
            ⋱              ⋱
Convolutional Codes and Encoders
 Generator Matrix in the Time Domain:
 With the k × n submatrices :
G_l =
g_{1,l}^(1)  g_{2,l}^(1)  ⋯  g_{n,l}^(1)
g_{1,l}^(2)  g_{2,l}^(2)  ⋯  g_{n,l}^(2)
     ⋮            ⋮                ⋮
g_{1,l}^(k)  g_{2,l}^(k)  ⋯  g_{n,l}^(k)
 The element g_{j,l}^(i), for i ∈ [1, k] and j ∈ [1, n], is the l-th element of the impulse response of the i-th input with respect to the j-th output.
Convolutional Codes and Encoders
 Generator Matrix in the Time Domain:
 Ex: Generator Matrix of the Binary (2, 1, 2)
Convolutional Code :
Convolutional Codes and Encoders
 Generator Matrix in the Time Domain:
 Ex: Generator Matrix of the Binary (2, 1, 2)
Convolutional Code :
𝑣 = 𝑢 ·G
𝑣 = (1, 1, 1, 0, 1) ·
11 10 11
   11 10 11
      11 10 11
         11 10 11
            11 10 11
𝑣 = (11, 01, 10, 01, 00, 10, 11)
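The time-domain matrix product can be sketched directly (function names are illustrative): each message bit selects one shifted row of interleaved submatrices G0 G1 ... Gm, and the selected rows are XORed together.

```python
def time_domain_generator(gens, L):
    """Rows of the time-domain generator matrix for a rate-1/n code.

    gens: n generator sequences (each of length m+1); L: message length.
    Row l holds the interleaved block (G0 G1 ... Gm) shifted right by l*n columns.
    """
    n, m = len(gens), len(gens[0]) - 1
    width = (L + m) * n
    rows = []
    for l in range(L):
        row = [0] * width
        for t in range(m + 1):        # submatrix G_t = (g^(1)_t ... g^(n)_t)
            for j in range(n):
                row[(l + t) * n + j] = gens[j][t]
        rows.append(row)
    return rows

def encode(u, gens):
    """Compute v = u . G over GF(2) by XORing the rows selected by u."""
    G = time_domain_generator(gens, len(u))
    v = [0] * len(G[0])
    for bit, row in zip(u, G):
        if bit:
            v = [a ^ b for a, b in zip(v, row)]
    return v

# Binary (2,1,2) code with g1 = (111), g2 = (101), u = (1,1,1,0,1):
v = encode([1, 1, 1, 0, 1], [[1, 1, 1], [1, 0, 1]])
# read in pairs: 11 01 10 01 00 10 11
```

The interleaved result agrees with the codeword computed on the slide.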
Convolutional Codes and Encoders
 Generator Matrix in the Time Domain:
Description in the D-Transform Domain
 The D-transform of a sequence u = (u0, u1, u2, ...) is a function of the indeterminate D (the delay operator) and is defined as:
U(D) = u0 + u1 D + u2 D^2 + ... = Σ_l u_l D^l
Description in the D-Transform
Domain
 The convolution property of the D-transform, D{u ∗ g} = U(D)G(D), is used to turn the convolution of input sequences with generator sequences into a multiplication in the D domain.
Description in the D-Transform
Domain
 Ex: (2,1,2) convolutional code encoder :
G^(1)(D) = 1 + D + D^2
G^(2)(D) = 1 + D^2
G(D) = [1 + D + D^2, 1 + D^2]
U(D) = 1 + D + D^2 + D^4
Description in the D-Transform
Domain
 Ex: (2,1,2) convolutional code encoder :
V(D) = U(D) · G(D)
V1(D) = U(D) · G^(1)(D)
V1(D) = (1 + D + D^2 + D^4) × (1 + D + D^2)
      = 1 + D + D^2 + D^4 + D + D^2 + D^3 + D^5 + D^2 + D^3 + D^4 + D^6
V1(D) = 1 + D^2 + D^5 + D^6
Description in the D-Transform
Domain
 Ex: (2,1,2) convolutional code encoder :
V(D) = U(D) · G(D)
V2(D) = U(D) · G^(2)(D)
V2(D) = (1 + D + D^2 + D^4) × (1 + D^2)
      = 1 + D + D^2 + D^4 + D^2 + D^3 + D^4 + D^6
V2(D) = 1 + D + D^3 + D^6
Description in the D-Transform
Domain
 Ex: (2,1,2) convolutional code encoder :
V1(D) = 1 + D^2 + D^5 + D^6
V2(D) = 1 + D + D^3 + D^6
In the time domain:
𝑣1 = (1010011)
𝑣2 = (1101001)
Then the code word is:
𝑣 = (11 01 10 01 00 10 11)
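The D-domain products above can be reproduced by multiplying GF(2) polynomials represented as sets of exponents (a convenient sketch; the representation and function name are not from the slides): a coefficient survives exactly when its exponent occurs an odd number of times.

```python
def gf2_poly_mul(a, b):
    """Multiply GF(2) polynomials given as exponent sets, e.g. {0,1,2} = 1+D+D^2."""
    out = set()
    for i in a:
        for j in b:
            out ^= {i + j}  # symmetric difference: coefficients add modulo 2
    return out

U = {0, 1, 2, 4}               # U(D)   = 1 + D + D^2 + D^4
G1 = {0, 1, 2}                 # G1(D)  = 1 + D + D^2
G2 = {0, 2}                    # G2(D)  = 1 + D^2
V1 = sorted(gf2_poly_mul(U, G1))  # -> [0, 2, 5, 6], i.e. 1 + D^2 + D^5 + D^6
V2 = sorted(gf2_poly_mul(U, G2))  # -> [0, 1, 3, 6], i.e. 1 + D + D^3 + D^6
```

Interleaving the coefficient sequences of V1 and V2 gives the same codeword as the time-domain computation.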
Convolutional Encoder Representations
 In the encoder of the convolutional code 𝐶𝑐𝑜𝑛𝑣 (2, 1, 2), in each clock cycle the bits contained in each register stage are right-shifted to the following stage.
 The two outputs are sampled to
generate the two output bits for each
input bit.
 Convolutional code 𝐶𝑐𝑜𝑛𝑣(n, k, K) is
described by the vector
representation of the generator
polynomials 𝑔(1) and 𝑔(2) that
correspond to the upper and lower
branches of the FSSM (finite state
sequential machine)
Representation of Connections:
 In this description, a ‘1’ means that there is a connection, and a ‘0’ means that the corresponding register stage is not connected:
𝑔(1) = (101)
𝑔(2) = (111)
Convolutional Encoder
Representations
 Since the encoder is a linear sequential circuit,
its behavior can be described by a state
diagram.
 Define the encoder state at time l as the m-tuple
(m_{l-1}, m_{l-2}, ..., m_{l-m}),
which consists of the m message bits stored in the shift register.
 There are 2^m possible states. At any time instant, the encoder must be in one of these states.
 The encoder undergoes a state transition when
a message bit is shifted into the encoder
register as shown below.
State Diagram Representation:
 At each time unit, the output block depends on the current input block and the encoder state.
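The state-transition behavior can be sketched as a small finite-state machine (a minimal sketch, using the generator sequences g1 = (111), g2 = (101) of the earlier examples; the upper/lower branch assignment and function name are illustrative):

```python
def step(state, bit, gens=((1, 1, 1), (1, 0, 1))):
    """One transition of the (2,1,2) encoder FSM.

    state: (m_{l-1}, m_{l-2}); returns (output bit pair, next state).
    """
    regs = (bit,) + state  # current register contents (m_l, m_{l-1}, m_{l-2})
    out = tuple(sum(r & g for r, g in zip(regs, gseq)) % 2 for gseq in gens)
    return out, (bit, state[0])

# Walk the message 1,1,1,0,1 from the all-zero state:
state = (0, 0)
outputs = []
for b in [1, 1, 1, 0, 1]:
    out, state = step(state, b)
    outputs.append(out)
# outputs -> [(1, 1), (0, 1), (1, 0), (0, 1), (0, 0)]
```

The emitted pairs match the first five output blocks of the earlier encoding example.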
Convolutional Encoder
Representations
 The state diagram can be expanded in time to display the state transition of a convolutional
encoder in time. This expansion in time results in a trellis diagram.
 Normally the encoder starts from the all-zero state, (0, 0, … , 0).
 When the first message bit 𝑚0 is shifted into the encoder register, the encoder is in one of
the two following states:
(𝑚0= 0,0,0, …..,0) ; (𝑚0 = 1,0,0, …….,0)
 When the second message bit 𝑚1 is shifted into the encoder register, the encoder is in one
of the following states:
(𝑚1 =0, 𝑚0= 0,0,0, …..,0) ; (𝑚1 =0, 𝑚0 = 1,0,0, …….,0)
(𝑚1 =1, 𝑚0= 0,0,0, …..,0) ; (𝑚1 =1, 𝑚0 = 1,0,0, …….,0)
 Every time a message bit is shifted into the encoder register, the number of states doubles, until the number of states reaches 2^m.
Trellis Diagram:
Convolutional Encoder
Representations
 At time m, the trellis reaches its steady state, in which all 2^m states appear.
Trellis Diagram:
Convolutional Encoder
Representations
 Termination of a Trellis:
 When the entire sequence m has been encoded, the encoder
must return to the starting state. This can be done by
appending m zeros to the message sequence m .
 When the first “0” is shifted into the encoder register, the number of possible states is halved; there are 2^(m-1) such states.
 When the second “0” is shifted into the encoder register, it is halved again; there are 2^(m-2) such states.
 When the m-th “0” is shifted into the register, the encoder
is back to the all-zero state, (0, 0, … , 0).
Trellis Diagram:
Convolutional Encoder
Representations
 Termination of a Trellis:
 At this instant, the trellis converges into a single
vertex.
 During the termination process, the number of
states is reduced by half as each “0” is shifted into
the encoder register.
Trellis Diagram:
Convolutional Encoder
Representations
 Termination of a Trellis:
Trellis Diagram:
Convolutional Codes
in Systematic Form
 In a systematic code, the message information can be seen in, and directly extracted from, the encoded sequence.
Convolutional Codes
in Systematic Form
 Example : Determine the transfer function of the systematic
convolutional code as shown in the below Figure , and then
obtain the code sequence for the input sequence m =
(1101).
Ans:
The transfer function in this case is:
G(D) = [1   D + D^2]
Convolutional Codes
in Systematic Form
 Example : Determine the transfer function of the systematic
convolutional code as shown in the below Figure , and then obtain
the code sequence for the input sequence m = (1101).
Ans : (Cont…)
and so the code sequence for the given input sequence, which in polynomial form is M(D) = 1 + D + D^3, is obtained from:
C^(1)(D) = M(D) G^(1)(D) = 1 + D + D^3
C^(2)(D) = M(D) G^(2)(D) = (1 + D + D^3) · (D + D^2) = D + D^3 + D^4 + D^5
Then
C = (10, 11, 00, 11, 01, 01)
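The systematic encoding can be verified with coefficient-list polynomial arithmetic (a sketch; function and variable names are illustrative): the first branch passes M(D) through unchanged, the second multiplies it by D + D^2, and the two coefficient streams are interleaved.

```python
def poly_mul_gf2(a, b):
    """Multiply GF(2) polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

M = [1, 1, 0, 1]                 # M(D) = 1 + D + D^3
c1 = M + [0, 0]                  # systematic branch C1(D) = M(D), padded to length 6
c2 = poly_mul_gf2(M, [0, 1, 1])  # C2(D) = M(D)*(D + D^2) = D + D^3 + D^4 + D^5
code = ["%d%d" % (a, b) for a, b in zip(c1, c2)]
# code -> ['10', '11', '00', '11', '01', '01']
```

Note the message (110100...) is readable directly in the first bit of each pair, which is what makes the code systematic.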
Convolutional Codes
in Systematic Form
 In the case of systematic convolutional codes, there
is no need to have an inverse transfer function
decoder to obtain the input sequence, because this
is directly read from the code sequence.
Minimum Free Distance
of a Convolutional Code
 The most important distance measure for
convolutional codes is the minimum free distance,
denoted 𝑑 𝑓𝑟𝑒𝑒.
 The minimum free distance of a convolutional code is simply the minimum Hamming distance between any two code sequences in the code.
 It is also the minimum weight over all code sequences produced by nonzero message sequences.
Minimum Free Distance
of a Convolutional Code
 Since convolutional codes are linear, we can use
the all-zero path 0 as our reference for studying the
distance structure of convolutional codes without
loss of generality.
Minimum Free Distance
of a Convolutional Code
 Example : Determine the minimum free distance of
the convolutional code 𝐶𝑐𝑜𝑛𝑣 (2, 1, 2) used in
previous examples, by employing the above
procedure, implemented over the corresponding
trellis.
Minimum Free Distance
of a Convolutional Code
 Ans:
The sequence corresponding to the path described by the sequence of states Sa Sb Sc Sa, seen in bold in the figure, is the sequence of minimum weight, which is equal to 5. Other sequences, such as those described by the state sequences Sa Sb Sc Sb Sc Sa and Sa Sb Sd Sc Sa, are of weight 6. The remaining paths are all of larger weight, and so the minimum free distance of this convolutional code is d_free = 5.
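The same answer can be obtained by brute force (a sketch, not an efficient algorithm; names are illustrative): encode every short nonzero terminated message for the (2,1,2) code and take the minimum codeword weight.

```python
from itertools import product

def encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Encode with the (2,1,2) encoder, appending m zeros to terminate the trellis."""
    m = len(gens[0]) - 1
    regs = [0] * m
    out = []
    for b in list(bits) + [0] * m:
        window = [b] + regs
        out += [sum(x & g for x, g in zip(window, gs)) % 2 for gs in gens]
        regs = [b] + regs[:-1]
    return out

# Minimum weight over all nonzero terminated messages up to length 7;
# for this small code the search length is enough to reach d_free.
dfree = min(sum(encode(u)) for L in range(1, 8)
            for u in product([0, 1], repeat=L) if any(u))
# dfree -> 5, achieved e.g. by the single-1 message: encode([1]) = 11 10 11
```

The minimum-weight path 11 10 11 corresponds to the bold Sa Sb Sc Sa path in the trellis.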
Maximum Likelihood Detection
 If the input message sequences are equally likely, the optimum decoder, which minimizes the probability of error, is the maximum likelihood (ML) decoder.
 Maximum likelihood decoding means finding the path in the code trellis that was most likely to have been transmitted.
 Therefore maximum likelihood decoding is based on measuring distance, using either the Hamming distance for hard-decision decoding or the Euclidean distance for soft-decision decoding.
 For a memoryless channel, the probability of receiving the sequence Z given the code sequence U is then:
P(Z | U) = ∏_i P(z_i | u_i)
Decoding of Convolutional Codes:
The Viterbi Algorithm
 The Viterbi algorithm performs maximum
likelihood decoding but reduces the computational
complexity by taking advantage of the special
structure of the code trellis.
 It was first introduced by A. Viterbi in 1967.
 D. Forney first recognized in 1973 that it is a maximum likelihood decoding algorithm for convolutional codes.
Decoding of Convolutional Codes:
The Viterbi Algorithm
Basic Concepts:
 Generate the code trellis at the decoder.
 The decoder penetrates the code trellis level by level in search of the transmitted code sequence.
 At each level of the trellis, the decoder computes and compares the metrics of all the partial paths entering a node.
 The decoder stores the partial path with the best metric and eliminates all the other partial paths. The stored partial path is called the survivor.
 For m < l ≤ L, there are 2^km nodes at the l-th level of the code trellis. Hence there are 2^km survivors.
 When the code trellis begins to terminate, the number of survivors is reduced.
 At the end, at the (L+m)-th level, there is only one node (the all-zero state) and hence only one survivor.
 This last survivor is the maximum likelihood path.
Decoding of Convolutional Codes:
The Viterbi Algorithm
Procedure:
Decoding of Convolutional Codes:
The Viterbi Algorithm
Example :
 m = (101) Source message
 U = (11 10 00 10 11) Codeword to be transmitted
 Z = (11 10 11 10 01) Received codeword
Decoding of Convolutional Codes:
The Viterbi Algorithm
Example:
 Label all the branches with the branch metric (Hamming
distance)
Decoding of Convolutional Codes:
The Viterbi Algorithm
Example:
 i=2,
Decoding of Convolutional Codes:
The Viterbi Algorithm
Example:
 i=3,
Decoding of Convolutional Codes:
The Viterbi Algorithm
Example:
 i=4,
Decoding of Convolutional Codes:
The Viterbi Algorithm
Example:
 i=5,
Decoding of Convolutional Codes:
The Viterbi Algorithm
Example:
 i=6,
Decoding of Convolutional Codes:
The Viterbi Algorithm
Example:
 Trace back, and then:
𝑚 = (100)
𝑈 = (11 10 11 00 00)
There are some decoding errors, since the decoded message differs from the transmitted one.
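The add-compare-select procedure of this example can be sketched as follows (a minimal hard-decision implementation for the (2,1,2) code with g1 = (111), g2 = (101); ties are broken toward the first path examined, and all names are illustrative):

```python
def viterbi_decode(z, L, gens=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding for the (2,1,2) code.

    z: received n-bit tuples; L: message length (the trellis is terminated
    with m zero bits). Returns the decoded message bits.
    """
    m = len(gens[0]) - 1

    def branch(state, x):  # output bits and next state for input x
        window = (x,) + state
        out = tuple(sum(a & g for a, g in zip(window, gs)) % 2 for gs in gens)
        return out, (x,) + state[:-1]

    paths = {(0,) * m: (0, [])}  # state -> (path metric, input history)
    for l, r in enumerate(z):
        new = {}
        inputs = (0,) if l >= L else (0, 1)  # only zeros during termination
        for state, (metric, bits) in paths.items():
            for x in inputs:
                out, nxt = branch(state, x)
                d = metric + sum(a ^ b for a, b in zip(out, r))  # Hamming metric
                if nxt not in new or d < new[nxt][0]:  # keep the survivor
                    new[nxt] = (d, bits + [x])
        paths = new
    _, bits = paths[(0,) * m]  # trellis converges to the all-zero state
    return bits[:L]

z = [(1, 1), (1, 0), (1, 1), (1, 0), (0, 1)]  # received word with 2 channel errors
decoded = viterbi_decode(z, 3)                # -> [1, 0, 0], as in the example
```

The decoder reproduces the slide's (erroneous) decision m = (100), showing that two channel errors exceed what this short terminated code can correct here.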
Catastrophic Generator Matrix
 A catastrophic matrix maps information sequences with
infinite Hamming weight to code sequences with finite
Hamming weight.
 For a catastrophic code, a finite number of transmission errors can cause an infinite number of errors in the decoded information sequence. Hence, these codes should be avoided in practice.
 It is easy to see that systematic generator matrices
are never catastrophic.
 A code which inherits the catastrophic error
propagation property is called a catastrophic code.
Punctured Convolutional Codes
The Most Widely Used Convolutional
Codes
 The most widely used convolutional code is the (2,1,6) Odenwalter code, generated by the following generator sequences:
𝑔(1) = (1101101)
𝑔(2) = (1001111)
 This code has 𝑑 𝑓𝑟𝑒𝑒 =10
 With hard-decision decoding, it provides a 3.98dB
coding gain over the uncoded BPSK modulation
system.
 With soft-decision decoding, the coding gain is
6.98dB.
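The quoted gains are consistent with the standard asymptotic coding-gain estimates for BPSK over AWGN (a rule-of-thumb sketch, not a simulation): soft decision gives roughly 10·log10(R·d_free) dB, and hard decision loses about a factor of 2 inside the logarithm.

```python
import math

R, dfree = 1 / 2, 10  # rate and free distance of the (2,1,6) code
soft = 10 * math.log10(R * dfree)      # asymptotic soft-decision gain, ~6.99 dB
hard = 10 * math.log10(R * dfree / 2)  # asymptotic hard-decision gain, ~3.98 dB
```

These match the figures on the slide to within rounding.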
Practical Examples
of Convolutional Codes
Advantages of Convolutional Codes
 Convolution coding is a popular error-correcting
coding method used in digital communications.
 The convolution operation encodes redundant information into the transmitted signal, allowing transmission errors to be corrected at the receiver.
 Convolution Encoding with Viterbi decoding is a
powerful FEC technique that is particularly suited to a
channel in which the transmitted signal is corrupted
mainly by AWGN.
 It is simple and has good performance with low
implementation cost.
THANK YOU

More Related Content

PPTX
Convolution Codes
PPT
Convolutional Codes And Their Decoding
PPTX
Power Point Presentation on Artificial Intelligence
PPTX
Error control coding techniques
DOCX
Turbo code
PDF
Question Bank Microprocessor 8085
PPTX
PPTX
Turbo codes
Convolution Codes
Convolutional Codes And Their Decoding
Power Point Presentation on Artificial Intelligence
Error control coding techniques
Turbo code
Question Bank Microprocessor 8085
Turbo codes

What's hot (20)

PPTX
Linear block coding
PPTX
Turbo codes.ppt
PPTX
Multirate DSP
PPTX
NYQUIST CRITERION FOR ZERO ISI
PPTX
Pipelining approach
PPT
Small Scale Multi path measurements
PPT
Digital Communication: Channel Coding
PDF
Convolution codes - Coding/Decoding Tree codes and Trellis codes for multiple...
PPT
Switching systems lecture2
PDF
Vlsi lab viva question with answers
PPT
UNIT-3 : CHANNEL CODING
PPT
Multipliers in VLSI
PPTX
Cyclic code non systematic
PPTX
DIGITAL COMMUNICATION: ENCODING AND DECODING OF CYCLIC CODE
PPT
Noise in AM systems.ppt
PPTX
non parametric methods for power spectrum estimaton
PPT
Propagation mechanisms
PDF
Companding & Pulse Code Modulation
PPTX
Four way traffic light conrol using Verilog
Linear block coding
Turbo codes.ppt
Multirate DSP
NYQUIST CRITERION FOR ZERO ISI
Pipelining approach
Small Scale Multi path measurements
Digital Communication: Channel Coding
Convolution codes - Coding/Decoding Tree codes and Trellis codes for multiple...
Switching systems lecture2
Vlsi lab viva question with answers
UNIT-3 : CHANNEL CODING
Multipliers in VLSI
Cyclic code non systematic
DIGITAL COMMUNICATION: ENCODING AND DECODING OF CYCLIC CODE
Noise in AM systems.ppt
non parametric methods for power spectrum estimaton
Propagation mechanisms
Companding & Pulse Code Modulation
Four way traffic light conrol using Verilog
Ad

Similar to Convolutional codes (20)

PPTX
unit 5 (1).pptx
PPT
Slides
PPT
Digital communication coding Lectures Slides.ppt
PPTX
Convolutional Error Control Coding
PPTX
A Nutshell On Convolutional Codes (Representations)
PDF
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...
PPTX
Digital Logic Design Lectures on Flip-flops and latches and counters
PDF
Combinational Circuits - II (Encoders, Decoders, Multiplexers & PIDs).pdf
PDF
IJCER (www.ijceronline.com) International Journal of computational Engineeri...
PDF
2 marks DPCO.pdf
PDF
Analysis and Implementation of Hard-Decision Viterbi Decoding In Wireless Com...
PPTX
Convolution codes and turbo codes
PPTX
Digital Logic Design Lecture on Counters and
PPTX
Turbo Code
PDF
Lti system
PDF
FYBSC IT Digital Electronics Unit IV Chapter I Multiplexer, Demultiplexer, AL...
PDF
The International Journal of Engineering and Science (The IJES)
PPTX
Digital electronics-COMBINATIONAL CIRCUIT DESIGN
PDF
Error Control coding
PDF
Differential 8 PSK code with multisymbol interleaving
unit 5 (1).pptx
Slides
Digital communication coding Lectures Slides.ppt
Convolutional Error Control Coding
A Nutshell On Convolutional Codes (Representations)
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...
Digital Logic Design Lectures on Flip-flops and latches and counters
Combinational Circuits - II (Encoders, Decoders, Multiplexers & PIDs).pdf
IJCER (www.ijceronline.com) International Journal of computational Engineeri...
2 marks DPCO.pdf
Analysis and Implementation of Hard-Decision Viterbi Decoding In Wireless Com...
Convolution codes and turbo codes
Digital Logic Design Lecture on Counters and
Turbo Code
Lti system
FYBSC IT Digital Electronics Unit IV Chapter I Multiplexer, Demultiplexer, AL...
The International Journal of Engineering and Science (The IJES)
Digital electronics-COMBINATIONAL CIRCUIT DESIGN
Error Control coding
Differential 8 PSK code with multisymbol interleaving
Ad

More from Abdullaziz Tagawy (13)

PDF
Service performance and analysis in cloud computing extened 2
PDF
Solar Cells versus Photodiode
PDF
Managing enterprise networks with cisco prime infrastructure_ 1 of 2
PDF
EMP_Assessment Report ABDELAZEZ TAGAWY
PPTX
IPSec and VPN
PPTX
OFDM Orthogonal Frequency Division Multiplexing
PDF
Solving QoS multicast routing problem using ACO algorithm
PPTX
Solving QoS multicast routing problem using aco algorithm
PPTX
SNAPDRAGON SoC Family and ARM Architecture
PPTX
Error Detection and Correction - Data link Layer
PPTX
Introduction to Data-Link Layer
PPTX
Comp net 2
PPTX
Comp net 1
Service performance and analysis in cloud computing extened 2
Solar Cells versus Photodiode
Managing enterprise networks with cisco prime infrastructure_ 1 of 2
EMP_Assessment Report ABDELAZEZ TAGAWY
IPSec and VPN
OFDM Orthogonal Frequency Division Multiplexing
Solving QoS multicast routing problem using ACO algorithm
Solving QoS multicast routing problem using aco algorithm
SNAPDRAGON SoC Family and ARM Architecture
Error Detection and Correction - Data link Layer
Introduction to Data-Link Layer
Comp net 2
Comp net 1

Recently uploaded (20)

PDF
AIGA 012_04 Cleaning of equipment for oxygen service_reformat Jan 12.pdf
PPTX
WN UNIT-II CH4_MKaruna_BapatlaEngineeringCollege.pptx
PDF
SEH5E Unveiled: Enhancements and Key Takeaways for Certification Success
PPTX
CS6006 - CLOUD COMPUTING - Module - 1.pptx
PPTX
CT Generations and Image Reconstruction methods
PDF
Beginners-Guide-to-Artificial-Intelligence.pdf
PDF
Present and Future of Systems Engineering: Air Combat Systems
PDF
20250617 - IR - Global Guide for HR - 51 pages.pdf
PPTX
Solar energy pdf of gitam songa hemant k
PPTX
Micro1New.ppt.pptx the main themes if micro
PPTX
Environmental studies, Moudle 3-Environmental Pollution.pptx
PPTX
Micro1New.ppt.pptx the mai themes of micfrobiology
PDF
UEFA_Carbon_Footprint_Calculator_Methology_2.0.pdf
PDF
Unit I -OPERATING SYSTEMS_SRM_KATTANKULATHUR.pptx.pdf
PPT
UNIT-I Machine Learning Essentials for 2nd years
PDF
MLpara ingenieira CIVIL, meca Y AMBIENTAL
PPTX
AI-Reporting for Emerging Technologies(BS Computer Engineering)
PDF
LOW POWER CLASS AB SI POWER AMPLIFIER FOR WIRELESS MEDICAL SENSOR NETWORK
PDF
Lesson 3 .pdf
DOCX
ENVIRONMENTAL PROTECTION AND MANAGEMENT (18CVL756)
AIGA 012_04 Cleaning of equipment for oxygen service_reformat Jan 12.pdf
WN UNIT-II CH4_MKaruna_BapatlaEngineeringCollege.pptx
SEH5E Unveiled: Enhancements and Key Takeaways for Certification Success
CS6006 - CLOUD COMPUTING - Module - 1.pptx
CT Generations and Image Reconstruction methods
Beginners-Guide-to-Artificial-Intelligence.pdf
Present and Future of Systems Engineering: Air Combat Systems
20250617 - IR - Global Guide for HR - 51 pages.pdf
Solar energy pdf of gitam songa hemant k
Micro1New.ppt.pptx the main themes if micro
Environmental studies, Moudle 3-Environmental Pollution.pptx
Micro1New.ppt.pptx the mai themes of micfrobiology
UEFA_Carbon_Footprint_Calculator_Methology_2.0.pdf
Unit I -OPERATING SYSTEMS_SRM_KATTANKULATHUR.pptx.pdf
UNIT-I Machine Learning Essentials for 2nd years
MLpara ingenieira CIVIL, meca Y AMBIENTAL
AI-Reporting for Emerging Technologies(BS Computer Engineering)
LOW POWER CLASS AB SI POWER AMPLIFIER FOR WIRELESS MEDICAL SENSOR NETWORK
Lesson 3 .pdf
ENVIRONMENTAL PROTECTION AND MANAGEMENT (18CVL756)

Convolutional codes

  • 1. CONVOLUTIONAL CODES Presented By : Abdulaziz Al mabrok Al tagawy Course : Coding Theory - Feb/2016
  • 2. Contents:  Introduction  Convolutional Codes and Encoders  Description in the D- Transform Domain  Convolutional Encoder Representations  Representation of Connections:  State Diagram Representation:  Trellis Diagram:  Convolutional Codes in Systematic Form  Minimum Free Distance of a Convolutional Code  Maximum Likelihood Detection  Decoding of Convolutional Codes: The Viterbi Algorithm  Catastrophic Generator Matrix  Punctured Convolutional Codes  The Most Widely Used Convolutional Codes  Practical Examples of Convolutional Codes  Advantages of Convolutional Codes.
  • 4. Introduction  Convolutional codes were first discovered by P.Elias in 1955.  The structure of convolutional codes is quite different from that of block codes.  During each unit of time, the input to a convolutional code encoder is also a k-bit message block and the corresponding output is also an n-bit coded block with k < n.  Each coded n-bit output block depends not only the corresponding k-bit input message block at the same time unit but also on the m previous message blocks.  Thus the encoder has k input lines, n output lines and a memory of order m .
  • 5. Introduction  Each message (or information) sequence is encoded into a code sequence.  The set of all possible code sequences produced by the encoder is called an (n,k,m) convolutional code.  The parameters, k and n, are normally small, say 1k8 and 2n9.  The ratio R=k/n is called the code rate.  Typical values for code rate are: 1/2, 1/3, 2/3.  The parameter m is called the memory order of the code.  Note that the number of redundant (or parity) bits in each coded block is small. However, more redundant bits are added by increasing the memory order m of the code while holding k and n fixed.
  • 8. Convolutional Codes and Encoders  Convolutional encoder is accomplished using shift registers (D flip-flop) and combinatorial logic that performs modulo-two addition.
  • 9. Convolutional Codes and Encoders  We’ll also need an output selector to toggle between the two modulo-two adders .  The output selector (SEL A/B block) cycles through two states; in the first state, it selects and outputs the output of the upper modulo-two adder; in the second state, it selects and outputs the output of the lower modulo-two adder.
  • 11. Convolutional Codes and Encoders  For the case of (n,k,m), the encoder has k inputs and n outputs .  At the i-th input terminal, the input message sequence is: 𝑚 (𝑖) = ( 𝑚0 (𝑖) , 𝑚1 (𝑖) , 𝑚2 (𝑖) …. ….. 𝑚𝑙 (𝑖) …..) for 1i k.  At the j-th output terminal, the output code sequence is 𝑐 (𝑗) = ( 𝑐0 (𝑗) , 𝑐1 (𝑗) , 𝑐2 (𝑗) …. ….. 𝑐𝑙 (𝑗) …..) for 1j n
  • 12. Convolutional Codes and Encoders  An (n,k,m) convolutional code is specified by k× n generator sequence: 𝑔1 (1) , 𝑔1 (2) , 𝑔1 (3) …. ….. 𝑔1 (𝑛) 𝑔2 (1) , 𝑔2 (2) , 𝑔2 (3) …. ….. 𝑔2 (𝑛) : : : : . 𝑔 𝑘 (1) , 𝑔 𝑘 (2) , 𝑔 𝑘 (3) …. ….. 𝑔 𝑘 (𝑛)  The generator sequence 𝑔 𝑗 (𝑖) is of the following form: 𝑔 (𝑗) = ( 𝑔𝑗0 (𝑖) , 𝑔𝑗1 (𝑖) , 𝑔𝑗2 (𝑖) …. …..𝑔𝑗,𝑚 (𝑖) )
  • 13. Convolutional Codes and Encoders  Whereas the encoders of convolutional codes can be represented by linear time-invariant (LTI) systems.  The n output code sequences are then given by : 𝑐 (1) =𝑚 (1) * 𝑔1 (1) +𝑚 (2) * 𝑔2 (1) +…..+𝑚 (𝑘) * 𝑔 𝑘 (1) 𝑐 (2) =𝑚 (1) * 𝑔1 (2) +𝑚 (2) * 𝑔2 (2) +…..+𝑚 (𝑘) * 𝑔 𝑘 (2) : 𝑐 (𝑛) =𝑚 (1) * 𝑔1 (𝑛) +𝑚 (2) * 𝑔2 (𝑛) +…..+𝑚 (𝑘) * 𝑔 𝑘 (𝑛)
  • 14. Convolutional Codes and Encoders  where * is the convolutional operation and 𝑔 𝑘 (𝑗) is the impulse response of the i-th input sequence with the response to the j-th output.  𝑔 𝑘 (𝑗) can be found by stimulating the encoder with the discrete impulse (1, 0, 0, . . .) at the i-th input and by observing the j-th output when all other inputs are fed the zero sequence (0, 0, 0, . . .).  The impulse responses are called generator sequences of the encoder.
  • 15. Convolutional Codes and Encoders  Impulse Response for the Binary (2, 1, 2) Convolutional Code: 𝑔1 = (1, 1, 1, 0, . . .) = (1, 1, 1), 𝑔2 = (1, 0, 1, 0, . . .) = (1, 0, 1) 𝑣1 = (1, 1, 1, 0, 1) ∗ (1, 1, 1) = (1, 0, 1, 0, 0, 1, 1) 𝑣1 = (1, 1, 1, 0, 1) ∗ (1, 0, 1) = (1, 1, 0, 1, 0, 0, 1)
  • 16. Convolutional Codes and Encoders  Generator Matrix in the Time Domain:  The convolutional codes can be generated by a generator matrix multiplied by the information sequences.  Let 𝑢1, 𝑢2, . . . , 𝑢 𝑘 are the information sequences and 𝑣1, 𝑣2, . . . , 𝑣 𝑛 are the output sequences.  Arrange the information sequences as : 𝑢 = (𝑢1,0, 𝑢2,0, . . . , 𝑢 𝑘,0, 𝑢1,1, 𝑢2,1, . . . , 𝑢 𝑘,1, . . . , 𝑢1,𝑙, 𝑢2,𝑙, . . . , 𝑢 𝑘,𝑙, . . .) = (𝑤0, 𝑤1, . . . , 𝑤𝑙, . . .),
  • 17. Convolutional Codes and Encoders  Generator Matrix in the Time Domain:  Let the output sequences as: 𝑣 = (𝑣1,0, 𝑣2,0, . . . , 𝑣 𝑘,0, 𝑣1,1, 𝑣2,1, . . . , 𝑣 𝑘,1, . . . , 𝑣1,𝑙, 𝑣2,𝑙, . . . , 𝑣 𝑘,𝑙, . . .) = (𝑧0, 𝑧1, . . . , 𝑧𝑙, . . .),  𝑣 is called a codeword or code sequence.  The relation between 𝑣 and 𝑢 can characterized as 𝑣 = 𝑢 ·G  where G is the generator matrix of the code.
  • 18. Convolutional Codes and Encoders  Generator Matrix in the Time Domain:  The generator matrix is :  G= G0 G1 G2 … G 𝑚 G0 G1 ⋯ G 𝑚−1 G 𝑚 G0 ⋯ G 𝑚−2 G 𝑚−1 G 𝑚 ⋱ ⋱
  • 19. Convolutional Codes and Encoders  Generator Matrix in the Time Domain:  With the k × n submatrices : 𝐺𝑙= 𝑔1,𝑙 (1) 𝑔2,𝑙 (1) ⋯ ⋯ 𝑔1,𝑙 (2) 𝑔2,𝑙 (2) ⋯ ⋯ ⋮ ⋮ 𝑔 𝑛,𝑙 (1) 𝑔 𝑛,𝑙 (2) ⋮ 𝑔1,𝑙 (𝑘) 𝑔2,𝑙 (𝑘) ⋯ ⋯𝑔 𝑛,𝑙 (𝑘)  The element 𝑔𝑗,𝑙 (𝑖) , for i ∈ [1, k] and j ∈ [1, n], are the impulse response of the i-th input with respect to j-th output
  • 20. Convolutional Codes and Encoders  Generator Matrix in the Time Domain:  Ex: Generator Matrix of the Binary (2, 1, 2) Convolutional Code :
  • 21. Convolutional Codes and Encoders  Generator Matrix in the Time Domain:  Ex: Generator Matrix of the Binary (2, 1, 2) Convolutional Code : 𝑣 = 𝑢 ·G 𝑣 = (1, 1, 1, 0, 1) · 11 10 11 11 10 11 11 10 11 11 10 11 11 10 11 𝑣 = (11, 01, 10, 01, 00, 10, 11)
  • 22. Convolutional Codes and Encoders  Generator Matrix in the Time Domain:
  • 23. Description in the D-Transform Domain
  • 24. Description in the D-Transform Domain  The D-transform is a function of the indeterminate D (the delay operator) and is defined as:
  • 25. Description in the D-Transform Domain  The convolutional relation of the D transform  D{u ∗ g} = U(D)G(D) is used to transform the convolution of input sequences and generator sequences to a multiplication in the D domain.
  • 26. Description in the D-Transform Domain  Ex: (2,1,2) convolutional code encoder : G(1)(D) = 1 + 𝐷 + 𝐷2 G(2)(D) = 1 + 𝐷2 G(D) = [1 + 𝐷 + 𝐷2 , 1 + 𝐷2 ] U(D) = 1 + 𝐷 + 𝐷2 + 𝐷4
  • 27. Description in the D-Transform Domain  Ex: (2,1,2) convolutional code encoder : V (D) = U(D) • G(D) V1(D)= U(D) • G(1)(D) V1(D)= (1 + 𝐷 + 𝐷2 + 𝐷4 ) x (= 1 + 𝐷 + 𝐷2 ) = (1 + 𝐷 + 𝐷2 + 𝐷4 + 𝐷 + 𝐷2 + 𝐷3 +𝐷5 +𝐷2 + 𝐷3 + 𝐷4 +𝐷6 ) V1(D) = 1 + 𝐷2 + 𝐷5 + 𝐷6
  • 28. Description in the D-Transform Domain  Ex: (2,1,2) convolutional code encoder : V (D) = U(D) • G(D) V2(D)= U(D) • G(2)(D) V2(D)= (1 + 𝐷 + 𝐷2 + 𝐷4 ) x (= 1 + 𝐷2 ) = (1 + 𝐷 + 𝐷2 + 𝐷4 +𝐷2 + 𝐷3 + 𝐷4 +𝐷6 ) V2(D) = 1 + 𝐷 + 𝐷3 + 𝐷6
  • 29. Description in the D-Transform Domain  Ex: (2,1,2) convolutional code encoder:
    V1(D) = 1 + D² + D⁵ + D⁶
    V2(D) = 1 + D + D³ + D⁶
  In the time domain: v1 = (1010011), v2 = (1101001). Interleaving the two output sequences, the code word is v = (11 01 10 01 00 10 11).
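The D-domain products above are binary polynomial multiplications, with XOR replacing addition of coefficients. A minimal sketch (helper name is illustrative) reproducing V1(D) and V2(D):

```python
# D-domain encoding as GF(2) polynomial multiplication.
# Polynomials are coefficient lists, lowest degree first.
def poly_mul_gf2(a, b):
    """Multiply two binary polynomials; XOR accumulates coefficients."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                prod[i + j] ^= bj
    return prod

U  = [1, 1, 1, 0, 1]   # U(D)    = 1 + D + D^2 + D^4
g1 = [1, 1, 1]         # G(1)(D) = 1 + D + D^2
g2 = [1, 0, 1]         # G(2)(D) = 1 + D^2

print(poly_mul_gf2(U, g1))  # [1, 0, 1, 0, 0, 1, 1] -> V1(D) = 1 + D^2 + D^5 + D^6
print(poly_mul_gf2(U, g2))  # [1, 1, 0, 1, 0, 0, 1] -> V2(D) = 1 + D + D^3 + D^6
```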
  • 31. Convolutional Encoder Representations  In the encoder of the convolutional code 𝐶𝑐𝑜𝑛𝑣(2, 1, 2), in each clock cycle the bits contained in each register stage are shifted right to the following stage.  The two outputs are sampled to generate the two output bits for each input bit.  A convolutional code 𝐶𝑐𝑜𝑛𝑣(n, k, K) is described by the vector representation of the generator polynomials 𝑔(1) and 𝑔(2) that correspond to the upper and lower branches of the FSSM (finite-state sequential machine). Representation of Connections:  In this description, a ‘1’ means that there is a connection, and a ‘0’ means that the corresponding register stage is not connected: 𝑔(1) = (101) 𝑔(2) = (111)
  • 32. Convolutional Encoder Representations  Since the encoder is a linear sequential circuit, its behavior can be described by a state diagram.  Define the encoder state at time i as the m-tuple (𝑚𝑖−1, 𝑚𝑖−2, ……, 𝑚𝑖−𝑚), which consists of the m message bits stored in the shift register.  There are 2^m possible states. At any time instant, the encoder must be in one of these states.  The encoder undergoes a state transition when a message bit is shifted into the encoder register, as shown below. State Diagram Representation:  At each time unit, the output block depends on the input and the state.
  • 33. Convolutional Encoder Representations  The state diagram can be expanded in time to display the state transitions of a convolutional encoder over time. This expansion in time results in a trellis diagram.  Normally the encoder starts from the all-zero state, (0, 0, …, 0).  When the first message bit 𝑚0 is shifted into the encoder register, the encoder is in one of the two following states: (𝑚0 = 0, 0, …, 0) ; (𝑚0 = 1, 0, …, 0)  When the second message bit 𝑚1 is shifted into the encoder register, the encoder is in one of the following states: (𝑚1 = 0, 𝑚0 = 0, 0, …, 0) ; (𝑚1 = 0, 𝑚0 = 1, 0, …, 0) ; (𝑚1 = 1, 𝑚0 = 0, 0, …, 0) ; (𝑚1 = 1, 𝑚0 = 1, 0, …, 0)  Each time a message bit is shifted into the encoder register, the number of states doubles, until the number of states reaches 2^m. Trellis Diagram:
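The state transitions that make up the diagram and the trellis can be enumerated directly. A minimal sketch for the (2, 1, 2) encoder, assuming the generators g1 = (111) and g2 = (101) from the D-domain example (slide labelings of the two branches may differ):

```python
# Enumerate the 2^m = 4 state transitions of a (2,1,2) encoder.
# State = (previous bit, bit before that); generators are assumed.
G1, G2 = (1, 1, 1), (1, 0, 1)

def step(state, u):
    """Return (output pair, next state) for input bit u in a given state."""
    regs = (u,) + state                             # u, m_{i-1}, m_{i-2}
    v1 = regs[0] & G1[0] ^ regs[1] & G1[1] ^ regs[2] & G1[2]
    v2 = regs[0] & G2[0] ^ regs[1] & G2[1] ^ regs[2] & G2[2]
    return (v1, v2), (u, state[0])                  # shift the register

for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for u in (0, 1):
        out, nxt = step(s, u)
        print(f"state {s}, input {u} -> output {out}, next state {nxt}")
```

Each printed row is one branch of the state diagram; stacking these rows over successive time units yields the trellis.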
  • 34. Convolutional Encoder Representations  At time m, the encoder reaches the steady state. Trellis Diagram:
  • 35. Convolutional Encoder Representations  Termination of a Trellis:  When the entire sequence m has been encoded, the encoder must return to the starting state. This can be done by appending m zeros to the message sequence m.  When the first “0” is shifted into the encoder register, the number of possible states is halved to 2^(m−1).  When the second “0” is shifted into the encoder register, it is halved again to 2^(m−2).  When the m-th “0” is shifted into the register, the encoder is back to the all-zero state, (0, 0, …, 0). Trellis Diagram:
  • 36. Convolutional Encoder Representations  Termination of a Trellis:  At this instant, the trellis converges into a single vertex.  During the termination process, the number of states is reduced by half as each “0” is shifted into the encoder register. Trellis Diagram:
  • 39. Convolutional Codes in Systematic Form  In a systematic code, message information can be seen and directly extracted from the encoded information.
  • 40. Convolutional Codes in Systematic Form  Example : Determine the transfer function of the systematic convolutional code as shown in the below Figure, and then obtain the code sequence for the input sequence m = (1101). Ans: The transfer function in this case is: G(D) = [1, D + D²]
  • 41. Convolutional Codes in Systematic Form  Example : Determine the transfer function of the systematic convolutional code as shown in the below Figure, and then obtain the code sequence for the input sequence m = (1101). Ans : (Cont…) The code sequence for the given input sequence, which in polynomial form is M(D) = 1 + D + D³, is obtained from:
    C(1)(D) = M(D) · G(1)(D) = 1 + D + D³
    C(2)(D) = M(D) · G(2)(D) = (1 + D + D³)(D + D²) = D + D³ + D⁴ + D⁵
  Then C = (10, 11, 00, 11, 01, 01)
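The systematic example can be checked numerically: the first output is the message itself and the second is the parity polynomial product. A minimal sketch (helper names are illustrative):

```python
# Systematic encoding with G(D) = [1, D + D^2] for m = (1101).
# Polynomials are coefficient lists, lowest degree first.
def poly_mul_gf2(a, b):
    """Multiply two binary polynomials; XOR accumulates coefficients."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                prod[i + j] ^= bj
    return prod

M  = [1, 1, 0, 1]                 # M(D) = 1 + D + D^3
c1 = M + [0, 0]                   # systematic branch C1(D) = M(D), padded
c2 = poly_mul_gf2(M, [0, 1, 1])   # parity branch C2(D) = M(D)(D + D^2)

pairs = [f"{a}{b}" for a, b in zip(c1, c2)]
print(pairs)  # ['10', '11', '00', '11', '01', '01']
```

Interleaving the two branches reproduces the slide's code sequence C = (10, 11, 00, 11, 01, 01); the message bits can be read off directly as the first bit of each pair.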
  • 42. Convolutional Codes in Systematic Form  In the case of systematic convolutional codes, there is no need to have an inverse transfer function decoder to obtain the input sequence, because this is directly read from the code sequence.
  • 43. Minimum Free Distance of a Convolutional Code
  • 44. Minimum Free Distance of a Convolutional Code  The most important distance measure for convolutional codes is the minimum free distance, denoted 𝑑_𝑓𝑟𝑒𝑒.  The minimum free distance of a convolutional code is simply the minimum Hamming distance between any two code sequences in the code.  It is also the minimum weight of all the code sequences produced by the nonzero message sequences.
  • 45. Minimum Free Distance of a Convolutional Code  Since convolutional codes are linear, we can use the all-zero path 0 as our reference for studying the distance structure of convolutional codes without loss of generality.
  • 46. Minimum Free Distance of a Convolutional Code  Example : Determine the minimum free distance of the convolutional code 𝐶𝑐𝑜𝑛𝑣 (2, 1, 2) used in previous examples, by employing the above procedure, implemented over the corresponding trellis.
  • 47. Minimum Free Distance of a Convolutional Code  Ans: The sequence corresponding to the path described by the sequence of states Sa → Sb → Sc → Sa, seen in bold in the Figure, is the sequence of minimum weight, which is equal to 5. There are other sequences, like those described by the state sequences Sa → Sb → Sc → Sb → Sc → Sa and Sa → Sb → Sd → Sc → Sa, that are both of weight 6. The remaining paths are all of larger weight, and so the minimum free distance of this convolutional code is 𝑑_𝑓𝑟𝑒𝑒 = 5.
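The trellis search described above can be automated: explore every path that leaves the all-zero state once and returns to it, and keep the minimum accumulated weight. A minimal sketch using Dijkstra's algorithm, assuming the generators g1 = (111), g2 = (101) used in the earlier examples:

```python
# Exhaustive d_free search for the (2,1,2) code over its state trellis.
from heapq import heappush, heappop

G1, G2 = (1, 1, 1), (1, 0, 1)   # assumed generators from earlier examples

def step(state, u):
    """Return (branch Hamming weight, next state) for input bit u."""
    regs = (u,) + state
    v1 = regs[0] & G1[0] ^ regs[1] & G1[1] ^ regs[2] & G1[2]
    v2 = regs[0] & G2[0] ^ regs[1] & G2[1] ^ regs[2] & G2[2]
    return v1 + v2, (u, state[0])

def d_free():
    """Lowest-weight path that departs state 00 and first returns to it."""
    w0, s0 = step((0, 0), 1)          # forced departure from the zero state
    heap, best = [(w0, s0)], {}
    while heap:
        w, s = heappop(heap)
        if s == (0, 0):
            return w                  # first return to 00 = minimum weight
        if s in best and best[s] <= w:
            continue
        best[s] = w
        for u in (0, 1):
            bw, nxt = step(s, u)
            heappush(heap, (w + bw, nxt))

print(d_free())  # 5
```

The search confirms the value d_free = 5 found on the trellis by inspection.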
  • 49. Maximum Likelihood Detection  If the input message sequences are equally likely, the optimum decoder, which minimizes the probability of error, is the Maximum Likelihood (ML) decoder.  Maximum likelihood decoding means finding the path in the code trellis that was most likely to have been transmitted.  Therefore maximum likelihood decoding is based on measuring distance, using either Hamming distance for hard-decision decoding or Euclidean distance for soft-decision decoding.  The likelihood of decoding a sequence is then the product of the per-symbol transition probabilities along the corresponding path.
  • 50. Decoding of Convolutional Codes: The Viterbi Algorithm
  • 51. Decoding of Convolutional Codes: The Viterbi Algorithm  The Viterbi algorithm performs maximum likelihood decoding but reduces the computational complexity by taking advantage of the special structure of the code trellis.  It was first introduced by A. Viterbi in 1967.  It was first recognized by D. Forney in 1973 that it is an ML decoding algorithm for convolutional codes.
  • 52. Decoding of Convolutional Codes: The Viterbi Algorithm  Generate the code trellis at the decoder.  The decoder penetrates through the code trellis level by level in search of the transmitted code sequence.  At each level of the trellis, the decoder computes and compares the metrics of all the partial paths entering a node.  The decoder stores the partial path with the largest metric and eliminates all the other partial paths. The stored partial path is called the survivor.  For m < l ≤ L, there are 2^(km) nodes at the l-th level of the code trellis. Hence there are 2^(km) survivors.  When the code trellis begins to terminate, the number of survivors reduces.  At the end, the (L+m)-th level, there is only one node (the all-zero state) and hence only one survivor.  This last survivor is the maximum likelihood path. Basic Concepts:
  • 53. Decoding of Convolutional Codes: The Viterbi Algorithm Procedure:
  • 54. Decoding of Convolutional Codes: The Viterbi Algorithm Procedure:
  • 55. Decoding of Convolutional Codes: The Viterbi Algorithm Procedure:
  • 56. Decoding of Convolutional Codes: The Viterbi Algorithm Procedure:
  • 57. Decoding of Convolutional Codes: The Viterbi Algorithm Procedure:
  • 58. Decoding of Convolutional Codes: The Viterbi Algorithm Example :  m = (101) Source message  U = (11 10 00 10 11) Codeword to be transmitted  Z = (11 10 11 10 01) Received codeword
  • 59. Decoding of Convolutional Codes: The Viterbi Algorithm Example:  Label all the branches with the branch metric (Hamming distance)
  • 60. Decoding of Convolutional Codes: The Viterbi Algorithm Example:  i=2,
  • 61. Decoding of Convolutional Codes: The Viterbi Algorithm Example:  i=3,
  • 62. Decoding of Convolutional Codes: The Viterbi Algorithm Example:  i=4,
  • 63. Decoding of Convolutional Codes: The Viterbi Algorithm Example:  i=5,
  • 64. Decoding of Convolutional Codes: The Viterbi Algorithm Example:  i=6,
  • 65. Decoding of Convolutional Codes: The Viterbi Algorithm Example:  Trace back along the surviving path, and then: 𝑚 = (100), 𝑈 = (11 10 11 00 00). There are some decoding errors (the transmitted message was m = (101)).
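The worked example can be reproduced in code. A minimal hard-decision Viterbi sketch, assuming the (2, 1, 2) encoder with generators g1 = (111), g2 = (101) and a trellis terminated by two zero tail bits (names are illustrative):

```python
# Hard-decision Viterbi decoding of Z = (11 10 11 10 01).
G1, G2 = (1, 1, 1), (1, 0, 1)   # assumed generators

def step(state, u):
    """Return (output pair, next state) for input bit u in a given state."""
    regs = (u,) + state
    v1 = regs[0] & G1[0] ^ regs[1] & G1[1] ^ regs[2] & G1[2]
    v2 = regs[0] & G2[0] ^ regs[1] & G2[1] ^ regs[2] & G2[2]
    return (v1, v2), (u, state[0])

def viterbi(received):
    """received: list of 2-bit tuples. Returns the ML input bit sequence."""
    paths = {(0, 0): (0, [])}                    # state -> (metric, bits)
    for r in received:
        new = {}
        for s, (metric, bits) in paths.items():
            for u in (0, 1):
                out, nxt = step(s, u)
                m = metric + (out[0] ^ r[0]) + (out[1] ^ r[1])
                if nxt not in new or m < new[nxt][0]:
                    new[nxt] = (m, bits + [u])   # keep only the survivor
        paths = new
    return paths[(0, 0)][1]                      # terminated in all-zero state

Z = [(1, 1), (1, 0), (1, 1), (1, 0), (0, 1)]
bits = viterbi(Z)
print(bits[:3])  # [1, 0, 0] -> decoded message (tail bits dropped)
```

Reading the all-zero final state recovers m̂ = (100), matching the slide: the two channel errors in Z pull the decoder onto the wrong path.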
  • 66. Catastrophic Generator Matrix  A catastrophic generator matrix maps some information sequence of infinite Hamming weight to a code sequence of finite Hamming weight.  A code whose generator matrix has this error-propagation property is called a catastrophic code.  For a catastrophic code, a finite number of transmission errors can cause an infinite number of errors in the decoded information sequence. Hence, these codes should be avoided in practice.  It is easy to see that systematic generator matrices are never catastrophic.
  • 68. The Most Widely Used Convolutional Codes  The most widely used convolutional code is the (2,1,6) Odenwalter code, generated by the following generator sequences: 𝑔(1) = (1101101), 𝑔(2) = (1001111).  This code has 𝑑_𝑓𝑟𝑒𝑒 = 10.  With hard-decision decoding, it provides a 3.98 dB coding gain over the uncoded BPSK modulation system.  With soft-decision decoding, the coding gain is 6.98 dB.
  • 70. Advantages of Convolutional Codes  Convolutional coding is a popular error-correcting coding method used in digital communications.  The convolution operation encodes some redundant information into the transmitted signal, thereby improving the reliability of transmission over a noisy channel.  Convolutional encoding with Viterbi decoding is a powerful FEC technique that is particularly suited to a channel in which the transmitted signal is corrupted mainly by AWGN.  It is simple and has good performance with low implementation cost.
