Viterbi Decoder
Decoding convolutionally encoded data using the Viterbi algorithm.
blockType: ViterbiDecoder
Path in the library:
Description
The Viterbi Decoder block decodes convolutionally encoded input symbols into binary output symbols using the Viterbi algorithm. The trellis structure defines the convolutional encoding scheme.
Ports
Input
Port_1 — input word obtained by convolutional encoding
column vector
The input word obtained by convolutional encoding, specified as a column vector. If the decoder accepts N input bit streams (that is, it can take 2^N possible input symbols), the length of the block input vector must be N·L for some positive integer L.
For more information, see the sections Input and output data sizes and Input data and decision type, as well as the description of the Operation mode parameter.
Data types: Float32, Float64, Int8, Int16, Int32, UInt8, UInt16, UInt32, Bool
Output
Port_1 — output message
binary column vector
The output message, returned as a binary column vector. If the decoder produces K output bit streams (that is, it can output 2^K possible output symbols), the length of the block output vector is K·L for some positive integer L.
For more information, see the section Input and output data sizes.
Data types: Float32, Float64, Int8, Int16, Int32, UInt8, UInt16, UInt32, Bool
Parameters
Encoded data parameters
Trellis structure — trellis description of the convolutional code
poly2trellis(7, [171,133]) (default)
A trellis description of the convolutional code, specified as a trellis structure for a rate K/N code, where K is the number of input bit streams and N is the number of output bit streams.
To create the trellis structure, you can use the poly2trellis function or define it manually.
The trellis structure contains the following fields:

- numInputSymbols — the number of symbols at the encoder input, specified as an integer equal to 2^K, where K is the number of input bit streams.
- numOutputSymbols — the number of symbols at the encoder output, specified as an integer equal to 2^N, where N is the number of output bit streams.
- numStates — the number of states in the encoder, specified as a power of 2.
- nextStates — the next states for all combinations of current states and current inputs, specified as a matrix of integers of size numStates-by-2^K.
- outputs — the outputs for all combinations of current states and current inputs, specified as a matrix of octal numbers of size numStates-by-2^K.
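As an illustration, the trellis tables can be built directly from the generator polynomials. The following is a minimal Python sketch, not the block's internal implementation; the shift direction and state numbering are assumed conventions and may differ from what poly2trellis produces.

```python
# Sketch: build trellis tables (nextStates / outputs) for a rate-1/2
# feedforward convolutional code, analogous to poly2trellis(7, [171, 133]).
# Assumption: the newest input bit occupies the most significant register
# position; actual poly2trellis conventions may differ.

def build_trellis(constraint_len, gen_octal):
    k = constraint_len - 1                 # memory bits; numStates = 2**k
    gens = [int(str(g), 8) for g in gen_octal]
    num_states = 2 ** k
    next_states = [[0, 0] for _ in range(num_states)]
    outputs = [[0, 0] for _ in range(num_states)]
    for state in range(num_states):
        for bit in (0, 1):
            reg = (bit << k) | state       # shift register contents
            out = 0
            for g in gens:                 # one output bit per generator
                out = (out << 1) | (bin(reg & g).count("1") & 1)
            next_states[state][bit] = reg >> 1
            outputs[state][bit] = out
    return num_states, next_states, outputs

num_states, next_states, outputs = build_trellis(7, [171, 133])
print(num_states)   # 64 states for constraint length 7
```

Each state has one next-state and one output entry per input bit, matching the numStates-by-2^K shape of the nextStates and outputs fields.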
Branch metric computation parameters
Decision type — decoder decision type
Hard decision (default) | Unquantized
The decoder decision type, specified as one of:

- Hard decision — the decoder uses the Hamming distance to compute the branch metric. The input must be a vector of hard-decision values, each 0 or 1. The input data type must be double precision, single precision, logical, or numeric.
- Unquantized — the decoder uses the Euclidean distance to compute the branch metric. The input must be a vector of unquantized real values of double or single precision. The block interprets positive values as logical zeros and negative values as logical ones.
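The two branch metrics can be sketched as follows. This is a simplified illustration; the mapping of logical 0 to +1 and logical 1 to −1 for soft values is an assumption consistent with the sign convention above.

```python
# Sketch: branch metrics for the two decision types.
# Assumption: unquantized soft values use the mapping 0 -> +1, 1 -> -1.

def hamming_metric(received_bits, expected_bits):
    # Hard decision: count bit disagreements
    return sum(r != e for r, e in zip(received_bits, expected_bits))

def euclidean_metric(received_soft, expected_bits):
    # Unquantized: squared Euclidean distance to the ideal point,
    # where bit 0 corresponds to +1 and bit 1 to -1
    return sum((r - (1 - 2 * e)) ** 2
               for r, e in zip(received_soft, expected_bits))

print(hamming_metric([1, 0], [1, 1]))          # 1
print(euclidean_metric([0.9, -1.1], [0, 1]))   # ~0.02
```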
Traceback decoding parameters
Traceback depth — traceback depth
34 (default) | positive integer
The traceback depth, specified as an integer indicating the number of trellis branches used to construct each traceback path.
The traceback depth affects the decoding delay. The decoding delay is the number of zero symbols that precede the first decoded symbol in the output.
In continuous operation mode, the decoding delay equals the traceback depth in symbols.
As a general estimate, a typical traceback depth value is approximately two to three times (ConstraintLength − 1) / (1 − coderate), where:

- ConstraintLength — the constraint length of the code, equal to log2(numStates) + 1;
- coderate — the code rate, equal to (K / N) · (length(P) / sum(P));
- K — the number of input symbols;
- N — the number of output symbols;
- P — the puncture pattern vector.
For example, applying this general estimate gives the following approximate traceback depths:

- A rate 1/2 code has a traceback depth of 5 · (ConstraintLength − 1).
- A rate 2/3 code has a traceback depth of 7.5 · (ConstraintLength − 1).
- A rate 3/4 code has a traceback depth of 10 · (ConstraintLength − 1).
- A rate 5/6 code has a traceback depth of 15 · (ConstraintLength − 1).
For more information, see [7].
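The rule of thumb above can be sketched numerically. The factor 2.5 below is an assumed midpoint of "two to three times", and the function name is illustrative.

```python
# Sketch of the traceback-depth rule of thumb:
# depth ~ factor * (ConstraintLength - 1) / (1 - coderate)

def traceback_depth(constraint_length, code_rate, factor=2.5):
    return factor * (constraint_length - 1) / (1 - code_rate)

# For constraint length 7, as in the default poly2trellis(7, [171, 133]):
for rate in (1/2, 2/3, 3/4, 5/6):
    print(f"rate {rate:.3f}: depth ~ {traceback_depth(7, rate):.0f}")
```

For a rate 1/2, constraint-length-7 code this gives about 30, which is of the same order as the block's default Traceback depth of 34.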
Operation mode — termination method of the encoded frame
Continuous (default) | Truncated
The termination method of the encoded frame, specified as one of these values:

- Continuous — the block saves its internal state metric at the end of each input for use with the next frame. Each traceback path is treated independently. This mode incurs a decoding delay of Traceback depth × K zero bits for a rate K/N convolutional code, where K is the number of message symbols and N is the number of encoded symbols.
- Truncated — the block treats each input independently. The traceback path starts at the state with the best metric and always ends in the all-zeros state. This mode is appropriate when the Operation mode parameter of the corresponding Convolutional Encoder block is set to Truncated (reset every frame). There is no output delay in this mode.

If the block outputs sequences whose length changes during simulation and the operation mode is set to Truncated, the decoder state is reset at every input time step.
If the input signal contains only one symbol, use the Continuous mode.
Additional Info
Input and output data sizes
If the convolutional code uses an alphabet of 2^N possible symbols, the length of the input vector must be N·L for some positive integer L. Similarly, if the decoded data uses an alphabet of 2^K possible output symbols, the length of the output vector is K·L.
The block accepts a column-vector input signal with any positive integer value of L. For variable-size input signals, L can change during the simulation. The block's behavior is governed by the Operation mode parameter.
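The length relation above can be sketched as a small helper. The function name is illustrative; it assumes an input of length N·L decodes to an output of length K·L.

```python
# Sketch: map a decoder input length to the decoded output length
# for a rate K/N code (input length N*L -> output length K*L).

def decoded_length(coded_len, K, N):
    if coded_len % N != 0:
        raise ValueError("input length must be a multiple of N")
    L = coded_len // N
    return K * L

print(decoded_length(10, 1, 2))   # 10 coded bits -> 5 message bits
```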
This table shows the data types supported for the input and output ports.

| Port | Supported data types |
|---|---|
| Input | Float32, Float64, Int8, Int16, Int32, UInt8, UInt16, UInt32, Bool |
| Output | Float32, Float64, Int8, Int16, Int32, UInt8, UInt16, UInt32, Bool |
Input data and decision type
The entries of the input vector are bipolar real, binary, or integer data, depending on the value of the Decision type parameter.
| Decision type | Possible decoder input values | Interpretation of values | Branch metric computation |
|---|---|---|---|
| Unquantized | Real numbers. Input values outside the allowed range are clipped to the corresponding boundary values. | Positive real number: logical zero. Negative real number: logical one. | Euclidean distance |
| Hard decision | 0, 1 | 0: logical zero. 1: logical one. | Hamming distance |
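To make the decoding procedure concrete, here is a minimal hard-decision Viterbi decoder sketch. It is an illustration, not the block's implementation: it uses a small rate-1/2, constraint-length-3 code (generators 7 and 5 octal) rather than the default trellis, processes one frame independently, and traces back from the best final state.

```python
# Minimal hard-decision Viterbi decoder sketch for a rate-1/2,
# constraint-length-3 code (generators 7 and 5 octal). The conventions
# (shift direction, state numbering) are internal to this example.

def encode(bits, gens=(0b111, 0b101)):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in gens]
        state = reg >> 1
    return out

def viterbi_decode(coded, gens=(0b111, 0b101)):
    n_states = 4
    INF = float("inf")
    metrics = [0] + [INF] * (n_states - 1)   # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(coded), 2):
        rx = coded[i:i + 2]
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metrics[state] == INF:
                continue
            for bit in (0, 1):
                reg = (bit << 2) | state
                expected = [bin(reg & g).count("1") & 1 for g in gens]
                # Hamming branch metric (hard decision)
                m = metrics[state] + sum(r != e for r, e in zip(rx, expected))
                ns = reg >> 1
                if m < new_metrics[ns]:         # keep the survivor path
                    new_metrics[ns] = m
                    new_paths[ns] = paths[state] + [bit]
        metrics, paths = new_metrics, new_paths
    best = min(range(n_states), key=lambda s: metrics[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]
print(viterbi_decode(encode(msg)) == msg)   # True
```

Flipping a single bit of the encoded stream still decodes correctly for this code, which is what the branch-metric minimization buys over symbol-by-symbol decisions.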
Literature
1. Clark, George C., and J. Bibb Cain. Error-Correction Coding for Digital Communications. Applications of Communications Theory. New York: Plenum Press, 1981.
2. Gitlin, Richard D., Jeremiah F. Hayes, and Stephen B. Weinstein. Data Communications Principles. Applications of Communications Theory. New York: Plenum Press, 1992.
3. Heller, J., and I. Jacobs. “Viterbi Decoding for Satellite and Space Communication.” IEEE Transactions on Communication Technology 19, no. 5 (October 1971): 835–48. https://doi.org/10.1109/TCOM.1971.1090711.
4. Yasuda, Y., K. Kashiki, and Y. Hirata. “High-Rate Punctured Convolutional Codes for Soft Decision Viterbi Decoding.” IEEE Transactions on Communications 32, no. 3 (March 1984): 315–19. https://doi.org/10.1109/TCOM.1984.1096047.
5. Haccoun, D., and G. Begin. “High-Rate Punctured Convolutional Codes for Viterbi and Sequential Decoding.” IEEE Transactions on Communications 37, no. 11 (November 1989): 1113–25. https://doi.org/10.1109/26.46505.
6. Begin, G., D. Haccoun, and C. Paquin. “Further Results on High-Rate Punctured Convolutional Codes for Viterbi and Sequential Decoding.” IEEE Transactions on Communications 38, no. 11 (November 1990): 1922–28. https://doi.org/10.1109/26.61470.
7. Moision, B. “A Truncation Depth Rule of Thumb for Convolutional Codes.” In Information Theory and Applications Workshop (January 27, 2008–February 1, 2008, San Diego, California), 555–557. New York: IEEE, 2008.