
CN101136639B - Systems and methods for reduced complexity ldpc decoding - Google Patents


Info

Publication number
CN101136639B
CN101136639B (application CN200710153163.3A)
Authority
CN
China
Prior art keywords
variable
processor
sum
ldpc
Prior art date
Legal status
Expired - Fee Related
Application number
CN200710153163.3A
Other languages
Chinese (zh)
Other versions
CN101136639A (en)
Inventor
孙国辉
杨鸿文
Current Assignee
Intel Corp
Original Assignee
KY WIRE ELECTRIC CO Ltd
Priority date
Filing date
Publication date
Application filed by KY WIRE ELECTRIC CO Ltd filed Critical KY WIRE ELECTRIC CO Ltd
Publication of CN101136639A
Application granted
Publication of CN101136639B

Landscapes

  • Error Detection And Correction (AREA)

Abstract

Systems and methods for generating check node updates in the decoding of low-density parity-check (LDPC) codes use new approximations in order to reduce the complexity of implementing an LDPC decoder while maintaining accuracy. The new approximations approximate the standard floating-point sum-product algorithm (SPA), reduce the approximation error of the min-sum algorithm (MSA), and achieve almost the same performance under a 5-bit fixed-point realization as the floating-point SPA.

Description

System and method for reduced complexity low density parity check decoding
Technical Field
Embodiments described herein relate to a method of low density parity check decoding, and more particularly, to a method of implementing a low density parity check decoder with reduced complexity.
Background
Low Density Parity Check (LDPC) codes are error correcting codes that provide a method of transmitting messages over noisy transmission channels. Although the LDPC technique cannot guarantee perfect transmission, the probability of information loss can be made very small. In fact, LDPC codes were the first to bring the data transmission rate close to the theoretical maximum, i.e., the Shannon limit. LDPC techniques use sparse parity check matrices, i.e., matrices that are mostly occupied by zeros and are therefore referred to as low density. The sparse matrix is randomly generated subject to a defined sparsity constraint.
LDPC codes can be defined in matrix and graph form. The LDPC matrix has a number of rows (M) and columns (N). The matrix may also be defined by the number of "1"s per row (w_r) and the number of "1"s in each column (w_c). For a matrix to be considered low density, the following conditions should be satisfied: w_c << N and w_r << M. The LDPC matrix may be regular or irregular. A regular LDPC matrix is one in which w_c is constant for each column and, for each row, w_r = w_c(N/M) is also constant. If the matrix is low density, but the number of "1"s in each row or column is not constant, then such a code is called an irregular LDPC code.
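To make the row-weight and column-weight definitions concrete, the following sketch checks the regularity condition on a small hypothetical 4×8 matrix (the matrix is illustrative and is not taken from the patent):

```python
# A hypothetical 4x8 regular LDPC-style matrix: every column holds w_c = 2
# ones and every row holds w_r = 4 ones, so w_r = w_c * (N / M) as stated.
H = [
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0, 1],
]
M, N = len(H), len(H[0])
row_weights = [sum(row) for row in H]                              # w_r per row
col_weights = [sum(H[i][j] for i in range(M)) for j in range(N)]   # w_c per column

# Regular: constant column weight and constant row weight.
is_regular = len(set(row_weights)) == 1 and len(set(col_weights)) == 1
```

For a matrix this small the "<<" sparsity conditions are of course not meaningful; the sketch only illustrates how the weights relate.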
It should also be appreciated that an LDPC code may be graphically defined by its corresponding Tanner graph. Such graphs not only provide a complete representation of the code, but they are also helpful in describing the decoding algorithm, as explained in more detail below. The Tanner graph contains nodes and edges. The nodes are divided into two different groups or types, and the edges connect the two different types of nodes. The two types of nodes in the Tanner graph are called variable nodes (v-nodes) and check nodes (c-nodes), or parity nodes. Thus, the Tanner graph consists of M check nodes (the number of parity bits) and N variable nodes (the number of bits in the codeword). A check node is connected to a variable node if there is a "1" in the corresponding element of the LDPC matrix.
The number of information bits can be expressed as K. A generator matrix G_{N×K} can then be defined according to the following relationship:

c_{N×1} = G_{N×K} d_{K×1}, where

d_{K×1} is a message or data word, and

c_{N×1} is a codeword.

It can be seen that the codeword c_{N×1} is generated by multiplying the message by the generator matrix. The subscripts are matrix dimensions, referring to the number of rows and columns, respectively. Thus, the data word and codeword can be represented as single-column matrices of K rows and N rows, respectively.
Parity may be defined as H_{M×N} c_{N×1} = 0.
Thus, fig. 1 is a schematic diagram depicting a system 100 including a transmitter and a receiver. For simplicity, only a portion 102 of the transmitter and a portion 110 of the receiver are shown. Referring to fig. 1, the encoder 104 converts the data word d_{K×1} into the codeword c_{N×1} by applying the generator matrix G_{N×K}. The modulator 106 may be configured to then modulate the codeword c_{N×1} onto a carrier such that the codeword can be transmitted wirelessly across channel 108 to a receiver.
In the receive portion 110, the demodulator 112 may be configured to remove the carrier from the received signal; however, channel 108 adds channel effects and noise, so that the signal generated by demodulator 112 may have the form: r_{N×1} = 2/σ²(1 − 2c_{N×1}) + w_{N×1}, where r is a multilevel signal. Some data bits d are lost during transmission as a result of noise and channel effects. To recover as much data as possible, the decoder 114 may be configured to use the parity check matrix H_{M×N} to generate estimated data d'_{K×1} very close to the original data d_{K×1}. It should be appreciated that decoder 114 may be a hard decision decoder or a soft decision decoder. Soft decision decoders are more accurate but generally also require more resources.
To describe the operation of the LDPC code, the following example is given:
$$H_{3\times 6} = \begin{bmatrix} 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 & 1 \end{bmatrix}$$
It can be seen that the demonstration parity check matrix H is low density, or sparse. Row 1 of the matrix H defines the 1st parity check node, or equation. It can be seen that the 1st parity node checks the received samples r_0, r_2, and r_4, remembering that r is the multilevel signal generated by the demodulator 112 in the receiver. The 2nd parity node, i.e., row 2 of H, checks the received samples r_1, r_3, and r_5, and the 3rd parity check node checks samples r_0, r_1, and r_5. In this example, there are 3 parity check nodes and 6 samples. The 1st and 2nd parity nodes are considered orthogonal because they involve mutually exclusive groups of samples.
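The sample groups checked by each parity node can be read directly off the rows of the example H; a minimal sketch:

```python
# The example 3x6 parity check matrix given above.
H = [
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [1, 1, 0, 0, 0, 1],
]
# Each parity node checks the received samples whose columns hold a "1".
checked = [[j for j, bit in enumerate(row) if bit] for row in H]
# Nodes 1 and 2 are orthogonal: their sample groups are mutually exclusive.
orthogonal_12 = set(checked[0]).isdisjoint(checked[1])
```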
If K is 3 and M is 3, then the following equations hold:
$$H_{3\times 6}\, c_{6\times 1} = 0 \;\Leftrightarrow\; H_{3\times 6} \begin{bmatrix} d_{3\times 1} \\ p_{3\times 1} \end{bmatrix} = 0 \;\Leftrightarrow\; \begin{bmatrix} 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} d_0 \\ d_1 \\ d_2 \\ p_0 \\ p_1 \\ p_2 \end{bmatrix} = 0$$
this generates the following equation:
d_0 + d_2 + p_1 = 0
d_1 + p_0 + p_2 = 0
d_0 + d_1 + p_2 = 0
these equations are simplified to:
p_0 = d_0
p_1 = d_0 + d_2
p_2 = d_0 + d_1
Thus, for example, if d = [0; 1; 0], then p = [0; 0; 1] and c = [0; 1; 0; 0; 0; 1].
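The worked example can be verified numerically; the following sketch computes p from the simplified parity equations and confirms that H·c = 0 (mod 2):

```python
H = [
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [1, 1, 0, 0, 0, 1],
]

def parity_bits(d):
    """p0 = d0, p1 = d0 + d2, p2 = d0 + d1 (all mod 2), per the text."""
    d0, d1, d2 = d
    return [d0, (d0 + d2) % 2, (d0 + d1) % 2]

d = [0, 1, 0]
p = parity_bits(d)
c = d + p
# The syndrome H*c mod 2 must be the all-zero vector for a valid codeword.
syndrome = [sum(h * b for h, b in zip(row, c)) % 2 for row in H]
```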
Fig. 2 is a Tanner graph describing the operation of H in the above example. It can be seen that the schematic diagram of fig. 2 contains 3 parity nodes 202, 204, and 206, and 6 variable nodes 208, 210, 212, 214, 216, and 218 corresponding to the bits of c. The parity nodes 202, 204, and 206 are connected to the variable nodes 208, 210, 212, 214, 216, and 218 by edges 220, 222, 224, 226, 228, 230, 232, 234, and 236 as indicated by the entries in H. In other words, each edge 220, 222, 224, 226, 228, 230, 232, 234, and 236 should correspond to a "1" in H.
In the LDPC decoder, the operations of the parity check nodes and the variable nodes may be implemented by a processor. In other words, each parity node may be implemented by a parity processor, and each variable node may be implemented by a variable node processor. The LDPC decoder is then an iterative decoder implementing the message passing algorithm defined by H.
Unfortunately, conventional LDPC decoding techniques result in highly complex, fully parallel decoder implementations, in which all messages to and from all check node processors must be computed in each iteration of the decoding process. This results in high complexity, high resource requirements, and high cost.
Therefore, much effort is currently devoted to reducing the complexity of check node message updates while keeping the performance penalty as small as possible. The most common simplification is the min-sum algorithm (MSA), which greatly reduces the complexity of check node updates but causes 0.3-0.4 dB of performance degradation relative to a standard sum-product algorithm (SPA) check node implementation. To correct for this performance degradation, it has also been proposed to correct the MSA using a normalization term or an offset adjustment term. Such solutions do reduce the performance penalty compared to more traditional MSA implementations, but a significant penalty remains. In addition, two-dimensional MSA schemes have been proposed that can further improve MSA performance at the cost of increased complexity. Thus, in conventional implementations, there is often a tradeoff between complexity and performance.
Disclosure of Invention
Systems and methods for check node updates during the decoding of Low Density Parity Check (LDPC) codes are described below. The systems and methods described below use new approximations to reduce the complexity of implementing an LDPC decoder while maintaining accuracy. The new approximation approximates the standard sum-product algorithm (SPA), can reduce the approximation error of the min-sum algorithm (MSA), and has almost the same performance as the SPA under both floating-point and fixed-point operation.
In one aspect, a receiver may include a demodulator configured to receive a wireless signal, remove a carrier signal from the wireless signal, and generate a received signal; and a Low Density Parity Check (LDPC) processor configured to recover the original data signal from the received signal. The LDPC processor may include a plurality of variable node processors configured to receive the received signal and generate variable messages from the received signal; and a parity node processor configured to receive the variable message and generate a soft output from the variable message.
In another aspect, a receiver includes a demodulator configured to receive a wireless signal including an original data signal and a carrier signal, remove the carrier signal from the wireless signal, and generate a received signal; and a Low Density Parity Check (LDPC) processor coupled to the demodulator, the LDPC processor configured to recover the original data signal from the received signal. The LDPC processor comprises a plurality of variable node processors configured to generate variable messages from received signals; and a check node processor coupled to the plurality of variable node processors and configured to implement an approximation of a sum-product algorithm (SPA) using a base 2 logarithmic operation.
In yet another aspect, the check node processor is configured to implement an approximation of the sum-product algorithm (SPA) using base-2 logarithms and rounding all operands to the nearest integer.
In yet another aspect, the check node processor includes binary hardware circuitry.
In yet another aspect, the generated integers are limited to ±a_max = ±(2^{v−1} − 1).
In yet another aspect, the check node processor contains v-bit fixed-point circuits, with 1 bit for the integer's sign and v−1 bits for its absolute value.
In yet another aspect, a method of processing a received signal using a parity check node processor included in an LDPC decoder is described. The method includes receiving a signal, demodulating the signal, generating variable messages, and generating a soft output from the variable messages using an approximation of the sum-product algorithm (SPA) based on base-2 logarithmic operations. The soft output variable messages are refined by iteration until they satisfy the parity check node equations or a maximum number of iterations is reached. The soft output is further generated by rounding all operands to the nearest integer.
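As a reference point for this iterate-until-parity flow, the following is a minimal floating-point sum-product decoder sketch. It is the conventional tanh-rule baseline, not the reduced-complexity base-2 integer variant claimed here, and the function name, message storage, and ln(p0/p1) LLR sign convention are illustrative assumptions:

```python
import math

def spa_decode(H, llr_in, max_iter=20):
    """Minimal floating-point sum-product decoder sketch.
    llr_in follows the ln(p0/p1) convention: positive means bit 0.
    Iterates until the hard decisions satisfy every parity check,
    or until max_iter; returns (hard_bits, converged)."""
    M, N = len(H), len(H[0])
    rows = [[j for j in range(N) if H[i][j]] for i in range(M)]
    cols = [[i for i in range(M) if H[i][j]] for j in range(N)]
    lam = {(i, j): 0.0 for i in range(M) for j in rows[i]}  # check -> variable
    hard = [0] * N
    for _ in range(max_iter):
        # Variable -> check: channel LLR plus messages from the other checks.
        u = {(i, j): llr_in[j] + sum(lam[(k, j)] for k in cols[j] if k != i)
             for i in range(M) for j in rows[i]}
        # Check -> variable: the tanh rule of the SPA.
        for i in range(M):
            for j in rows[i]:
                prod = 1.0
                for k in rows[i]:
                    if k != j:
                        prod *= math.tanh(u[(i, k)] / 2.0)
                prod = max(min(prod, 0.999999), -0.999999)  # numeric guard
                lam[(i, j)] = 2.0 * math.atanh(prod)
        # Tentative hard decision on the total soft value.
        total = [llr_in[j] + sum(lam[(i, j)] for i in cols[j]) for j in range(N)]
        hard = [0 if t >= 0 else 1 for t in total]
        if all(sum(H[i][j] * hard[j] for j in rows[i]) % 2 == 0
               for i in range(M)):
            return hard, True
    return hard, False

# The 3x6 example code: clean LLRs for c = [0,1,0,0,0,1], with one weak bit.
H = [[1, 0, 1, 0, 1, 0],
     [0, 1, 0, 1, 0, 1],
     [1, 1, 0, 0, 0, 1]]
decoded, ok = spa_decode(H, [4.0, -4.0, 4.0, 4.0, -1.0, -4.0])
```

With the weakly received fifth sample, the check messages pull the total soft value back to the correct sign within the first iteration on this toy code.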
These and other features, aspects, and embodiments of the present invention are described below in the section entitled "Detailed Description".
Drawings
Features, aspects, and embodiments of the invention will be described in conjunction with the appended drawings, in which:
FIG. 1 is a diagram illustrating an exemplary communication system using LDPC codes;
FIG. 2 is a diagram illustrating operation of an exemplary parity check matrix;
FIG. 3 is a schematic diagram depicting an exemplary parity check node processor;
FIG. 4 is a diagram illustrating the operation of an exemplary parity node processor;
FIG. 5 is a diagram illustrating the operation of an exemplary variable node processor;
FIG. 6 is a graph depicting simulated Frame Error Rate (FER) performance under an AWGN (additive white Gaussian noise) channel with different quantization bits for the floating point SPA, the floating point MSA, and the proposed method;
FIG. 7 is a graph depicting simulated Frame Error Rate (FER) performance under an AWGN channel with different quantization bits for the floating point SPA, the floating point MSA, and the proposed method; and
FIG. 8 is a flow chart describing an exemplary method of performing modified LDPC decoding.
Detailed Description
In the following description, certain exemplary parameters, values, and the like are used; however, it should be understood that the embodiments described herein are not necessarily limited by these examples. Therefore, these examples should not be construed as limiting the embodiments in any way. Also, the embodiments of LDPC decoders described herein may be applied to many different types of systems implementing a wide variety of protocols and communication techniques, such as a Binary Phase Shift Keying (BPSK) modulation technique, a Quadrature Phase Shift Keying (QPSK) modulation technique, or a Quadrature Amplitude Modulation (QAM) technique. Thus, unless specifically specified, the embodiments should not be construed as limited to a particular type of system, architecture, protocol, wireless interface, etc.
Check node processor 302 is shown in FIG. 3 for degree n. In each iteration, the incoming soft messages {u_i, i = 1, 2, …, n} are used to update the outgoing soft messages {λ_i, i = 1, 2, …, n}. An outgoing soft message is defined as the logarithm of the ratio of the probabilities that the corresponding bit is 0 or 1.
For the standard sum-product algorithm, the outgoing message is determined as follows:
$$\lambda_i = 2\tanh^{-1} \prod_{\substack{j=1 \\ j\neq i}}^{n} \tanh\frac{u_j}{2},\quad i = 1, 2, \ldots, n. \qquad (1)$$
The outgoing soft messages are then fed back to the variable node processors for use in generating the outputs u_i during the next iteration; however, the message λ_i sent to a particular variable node must not depend on the message u_i received from that same node. Hence the constraint j ≠ i in (1):
$$\prod_{\substack{j=1 \\ j\neq i}}^{n} \tanh\frac{u_j}{2},\quad i = 1, 2, \ldots, n.$$
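A direct, unoptimized sketch of the check node update in equation (1):

```python
import math

def check_node_update(u):
    """Equation (1): for each output i, combine every input u_j with j != i
    through the tanh rule of the SPA."""
    n = len(u)
    out = []
    for i in range(n):
        prod = 1.0
        for j in range(n):
            if j != i:
                prod *= math.tanh(u[j] / 2.0)
        out.append(2.0 * math.atanh(prod))
    return out

lam = check_node_update([1.5, -2.0, 3.0])
```

Two properties worth noting: the sign of each λ_i is the product of the other inputs' signs, and its magnitude never exceeds the smallest of the other magnitudes, which is what motivates the min-sum approximation discussed later.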
This can also be described with the aid of fig. 4, which is a schematic diagram describing the operation of the parity node processor 202. First, the LDPC decoder initializes the variable data bits u_0, u_1, …, u_5 of the variable node processors 208, 210, 212, 214, 216, and 218 using r_0, r_1, …, r_5. Referring to fig. 4, u_0^{k−1}, u_2^{k−1}, and u_4^{k−1} are the variable messages sent from variable nodes 208, 212, and 216 to parity node processor 202. The parity node processor 202 operates on these messages to compute its outgoing messages λ^k. For example, λ^k(0→2) represents the message sent from parity node 202 to variable node 212 in the k-th iteration.
The messages generated by the parity check node processor 202 may be defined using the following equation:
$$\lambda^k(0\to 0) = 2\tanh^{-1}\left[\tanh\left(\frac{u_2^{k-1}}{2}\right)\tanh\left(\frac{u_4^{k-1}}{2}\right)\right]$$
$$\lambda^k(0\to 2) = 2\tanh^{-1}\left[\tanh\left(\frac{u_0^{k-1}}{2}\right)\tanh\left(\frac{u_4^{k-1}}{2}\right)\right] \qquad (2)$$
$$\lambda^k(0\to 4) = 2\tanh^{-1}\left[\tanh\left(\frac{u_0^{k-1}}{2}\right)\tanh\left(\frac{u_2^{k-1}}{2}\right)\right]$$
thus, the parity node processor 202 may be configured to implement equation (2) above. Soft messages generated by parity nodes, e.g., parity node 202, are then fed back to variable nodes 208, 210, 212, 214, 216, and 218 for use in the next iteration.
For example, FIG. 5 is a schematic diagram depicting the operation of the variable node processor 208. Referring to FIG. 5, the variable node processor 208 receives as input messages from the parity node processors 202 and 206 and generates variable messages to be sent back to the same parity node processors 202 and 206. In the examples of figs. 4 and 5, hard decisions are made on the multilevel variables u_m^k and checked to see whether they satisfy the parity check node equations defined above. If there is a match, or if some defined number of iterations is exceeded, the decoder may be stopped.
The variable node processor 208 may be configured to implement the following equation:
$$u_0^k = u_{ch,0} + \lambda^k(0\to 0) + \lambda^k(2\to 0), \qquad (3)$$
where u_{ch,0} is a message from the channel that does not change with each iteration.
It will be appreciated that the decoder as described above may be implemented using suitably configured hardware and/or software, and that although separate parity node processors and variable node processors are described, these processors may be implemented using a single processor, such as a digital signal processor, or circuitry, such as an Application Specific Integrated Circuit (ASIC). However, as described above, implementations of LDPC processors like those described with reference to figs. 2-5 can result in a high degree of complexity, stringent memory requirements, and interconnect complexity that can cause bottlenecks. These problems can be exacerbated if multiple data rates are to be supported. In other words, the practical implementation of such a decoder can be limited.
Thus, by using the systems and methods described herein, the resources required to implement a parity node may be reduced, i.e., complexity is reduced, while high accuracy is still maintained. In certain embodiments, the complexity is reduced even further by degree reduction techniques. In other words, the number of inputs to the parity node may be reduced, and thus the resources required to implement the parity node may be reduced. It should also be noted that in many parity check node implementations, the sign and the absolute value of the outgoing soft message are calculated separately.
An LDPC code defined by a parity check matrix H of size M × N thus has M check nodes and N variable nodes. In the decoder, soft messages are propagated and iteratively updated between variable nodes and check nodes until they converge on a valid codeword. Otherwise, decoding ends in failure. The update algorithm at the variable nodes involves two types of input messages, i.e., messages from channel observations and messages from check nodes.
Soft messages are defined as logarithms of probability ratios:
from channel observations: <math> <mrow> <mrow> <mo>(</mo> <mi>m</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>ln</mi> </mrow> <mrow> <mfrac> <mrow> <msup> <mi>p</mi> <mn>0</mn> </msup> <mrow> <mo>(</mo> <mi>m</mi> <mo>)</mo> </mrow> </mrow> <mrow> <mn>1</mn> <mo>-</mo> <msup> <mi>p</mi> <mn>0</mn> </msup> <mrow> <mo>(</mo> <mi>m</mi> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>,</mo> <mi>m</mi> <mo>&Element;</mo> <mo>&lsqb;</mo> <mn>1</mn> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <mi>N</mi> <mo>&rsqb;</mo> <mo>,</mo> </mrow></math> and (4)
From the check node: <math> <mrow> <msub> <mi>&lambda;</mi> <mi>i</mi> </msub> <mrow> <mo>(</mo> <mi>m</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>ln</mi> <mfrac> <mrow> <msubsup> <mi>p</mi> <mi>i</mi> <mi>c</mi> </msubsup> <mrow> <mo>(</mo> <mi>m</mi> <mo>)</mo> </mrow> </mrow> <mrow> <mn>1</mn> <mo>-</mo> <msubsup> <mi>p</mi> <mi>i</mi> <mi>c</mi> </msubsup> <mrow> <mo>(</mo> <mi>m</mi> <mo>)</mo> </mrow> </mrow> </mfrac> <mo>,</mo> <mi>m</mi> <mo>&Element;</mo> <mo>&lsqb;</mo> <mn>1</mn> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <mi>N</mi> <mo>&rsqb;</mo> <mo>,</mo> <mi>i</mi> <mo>&Element;</mo> <mi>C</mi> <mrow> <mo>(</mo> <mi>m</mi> <mo>)</mo> </mrow> <mo>,</mo> </mrow></math> (5)
where m refers to the mth variable node, i.e., the mth bit in the codeword.
L(m) is the soft message from the channel observation, and p^0(m) is the a posteriori probability that the m-th bit is 0 conditioned on the channel observation y(m).
In (5), C(m) is the set of check nodes connected to the m-th variable node, and p_i^c(m) is the probability that the m-th bit is 0 as estimated by check node i in C(m).
For BPSK modulation with unity gain and noise variance σ², L(m) becomes:
$$L(m) = \frac{2y(m)}{\sigma^2}. \qquad (6)$$
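Equation (6) in code form, with illustrative values (a stronger observation or a quieter channel yields a larger-magnitude soft message):

```python
def channel_llr(y, sigma2):
    """Equation (6): soft channel message for a BPSK observation y with
    noise variance sigma2, in the natural-log convention."""
    return 2.0 * y / sigma2

L = channel_llr(0.8, 0.5)
```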
The variable node sends the message u_i(m) to check node i, where u_i(m) is defined in the same manner as in (4):
$$u_i(m) = \ln\frac{p_i^v(m)}{1-p_i^v(m)},\quad m\in[1,\ldots,N],\ i\in C(m), \qquad (7)$$
where p_i^v(m) is the probability that the m-th bit is 0, as estimated by the variable node.
The update algorithm on a variable node may be defined as:
$$u_i(m) = \sum_{j\in C(m)\backslash i} \lambda_j(m), \qquad (8)$$
where C(m)\i represents the subset of C(m) that does not include i.
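Equation (8) as a sketch; the dict-based message store is an illustrative choice:

```python
def variable_update(lam, C_m, i):
    """Equation (8): sum the messages from every check node in C(m) except
    the destination check node i. `lam` maps check index -> lambda_j(m)."""
    return sum(lam[j] for j in C_m if j != i)

# Variable node m connected to checks {0, 2, 5}: the message toward check 2
# omits check 2's own contribution.
u = variable_update({0: 0.7, 2: -0.2, 5: 1.1}, [0, 2, 5], 2)
```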
The outgoing message from a check node to a variable node is a function of the incoming messages from all other variable nodes connected to this check node, except the one to which the message is sent, i.e.:
$$\lambda_j(m) = f(u_j(k),\ k\in V(j)\backslash m),\quad j\in[1,\ldots,M],\ m\in V(j). \qquad (9)$$
where V(j) is the set of variable nodes connected to the j-th check node.
The standard update algorithm for f in (9) is the sum-product algorithm (SPA) described above, where:
$$\lambda_j(m) = 2\tanh^{-1}\prod_{k\in V(j)\backslash m}\tanh\frac{u_j(k)}{2}. \qquad (10)$$
this is equivalent to:
$$\lambda_j(m) = \bigoplus_{k\in V(j)\backslash m} u_j(k). \qquad (11)$$
Equation (11) represents |V(j)| − 1 successive ⊕ operations, where ⊕ is a binary operator defined as:
$$x \oplus y \overset{\Delta}{=} \ln\frac{1+e^{x+y}}{e^x+e^y}. \qquad (12)$$
For real numbers x and y, it can be shown that the ⊕ operator is commutative and associative.
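A sketch of the ⊕ operator (12), checking the commutativity and associativity just mentioned, plus its known equivalence to the tanh form used in (10):

```python
import math

def box_plus(x, y):
    """Equation (12): ln((1 + e^(x+y)) / (e^x + e^y))."""
    return math.log((1.0 + math.exp(x + y)) / (math.exp(x) + math.exp(y)))

commutes = abs(box_plus(1.0, 2.0) - box_plus(2.0, 1.0)) < 1e-12
associates = abs(box_plus(box_plus(1.0, 2.0), -0.5)
                 - box_plus(1.0, box_plus(2.0, -0.5))) < 1e-9
# Same value as the tanh-rule form: 2*atanh(tanh(x/2)*tanh(y/2)).
tanh_form = 2.0 * math.atanh(math.tanh(0.5) * math.tanh(1.0))
```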
The variable node update algorithm (8) involves only summation. Thus, the decoding complexity comes mainly from the check nodes, which require a large number of logarithm and exponential evaluations. A simplified form of the SPA that estimates the ⊕ operation with a simple approximation is the min-sum algorithm (MSA):
$$x \oplus y \approx \mathrm{sgn}(x)\,\mathrm{sgn}(y)\,\min\{|x|,|y|\}. \qquad (13)$$
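Comparing (13) against the exact operator (12) illustrates the MSA's systematic overestimate of the output magnitude; the values are illustrative:

```python
import math

def box_plus(x, y):
    """Exact operator from (12)."""
    return math.log((1.0 + math.exp(x + y)) / (math.exp(x) + math.exp(y)))

def min_sum(x, y):
    """Equation (13): sign product times the smaller magnitude."""
    sgn = (1 if x >= 0 else -1) * (1 if y >= 0 else -1)
    return sgn * min(abs(x), abs(y))

exact = box_plus(1.0, 2.0)   # about 0.74
approx = min_sum(1.0, 2.0)   # 1.0 -- the magnitude is overestimated
```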
Although the MSA is simple enough to be implemented efficiently in hardware, some performance is lost compared to the SPA. The amount of loss depends on the particular code structure and code rate, and is typically on the order of 0.3-0.4 dB. Many improvements that reduce or even eliminate this loss relative to the SPA have been published in the literature.
Certain embodiments described herein modify the SPA by using the base-2 logarithm log₂ instead of the natural logarithm ln in equations (3), (4) and (7), so that (6) becomes:

L(m) = 2y(m) / (σ² ln 2),    (14)

and (12) becomes:

x ⊕ y = log₂((1 + 2^(x+y)) / (2^x + 2^y)).    (15)
All operands and results involved in the decoding algorithm described herein may be rounded to the nearest integer. Thus, equations (14) and (15) become:

L(m) ≈ Round(2y(m) / (σ² ln 2)),    (16)

and

x ⊕ y ≈ Round(log₂((1 + 2^(x+y)) / (2^x + 2^y))),    (17)

where Round(·) denotes rounding the operand to the nearest integer.
The right-hand sides of (16) and (17) are approximations; the error introduced by rounding is less than 0.5.
Note that with (16), (8), and (17), the entire decoding algorithm uses only integer arithmetic. In particular, the integer powers of 2 and base-2 integer logarithms in (17) can be implemented easily with binary hardware circuits. Thus, the complexity of a standard SPA implementation can be greatly reduced. Simulation results given below show a performance loss within 0.1 dB.
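A minimal numeric sketch of (16) and (17), using ordinary floating-point log2 in place of the binary hardware circuits the text contemplates:

```python
import math

def llr_int(y_m, sigma2):
    # Eq. (16): channel LLR in base-2 units, rounded to an integer.
    return round(2.0 * y_m / (sigma2 * math.log(2)))

def boxplus_int(x, y):
    # Eq. (17): base-2 box-plus, rounded to the nearest integer.
    # For integer x and y, 2**x and an integer log2 are cheap in binary hardware.
    return round(math.log2((1 + 2.0 ** (x + y)) / (2.0 ** x + 2.0 ** y)))

assert boxplus_int(3, 4) == 2
assert boxplus_int(3, -4) == -2
assert boxplus_int(3, 5) == 3
```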
Moreover, it can be shown that (17) is equivalent to the following equation:

x ⊕ y ≈ sign(x) sign(y) [min(|x|, |y|) − ε(|x|, |y|)],    (18)

wherein

ε(a, b) = 1, if |a − b| ≤ 1;  0, if |a − b| > 1.    (19)
Comparing (18) and (19) with the MSA in (13), it can be seen that the complexity increases only slightly. Note that in a hardware implementation, subtracting 1 is less complex than a general subtraction, and testing |a − b| ≤ 1 versus |a − b| > 1 requires only a small amount of bit-test logic.
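Assuming the correction term the surrounding text describes (ε = 1 when the absolute values differ by at most 1, and 0 otherwise), the agreement between (17) and the corrected min-sum form (18) can be spot-checked on typical integer message values:

```python
import math

def sgn(v):
    return (v > 0) - (v < 0)

def eps(a, b):
    # Eq. (19) as reconstructed from the text: correction of 1 when the
    # absolute values differ by at most 1, and 0 otherwise.
    return 1 if abs(a - b) <= 1 else 0

def boxplus_corrected(x, y):
    # Eq. (18): min-sum with the integer correction term.
    return sgn(x) * sgn(y) * (min(abs(x), abs(y)) - eps(abs(x), abs(y)))

def boxplus_int(x, y):
    # Eq. (17), for comparison.
    return round(math.log2((1 + 2.0 ** (x + y)) / (2.0 ** x + 2.0 ** y)))

# The two forms agree on typical integer message values.
for x, y in [(3, 4), (3, -4), (3, 5), (2, 2), (-6, -6), (4, 7)]:
    assert boxplus_corrected(x, y) == boxplus_int(x, y)
```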
If all integers involved in the algorithm are limited to ±Amax = ±(2^(v−1) − 1), then the algorithm can be implemented directly with a v-bit fixed-point circuit, where 1 bit is used for the sign and v − 1 bits for the absolute value. In addition, (19) should be modified to:

ε(a, b) = 1, if |a − b| ≤ 1 and a ≠ Amax and b ≠ Amax;  0, otherwise,    (20)

because when one input is saturated at Amax, its true value lies somewhere in [Amax, +∞), so the true value of |a − b| may well be greater than 1.
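A sketch of the saturated fixed-point variant, assuming (as the surrounding text indicates) that the modified correction term suppresses ε whenever an input sits at Amax; v = 5 gives Amax = 15:

```python
V = 5                      # fixed-point word length in bits
AMAX = 2 ** (V - 1) - 1    # = 15 for v = 5

def clip(v):
    # Saturate to the representable range [-AMAX, +AMAX].
    return max(-AMAX, min(AMAX, v))

def sgn(v):
    return (v > 0) - (v < 0)

def eps_sat(a, b):
    # Modified correction of eq. (20) as reconstructed: a saturated input
    # may stand for any value in [AMAX, +inf), so no correction is applied.
    if a == AMAX or b == AMAX or abs(a - b) > 1:
        return 0
    return 1

def boxplus_fixed(x, y):
    x, y = clip(x), clip(y)
    return sgn(x) * sgn(y) * (min(abs(x), abs(y)) - eps_sat(abs(x), abs(y)))

assert boxplus_fixed(20, 20) == 15   # both saturate; correction suppressed
assert boxplus_fixed(3, 4) == 2      # ordinary case keeps the correction
```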
Thus, by using the systems and methods described above, the resources required to implement the check nodes, i.e., the complexity, can be reduced while still maintaining high accuracy. In certain embodiments, complexity may be reduced even further by degree-reduction techniques; in other words, the number of inputs to a check node can be reduced, thereby reducing the resources required to implement it. It should also be noted that in many check node implementations, the sign and absolute value of the outgoing soft message are computed separately.
FIGS. 6 and 7 are graphs depicting the simulated Frame Error Rate (FER) and Bit Error Rate (BER), respectively, over an AWGN channel for floating-point SPA, floating-point MSA, and the methods described herein with different numbers of quantization bits. The LDPC code is an irregular (1152, 576) rate-1/2 LDPC code constructed according to 802.16e D12. The decoder uses layered decoding with a maximum of 30 iterations. The curve labeled 'SPA2' refers to the proposed algorithm, in which all operations are integer operations. The curve labeled 'SPA2 unlimited' refers to the case Amax = ∞; in computer simulation it is in practice limited only by the system's maximum integer. This curve reflects the effect of the rounding errors in (17) and (18). It can be seen that the performance penalty due to rounding errors is less than 0.1 dB, and that limiting the integers to 5 bits (Amax = 15) causes almost no further loss.
It should be noted that the comparison here is between floating-point SPA and floating-point MSA. If either algorithm were implemented directly with fixed-point circuits, approximately 8 bits would be required to maintain the same performance.
Fig. 8 is a flowchart describing an exemplary method of performing LDPC decoding as described above. First, in step 802, a signal may be received. In step 804, the signal may be demodulated. In step 806, variable messages may be generated from the demodulated signal as estimates of the data bits. In step 808, the variable messages may be operated on to approximate the values of the original signal; as mentioned above, this approximation is based on a sum-product algorithm (SPA) that uses base-2 logarithmic operations and rounds the resulting values, and it can be carried out according to equations (16) and (17). In step 810, the results of step 808 may be evaluated to determine whether the variable messages satisfy the parity check node equations. If they do, or if a certain number of iterations is exceeded, the LDPC decoder may stop.
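The check-node portion of this flow, combining (11) with the integer operator of (17), can be sketched as a leave-one-out reduction over the incoming messages (the message values below are hypothetical):

```python
import math
from functools import reduce

def sgn(v):
    return (v > 0) - (v < 0)

def boxplus_int(x, y):
    # Eq. (17): integer base-2 box-plus.
    return round(math.log2((1 + 2.0 ** (x + y)) / (2.0 ** x + 2.0 ** y)))

def check_node_messages(incoming):
    # Eq. (11): for each edge m, combine all other incoming messages with
    # |V(j)| - 1 consecutive box-plus operations.
    out = []
    for m in range(len(incoming)):
        rest = incoming[:m] + incoming[m + 1:]
        out.append(reduce(boxplus_int, rest))
    return out

# Hypothetical integer LLRs on one check node (positive = bit 0 likely).
msgs = [5, -3, 7, 4]
outputs = check_node_messages(msgs)

# Each outgoing message ignores its own edge, so its sign equals the
# product of the signs of the remaining inputs (or the message is 0).
for m, out in enumerate(outputs):
    rest = msgs[:m] + msgs[m + 1:]
    expect_sign = 1
    for v in rest:
        expect_sign *= sgn(v)
    assert sgn(out) == expect_sign or out == 0
```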
While certain embodiments of the present invention have been described, it should be understood that the described embodiments are presented by way of example only, and the invention should not be limited to them. Rather, the scope of the invention described herein is limited only by the following claims, read in conjunction with the above description and accompanying drawings.

Claims (14)

1. A receiver, comprising:
a demodulator configured to receive a wireless signal including an original data signal and a carrier signal, remove the carrier signal from the wireless signal, and generate a received signal; and
a Low Density Parity Check (LDPC) processor coupled to the demodulator and configured to recover the original data signal from the received signal, the LDPC processor comprising:
a plurality of variable node processors configured to generate variable messages from received signals; and
a check node processor coupled to the plurality of variable node processors, the check node processor configured to implement an approximation of a sum-product algorithm (SPA) using a base 2 logarithmic operation.
2. The receiver of claim 1, wherein the check node processor is further configured to implement the approximation of the sum-product algorithm (SPA) using base-2 logarithms, with different numbers of quantization bits, and rounding all operands to the nearest integer.
3. The receiver of claim 1, wherein the check node processor is further configured to implement an approximation of the sum-product algorithm (SPA) equivalent to:

x ⊕ y ≈ sign(x) sign(y) [min(|x|, |y|) − ε(|x|, |y|)],

wherein

ε(a, b) = 1, if |a − b| ≤ 1;  0, if |a − b| > 1,

and wherein x and y are real numbers that describe the operands of the approximated sum-product operation.
4. The receiver of claim 2, wherein the generated integers are limited to within ±Amax = ±(2^(v−1) − 1), where v is the number of bits of the fixed-point representation.
5. The receiver of claim 4, wherein the check node processor contains v-bit fixed-point circuits comprising 1 bit for the sign and v − 1 bits for the absolute value.
6. A Low Density Parity Check (LDPC) processor comprising:
a plurality of variable node processors configured to generate variable messages from received signals; and
a check node processor coupled to the plurality of variable node processors, the check node processor configured to implement an approximation of a sum-product algorithm (SPA) using a base 2 logarithmic operation.
7. The LDPC processor of claim 6, wherein the check node processor is further configured to implement the approximation of the sum-product algorithm (SPA) using base-2 logarithms, with different numbers of quantization bits, and rounding all operands to the nearest integer.
8. The LDPC processor of claim 6, wherein the check node processor is further configured to implement an approximation of the sum-product algorithm (SPA) equivalent to:

x ⊕ y ≈ sign(x) sign(y) [min(|x|, |y|) − ε(|x|, |y|)],

wherein

ε(a, b) = 1, if |a − b| ≤ 1;  0, if |a − b| > 1,

and wherein x and y are real numbers that describe the operands of the approximated sum-product operation.
9. The LDPC processor of claim 7, wherein the generated integers are limited to within ±Amax = ±(2^(v−1) − 1), where v is the number of bits of the fixed-point representation.
10. The LDPC processor of claim 6, wherein the check node processor contains v-bit fixed-point circuits comprising 1 bit for the sign and v − 1 bits for the absolute value.
11. A method of processing a received signal using a parity check node processor included in an LDPC decoder, the method comprising:
receiving a signal;
demodulating the signal;
generating a variable message; and
generating soft outputs from the variable messages using an approximation of the sum-product algorithm (SPA) with base-2 logarithmic operations.
12. The method of claim 11, further comprising refining the soft output variable message by iteration until the soft output variable message matches the parity check node equation or a maximum number of iterations has occurred.
13. The method of claim 11, wherein the soft output is further generated by rounding all operands, with different numbers of quantization bits, to the nearest integer.
14. The method of claim 11, wherein the soft output is generated equivalently to:

x ⊕ y ≈ sign(x) sign(y) [min(|x|, |y|) − ε(|x|, |y|)],

wherein

ε(a, b) = 1, if |a − b| ≤ 1;  0, if |a − b| > 1,

and wherein x and y are real numbers that describe the operands of the approximated sum-product operation.
CN200710153163.3A 2006-09-28 2007-09-28 Systems and methods for reduced complexity ldpc decoding Expired - Fee Related CN101136639B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US82735306P 2006-09-28 2006-09-28
US60/827,353 2006-09-28

Publications (2)

Publication Number Publication Date
CN101136639A CN101136639A (en) 2008-03-05
CN101136639B true CN101136639B (en) 2014-04-02

Family

ID=39160517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200710153163.3A Expired - Fee Related CN101136639B (en) 2006-09-28 2007-09-28 Systems and methods for reduced complexity ldpc decoding

Country Status (1)

Country Link
CN (1) CN101136639B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101808068B (en) * 2009-10-29 2013-04-10 清华大学 Method and system for MSK iterative demodulation by combining LDPC code
CN102546102B (en) * 2011-12-23 2014-12-10 同济大学 Quick demodulation method for signal receiving terminal to realize link adaptation
US10574274B2 (en) * 2017-09-29 2020-02-25 Nyquist Semiconductor Limited Systems and methods for decoding error correcting codes
TWI685214B (en) * 2018-10-04 2020-02-11 瑞昱半導體股份有限公司 Decoding device and decoding method
US20220052784A1 (en) * 2019-01-14 2022-02-17 Nokia Technologies Oy Data processing in channel decoding
CN114915380B (en) * 2022-07-19 2022-09-30 中国科学院宁波材料技术与工程研究所 CAN bus-based low-cost high-real-time automatic error correction communication system and method

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Masera et al., "Finite precision implementation of LDPC decoders", IEE Proceedings, online no. 20050205, 2005, pp. 1098-1102. *
Papaharalabos et al., "Modified sum-product algorithm for decoding low density parity check codes", Institution of Engineering and Technology, 2006, pp. 294-300. *

Also Published As

Publication number Publication date
CN101136639A (en) 2008-03-05

Similar Documents

Publication Publication Date Title
US7895500B2 (en) Systems and methods for reduced complexity LDPC decoding
US8683303B2 (en) Operational parameter adaptable LDPC (low density parity check) decoder
US7137060B2 (en) Forward error correction apparatus and method in a high-speed data transmission system
CN1953336B (en) Method for updating check node in low density parity check decoder
EP4295489B1 (en) Protograph quasi-cyclic polar codes and related low-density generator matrix family
US7673223B2 (en) Node processors for use in parity check decoders
US20160233979A1 (en) Method and System for Reliable Data Communications with Adaptive Multi-Dimensional Modulations for Variable-Iteration Decoding
US7395487B2 (en) Common circuitry supporting both bit node and check node processing in LDPC (Low Density Parity Check) decoder
US8099645B2 (en) LDPC codes and stochastic decoding for optical transmission
US8219890B2 (en) Denoising and error correction for finite input, general output channel
US20060085720A1 (en) Message passing memory and barrel shifter arrangement in LDPC (Low Density Parity Check) decoder supporting multiple LDPC codes
US20060136799A1 (en) LDPC decoding apparatus and method with low computational complexity algorithm
CN101136639B (en) Systems and methods for reduced complexity ldpc decoding
CN109586732B (en) System and method for encoding and decoding LDPC codes with medium and short codes
CN112953554B (en) LDPC decoding method, system and medium based on layered confidence propagation
US9614548B1 (en) Systems and methods for hybrid message passing and bit flipping decoding of LDPC codes
US10892783B2 (en) Apparatus and method for decoding polar codes
CN102412846A (en) Multi-value corrected min-sum decoding method applicable to low-density parity-check code
US7900126B2 (en) Systems and methods for reduced complexity LDPC decoding
Chertova et al. Development of Turbo Product Code with Elementary Encoders as LDPC Code
US8019020B1 (en) Binary decoding for correlated input information
CN101350695B (en) Method and system for decoding low density parity check code
CN113872614B (en) Deep neural network-based Reed-Solomon code decoding method and system
CN112470405A (en) Variable node processing method and apparatus for message passing decoding of non-binary codes
KR20090064268A (en) Decoding device using variable correction value and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: KY WIRE ELECTRIC CO., LTD.

Free format text: FORMER OWNER: MEISHANG WEIRUI ELECTRIC COMPANY

Effective date: 20131015

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20131015

Address after: The Cayman Islands, British West Indies

Applicant after: Ky Wire Electric Co., Ltd.

Address before: California

Applicant before: Meishang Weirui Electric Company

C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160726

Address after: American California

Patentee after: Intel Corporation

Address before: The Cayman Islands, British West Indies

Patentee before: Ky Wire Electric Co., Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140402

Termination date: 20170928