
MXPA01001657A - Memory architecture for map decoder - Google Patents

Memory architecture for map decoder

Info

Publication number
MXPA01001657A
MXPA01001657A MXPA/A/2001/001657A
Authority
MX
Mexico
Prior art keywords
decoding
estimates
window
ram
reception
Prior art date
Application number
MXPA/A/2001/001657A
Other languages
Spanish (es)
Inventor
Steven J Halter
Original Assignee
Qualcomm Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Incorporated filed Critical Qualcomm Incorporated
Publication of MXPA01001657A publication Critical patent/MXPA01001657A/en

Links

Abstract

The present invention is a novel and improved decoding technique with particular application to turbo, or iterative, coding techniques. In accordance with one embodiment of the invention, a system for decoding includes a channel deinterleaver RAM for storing a block of symbol estimates, a set of S state metric calculators, and a set of S+1 window RAMs. Each state metric calculator generates a set of state metric calculations, wherein S of said S+1 window RAMs provide symbol estimates to said S state metric calculators. The remaining window RAM receives symbol estimates from said channel deinterleaver RAM.

Description

go" MEMORY ARCHITECTURE FOR A DECODER OF PROBABILITY MAXIMUM TO POSTERIORI BACKGROUND OF THE INVENTION 5 I. FIELD OF THE INVENTION The present invention relates to channel coding. More particularly, the present invention relates to a new and improved technique for performing a maximum posterior decoding (MAP).
II. DESCRIPTION OF THE RELATED ART

"Turbo coding" represents an important advance in the area of forward error correction (FEC). There are many variants of turbo coding, but most use multiple encoding steps separated by interleaving steps, combined with iterative decoding. This combination provides previously unavailable performance with respect to noise tolerance in a communication system. That is, turbo coding allows communications at Eb/N0 levels that were previously unacceptable using existing error correction techniques.

Many systems use forward error correction and therefore will benefit from the use of turbo coding. For example, turbo codes will improve the performance of wireless satellite links, where the limited downlink power of the satellite requires receiver systems that can operate at low Eb/N0 levels. The use of turbo codes in a wireless satellite link could reduce the dish size for a digital video broadcast (DVB) system or, alternatively, allow more data to be transmitted within a given frequency bandwidth.

Digital wireless telecommunication systems, such as PCS and cellular digital telephone systems, also use forward error correction. For example, the IS-95 over-the-air interface standard, and its derivatives such as IS-95B, define a digital wireless communication system that uses convolutional coding to provide coding gains that increase system capacity. A system and method for processing RF signals substantially in accordance with the IS-95 standard is described in U.S. Patent No. 5,103,459 entitled "System and Method for Generating Signal Waveforms in a CDMA Cellular Telephone System", assigned to the assignee of the present invention and incorporated herein by reference (the '459 patent).

Because IS-95 type digital wireless communication systems are used mainly for mobile communications, it is important to develop devices that reduce power consumption to a minimum and that are small and lightweight. Typically, this requires the development of a semiconductor integrated circuit ("chip") to perform most, if not all, of the necessary processing. While convolutional coding is relatively complex, the circuits necessary to perform convolutional encoding and decoding can be formed on an individual chip along with any other necessary circuitry.

Turbo coding (particularly the decoding operation) is significantly more complex than convolutional coding (and decoding). Nevertheless, it would be highly desirable to include turbo coding in digital wireless telecommunication systems, including mobile digital communication systems and satellite communication systems. Thus, the present invention is directed to increasing the speed at which certain decoding operations can be performed, in order to facilitate the use of turbo coding in a variety of systems.
SUMMARY OF THE INVENTION

The present invention is a novel and improved decoding technique with particular application to turbo, or iterative, coding techniques. According to one embodiment of the invention, a system for decoding includes a channel deinterleaver RAM for storing a block of symbol estimates, a set of S state metric calculators, each state metric calculator for generating a set of state metric calculations, and a set of S+1 window RAMs, wherein S of the S+1 window RAMs provide symbol estimates to the S state metric calculators. The remaining window RAM receives symbol estimates from the channel deinterleaver RAM.
BRIEF DESCRIPTION OF THE DRAWINGS

The features, objects and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the accompanying drawings, in which like reference characters identify correspondingly throughout, and wherein:

Figures 1A and 1B are block diagrams of wireless communication systems.
Figure 2 is a block diagram of a transmission system;

Figures 3A and 3B are diagrams of turbo encoders;

Figure 4 is a block diagram of a reception processing system;

Figure 5 is a block diagram of the decoder and a portion of a channel deinterleaver;

Figure 6 is a flow diagram illustrating an example set of decoding steps.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention is a new and improved technique for performing turbo coding.
The exemplary embodiment is described in the context of a digital cellular telephone system. While use within this context is advantageous, different embodiments of the invention may be incorporated in different environments, configurations or digital data transmission systems, including satellite communication systems and wireline communication systems such as digital telephone and cable systems.

In general, the various systems described herein may be formed using software-controlled processors, integrated circuits, or discrete logic; however, implementation in an integrated circuit is preferred. The data, instructions, commands, information, signals, symbols and chips that may be referred to throughout the application are advantageously represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or a combination thereof. In addition, the blocks shown in each block diagram may represent either hardware or method steps.

Figure 1A is a highly simplified diagram of a cellular telephone system configured in accordance with one embodiment of the invention. To conduct telephone calls or other communications, the subscriber units 10 interface with the base stations 12 via RF signals. The base stations 12 interface with the public switched telephone network via a base station controller (BSC).

Figure 1B is a highly simplified diagram of a satellite communications system configured in accordance with another embodiment of the present invention. The uplink station 40 transmits RF signals containing information such as video programming to the satellite 42. The satellite 42 relays the RF signals back to the ground, where the receiver 44 converts the received RF signals into digital data.

Figure 2 is a block diagram of an exemplary transmission system configured in accordance with one embodiment of the present invention. The transmission system may be used within a subscriber unit 10, a base station 12, or an uplink station 40, as well as within any other system that generates digital signals for transmission. The transmission processing shown represents only one possible embodiment of the invention, since numerous other transmission processing schemes may incorporate, and benefit from, the various embodiments of the invention.

Data 70 is supplied to the CRC generator 72, which generates CRC check data for a given, predetermined amount of received data. The resulting data blocks are supplied to the turbo encoder 76, which generates code symbols that are supplied to the channel interleaver 78. The code symbols typically include a copy of the original data (the systematic symbols) and one or more parity symbols. The number of parity symbols transmitted for each systematic symbol depends on the coding rate. For a coding rate of 1/2, one parity symbol is transmitted for each systematic symbol, for a total of two symbols generated for each data bit (including CRC) received. For a rate 1/3 turbo encoder, two parity symbols are generated for each systematic symbol, for a total of three symbols generated for each received data bit.

The code symbols from the turbo encoder 76 are supplied to the channel interleaver 78. The channel interleaver 78 performs interleaving on the blocks of symbols as they are received, transferring the interleaved symbols to the correlator 80.
Typically, the channel interleaver 78 performs block or bit-reversal interleaving, although virtually any type of interleaver may be used as the channel interleaver. The correlator 80 takes the interleaved code symbols and generates symbol words of a certain bit width based on a predetermined correlation scheme. The symbol words are then applied to the modulator 82, which generates a modulated waveform based on each received symbol word. Typical modulation techniques include QPSK, 8-PSK and 16-QAM, although various other modulation schemes may be used. The modulated waveform is then upconverted for transmission at an RF frequency.

Figure 3A is a block diagram of a turbo encoder configured in accordance with a first embodiment of the invention. In this first embodiment of the invention, the turbo encoder is configured as a parallel concatenated turbo encoder. Within this version of the turbo encoder 76, the constituent encoder 90 and the code interleaver 92 receive the data from the CRC generator 72, which as described above appends CRC check bits to the input data. The constituent encoder 90 also generates tail bits to provide a known state at the end of each frame. As is well known, the code interleaver 92 should be a highly randomized interleaver for best performance. An interleaver that provides excellent performance with minimum complexity as a code interleaver is described in co-pending United States Patent Application Serial No. 09/158,459, filed September 22, 1998, entitled "Coding System Having State Machine Based Interleaver", its continuation-in-part United States Patent Application Serial No. 09/172,069, filed October 13, 1998, entitled "Coding System Having State Machine Based Interleaver", and United States Patent Application Serial No. 09/205,511, filed December 4, 1998, entitled "Turbo Code Interleaver Using Linear Congruential Sequence", all assigned to the assignee of the present application and incorporated herein by reference.

The constituent encoder 90 outputs systematic symbols 94 (typically a copy of the original input bits) and parity symbols 96. The constituent encoder 98 receives the interleaved output of the code interleaver 92 and outputs additional parity symbols 99. The constituent encoder 90 may also add tail bits to provide a known state at the end of each frame. The outputs of the constituent encoder 90 and the constituent encoder 98 are multiplexed into the output data stream for a total coding rate R of 1/3. Additional constituent code and code interleaver pairs can be added to further reduce the coding rate for increased forward error correction. Alternatively, some of the parity symbols 96 and 99 may be punctured (not transmitted) to increase the coding rate. For example, the coding rate can be increased to 1/2 by puncturing alternate ones of the parity symbols 96 and 99, or by not transmitting the parity symbols 96 at all.

The constituent encoders 90 and 98 may be various types of encoders, including block encoders or convolutional encoders. As convolutional encoders, the constituent encoders 90 and 98 typically have a small constraint length such as four (4) to reduce complexity, and are recursive systematic convolutional (RSC) encoders. The lower constraint length reduces the complexity of the corresponding decoder in the reception system. Typically, the two encoders output one systematic symbol and one parity symbol for each bit received, for a constituent coding rate of R = 1/2.
The total coding rate for the turbo encoder of Figure 3A is R = 1/3, however, because the systematic bit output of the constituent encoder 98 is not used. As noted above, additional interleaver and encoder pairs may also be added in parallel to reduce the coding rate, and therefore provide greater error correction, or puncturing may be performed to increase the coding rate.
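The rate-1/2 constituent encoding and the puncturing option discussed above can be sketched briefly. The generator taps and puncture pattern below are illustrative assumptions, not values taken from the patent; the sketch only shows how a recursive systematic convolutional (RSC) encoder emits one systematic and one parity symbol per input bit, and how dropping parity symbols raises the overall rate.

```python
# Sketch of a rate-1/2 recursive systematic convolutional (RSC) constituent
# encoder with constraint length 4, plus a simple puncturing helper.  The tap
# vectors and the puncture pattern are illustrative assumptions.

def rsc_encode(bits, feedback=(1, 0, 1, 1), feedforward=(1, 1, 0, 1)):
    """Return (systematic, parity) lists, one symbol of each per input bit.

    Tap index 0 applies to the current (feedback) bit and indices 1..K-1 to
    the shift-register stages, so the constraint length K is len(feedback).
    """
    memory = len(feedback) - 1
    reg = [0] * memory
    systematic, parity = [], []
    for b in bits:
        fb = b                                  # recursive feedback bit
        for i in range(memory):
            fb ^= feedback[i + 1] & reg[i]
        p = feedforward[0] & fb                 # parity from feedforward taps
        for i in range(memory):
            p ^= feedforward[i + 1] & reg[i]
        systematic.append(b)
        parity.append(p)
        reg = [fb] + reg[:-1]                   # shift the register
    return systematic, parity

def puncture(parity, keep_pattern=(1, 0)):
    """Drop parity symbols periodically (assumed pattern) to raise the rate."""
    return [p for i, p in enumerate(parity) if keep_pattern[i % len(keep_pattern)]]
```

In a parallel concatenated arrangement such as that of Figure 3A, an encoder of this kind would be run once on the data and once on the code-interleaved data, with the systematic output kept only from the first encoder, giving an overall rate of 1/3 before any puncturing.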
Figure 3B depicts the turbo encoder 76 as a serially concatenated turbo encoder in accordance with an alternative embodiment of the invention. Within the turbo encoder of Figure 3B, the data from the CRC generator 72 is received by the constituent encoder 110 and the resulting code symbols are applied to the code interleaver 112. The resulting interleaved parity symbols are supplied to the constituent encoder 114, which performs additional encoding to generate parity symbols 115. Typically, the constituent encoder 110 (the outer encoder) may be one of several types of encoders, including block encoders or convolutional encoders. The constituent encoder 114 (the inner encoder), however, is preferably a recursive encoder and is typically a recursive systematic encoder.

Like recursive systematic convolutional (RSC) encoders, the constituent encoders 110 and 114 generate symbols at a coding rate R < 1. That is, for a given number of input bits N, M output symbols are generated, where M > N. The total coding rate for the serially concatenated turbo encoder of Figure 3B is the coding rate of the constituent encoder 110 multiplied by the coding rate of the constituent encoder 114. Additional interleaver and encoder pairs may also be added in series to further reduce the coding rate and therefore provide additional error protection.

Figure 4 is a block diagram of a reception system configured in accordance with an embodiment of the invention. The antenna 150 provides the received RF signals to the RF unit 152.
The RF unit 152 performs downconversion, filtering and digitization of the RF signals. The correlator 140 receives the digitized data and provides soft decision data to the channel deinterleaver 156. The turbo decoder 158 decodes the soft decision data from the channel deinterleaver 156 and supplies the resulting hard decision data to the processor or control unit in the reception system, which can verify the accuracy of the data using the CRC check data.

Figure 5 is a block diagram of the turbo decoder 158 and a portion of the channel deinterleaver configured in accordance with an embodiment of the invention. As shown, the turbo decoder 158 is configured to decode data from the turbo encoder shown in Figure 3A. In the described embodiment, two channel deinterleaver memory banks are provided, channel deinterleaver RAMs 160a and 160b, with each bank capable of storing one channel interleaver block. The address inputs of the two channel deinterleaver memory banks are controlled by the channel deinterleaver address generator 204 and the counter 206, which are applied to the address inputs via the multiplexers 208 and 210. The multiplexers 208 and 210 are controlled by a select signal and its logical inverse, and therefore when one channel deinterleaver RAM is controlled by the channel deinterleaver address generator 204, the other is controlled by the counter 206. In general, any of the control functionality may be provided by a microprocessor running a computer program stored in memory, or by discrete logic circuits, and the use of various different types of control systems is consistent with the use of the invention.

The I/O ports of the channel deinterleaver memory banks are coupled to the multiplexers 212 and 214. The multiplexer 212 routes the soft decision data from the correlator 140 to one of the two channel deinterleaver memory banks. The multiplexer 214 transfers the soft decision data stored in one of the two channel deinterleaver memory banks to the partial sum 218. The multiplexers 212 and 214 are controlled by the signal B and its logical inverse !B, respectively, and therefore when one channel deinterleaver RAM is receiving samples from the correlator 140, the other is transferring samples to the partial sum 218.

During operation, the channel deinterleaver address generator 204 is applied to the channel deinterleaver memory bank that is receiving samples from the correlator 140. The channel deinterleaver address generator generates addresses in inverted order with respect to the interleaving performed by the channel interleaver 78 of Figure 2. In this way, the samples are written into the channel deinterleaver memory bank in deinterleaved order (deinterleaved with respect to the channel interleaver). The counter 206 is applied to the channel deinterleaver memory bank that is reading soft decision data out to the partial sum 218. Since the soft decision data is stored in deinterleaved order, it can be read out in deinterleaved order simply by using the counter 206. Various other methods of buffering the soft decision data may also be used, including the use of dual-port memories. Likewise, other methods of generating the deinterleaver addresses may be used, including swapping the roles of the counter 206 and the channel deinterleaver address generator 204.

Within the turbo decoder 158, the partial sum 218 receives the reception estimates (soft decision data) from the channel deinterleaver as well as the a priori probability (APP) data from the APP memory 170.
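A behavioral sketch of the double-buffered deinterleaver addressing described above follows. The class and method names, the permutation representation and the single-threaded write-then-read flow are assumptions for illustration; the point is that samples are written at deinterleaver-generated addresses and read back with a plain counter, so the stored block is already in deinterleaved order.

```python
# Behavioral sketch of the double-buffered channel deinterleaver addressing:
# one bank is written at deinterleaver-generated addresses while the other is
# read out with a plain counter, and the roles swap every block.

class PingPongDeinterleaver:
    def __init__(self, block_size, inverse_permutation):
        # inverse_permutation[i] = deinterleaved position of received symbol i
        self.banks = [[None] * block_size, [None] * block_size]
        self.inv = inverse_permutation
        self.write_bank = 0            # bank currently fed by the demodulator

    def write_block(self, soft_symbols):
        """Address-generator path: store samples at deinterleaved addresses."""
        bank = self.banks[self.write_bank]
        for i, s in enumerate(soft_symbols):
            bank[self.inv[i]] = s
        self.write_bank ^= 1           # swap bank roles for the next block

    def read_block(self):
        """Counter path: the freshly filled bank is read in sequential order."""
        return list(self.banks[self.write_bank ^ 1])
```

In the hardware described above, the write and read paths operate on different banks at the same time rather than sequentially; the sketch only captures the addressing idea.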
As is well known, the APP values are estimates of the transmitted data based on the previous decoding iteration. During the first iteration, the APP values are set to an equal-probability state. The estimates from the channel deinterleaver memory include an estimate of the systematic symbol as well as estimates of the two parity symbols for each data bit associated with the channel interleaver block. The partial sum 218 adds the APP value to the systematic symbol estimate to create a "refined systematic estimate". The refined systematic estimate, together with the two parity symbol estimates, is written into the RAM file 224. Within the RAM file 224, the estimate values are written into the window RAMs 230a-d (labeled RAM 0 - RAM 3). In one embodiment of the invention, the estimates are written in sequential order across the address space of RAM 0-3.
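The partial-sum operation can be sketched as follows. The function names and the way saturation is applied are assumptions (the bit widths used in one embodiment are discussed further below); the sketch simply adds the APP value to the received systematic estimate and stores the resulting triple sequentially in one window RAM.

```python
# Sketch of the partial-sum stage: the a priori probability (APP) value is
# added to the received systematic estimate to form a "refined systematic
# estimate", and each (refined systematic, parity0, parity1) triple is stored
# sequentially in the window RAM currently selected for writing.  The
# saturation widths and function names are illustrative assumptions.

def saturate(x, bits):
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, x))

def fill_window_ram(systematic, parity0, parity1, app, window_ram):
    """Write one window (L sets of estimates) into a single window RAM."""
    for k in range(len(window_ram)):
        refined = saturate(systematic[k] + app[k], 7)    # refined systematic
        window_ram[k] = (refined,
                         saturate(parity0[k], 6),        # first parity estimate
                         saturate(parity1[k], 6))        # second parity estimate
    return window_ram
```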
This process starts with RAM 0 and proceeds through RAM 3. At any given moment, only one window RAM is being written. The remaining three window RAMs (those not being written) are applied to (read by) the MAP engine 270 via the multiplexers 260, as described in greater detail below.

In one embodiment of the invention, a sliding window architecture is employed to perform the MAP decoding. A system and method for performing this sliding window decoding is disclosed in co-pending United States Patent Application Serial No. 08/743,688 entitled "Soft Decision Output Decoder for Decoding Convolutionally Encoded Codewords", assigned to the assignee of the present invention and incorporated herein by reference. In that application, MAP decoding is performed over "windows" of data. In the described embodiment of the invention, the window RAM banks 230 are L x q in size, wherein L is the number of data bits transmitted in a window and q is the number of memory bits required to store the refined systematic symbol estimate and the two parity symbol estimates generated for each data bit. In one embodiment of the invention, six (6) bits are used for the two parity symbol estimates and seven (7) bits are used for the refined systematic symbol estimate (which, as described above, is the sum of the received systematic symbol estimate and the APP value), for a q of eighteen (18) bits.

As noted above, the estimates, including the refined systematic symbol estimate and the parity symbol estimates, are written into the window RAMs 230a-d in sequential order. Typically, only one window RAM 230 is being written while the three remaining window RAMs 230 are read by the MAP engine 270. In an exemplary processing, the data of a new channel block is first written into the window RAM 230a and then into the window RAM 230b. Thus, the window RAM 230a contains the first (1L) set of L estimates (where one set comprises the refined systematic estimate and the two parity estimates) and the window RAM 230b contains the second (2L) set of L estimates. Once the first two sets of L estimates are stored within the window RAMs 230, the multiplexers 260 apply the data stored in the window RAMs 230 to the state metric calculators (SMCs) within the maximum a posteriori (MAP) decoder 270. In one embodiment of the invention, the three SMCs comprise a forward SMC 272 (FSMC) and two reverse SMCs 274a and 274b (RSMCs).

As data continues to be written into the RAM file 224, the multiplexers 260 apply the estimates stored in three of the four window RAMs to the three state metric calculators within the MAP decoder 270 in accordance with Table I.
Table I [table not reproduced in this text]

Along with the particular window RAM applied to each particular state metric calculator, Table I also lists the set of estimates contained in that window RAM at that time, and therefore the estimates processed by the corresponding SMC. A window is processed once in the forward direction and once in the reverse direction in accordance with MAP processing.
Additionally, most windows are processed one additional time in the reverse direction to generate an initialization state for the other reverse state metric processing. In Table I, the initialization passes are denoted by italic text. In the described embodiment, each set of estimates is processed three times, and therefore the window RAM in which the estimates are stored is also accessed three times. Reading the three state metric calculators from three separate window RAMs prevents RAM contention. Also as shown in Table I, at any particular time at least one window RAM is not coupled to any SMC, and is therefore available to have new data written into it. By having more than three window RAMs within the RAM file 224, the data can be fed repeatedly and continuously to the MAP engine in the correct sequence, and to the correct one of the three SMCs, as it is simultaneously being received from the channel deinterleaver memory 160 via the partial sum 218.

It should also be noted that Table I illustrates the coupling performed for six (6) data windows. Thus, the size of the example channel interleaver block is 6L, and the channel deinterleaver memory is 6L x q. The channel block size of 6L is for example only, and the typical channel block size will be greater than 6L.

Still with reference to Figure 5, within the MAP decoder 270, the FSMC 272 receives the estimates from the RAM file 224 as described above and calculates the forward state metric values over a window L. The forward state metric values are stored in the metric buffer 276. Additionally, in accordance with Table I, an RSMC 274 calculates the reverse state metric values over another window L. In one embodiment of the invention, each state metric calculator contains its own branch metric calculator. In other embodiments of the invention, a single time-shared branch metric calculator may be used by the set of state metric calculators, preferably in combination with a branch metric buffer.

In one embodiment of the invention, the MAP decoder used is a log-MAP decoder, which operates on the logarithms of the estimates to reduce hardware complexity. An implementation of a log-MAP decoder that includes the branch metric and state metric calculators is described in S. S. Pietrobon, "Implementation and Performance of a Turbo/MAP Decoder", submitted to the International Journal of Satellite Communications, February 1997. The Pietrobon log-MAP decoder does not use the sliding window architecture described in the above-referenced patent application "Soft Decision Output Decoder for Decoding Convolutionally Encoded Codewords".

The last value calculated by the first RSMC 274 is used to initialize the other RSMC 274, which performs a reverse state metric calculation over the window L for which the forward state metrics have already been calculated and stored in the metric buffer 276. As the reverse state metrics are calculated, they are sent, via the multiplexer 278, to the log-likelihood ratio (LLR) calculator 280. The LLR calculator 280 performs a log-likelihood calculation using the reverse state metrics received from the multiplexer 278 and the forward state metrics stored in the metric buffer 276.
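The log-MAP computation referred to above can be illustrated with a short, generic sketch. This is a textbook forward/backward recursion with the max* (Jacobian logarithm) operation and an LLR combination step, not the patent's circuit; the trellis representation, the zero-state initialization of the forward recursion, the equiprobable initialization of the backward recursion, and all names are illustrative assumptions.

```python
# Generic log-MAP sketch: forward (alpha) and backward (beta) recursions over
# a time-invariant trellis, combined into per-step log-likelihood ratios.
import math

def max_star(a, b):
    """Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def log_map_llr(branch_metric, num_states, length, transitions):
    """transitions: list of (from_state, to_state, input_bit) trellis edges.
    branch_metric(k, edge): log-domain metric of taking `edge` at step k."""
    NEG = -1e9
    # Forward (alpha) recursion, assuming the encoder starts in state 0.
    alpha = [[NEG] * num_states for _ in range(length + 1)]
    alpha[0][0] = 0.0
    for k in range(length):
        for e in transitions:
            s, t, _ = e
            alpha[k + 1][t] = max_star(alpha[k + 1][t],
                                       alpha[k][s] + branch_metric(k, e))
    # Backward (beta) recursion, assuming an equiprobable (unknown) end state.
    beta = [[NEG] * num_states for _ in range(length + 1)]
    beta[length] = [0.0] * num_states
    for k in range(length - 1, -1, -1):
        for e in transitions:
            s, t, _ = e
            beta[k][s] = max_star(beta[k][s],
                                  beta[k + 1][t] + branch_metric(k, e))
    # LLR per step from alpha, branch metric and beta.
    llrs = []
    for k in range(length):
        num, den = NEG, NEG
        for e in transitions:
            s, t, bit = e
            m = alpha[k][s] + branch_metric(k, e) + beta[k + 1][t]
            num, den = (max_star(num, m), den) if bit else (num, max_star(den, m))
        llrs.append(num - den)
    return llrs
```

In the hardware described here, the alpha recursion corresponds to the FSMC, the beta recursion to the RSMCs, and the final combination to the LLR calculator 280, applied window by window rather than over the whole block.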
The resulting data estimates from the LLR calculator 280 are sent to the APP memory 170. By using a sliding window metric calculation process, the amount of memory needed to perform the necessary processing is reduced. For example, each window RAM in the set of window RAMs 230 need only be as large as a window L, instead of the size of the complete interleaver block. Similarly, only one window L of forward state metrics needs to be stored within the metric buffer 276 at any given moment. This significantly reduces the size of the circuit. Additionally, the use of three metric calculators significantly increases the speed at which decoding can be performed. The speed increases because the initialization and decoding functions are performed in parallel. Initialization increases the accuracy of the decoder.

In alternative embodiments of the invention, two forward state metric calculators can be used in conjunction with one reverse state metric calculator. Also, fewer state metric calculators can be used if they are clocked at a high enough rate to perform twice as many operations. However, increasing the clock rate increases power consumption, which is undesirable in many cases, including battery-powered communication systems such as mobile telephones. Additionally, while the use of the RAM file 224 reduces the circuit area and RAM contention within the decoder, other embodiments of the invention may select alternative memory architectures.
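The parallel schedule behind this speed increase can be captured in a few lines. The sketch below models only which window RAM each calculator reads, and which one is written, at each step of the steady-state cascade (it follows the mod-3 indexing of the Figure 6 flowchart discussed below, and omits the drain steps at the end of the block); the metric arithmetic is abstracted away and all names are assumptions.

```python
# Scheduling sketch: at each step the forward calculator (FSMC) reads one
# window RAM, one reverse calculator performs the initialization pass on the
# next window, the other reverse calculator performs the decode pass (with
# LLR output) on the previous window, and the following window of estimates
# is written into a third RAM.  Only the access pattern is modeled.

def window_schedule(num_windows):
    """Yield (n, fsmc_ram, init_rsmc_ram, decode_rsmc_ram, write_ram)."""
    for n in range(num_windows):
        fsmc = n % 3                                           # forward pass, window n
        init = (n + 1) % 3 if n + 1 < num_windows else None    # RSMC init pass
        dec = (n - 1) % 3 if n >= 1 else None                  # reverse pass + LLR
        wr = (n + 2) % 3 if n + 2 < num_windows else None      # fill next window
        yield n, fsmc, init, dec, wr

def _ram(i):
    return f"RAM {i}" if i is not None else "idle"

for n, fsmc, init, dec, wr in window_schedule(6):
    print(f"step {n}: FSMC<-{_ram(fsmc)}  RSMC(init)<-{_ram(init)}  "
          f"RSMC(decode)<-{_ram(dec)}  write->{_ram(wr)}")
```

Running the loop shows that no window RAM is read by two calculators at the same step and that at least one RAM is always free for writing, which is the contention-free property noted above.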
As described above with reference to the exemplary embodiment, decoding is performed by carrying out a first decoding of a first window in a first direction and simultaneously performing a second decoding of a second window in a second direction. The second direction is preferably opposite to the first direction. The results of the first decoding are stored. The results of the second decoding are used to initialize a third decoding, performed on the first window in the second direction. During the third decoding, the LLR values are calculated using the values calculated during the third decoding and the stored values. Simultaneously with the third decoding, a fourth decoding of another window is performed in the first direction, as well as a fifth decoding in the second direction on yet another window. The results of the fourth decoding are stored, and the fifth decoding is performed to generate an initialization value. These steps are repeated until the entire channel interleaver block is decoded. Various alternative embodiments of the invention may omit some of the steps used in the described embodiment. However, the use of the described set of steps provides fast and accurate decoding with a minimum of additional memory and circuitry, and therefore provides significant performance benefits.

In one embodiment of the invention, the APP memory 170 is comprised of two APP memory banks 284. The multiplexers 286 switch each memory bank 284 between being read by the partial sum 218 and being written by the LLR calculator 280, to provide a double-buffering operation. The multiplexers 288 apply either the counter 290 or the code interleaver address generator 292, to perform the turbo code interleaving and deinterleaving for each decoding iteration. Also, the APP memory banks 284 may be made large enough to retain all the data estimates for the channel interleaver block, where the estimates are for the transmitted data and not for the parity symbols. Six-bit resolution estimates can be used.

The entire decoding process proceeds by repeatedly reading the reception estimates from one channel deinterleaver buffer 160 and processing them with the APP values from the APP memory banks 170. After a number of iterations, the estimates converge to a set of values that are then used to generate hard decisions. The results of the decoding are then verified using the CRC values. Simultaneously with the decoding, the other channel deinterleaver buffer receives the next block of reception estimates.
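The overall iterative loop just described can be summarized in a sketch. The constituent MAP decoder, the interleaver functions, the way extrinsic information is combined into a final LLR, the iteration count and the CRC check are all placeholders or assumptions; the sketch only illustrates the ping-pong use of two APP banks that are alternately read by the partial sum and written by the LLR calculator, followed by hard decisions and a CRC check.

```python
# Sketch of the iterative decoding loop with double-buffered APP memory.

def turbo_decode(estimates, interleave, deinterleave, map_decode, crc_ok,
                 iterations=8):
    n = len(estimates["systematic"])
    app_banks = [[0.0] * n, [0.0] * n]     # equal-probability APP on first pass
    read_bank = 0
    ext1 = [0.0] * n
    for _ in range(iterations):
        # First constituent decoder works on the deinterleaved (natural) order.
        ext1 = map_decode(estimates["systematic"], estimates["parity1"],
                          app_banks[read_bank])
        # Second constituent decoder works in code-interleaved order.
        ext2 = map_decode(interleave(estimates["systematic"]),
                          estimates["parity2"], interleave(ext1))
        # Write the new APP values into the other bank (double buffering).
        app_banks[read_bank ^ 1] = deinterleave(ext2)
        read_bank ^= 1
    # Hard decisions from the final combined log-likelihood ratios.
    llr = [s + a + e for s, a, e in
           zip(estimates["systematic"], app_banks[read_bank], ext1)]
    bits = [1 if x > 0 else 0 for x in llr]
    return bits, crc_ok(bits)
```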
In step 310, the window index N is incremented, X is set to N mod 2 and Y is set to (N+1) mod 2. In step 312 it is determined whether N+1 corresponds to the last window of estimates to be processed. If not, step 314 is performed.

In step 314, the FSMC processes the estimates stored in window RAM[N mod 3], RSMC X (X = 0 on the first pass) processes the estimates stored in window RAM[(N+1) mod 3], and RSMC Y (Y = 1 on the first pass) processes the estimates stored in window RAM[(N-1) mod 3]. Additionally, window [N+2] of estimates from the channel deinterleaver RAM is written into window RAM[(N+2) mod 3]. Although not shown, a set of APP values corresponding to window [N-1] of data is generated in step 314 during the processing performed by RSMC 1.

By performing the decoding over windows, the size of the metric buffer is kept to the length L times the number of metrics generated for each decoding step. Since numerous metrics are stored for each decoding step, the reduction in metric memory is significant compared to storing the state metrics for the full channel deinterleaver block. Additionally, the use of the second reverse state metric calculator increases speed and accuracy. For example, the second RSMC can calculate a new initialization value for the next decoding window while the previous decoding window is being processed. Calculating this new initialization value eliminates the need to perform additional RSMC calculations at each decoding step, since the value can be used to decode the entire preceding window.

Step 314 provides an excellent illustration of the efficiencies gained in processing the estimates as buffered by the RAM file 224. In particular, step 314 illustrates the ability to perform four steps of the decoding processing simultaneously. This increases the speed at which decoding can be performed for a given clock rate. In the described embodiment, these steps include the state metric calculations and the writing of additional samples into the RAM file 224. Also, the APP values are calculated during step 314.

Once step 314 is completed, the flow returns to step 310, where the window index N is incremented. If the value N+1 corresponds to the last window, the cascaded processing is discontinued, and the remaining estimates within the RAM file 224 are processed in steps 316 through 322. In particular, in step 316, the FSMC processes window RAM[N mod 3], RSMC X processes window RAM[(N+1) mod 3], and RSMC Y processes window RAM[(N-1) mod 3]. In step 318, the window index N is incremented, X is set to N mod 2 and Y is set to (N+1) mod 2. In step 320, the FSMC processes window RAM[N mod 3] and RSMC Y processes window RAM[(N-1) mod 3]. In step 322, RSMC 1 processes window RAM[N mod 3]. In step 324, the processing ends, the entire channel deinterleaver block having been decoded.

In this way, a new and improved technique for performing turbo coding has been described.

Claims (15)

NOVELTY OF THE INVENTION

Having described the present invention, it is considered a novelty and, therefore, the content of the following CLAIMS is claimed as property:

1. A system for decoding, comprising: a) a channel deinterleaver RAM for storing a block of symbol estimates; b) a set of S state metric calculators, each state metric calculator for generating a set of state metric calculations; and c) a set of S+1 window RAMs, wherein S of the S+1 window RAMs provide symbol estimates to the S state metric calculators, and a remaining window RAM receives symbol estimates from the channel deinterleaver RAM.
2. The system according to claim 1, wherein S is equal to 3.
3. The system according to claim 1, wherein the window RAMs are significantly smaller than the channel deinterleaver RAM.
4. The system according to claim 1, wherein the state metric calculators process data over windows equal to or smaller in size than the window RAMs.
5. A decoder comprising: a channel interleaver memory for storing a channel interleaver block of reception estimates; a decoding engine for decoding the reception estimates; and a decoder buffer for simultaneously reading a first set of reception estimates and a second set of reception estimates to the decoding engine and writing a third set of reception estimates from the channel interleaver memory.
6. The decoder according to claim 5, wherein the decoder buffer is further for simultaneously reading a third set of reception estimates.
7. The decoder according to claim 5, wherein the decoding engine is a MAP decoding engine.
8. The decoder according to claim 5, wherein the decoding engine is comprised of: a forward state metric calculator for generating forward state metrics in response to the first set of reception estimates; and a reverse state metric calculator for generating reverse state metrics in response to the second set of reception estimates.
9. The decoder according to claim 6, wherein the decoding engine is further comprised of: a forward state metric calculator for generating forward state metrics in response to the first set of reception estimates; a first reverse state metric calculator for generating reverse state metrics in response to the second set of reception estimates; and a second reverse state metric calculator for generating reverse state metrics in response to the third set of reception estimates.
  10. The decoder according to claim 5, wherein the decoder buffer is comprised of: first memory for reading and writing reception samples; second memory for reading and writing reception samples; and third memory for reading and writing reception samples.
11. A method for decoding data, comprising the steps of: a) coupling a first state metric calculator to a first set of reception estimates to generate an initialization value; b) coupling a second state metric calculator to a second set of reception estimates to generate a first set of state metrics; c) coupling a third state metric calculator to a third set of reception estimates to generate a second set of state metrics; and d) writing a fourth set of reception estimates to a data buffer, wherein steps a), b), c) and d) are carried out simultaneously.
  12. The method according to claim 11, wherein the second set of state metrics is generated using previously calculated initialization values and processed with a previously calculated set of state metrics to generate the data estimates.
13. The method according to claim 11, further comprising the steps of: coupling the first state metric calculator to the second set of reception estimates; and coupling the third state metric calculator to the first set of reception estimates.
14. A method for decoding, comprising the steps of: a) performing a first decoding of a first window in a first direction and simultaneously performing a second decoding of a second window in a second direction; b) storing the results of the first decoding; c) initializing a third decoding using a result of the second decoding; d) performing the third decoding on the first window in the second direction, and calculating LLR values using metrics calculated during the third decoding and the stored results; and, simultaneously with step d), performing a fourth decoding, a fifth decoding of another window in the first direction, and a sixth decoding in the second direction on a new window; and e) storing the results of the fifth decoding, and using the results of the sixth decoding for an initialization value.
15. The method according to claim 14, wherein the second direction is opposite to the first direction.
MXPA/A/2001/001657A 1998-08-14 2001-02-14 Memory architecture for map decoder MXPA01001657A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US60/096,489 1998-08-14
US09259665 1999-02-26
US09283013 1999-03-31

Publications (1)

Publication Number Publication Date
MXPA01001657A true MXPA01001657A (en) 2002-05-09

Family

ID=

Similar Documents

Publication Publication Date Title
KR100671075B1 (en) Decoder, decoding system and decoding method to facilitate the use of turbo coding
US6754290B1 (en) Highly parallel map decoder
US6434203B1 (en) Memory architecture for map decoder
JP5129216B2 (en) Memory architecture for map decoder
JP2004533140A (en) Space efficient turbo decoder
US20120246539A1 (en) Wireless system with diversity processing
MXPA01001657A (en) Memory architecture for map decoder
RU2236085C2 (en) Memory architecture for maximal a posteriori probability decoder
MXPA01001656A (en) Partitioned deinterleaver memory for map decoder
WO2000057560A2 (en) Improved turbo decoder 1