
CN109525253B - Convolutional code decoding method based on deep learning and integration method - Google Patents


Info

Publication number
CN109525253B
Authority
CN
China
Prior art keywords
convolutional code
decoding
neural network
information
deep neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811250493.9A
Other languages
Chinese (zh)
Other versions
CN109525253A (en)
Inventor
姜小波
张帆
梁冠强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201811250493.9A priority Critical patent/CN109525253B/en
Publication of CN109525253A publication Critical patent/CN109525253A/en
Application granted granted Critical
Publication of CN109525253B publication Critical patent/CN109525253B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — ELECTRICITY
    • H03 — ELECTRONIC CIRCUITRY
    • H03M — CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 — Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 — Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/23 — Error detection or forward error correction by redundancy in data representation using convolutional codes, e.g. unit memory codes
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/24 — Classification techniques
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 — Classification techniques relating to the classification model based on distances to training or reference patterns
    • G06F18/24147 — Distances to closest patterns, e.g. nearest neighbour classification

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Error Detection And Correction (AREA)

Abstract

The invention provides a convolutional code decoding method based on deep learning and an ensemble method. The method sets up weak classifiers and the number of weak classifiers; each weak classifier decodes the convolutional code with a deep neural network or a perceptron, and the depth of the deep neural network is configured; finally, an ensemble method votes on the decoding results of the weak classifiers to obtain the decoded output. The deep neural network is a fully connected neural network, a convolutional neural network, a GAN, or an LSTM. The method applies a deep learning algorithm and an ensemble method to decode the convolutional code and recovers the transmitted information bit sequence from the noisy soft-information sequence.

Figure 201811250493

Description

Convolutional Code Decoding Method Based on Deep Learning and an Ensemble Method

Technical Field

The present invention relates to the field of electronic communication technology, and more particularly to a convolutional code decoding method based on deep learning and an ensemble method.

Background Art

To improve the reliability of signal transmission over a channel, various error-correcting codes are widely used in digital communication. The convolutional code is a widely used coding scheme with good performance, applied in many data transmission systems, especially satellite communication systems; the Viterbi algorithm is a decoding method for convolutional codes.

Convolutional codes were proposed by Elias in 1955. A convolutional code is a recurrent code. It differs from a block code in that, in block encoding and decoding, the n-k check digits of a group depend only on the k information digits of that group and are independent of all other groups, whereas in convolutional encoding and decoding the n-k check digits of a group depend not only on the k information digits of that group but also on the information groups fed to the encoder at earlier times. Because convolutional encoding fully exploits the correlation between groups, and because k and n are small, it has been shown both in theory and in practice that, at the same code rate and equipment complexity, convolutional codes perform at least no worse than block codes.

As for the existing Viterbi decoding of convolutional codes, there is still room for improvement in the trade-off between decoding efficiency and decoding performance: with a fixed decoding window, the Viterbi decoder obtains the optimal path by computing Hamming distances, which greatly reduces decoding efficiency.

Summary of the Invention

The object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a convolutional code decoding method based on deep learning and an ensemble method, which uses a deep learning algorithm and an ensemble method to decode the convolutional code and recovers the transmitted information bit sequence from the noisy soft-information sequence.

To achieve the above object, the present invention is implemented by the following technical solution. A convolutional code decoding method based on deep learning and an ensemble method is characterized in that: weak classifiers and the number of weak classifiers are set; the weak classifiers decode the convolutional code with a deep neural network or a perceptron, and the depth of the deep neural network is configured; finally, an ensemble method votes on the decoding results of the weak classifiers to obtain the decoded output; the deep neural network is a fully connected neural network, a convolutional neural network, a GAN, or an LSTM.

Having the weak classifiers decode the convolutional code with a deep neural network, configuring the network depth, and finally voting on the weak classifiers' decoding results with an ensemble method means the following: in the weak classifiers, a deep neural network model is built and the semi-infinite convolutional code sequence is cut into a training set that matches the network structure; after the model has been trained, the segmented noisy convolutional code is decoded along different dimensions; finally, ensemble voting converts these results into the decoded output of all code words.

The method comprises the following steps:

Step 1: determine the model parameters of the deep neural network in the weak classifier and build the deep neural network model;

Step 2: build the data sample set for convolutional code decoding;

Step 3: train the deep neural network model on the sample set from Step 2, using softmax classification and batch gradient descent;

Step 4: input the convolutional code to be decoded into the deep neural network model obtained in Step 3 for decoding. During decoding, several weak classifiers are obtained which, in different dimensions, classify and decode the information bits corresponding to the noisy convolutional code segments; an ensemble method votes on the decoded outputs of each information bit to produce a strong classifier, yielding the final decoding and completing convolutional code decoding.

In Step 1, determining the model parameters of the deep neural network and building the deep neural network model means: for any (n0, k0, m) convolutional code, set the output layer dimension of the model to n and the input layer dimension to n0×n/k0; set the activation function of the hidden layer to f(x) = relu(x); and build the deep neural network model from the output layer dimension, the input layer dimension, and the hidden layer activation function.
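As a concrete illustration, the model shape just described can be sketched in a few lines of numpy. This is a minimal sketch, not the patented implementation: the function names (`build_model`, `forward`), the random weight initialisation, and the hidden-size default are assumptions; only the dimensions (input n0×n/k0, a relu hidden layer, softmax over the n outputs) follow the text.

```python
import numpy as np

def build_model(n0, k0, n, hidden=512, seed=0):
    """Randomly initialise a one-hidden-layer network whose dimensions
    follow the text: input n0*n/k0 code values, output n positions."""
    rng = np.random.default_rng(seed)
    d_in = n0 * n // k0
    W1 = rng.normal(0.0, 0.1, (d_in, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, n))
    b2 = np.zeros(n)
    return W1, b1, W2, b2

def forward(x, W1, b1, W2, b2):
    """Feedforward pass: f(x) = relu(x) hidden layer, softmax output."""
    h = np.maximum(0.0, x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

For the (2, 1, 2) code of Embodiment 1 with n = 8 this gives a 16-dimensional code-field input; the embodiment additionally prepends two state values to each sample.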

Building the data sample set for convolutional code decoding means the following:

First, randomly generate an information sequence of length L; after (n0, k0, m) convolutional encoding, add white Gaussian noise to obtain a noisy convolutional code information sequence of length n0×L/k0;

Next, prepend 00 to the noisy convolutional code information sequence as state bits, and cut the sequence according to the input dimension of the deep neural network model from Step 1, forming noisy convolutional code segments that match the model size; the start state of any (n0, k0, m) convolutional code is 00;

Finally, construct samples from the noisy convolutional code segments and generate, in batches, a data sample set that matches the deep neural network model.
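The noise-adding step above can be sketched as follows. The BPSK mapping (0 → +1, 1 → −1) and the SNR-to-standard-deviation conversion are standard assumptions not spelled out in the text, and the function name `add_awgn` is illustrative.

```python
import numpy as np

def add_awgn(code_bits, snr_db, rng=None):
    """Map code bits to BPSK symbols and add white Gaussian noise at the
    requested signal-to-noise ratio, producing the noisy soft-information
    sequence the decoder is trained on."""
    if rng is None:
        rng = np.random.default_rng()
    symbols = 1.0 - 2.0 * np.asarray(code_bits, dtype=float)  # 0->+1, 1->-1
    noise_std = np.sqrt(0.5 * 10.0 ** (-snr_db / 10.0))
    return symbols + rng.normal(0.0, noise_std, symbols.shape)
```

In the embodiment below the SNR is drawn from the range 1 dB to 7 dB; at high SNR the output approaches the clean BPSK symbols.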

Constructing samples from the noisy convolutional code segments and generating, in batches, a data sample set that matches the deep neural network model means:

(1) In a noisy convolutional code segment, the first k0×m values are the state bits of the original code word and the following n0×n/k0 values are the noisy code field; together they form the first training sample;

(2) Set the sample window over the information bits to size N. For the second training sample, slide the sample window one position backwards along the noisy code sequence; the second 0 of the previous segment's state bits and the first bit of that code field become the new state bits, which, together with the code bits newly covered by the window, form the second training sample;

(3) Proceeding in the same way, generate in batches, from the whole noisy code sequence and the corresponding information bits, a data sample set that matches the deep neural network model.
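Steps (1)-(3) can be sketched as a simple sliding window. This is one plausible reading of the text (the function name `build_samples` and the one-position step size are assumptions; labels, the one-hot information bits, are omitted for brevity): each sample is the two state values followed by the n0×N noisy code values, and sliding by one position makes the next sample's state the second value of the previous state plus the first value of the previous code field, as step (2) describes.

```python
def build_samples(noisy, n0=2, N=8):
    """Cut the noisy code sequence into overlapping training windows of
    2 state values + n0*N code values, sliding one position per sample."""
    padded = [0.0, 0.0] + list(noisy)        # leading 00 state bits
    width = 2 + n0 * N
    return [padded[i:i + width] for i in range(len(padded) - width + 1)]
```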

In Step 3, the deep neural network model is trained by updating the weights through the two processes of feedforward computation and backpropagation until the optimal weights are obtained, giving the model its classification ability.

In Step 4, inputting the convolutional code to be decoded into the deep neural network model obtained in Step 3, obtaining several weak classifiers that classify and decode the information bits in different dimensions, and voting on the decoded outputs with an ensemble method to produce a strong classifier and the final decoding means:

(1) Encode and add noise to obtain a noisy convolutional code information sequence, append zero information bits after its information bits, and feed it into the deep neural network model;

(2) Set the initial state bits to 00 and decode the first information bit;

(3) Update the state of the noisy convolutional code information sequence and slide one position backwards along it, obtaining one decoded output per slide. When the number of slides reaches the output layer dimension n of the model, n weak classifiers covering the information bit have been obtained; these n weak classifiers classify the bit in different dimensions and give n decoded outputs. The ensemble method then votes on the bit to produce a strong classifier: the n outputs are tallied and the majority value is taken as the decoding result for that bit;

(4) Repeat this decoding step for the rest of the noisy convolutional code information sequence to complete the convolutional code decoding.
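The voting in step (3) reduces to a majority count over the n overlapping decisions. A minimal sketch (the function names `majority_vote` and `vote_bit` are illustrative; the index arithmetic assumes each slide advances the bit by one output position, as described above):

```python
from collections import Counter

def majority_vote(decisions):
    """The strong classifier: the value output most often among the
    weak-classifier decisions for one information bit."""
    return Counter(decisions).most_common(1)[0][0]

def vote_bit(window_outputs, bit_index, n=8):
    """Collect the decisions that successive windows made for the same
    information bit (window w sees it at output position bit_index - w),
    then majority-vote them."""
    votes = [window_outputs[w][bit_index - w]
             for w in range(max(0, bit_index - n + 1),
                            min(bit_index, len(window_outputs) - 1) + 1)]
    return majority_vote(votes)
```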

Compared with the prior art, the present invention has the following advantages and beneficial effects:

1. The convolutional code decoding method based on deep learning and an ensemble method uses a deep learning algorithm and an ensemble method to decode the convolutional code and recovers the transmitted information bit sequence from the noisy soft-information sequence.

2. The invention obtains multiple weak classifiers from a single deep neural network model and integrates them into one strong classifier, which greatly improves the decoding performance of the model.

Brief Description of the Drawings

Fig. 1 is a flowchart of the convolutional code decoding method based on deep learning and an ensemble method according to Embodiment 1 of the present invention;

Fig. 2 is a structural diagram of the deep neural network model of Embodiment 1;

Fig. 3 shows the decoding process of the weak classifiers in the method of Embodiment 1;

Fig. 4 compares the decoding performance of the method of Embodiment 1 with that of Viterbi decoding.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

Embodiment 1

Taking the (2, 1, 2) convolutional code as an example, the convolutional code decoding method based on deep learning and an ensemble method provided by the present invention is explained in detail. The code is encoded as follows: the start state of each segment of the convolutional code is given by the two input bits preceding that segment, and the start state at the very beginning of the encoding is 00. The generator is G(D) = [1+D+D², 1+D²].
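This generator corresponds to a two-stage shift-register encoder. A minimal sketch (the function name `conv_encode` is illustrative; the tap equations follow directly from G(D)):

```python
def conv_encode(info_bits):
    """(2,1,2) encoder for G(D) = [1+D+D^2, 1+D^2]: each input bit u[t]
    produces c1 = u[t] ^ u[t-1] ^ u[t-2] and c2 = u[t] ^ u[t-2],
    starting from the all-zero state 00."""
    s1 = s2 = 0                      # shift register holds u[t-1], u[t-2]
    out = []
    for u in info_bits:
        out.append(u ^ s1 ^ s2)      # 1 + D + D^2 branch
        out.append(u ^ s2)           # 1 + D^2 branch
        s1, s2 = u, s1
    return out
```

For example, the input [1, 0, 1, 1] encodes to [1, 1, 1, 0, 0, 0, 0, 1].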

As shown in Figs. 1 to 4, the convolutional code decoding method based on deep learning and an ensemble method works as follows: weak classifiers and the number of weak classifiers are set; the weak classifiers decode the convolutional code with a deep neural network, whose depth is configured; finally, an ensemble method votes on the decoding results of the weak classifiers to obtain the decoded output; the deep neural network is a fully connected neural network, a convolutional neural network, a GAN, or an LSTM.

Specifically, a deep neural network model is built and the semi-infinite convolutional code sequence is cut into a training set that matches the network structure; after the model has been trained, the segmented noisy convolutional code is decoded along different dimensions, and ensemble voting finally converts the results into the decoded output of all code words.

The method comprises the following steps:

Step 1: determine the model parameters of the deep neural network in the weak classifier and build the deep neural network model;

Step 2: build the data sample set for convolutional code decoding;

Step 3: train the deep neural network model on the sample set from Step 2, using softmax classification and batch gradient descent;

Step 4: input the convolutional code to be decoded into the deep neural network model obtained in Step 3 for decoding. During decoding, several weak classifiers are obtained which, in different dimensions, classify the information bits corresponding to the noisy convolutional code segments; an ensemble method votes on each information bit to produce a strong classifier, which performs the decoding and completes convolutional code decoding.

The specific steps are as follows:

(1) First, determine the model parameters of the deep neural network and build the model. The output layer dimension of the network can be set to 8; the corresponding input layer dimension is 8×2+2 = 18, where the extra 2 values are the start state of the code segment. Since the structure of the (2, 1, 2) convolutional code is fairly simple, one hidden layer is sufficient; here its size is set to 512 and its activation function to f(x) = relu(x). The deep neural network model is then built from the output layer dimension, the input layer dimension, and the hidden layer activation function.

(2) Construct samples from the noisy convolutional code segments and generate, in batches, a data sample set that matches the deep neural network model.

Randomly generate an information sequence of length L; after (2, 1, 2) convolutional encoding, add white Gaussian noise in the range 1 dB to 7 dB to obtain a noisy convolutional code information sequence of length 2×L. Take L = 1000. For convenience of decoding, the last 7 bits can be set to zero information bits so that the decoding process ends once the eighth-from-last bit has been decoded. When constructing samples, the noisy sequence is cut according to the input dimension of the deep neural network model from Step 1, forming noisy code segments that match the model size. With an information bit sliding window of size 8, encoding followed by Gaussian noise yields 16-value code word samples. The first training sample starts with the state bits 00, followed by the 2×8 noisy code field.

For the second training sample, the window slides one position backwards along the noisy code sequence; the second 0 of the previous segment's state bits and the first bit of that code field become the state bits, which, together with the code bits newly covered by the window, form the input; the label is the one-hot form of the code field before encoding. Proceeding in the same way, the whole information sequence of length L is converted into a training sample set for a neural network with an input layer of size 18 and an output layer of size 8, where each input contains the state bits and the code field and each output is the one-hot form of the decoded code field.

(3) After obtaining the data sample set, the hidden layer uses f(x) = relu(x) as its activation function, and the deep neural network model is trained with softmax classification and batch gradient descent. Training updates the weights through the two processes of feedforward computation and backpropagation until the optimal weights are obtained, giving the model its classification ability. The above constitutes one complete training pass; as training proceeds, the error keeps decreasing, i.e. the deep neural network gradually learns to decode the noisy convolutional code information sequence. Training is repeated until the accuracy and error of the network stabilise; here the number of training iterations is set to 2000.
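The training loop just described (softmax classification, batch gradient descent, feedforward followed by backpropagation) can be sketched with numpy. This is an illustrative sketch, not the patented implementation: the function name `train`, the learning rate, and the weight initialisation are assumptions.

```python
import numpy as np

def train(X, Y, hidden=512, lr=0.1, epochs=200, seed=0):
    """One-hidden-layer relu network trained with softmax cross-entropy
    and full-batch gradient descent: each epoch is one feedforward pass
    followed by one backpropagation pass that updates all weights."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, Y.shape[1])); b2 = np.zeros(Y.shape[1])
    losses = []
    for _ in range(epochs):
        h = np.maximum(0.0, X @ W1 + b1)                   # feedforward
        logits = h @ W2 + b2
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        p = e / e.sum(axis=1, keepdims=True)               # softmax
        losses.append(float(-np.mean(np.sum(Y * np.log(p + 1e-12), axis=1))))
        g = (p - Y) / len(X)                               # backpropagation
        dW2, db2 = h.T @ g, g.sum(axis=0)
        dh = g @ W2.T
        dh[h <= 0.0] = 0.0                                 # relu gradient
        dW1, db1 = X.T @ dh, dh.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return (W1, b1, W2, b2), losses
```

On a toy problem (e.g. one-hot XOR labels) the returned loss curve decreases, mirroring the text's observation that the error keeps dropping as training proceeds.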

(4) Input the convolutional code to be decoded into the trained deep neural network model to complete decoding. For the trained model, randomly generate an information sequence, encode it with the (2, 1, 2) convolutional code, add white Gaussian noise in the range 1 dB to 7 dB, and feed the resulting noisy soft information into the network according to its input layer size. The initial state bits are 00. Taking the first information bit of the convolutional code as an example, decoding the first sample gives a decoded result of length 8, whose last position is the first decoding of that information bit. Taking the second sample is equivalent to sliding one position along the noisy code word and gives another output, in which the first information bit corresponds to the second-to-last position of the decoded output. In this example the sliding window size is 8; when the number of slides reaches the output layer size 8 of the deep neural network, 8 weak classifiers covering that information bit have been obtained. These 8 weak classifiers classify the bit in different dimensions and give 8 decoded outputs; the ensemble method votes on the bit to produce a strong classifier: the 8 outputs are tallied and the majority value is taken as the decoding result, giving better decoding performance for that bit. The whole process is shown in Fig. 3: information bits of the same colour occupy the same position in the convolutional code, and each column is the decoded output of one sample, so each bit integrates 8 decoding results. Repeating this step for the subsequent code words completes the decoding of the entire convolutional code.

Embodiment 2

The convolutional code decoding method based on deep learning and an ensemble method of this embodiment works as follows: weak classifiers and the number of weak classifiers are set, and the weak classifiers decode the convolutional code with a perceptron, the depth of the deep neural network being configured; finally, an ensemble method votes on the decoding results of the weak classifiers to obtain the decoded output; the deep neural network is a fully connected neural network, a convolutional neural network, a GAN, or an LSTM.

The above embodiments are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and falls within the protection scope of the present invention.

Claims (4)

1. A convolutional code decoding method based on deep learning and integration method is characterized in that: the method comprises the following steps:
firstly, setting the number of weak classifiers and weak classifiers; determining model parameters of a deep neural network in a weak classifier, and establishing a deep neural network model; the deep neural network is a fully-connected neural network, a convolutional neural network, a GAN or an LSTM;
secondly, establishing a data sample set of convolutional code decoding;
thirdly, training a deep neural network model by using the data sample set in the second step and adopting a softmax classification mode and a batch gradient descent method;
fourthly, inputting the convolutional code to be decoded into the deep neural network model obtained in the third step for decoding; in the decoding process, a plurality of weak classifiers are obtained to classify and decode information bits corresponding to convolutional code information code segments to be decoded in different dimensions, an integrated method is adopted to vote for the decoded output of the information bits to generate a strong classifier so as to obtain final decoding, and the convolutional code decoding is completed;
the establishing of the data sample set of the convolutional code decoding refers to:
first, an information sequence of length L is randomly generated and passed through (n)0,k0M) after encoding the convolutional code, the length n is obtained by Gaussian white noise plus noise0×L/k0The information sequence of the noisy convolutional code;
secondly, adding 00 as a state bit in front of the noisy convolutional code information sequence, and segmenting the noisy convolutional code information sequence according to the input dimension of the deep neural network model in the first step to form a noisy convolutional code information code field corresponding to the size of the deep neural network model; wherein, for any one (n)0,k0M) the start states of the convolutional codes are all 00;
finally, carrying out sample construction on the noisy convolutional code information code word segment, and generating a data sample set conforming to the deep neural network model in batches;
the step of carrying out sample construction on the noisy convolutional code information code segment and generating a data sample set conforming to the deep neural network model in batches is as follows:
(1) in the noisy convolutional code information code field, the first k0×m bits are the state bits of the original codeword, and the remaining n0×n/k0 bits are the noisy convolutional code information code field; together they serve as the first training sample;
(2) the size of the sample information-bit window is set to N; when the second training sample is taken, the sample window slides backward by one bit along the sequence direction over the noisy convolutional code information code field; the second 0 of the previous code field's state bits and the first bit of the previous code field serve as the new state bits, and the code bit that enters the sample window after sliding is appended, forming the second training sample;
(3) in the same way, a data sample set conforming to the deep neural network model is generated in batches from the full noisy convolutional code information code field and the corresponding information bits.
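A sketch of the one-bit sliding-window sample construction described in steps (1)–(3). The parameters (n=4, n0=2, k0=1, m=2) and the simplification that new state bits are just the bits immediately preceding the window are illustrative assumptions:

```python
import numpy as np

def build_samples(noisy, n=4, n0=2, k0=1, m=2):
    """Slice a noisy coded sequence into overlapping training windows.
    Each window holds k0*m state bits followed by n0*n/k0 code bits,
    and the window advances one position per sample (claim step (2))."""
    state_len = k0 * m
    field_len = n0 * n // k0
    padded = np.concatenate([np.zeros(state_len), noisy])  # prepend 00 state bits
    win = state_len + field_len
    samples = [padded[i:i + win] for i in range(len(padded) - win + 1)]
    return np.array(samples)

# 12 coded values -> 5 overlapping windows of 2 state bits + 8 code bits each
samples = build_samples(np.arange(12, dtype=float))
```

Each information bit thus appears in several overlapping windows, which is what later lets n different weak classifiers decode the same bit.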
2. The convolutional code decoding method based on deep learning and integration method as claimed in claim 1, wherein: in the first step, determining the model parameters of the deep neural network and establishing the deep neural network model comprises: for any (n0, k0, m) convolutional code, setting the output layer dimension of the deep neural network model to n and the input layer dimension to n0×n/k0; setting the activation function of the hidden layers to f(x) = relu(x); and establishing the deep neural network model from the output layer dimension, the input layer dimension, and the hidden-layer activation function.
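The dimensioning in claim 2 can be sketched with a plain numpy forward pass. The hidden width (64) and the example values n=4, n0=2, k0=1 are assumptions for illustration; only the input size n0×n/k0, the output size n, and relu hidden activations come from the claim:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def init_mlp(n=4, n0=2, k0=1, hidden=(64,), rng=None):
    """Layer sizes per claim 2: input n0*n/k0, output n.
    The hidden width is an illustrative assumption."""
    rng = rng or np.random.default_rng(0)
    dims = [n0 * n // k0] + list(hidden) + [n]
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = relu(x)          # hidden activation f(x) = relu(x)
    return softmax(x)            # softmax classification at the output

params = init_mlp()
y = forward(params, np.ones((3, 8)))  # batch of 3 inputs of dimension n0*n/k0 = 8
```

Each row of the softmax output sums to 1, giving a probability over the n output dimensions.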
3. The convolutional code decoding method based on deep learning and integration method as claimed in claim 1, wherein: in the third step, during training of the deep neural network model, the weights are updated through the two processes of feedforward computation and backpropagation to obtain the optimal weights, so that the model acquires classification capability.
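The feedforward-then-backpropagation weight update of claim 3 can be sketched for a single softmax output layer trained with cross-entropy and batch gradient descent; the data, layer sizes, and learning rate here are illustrative assumptions, not the patent's configuration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def batch_gd_step(W, b, X, Y, lr=0.5):
    """One feedforward pass plus one backpropagation update
    (single-layer sketch of batch gradient descent)."""
    probs = softmax(X @ W + b)              # feedforward computation
    grad = (probs - Y) / len(X)             # dLoss/dlogits for cross-entropy
    W = W - lr * X.T @ grad                 # backpropagate into the weights
    b = b - lr * grad.sum(axis=0)
    loss = -np.log((probs * Y).sum(axis=1)).mean()
    return W, b, loss

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))                # assumed toy batch
Y = np.eye(4)[rng.integers(0, 4, 32)]       # one-hot targets over 4 classes
W, b = np.zeros((8, 4)), np.zeros(4)
losses = []
for _ in range(50):
    W, b, loss = batch_gd_step(W, b, X, Y)
    losses.append(loss)
```

Over repeated feedforward/backpropagation updates the cross-entropy loss decreases, which is the sense in which the weights approach their optimum.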
4. The convolutional code decoding method based on deep learning and integration method as claimed in claim 2, wherein: in the fourth step, inputting the convolutional code to be decoded into the deep neural network model obtained in the third step for decoding, where a plurality of weak classifiers classify and decode, in different dimensions, the information bits corresponding to the convolutional code information code segments to be decoded, and an ensemble method votes over the decoded outputs to form a strong classifier that yields the final decoding, refers to:
(1) encoding the convolutional code to be decoded and adding noise to obtain a noisy convolutional code information sequence, then appending a zero information bit after the information bits of the noisy convolutional code information sequence and inputting the result into the deep neural network model;
(2) setting the initial state bit as 00, and decoding the first information bit;
(3) updating the state of the noisy convolutional code information sequence and sliding backward by one bit along the sequence; each one-bit slide yields one decoding output; when the number of slides reaches the output-layer dimension n of the deep neural network model, n weak classifiers covering the information bit are obtained, which classify the bit in different dimensions and produce n decoding outputs; the ensemble method then votes on the information bit to form a strong classifier: the n decoding outputs are tallied, and the majority value is taken as the decoding result for that bit;
(4) repeating the above decoding steps for the remaining noisy convolutional code information sequence to complete the convolutional code decoding.
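The voting step of claim 4 reduces to a per-bit majority over the n weak classifiers' decisions. A minimal sketch, with n=3 and the decision matrix assumed for illustration:

```python
import numpy as np

def vote_decode(decisions):
    """decisions: (n, L) array of 0/1 bit decisions, one row per weak
    classifier. Returns the per-bit majority vote -- the strong
    classifier's final decoding."""
    d = np.asarray(decisions)
    return (2 * d.sum(axis=0) > d.shape[0]).astype(int)

# n = 3 weak classifiers each decode the same 3 information bits
final = vote_decode([[1, 0, 1],
                     [1, 1, 1],
                     [0, 0, 1]])
```

Here the first bit gets votes (1, 1, 0), the second (0, 1, 0), the third (1, 1, 1), so the strong classifier outputs 1, 0, 1.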
CN201811250493.9A 2018-10-25 2018-10-25 Convolutional code decoding method based on deep learning and integration method Active CN109525253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811250493.9A CN109525253B (en) 2018-10-25 2018-10-25 Convolutional code decoding method based on deep learning and integration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811250493.9A CN109525253B (en) 2018-10-25 2018-10-25 Convolutional code decoding method based on deep learning and integration method

Publications (2)

Publication Number Publication Date
CN109525253A CN109525253A (en) 2019-03-26
CN109525253B true CN109525253B (en) 2020-10-27

Family

ID=65774135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811250493.9A Active CN109525253B (en) 2018-10-25 2018-10-25 Convolutional code decoding method based on deep learning and integration method

Country Status (1)

Country Link
CN (1) CN109525253B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110739977B (en) * 2019-10-30 2023-03-21 华南理工大学 BCH code decoding method based on deep learning
CN110912566B (en) * 2019-11-28 2023-09-29 福建江夏学院 Digital audio broadcasting system channel decoding method based on sliding window function
CN112953565B (en) * 2021-01-19 2022-06-14 华南理工大学 A return-to-zero convolutional code decoding method and system based on convolutional neural network
CN115424262A (en) * 2022-08-04 2022-12-02 暨南大学 Method for optimizing zero sample learning


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6374385B1 (en) * 1998-05-26 2002-04-16 Nokia Mobile Phones Limited Method and arrangement for implementing convolutional decoding
CN106571831A (en) * 2016-10-28 2017-04-19 华南理工大学 LDPC hard decision decoding method based on depth learning and decoder
CN107194433A (en) * 2017-06-14 2017-09-22 电子科技大学 A kind of Radar range profile's target identification method based on depth autoencoder network

Non-Patent Citations (1)

Title
Research on LDPC Decoding Algorithms Based on Deep Learning; Li Jie; China Masters' Theses Full-text Database; 2018-06-15 (No. 06); pp. 38-46 *

Also Published As

Publication number Publication date
CN109525253A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
CN109525253B (en) Convolutional code decoding method based on deep learning and integration method
CN107040262B Method for calculating the List predicted value of polar code SCL+CRC decoding
CN109525254B (en) Deep learning-based soft-decision decoding method for convolutional codes
CN106571831B (en) LDPC hard decision decoding method and decoder based on deep learning
CN110474716A Method for establishing SCMA codec models based on a denoising autoencoder
CN110278002A (en) Polar Code Belief Propagation List Decoding Method Based on Bit Flip
CN106571832A (en) Multi-system LDPC cascaded neural network decoding method and device
CN108540267B (en) A method and device for detecting multi-user data information based on deep learning
CN109547032B (en) Confidence propagation LDPC decoding method based on deep learning
CN109728824B (en) LDPC code iterative decoding method based on deep learning
CN108155972A (en) The decoding optimization method of distributed associating signal source and channel system
CN109495211B (en) Channel coding and decoding method
CN110299921B (en) A Model-Driven Deep Learning Decoding Method for Turbo Codes
CN101467459B (en) Generation method of vector quantization dictionary, encoder and decoder, and encoding and decoding method
CN112953565B (en) A return-to-zero convolutional code decoding method and system based on convolutional neural network
CN115276668A (en) LDPC code hybrid decoding method based on CRC
CN116155297A (en) Data compression method, device, equipment and storage medium
Berezkin et al. Data compression methods based on neural networks
CN118014094B (en) Quantum computing method, quantum circuit, device and medium for determining function classification
Huang et al. Recognition of channel codes based on BiLSTM-CNN
CN110798224A (en) Compression coding, error detection and decoding method
CN114448570B (en) Deep learning decoding method of distributed joint information source channel coding system
Shehab et al. Recurrent neural network based prediction to enhance satellite telemetry compression
CN109039531B (en) Method for adjusting LT code coding length based on machine learning
Zhu et al. Deep learning for waveform level receiver design with natural redundancy

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant