CN113919401A - Modulation type identification method and device based on constellation diagram characteristics and computer equipment - Google Patents
- Publication number
- CN113919401A (application number CN202111304357.5A)
- Authority
- CN
- China
- Prior art keywords
- layer
- convolutional
- neural network
- output
- inputting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Abstract
The embodiments of the present application belong to the field of signal processing and relate to a modulation type identification method based on constellation diagram features, which comprises the following steps: acquiring an input signal; preprocessing the input signal to obtain a preprocessed signal; calculating the constellation diagram of the processed signal to obtain a signal constellation diagram; generating a gray scale map and a binary map according to the signal constellation map to obtain the features of the signal constellation map; and inputting the features of the signal constellation diagram into a trained neural network to obtain the type identification result of the input signal. The application also provides a modulation type identification apparatus based on constellation diagram features, a computer device, and a storage medium. The identification performance of modulation type identification based on constellation diagram features is thereby improved.
Description
Technical Field
The present application relates to the field of signal processing, and in particular, to a modulation type identification method and apparatus based on constellation diagram features, a computer device, and a storage medium.
Background
The modulation type identification method based on constellation diagram features is mainly applicable to amplitude- and phase-modulated signals. Different modulation types correspond to different constellation diagrams, which can therefore serve as a basis for identifying the modulation type. Researchers have used constellation feature matching algorithms to recognize different signal modulation types, and experimental results show that this approach resists noise interference well. Another method extracts the amplitude statistics, blind clustering features, and template matching features of the constellation diagram as feature information to identify the modulation type; experiments show that this algorithm has a small computational load together with high classification accuracy and robustness. Shu Chang et al. proposed a signal modulation type recognition algorithm based on the biquadratic spectrum and the constellation diagram structure, which extracts feature information from the constellation diagram features and the biquadratic spectrum features and performs modulation type identification with a classifier. However, because these constellation-based methods operate on the complex baseband symbol sequence, they require a large amount of prior knowledge and extensive signal preprocessing, and their modulation type identification performance is limited.
Disclosure of Invention
The embodiments of the present application aim to provide a modulation type identification method and apparatus based on constellation diagram features, a computer device, and a storage medium, so as to improve the accuracy of signal identification.
In order to solve the above technical problem, an embodiment of the present application provides a modulation type identification method based on constellation diagram features, which adopts the following technical solutions:
acquiring an input signal;
preprocessing the input signal to obtain a preprocessed signal;
calculating the constellation diagram of the processed signal to obtain a signal constellation diagram;
generating a gray scale map and a binary map according to the signal constellation map to obtain the characteristics of the signal constellation map;
and inputting the characteristics of the signal constellation diagram into a trained neural network to obtain a type identification result of the input signal.
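The five steps above can be sketched end to end in pure Python. The function names, the unit-power normalization, the 8x8 grid, and the [-1.5, 1.5] plotting extent are all illustrative assumptions, and the classifier is passed in as a stub standing in for the trained neural network:

```python
def preprocess(samples):
    """Normalize the received complex samples to unit average power
    (a stand-in for the preprocessing step, which may also include
    noise reduction, carrier/symbol estimation, etc.)."""
    power = sum(abs(s) ** 2 for s in samples) / len(samples)
    scale = power ** 0.5 if power > 0 else 1.0
    return [s / scale for s in samples]

def constellation(symbols):
    """Map complex baseband symbols to (I, Q) constellation points."""
    return [(s.real, s.imag) for s in symbols]

def constellation_features(points, size=8):
    """Bin constellation points into a size x size grayscale grid and
    threshold it into a binary map (hypothetical feature generator)."""
    gray = [[0] * size for _ in range(size)]
    for i, q in points:
        x = min(int((i + 1.5) / 3.0 * size), size - 1)
        y = min(int((q + 1.5) / 3.0 * size), size - 1)
        gray[y][x] += 1
    binary = [[1 if c > 0 else 0 for c in row] for row in gray]
    return gray, binary

def identify(samples, classifier):
    """End-to-end pipeline: preprocess -> constellation -> features -> network."""
    sig = preprocess(samples)
    gray, binary = constellation_features(constellation(sig))
    return classifier(gray, binary)
```

For example, QPSK samples at (±0.707, ±0.707) occupy exactly four cells of the binary map, which a real classifier would learn to distinguish from other constellations.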
Further, the step of inputting the features of the signal constellation diagram to a trained neural network to obtain the type recognition result of the input signal specifically includes:
inputting the gray-scale image and the binary image into a first convolutional layer to obtain a first convolution output, wherein the first convolutional layer has 64 convolution kernels of size 7x7 and a convolution stride of 4x4;
inputting the first convolution output into a first BN layer, and processing the output of the first BN layer with a ReLU function to obtain a first-layer output result;
inputting the first-layer output result into a max pooling layer to obtain a first-layer feature map;
inputting the first-layer feature map into a second convolutional layer to obtain a second convolution output, wherein the second convolutional layer has 128 convolution kernels of size 4x4 and a convolution stride of 1x1;
inputting the second convolution output into a second BN layer, and processing the output of the second BN layer with a ReLU function to obtain a second-layer output result;
inputting the second-layer output result into a max pooling layer to obtain a second-layer feature map;
inputting the second-layer feature map into a third convolutional layer to obtain a third convolution output, wherein the third convolutional layer has 256 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the third convolution output into a third BN layer, and processing the output of the third BN layer with a ReLU function to obtain a third-layer output result;
inputting the third-layer output result into a fourth convolutional layer to obtain a fourth convolution output, wherein the fourth convolutional layer has 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the fourth convolution output into a fourth BN layer, and processing the output of the fourth BN layer with a ReLU function to obtain a fourth-layer output result;
inputting the fourth-layer output result into a fifth convolutional layer to obtain a fifth convolution output, wherein the fifth convolutional layer has 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the fifth convolution output into a fifth BN layer, and processing the output of the fifth BN layer with a ReLU function to obtain a fifth-layer output result;
inputting the fifth-layer output result into a max pooling layer to obtain a fifth-layer feature map;
inputting the third-layer output result into a sixth convolutional layer to obtain a sixth convolution output, wherein the sixth convolutional layer has 32 convolution kernels of size 1x1 and a convolution stride of 1x1;
inputting the sixth convolution output into a sixth BN layer, and processing the output of the sixth BN layer with a ReLU function to obtain a sixth-layer output result;
inputting the fourth-layer output result into a seventh convolutional layer to obtain a seventh convolution output, wherein the seventh convolutional layer has 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
and inputting the seventh convolution output, the sixth-layer output result, and the fifth-layer feature map into a feature association layer to obtain the type identification result of the input signal.
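The steps above fix only the kernel counts, kernel sizes, and strides; the sketch below traces the resulting feature-map side lengths through the stack, assuming (hypothetically) a 224x224 input, no padding, and 2x2/stride-2 max pooling windows:

```python
def conv_out(n, k, s):
    """Output side length of a convolution/pooling over an n x n input
    with kernel k, stride s, no padding: floor((n - k) / s) + 1."""
    return (n - k) // s + 1

def msfcn_shapes(n=224):
    """Trace feature-map side lengths through the layer stack described
    above. The input size and pooling windows are assumptions."""
    shapes = {}
    n1 = conv_out(n, 7, 4); shapes["conv1"] = n1    # 64 kernels, 7x7, stride 4
    p1 = conv_out(n1, 2, 2); shapes["pool1"] = p1
    n2 = conv_out(p1, 4, 1); shapes["conv2"] = n2   # 128 kernels, 4x4
    p2 = conv_out(n2, 2, 2); shapes["pool2"] = p2
    n3 = conv_out(p2, 3, 1); shapes["conv3"] = n3   # 256 kernels, 3x3
    n4 = conv_out(n3, 3, 1); shapes["conv4"] = n4   # 128 kernels, 3x3
    n5 = conv_out(n4, 3, 1); shapes["conv5"] = n5   # 128 kernels, 3x3
    shapes["pool3"] = conv_out(n5, 2, 2)
    shapes["conv6"] = conv_out(n3, 1, 1)            # 32 kernels, 1x1, branches from layer 3
    shapes["conv7"] = conv_out(n4, 3, 1)            # 128 kernels, 3x3, branches from layer 4
    return shapes
```

Under these assumptions the three tensors reaching the feature association layer (pool3, conv6, conv7) have different spatial scales, which is what the fusion step exploits.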
Further, the step of inputting the seventh convolution output, the sixth-layer output result, and the fifth-layer feature map into the feature association layer to obtain the type identification result of the input signal specifically includes:
inputting the seventh convolution output, the sixth-layer output result, and the fifth-layer feature map into a first fully connected layer to obtain three feature vectors;
splicing the three feature vectors to obtain a signal vector;
and inputting the signal vector into a second fully connected layer, and calculating the confidence of the input signal type through a softmax function to obtain the type identification result of the input signal.
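A minimal pure-Python sketch of this fusion head, assuming the three branch outputs have already been flattened to vectors by the first fully connected layer; `w2` and `b2` are the (hypothetical) parameters of the second fully connected layer:

```python
import math

def softmax(v):
    """Numerically stable softmax over a list of logits."""
    m = max(v)
    e = [math.exp(x - m) for x in v]
    t = sum(e)
    return [x / t for x in e]

def feature_association(branches, w2, b2):
    """Splice the flattened feature vectors from the three branches into
    one signal vector, apply the second fully connected layer, and
    return per-class softmax confidences."""
    fused = [x for branch in branches for x in branch]
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(w2, b2)]
    return softmax(logits)
```

The predicted modulation type is then simply the class with the highest confidence.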
Further, before the step of inputting the features of the signal constellation diagram to the trained neural network to obtain the type recognition result of the input signal, the method further includes:
acquiring a plurality of training data and a label corresponding to the training data;
inputting the training data and the corresponding label to an initial neural network model;
training the initial neural network model through f_i^n = σ(W^n f_i^{n-1} + b^n), where σ is the activation function, to obtain the trained convolutional neural network model; w_k^n, the kth row of W^n, represents the weight vector obtained by training the kth neuron in the nth layer of the multilayer perceptron of the trained neural network model on the output of the (n-1)th layer of the multilayer perceptron, b_k^n represents the offset corresponding to w_k^n, f_i^n represents the output of the nth layer of the trained neural network model after the ith training data item is input into it, i is any positive integer, n is a natural number, f_i^n of the last layer is the output of the trained convolutional neural network model, and f_i^{n-1} represents the output of the (n-1)th layer after the ith training data item is input into the trained convolutional neural network model;
and deploying the trained neural network model.
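The per-layer computation described above — each neuron k in layer n combines the (n-1)th layer's output with its trained weight vector and offset — can be sketched per neuron as follows; the ReLU default for the activation is an assumption carried over from the convolutional stage:

```python
def relu(x):
    """ReLU activation (assumed; the text fixes ReLU only for the CNN stage)."""
    return x if x > 0.0 else 0.0

def layer_forward(f_prev, weights, biases, activation=relu):
    """One layer of the multilayer perceptron:
    f_i^n = activation(W^n f_i^{n-1} + b^n),
    where weights[k] is the weight vector w_k^n of the kth neuron in
    layer n and biases[k] is its corresponding offset b_k^n."""
    return [activation(sum(w * x for w, x in zip(w_k, f_prev)) + b_k)
            for w_k, b_k in zip(weights, biases)]
```

Stacking such calls layer by layer, with the last layer's output taken as the model output, reproduces the forward pass of the trained model.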
Further, the step of deploying the trained neural network model further comprises:
calculating a global loss value;
and if the global loss value is larger than the threshold value, adjusting the weight of the neuron.
Further, the initial neural network model at least includes a convolutional layer and a fully connected layer, and the step of training the initial neural network model through f_i^n = σ(W^n f_i^{n-1} + b^n) to obtain the trained convolutional neural network model specifically includes:
setting the probability of the weight of the neuron in the full connection layer to be 0 by 50%.
Further, the step of training the initial neural network model through f_i^n = σ(W^n f_i^{n-1} + b^n) to obtain the trained convolutional neural network model specifically includes:
terminating the training when the number of network training iterations is greater than the preset maximum number of learning iterations.
In order to solve the above technical problem, an embodiment of the present application further provides a modulation type identification apparatus based on a constellation diagram feature, which adopts the following technical scheme:
the acquisition module is used for acquiring an input signal;
the preprocessing module is used for preprocessing the input signal to obtain a preprocessed signal;
the computing module is used for computing the constellation diagram of the processed signal to obtain a signal constellation diagram;
the generating module is used for generating a gray scale map and a binary map according to the signal constellation map to obtain the characteristics of the signal constellation map;
and the recognition module is used for inputting the characteristics of the signal constellation diagram into the trained neural network to obtain the type recognition result of the input signal.
Further, the identification module is further configured to:
inputting the gray-scale image and the binary image into a first convolutional layer to obtain a first convolution output, wherein the first convolutional layer has 64 convolution kernels of size 7x7 and a convolution stride of 4x4;
inputting the first convolution output into a first BN layer, and processing the output of the first BN layer with a ReLU function to obtain a first-layer output result;
inputting the first-layer output result into a max pooling layer to obtain a first-layer feature map;
inputting the first-layer feature map into a second convolutional layer to obtain a second convolution output, wherein the second convolutional layer has 128 convolution kernels of size 4x4 and a convolution stride of 1x1;
inputting the second convolution output into a second BN layer, and processing the output of the second BN layer with a ReLU function to obtain a second-layer output result;
inputting the second-layer output result into a max pooling layer to obtain a second-layer feature map;
inputting the second-layer feature map into a third convolutional layer to obtain a third convolution output, wherein the third convolutional layer has 256 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the third convolution output into a third BN layer, and processing the output of the third BN layer with a ReLU function to obtain a third-layer output result;
inputting the third-layer output result into a fourth convolutional layer to obtain a fourth convolution output, wherein the fourth convolutional layer has 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the fourth convolution output into a fourth BN layer, and processing the output of the fourth BN layer with a ReLU function to obtain a fourth-layer output result;
inputting the fourth-layer output result into a fifth convolutional layer to obtain a fifth convolution output, wherein the fifth convolutional layer has 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the fifth convolution output into a fifth BN layer, and processing the output of the fifth BN layer with a ReLU function to obtain a fifth-layer output result;
inputting the fifth-layer output result into a max pooling layer to obtain a fifth-layer feature map;
inputting the third-layer output result into a sixth convolutional layer to obtain a sixth convolution output, wherein the sixth convolutional layer has 32 convolution kernels of size 1x1 and a convolution stride of 1x1;
inputting the sixth convolution output into a sixth BN layer, and processing the output of the sixth BN layer with a ReLU function to obtain a sixth-layer output result;
inputting the fourth-layer output result into a seventh convolutional layer to obtain a seventh convolution output, wherein the seventh convolutional layer has 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
and inputting the seventh convolution output, the sixth-layer output result, and the fifth-layer feature map into a feature association layer to obtain the type identification result of the input signal.
Further, the identification module is further configured to:
inputting the seventh convolution output, the sixth-layer output result, and the fifth-layer feature map into a first fully connected layer to obtain three feature vectors;
splicing the three feature vectors to obtain a signal vector;
and inputting the signal vector into a second fully connected layer, and calculating the confidence of the input signal type through a softmax function to obtain the type identification result of the input signal.
Further, the modulation type identification apparatus based on the constellation diagram feature further includes a training module, and the training module is further configured to:
acquiring a plurality of training data and a label corresponding to the training data;
inputting the training data and the corresponding label to an initial neural network model;
training the initial neural network model through f_i^n = σ(W^n f_i^{n-1} + b^n), where σ is the activation function, to obtain the trained convolutional neural network model; w_k^n, the kth row of W^n, represents the weight vector obtained by training the kth neuron in the nth layer of the multilayer perceptron of the trained neural network model on the output of the (n-1)th layer of the multilayer perceptron, b_k^n represents the offset corresponding to w_k^n, f_i^n represents the output of the nth layer of the trained neural network model after the ith training data item is input into it, i is any positive integer, n is a natural number, f_i^n of the last layer is the output of the trained convolutional neural network model, and f_i^{n-1} represents the output of the (n-1)th layer after the ith training data item is input into the trained convolutional neural network model;
and deploying the trained neural network model.
Further, the modulation type identification apparatus based on the constellation diagram feature further includes a loss value calculation module, where the loss value calculation module is further configured to:
calculating a global loss value;
and if the global loss value is larger than the threshold value, adjusting the weight of the neuron.
Further, the modulation type identification apparatus based on the constellation diagram feature further includes an initialization module, where the initialization module is further configured to:
setting the probability of the weight of the neuron in the full connection layer to be 0 by 50%.
Further, the modulation type identification apparatus based on the constellation diagram feature further includes a setting module, and the setting module is further configured to:
terminating the training when the number of network training iterations is greater than the preset maximum number of learning iterations.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
a computer device comprising a memory, an input/output unit, and at least one processor connected to them, wherein the memory is used for storing computer readable instructions, and the processor is used for calling the computer readable instructions in the memory to execute the steps of the modulation type identification method based on constellation diagram features described above.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium, having computer readable instructions stored thereon, which when executed by a processor, implement the steps of the modulation type identification method based on constellation diagram characteristics described above.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
in the signal modulation type identification stage of the algorithm, the receiving end first receives the signal to be processed and, after preprocessing, maps it into a constellation diagram. A feature image generation algorithm is then selected, and the corresponding feature image is calculated and generated based on the distribution of the signal symbol samples in the constellation diagram. Finally, the generated feature image is input into the corresponding classification network, and the signal modulation type is identified according to the output of the classifier. Drawing on deep learning, the method converts the signal modulation type identification problem into an image processing and classification problem through computational imaging based on the signal constellation diagram. A deep neural network is then built to perform feature extraction, and the modulation type of the signal is finally identified from the output of the deep neural network. Because the feature information is extracted by a deep learning method, the feature extraction process is simplified, the influence of human factors on the feature information during feature extraction is reduced, and the accuracy of the modulation type identification algorithm is improved.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2-1 is a flow diagram of one embodiment of a constellation feature based modulation type identification method according to the present application;
fig. 2-2 is a schematic diagram of a signal constellation generation according to the modulation type identification method based on constellation features of the present application;
FIGS. 2-3 are schematic diagrams of a neural network model for a constellation feature based modulation type identification method according to the present application;
fig. 3 is a schematic structural diagram of an embodiment of a modulation type identification apparatus based on constellation diagram features according to the present application;
FIG. 4 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg compression standard Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg compression standard Audio Layer 4), laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the modulation type identification method based on the constellation diagram features provided in the embodiments of the present application is generally executed by a server/terminal device, and accordingly, the modulation type identification apparatus based on the constellation diagram features is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continuing reference to fig. 2-1, a flow diagram of one embodiment of a modulation type identification method based on constellation diagram features according to the present application is shown. The modulation type identification method based on the constellation diagram features comprises the following steps:
In this embodiment, the input signal refers to a wireless signal, and the modulation identification technique of the wireless signal refers to a technique for determining a modulation scheme of an unknown signal by analyzing electromagnetic characteristics, spectral characteristics, statistical characteristics, and the like of a transmission signal. The modulation identification technology under the complex electromagnetic environment can promote the efficient utilization of frequency spectrum resources and improve the transmission efficiency, is also a key technology of non-cooperative communication and electronic countermeasure, and has an important position in the civil and military fields. In the civil field, modulation identification technology has wide application in advanced modulation coding technology. For example, in adaptive modulation and coding, the modulation mode and coding rate of a wireless signal are dynamically adjusted to adapt to different channel qualities, so that the link rate can be ensured to be as close to the channel capacity as possible, and through a modulation identification technology, a transmitter does not need to occupy additional resources to broadcast the modulation mode and coding rate. In the field of Cognitive Radio (CR), applying a modulation recognition technology to device sensing can greatly improve the intelligent degree of a CR system and provide more information for the decision of the CR system.
In the present embodiment, signal preprocessing generally includes noise reduction, carrier frequency detection, symbol period estimation, signal power estimation, channel equalization, and the like. Different recognition algorithms call for different preprocessing and different preprocessing precision: some identification algorithms require accurate estimates of certain parameters, while other algorithms may be insensitive to the same parameters. Signal preprocessing is therefore adjusted to the specific recognition algorithm rather than being fixed.
In this embodiment, after simple preprocessing, the signal is mapped into the constellation diagram. For example, the SLEW signal adopts 8PSK modulation: a 1.8 kHz single tone is shifted to different phases, with an octal number controlling the phase shift, thereby realizing the waveform modulation. The modulation method is shown in fig. 2-2.
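As a minimal illustration of this 8PSK mapping, the sketch below converts octal digits into unit-amplitude complex baseband symbols; treating digit d as a phase shift of d·π/4 is an assumption about the exact phase table:

```python
import cmath
import math

def psk8_modulate(octal_digits, amplitude=1.0):
    """Map each octal digit d (0..7) to a complex baseband symbol whose
    phase is d * pi/4 -- a simplified sketch of octal-controlled 8PSK
    phase shifting (the actual SLEW phase table may differ)."""
    return [amplitude * cmath.exp(1j * d * math.pi / 4) for d in octal_digits]
```

Plotting the real and imaginary parts of these symbols yields the eight equally spaced points of the 8PSK constellation.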
Step 204: generating a gray scale map and a binary map according to the signal constellation map to obtain the features of the signal constellation map.
In this embodiment, according to the distribution of the received symbol samples on the constellation diagram, a corresponding binary image or grayscale image is generated using a feature map calculation and generation algorithm.
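One plausible form of such a feature map generation algorithm is a 2D histogram over the constellation plane; in the sketch below the grid size, plotting extent, and binarization threshold are all illustrative choices:

```python
def to_feature_maps(points, size=32, extent=2.0, threshold=1):
    """Generate the grayscale and binary feature images from (I, Q)
    constellation points: count symbol samples per grid cell, scale the
    counts to 0..255 for the grayscale map, and threshold the counts
    for the binary map."""
    counts = [[0] * size for _ in range(size)]
    for i, q in points:
        x = int((i + extent) / (2 * extent) * size)
        y = int((q + extent) / (2 * extent) * size)
        if 0 <= x < size and 0 <= y < size:
            counts[y][x] += 1
    peak = max(max(row) for row in counts) or 1
    gray = [[255 * c // peak for c in row] for row in counts]
    binary = [[1 if c >= threshold else 0 for c in row] for row in counts]
    return gray, binary
```

The grayscale map preserves the aggregation density of the symbol samples, while the binary map preserves only their geometric layout; the network receives both.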
In the present embodiment, as shown in fig. 2-3, the neural network structure comprises 7 convolutional layers, 3 max pooling layers, 2 fully connected layers and 1 feature map association layer in total. Convolutional layers 1 to 5 extract feature information of the image and output feature images. Convolutional layers 6 and 7 reduce the depths of the feature images of different scales output from convolutional layers 3 and 4, fuse redundant feature information, reduce computational complexity, and input the processed feature images into the feature map association layer. In the feature map association layer, feature images of different scales are expanded into one-dimensional feature vectors through fully connected layers, and the feature vectors of the feature maps of different scales are associated into one feature vector containing multi-scale features, which serves as the final judgment basis. The two fully connected layers extract information from this final feature vector and, by adjusting the parameters in these layers, finally output the identification result of the modulation type of the communication signal, so that the network completes the modulation classification of the communication signal.
In this embodiment, three classic convolutional neural network structures, namely AlexNet, GoogLeNet and ResNet, serve as classification networks for the communication signal modulation type recognition algorithm; the networks are trained with the generated feature images so that they can recognize the communication signal modulation type. When an image is input into a convolutional neural network, the feature images output by different convolutional layers differ in size, and the feature information they contain differs in meaning. When a classical convolutional neural network structure is used as a communication signal modulation type classifier, only the feature map output by the last convolutional layer is used as the basis for judging the modulation type of the signal. In the feature image output by the last convolutional layer, the receptive field of each pixel is large; because the baseband symbols of different modulation signals are distributed differently in the constellation image, the aggregation degree of the baseband symbols is sometimes reflected only in a small range, and such key information may be missed in a feature map with a large receptive field. Therefore, a communication signal modulation type classification network based on multi-scale feature maps (MSFCN) is used: different convolutional layers extract feature information of different scales from the feature image, and the feature information of different scales is fused as the basis for judging the signal modulation type.
According to the method, a signal constellation diagram is utilized, and a communication signal feature image is calculated and generated according to the distribution of the received symbol samples on the constellation diagram. Computational imaging thus converts the modulation type identification problem into an image classification problem. A deep convolutional neural network is constructed and trained with the baseband modulated signal feature maps, so that the convolutional neural network can automatically identify the signal modulation type. First, an efficient feature image generation algorithm is provided, which shortens the computational imaging time while retaining the original information of the image; second, a multi-scale feature modulation signal classification network is applied, which correlates feature maps of different scales and fuses feature information of different scales, improving the network's recognition rate of signal modulation types. The results show that the feature image generation algorithm provided in the present application attains high identification accuracy with lower computational complexity, and the provided multi-scale feature classification network attains high identification accuracy on the same data set.
In some optional implementation manners, the step of inputting the features of the signal constellation diagram to a trained neural network to obtain the type recognition result of the input signal specifically includes:
inputting the grayscale map and the binary map into a first layer convolutional neural network to obtain a first convolution output, wherein the first layer convolutional neural network is a convolutional layer with 64 convolution kernels of size 7x7 and a convolution stride of 4x4;
inputting the first convolution output to a first BN layer, and processing the output of the first BN layer by a ReLU function to obtain a first layer output result;
inputting the first layer output result to a maximum pooling layer to obtain a first layer feature map;
inputting the first layer feature map into a second layer convolutional neural network to obtain a second convolution output, wherein the second layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 4x4 and a convolution stride of 1x1;
inputting the second convolution output to a second BN layer, and processing the output of the second BN layer by a ReLU function to obtain a second layer output result;
inputting the second layer output result to a maximum pooling layer to obtain a second layer feature map;
inputting the second layer feature map into a third layer convolutional neural network to obtain a third convolution output, wherein the third layer convolutional neural network is a convolutional layer with 256 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the third convolution output to a third BN layer, and processing the output of the third BN layer by a ReLU function to obtain a third layer output result;
inputting the third layer output result into a fourth layer convolutional neural network to obtain a fourth convolution output, wherein the fourth layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the fourth convolution output to a fourth BN layer, and processing the output of the fourth BN layer by a ReLU function to obtain a fourth layer output result;
inputting the fourth layer output result into a fifth layer convolutional neural network to obtain a fifth convolution output, wherein the fifth layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the fifth convolution output to a fifth BN layer, and processing the output of the fifth BN layer by a ReLU function to obtain a fifth layer output result;
inputting the fifth layer output result into a maximum pooling layer to obtain a fifth layer feature map;
inputting the third layer output result into a sixth layer convolutional neural network to obtain a sixth convolution output, wherein the sixth layer convolutional neural network is a convolutional layer with 32 convolution kernels of size 1x1 and a convolution stride of 1x1;
inputting the sixth convolution output to a sixth BN layer, and processing the output of the sixth BN layer by a ReLU function to obtain a sixth layer output result;
inputting the fourth layer output result into a seventh layer convolutional neural network to obtain a seventh convolution output, wherein the seventh layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
and inputting the seventh convolution output, the sixth layer output result and the fifth layer feature map into a feature association map layer to obtain a type identification result of the input signal.
In the above embodiment, as shown in fig. 2-3, the input layer dimension of the network is 300x300x1. The input layer receives the feature image of the communication signal modulation type and inputs the image into the convolutional layer. The first convolutional layer has 64 convolution kernels of size 7x7 with a convolution stride of 4x4; feature information in the image is extracted through the convolution operation, and 64 feature maps of size 74x74 are output. The feature map output by the convolution is processed by a Batch Normalization layer (BN layer) and a ReLU function, and is then input into a maximum pooling layer. The pooling operation has size 3x3 and stride 2x2; after the pooling operation is completed, the feature map, of size 36x36x64, is input into the second convolutional layer. The second convolutional layer has 128 convolution kernels of size 4x4 with a convolution stride of 1x1. Reducing the convolution stride enlarges the overlapping area of the feature information in the output feature image while leaving the size of the feature image unchanged. After the convolution operation is completed, a 36x36x128 feature map is output; deepening the feature map makes the extracted feature information richer. The feature map is processed by the BN layer and the ReLU activation function and then input into the maximum pooling layer, whose size and stride are the same as in the first layer. After down-sampling, the feature map, now of size 17x17x128, is input to the third convolutional layer.
The third convolutional layer has 256 convolution kernels of size 3x3 with a convolution stride of 1x1; the size of the received feature map is unchanged after the convolution operation, while the depth of the feature map is deepened by the change in the number of convolution kernels. After the feature image is processed by the BN layer and the ReLU function, a feature map of size 17x17x256 is obtained. At this point the feature map is input to CONV6, so that the resulting 17x17 feature image serves as one criterion for final signal modulation type identification. CONV6 has 32 convolution kernels with both size and stride of 1x1; after the convolution operation, a feature map of size 17x17x32 is output to the feature map association layer, with the spatial size of the feature map unchanged. Reducing the number of convolution kernels reduces the depth of the output feature map and the redundant feature information in it, while also reducing the computational complexity of the subsequent feature association operation. In the other branch, the feature map output by the third convolutional layer is passed to the fourth convolutional layer to continue extracting features. The fourth convolutional layer has 128 convolution kernels of size 3x3 with a convolution stride of 1x1; the feature map output after processing the third layer's output is of size 17x17x128, but the receptive field of the feature image is enlarged, so the information in this feature map differs from that of the feature map output by the third layer.
In order to input the feature map output by this layer into the feature association layer, its depth is likewise reduced by 32 convolution kernels with both size and stride of 1x1, without changing the size of the feature image, and the resulting feature map of size 17x17x32 is input into the feature map association layer. Meanwhile, this convolutional layer also passes the feature map to the last convolutional layer for final feature extraction. The fifth convolutional layer has 128 convolution kernels of size 3x3 with a convolution stride of 1x1; after the convolution operation, a feature image of size 17x17x128 is output, which is processed by the BN layer and the ReLU function and then input into the maximum pooling layer. The size and stride of the maximum pooling operation are 3x3 and 2x2, respectively, and the final output feature map size is 8x8x128. This feature map is input into the feature map association layer. A BN layer is introduced between each convolutional layer and its activation function; the BN layer eliminates internal covariate shift, accelerates network training, and prevents gradient vanishing and gradient explosion.
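The feature map sizes quoted above (300 → 74 → 36 → 17 → 8) can be verified with simple shape arithmetic. This sketch assumes 'valid' (no-padding) convolution for the first stride-4 layer and 'same' padding for the stride-1 layers, since the text states those convolutions leave the spatial size unchanged:

```python
def conv_out(size, kernel, stride, same=False):
    """Spatial output size of a convolution ('same' keeps size at stride 1)."""
    if same:
        return (size + stride - 1) // stride
    return (size - kernel) // stride + 1

def pool_out(size, kernel=3, stride=2):
    """Spatial output size of the 3x3, stride-2 max pooling used throughout."""
    return (size - kernel) // stride + 1

s = 300                                          # input feature image: 300x300x1
s = conv_out(s, kernel=7, stride=4)              # conv1: 64 kernels 7x7, stride 4 -> 74
s = pool_out(s)                                  # pool1 -> 36
s = conv_out(s, kernel=4, stride=1, same=True)   # conv2: 128 kernels 4x4 -> 36
s = pool_out(s)                                  # pool2 -> 17
s = conv_out(s, kernel=3, stride=1, same=True)   # conv3: 256 kernels 3x3 -> 17
# conv4 and conv5 (3x3, stride 1, 'same') keep 17x17; CONV6 is 1x1.
s5 = pool_out(s)                                 # pooling after conv5 -> 8
```

Running this reproduces the sequence of spatial sizes 74, 36, 17 and 8 stated in the embodiment.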
In some optional implementation manners, the step of inputting the seventh convolution output, the sixth-layer output result, and the fifth-layer feature map into a feature association map layer to obtain a type identification result of the input signal specifically includes:
inputting the seventh convolution output, the sixth layer output result and the fifth layer feature map into a first full-connection layer to obtain three feature vectors;
splicing the three characteristic vectors to obtain a signal vector;
and inputting the signal vector to a second fully connected layer, and calculating the confidence of the type of the input signal through a softmax function to obtain the type identification result of the input signal.
In the above embodiment, the feature maps of three different scales input in the feature map association layer are expanded into one-dimensional feature vectors by using the fully connected layer. The three feature maps with different scales are respectively input into three full-connection layers with 256 neurons, and after feature extraction and fusion are carried out on feature images through the full-connection layers, three feature vectors with 256 feature values are output. The three expanded feature vectors are connected in the same dimension to form a feature vector containing 768 values, and the feature vector is used as a judgment basis and is input into the next full-connection layer.
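A minimal numpy sketch of the feature map association layer described above, with untrained random weights and an assumed set of eight candidate modulation types (the shapes follow the sizes given in the embodiment; everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def fc(x, out_dim):
    """Illustrative fully connected layer with random (untrained) weights."""
    w = 0.01 * rng.standard_normal((x.size, out_dim))
    b = np.zeros(out_dim)
    return x.reshape(-1) @ w + b

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# Three feature maps of different scales, as delivered to the association layer.
f_conv6 = rng.standard_normal((17, 17, 32))
f_conv7 = rng.standard_normal((17, 17, 32))
f_pool5 = rng.standard_normal((8, 8, 128))

# Each map is expanded by its own 256-neuron fully connected layer...
v1, v2, v3 = fc(f_conv6, 256), fc(f_conv7, 256), fc(f_pool5, 256)
# ...and the three vectors are concatenated into one 768-value feature vector.
feature_vector = np.concatenate([v1, v2, v3])
# The final fully connected layer with softmax yields per-class confidences.
confidences = softmax(fc(feature_vector, 8))
```

The concatenation is what fuses the multi-scale information: the final classifier sees all three scales at once rather than only the last layer's output.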
In some optional implementations, before the step of inputting the features of the signal constellation diagram to a trained neural network to obtain the type recognition result of the input signal, the method further includes:
acquiring a plurality of training data and a label corresponding to the training data;
inputting the training data and the corresponding label to an initial neural network model;
training the initial neural network model according to the layer-wise forward propagation formula f_i^n = ReLU(W^n · f_i^{n-1} + b^n) to obtain a trained convolutional neural network model, wherein the kth row w_k^n of W^n represents the weight value obtained by training the kth neuron in the nth layer of the multi-layer perceptron of the trained neural network model according to the output of the (n-1)th layer of the multi-layer perceptron of the trained convolutional neural network model, b^n represents the offset corresponding to W^n, and f_i^n represents the output of the nth layer of the trained neural network model after the ith training data is input into the trained neural network model, wherein i is any positive integer and n is a natural number; when n is the last layer of the trained neural network model, f_i^n is the output of the trained convolutional neural network model, and f_i^{n-1} represents the output of the (n-1)th layer of the trained neural network model after the ith training data is input into the trained convolutional neural network model;
and deploying the trained neural network model.
In the above embodiment, the main training process is as follows: initializing the neural network model, calculating the forward propagation result, calculating the back propagation error, and updating the trainable parameters of the neurons. Selecting a training sample signal and calculating the forward propagation result means substituting the sample signal into the input layer of the neural network model and calculating each hidden layer, layer by layer, until the result of the output layer is obtained. In addition to calculating the forward propagation result, the process includes:
initializing a neural network model: initializing all neurons in the neural network model, randomly assigning trainable parameters of the neurons, and setting parameters such as a loss function, a target loss value and the maximum learning times. And selecting a training sample signal as an input sample and a corresponding real modulation mode thereof as an expected output.
Calculating the back propagation error: and calculating the error between the classification result and the real modulation mode of the sample signal according to the set loss function, and calculating the partial derivative of each hidden layer according to the error.
Updating trainable parameters of each layer of neurons: and optimizing and correcting trainable parameters such as connection weights among all neurons according to the partial derivatives of the neurons in each layer calculated in the last step.
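The initialization / forward propagation / back propagation / parameter update cycle above can be sketched with a toy two-layer perceptron. The data, layer sizes, loss function, learning rate and stopping thresholds here are all illustrative assumptions, not values from the original; the hidden layer uses the ReLU forward rule f^n = ReLU(W^n f^{n-1} + b^n):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data standing in for labelled feature vectors (illustrative).
x = rng.standard_normal((200, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(float).reshape(-1, 1)

# Initialization: randomly assign the trainable parameters of the neurons.
w1, b1 = 0.5 * rng.standard_normal((2, 8)), np.zeros(8)
w2, b2 = 0.5 * rng.standard_normal((8, 1)), np.zeros(1)

target_loss, max_epochs, lr = 0.05, 5000, 0.2  # loss / learning settings
losses = []
for epoch in range(max_epochs):
    # Forward propagation: ReLU hidden layer, sigmoid output layer.
    h = np.maximum(0.0, x @ w1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    loss = float(np.mean((p - y) ** 2))        # mean squared error loss
    losses.append(loss)
    if loss <= target_loss:                     # loss below target: stop
        break
    # Back propagation: partial derivatives of each layer from the error.
    dp = 2.0 * (p - y) * p * (1.0 - p) / len(x)
    dw2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ w2.T) * (h > 0)
    dw1, db1 = x.T @ dh, dh.sum(axis=0)
    # Update the trainable parameters (connection weights and offsets).
    w1 -= lr * dw1; b1 -= lr * db1
    w2 -= lr * dw2; b2 -= lr * db2
```

Training stops either when the loss falls to the target loss value or when the maximum number of learning iterations is reached, matching the two stopping conditions described later in this embodiment.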
In some optional implementations, the step of deploying the trained neural network model further comprises:
calculating a global loss value;
and if the global loss value is larger than the threshold value, adjusting the weight of the neuron.
In the above embodiment, the loss value is calculated after the parameters are updated. If the loss value is larger than the set threshold value, the connection weights between the intermediate layer and the output layer and the output thresholds of the units of the output layer are adjusted, then the connection weights between the input layer and the intermediate layer and the output thresholds of the units of the intermediate layer are adjusted, and finally the learning rate is adjusted; if the error is smaller than the threshold value, the training is finished.
In some optional implementations, the initial neural network model includes at least a convolutional layer and a fully connected layer, and the initial neural network model is trained according to the forward propagation formula f_i^n = ReLU(W^n · f_i^{n-1} + b^n); the step of obtaining the trained convolutional neural network model specifically comprises the following steps:
setting the probability of the weight of the neuron in the full connection layer to be 0 by 50%.
In the above embodiment, a dropout strategy is introduced in the fully connected layers: during network training, the weight of each neuron in the fully connected layer is set to 0 with a probability of 50%. After the processing of the fully connected layers, the confidences of the various modulation types are output through the final fully connected layer containing the softmax function, completing the identification of the signal modulation type.
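A minimal sketch of the 50% dropout strategy. Note that standard ("inverted") dropout zeros each neuron's output during training rather than its stored weights, and the 1/(1-p) rescaling of surviving units is an assumption of this sketch, chosen so the expected activation is unchanged at inference time:

```python
import numpy as np

def dropout(activations, p=0.5, rng=None, training=True):
    """Inverted dropout: zero each unit with probability p during training.

    At p = 0.5, each fully connected neuron's output (and hence the effect
    of its weights) is dropped with 50% probability; surviving units are
    scaled by 1/(1-p) so inference needs no extra scaling.
    """
    if not training:
        return activations
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(7)
out = dropout(np.ones(10000), p=0.5, rng=rng)
```

At inference (`training=False`) the layer is an identity, so the trained network's confidences are computed deterministically.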
In some optional implementations, the step of training the initial neural network model according to the forward propagation formula f_i^n = ReLU(W^n · f_i^{n-1} + b^n) to obtain the trained convolutional neural network model specifically further comprises the following steps:
when the number of network training iterations is greater than the preset maximum number of learning iterations, the training also ends.
In the above embodiment, when the network loss value is less than or equal to the set target loss value, the model has converged: the neural network model has learned the characteristics of the different modulation schemes through updating the weight parameters, and the training ends. When the number of network training iterations exceeds the set maximum number of learning iterations, the training also ends. Otherwise, the next batch of sample signals is selected to enter the next round of training and learning.
The blockchain referred to in the present application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated by cryptographic methods, each data block containing information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by instructing relevant hardware through computer readable instructions, which can be stored in a computer readable storage medium; when executed, the instructions can include the processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not restricted to the exact order shown and may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time but may be performed at different times, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
With further reference to fig. 3, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a modulation type identification apparatus based on constellation diagram features, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 3, the modulation type identification apparatus 300 based on the constellation diagram feature according to the present embodiment includes: an obtaining module 301, a preprocessing module 302, a calculating module 303, a generating module 304, and an identifying module 305. Wherein:
an obtaining module 301, configured to obtain an input signal;
a preprocessing module 302, configured to preprocess the input signal to obtain a preprocessed signal;
a calculating module 303, configured to calculate a constellation of the processed signal to obtain a signal constellation;
a generating module 304, configured to generate a grayscale map and a binary map according to the signal constellation map to obtain a feature of the signal constellation map;
and the identifying module 305 is configured to input the features of the signal constellation diagram to a trained neural network, so as to obtain a type identification result of the input signal.
Further, the identification module 305 is further configured to:
inputting the grayscale map and the binary map into a first layer convolutional neural network to obtain a first convolution output, wherein the first layer convolutional neural network is a convolutional layer with 64 convolution kernels of size 7x7 and a convolution stride of 4x4;
inputting the first convolution output to a first BN layer, and processing the output of the first BN layer by a ReLU function to obtain a first layer output result;
inputting the first layer output result to a maximum pooling layer to obtain a first layer feature map;
inputting the first layer feature map into a second layer convolutional neural network to obtain a second convolution output, wherein the second layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 4x4 and a convolution stride of 1x1;
inputting the second convolution output to a second BN layer, and processing the output of the second BN layer by a ReLU function to obtain a second layer output result;
inputting the second layer output result to a maximum pooling layer to obtain a second layer feature map;
inputting the second layer feature map into a third layer convolutional neural network to obtain a third convolution output, wherein the third layer convolutional neural network is a convolutional layer with 256 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the third convolution output to a third BN layer, and processing the output of the third BN layer by a ReLU function to obtain a third layer output result;
inputting the third layer output result into a fourth layer convolutional neural network to obtain a fourth convolution output, wherein the fourth layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the fourth convolution output to a fourth BN layer, and processing the output of the fourth BN layer by a ReLU function to obtain a fourth layer output result;
inputting the fourth layer output result into a fifth layer convolutional neural network to obtain a fifth convolution output, wherein the fifth layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
inputting the fifth convolution output to a fifth BN layer, and processing the output of the fifth BN layer by a ReLU function to obtain a fifth layer output result;
inputting the fifth layer output result into a maximum pooling layer to obtain a fifth layer feature map;
inputting the third layer output result into a sixth layer convolutional neural network to obtain a sixth convolution output, wherein the sixth layer convolutional neural network is a convolutional layer with 32 convolution kernels of size 1x1 and a convolution stride of 1x1;
inputting the sixth convolution output to a sixth BN layer, and processing the output of the sixth BN layer by a ReLU function to obtain a sixth layer output result;
inputting the fourth layer output result into a seventh layer convolutional neural network to obtain a seventh convolution output, wherein the seventh layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 3x3 and a convolution stride of 1x1;
and inputting the seventh convolution output, the sixth layer output result and the fifth layer feature map into a feature association map layer to obtain a type identification result of the input signal.
Further, the identification module 305 is further configured to:
inputting the seventh convolution output, the sixth layer output result and the fifth layer feature map into a first full-connection layer to obtain three feature vectors;
splicing the three characteristic vectors to obtain a signal vector;
and inputting the signal vector to a second fully connected layer, and calculating the confidence of the type of the input signal through a softmax function to obtain the type identification result of the input signal.
Further, the modulation type identification apparatus based on the constellation diagram feature further includes a training module, and the training module is further configured to:
acquiring a plurality of training data and a label corresponding to the training data;
inputting the training data and the corresponding label to an initial neural network model;
training the initial neural network model according to the layer-wise forward propagation formula f_i^n = ReLU(W^n · f_i^{n-1} + b^n) to obtain a trained convolutional neural network model, wherein the kth row w_k^n of W^n represents the weight value obtained by training the kth neuron in the nth layer of the multi-layer perceptron of the trained neural network model according to the output of the (n-1)th layer of the multi-layer perceptron of the trained convolutional neural network model, b^n represents the offset corresponding to W^n, and f_i^n represents the output of the nth layer of the trained neural network model after the ith training data is input into the trained neural network model, wherein i is any positive integer and n is a natural number; when n is the last layer of the trained neural network model, f_i^n is the output of the trained convolutional neural network model, and f_i^{n-1} represents the output of the (n-1)th layer of the trained neural network model after the ith training data is input into the trained convolutional neural network model;
and deploying the trained neural network model.
Further, the modulation type identification apparatus based on the constellation diagram feature further includes a loss value calculation module, where the loss value calculation module is further configured to:
calculating a global loss value;
and if the global loss value is larger than the threshold value, adjusting the weight of the neuron.
Further, the modulation type identification apparatus based on the constellation diagram feature further includes an initialization module, where the initialization module is further configured to:
setting the probability of the weight of the neuron in the full connection layer to be 0 by 50%.
Further, the modulation type identification apparatus based on the constellation diagram feature further includes a setting module, and the setting module is further configured to:
when the number of network training iterations is greater than the preset maximum number of learning iterations, the training also ends.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 4, fig. 4 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43, which are communicatively connected to each other via a system bus. It is noted that only a computer device 4 having components 41-43 is shown, but it should be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or a memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the computer device 4. Of course, the memory 41 may also include both the internal storage unit and the external storage device of the computer device 4. In this embodiment, the memory 41 is generally used for storing the operating system installed in the computer device 4 and various types of application software, such as computer readable instructions of the modulation type identification method based on constellation diagram features. Further, the memory 41 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute the computer-readable instructions stored in the memory 41 or to process data, for example, to execute the computer-readable instructions of the modulation type identification method based on constellation diagram features.
The network interface 43 may comprise a wireless network interface or a wired network interface, and the network interface 43 is generally used for establishing communication connection between the computer device 4 and other electronic devices.
The present application further provides another embodiment, which is to provide a computer-readable storage medium storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the constellation feature based modulation type identification method as described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not all, embodiments of the present application, and that the appended drawings illustrate preferred embodiments without limiting the scope of the claims. This application may be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their features may be replaced by equivalents. All equivalent structures made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.
Claims (10)
1. A modulation type identification method based on constellation diagram characteristics is characterized by comprising the following steps:
acquiring an input signal;
preprocessing the input signal to obtain a preprocessed signal;
calculating a constellation diagram of the preprocessed signal to obtain a signal constellation diagram;
generating a gray scale map and a binary map according to the signal constellation map to obtain the characteristics of the signal constellation map;
and inputting the characteristics of the signal constellation diagram into a trained neural network to obtain a type identification result of the input signal.
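As an illustrative sketch (not part of the claims), the feature-extraction steps above can be reproduced as follows; the grid resolution, axis extent, and all names are assumptions introduced here, since the patent does not specify them:

```python
import numpy as np

def constellation_features(iq_samples, bins=32, extent=2.0):
    # Accumulate complex baseband samples on an I/Q grid (the signal
    # constellation diagram), then derive the two features named in claim 1:
    # a gray-scale map (normalized hit counts) and a binary map (occupied cells).
    edges = np.linspace(-extent, extent, bins + 1)
    counts, _, _ = np.histogram2d(iq_samples.real, iq_samples.imag, bins=(edges, edges))
    gray = counts / counts.max() if counts.max() > 0 else counts  # gray-scale map in [0, 1]
    binary = (counts > 0).astype(np.uint8)                        # binary map: cell occupied or not
    return gray, binary

# Toy input: a noisy QPSK constellation
rng = np.random.default_rng(0)
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), size=2000)
noisy = symbols + 0.05 * (rng.standard_normal(2000) + 1j * rng.standard_normal(2000))
gray, binary = constellation_features(noisy)
```

Both maps have the same grid shape, so they can be stacked as two input channels for the network of claim 2.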
2. The modulation type recognition method based on constellation diagram features according to claim 1, wherein the step of inputting the features of the signal constellation diagram into a trained neural network to obtain the type recognition result of the input signal specifically comprises:
inputting the gray-scale map and the binary map into a first-layer convolutional neural network to obtain a first convolution output, wherein the first-layer convolutional neural network is a convolutional layer with 64 convolution kernels of size 7x7 and a stride of 4x4;
inputting the first convolution output into a first BN layer, and processing the output of the first BN layer with a ReLU function to obtain a first-layer output result;
inputting the first-layer output result into a max-pooling layer to obtain a first-layer feature map;
inputting the first-layer feature map into a second-layer convolutional neural network to obtain a second convolution output, wherein the second-layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 4x4 and a stride of 1x1;
inputting the second convolution output into a second BN layer, and processing the output of the second BN layer with a ReLU function to obtain a second-layer output result;
inputting the second-layer output result into a max-pooling layer to obtain a second-layer feature map;
inputting the second-layer feature map into a third-layer convolutional neural network to obtain a third convolution output, wherein the third-layer convolutional neural network is a convolutional layer with 256 convolution kernels of size 3x3 and a stride of 1x1;
inputting the third convolution output into a third BN layer, and processing the output of the third BN layer with a ReLU function to obtain a third-layer output result;
inputting the third-layer output result into a fourth-layer convolutional neural network to obtain a fourth convolution output, wherein the fourth-layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 3x3 and a stride of 1x1;
inputting the fourth convolution output into a fourth BN layer, and processing the output of the fourth BN layer with a ReLU function to obtain a fourth-layer output result;
inputting the fourth-layer output result into a fifth-layer convolutional neural network to obtain a fifth convolution output, wherein the fifth-layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 3x3 and a stride of 1x1;
inputting the fifth convolution output into a fifth BN layer, and processing the output of the fifth BN layer with a ReLU function to obtain a fifth-layer output result;
inputting the fifth-layer output result into a max-pooling layer to obtain a fifth-layer feature map;
inputting the third-layer output result into a sixth-layer convolutional neural network to obtain a sixth convolution output, wherein the sixth-layer convolutional neural network is a convolutional layer with 32 convolution kernels of size 1x1 and a stride of 1x1;
inputting the sixth convolution output into a sixth BN layer, and processing the output of the sixth BN layer with a ReLU function to obtain a sixth-layer output result;
inputting the fourth-layer output result into a seventh-layer convolutional neural network to obtain a seventh convolution output, wherein the seventh-layer convolutional neural network is a convolutional layer with 128 convolution kernels of size 3x3 and a stride of 1x1;
and inputting the seventh convolution output, the sixth-layer output result, and the fifth-layer feature map into a feature association layer to obtain the type identification result of the input signal.
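Since claim 2 fixes only the kernel counts, kernel sizes, and strides, the spatial geometry of the network can be traced with simple arithmetic. The sketch below assumes a 224x224 input, 2x2 max pooling with stride 2, and "valid" (no-padding) convolutions; none of these values are stated in the patent:

```python
def conv_out(size, kernel, stride):
    # Output side length of a 'valid' (no padding) convolution or pooling.
    return (size - kernel) // stride + 1

def trace_shapes(input_size=224, pool=2):
    # Trace spatial sizes through the network of claim 2. Kernel sizes and
    # strides come from the claim; input size, pooling window, and padding
    # scheme are illustrative assumptions.
    s = conv_out(input_size, 7, 4)   # layer 1: 64 kernels, 7x7, stride 4
    s = conv_out(s, pool, pool)      # max pooling -> first-layer feature map
    s = conv_out(s, 4, 1)            # layer 2: 128 kernels, 4x4, stride 1
    s = conv_out(s, pool, pool)      # max pooling -> second-layer feature map
    l3 = conv_out(s, 3, 1)           # layer 3: 256 kernels, 3x3, stride 1
    l4 = conv_out(l3, 3, 1)          # layer 4: 128 kernels, 3x3, stride 1
    l5 = conv_out(l4, 3, 1)          # layer 5: 128 kernels, 3x3, stride 1
    l5 = conv_out(l5, pool, pool)    # max pooling -> fifth-layer feature map
    l6 = conv_out(l3, 1, 1)          # layer 6 branch: 32 kernels, 1x1, on the layer-3 result
    l7 = conv_out(l4, 3, 1)          # layer 7 branch: 128 kernels, 3x3, on the layer-4 result
    return l5, l6, l7                # the three branches fed to the feature association layer
```

Under these assumptions the three branch feature maps are not the same size, which is consistent with the claim flattening each branch through a fully-connected layer before splicing; other input sizes or padding schemes would change the numbers.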
3. The modulation type identification method based on constellation diagram features according to claim 2, wherein the step of inputting the seventh convolution output, the sixth layer output result, and the fifth layer feature diagram into a feature association layer to obtain the type identification result of the input signal specifically includes:
inputting the seventh convolution output, the sixth-layer output result, and the fifth-layer feature map into a first fully-connected layer to obtain three feature vectors;
splicing the three feature vectors to obtain a signal vector;
and inputting the signal vector into a second fully-connected layer, and calculating the confidence of the type of the input signal through a softmax function to obtain the type identification result of the input signal.
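A minimal sketch of claim 3's fusion head follows; the vector lengths, number of modulation classes, and weight shapes are illustrative assumptions, not values from the patent:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax turning logits into confidences.
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(branch_vectors, w2, b2):
    # Claim 3: splice the three branch feature vectors into one signal vector,
    # apply the second fully-connected layer, and convert the resulting logits
    # into a per-class confidence with a softmax function.
    signal_vector = np.concatenate(branch_vectors)
    logits = w2 @ signal_vector + b2
    return softmax(logits)

rng = np.random.default_rng(1)
branches = [rng.standard_normal(8) for _ in range(3)]  # outputs of the first fully-connected layer
w2 = rng.standard_normal((5, 24))                      # 5 hypothetical modulation types
b2 = np.zeros(5)
conf = classify(branches, w2, b2)
```

The class with the largest confidence is reported as the type identification result.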
4. The method according to claim 1, wherein before the step of inputting the features of the signal constellation to a trained neural network to obtain the type recognition result of the input signal, the method further comprises:
acquiring a plurality of training data and a label corresponding to the training data;
inputting the training data and the corresponding label to an initial neural network model;
training the initial neural network model through f_i^n = ReLU(w^n · f_i^(n-1) + b^n) to obtain a trained convolutional neural network model, wherein w_k^n represents the weight obtained by training the kth neuron in the nth layer of the multi-layer perceptron of the trained neural network model according to the output of the (n-1)th layer of the multi-layer perceptron of the trained convolutional neural network model, b_k^n represents the bias corresponding to w_k^n, and f_i^n represents the output of the nth layer after the ith training datum is input into the trained neural network model, wherein i is any positive integer and n is a natural number; for the last layer of the trained neural network model, f_i^n refers to the output of the trained convolutional neural network model, and f_i^(n-1) represents the output of the (n-1)th layer after the ith training datum is input into the trained convolutional neural network model;
and deploying the trained neural network model.
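The per-layer relation in claim 4 (each layer's output f_i^n computed from the previous layer's output via trained weights and biases) can be sketched as a plain forward pass; the ReLU activation and the layer sizes below are assumptions for illustration:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    # Each layer computes f^n = ReLU(w^n @ f^(n-1) + b^n), following the
    # per-layer weight/bias/output relation described in claim 4.
    f = x
    for w, b in zip(weights, biases):
        f = np.maximum(0.0, w @ f + b)
    return f

rng = np.random.default_rng(4)
weights = [rng.standard_normal((16, 10)), rng.standard_normal((5, 16))]  # illustrative sizes
biases = [np.zeros(16), np.zeros(5)]
out = mlp_forward(rng.standard_normal(10), weights, biases)
```

The final f_i^n here is the model output for training datum i; in practice a softmax (claim 3) would follow the last layer.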
5. The constellation feature-based modulation type recognition method of claim 4, wherein the step of deploying the trained neural network model further comprises:
calculating a global loss value;
and if the global loss value is larger than the threshold value, adjusting the weight of the neuron.
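Claim 5's stopping rule (adjust the neuron weights while the global loss value exceeds a threshold) can be sketched as a toy gradient loop; the optimizer, learning rate, and one-weight loss are illustrative assumptions, since the patent does not specify them:

```python
def train_until_threshold(w, grad_fn, loss_fn, lr=0.1, threshold=1e-3, max_iters=1000):
    # While the global loss value is larger than the threshold, adjust the
    # neuron weight; otherwise stop, per claim 5.
    for _ in range(max_iters):
        if loss_fn(w) <= threshold:
            break
        w = w - lr * grad_fn(w)
    return w

# Toy one-weight example: minimize (w - 3)^2
loss = lambda w: (w - 3.0) ** 2
grad = lambda w: 2.0 * (w - 3.0)
w = train_until_threshold(0.0, grad, loss)
```

The `max_iters` cap plays the role of the maximum learning count mentioned in claim 7: training also ends once the iteration budget is exhausted.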
6. The constellation feature based modulation type identification method according to claim 4, wherein the initial neural network model at least comprises: a convolutional layer and a fully-connected layer, and the step of training the initial neural network model through f_i^n = ReLU(w^n · f_i^(n-1) + b^n) to obtain the trained convolutional neural network model specifically comprises:
setting the weights of the neurons in the fully-connected layer to 0 with a probability of 50%.
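The 50% weight-zeroing of claim 6 (a dropout-style regularization) can be sketched as a random mask; applying the mask to a weight matrix follows the claim's wording, and all shapes here are illustrative:

```python
import numpy as np

def dropout_mask(shape, p=0.5, rng=None):
    # Each entry is zeroed independently with probability p (claim 6 uses
    # p = 50% for the fully-connected layer's neuron weights).
    if rng is None:
        rng = np.random.default_rng()
    return (rng.random(shape) >= p).astype(float)

rng = np.random.default_rng(2)
w = rng.standard_normal((4, 4))            # illustrative fully-connected weights
w_dropped = w * dropout_mask(w.shape, 0.5, rng)
```

During inference the mask is omitted (or the weights rescaled), as in standard dropout; the claim only describes the training-time behavior.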
7. The constellation feature based modulation type identification method according to claim 4, wherein the step of training the initial neural network model through f_i^n = ReLU(w^n · f_i^(n-1) + b^n) to obtain the trained convolutional neural network model specifically comprises:
terminating the training when the number of network training iterations is greater than a preset maximum number of learning iterations.
8. A modulation type recognition apparatus based on constellation diagram characteristics, comprising:
the acquisition module is used for acquiring an input signal;
the preprocessing module is used for preprocessing the input signal to obtain a preprocessed signal;
the computing module is used for computing a constellation diagram of the preprocessed signal to obtain a signal constellation diagram;
the generating module is used for generating a gray scale map and a binary map according to the signal constellation map to obtain the characteristics of the signal constellation map;
and the recognition module is used for inputting the characteristics of the signal constellation diagram into the trained neural network to obtain the type recognition result of the input signal.
9. A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions, the processor when executing the computer-readable instructions implementing the steps of the constellation feature based modulation type identification method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which computer-readable instructions are stored, which, when executed by a processor, implement the steps of the constellation feature based modulation type identification method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111304357.5A CN113919401A (en) | 2021-11-05 | 2021-11-05 | Modulation type identification method and device based on constellation diagram characteristics and computer equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111304357.5A CN113919401A (en) | 2021-11-05 | 2021-11-05 | Modulation type identification method and device based on constellation diagram characteristics and computer equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113919401A true CN113919401A (en) | 2022-01-11 |
Family
ID=79245331
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111304357.5A Pending CN113919401A (en) | 2021-11-05 | 2021-11-05 | Modulation type identification method and device based on constellation diagram characteristics and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113919401A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110309854A (en) * | 2019-05-21 | 2019-10-08 | 北京邮电大学 | Method and device for identifying signal modulation mode |
CN111614398A (en) * | 2020-05-12 | 2020-09-01 | 北京邮电大学 | Modulation format and signal-to-noise ratio identification method and device based on XOR neural network |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115296963A (en) * | 2022-06-30 | 2022-11-04 | 哈尔滨工业大学 | Channel equalization method based on convolution cyclic neural network, computer equipment and readable storage medium |
CN114900407A (en) * | 2022-07-12 | 2022-08-12 | 南京科伊星信息科技有限公司 | Modulation mode automatic identification and countermeasure method based on data enhancement |
CN114900407B (en) * | 2022-07-12 | 2022-10-14 | 南京科伊星信息科技有限公司 | Modulation mode automatic identification and countermeasure method based on data enhancement |
CN115333905A (en) * | 2022-10-12 | 2022-11-11 | 南通中泓网络科技有限公司 | Signal modulation mode identification method |
CN115622852A (en) * | 2022-10-21 | 2023-01-17 | 扬州大学 | A Modulation Recognition Method Based on Constellation KD Tree Enhancement and Neural Network GSENet |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Hu et al. | A novel image steganography method via deep convolutional generative adversarial networks | |
CN113919401A (en) | Modulation type identification method and device based on constellation diagram characteristics and computer equipment | |
CN113435583B (en) | Federal learning-based countermeasure generation network model training method and related equipment thereof | |
CN112101437A (en) | Fine-grained classification model processing method based on image detection and related equipment thereof | |
CN110188829B (en) | Neural network training method, target recognition method and related products | |
CN112446888B (en) | Image segmentation model processing method and processing device | |
CN110309854A (en) | Method and device for identifying signal modulation mode | |
CN112528029A (en) | Text classification model processing method and device, computer equipment and storage medium | |
CN113298152B (en) | Model training method, device, terminal equipment and computer readable storage medium | |
CN113869398B (en) | Unbalanced text classification method, device, equipment and storage medium | |
CN114359582B (en) | Small sample feature extraction method based on neural network and related equipment | |
CN114241459B (en) | Driver identity verification method and device, computer equipment and storage medium | |
CN113722438A (en) | Sentence vector generation method and device based on sentence vector model and computer equipment | |
CN113240071A (en) | Graph neural network processing method and device, computer equipment and storage medium | |
CN113850838A (en) | Method, device, computer equipment and storage medium for acquiring navigation intention of ship | |
CN113988223A (en) | Certificate image recognition method and device, computer equipment and storage medium | |
CN113723077A (en) | Sentence vector generation method and device based on bidirectional characterization model and computer equipment | |
CN117216563A (en) | Sample information updating method and device, electronic equipment and storage medium | |
CN110489955B (en) | Image processing, device, computing device and medium applied to electronic equipment | |
CN114241411B (en) | Counting model processing method and device based on target detection and computer equipment | |
CN117830790A (en) | Training method of multi-task model, multi-task processing method and device | |
CN115099189A (en) | Speech recognition model based on parallel computation and determining method | |
CN115828248B (en) | Malicious code detection method and device based on interpretive deep learning | |
CN116720214A (en) | Model training method and device for privacy protection | |
CN115700845A (en) | Face recognition model training method, face recognition device and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |