
CN107342810A - Deep learning intelligent eye diagram analysis method based on convolutional neural networks - Google Patents

Deep learning intelligent eye diagram analysis method based on convolutional neural networks

Info

Publication number
CN107342810A
Authority
CN
China
Prior art keywords
layer
analysis
eye
convolutional neural
eye diagram
Prior art date
Legal status
Granted
Application number
CN201710534126.0A
Other languages
Chinese (zh)
Other versions
CN107342810B (en)
Inventor
王丹石
张民
李建强
李进
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN201710534126.0A priority Critical patent/CN107342810B/en
Publication of CN107342810A publication Critical patent/CN107342810A/en
Application granted granted Critical
Publication of CN107342810B publication Critical patent/CN107342810B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/07Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems
    • H04B10/075Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal
    • H04B10/079Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
    • H04B10/0795Performance monitoring; Measurement of transmission parameters
    • H04B10/07953Monitoring or measuring OSNR, BER or Q
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • H04B10/0795 Performance monitoring; Measurement of transmission parameters (under the same H04B10/00 → H04B10/07 → H04B10/075 → H04B10/079 hierarchy listed above)
    • H04B10/07951 Monitoring or measuring chromatic dispersion or PMD (under H04B10/0795)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning intelligent eye diagram analysis method based on convolutional neural networks, relating to the technical field of optical communication. Performance analysis of eye diagrams is carried out by building and training a convolutional neural network (CNN) module, comprising the following steps: obtaining an eye diagram training data set; preprocessing the eye diagrams; training the CNN module to perform feature extraction; inputting the eye diagram to be analyzed, after preprocessing, into the trained CNN module for pattern recognition and performance analysis; and outputting the analysis results. The invention applies deep learning technology based on convolutional neural networks to eye diagram analysis and solves the problem that traditional eye diagram performance analysis cannot process raw data directly and requires manual intervention. The CNN makes the analysis of raw eye diagram images intelligent and automated, so the method can serve as the eye diagram software processing module of an oscilloscope or the eye diagram analysis module of simulation software, and can further be embedded into test instruments for intelligent signal analysis and performance monitoring.

Description

Deep Learning Intelligent Eye Diagram Analysis Method Based on Convolutional Neural Network

Technical Field

The present invention relates to the technical field of optical communication, and in particular to a deep learning intelligent eye diagram analysis method based on a convolutional neural network.

Background Art

Machine learning (ML) techniques provide powerful tools for solving problems in many fields, such as natural language processing, data mining, speech recognition and image recognition. Machine learning has also been widely applied in optical communications, greatly promoting the development of intelligent systems. Current research focuses mainly on using different machine learning algorithms for optical performance monitoring (OPM) and nonlinear impairment compensation; the algorithms used include expectation maximization (EM), random forests, back-propagation artificial neural networks (BP-ANN), K-nearest neighbors (KNN) and support vector machines (SVM). However, all of the above algorithms are limited in their ability to extract features. More specifically, such machine learning models cannot process natural data directly in its raw form: considerable domain expertise and engineering skill are needed to design a feature extractor that converts the raw data into a suitable internal representation or feature vector before a subsequent subsystem can detect patterns in the input data. It is therefore desirable to develop more advanced machine learning algorithms that can both process raw data directly and automatically detect the required features.

Recently, deep learning has become a popular research topic whose aim is to bring machine learning closer to the goal of artificial intelligence (AI). Deep learning can be understood as a deep neural network with multiple nonlinear layers that learns features from data through a self-learning process rather than through manual design by human engineers. One of the best-known breakthroughs in deep learning is Google DeepMind's program "AlphaGo", which for the first time defeated professional players at a board game through self-learning. As a current research hotspot, deep learning has also made significant progress in application areas such as unmanned aerial vehicles, medical diagnosis and sentiment analysis. To the best of our knowledge, however, there has been little research based on deep learning in the field of optical communication systems.

Meanwhile, in optical communications, current techniques for modulation format identification and for estimating performance indicators such as OSNR, CD, linear impairments and nonlinear impairments cannot process the raw data directly; the relevant features must be extracted manually, which requires a large amount of human intervention. It is therefore desirable to use eye diagrams together with more advanced techniques to perform intelligent analysis of various performance metrics, with accurate measurement, no manual intervention and no separate statistical post-processing, so that eye-diagram-based performance analysis becomes intelligent and automated.

Summary of the Invention

The purpose of the present invention is to apply deep learning technology to the field of optical communication and to provide an intelligent and reliable deep learning eye diagram analysis method based on a convolutional neural network, overcoming the drawback of traditional eye diagram performance analysis that raw image data cannot be processed directly and manual intervention is required, thereby making performance analysis of raw eye diagram images intelligent and automated.

To achieve the above purpose, the present invention discloses a deep learning intelligent eye diagram analysis method based on a convolutional neural network, which applies convolutional-neural-network-based deep learning to eye diagram analysis and uses the convolutional neural network to perform multiple kinds of performance analysis on eye diagrams. The method comprises the following steps: Step 1, obtaining the eye diagram training data set for the required analysis; Step 2, preprocessing the eye diagram images; Step 3, training a convolutional neural network (CNN) module to perform feature extraction on the eye diagrams; Step 4, inputting the eye diagram to be analyzed into the trained CNN module for pattern recognition and performance analysis; Step 5, outputting the analysis results.

Preferably, the multiple performance metrics to be analyzed from the eye diagram are modulation format, optical signal-to-noise ratio (OSNR), chromatic dispersion (CD), linear impairments and nonlinear impairments.

Preferably, in step 1 of obtaining the eye diagram training set, training data are collected for different values of the various performance metrics of the eye diagram, where each item in the training data set is a pair whose input is an eye diagram image and whose output is the specific index information of a particular performance metric.

Preferably, in the eye diagram preprocessing step 2, the color eye diagram images in the training data set obtained in step 1 are converted into grayscale images, and the resulting grayscale eye diagram images are downsampled.

Preferably, in the feature extraction step 3, the eye diagrams preprocessed in step 2 are fed into the constructed CNN module; after training on the training data, the CNN module automatically extracts features from the eye diagram images and builds the relationship between those features and the different performance metrics.

Preferably, in the pattern recognition and performance analysis step 4, the preprocessed eye diagram to be analyzed is fed into the trained CNN module; the CNN module performs pattern recognition on the input eye diagram and analyzes its performance based on the experience gained during training.

Preferably, in the output step 5, the information output by the CNN module contains the various performance metrics to be analyzed, and the analysis result for each metric can be read from the output information.

Preferably, the structure of the CNN module mainly comprises an input layer, n convolutional layers (C1, C2, …, Cn), n pooling layers (P1, P2, …, Pn), m fully connected layers (F1, F2, …, Fm) and an output layer. The input of the input layer is a preprocessed eye diagram image, and the input layer is connected to convolutional layer C1. Convolutional layer C1 contains k1 convolution kernels of size a1×a1; the input image passes through C1 to produce k1 feature maps, which are then passed to pooling layer P1. Pooling layer P1 pools the feature maps produced by C1 with a sampling size of b1×b1, yielding the corresponding k1 downsampled feature maps, which are passed to the next convolutional layer C2. The n convolution–pooling layer pairs are connected in sequence so that progressively deeper sampled features of the image are extracted; the last pooling layer Pn is connected to fully connected layer F1, where convolutional layer Ci contains ki kernels of size ai×ai, the sampling size of pooling layer Pj is bj×bj, Ci denotes the i-th convolutional layer and Pj denotes the j-th pooling layer. Fully connected layer F1 is a one-dimensional layer formed by mapping the pixels of all kn feature maps produced by the last pooling layer Pn; each pixel corresponds to one neuron node of F1, and all neuron nodes of F1 are fully connected to the neuron nodes of the next fully connected layer F2. The m fully connected layers are connected in sequence, and the last fully connected layer Fm is fully connected to the output layer. The output layer outputs node information for the different performance metrics of the eye diagram to be analyzed.

Preferably, the node information output by the output layer is an L-bit binary sequence in which the N different performance metrics are represented by L1, L2, …, LN binary bits respectively; the Li bits encode the Li different index values of the i-th performance metric, and L = L1 + L2 + … + LN.

Preferably, the CNN-based eye diagram processing algorithm serves as the eye diagram software processing module of an oscilloscope or the eye diagram analysis module of simulation software, and can thus be embedded into test instruments for intelligent signal analysis and performance monitoring.

The beneficial effects of the present invention are as follows. The invention overcomes the drawbacks of traditional eye diagram analysis by applying convolutional-neural-network-based deep learning to eye diagram analysis and using the convolutional neural network to perform multiple kinds of performance analysis on eye diagrams. With the present invention, raw eye diagram image data can be processed directly, without manual feature extraction, making eye diagram performance analysis intelligent and automated; the method can therefore serve as the eye diagram software processing module of an oscilloscope or the eye diagram analysis module of simulation software and be embedded into test instruments for intelligent signal analysis and performance monitoring.

Brief Description of the Drawings

Fig. 1 shows the flowchart of the deep learning intelligent eye diagram analysis method based on a convolutional neural network of the present invention;

Fig. 2 shows a schematic diagram of the convolutional-neural-network-based deep learning intelligent eye diagram analysis structure of one embodiment of the present invention;

Fig. 3 shows some of the eye diagram images with different modulation formats and different OSNR values collected in one embodiment of the present invention;

Fig. 4 shows the accuracy of the OSNR estimated under different modulation formats in one embodiment of the present invention;

Fig. 5 shows a comparison of the eye diagram analysis accuracy of the CNN and of other machine learning algorithms under different modulation formats in one embodiment of the present invention.

Detailed Description of the Embodiments

The specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following examples are used to illustrate the present invention but not to limit its scope of protection.

As shown in Fig. 1, the deep learning intelligent eye diagram analysis method based on a convolutional neural network proposed by the present invention applies convolutional-neural-network-based deep learning to eye diagram analysis and uses the convolutional neural network to perform multiple kinds of performance analysis on eye diagrams. It comprises the following steps: Step 1, obtaining the eye diagram training data set for the required analysis; Step 2, preprocessing the eye diagram images; Step 3, training a convolutional neural network (CNN) module to perform feature extraction on the eye diagrams; Step 4, inputting the eye diagram to be analyzed into the trained CNN module for pattern recognition and performance analysis; Step 5, outputting the analysis results.

In this embodiment, the eye diagram performance metrics to be analyzed are the modulation format and the OSNR.

In step 1 of obtaining the eye diagram training data set, a basic simulation system was built with VPI Transmission Maker 9.0, in which optical signals of four different modulation formats are generated from pseudo-random binary sequences: 4PAM, RZ-DPSK, NRZ-OOK and RZ-OOK. All four formats are based on direct detection, so the transmitted information is carried in the signal amplitude, which suits subsequent eye diagram analysis. In the simulation system, an erbium-doped fiber amplifier (EDFA) adds amplified spontaneous emission (ASE) noise to the optical signal, and a variable optical attenuator (VOA) adjusts the OSNR from 10 to 25 dB in 1 dB steps. To make the simulated signals as realistic as possible, a chromatic dispersion (CD) emulator is added so that the generated eye diagrams better reflect real conditions. Of the four modulation formats in this embodiment, the 4PAM, NRZ-OOK and RZ-OOK signals are detected directly by a photodetector (PD), while the RZ-DPSK signal is detected by a balanced photodetector (BPD) combined with a delay interferometer (DI). After synchronous sampling, digital signals containing the intensity information of the four signal types are obtained. To obtain a more realistic visual effect, this embodiment uses the dedicated eye diagram generation module of an oscilloscope to convert the received digital signals into the corresponding eye diagram images.

Based on the simulation system, in this embodiment each modulation format is assigned 16 different OSNR values (10–25 dB in 1 dB steps), and for each OSNR value of each modulation format 100 eye diagram images of 900×1200 pixels in "jpg" format are collected. Each eye diagram image together with its modulation format and OSNR value constitutes one training pair, so the complete training data set comprises 6400 (1600×4) training samples in total.
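As a small illustrative check of the dataset bookkeeping above, the total number of training samples can be enumerated as follows; the constants simply restate the figures of this embodiment and the snippet is not part of the patent.

```python
# Dataset bookkeeping for this embodiment: 4 modulation formats x 16 OSNR
# values (10-25 dB in 1 dB steps) x 100 images per combination.
FORMATS = ["4PAM", "RZ-DPSK", "NRZ-OOK", "RZ-OOK"]
OSNR_DB = list(range(10, 26))        # 16 OSNR values
IMAGES_PER_POINT = 100

total = len(FORMATS) * len(OSNR_DB) * IMAGES_PER_POINT
print(total)                         # 6400 training samples (1600 per format x 4 formats)
```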

In the eye diagram image preprocessing step 2, to reduce the amount of computation and improve generalization, the eye diagram images collected in step 1 are converted from color to grayscale and downsampled so that the original eye diagrams are reduced to 28×28 pixels; the processed training data set is then fed into the constructed CNN module. As shown in Fig. 3, different eye diagrams exhibit different modulation formats, and careful visual inspection of the observed eye diagrams also reveals a first-order approximate relationship between the eye diagram and the OSNR value.
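A minimal sketch of this preprocessing is given below; it assumes the Pillow and NumPy libraries, the file name is a hypothetical example, and the normalization of pixel values to [0, 1] is an added convention not stated in the patent.

```python
# Grayscale conversion and downsampling of a collected 900x1200 eye diagram
# image to the 28x28 input size used by the CNN module.
import numpy as np
from PIL import Image

def preprocess_eye_diagram(path, size=(28, 28)):
    img = Image.open(path).convert("L")                # color -> grayscale
    img = img.resize(size)                             # downsample to 28x28
    return np.asarray(img, dtype=np.float32) / 255.0   # scale pixels to [0, 1]

# Hypothetical usage:
# x = preprocess_eye_diagram("4pam_osnr15_001.jpg")    # x.shape == (28, 28)
```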

In the feature extraction step 3 of training the CNN module, each eye diagram image in the training data set fed to the CNN module corresponds one-to-one to a 20-bit label vector. The first 4 bits of the label vector represent the modulation format (4PAM: 0001, RZ-DPSK: 0010, NRZ-OOK: 0100, RZ-OOK: 1000), and the last 16 bits represent the OSNR value (10 dB: 0000000000000001, 11 dB: 0000000000000010, …, 25 dB: 1000000000000000). During training, the CNN module gradually extracts effective features from the input eye diagram images. At the same time, to minimize the error between the ideal label vector and the actual output vector, the CNN module adjusts its kernel parameters step by step using gradient descent via backpropagation.
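The 20-bit label vectors can be built as in the sketch below, which follows the bit orderings given in the text; the function name and NumPy representation are illustrative choices, not part of the patent.

```python
# Build the 20-bit label vector: 4 one-hot bits for the modulation format
# followed by 16 one-hot bits for the OSNR value (10-25 dB in 1 dB steps).
import numpy as np

FORMATS = ["RZ-OOK", "NRZ-OOK", "RZ-DPSK", "4PAM"]   # 1000, 0100, 0010, 0001

def make_label(mod_format, osnr_db):
    fmt = np.zeros(4, dtype=np.float32)
    fmt[FORMATS.index(mod_format)] = 1.0             # e.g. 4PAM -> 0001
    osnr = np.zeros(16, dtype=np.float32)
    osnr[25 - osnr_db] = 1.0                         # 25 dB -> leftmost bit, 10 dB -> rightmost bit
    return np.concatenate([fmt, osnr])               # 20-bit label vector

# make_label("4PAM", 10) -> [0, 0, 0, 1,  0, ..., 0, 1]
```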

Fig. 2 shows the structure of the convolutional-neural-network-based intelligent eye diagram analysis of one specific embodiment of the present invention. The CNN module mainly comprises an input layer, two convolutional layers (C1, C2), two pooling layers (P1, P2), one fully connected layer (F1) and an output layer. The preprocessed 28×28 eye diagram image is fed into the CNN module as the input layer, which is connected to convolutional layer C1. The input image passes through convolutional layer C1, which contains 6 convolution kernels of size 5×5, producing 6 feature maps of size 24×24 that are passed to pooling layer P1. Pooling layer P1 max-pools the 6 feature maps with a sampling size of 2×2, producing 6 downsampled feature maps of size 12×12 that are passed to convolutional layer C2. Convolutional layer C2 contains 12 convolution kernels of size 5×5; the 6 feature maps from P1 pass through C2 to produce 12 feature maps of size 8×8, which are passed to pooling layer P2. Pooling layer P2 likewise max-pools the 12 feature maps generated by C2 with a sampling size of 2×2, producing 12 downsampled feature maps of size 4×4, which are then passed to fully connected layer F1. The pixels of all feature maps from P2 are mapped into the one-dimensional fully connected layer F1, each pixel representing one neuron node of F1, and every neuron node of F1 is fully connected to the output layer. Finally, the output layer outputs the node information of the eye diagram performance metrics to be analyzed.
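For illustration only, the following PyTorch sketch reproduces the layer sizes of this embodiment; the patent does not specify a framework or activation functions, so the use of PyTorch and ReLU here is an assumption.

```python
# CNN module of this embodiment: 28x28 input, C1 = 6 kernels of 5x5,
# P1 = 2x2 max pooling, C2 = 12 kernels of 5x5, P2 = 2x2 max pooling,
# F1 = the 12*4*4 = 192 flattened pixels, fully connected to 20 output nodes.
import torch
import torch.nn as nn

class EyeDiagramCNN(nn.Module):
    def __init__(self, num_outputs=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # 1x28x28 -> 6x24x24
            nn.ReLU(),                         # activation assumed, not specified in the patent
            nn.MaxPool2d(2),                   # 6x24x24 -> 6x12x12
            nn.Conv2d(6, 12, kernel_size=5),   # 6x12x12 -> 12x8x8
            nn.ReLU(),
            nn.MaxPool2d(2),                   # 12x8x8 -> 12x4x4
        )
        self.classifier = nn.Linear(12 * 4 * 4, num_outputs)  # F1 -> output layer

    def forward(self, x):                      # x: (batch, 1, 28, 28)
        x = self.features(x)
        x = torch.flatten(x, 1)                # F1: 192 neuron nodes
        return self.classifier(x)              # 4 format bits + 16 OSNR bits

# logits = EyeDiagramCNN()(torch.randn(8, 1, 28, 28))   # logits.shape == (8, 20)
```

Training such a model against the 20-bit label vectors could then, for example, minimize a sigmoid cross-entropy loss with stochastic gradient descent, mirroring the backpropagation procedure described in step 3; the choice of loss and optimizer is again an assumption.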

The convolutional layer is the core building block of the CNN module. Its parameters consist of a set of convolution kernels, which have small local receptive fields but extend through the full depth of the eye diagram image. During forward propagation, each kernel is convolved with the pixels across the width and height of the eye diagram image, producing a two-dimensional plane called the feature map generated by that kernel. Unlike the classical convolution of mathematics, the operation in a CNN is a discrete convolution, which can be viewed as a matrix multiplication. A convolution kernel can be regarded as a feature detector through which the CNN module learns distinctive features from the input image; to build a more effective model, multiple kernels are generally needed to detect multiple features, producing multiple feature maps in a convolutional layer. After feature extraction in the convolutional layer, the pooling layer merges semantically similar features into one; a typical pooling method computes the maximum over local blocks of a feature map, thereby subsampling it. In this embodiment, each subsampling unit takes its input from a 2×2 region of the convolutional feature map and uses the maximum of these inputs as the pooled value, forming the pooled feature map.
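As a toy illustration of the 2×2 max pooling just described (the feature map values below are made up, not taken from the patent), each pooled output is simply the maximum of one non-overlapping 2×2 block:

```python
# 2x2 max pooling: every sub-sampling unit outputs the maximum of one
# non-overlapping 2x2 block of the feature map.
import numpy as np

def max_pool_2x2(feature_map):
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

fm = np.arange(16).reshape(4, 4)   # toy 4x4 feature map
print(max_pool_2x2(fm))            # [[ 5  7]
                                   #  [13 15]]
```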

In the pattern recognition and performance analysis step 4 of the CNN module, preprocessed eye diagram images of the four modulation formats to be analyzed, each with OSNR values ranging from 10 to 25 dB in 1 dB steps, are fed into the trained CNN module. The CNN module performs pattern recognition on the input eye diagrams under the different conditions and, using the experience learned during the training stage, analyzes the modulation format and OSNR of each input eye diagram; the analysis results are output as a 20-bit vector.

In the output step 5, the first 4 bits of the 20-bit vector output by the CNN module give the modulation format of the analyzed eye diagram, and the last 16 bits give the corresponding OSNR value.
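Reading the analysis result out of the 20-bit output vector could look like the sketch below, which mirrors the label encoding used for training; the function name and the use of argmax over raw scores are illustrative assumptions.

```python
# Decode the 20-bit CNN output: the first 4 bits give the modulation format,
# the last 16 bits give the OSNR value (leftmost OSNR bit = 25 dB).
import numpy as np

FORMATS = ["RZ-OOK", "NRZ-OOK", "RZ-DPSK", "4PAM"]   # 1000, 0100, 0010, 0001

def decode_output(bits):                 # length-20 vector of bits or scores
    bits = np.asarray(bits, dtype=np.float32)
    mod_format = FORMATS[int(np.argmax(bits[:4]))]
    osnr_db = 25 - int(np.argmax(bits[4:]))
    return mod_format, osnr_db

# decode_output([0, 0, 0, 1] + [0] * 15 + [1]) -> ("4PAM", 10)
```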

To demonstrate the accuracy of the proposed method, Fig. 4 shows the OSNR estimation accuracy of the CNN module for the different modulation formats at different numbers of training iterations. The accuracy for all four modulation formats increases with the number of iterations, and CNN modules trained with different numbers of iterations have different recognition capabilities. In this embodiment, once the number of iterations exceeds 31, the OSNR estimation accuracy of the CNN module reaches 100% for all four modulation formats, i.e. the analyzed performance is obtained without error.

To demonstrate the advantage of the present invention, the CNN was also compared with four other well-known machine learning algorithms: decision tree, KNN, BP-ANN and SVM. The OSNR estimation accuracy of each algorithm under the different modulation formats is shown as a histogram in Fig. 5, from which it is clear that the CNN has an obvious advantage over the other four algorithms. The decision tree algorithm is fast and has low memory requirements, but these same properties lead to lower estimation accuracy. The KNN algorithm usually achieves good accuracy in low dimensions but can show large deviations in high dimensions. The SVM algorithm has advantages in both accuracy and memory usage because it needs only a few support vectors, but it is essentially a binary classifier, so multiple SVM classifiers are required to handle multiple OSNR values. Although the BP neural network also derives from neural networks, it lacks feature extraction capability, requires a large amount of training data to achieve good results, and easily falls into local minima and overfitting. Compared with these algorithms, the CNN is less sensitive to the variance of the input data, the constructed network is more powerful and largely avoids overfitting, and it can automatically extract features from the input data, performing especially well on image processing; at the same time, thanks to local receptive fields, weight sharing and subsampling, the CNN achieves the best accuracy at a reasonable computational cost.

In summary, the method proposed by the present invention applies convolutional-neural-network-based deep learning to eye diagram analysis and can effectively serve as the eye diagram software processing module of an oscilloscope or the eye diagram analysis module of simulation software, and can further be embedded into test instruments for intelligent signal analysis and performance monitoring, making eye diagram analysis automated and intelligent.

The above embodiments are only used to illustrate the present invention and do not limit its scope of protection; those skilled in the relevant art may make various modifications and variations to the present invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A deep learning intelligent eye diagram analysis method based on a convolutional neural network, characterized in that convolutional-neural-network-based deep learning is applied to eye diagram analysis and the convolutional neural network is used to perform multiple kinds of performance analysis on eye diagrams, the method comprising the following steps: Step 1, obtaining the eye diagram training data set for the required analysis; Step 2, preprocessing the eye diagram images; Step 3, training a convolutional neural network (CNN) module to perform feature extraction on the eye diagrams; Step 4, inputting the eye diagram to be analyzed into the trained CNN module for pattern recognition and performance analysis; Step 5, outputting the analysis results.

2. The deep learning intelligent eye diagram analysis method based on a convolutional neural network according to claim 1, characterized in that the multiple performance metrics to be analyzed from the eye diagram are modulation format, optical signal-to-noise ratio (OSNR), chromatic dispersion (CD), linear impairments and nonlinear impairments.

3. The method according to claim 1, characterized in that in step 1 of obtaining the eye diagram training set, training data are collected for different values of the various performance metrics of the eye diagram, wherein each item in the training data set is a pair whose input is an eye diagram image and whose output is the specific index information of a particular performance metric.

4. The method according to claim 1, characterized in that in the eye diagram preprocessing step 2, the color eye diagram images in the training data set obtained in step 1 are converted into grayscale images, and the resulting grayscale eye diagram images are downsampled.

5. The method according to claim 1, characterized in that in the feature extraction step 3, the eye diagrams preprocessed in step 2 are fed into the constructed CNN module; after training on the training data, the CNN module automatically extracts features from the eye diagram images and builds the relationship between those features and the different performance metrics.

6. The method according to claim 1, characterized in that in the pattern recognition and performance analysis step 4, the preprocessed eye diagram to be analyzed is fed into the trained CNN module; the CNN module performs pattern recognition on the input eye diagram and analyzes the performance of the currently input eye diagram based on its previous learning experience.

7. The method according to claim 1, characterized in that in the output step 5, the information output by the CNN module contains the various performance metrics to be analyzed, and the analysis result for each metric can be obtained from the output information.

8. The method according to claim 1, characterized in that the structure of the CNN module mainly comprises an input layer, n convolutional layers (C1, C2, …, Cn), n pooling layers (P1, P2, …, Pn), m fully connected layers (F1, F2, …, Fm) and an output layer;
wherein the input of the input layer is a preprocessed eye diagram image, and the input layer is connected to the convolutional layer C1;
the convolutional layer C1 contains k1 convolution kernels of size a1×a1; the input-layer image passes through C1 to produce k1 feature maps, which are then passed to the pooling layer P1;
the pooling layer P1 pools the feature maps generated by C1 with a sampling size of b1×b1 to obtain the corresponding k1 downsampled feature maps, which are then passed to the next convolutional layer C2;
the n convolution–pooling layer pairs are connected in sequence so that progressively deeper sampled features of the image are extracted, and the last pooling layer Pn is connected to the fully connected layer F1, wherein the convolutional layer Ci contains ki convolution kernels of size ai×ai, the sampling size of the pooling layer Pj is bj×bj, Ci denotes the i-th convolutional layer and Pj denotes the j-th pooling layer;
the fully connected layer F1 is a one-dimensional layer formed by mapping the pixels of all kn feature maps obtained from the last pooling layer Pn; each pixel represents one neuron node of F1, and all neuron nodes of F1 are fully connected to the neuron nodes of the next fully connected layer F2;
the m fully connected layers are connected in sequence, and the last fully connected layer Fm is fully connected to the output layer;
the output layer outputs node information for the different performance metrics of the eye diagram to be analyzed.

9. The method according to claim 8, characterized in that the node information output by the output layer is an L-bit binary sequence, wherein the N different performance metrics are represented by L1, L2, …, LN binary bits respectively, the Li bits encode the Li different index values of the i-th performance metric, and L = L1 + L2 + … + LN.

10. The method according to claim 1, characterized in that the CNN-based eye diagram processing algorithm serves as the eye diagram software processing module of an oscilloscope or the eye diagram analysis module of simulation software, and is further embedded into test instruments for intelligent signal analysis and performance monitoring.
CN201710534126.0A 2017-07-03 2017-07-03 Deep Learning Intelligent Eye Diagram Analysis Method Based on Convolutional Neural Network Active CN107342810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710534126.0A CN107342810B (en) 2017-07-03 2017-07-03 Deep Learning Intelligent Eye Diagram Analysis Method Based on Convolutional Neural Network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710534126.0A CN107342810B (en) 2017-07-03 2017-07-03 Deep Learning Intelligent Eye Diagram Analysis Method Based on Convolutional Neural Network

Publications (2)

Publication Number Publication Date
CN107342810A true CN107342810A (en) 2017-11-10
CN107342810B (en) 2019-11-19

Family

ID=60218952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710534126.0A Active CN107342810B (en) 2017-07-03 2017-07-03 Deep Learning Intelligent Eye Diagram Analysis Method Based on Convolutional Neural Network

Country Status (1)

Country Link
CN (1) CN107342810B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446631A (en) * 2018-03-20 2018-08-24 北京邮电大学 The smart frequency spectrum figure analysis method of deep learning based on convolutional neural networks
CN108957125A (en) * 2018-03-20 2018-12-07 北京邮电大学 Smart frequency spectrum figure analysis method based on machine learning
CN109120563A (en) * 2018-08-06 2019-01-01 电子科技大学 A kind of Modulation Identification method based on Artificial neural network ensemble
CN109217923A (en) * 2018-09-28 2019-01-15 北京科技大学 A kind of joint optical information networks and rate, modulation format recognition methods and system
CN109547102A (en) * 2018-12-17 2019-03-29 北京邮电大学 A kind of optical information networks method, apparatus, electronic equipment and readable storage medium storing program for executing
CN109768944A (en) * 2018-12-29 2019-05-17 苏州联讯仪器有限公司 A kind of signal modulation identification of code type method based on convolutional neural networks
CN109905167A (en) * 2019-02-25 2019-06-18 苏州工业园区新国大研究院 A kind of optical communication system method for analyzing performance based on convolutional neural networks
CN111157551A (en) * 2018-11-07 2020-05-15 浦项工科大学校产学协力团 Method for analyzing perovskite structure using machine learning
CN111863104A (en) * 2020-07-29 2020-10-30 展讯通信(上海)有限公司 Eye pattern determination model training method, eye pattern determination device, eye pattern determination apparatus, and medium
CN111860852A (en) * 2019-04-30 2020-10-30 百度时代网络技术(北京)有限公司 Method, apparatus and system for processing data
CN111934755A (en) * 2020-07-08 2020-11-13 国网宁夏电力有限公司电力科学研究院 SDN controller and optical signal-to-noise ratio prediction method of optical communication equipment
CN112115760A (en) * 2019-06-20 2020-12-22 和硕联合科技股份有限公司 Object detection system and object detection method
CN112836422A (en) * 2020-12-31 2021-05-25 电子科技大学 Interferometric and Convolutional Neural Network Hybrid Scheme Measurement Method
CN113141214A (en) * 2021-04-06 2021-07-20 中山大学 Deep learning-based underwater optical communication misalignment robust blind receiver design method
US11907090B2 (en) 2021-08-12 2024-02-20 Tektronix, Inc. Machine learning for taps to accelerate TDECQ and other measurements
US11923896B2 (en) 2021-03-24 2024-03-05 Tektronix, Inc. Optical transceiver tuning using machine learning
US11923895B2 (en) 2021-03-24 2024-03-05 Tektronix, Inc. Optical transmitter tuning using machine learning and reference parameters
US11940889B2 (en) 2021-08-12 2024-03-26 Tektronix, Inc. Combined TDECQ measurement and transmitter tuning using machine learning
US12146914B2 (en) 2021-05-18 2024-11-19 Tektronix, Inc. Bit error ratio estimation using machine learning


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205453A (en) * 2015-08-28 2015-12-30 中国科学院自动化研究所 Depth-auto-encoder-based human eye detection and positioning method
GB2545661A (en) * 2015-12-21 2017-06-28 Nokia Technologies Oy A method for analysing media content
CN106326874A (en) * 2016-08-30 2017-01-11 天津中科智能识别产业技术研究院有限公司 Method and device for recognizing iris in human eye images
CN106650688A (en) * 2016-12-30 2017-05-10 公安海警学院 Eye feature detection method, device and recognition system based on convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赖俊森 (Lai Junsen): "基于眼图重构和人工神经网络的光性能监测" [Optical performance monitoring based on eye diagram reconstruction and artificial neural networks], 《光电子·激光》 (Journal of Optoelectronics · Laser) *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108957125A (en) * 2018-03-20 2018-12-07 北京邮电大学 Smart frequency spectrum figure analysis method based on machine learning
CN108446631A (en) * 2018-03-20 2018-08-24 北京邮电大学 The smart frequency spectrum figure analysis method of deep learning based on convolutional neural networks
CN109120563B (en) * 2018-08-06 2020-12-29 电子科技大学 A Modulation Identification Method Based on Neural Network Integration
CN109120563A (en) * 2018-08-06 2019-01-01 电子科技大学 A kind of Modulation Identification method based on Artificial neural network ensemble
CN109217923A (en) * 2018-09-28 2019-01-15 北京科技大学 A kind of joint optical information networks and rate, modulation format recognition methods and system
CN111157551A (en) * 2018-11-07 2020-05-15 浦项工科大学校产学协力团 Method for analyzing perovskite structure using machine learning
CN109547102A (en) * 2018-12-17 2019-03-29 北京邮电大学 A kind of optical information networks method, apparatus, electronic equipment and readable storage medium storing program for executing
CN109768944A (en) * 2018-12-29 2019-05-17 苏州联讯仪器有限公司 A kind of signal modulation identification of code type method based on convolutional neural networks
CN109905167A (en) * 2019-02-25 2019-06-18 苏州工业园区新国大研究院 A kind of optical communication system method for analyzing performance based on convolutional neural networks
CN111860852A (en) * 2019-04-30 2020-10-30 百度时代网络技术(北京)有限公司 Method, apparatus and system for processing data
CN112115760A (en) * 2019-06-20 2020-12-22 和硕联合科技股份有限公司 Object detection system and object detection method
CN112115760B (en) * 2019-06-20 2024-02-13 和硕联合科技股份有限公司 Object detection system and object detection method
TWI738009B (en) * 2019-06-20 2021-09-01 和碩聯合科技股份有限公司 Object detection system and object detection method
US11195083B2 (en) 2019-06-20 2021-12-07 Pegatron Corporation Object detection system and object detection method
CN111934755B (en) * 2020-07-08 2022-03-25 国网宁夏电力有限公司电力科学研究院 SDN controller and optical signal-to-noise ratio prediction method of optical communication equipment
CN111934755A (en) * 2020-07-08 2020-11-13 国网宁夏电力有限公司电力科学研究院 SDN controller and optical signal-to-noise ratio prediction method of optical communication equipment
CN111863104A (en) * 2020-07-29 2020-10-30 展讯通信(上海)有限公司 Eye pattern determination model training method, eye pattern determination device, eye pattern determination apparatus, and medium
CN111863104B (en) * 2020-07-29 2023-05-09 展讯通信(上海)有限公司 Eye diagram judgment model training method, eye diagram judgment device, eye diagram judgment equipment and medium
CN112836422B (en) * 2020-12-31 2022-03-18 电子科技大学 Interferometric and Convolutional Neural Network Hybrid Scheme Measurement Method
CN112836422A (en) * 2020-12-31 2021-05-25 电子科技大学 Interferometric and Convolutional Neural Network Hybrid Scheme Measurement Method
US11923896B2 (en) 2021-03-24 2024-03-05 Tektronix, Inc. Optical transceiver tuning using machine learning
US11923895B2 (en) 2021-03-24 2024-03-05 Tektronix, Inc. Optical transmitter tuning using machine learning and reference parameters
CN113141214A (en) * 2021-04-06 2021-07-20 中山大学 Deep learning-based underwater optical communication misalignment robust blind receiver design method
US12146914B2 (en) 2021-05-18 2024-11-19 Tektronix, Inc. Bit error ratio estimation using machine learning
US11907090B2 (en) 2021-08-12 2024-02-20 Tektronix, Inc. Machine learning for taps to accelerate TDECQ and other measurements
US11940889B2 (en) 2021-08-12 2024-03-26 Tektronix, Inc. Combined TDECQ measurement and transmitter tuning using machine learning

Also Published As

Publication number Publication date
CN107342810B (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN107342810B (en) Deep Learning Intelligent Eye Diagram Analysis Method Based on Convolutional Neural Network
CN107342962B (en) deep learning intelligent constellation diagram analysis method based on convolutional neural network
Fan et al. Joint optical performance monitoring and modulation format/bit-rate identification by CNN-based multi-task learning
CN114692681B (en) SCNN-based distributed optical fiber vibration and acoustic wave sensing signal identification method
CN110932809B (en) Fiber channel model simulation method, device, electronic equipment and storage medium
CN108446631A (en) The smart frequency spectrum figure analysis method of deep learning based on convolutional neural networks
CN110472483A (en) A kind of method and device of the small sample semantic feature enhancing towards SAR image
CN114157539B (en) A data-knowledge dual-driven modulation intelligent identification method
Lv et al. Joint OSNR monitoring and modulation format identification on signal amplitude histograms using convolutional neural network
Wang et al. Convolutional neural network-based deep learning for intelligent OSNR estimation on eye diagrams
CN117557775A (en) Substation power equipment detection method and system based on infrared and visible light fusion
CN106778910A (en) Deep learning system and method based on local training
CN115641263A (en) Super-resolution reconstruction method of single infrared image of power equipment based on deep learning
CN116070136A (en) Multi-modal fusion wireless signal automatic modulation recognition method based on deep learning
CN114070415A (en) Optical fiber nonlinear equalization method and system
CN111353412B (en) End-to-end 3D-CapsNet flame detection method and device
CN104036242A (en) Object recognition method based on convolutional restricted Boltzmann machine combining Centering Trick
CN111541484A (en) Optical Signal-to-Noise Ratio Monitoring Method of Optical Fiber Communication System Based on Delay Sampling
CN116091897A (en) Distributed optical fiber sensing event identification method and system based on light weight
CN114676637A (en) Fiber channel modeling method and system for generating countermeasure network based on conditions
CN113343924B (en) Modulation signal identification method based on cyclic spectrum characteristics and generation countermeasure network
CN114722905A (en) Training method and device for optical communication receiving model
Gao et al. Joint baud-rate and modulation format identification based on asynchronous delay-tap plots analyzer by using convolutional neural network
CN115859186B (en) Distributed optical fiber sensing event recognition method and system based on Gramey angle field
CN110210536A (en) A kind of the physical damnification diagnostic method and device of optical interconnection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant