CN116893162A - Rare anti-nuclear antibody karyotype detection method based on YOLO and attention neural network - Google Patents
- Publication number
- CN116893162A (application number CN202310612311.2A)
- Authority
- CN
- China
- Prior art keywords
- model
- rare
- yolo
- karyotype
- attention
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012 — Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
- G01N21/6428 — Fluorescence; measuring fluorescence of fluorescent products of reactions or of fluorochrome labelled reactive substances, e.g. measuring quenching effects, using measuring "optrodes"
- G06V10/774 — Image or video recognition or understanding using pattern recognition or machine learning; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/806 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
- G06V10/82 — Image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06T2207/20081 — Indexing scheme for image analysis or image enhancement; training; learning
- G06T2207/20084 — Indexing scheme for image analysis or image enhancement; artificial neural networks [ANN]
Abstract
The present invention provides a rare anti-nuclear antibody (ANA) karyotype detection method based on YOLO and an attention neural network. In this method, images are normalized and then passed into the neural network for detection, yielding detections of rare anti-nuclear antibody karyotypes. The advantage of the method lies in a customized, effective improvement and upgrade of the conventional YOLO object detection algorithm: in view of the characteristics of clinically rare anti-nuclear antibody images, an attention mechanism is introduced on top of the YOLO object detection model, so that the model increases the weight of the key features of rare anti-nuclear antibody karyotype images within the whole feature map, strengthens the network's feature extraction capability for such images, and thereby achieves higher accuracy. Implementing the method improves the efficiency with which medical laboratory personnel read anti-nuclear antibody fluorescence images, avoids the subjective differences that arise when different readers interpret images by eye, and alleviates the currently widespread clinical problem of missed detections caused by laboratory personnel's limited familiarity with rare ANA karyotypes.
Description
Technical Field
The invention belongs to the field of biomedicine, and specifically relates to a method for detecting rare anti-nuclear antibody karyotypes that combines a YOLO object detection model with an attention neural network model.
Background
Anti-nuclear antibodies (ANA) serve as serological markers of autoimmune disease. Since the European League Against Rheumatism/American College of Rheumatology (EULAR/ACR) incorporated ANA into the principal classification criteria for systemic lupus erythematosus (SLE) in 2007, ANA testing has become one of the most important laboratory tests in the diagnosis of autoimmune diseases. It is included in the classification criteria of many autoimmune diseases (AID) and plays an important role in their diagnosis, classification, and prognosis.
According to the International Consensus on ANA Patterns (ICAP), as many as 30 ANA fluorescence patterns are recognized, but the number of cases per pattern is highly uneven: the vast majority are concentrated in patterns such as speckled and homogeneous, whose clinical significance is now relatively well established. Some uncommon patterns, by contrast, occur so rarely that their clinical significance remains unclear and understanding of them is still limited. At present, patterns appearing in less than 1% of ANA karyotypes are defined as rare ANA karyotypes. They fall into 3 groups comprising 9 classes in total: cell-cycle-related patterns, including NuMA, spindle fiber, CENP-F, midbody, PCNA, and centriole; nucleus-related patterns, including multiple nuclear dots and nuclear envelope; and cytoplasm-related patterns, chiefly the Golgi pattern. The problems in current clinical practice are: 1) limited awareness — because these karyotypes are very rare clinically, laboratory personnel are insufficiently familiar with them, laboratories do not routinely report these patterns, and detections are often missed; 2) complexity — interpretation of rare ANA karyotypes must combine the appearance of mitotic and interphase cells, and mitosis itself comprises four stages (prophase, in which chromosomes condense; metaphase, in which chromosomes are drawn toward the spindle poles; anaphase, in which the chromosomes begin to separate; and telophase, with two fully separated sets of chromosomes), so the nuclear morphology at each stage is similar yet distinct, making interpretation by eye nontrivial; 3) clinical value — some rare ANA karyotypes aid in the diagnosis of particular autoimmune diseases; for example, the nuclear envelope pattern is strongly associated with autoimmune liver disease. Detecting rare ANA karyotypes is therefore of considerable clinical value.
The most traditional method, indirect immunofluorescence (IIF), is still regarded as the "gold standard", "reference method", and "method of choice" for ANA detection. However, its results depend on interpretation by eye, which suffers from incomplete visual capture, fatigue, and easily missed subtle findings. As the clinical applications of ANA testing expand and demand grows day by day, the existing workflow is increasingly unable to meet routine clinical needs. With the recent advances of deep learning, represented by convolutional neural networks (CNN), unique advantages have emerged in medical image processing. CNNs are modeled on the visual perception mechanisms of living organisms and support both supervised and unsupervised learning; the sharing of convolution kernel parameters within hidden layers and the sparsity of inter-layer connections allow a CNN to learn grid-structured features at modest computational cost, making it well suited to processing complex images. ANA immunofluorescence karyotype images are complex and highly variable because of individual differences, which matches the strengths of CNNs. Recognizing and localizing rare ANA karyotypes is the prerequisite for their detection.
In recent years, object detection algorithms in deep learning have advanced considerably. To date they fall mainly into two categories: two-stage detectors led by R-CNN, including Fast R-CNN, and single-stage detectors such as SSD and YOLO. These algorithms have achieved promising results in detection tasks across many fields. However, because rare ANA karyotypes are scarce and their cellular features are not obvious, interpretation depends on finding extremely subtle and easily overlooked key feature points, which may appear at any stage of the cell division cycle and carry a certain complexity. In addition, the ANA detection substrate, the human laryngeal carcinoma epithelial cell line (HEp-2 cells), suffers from densely stacked cells, and conventional object detection algorithms extract such features inefficiently.
The attention mechanism is a special structure embedded in deep learning models in recent years to learn and compute how much each part of the input contributes to the output, i.e., to weight the input signal, which is very helpful for highlighting the key feature points of rare ANA karyotypes. In a CNN architecture, to capture a sufficiently large receptive field and semantic contextual information, the feature maps are progressively downsampled, and coarse, grid-level spatial features are used to locate target objects and to model the relationships between them across the whole image. The basic role of the attention mechanism is to weight different regions of the image, assigning the largest weights to the most relevant parts. These modules are trainable and are applied to every part of the image, ensuring progressive weight learning and increasing attention to the key regions.
On this basis, we introduce an attention mechanism into the YOLO architecture to further strengthen the network's ability to extract features of more complex information such as the stages of cell division. For the specific task of detecting the key feature points of rare ANA karyotypes, a customized and effective improvement of an existing general-purpose object detection algorithm is therefore important for raising model accuracy and for realizing rare ANA karyotype detection.
The problems with the existing technology include: 1. Clinically, laboratory personnel are insufficiently familiar with rare ANA karyotypes, detections are often missed, and laboratories do not routinely report these patterns, so collecting a training set requires considerable effort during the model training phase. 2. Rare ANA karyotypes are scarce and their cellular features are not obvious; interpretation requires finding extremely subtle and easily overlooked key feature points, and the conventional YOLO object detection algorithm has limited ability to extract such key representations, so the existing algorithm needs to be upgraded and improved.
Summary of the Invention
The purpose of the present invention is to address the missed detections caused by clinical laboratory personnel's limited familiarity with rare ANA karyotypes, and the inability of existing object detection algorithms to complete this detection task, by designing a method that introduces an attention neural network on top of the YOLO architecture to detect rare ANA karyotypes.
The technical solution of the present invention comprises the following steps:
Step 1: Construct a rare ANA karyotype immunofluorescence image object detection dataset and annotate it;
Step 2: Preprocess and partition the dataset;
Step 3: Use the deep convolutional neural network of the YOLO v5 base network to extract features from the input image and build the YOLO object detection model; on this basis, embed an attention mechanism into the effective feature layers extracted by the Backbone and into the upsampled results, yielding a model based on YOLO and an attention neural network;
Step 4: Set the training parameters, train the constructed YOLO-plus-attention model to obtain the best parameter set, and use the trained model to process ANA immunofluorescence image data to obtain object detection results for the images under test;
Step 5: Deploy the trained rare ANA karyotype detection model clinically.
In the present invention, step 1 constructs the rare ANA karyotype object detection dataset, specifically as follows:
Samples tested for anti-nuclear antibodies at the Department of Laboratory Medicine, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, between April 2010 and December 2022, whose results showed a rare ANA karyotype, were collected. A rare ANA karyotype is defined as a pattern appearing in less than 1% of anti-nuclear antibody karyotypes and comprises 3 groups with 9 classes in total: 1) cell-cycle-related, including NuMA, spindle fiber, CENP-F, midbody, PCNA, and centriole; 2) nucleus-related, including multiple nuclear dots and nuclear envelope; 3) cytoplasm-related, chiefly the Golgi pattern. Senior laboratory technicians with more than 10 years of experience in ANA fluorescence reading and the rank of associate chief technician or above labeled the class and location information of the rare ANA karyotype images according to the 2021 International Consensus on ANA Patterns (ICAP) fluorescence karyotype classification standard. A total of 8,465 rare ANA karyotype images were collected; the number of each karyotype is given in Table 1. The database is owned by Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine.
After the dataset was obtained, the images were annotated with the Labelimg tool, marking the class and location information of each target, and saved in the VOC dataset format. The data were then randomly split into training, validation, and test sets at a ratio of 8:1:1; the exact numbers are given in Table 1.
Table 1. Composition of the rare ANA karyotype object detection dataset
In the present invention, step 2 performs dataset preprocessing and partitioning, specifically as follows:
Before model training, the rare ANA karyotype image data are augmented to avoid the loss of detection accuracy caused by insufficient sampling information. Augmentation preprocessing includes random cropping, horizontal flipping, vertical flipping, random rotation, and changes to image attributes, where the attributes include brightness, contrast, saturation, or hue. The images are then normalized, and the images and their corresponding annotation files are divided into training, validation, and test sets at a ratio of 7:2:1.
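Purely as an illustration of the augmentation and normalization just described — the patent discloses no code, and the crop size, rotation range, jitter strengths, and normalization statistics below are assumed values rather than parameters of the invention:

```python
import torchvision.transforms as T

# Minimal sketch of the step-2 augmentation/normalization pipeline described above.
# All numeric parameters are illustrative assumptions; for object detection the
# bounding-box annotations would have to be transformed consistently, which is omitted here.
train_transforms = T.Compose([
    T.RandomResizedCrop(640),                      # random cropping
    T.RandomHorizontalFlip(p=0.5),                 # horizontal flipping
    T.RandomVerticalFlip(p=0.5),                   # vertical flipping
    T.RandomRotation(degrees=30),                  # random rotation
    T.ColorJitter(brightness=0.2, contrast=0.2,
                  saturation=0.2, hue=0.05),       # brightness/contrast/saturation/hue changes
    T.ToTensor(),                                  # convert to a [0, 1] tensor
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),        # normalization
])
```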
In the present invention, step 3 uses the deep convolutional neural network of the YOLO v5 base network as the backbone for feature extraction from the input image and builds the YOLO object detection model, specifically as follows:
The YOLO v5-based feature extraction network comprises the Input stage, the Backbone, the Neck, and the Head output. An attention mechanism is embedded in the Neck; it contains two independent sub-modules, channel attention and spatial attention, and acts mainly on the effective feature layers extracted by the Backbone and on the upsampled results, thereby improving the extraction of key features from rare anti-nuclear antibody karyotype images. The concrete implementation steps are as follows:
4.1 Input: Mosaic data augmentation is used, in which 4 images are stitched together with random scaling, random cropping, and random arrangement;
4.2 Backbone: the depth and width of the model are first set, and CSPDarknet53 is adopted as the architecture, consisting mainly of Focus and Bottleneck structures. Focus performs downsampling; the Bottleneck structure in turn contains CSP and SPP modules, where the SPP module compresses the channel counts of the multi-scale feature maps extracted from the Bottleneck layers and then fuses the multi-scale features. The Bottleneck structure is the most basic building block of the Backbone, its main component being the residual block, which combines the transmitted information by summation before passing it on;
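For orientation only, the Focus downsampling mentioned here is commonly implemented as a space-to-depth slicing followed by a convolution; the sketch below follows that common YOLO v5 form and is an assumption, not the exact implementation of the invention:

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """Sketch of a Focus downsampling block: slice the image into 4 and concatenate on channels."""
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels * 4, out_channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Take every other pixel in each direction, halving H and W and quadrupling C.
        patches = [x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]]
        return self.conv(torch.cat(patches, dim=1))
```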
4.3 Neck: an FPN_PAN structure is inserted between the Backbone and the final Head output layer to form the basic Neck. In the FPN feature pyramid, the horizontal axis is treated as the scale axis and scale-invariant feature variables are extracted; each pyramid feature map is then uniformly resized to the preset high-resolution feature pyramid map, and finally the high-resolution pyramid map is connected with the extracted scale-invariant features, which the YOLO Head uses to detect the characteristic regions of rare ANA karyotypes. Because the basic YOLO v5 Neck has limited capacity to extract rare ANA karyotype features, and to further improve the network's extraction of more complex information such as the stages of cell division, an attention mechanism is embedded on top of the original Neck. The attention mechanism contains two independent sub-modules, channel attention and spatial attention, which respectively process and pass on the effective feature layers extracted by the backbone and the upsampled results. The network structure of the YOLO v5 backbone after embedding the attention module is shown in Figure 2. For the attention module, the detailed implementation steps are as follows:
4.3.1 Channel attention module — principle and implementation: the input feature map F (H×W×C, where W and H are the width and height of the feature map and C is the number of channels) is passed through global max pooling and global average pooling over width and height, giving two 1×1×C feature maps; each is fed into a two-layer multi-layer perceptron (MLP), the MLP outputs are summed, and a sigmoid activation finally yields the channel weight coefficient Mc (Equation 1). Mc is then converted into a one-dimensional channel attention map A_T, and the input feature map F is multiplied element-wise (pixel-wise) with A_T, giving the channel-wise salient feature map F_T; the calculation is given in Equation 3;
4.3.2 Spatial attention module — principle and implementation: the feature map F is passed through the max pooling layer and the average pooling layer to obtain two H×W×1 channel descriptions, which are concatenated along the channel dimension and then passed through a 7×7 convolution layer and a sigmoid activation to give the spatial weight coefficient Ms (Equation 2);
4.3.3 Obtaining the feature map with attention weights: the spatial weight coefficient Ms obtained above is converted into a one-dimensional spatial attention map A_K; the channel-wise salient feature map F_T is multiplied element-wise with A_K, and after sequential merging the feature map F_R with attention-weighted output is obtained (Equation 4);
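Equations (1)–(4) referenced above are not reproduced in this text. A reconstruction consistent with the description — which closely follows the standard CBAM formulation — would read roughly as follows, with the exact notation being an assumption:

```latex
\begin{aligned}
M_c &= \sigma\!\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big) && \text{(1)}\\
M_s &= \sigma\!\big(f^{7\times 7}\big([\mathrm{AvgPool}(F_T);\ \mathrm{MaxPool}(F_T)]\big)\big) && \text{(2)}\\
F_T &= A_T \otimes F, \qquad A_T = M_c && \text{(3)}\\
F_R &= A_K \otimes F_T, \qquad A_K = M_s && \text{(4)}
\end{aligned}
```

where σ is the sigmoid function, f^{7×7} is a 7×7 convolution, and ⊗ denotes element-wise multiplication.

A corresponding PyTorch-style sketch of such a channel-plus-spatial attention block — an illustration under the same assumptions, not the patented module — is:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Sketch of a CBAM-style channel + spatial attention block (illustrative only)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Two-layer MLP shared by the max- and average-pooled descriptors (Eq. 1).
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # 7x7 convolution over the concatenated spatial descriptors (Eq. 2).
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f.shape
        # --- Channel attention (Eqs. 1 and 3) ---
        avg = self.mlp(f.mean(dim=(2, 3)))                 # global average pooling
        mx = self.mlp(f.amax(dim=(2, 3)))                  # global max pooling
        a_t = torch.sigmoid(avg + mx).view(b, c, 1, 1)     # channel weights Mc
        f_t = f * a_t                                      # channel-wise salient map F_T
        # --- Spatial attention (Eqs. 2 and 4) ---
        desc = torch.cat([f_t.mean(dim=1, keepdim=True),
                          f_t.amax(dim=1, keepdim=True)], dim=1)
        a_k = torch.sigmoid(self.spatial_conv(desc))       # spatial weights Ms
        return f_t * a_k                                   # attention-weighted output F_R
```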
4.4 Head: on top of the original network, the original loss function GIOU_Loss is replaced with SIoU, and the NMS used to filter prediction boxes is changed to DIoU-NMS, to solve the regression convergence problem that arises when, because cells in the image are densely stacked, the predicted box lies at different positions inside the ground-truth box.
In the present invention, step 4 sets the training parameters, trains the constructed YOLO-plus-attention neural network model, and uses the trained model to process immunofluorescence image data to obtain object detection results for the images under test, specifically as follows:
The rare ANA karyotype object detection dataset is read, the training parameters are set, and training begins, producing a curve of the loss value over training time. When the loss has converged, the model is tested; when it has not converged, the model parameters are adjusted until it converges (see Figure 3A and B). The model trained at this point is used as the final model for clinical deployment;
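As an informal sketch of the train-until-convergence loop described here — the model, data loaders, loss function, learning rate, and convergence thresholds below are hypothetical placeholders, not details disclosed by the invention:

```python
import torch

def train_until_converged(model, train_loader, compute_loss,
                          max_epochs: int = 300, patience: int = 20):
    """Sketch: train, track the loss curve, and stop once the loss has converged."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.937)
    history, best, stale = [], float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        epoch_loss = 0.0
        for images, targets in train_loader:
            optimizer.zero_grad()
            loss = compute_loss(model(images), targets)   # e.g. an SIoU-based box loss
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        history.append(epoch_loss / max(len(train_loader), 1))
        # Simple convergence check: stop when the loss has not improved for `patience` epochs.
        if history[-1] < best - 1e-4:
            best, stale = history[-1], 0
            torch.save(model.state_dict(), "best.pt")
        else:
            stale += 1
            if stale >= patience:
                break
    return history
```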
Because ANA karyotype images use the human laryngeal carcinoma epithelial cell line (HEp-2 cells) as the detection substrate and the cells are densely stacked, DIoU-NMS is used to optimize, during convergence, the overlap between the predicted box and the ground-truth box, so that the result of NMS is more reasonable and effective.
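For illustration, DIoU-NMS additionally accounts for the distance between box centers when suppressing overlapping predictions, which helps with densely stacked cells; a generic sketch (not code from the invention) is:

```python
import torch

def diou_nms(boxes: torch.Tensor, scores: torch.Tensor, iou_thresh: float = 0.5):
    """Sketch of DIoU-NMS for boxes in (x1, y1, x2, y2) format (illustrative only)."""
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0].item()
        keep.append(i)
        if order.numel() == 1:
            break
        rest = order[1:]
        # Intersection-over-union between the kept box and the remaining boxes.
        lt = torch.max(boxes[i, :2], boxes[rest, :2])
        rb = torch.min(boxes[i, 2:], boxes[rest, 2:])
        inter = (rb - lt).clamp(min=0).prod(dim=1)
        area_i = (boxes[i, 2:] - boxes[i, :2]).prod()
        area_r = (boxes[rest, 2:] - boxes[rest, :2]).prod(dim=1)
        iou = inter / (area_i + area_r - inter + 1e-9)
        # DIoU term: normalized squared distance between box centers.
        ci = (boxes[i, :2] + boxes[i, 2:]) / 2
        cr = (boxes[rest, :2] + boxes[rest, 2:]) / 2
        center_dist = ((ci - cr) ** 2).sum(dim=1)
        enclose_lt = torch.min(boxes[i, :2], boxes[rest, :2])
        enclose_rb = torch.max(boxes[i, 2:], boxes[rest, 2:])
        diag = ((enclose_rb - enclose_lt) ** 2).sum(dim=1) + 1e-9
        diou = iou - center_dist / diag
        order = rest[diou <= iou_thresh]   # suppress only boxes that overlap AND are close
    return keep
```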
In the present invention, step 5 is the model deployment of the rare anti-nuclear antibody karyotype attention neural network, comprising 5.1) model detection and 5.2) clinical visualization deployment:
The model detection of 5.1 includes:
5.1.1: Read the image under test and perform data preprocessing;
5.1.2: Normalize the data of the target image to be detected;
5.1.3: Feed the normalized data into the model trained in step 4 above;
5.1.4: Run model detection on the image under test to obtain the prediction results and prediction probabilities.
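A hedged sketch of this inference path (5.1.1–5.1.4) is given below; the image size, the normalization scheme, and the way the model is invoked are assumptions made for illustration:

```python
import cv2
import numpy as np
import torch

def detect_rare_ana(image_path: str, model: torch.nn.Module, img_size: int = 640):
    """Sketch of steps 5.1.1-5.1.4: read, normalize, run the model, return predictions."""
    # 5.1.1 read the image under test and preprocess it
    img = cv2.imread(image_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (img_size, img_size))
    # 5.1.2 normalize to [0, 1] and lay out as a (1, C, H, W) tensor
    x = torch.from_numpy(img.astype(np.float32) / 255.0).permute(2, 0, 1).unsqueeze(0)
    # 5.1.3 feed the normalized data into the trained model
    model.eval()
    with torch.no_grad():
        pred = model(x)
    # 5.1.4 the raw output would then be decoded (boxes, classes, confidences)
    # and filtered with DIoU-NMS as described above.
    return pred
```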
The clinical visualization deployment of 5.2 is specifically as follows:
The trained model is compiled with C++ on the Windows platform, the corresponding deep learning environment is configured, visualization of the model's prediction results is achieved by building with CMake, and the model is then deployed on the CCD camera of a fluorescence microscope; the model can read clinical images under test and run detection until the prediction results and probabilities are obtained.
In view of the missed detections caused clinically by laboratory personnel's limited familiarity with rare ANA karyotypes, the present invention is the first to propose a method that introduces an attention neural network on top of the YOLO architecture for the purpose of detecting rare ANA karyotypes.
In the method of the present invention, because the cellular features of rare ANA karyotypes are not obvious and interpretation requires finding extremely subtle and easily overlooked key feature points, the feature extraction of the introduced attention neural network model can combine low-level (spatial) and high-level (channel) network features.
Low-level (spatial) information refers to the low-level features obtained after multiple downsampling steps, which provide contextual semantic information about the target feature region within the whole image. This helps locate the spatial position of suspected rare ANA karyotypes across the image.
High-level (channel) information refers to the high-resolution information passed directly, via a concatenate operation, from the encoder to the decoder at the same level. The vectors output by the low-level and high-level network features are fed through ReLU and sigmoid activations, and the resulting attention weights support the capture of finer features.
Compared with existing object detection technology, the present invention has the following technical effects:
(1) The present invention introduces an attention mechanism on top of the YOLO architecture; the jointly constructed neural network can efficiently extract the extremely subtle and easily overlooked key feature points of rare ANA karyotype cells, exploiting the advantage of richer feature extraction and adapting better to the complex, variable situations caused clinically by individual differences among patients;
(2) The loss function of the YOLO framework in the present invention uses SIoU, which fully accounts for the spatial direction between the ground-truth bounding box and the predicted detection box, making the target regression box more stable and reducing missed and false detections. The NMS used to filter prediction boxes is changed to DIoU-NMS, to solve the regression convergence problem that arises when, because cells in the image are densely stacked, the predicted box lies at different positions inside the ground-truth box;
(3) While reducing detection speed as little as possible, the present invention effectively improves the detection accuracy for rare ANA karyotypes and lowers the probability of false and missed detections, making a strong contribution against the false and missed detections caused clinically by laboratory personnel's limited familiarity with rare ANA karyotypes.
The beneficial effects of the present invention include: the invention belongs to the field of medical artificial intelligence and biomedicine; it can effectively reduce the workload of laboratory physicians, improve work efficiency, and reduce false and missed detections in clinical practice. In the present invention, no matching against a large number of templates is required, processing is fast, and accuracy is high, meeting the requirements of laboratory physicians in actual work.
Description of the Drawings
Figures 1A and 1B show image descriptions and the clinical significance of rare ANA karyotypes: Figure 1A covers AC-26/25/14/27 and Figure 1B covers AC-13/6/24/22.
Figure 2 shows the network structure of the model of the present invention and its output.
Figure 3 shows model training in the present invention: Figure 3A is the loss curve and Figure 3B is the model precision curve.
Figure 4 is the flow chart of the method of the present invention.
Figure 5 is an example of an image under test and the predicted output of the model of the present invention (left: image under test; right: predicted output).
Detailed Description of the Embodiments
The process and experimental methods of implementing the present invention are described in further detail below with reference to the following specific embodiments and the accompanying drawings. Figure 4 is the flow chart of the method of the present invention, which can be divided into the following steps:
1. Data preprocessing: the ANA cell slide images to be detected are preprocessed to obtain normalized images.
2. Data augmentation: random translation and rotation transformations are applied to the images.
3. Model loading and prediction: after the model is loaded successfully, the model's prediction function is called to make predictions on the image under test.
4. Result output: the output of the network is a tensor of dimension K×K×[B×(5+N)], where K is the number of grid cells into which each image under test is divided, B is the number of detection boxes per grid cell, the 5 values of each detection box are its coordinate information (x, y, h, w) and its confidence, and N is the number of target karyotype classes. In the coordinate information (x, y, h, w) of a detection box, x is the x-coordinate of the box, y its y-coordinate, h its height, and w its width. The meaning of the expression K×K×[B×(5+N)] is that B detection boxes are used to lock onto the specific rare karyotype feature whose center falls within a given grid cell, and each detection box carries a score indicating whether it contains an object and with what confidence, where the confidence is the predicted probability that the box contains a rare karyotype feature and the score is the degree of certainty for the rare karyotype feature in that grid cell. To judge the detection accuracy of the model, the present invention uses SIoU (SCYLLA Intersection over Union) to evaluate model performance; the calculation of SIoU is given in formula (4).
Here Δ denotes the distance cost and the angle cost, namely the distance between the center point of the predicted bounding box and the center point of the ground-truth box together with the angle formed between the line connecting the two center points and the vertical through them; Ω denotes the shape cost, namely how similar in shape the predicted bounding box is to the ground-truth box; and IoU is the IoU value between the predicted bounding box and the ground-truth box. The degree of certainty is expressed as SIoU_Predicted/True, representing the IoU value between the predicted box and the ground-truth box.
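Formula (4) itself is not reproduced in this text. A reconstruction consistent with the description above and with the commonly used SIoU definition — offered as an assumption, not the exact formula of the invention — is:

```latex
\mathrm{SIoU} \;=\; \mathrm{IoU} \;-\; \frac{\Delta + \Omega}{2},
\qquad
L_{\mathrm{SIoU}} \;=\; 1 \;-\; \mathrm{SIoU}
```

where Δ combines the distance and angle costs, Ω is the shape cost, and IoU is the overlap between the predicted and ground-truth boxes, as defined above.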
Figure 2 is a schematic diagram of the detection results obtained by applying the method of the present invention to rare ANA karyotype images examined clinically, and Figure 5 is an example rendering of a prediction made for a discovered rare ANA karyotype. It can be seen that the method of the present invention can effectively detect rare ANA karyotypes, reduce the rates of missed detection and misdiagnosis by laboratory physicians, and improve clinical work efficiency.
Claims (6)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310612311.2A (published as CN116893162A) | 2023-05-28 | 2023-05-28 | Rare anti-nuclear antibody karyotype detection method based on YOLO and attention neural network |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310612311.2A (published as CN116893162A) | 2023-05-28 | 2023-05-28 | Rare anti-nuclear antibody karyotype detection method based on YOLO and attention neural network |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116893162A | 2023-10-17 |
Family
ID=88312690
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310612311.2A (Pending) | Rare anti-nuclear antibody karyotype detection method based on YOLO and attention neural network | 2023-05-28 | 2023-05-28 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN116893162A (en) |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118261887A | 2024-04-10 | 2024-06-28 | 重庆市妇幼保健院(重庆市妇产科医院、重庆市遗传与生殖研究所) | Improved YOLOv prokaryotic and blastomere detection method |
| CN119587893A | 2024-12-31 | 2025-03-11 | 上海交通大学医学院附属新华医院 | Repetitive transcranial magnetic stimulation intervention system |
Similar Documents

| Publication | Title |
|---|---|
| CN109886273B | A CMR Image Segmentation and Classification System |
| CN111178197B | Instance Segmentation Method of Cohesive Pigs in Group Breeding Based on Mask R-CNN and Soft-NMS Fusion |
| CN109300121B | A kind of construction method of cardiovascular disease diagnosis model, system and the diagnostic device |
| CN110298291B | Mask-RCNN-based cow face and cow face key point detection method |
| Li et al. | cC-GAN: A robust transfer-learning framework for HEp-2 specimen image segmentation |
| CN110909756A | Convolutional neural network model training method and device for medical image recognition |
| CN108447062A | A kind of dividing method of the unconventional cell of pathological section based on multiple dimensioned mixing parted pattern |
| CN111489324A | Cervical cancer lesion diagnosis method fusing multi-modal prior pathology depth features |
| CN116893162A | Rare anti-nuclear antibody karyotype detection method based on YOLO and attention neural network |
| CN108090906A | A kind of uterine neck image processing method and device based on region nomination |
| US20240232627A1 | Systems and Methods to Train A Cell Object Detector |
| CN115909006B | Mammary tissue image classification method and system based on convolution transducer |
| Gehlot et al. | Ednfc-net: Convolutional neural network with nested feature concatenation for nuclei-instance segmentation |
| CN112348059A | Deep learning-based method and system for classifying multiple dyeing pathological images |
| CN109886346A | A Cardiac MRI Image Classification System |
| CN110263670A | A kind of face Local Features Analysis system |
| CN116503858B | Immunofluorescence image classification method and system based on generation model |
| CN117011614A | Wild ginseng reed body detection and quality grade classification method and system based on deep learning |
| Yancey | Deep feature fusion for mitosis counting |
| Gayathri et al. | Optical character recognition in banking sectors using convolutional neural network |
| CN105528791B | A quality evaluation device and evaluation method for touch-screen hand-painted images |
| CN118116576A | Intelligent case analysis method and system based on deep learning |
| CN118038152A | Infrared small target detection and classification method based on multi-scale feature fusion |
| CN118279235A | Tongue segmentation and traditional Chinese medicine diagnosis method and system based on deep learning |
| CN116721294A | An image classification method based on hierarchical fine-grained classification |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |