
CN107064019B - Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections - Google Patents

Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections

Info

Publication number
CN107064019B
CN107064019B (application CN201710353158.0A)
Authority
CN
China
Prior art keywords
layer
image
parameter
training
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710353158.0A
Other languages
Chinese (zh)
Other versions
CN107064019A (en)
Inventor
张镇西
王森豪
王晶
陈韵竹
张璐薇
姚翠萍
王斯佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Triplex International Biosciences China Co ltd
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201710353158.0A priority Critical patent/CN107064019B/en
Publication of CN107064019A publication Critical patent/CN107064019A/en
Application granted granted Critical
Publication of CN107064019B publication Critical patent/CN107064019B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/01Arrangements or apparatus for facilitating the optical investigation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/01Arrangements or apparatus for facilitating the optical investigation
    • G01N2021/0181Memory or computer-assisted visual determination

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A device and method for acquiring and segmenting hyperspectral images of unstained pathological sections. A section sample platform is supported at the middle of a stand, and a computer automatically acquires and processes hyperspectral images of unstained pathological sections to obtain lesion-region segmentation results. Based on the spectral differences caused by tissue lesions, the invention uses a personal computer to synchronously control the relevant modules, acquires spectral sequence images of unstained pathological tissue sections, and preprocesses and stacks them into the corresponding three-dimensional hyperspectral data. On this basis, combined with currently popular neural-network classification ideas, a spectral classification algorithm is developed to identify and segment lesion regions, increasing the speed and efficiency of pathological section examination. The method avoids the manual errors that may be introduced during staining, shortens section preparation time, and, by using automatic machine discrimination, reduces the subjectivity of manual judgment, providing useful assistance to pathologists examining pathological sections.

Description

Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections

Technical Field

The invention belongs to the field of detection devices, and in particular relates to a device and method for rapid acquisition and segmentation of unstained pathological sections based on hyperspectral data.

Background Art

Pathological examination is widely used in clinical work and scientific research. Clinically, it mainly comprises autopsy and surgical pathological examination. The purposes of surgical pathological examination are, first, to establish a diagnosis and verify the preoperative diagnosis, improving the level of clinical diagnosis; and second, once the diagnosis is confirmed, to guide the subsequent treatment plan and estimate the prognosis, improving the level of clinical treatment. Clinicopathological analysis also yields a large amount of highly valuable research data.

A pathological section is one kind of pathological specimen. To prepare one, diseased tissue or organ samples are fixed and hardened with various chemicals and embedding methods, cut into thin slices on a microtome, mounted on glass slides, and stained in various colors for examination under a microscope, so that pathological changes can be observed and a pathological diagnosis made to support clinical diagnosis and treatment. Routine section preparation is the foundation of pathological diagnosis, and section quality is an important guarantee that guides physicians to an accurate diagnosis. Hematoxylin and eosin staining (HE staining) is the most widely used staining method in biological and medical cytology and histology. In practice, because some pathology technicians lack rigorous, systematic training and sufficient experience, improper handling often leads to sections falling off, uneven staining, blurring, wrinkling, and poor nucleus-cytoplasm contrast, all of which impair diagnosis. Simplifying the section-preparation process would therefore be highly valuable for diagnosis assisted by pathological sections.

The pathological process of a tissue is usually accompanied by structural changes at the cellular and subcellular levels, which shift the tissue's spectrum. By organically combining traditional two-dimensional imaging with spectroscopy, the two-dimensional spatial information and one-dimensional spectral information of an observed target can be provided simultaneously, and analysis of the spectral data reveals the tissue's morphological structure and chemical composition. Applying hyperspectral imaging to the assisted examination of pathological sections can effectively lower the experience required of pathologists, reduce the subjectivity of pathological examination, and thereby improve its efficiency.

At present, Siddiqi Anwer M et al. have used a microscopic spectral imager to acquire hyperspectral data of stained cervical cancer sections and trained a least-squares support vector machine to recognize the sections automatically, achieving sensitivity and specificity above 90%. In China, the Qingdao Research Institute of Photoelectric Engineering Technology has used a similar method to recognize liver cancer sections, also with good results. Both methods, however, require the cancer sections to be stained, still place high experimental and experience demands on pathologists, and are time-consuming.

Summary of the Invention

To overcome the above deficiencies of the prior art, the object of the present invention is to provide a device and method for acquiring and segmenting hyperspectral images of unstained pathological sections, capable of automatic hyperspectral image acquisition of pathological sections and of training an artificial neural network on the three-dimensional hyperspectral data of sample sections to recognize images of unstained liver cancer sections. This reduces the complexity of traditional stained-section examination while lowering the subjectivity of the discrimination process, and can serve as a useful reference for a pathologist's diagnosis.

To achieve the above object, the technical scheme of the present invention is as follows:

A device for acquiring and segmenting hyperspectral images of unstained pathological sections comprises an alloy steel base plate 104 on which a xenon lamp light source 105 is placed. A fixed support 101 is mounted vertically on the alloy steel base plate 104 and carries, from top to bottom, a hyperspectral image acquisition module 102 and a sample platform 103. The hyperspectral image acquisition module 102 and the sample platform 103 are coaxial with the xenon lamp light source 105, and the hyperspectral image acquisition module 102 is connected to an external computer 106;

The hyperspectral image acquisition module 102 comprises a CCD camera 110, connected to the computer 106 through a USB port on the side of the CCD camera 110. The CCD camera 110 is connected coaxially, through a relay lens 112 and a relay lens adjustment ring 113, to a liquid crystal tunable filter 114. The liquid crystal tunable filter 114 is connected to the computer 106 through a side USB port 115, and its lower part is connected to an objective lens 118 through a C-mount focusing ring 116 and an aperture 117.

A method for acquiring and segmenting hyperspectral images of unstained pathological sections comprises the following steps:

Step 1, device setup: place the unstained section on the sample platform 103 and adjust the illumination range of the xenon lamp 105 so that the section is evenly lit; set the parameters of the CCD camera 110 and the liquid crystal tunable filter 114 through the computer 106 so that the image gray levels are clear; and adjust the relay lens adjustment ring 113, the C-mount focusing ring 116, and the aperture 117 so that the section coincides with the imaging focal plane;

Step 2, parameter setting and acquisition of training sample data: on the computer 106, set the image acquisition parameters of the hyperspectral image acquisition module 102, including acquisition mode, start and end wavelengths, wavelength resolution, and exposure time, and start acquiring the two-dimensional image data of the unstained section sample at each spectral band. After acquisition, keeping the same acquisition parameters, acquire spectral images of an unilluminated all-black background and of a completely blank field of view under the same illumination as background references;

Step 3, data preprocessing: preprocess each spectral image with the black/white reference information obtained in Step 2, then stack the images in band order into a three-dimensional hyperspectral matrix whose z-axis corresponds to the spectral information of each spatial point; process the spectral curve of each point with the wide-interval derivative method to highlight its features;

Step 4, training of the discriminative neural network: according to the stained-section results, select the wide-interval spectral curves of the lesion/non-lesion regions of the training sample sections obtained in Step 3 as training data, and train the spectral-data discrimination network;

Step 5, acquisition and region recognition of unstained-section hyperspectral data: acquire the hyperspectral data of an unstained section as in Step 2, apply the same preprocessing as in Step 3, and feed the result into the discriminative neural network trained in Step 4 to obtain a lesion/non-lesion decision for every point; combining the per-point decisions yields the recognition result for the unstained section.

The specific scheme of Step 3 is as follows:

The spectral images are preprocessed by pixel-wise correction. Under the same conditions as the acquisition of the section hyperspectral images, a set of completely dark hyperspectral images acquired without illumination and a set of hyperspectral images of a completely blank field of view are used: the dark image is subtracted from the hyperspectral image at the corresponding wavelength, and the difference is divided by the difference between the blank image and the dark image, as shown in Formula 1:

R = (I_im − I_bl) / (I_wh − I_bl)  (Formula 1)

where R is the transmittance spectral value obtained by the preprocessing, I_im is the gray value of the original hyperspectral image at each wavelength, I_bl is the gray value of the image at each wavelength under complete darkness, and I_wh is the gray value of the image at each wavelength of the illuminated blank field of view. A wide-interval derivative method is then applied, i.e. the derivative is taken over an enlarged independent-variable interval, as shown in Formula 2:

R′(λ) = [R(λ + Δλ) − R(λ)] / Δλ  (Formula 2)

where λ is the wavelength and Δλ is the wavelength interval;
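The preprocessing above can be sketched as follows. This is a minimal NumPy illustration, not the patent's actual software; the function names and array layout are assumptions.

```python
import numpy as np

def calibrate(raw, dark, white):
    """Black/white pixel-wise correction (Formula 1):
    R = (I_im - I_bl) / (I_wh - I_bl)."""
    # Guard against zero denominators at dead pixels.
    return (raw - dark) / np.maximum(white - dark, 1e-12)

def wide_interval_derivative(spectra, wavelengths, step):
    """Wide-interval derivative (Formula 2): difference quotients taken
    over an enlarged wavelength interval of `step` bands.

    spectra:     (n_pixels, n_bands) calibrated transmittance values
    wavelengths: (n_bands,) band-center wavelengths
    """
    num = spectra[:, step:] - spectra[:, :-step]
    den = wavelengths[step:] - wavelengths[:-step]
    return num / den
```

For a perfectly linear spectrum the wide-interval derivative is constant regardless of `step`; enlarging the interval mainly suppresses band-to-band noise while preserving broad spectral trends.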

The specific process of Step 4 is as follows:

(1) Building the discriminative neural network model

The input represents the spectral information of one pixel; to its right, convolutional and max-pooling layers compute a series of feature maps, and classification of the feature maps yields the output layer. The whole network comprises the input layer, convolutional layer C1, max-pooling layer S1, convolutional layer C2, max-pooling layer S2, fully connected layer F, and the output layer. The input layer sample size is (n1, 1), where n1 is the number of bands. The first hidden convolutional layer C1 filters the n1×1 input data with 24 kernels of size k1×1 and contains 24×n2×1 nodes, where n2 = n1 − k1 + 1; there are 24×(k1+1) trainable parameters between the input layer and C1. The max-pooling layer S1 is the second hidden layer, with kernel size (k2, 1); it contains 24×n3×1 nodes, where n3 = n2/k2, and has no parameters. The convolutional layer C2 contains 24×n4×1 nodes with kernel size (k3, 1), where n4 = n3 − k3 + 1; there are 24×(k3+1) trainable parameters between S1 and C2. The max-pooling layer S2 contains 24×n5×1 nodes with kernel size (k4, 1), where n5 = n4/k4, and has no parameters. The fully connected layer F contains n6 nodes, with (24×n6+1) trainable parameters between it and S2. The final output layer contains n7 nodes; between it and F there are 24×n6×1 nodes and 24×n6×1×n7 trainable parameters. A convolutional neural network classifier with the above parameters is built to discriminate hyperspectral pixels, where n1 is the number of spectral channels, n7 is the number of output classes, n2, n3, n4, and n5 are the dimensions of the feature maps, and n6 is the dimension of the fully connected layer.
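The size relations above can be checked mechanically. The helper below is an illustrative sketch; the concrete values n1 = 32, k1 = 5, k2 = 2, k3 = 5, k4 = 2 in the example are assumptions for demonstration, since the patent leaves them symbolic.

```python
def layer_sizes(n1, k1, k2, k3, k4):
    """Feature-map lengths of the 1-D CNN: C1 -> S1 -> C2 -> S2.

    n1: number of spectral bands; k1..k4: kernel sizes of C1, S1, C2, S2.
    """
    n2 = n1 - k1 + 1                  # C1: valid convolution, n2 = n1 - k1 + 1
    assert n2 % k2 == 0, "S1 pool size must divide n2"
    n3 = n2 // k2                     # S1: max pooling, n3 = n2 / k2
    n4 = n3 - k3 + 1                  # C2: valid convolution, n4 = n3 - k3 + 1
    assert n4 % k4 == 0, "S2 pool size must divide n4"
    n5 = n4 // k4                     # S2: max pooling, n5 = n4 / k4
    return n2, n3, n4, n5

# e.g. 32 bands with kernels 5, 2, 5, 2 gives n2=28, n3=14, n4=10, n5=5
```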

(2) Forward propagation

The deep convolutional neural network used has a 5-layer structure; counting the input and output layers it can be regarded as 7 layers, denoted (L+1) layers with L = 6. The input layer contains n1 input units, the output layer contains n7 output units, and the hidden layers are C1, S1, C2, S2, and F. Let x_i be the input of layer i, i.e. the output of layer (i−1); then x_{i+1} is computed as:

x_{i+1} = f_i(u_i)  (Formula 3)

where

u_i = W_i x_i + b_i

in which W_i is the weight matrix of layer i acting on its input data, b_i is the bias vector of layer i, and f_i(·) is the activation function of layer i. The hyperbolic tangent tanh(u) is chosen as the activation function of the convolutional layers C1 and C2 and the fully connected layer F, and the maximum function max(u) as the activation function of the max-pooling layers S1 and S2. The classifier performs multi-class classification of the data with n7 output classes; the n7-class (softmax) regression model is defined as:

y_j = exp(u_{L,j}) / Σ_{k=1}^{n7} exp(u_{L,k}),  j = 1, …, n7

The output vector y = x_{L+1} of the output layer gives the probabilities of all classes at the current iteration.
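A minimal numerical sketch of Formula 3 and the softmax output, in NumPy. Only single fully connected steps are shown, not the full convolutional stack, and the function names are illustrative.

```python
import numpy as np

def dense_forward(x, W, b, activation=np.tanh):
    """One layer of Formula 3: x_{i+1} = f_i(u_i) with u_i = W x + b."""
    return activation(W @ x + b)

def softmax(u):
    """Class probabilities of the output layer: y_j = exp(u_j) / sum_k exp(u_k)."""
    e = np.exp(u - u.max())  # subtracting the max avoids overflow
    return e / e.sum()
```

Subtracting the maximum logit before exponentiation leaves the probabilities unchanged (the factor cancels in the ratio) but keeps `exp` from overflowing for large inputs.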

(3) Backpropagation

In the backpropagation stage, the trained parameters are adjusted and updated by gradient descent; each parameter is determined by minimizing the cost function and computing the cost function's partial derivatives. The loss function used is defined as:

J(θ) = −(1/m) Σ_{i=1}^{m} Σ_{j=1}^{n7} 1{j = Y^(i)} log y_j^(i)

where m is the number of training samples and Y is the output. y_j^(i) is the j-th component of the actual output y^(i) of the i-th training sample, a vector of size n7. For the expected output Y^(i) of the i-th sample, the probability of the labeled class is 1 and that of any other class is 0. The indicator 1{j = Y^(i)} equals 1 if j is the expected class label of the i-th training sample and 0 otherwise. A negative sign is placed in front of J(θ) to make the computation more convenient.

Taking the partial derivative of the loss function with respect to u_i gives:

δ_i = ∂J/∂u_i = (W_{i+1}^T δ_{i+1}) ∘ f′(u_i)

where ∘ denotes element-wise multiplication. For the tanh layers, f′(u_i) can be written simply as

f′(u_i) = 1 − tanh²(u_i)

Therefore, at each iteration the update

θ_i ← θ_i − α · ∂J/∂θ_i

is performed on the trainable parameters, where α is the learning factor (α = 0.01). Since θ_i comprises W_i and b_i, the corresponding gradients are

∂J/∂W_i = δ_i x_i^T,  ∂J/∂b_i = δ_i

where δ_i is the partial derivative of the loss function with respect to u_i.

After many training iterations, the cost function becomes smaller and smaller, which means the actual output draws closer and closer to the expected output. When the difference between them is sufficiently small, the iteration stops; the trained deep convolutional neural network model can then be used for hyperspectral image classification.
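The gradient-descent update can be illustrated on the output (softmax) layer alone. The sketch below is a simplification under stated assumptions: it trains only a single softmax layer on toy data, standing in for the patent's full network, and the data and function names are invented for the demonstration.

```python
import numpy as np

def cross_entropy(X, Y, W, b):
    """J(theta) for a single softmax layer over labeled samples (X, Y)."""
    U = X @ W.T + b
    U -= U.max(axis=1, keepdims=True)        # numerical stability
    P = np.exp(U)
    P /= P.sum(axis=1, keepdims=True)
    return -np.log(P[np.arange(len(Y)), Y]).mean()

def train_softmax(X, Y, n_classes, alpha=0.01, iters=500):
    """Gradient descent theta <- theta - alpha * dJ/dtheta (alpha = 0.01 as in the text)."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(n_classes, X.shape[1]))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[Y]
    for _ in range(iters):
        U = X @ W.T + b
        U -= U.max(axis=1, keepdims=True)
        P = np.exp(U)
        P /= P.sum(axis=1, keepdims=True)
        G = (P - onehot) / len(X)            # dJ/dU for softmax + cross-entropy
        W -= alpha * (G.T @ X)               # dJ/dW = G^T X
        b -= alpha * G.sum(axis=0)           # dJ/db = column sums of G
    return W, b
```

On separable two-class data, repeated updates drive the cross-entropy below its initial value, mirroring the stopping criterion described above (iterate until the actual and expected outputs are close enough).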

Compared with the prior art, the present invention, based on the spectral differences caused by tissue lesions, uses a personal computer to synchronously control the relevant modules, acquires spectral sequence images of unstained pathological tissue sections, and preprocesses and stacks them into the corresponding three-dimensional hyperspectral data; on this basis, combined with currently popular neural-network classification ideas, a spectral classification algorithm is developed to identify and segment lesion regions, increasing the speed and efficiency of pathological section recognition. With this device, pathologists no longer need the traditional HE staining process when preparing pathological sections, which avoids the manual errors staining may introduce and shortens preparation time; automatic discrimination by a machine algorithm reduces the subjectivity of manual judgment, providing useful assistance to pathologists examining pathological sections.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the overall structure of the device of the present invention.

Figure 2 is a schematic diagram of the structure of the hyperspectral image acquisition module of the present invention.

Figure 3 is a photograph of the device of the present invention.

Figure 4 is a screenshot of the acquisition software accompanying the present invention.

Figure 5 is a flow chart of the section spectral-data recognition algorithm of the present invention.

Figure 6 is a schematic diagram of the structure of the spectral-data discrimination neural network used in the present invention.

Figure 7a shows multi-point raw spectra extracted from a tissue section by the present invention; Figure 7b shows the spectral curves after wide-interval derivative processing.

Figure 8 compares the predicted image of an unstained section by the model of the present invention with the segmentation result of a stained section of the same site.

Figure 9 compares the predicted image of an unstained section by the model of the present invention with the recognition result of a support vector machine algorithm.

Detailed Description of the Embodiments

The present invention is further described below in conjunction with the accompanying drawings.

Referring to Figures 1 to 3, the device for acquiring and segmenting hyperspectral images of unstained pathological sections comprises an alloy steel base plate 104 on which a xenon lamp light source 105 is placed. A fixed support 101 is mounted vertically on the alloy steel base plate 104 and carries, from top to bottom, a hyperspectral image acquisition module 102 and a sample platform 103. The hyperspectral image acquisition module 102 and the sample platform 103 are coaxial with the xenon lamp light source 105, and the hyperspectral image acquisition module 102 is connected to an external computer 106;

The hyperspectral image acquisition module 102 comprises a CCD camera 110, connected to the computer 106 through a USB port on the side of the CCD camera 110. The CCD camera 110 is connected coaxially, through a relay lens 112 and a relay lens adjustment ring 113, to a liquid crystal tunable filter 114. The liquid crystal tunable filter 114 is connected to the computer 106 through a side USB port 115 for synchronous control with the CCD camera 110, and its lower part is connected to an objective lens 118 through a C-mount focusing ring 116 and an aperture 117. Both the relay lens 112 and the objective lens 118 have a focusing ring for adjusting the focal plane and an aperture ring for controlling the amount of incoming light; by adjusting the relay lens adjustment ring 113, the C-mount focusing ring 116, and the aperture 117, accurate focusing and proper exposure of the section sample on the sample platform 103 can be achieved. A xenon lamp optical-path lens group, controlled by the xenon lamp source drive circuit, is arranged at the side of the sample stage, and the liquid crystal tunable filter and the scientific camera are connected to the computer.

A method for acquiring and segmenting hyperspectral images of unstained pathological sections comprises the following steps:

Step 1, device setup: place the unstained section on the sample platform 103 and adjust the illumination range of the xenon lamp 105 so that the section is evenly lit; set the parameters of the CCD camera 110 and the liquid crystal tunable filter 114 through the computer 106 so that the image gray levels are clear; and adjust the relay lens adjustment ring 113, the C-mount focusing ring 116, and the aperture 117 so that the section coincides with the imaging focal plane;

Step 2, referring to Figure 4, parameter setting and acquisition of training sample data: on the computer 106, set the image acquisition parameters of the hyperspectral image acquisition module 102, including acquisition mode, start and end wavelengths, wavelength resolution, and exposure time, and start acquiring the two-dimensional image data of the unstained section sample at each spectral band. After acquisition, keeping the same acquisition parameters, acquire spectral images of an unilluminated all-black background and of a completely blank field of view under the same illumination as background references;

Step 3, data preprocessing: preprocess each spectral image with the black/white reference information obtained in Step 2, then stack the images in band order into a three-dimensional hyperspectral matrix whose z-axis corresponds to the spectral information of each spatial point; process the spectral curve of each point with the wide-interval derivative method to highlight its features;

The specific embodiment of Step 3 is as follows:

光谱图像的预处理采用像素点较正方法,在同采集切片高光谱图像相同的条件下,使用一组无光照完全黑暗的高光谱数据和一组完全空白视野的高光谱图像,用对应波长的高光谱图像减去黑暗图像的差比上空白图像与黑暗图像的差值,预处理公式如公式1所示。The preprocessing of the spectral image adopts the method of pixel point correction. Under the same conditions as the hyperspectral image acquisition of slices, a set of hyperspectral data without illumination and a set of completely dark hyperspectral images and a set of hyperspectral images with a completely blank field of view are used. The difference between the hyperspectral image minus the dark image is compared to the difference between the blank image and the dark image, and the preprocessing formula is shown in formula 1.

其中,R是经过预处理转化的透射率光谱值,Iim是原始高光谱图像各波长下的灰度值,Ibl是完全黑暗条件下各波长图像灰度值,Iwh是有光照空白视野下各波长图像的灰度值,随后采用了一种宽间隔导数的方法,即增大自变量间隔求导数,如公式2所示:Among them, R is the transmittance spectral value converted by preprocessing, I im is the gray value of the original hyperspectral image at each wavelength, I bl is the gray value of each wavelength image in complete darkness, and I wh is the illuminated blank field of view The gray value of each wavelength image below, and then a method of wide-interval derivative is adopted, that is, the derivative is obtained by increasing the interval of the independent variable, as shown in formula 2:

where λ is the wavelength and Δλ is the wavelength interval.
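Under the stated definitions, formulas 1 and 2 can be sketched in NumPy as follows; the array shapes and the dead-pixel guard are assumptions for illustration, not part of the patent text:

```python
import numpy as np

def calibrate_transmittance(I_im, I_bl, I_wh):
    """Formula 1: per-pixel black/white reference correction,
    R = (I_im - I_bl) / (I_wh - I_bl), applied to every band."""
    I_im = I_im.astype(float)
    I_bl = I_bl.astype(float)
    denom = I_wh.astype(float) - I_bl
    denom[denom == 0] = np.finfo(float).eps   # guard dead/flat pixels
    return (I_im - I_bl) / denom

def wide_interval_derivative(R, wavelengths, interval_nm):
    """Formula 2: difference quotient over a wide wavelength gap.
    R has shape (bands, H, W); wavelengths is 1-D in nm."""
    step = wavelengths[1] - wavelengths[0]
    k = int(round(interval_nm / step))        # gap expressed in band indices
    dlam = (wavelengths[k:] - wavelengths[:-k])[:, None, None]
    return (R[k:] - R[:-k]) / dlam
```

With the acquisition settings quoted later (400–718 nm at 3 nm, giving 107 bands) and the 177 nm interval used in the validation, the derivative cube has 107 − 59 = 48 bands.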

Step 4: Training of the discriminative neural network. Based on the stained-section results, the wide-interval spectral curves of the lesion/non-lesion regions of the training sample sections obtained in Step 3 are selected as training data to train the spectral-data discrimination network. The network parameter training algorithm is shown in Fig. 5; the specific process is as follows:

(1) Establish the discriminative neural network model, shown in Fig. 6.

The input represents the spectral information of one pixel; to its right, convolutional and max-pooling layers compute a series of feature maps, and classification of the feature maps yields the output layer. The whole network consists of the input layer, convolutional layer C1, max-pooling layer S1, convolutional layer C2, max-pooling layer S2, fully connected layer F, and the output layer. The input sample size is (n1, 1), where n1 is the number of bands. The first hidden convolutional layer C1 filters the n1×1 input data with 24 kernels of size k1×1 and contains 24×n2×1 nodes, where n2 = n1 − k1 + 1; there are 24×(k1 + 1) trainable parameters between the input layer and C1. The max-pooling layer S1 is the second hidden layer; its kernel size is (k2, 1), it contains 24×n3×1 nodes, where n3 = n2/k2, and it has no parameters. Convolutional layer C2 contains 24×n4×1 nodes with kernel size (k3, 1), where n4 = n3 − k3 + 1; there are 24×(k3 + 1) trainable parameters between S1 and C2. The max-pooling layer S2 contains 24×n5×1 nodes with kernel size (k4, 1), where n5 = n4/k4, and has no parameters. The fully connected layer F contains n6 nodes, with (24×n6 + 1) trainable parameters between it and S2. The final output layer contains n7 nodes; between it and F there are 24×n6×1 nodes and 24×n6×1×n7 trainable parameters. A convolutional neural network classifier with these parameters is built to discriminate hyperspectral pixels, where n1 is the number of spectral channels, n7 is the number of output classes, n2, n3, n4, n5 are the dimensions of the feature maps, and n6 is the dimension of the fully connected layer.
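The dimension arithmetic above can be sketched as follows; the specific kernel sizes in the usage example are illustrative choices, not values given in the text:

```python
def cnn1d_layer_sizes(n1, k1, k2, k3, k4):
    """Feature-map lengths of the 1-D CNN described above:
    valid convolutions shrink the signal (n - k + 1), while
    non-overlapping max pooling divides its length by the pool size."""
    n2 = n1 - k1 + 1              # conv C1, kernel k1 x 1
    assert n2 % k2 == 0, "k2 must divide n2"
    n3 = n2 // k2                 # max-pool S1
    n4 = n3 - k3 + 1              # conv C2, kernel k3 x 1
    assert n4 % k4 == 0, "k4 must divide n4"
    n5 = n4 // k4                 # max-pool S2
    return n2, n3, n4, n5
```

For example, with n1 = 107 spectral channels and the illustrative choices k1 = 12, k2 = 2, k3 = 9, k4 = 4, the feature-map lengths are n2 = 96, n3 = 48, n4 = 40, n5 = 10.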

(2) Forward propagation

The deep convolutional neural network used has a 5-layer structure; counting the input and output layers it can also be regarded as 7 layers, expressed as (L + 1) layers with L = 6. The input layer contains n1 input units, the output layer contains n7 output units, and the hidden layers are C1, S1, C2, S2, and F. Letting xi be the input of layer i, i.e. the output of layer (i − 1), xi+1 is computed as:

xi+1 = fi(ui)   (formula 3)

where

ui = Wi xi + bi

Wi is the weight matrix of layer i acting on the input data, bi is the bias vector of layer i, and fi(·) is the activation function of layer i. The hyperbolic tangent tanh(u) is chosen as the activation function of convolutional layers C1 and C2 and of the fully connected layer F, and the maximum function max(u) as the activation function of the max-pooling layers S1 and S2. A classifier is used to perform multi-class classification of the data with n7 output classes; the n7-class softmax regression model is defined as follows:

P(y = j | x) = exp(uj) / Σl exp(ul),  j = 1, …, n7

The output vector y = xL+1 of the output layer gives the probabilities of all classes at the current iteration.
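The layer recursion of formula 3 with a softmax output can be sketched as follows; collapsing the convolution/pooling stages into two dense layers is a simplification for brevity, and all sizes and weights are illustrative:

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max())           # shift for numerical stability
    return e / e.sum()

def forward(x, layers):
    """Formula 3 applied layer by layer:
    u_i = W_i @ x_i + b_i, then x_{i+1} = f_i(u_i)."""
    for W, b, f in layers:
        x = f(W @ x + b)
    return x

rng = np.random.default_rng(1)
n1, n6, n7 = 107, 32, 2               # spectral channels, F width, output classes
layers = [
    (0.1 * rng.standard_normal((n6, n1)), np.zeros(n6), np.tanh),  # stands in for C1..F
    (0.1 * rng.standard_normal((n7, n6)), np.zeros(n7), softmax),  # output layer
]
y = forward(rng.standard_normal(n1), layers)   # class probabilities
```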

(3) Backpropagation

In the backpropagation stage, the trained parameters are adjusted and updated with gradient descent; each parameter is determined by minimizing the cost function and computing its partial derivatives. The loss function used is defined as follows:

J(θ) = −(1/m) Σi Σj 1{j = Y(i)} log yj(i)

Here m is the number of training samples and Y is the output. yj(i) is the j-th component of the actual output y(i) for the i-th training sample; the vector size is n7. For the expected output Y(i) of the i-th sample, the probability of the labelled class is 1 and that of every other class is 0. 1{j = Y(i)} means that the value is 1 if j equals the expected class label of the i-th training sample and 0 otherwise. The negative sign in front of J(θ) makes the computation more convenient.

Taking the partial derivative of the loss function with respect to ui gives the error term

δi = ∂J/∂ui = (Wi+1ᵀ δi+1) ∘ f′(ui)

where ∘ denotes element-wise multiplication. For the tanh activation, f′(ui) can be written simply as

f′(ui) = 1 − xi+1²   (since xi+1 = tanh(ui))

Therefore, at each iteration the update

θi ← θi − α · ∂J(θ)/∂θi

is performed over all trainable parameters, where α is the learning factor used to adjust the training parameters (α = 0.01). Since θi contains Wi and bi, the two gradients are computed separately:

∂J/∂Wi = δi xiᵀ,   ∂J/∂bi = δi

After many training iterations, the cost function becomes smaller and smaller, meaning the actual output comes ever closer to the expected output. When the difference between the actual and expected outputs is sufficiently small, the iteration stops; the trained deep convolutional neural network model can then be used for hyperspectral image classification.
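The backpropagation stage can be illustrated with a minimal gradient-descent loop for a softmax output layer with cross-entropy cost; this is a single-layer stand-in for the full network, and the data and learning rate in the usage example are illustrative:

```python
import numpy as np

def train_softmax(X, Y, n_classes, alpha=0.01, iters=300, seed=0):
    """Sketch of the backpropagation stage: softmax outputs,
    cross-entropy cost J(theta), and the gradient-descent update
    theta <- theta - alpha * dJ/dtheta at every iteration."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = 0.01 * rng.standard_normal((n, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[Y]            # the 1{j = Y(i)} indicator
    losses = []
    for _ in range(iters):
        U = X @ W + b
        U -= U.max(axis=1, keepdims=True)    # numerical stability
        P = np.exp(U)
        P /= P.sum(axis=1, keepdims=True)
        losses.append(-np.mean(np.log(P[np.arange(m), Y] + 1e-12)))
        G = (P - onehot) / m                 # dJ/dU for softmax + cross-entropy
        W -= alpha * (X.T @ G)               # dJ/dW = X^T G
        b -= alpha * G.sum(axis=0)           # dJ/db = sum of G
    return W, b, losses
```

On well-separated synthetic two-class data, the recorded loss decreases monotonically toward zero, mirroring the stopping criterion described above.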

Step 5: Acquisition of hyperspectral data from unstained sections and region recognition. Hyperspectral data of unstained sections are acquired as in Step 2 and preprocessed as in Step 3, then fed into the discriminative neural network trained in Step 4 to obtain a lesion/non-lesion decision for each point; combining the per-point decisions yields the recognition result for the unstained section.
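Operationally, this step reduces to applying the trained per-pixel classifier across the cube and reshaping the decisions into a label map. A sketch, with `classify` standing in for the trained discriminative network:

```python
import numpy as np

def segment_section(cube, classify):
    """Run a trained per-pixel classifier over an (H, W, bands)
    preprocessed hyperspectral cube and assemble the point-wise
    lesion/non-lesion decisions into a 2-D label map."""
    H, W, bands = cube.shape
    spectra = cube.reshape(-1, bands)        # one spectrum per pixel
    labels = np.fromiter((classify(s) for s in spectra), dtype=int)
    return labels.reshape(H, W)
```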

Validation of the device and method using stained/unstained sections of the same site

To verify the feasibility of the device and method, liver cancer sections were taken as an example: stained/unstained sections of the same site, provided by Mengchao Hepatobiliary Hospital of Fujian Medical University, were used for system validation. The sections were prepared as follows: diseased tissue was taken from a liver cancer patient, fixed by soaking in formalin solution, completely dehydrated and embedded in paraffin, and cut into thin slices with a microtome; two slices of equal thickness were cut from the same tissue block. One was stained with the conventional hematoxylin-eosin method to produce an H&E-stained section (for reference), and the other was placed directly on a glass slide and dewaxed with xylene to produce an unstained tissue section. The hyperspectral image acquisition conditions were: xenon lamp source at 30 mW/cm² intensity, wavelength range 400–718 nm, wavelength interval 3 nm.

For this batch of samples, the derivative spectra at a wavelength interval of 177 nm were selected for further discrimination between components. The original spectra are shown in Fig. 7a and the curves after wide-interval derivative processing in Fig. 7b. The processed standard data were used as input to train the parameters of the neural network classification algorithm, after which segmentation was validated on standard sections; the prediction results are shown in Fig. 8. The segmentation results were also compared with those of a conventional support vector machine algorithm, as shown in Fig. 9. Using the accuracy formula given in formula 13, the cancer-region segmentation accuracy of the two algorithms was computed; the results are shown in Table 1.

Table 1. Comparison of classification accuracy between the deep convolutional neural network model and the support vector machine

The experimental results show that the device can effectively acquire hyperspectral images of unstained liver cancer sections, and that the accompanying software algorithm segments cancer regions well. Compared with the traditional hematoxylin-eosin-stained pathological section method, it dispenses with the microscope and the complicated staining process, reduces the interference of human factors, and can provide pathologists with useful diagnostic assistance.
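Since formula 13 itself is not reproduced in this text, the accuracy comparison can only be sketched under the assumption that it is the standard pixel-accuracy ratio:

```python
import numpy as np

def pixel_accuracy(pred, truth):
    """Segmentation accuracy as the fraction of pixels whose predicted
    label matches the reference label. The exact form of formula 13 is
    not reproduced in the text, so this definition is an assumption."""
    pred = np.asarray(pred)
    truth = np.asarray(truth)
    return float((pred == truth).mean())
```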

Claims (2)

1. A method for image acquisition and segmentation using a device for acquiring and segmenting hyperspectral images of unstained pathological sections, characterized in that the device comprises an alloy steel base plate (104) on which a xenon light source (105) is mounted; a fixed bracket (101) is also mounted vertically on the alloy steel base plate (104) and carries, from top to bottom, a hyperspectral image acquisition module (102) and a sample platform (103); the hyperspectral image acquisition module (102), the sample platform (103) and the xenon light source (105) are coaxial, and the hyperspectral image acquisition module (102) is connected to an external computer (106);
The hyperspectral image acquisition module (102) comprises a CCD camera (110) connected to the external computer (106) through a USB interface on its side; the CCD camera (110) is connected to, and kept coaxial with, a liquid crystal tunable filter (114) through a relay lens (112) and a relay lens adjusting ring (113); the liquid crystal tunable filter (114) is connected to the computer (106) through a side USB interface (115), and its lower part is connected to an objective lens (118) through a C-mount focusing ring (116) and an aperture (117);
The method comprises the following steps:
Step 1: Device setup: place the unstained section on the sample platform (103), adjust the illumination range of the xenon light source (105) so that the section is uniformly illuminated, set the parameters of the CCD camera (110) and the liquid crystal tunable filter (114) through the computer (106) so that the image gray levels are clear, and adjust the relay lens adjusting ring (113), the C-mount focusing ring (116) and the aperture (117) so that the section coincides with the imaging focal plane;
Step 2: Parameter setting and acquisition of training sample data: set the image acquisition parameters of the hyperspectral image acquisition module (102) on the computer (106), including the acquisition mode, start and stop wavelengths, wavelength resolution and exposure time; start acquisition to obtain a two-dimensional image of the unstained section sample at each spectral band; after the acquisition is complete, keep the same acquisition parameters and separately acquire the spectral images of an unilluminated, completely dark background and of a completely blank field of view under the same illumination parameters as background references;
Step 3: Data preprocessing: preprocess each spectral image using the black/white reference information obtained in Step 2, then stack the images in band order to obtain a three-dimensional hyperspectral matrix whose z-axis corresponds to the spectral information of each spatial point; process the spectral curve of each point with the wide-interval derivative method to highlight its features;
Step 4: Training of the discriminative neural network: select the wide-interval spectral curves of the lesion/non-lesion regions of the training sample sections obtained in Step 3 as training data and train the spectral-data discrimination network; the specific process is as follows:
(1) Establish the discriminative neural network model
The input represents the spectral information of one pixel; to its right, convolutional and max-pooling layers compute a series of feature maps, and classification of the feature maps yields the output layer; the whole network consists of the input layer, convolutional layer C1, max-pooling layer S1, convolutional layer C2, max-pooling layer S2, fully connected layer F and the output layer; the input sample size is (n1, 1), where n1 is the number of spectral channels; convolutional layer C1 is the first hidden layer, filtering the n1×1 input data with 24 kernels of size k1×1; C1 contains 24×n2×1 nodes, where n2 = n1 − k1 + 1, and there are 24×(k1 + 1) trainable parameters between the input layer and C1; max-pooling layer S1 is the second hidden layer with kernel size (k2, 1); it contains 24×n3×1 nodes, where n3 = n2/k2, and has no parameters; convolutional layer C2 contains 24×n4×1 nodes with kernel size (k3, 1), where n4 = n3 − k3 + 1, and there are 24×(k3 + 1) trainable parameters between S1 and C2; max-pooling layer S2 contains 24×n5×1 nodes with kernel size (k4, 1), where n5 = n4/k4, and has no parameters; fully connected layer F contains n6 nodes, with (24×n6 + 1) trainable parameters between it and S2; the final output layer contains n7 nodes, and between it and F there are 24×n6×1 nodes and 24×n6×1×n7 trainable parameters; a convolutional neural network classifier with these parameters is built to discriminate hyperspectral pixels, where n1 is the number of spectral channels, n7 is the number of output classes, n2, n3, n4 and n5 are the dimensions of the feature maps, and n6 is the dimension of the fully connected layer;
(2) Forward propagation
The deep convolutional neural network used has a 5-layer structure; counting the input and output layers it can also be regarded as 7 layers, expressed as (L + 1) layers with L = 6; the input layer contains n1 input units, the output layer contains n7 output units, and the hidden layers are C1, S1, C2, S2 and F; letting xi be the input of layer i, i.e. the output of layer (i − 1), xi+1 is computed as:
xi+1 = fi(ui)   (formula 3)
where Wi is the weight matrix of layer i acting on the input data, bi is the bias vector of layer i, and fi(·) is the activation function of layer i; the hyperbolic tangent tanh(u) is chosen as the activation function of convolutional layers C1 and C2 and of the fully connected layer F, and the maximum function max(u) as the activation function of the max-pooling layers S1 and S2; a classifier performs multi-class classification of the data with n7 output classes, the n7-class regression model being defined such that the output vector y = xL+1 of the output layer gives the probabilities of all classes at the current iteration;
(3) Backpropagation
In the backpropagation stage, the trained parameters are adjusted and updated with gradient descent; each parameter is determined by minimizing the cost function and computing its partial derivatives; in the loss function, m is the number of training samples and Y is the output; yj(i) is the j-th component of the actual output y(i) for the i-th training sample, with vector size n7; for the expected output Y(i) of the i-th sample, the probability of the labelled class is 1 and that of every other class is 0; 1{j = Y(i)} means that the value is 1 if j equals the expected class label of the i-th training sample and 0 otherwise; a negative sign is placed in front of J(θ) to make the computation more convenient;
the partial derivative of the loss function with respect to ui is then taken, where ∘ denotes element-wise multiplication and f′(ui) can be written in a simple closed form; therefore, at each iteration an update is performed over θ, the set of all parameters in the network, with θi containing Wi and bi; to adjust the training parameters, α is the learning factor, α = 0.01;
after many training iterations the cost function becomes smaller and smaller, meaning the actual output comes ever closer to the expected output; when the difference between the actual and expected outputs is sufficiently small, the iteration stops, and the trained deep convolutional neural network model can be used for hyperspectral image classification;
Step 5: Acquisition of hyperspectral data from unstained sections and region recognition: acquire hyperspectral data of unstained sections as in Step 2, apply the same preprocessing as in Step 3, and feed the data into the discriminative neural network trained in Step 4 to obtain a lesion/non-lesion decision for each point; combining the per-point decisions yields the recognition result for the unstained section.
2. The method for image acquisition and segmentation using the device for acquiring and segmenting hyperspectral images of unstained pathological sections according to claim 1, characterized in that Step 3 is specifically implemented as follows:
The spectral images are preprocessed with a pixel correction method: under the same conditions as the acquisition of the section hyperspectral images, a set of completely dark hyperspectral data with no illumination and a set of hyperspectral images of a completely blank field of view are used; the dark image is subtracted from the hyperspectral image at the corresponding wavelength and the result is divided by the difference between the blank image and the dark image, the preprocessing being given by formula 1;
where R is the transmittance value obtained by the preprocessing, Iim is the gray value of the original hyperspectral image at each wavelength, Ibl is the gray value of the image at each wavelength under complete darkness, and Iwh is the gray value of the image at each wavelength in an illuminated blank field of view; a wide-interval derivative method is then applied, i.e. the derivative is taken over an enlarged independent-variable interval, as shown in formula 2:
where λ is the wavelength and Δλ is the wavelength interval.
CN201710353158.0A 2017-05-18 2017-05-18 Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections Active CN107064019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710353158.0A CN107064019B (en) Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections


Publications (2)

Publication Number Publication Date
CN107064019A CN107064019A (en) 2017-08-18
CN107064019B true CN107064019B (en) 2019-11-26

Family

ID=59610992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710353158.0A Active CN107064019B (en) Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections

Country Status (1)

Country Link
CN (1) CN107064019B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107843593A (en) * 2017-10-13 2018-03-27 上海工程技术大学 A kind of textile material recognition methods and system based on high light spectrum image-forming technology
KR102811548B1 (en) * 2018-02-15 2025-05-21 고쿠리츠다이가쿠호진 니이가타 다이가쿠 System, program and method for the identification of high-frequency variant cancer
CN109034208B (en) * 2018-07-03 2020-10-23 怀光智能科技(武汉)有限公司 High-low resolution combined cervical cell slice image classification system
CN109272492B (en) * 2018-08-24 2022-02-15 深思考人工智能机器人科技(北京)有限公司 Method and system for processing cytopathology smear
CN109489816B (en) * 2018-10-23 2021-02-26 华东师范大学 Microscopic hyperspectral imaging platform and large-area data cube acquisition method
CN110008836B (en) * 2019-03-06 2023-04-25 华东师范大学 A feature extraction method for hyperspectral images of pathological tissue slices
CN109815945B (en) * 2019-04-01 2024-04-30 上海徒数科技有限公司 Respiratory tract examination result interpretation system and method based on image recognition
AU2020284538B2 (en) 2019-05-28 2022-03-24 PAIGE.AI, Inc. Systems and methods for processing images to prepare slides for processed images for digital pathology
CN110600106B (en) * 2019-08-28 2022-07-05 上海联影智能医疗科技有限公司 Pathological section processing method, computer device and storage medium
CN110517258A (en) * 2019-08-30 2019-11-29 山东大学 A device and system for cervical cancer image recognition based on hyperspectral imaging technology
CN212432993U (en) * 2019-09-11 2021-01-29 南京九川科学技术有限公司 Pathological tissue section's full field of vision image device
CN112734813B (en) * 2019-10-14 2025-03-14 志诺维思(北京)基因科技有限公司 Registration method, device, electronic device and computer readable storage medium
TWI781408B (en) * 2019-11-27 2022-10-21 靜宜大學 Artificial intelligence based cell detection method by using hyperspectral data analysis technology
CN111325757B (en) * 2020-02-18 2022-12-23 西北工业大学 A Point Cloud Recognition and Segmentation Method Based on Bayesian Neural Network
CN113450305B (en) * 2020-03-26 2023-01-24 太原理工大学 Medical image processing method, system, equipment and readable storage medium
CN112712877B (en) * 2020-12-07 2024-02-09 西安电子科技大学 Large-view-field high-flux high-resolution pathological section analyzer
CN113065403A (en) * 2021-03-05 2021-07-02 浙江大学 Machine learning cell classification method and device based on hyperspectral imaging
CN114820502B (en) * 2022-04-21 2023-10-24 济宁医学院附属医院 Coloring detection method for protein kinase CK2 in intestinal mucosa tissue
CN115236015B (en) * 2022-07-21 2024-05-03 华东师范大学 Puncture sample pathology analysis system and method based on hyperspectral imaging technology
CN115728236A (en) * 2022-11-21 2023-03-03 山东大学 A hyperspectral image acquisition and processing system and its working method
CN117456222A (en) * 2023-08-31 2024-01-26 台州安奇灵智能科技有限公司 Hyperspectral microscopic imaging rapid identification, classification and counting method for mixed bacteria

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011089895A (en) * 2009-10-22 2011-05-06 Arata Satori Device and method of hyperspectral imaging
CN104316473A (en) * 2014-10-28 2015-01-28 南京农业大学 Gender determination method for chicken hatching egg incubation early embryo based on hyperspectral image
US9345428B2 (en) * 2004-11-29 2016-05-24 Hypermed Imaging, Inc. Hyperspectral imaging of angiogenesis
CN106097355A (en) * 2016-06-14 2016-11-09 山东大学 The micro-Hyperspectral imagery processing method of gastroenteric tumor based on convolutional neural networks
CN106226247A (en) * 2016-07-15 2016-12-14 暨南大学 A kind of cell detection method based on EO-1 hyperion micro-imaging technique


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Study on the detection of basal cell carcinoma and liver cancer in pathological sections based on hyperspectral imaging; Zhou Xianglian; Xi'an Jiaotong University Institutional Repository; Dec. 31, 2016; p. 1 *
Hyperspectral imaging for extracting cancer information from liver sections; Yu Cuirong et al.; Science Technology and Engineering; Sep. 30, 2015; Vol. 15, No. 27; pp. 106-107, Figs. 1-2 *

Also Published As

Publication number Publication date
CN107064019A (en) 2017-08-18

Similar Documents

Publication Publication Date Title
CN107064019B (en) Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections
US12327362B2 (en) Method and system for digital staining of label-free fluorescence images using deep learning
US12229959B2 (en) Systems and methods for determining cell number count in automated stereology z-stack images
CN110033032B (en) Tissue slice classification method based on microscopic hyperspectral imaging technology
JP7695256B2 (en) A Federated Learning System for Training Machine Learning Algorithms and Maintaining Patient Privacy
Quinn et al. Deep convolutional neural networks for microscopy-based point of care diagnostics
Rana et al. Computational histological staining and destaining of prostate core biopsy RGB images with generative adversarial neural networks
CN113781455B (en) Cervical cell image anomaly detection method, device, equipment and medium
Rashid et al. NSGA-II-DL: metaheuristic optimal feature selection with deep learning framework for HER2 classification in breast cancer
CN109670510A (en) A kind of gastroscopic biopsy pathological data screening system and method based on deep learning
WO2021146705A1 (en) Non-tumor segmentation to support tumor detection and analysis
US20240079116A1 (en) Automated segmentation of artifacts in histopathology images
CN115032196A (en) Full-scribing high-flux color pathological imaging analysis instrument and method
CN115036011B (en) A system for evaluating the prognosis of solid tumors based on digital pathology images
CN112990015A (en) Automatic lesion cell identification method and device and electronic equipment
CN113450305A (en) Medical image processing method, system, equipment and readable storage medium
CN106706643B (en) A kind of liver cancer comparison slice detection method
US20040014165A1 (en) System and automated and remote histological analysis and new drug assessment
JP2005331394A (en) Image processor
CN105996986B (en) A kind of devices and methods therefor based on multispectral detection human eye Meibomian gland model
Tsafas et al. Application of a deep-learning technique to non-linear images from human tissue biopsies for shedding new light on breast cancer diagnosis
CN112330613A (en) Method and system for evaluating quality of cytopathology digital image
US20240169534A1 (en) Medical image analysis device, medical image analysis method, and medical image analysis system
Nair et al. Deep learning based approaches for accurate automated nuclei segmentation of Pap smear images
CN118469845B (en) A non-reference label-free cell microscopy image visual enhancement method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240911

Address after: 11th Floor, 2043 Xizhou Road, Tong'an District, Xiamen City, Fujian Province 361001

Patentee after: TRIPLEX INTERNATIONAL BIOSCIENCES (CHINA) Co.,Ltd.

Country or region after: China

Address before: Beilin District Xianning West Road 710049, Shaanxi city of Xi'an province No. 28

Patentee before: XI'AN JIAOTONG University

Country or region before: China