CN107064019B - Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections - Google Patents
Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections
- Publication number
- CN107064019B CN107064019B CN201710353158.0A CN201710353158A CN107064019B CN 107064019 B CN107064019 B CN 107064019B CN 201710353158 A CN201710353158 A CN 201710353158A CN 107064019 B CN107064019 B CN 107064019B
- Authority
- CN
- China
- Prior art keywords
- layer
- image
- parameter
- training
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/17—Systems in which incident light is modified in accordance with the properties of the material investigated
- G01N21/25—Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/01—Arrangements or apparatus for facilitating the optical investigation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/01—Arrangements or apparatus for facilitating the optical investigation
- G01N2021/0181—Memory or computer-assisted visual determination
Abstract
A device and method for acquiring and segmenting hyperspectral images of unstained pathological sections. A section sample platform is supported at the middle of a stand, and a computer automatically acquires and processes hyperspectral images of unstained pathological sections to produce a segmentation of the lesion regions. Based on the spectral differences caused by tissue lesions, the invention uses a personal computer to synchronously control the related modules, acquires spectral image sequences of unstained pathological tissue sections, and preprocesses and stacks them into the corresponding three-dimensional hyperspectral data; on this data, a spectral classification algorithm built on currently popular neural-network classification ideas identifies and segments the lesion regions. This speeds up the recognition of pathological tissue sections, avoids the manual errors that the staining process may introduce, shortens section preparation, and, by judging automatically with a machine algorithm, reduces the subjectivity of manual assessment, providing good assistance to pathologists examining pathological sections.
Description
Technical Field
The invention belongs to the field of detection devices, and in particular relates to a device and method for the rapid acquisition and segmentation of unstained pathological sections based on hyperspectral data.
Background Art
Pathological examination is widely used in clinical work and scientific research. Clinically, it mainly comprises autopsy and surgical pathological examination. Surgical pathological examination serves, first, to establish a diagnosis and verify the preoperative diagnosis, raising the standard of clinical diagnosis; second, once the diagnosis is confirmed, to guide the choice of further treatment and the estimation of prognosis, raising the standard of clinical treatment. Clinicopathological analysis also yields a large amount of extremely valuable research data.
A pathological section is one kind of pathological specimen. To prepare one, diseased tissue or organs are fixed and hardened with various chemicals and embedding procedures, cut into thin slices on a microtome, mounted on glass slides, and stained in various colors for examination under a microscope, so that pathological changes can be observed, a pathological diagnosis made, and clinical diagnosis and treatment supported. Routine section preparation is the foundation of pathological diagnosis, and section quality is an important guarantee that the examining physician can reach an accurate diagnosis. Hematoxylin-eosin (HE) staining is the most widely used staining method in biological and medical cytology and histology. In practice, because some pathology technicians lack strict, systematic training and experience, improper handling often leads to detached sections, uneven staining, blurring, wrinkles, and poor nuclear-cytoplasmic contrast, all of which hamper diagnosis. Simplifying section preparation would therefore be of great value for diagnosis assisted by pathological sections.
The pathological process in tissue is usually accompanied by structural changes at the cellular and subcellular levels, which shift the tissue spectrum. By organically combining traditional two-dimensional imaging with spectroscopy, the two-dimensional spatial information and the one-dimensional spectral information of an observed target can be obtained simultaneously, and analysis of the spectral data reveals the morphological structure and chemical composition of the tissue. Applying hyperspectral imaging to the computer-assisted examination of pathological sections can effectively lower the experience demanded of pathologists, reduce the subjective element in pathological examination, and thus improve its efficiency.
At present, Siddiqi, Anwer M. et al. have used a microscopic spectral imager to acquire hyperspectral data of stained cervical cancer sections and trained a least-squares support vector machine to recognize the sections automatically, reaching sensitivity and specificity above 90%. In China, the Qingdao Research Institute of Photoelectric Engineering Technology has used a similar method to identify liver cancer sections, likewise with good results. Both methods, however, require the cancer sections to be stained, still place high experimental and experience demands on pathologists, and are time-consuming.
Summary of the Invention
To overcome the above deficiencies of the prior art, the object of the present invention is to provide a device and method for acquiring and segmenting hyperspectral images of unstained pathological sections that can acquire hyperspectral images of pathological sections automatically and train an artificial neural network on the three-dimensional hyperspectral data of sample sections to recognize images of unstained liver cancer sections. It reduces the complexity of conventional stained-section examination while lowering the subjective element of the judgment, providing a good reference for the pathologist's diagnosis.
To achieve the above object, the technical scheme of the present invention is as follows:
The device for acquiring and segmenting hyperspectral images of unstained pathological sections comprises an alloy-steel base plate 104 on which a xenon lamp source 105 is placed; a fixed stand 101 rises vertically from the base plate 104 and carries, from top to bottom, a hyperspectral image acquisition module 102 and a sample platform 103; the acquisition module 102, the sample platform 103, and the xenon lamp source 105 are coaxial, and the acquisition module 102 is connected to an external computer 106.
The hyperspectral image acquisition module 102 comprises a CCD camera 110 connected to the computer 106 through a USB port on its side; the camera 110 is joined coaxially to a liquid-crystal tunable filter 114 through a relay lens 112 and a relay-lens adjustment ring 113; the filter 114 is connected to the computer 106 through a side USB port 115, and its lower end is joined to an objective 118 through a C-mount focusing ring 116 and an iris diaphragm 117.
The method for acquiring and segmenting hyperspectral images of unstained pathological sections comprises the following steps:
Step one, setup: place the unstained section on the sample platform 103 and adjust the illumination range of the xenon lamp 105 so that the section is lit evenly; set the parameters of the CCD camera 110 and the liquid-crystal tunable filter 114 through the computer 106 so that the image gray levels are clear, while adjusting the relay-lens adjustment ring 113, the C-mount focusing ring 116, and the iris 117 so that the section coincides with the imaging focal plane.
Step two, parameter setting and acquisition of training-sample data: on the computer 106 set the image acquisition parameters of the hyperspectral image acquisition module 102, including acquisition mode, start and stop wavelengths, wavelength resolution, and exposure time; start the acquisition to obtain the two-dimensional image of the unstained section sample at each spectral band; after acquisition, keeping the same parameters, acquire the spectral images of a fully dark background with no illumination and of a completely blank field of view under the same illumination as background references.
Step three, data preprocessing: use the black/white reference information obtained in step two to preprocess each spectral image, then stack the images in band order into a three-dimensional hyperspectral matrix whose z-axis direction carries the spectral information of each spatial point; process each point's spectral curve with the wide-interval derivative method to accentuate its features.
Step four, training of the discriminant neural network: according to the stained-section result, select the wide-interval spectral curves of the lesion and non-lesion regions of the training-sample sections obtained in step three as training data, and train the spectral-data discriminant network.
Step five, acquisition and region recognition of unstained-section hyperspectral data: acquire the hyperspectral data of the unstained section as in step two, apply the same preprocessing as in step three, and feed the result into the discriminant neural network trained in step four to obtain a lesion/non-lesion decision for every point; combining the per-point decisions generates the recognition result for the unstained section.
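The five steps can be condensed into a per-pixel sketch. The code below is only a minimal illustration, not the patent's implementation: `classify_pixel` is a hypothetical stand-in for the trained discriminant network of step four, and the thresholding classifier in the demo is invented for the example.

```python
import numpy as np

def segment_cube(cube, classify_pixel):
    """Walk every spatial position of a preprocessed cube (bands, H, W),
    classify its spectrum, and assemble the lesion/non-lesion mask."""
    bands, h, w = cube.shape
    mask = np.zeros((h, w), dtype=int)
    for r in range(h):
        for c in range(w):
            mask[r, c] = classify_pixel(cube[:, r, c])
    return mask

# Invented stand-in classifier: call a spectrum "lesion" (1) if its mean > 0.5
cube = np.zeros((4, 2, 2))
cube[:, 0, 0] = 1.0
mask = segment_cube(cube, lambda s: int(s.mean() > 0.5))
print(mask)  # only the top-left pixel is flagged
```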
The specific scheme of step three is as follows:
The preprocessing of the spectral images uses a per-pixel correction method. Under the same conditions as the acquisition of the section hyperspectral images, a set of fully dark hyperspectral frames taken with no illumination and a set of hyperspectral frames of a completely blank field of view are used; at each wavelength, the dark image is subtracted from the hyperspectral image and the difference is divided by the difference between the blank image and the dark image. The preprocessing formula is Formula 1:
R = (I_im − I_bl) / (I_wh − I_bl)  (Formula 1)
where R is the transmittance spectral value obtained by the preprocessing conversion, I_im is the gray value of the original hyperspectral image at each wavelength, I_bl is the gray value of the image at each wavelength under complete darkness, and I_wh is the gray value of the image at each wavelength of the illuminated blank field of view. A wide-interval derivative method is then applied, i.e. the derivative is taken over an enlarged independent-variable interval, as in Formula 2:
dR/dλ = [R(λ + Δλ) − R(λ)] / Δλ  (Formula 2)
where λ is the wavelength and Δλ is the wavelength interval.
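Formulas 1 and 2 can be sketched in a few lines of NumPy. This is an illustrative reading of the correction and the wide-interval derivative, not the patent's code; the array shapes, wavelength values, and the `step` parameter are assumptions.

```python
import numpy as np

def calibrate_cube(raw, dark, white):
    """Formula 1: per-pixel black/white correction to transmittance.
    raw, dark, white: (bands, height, width) grayscale stacks of the
    section, the no-illumination dark frames, and the blank-field frames."""
    return (raw - dark) / (white - dark)

def wide_interval_derivative(cube, wavelengths, step):
    """Formula 2 with an enlarged interval: the derivative is taken
    between bands `step` apart rather than adjacent ones."""
    d_lambda = wavelengths[step:] - wavelengths[:-step]
    return (cube[step:] - cube[:-step]) / d_lambda[:, None, None]

# Toy cube: 5 bands, 2x2 pixels, flat spectrum of transmittance 0.5
raw = np.full((5, 2, 2), 60.0)
dark = np.full((5, 2, 2), 10.0)
white = np.full((5, 2, 2), 110.0)
R = calibrate_cube(raw, dark, white)
wl = np.linspace(500.0, 540.0, 5)            # assumed wavelengths in nm
dR = wide_interval_derivative(R, wl, step=2)
print(R[0, 0, 0], dR.shape)                  # 0.5 (3, 2, 2)
```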
The specific process of step four is as follows:
(1) Building the discriminant neural network model
The input represents the spectral information of one pixel; it is followed by convolutional and max-pooling layers that compute a series of feature maps, and classification of the feature maps yields the output layer. The whole network comprises the input layer, convolutional layer C1, max-pooling layer S1, convolutional layer C2, max-pooling layer S2, fully connected layer F, and the output layer. The sample size at the input layer is (n1, 1), where n1 is the number of bands. The first hidden convolutional layer C1 filters the n1×1 input data with 24 kernels of size k1×1; it contains 24×n2×1 nodes, where n2 = n1 − k1 + 1, and there are 24×(k1+1) trainable parameters between the input layer and C1. Max-pooling layer S1 is the second hidden layer; its kernel size is (k2, 1), it contains 24×n3×1 nodes, where n3 = n2/k2, and it has no parameters. Convolutional layer C2 contains 24×n4×1 nodes with kernel (k3, 1), where n4 = n3 − k3 + 1, and there are 24×(k3+1) trainable parameters between S1 and C2. Max-pooling layer S2 contains 24×n5×1 nodes with kernel size (k4, 1), where n5 = n4/k4, and has no parameters.
The fully connected layer F contains n6 nodes, with (24×n6+1) trainable parameters between it and max-pooling layer S2. The final output layer contains n7 nodes; between it and the fully connected layer F there are 24×n6×1 nodes and 24×n6×1×n7 trainable parameters. A convolutional neural network classifier with the above parameters is built to distinguish hyperspectral pixels, where n1 is the number of spectral channels, n7 the number of output classes, n2, n3, n4, and n5 the dimensions of the feature maps, and n6 the dimension of the fully connected layer.
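The dimension bookkeeping above (n2 = n1 − k1 + 1, n3 = n2/k2, n4 = n3 − k3 + 1, n5 = n4/k4) can be checked with a small helper. The kernel sizes in the example call are hypothetical; the patent does not fix k1–k4.

```python
def cnn_layer_sizes(n1, k1, k2, k3, k4):
    """Feature-map lengths of the 1-D network described above:
    n2 = n1-k1+1 (C1), n3 = n2/k2 (S1), n4 = n3-k3+1 (C2), n5 = n4/k4 (S2).
    Valid only when both pooling divisions are exact."""
    n2 = n1 - k1 + 1
    assert n2 % k2 == 0, "kernel k2 must divide n2 exactly"
    n3 = n2 // k2
    n4 = n3 - k3 + 1
    assert n4 % k4 == 0, "kernel k4 must divide n4 exactly"
    n5 = n4 // k4
    return n2, n3, n4, n5

# Hypothetical configuration: 33 spectral bands, kernels 4/2/6/2
print(cnn_layer_sizes(n1=33, k1=4, k2=2, k3=6, k4=2))  # (30, 15, 10, 5)
```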
(2) Forward propagation
The deep convolutional neural network used has a 5-layer structure; counting the input and output layers it can be regarded as 7 layers, denoted (L+1) layers with L = 6. The input layer contains n1 input units, the output layer n7 output units, and the hidden layers are C1, S1, C2, S2, and F. Letting xi be the input of layer i, i.e. the output of layer (i−1), xi+1 is computed as
xi+1 = fi(ui)  (Formula 3)
where
ui = Wi xi + bi
Here Wi is the weight matrix that layer i applies to its input data, bi is the bias vector added by layer i, and fi(·) is the activation function of layer i; the hyperbolic tangent tanh(u) is chosen as the activation function of convolutional layers C1 and C2 and of the fully connected layer F, and the maximum function max(u) as the activation function of max-pooling layers S1 and S2. The classifier must divide the data among n7 classes; the n7-class label regression (softmax) model is defined as
yj = exp(uj) / Σk exp(uk),  j = 1, …, n7
The output vector y = xL+1 of the output layer gives the probabilities of all classes at the current iteration.
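Formula 3 and the softmax output can be illustrated with a dense-layer toy; the convolution and pooling structure is omitted for brevity, and the sizes 8, 6, and 2 are arbitrary stand-ins for n1, n6, and n7.

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_layer(x, W, b):
    # Formula 3 with f = tanh: x_{i+1} = tanh(u_i), u_i = W_i x_i + b_i
    return np.tanh(W @ x + b)

def softmax(u):
    # Output layer: y_j = exp(u_j) / sum_k exp(u_k)
    e = np.exp(u - u.max())
    return e / e.sum()

# Toy stack: 8 spectral samples -> 6 hidden units -> 2 classes
x = rng.standard_normal(8)
W1, b1 = rng.standard_normal((6, 8)), np.zeros(6)
W2, b2 = rng.standard_normal((2, 6)), np.zeros(2)
y = softmax(W2 @ tanh_layer(x, W1, b1) + b2)
print(y.sum())  # class probabilities sum to 1
```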
(3) Backward propagation
In the backward-propagation stage the trained parameters are adjusted and updated by gradient descent; each parameter is determined by minimizing the cost function and computing its partial derivatives. The loss function used is defined as
J(θ) = −(1/m) Σi Σj 1{j = Y(i)} log y(i)j
where m is the number of training samples and Y is the output. y(i)j is the j-th component of the actual output y(i) of the i-th training sample, a vector of size n7. In the expected output Y(i) of the i-th sample, the probability of the labeled class is 1 and that of any other class is 0; 1{j = Y(i)} equals 1 if j equals the expected class label of the i-th training sample and 0 otherwise. The minus sign placed in front of J(θ) makes the computation more convenient.
Taking the partial derivative of the loss function with respect to ui gives
∂J/∂ui = ∂J/∂xi+1 ∘ f′(ui)
where ∘ denotes element-wise multiplication; for the tanh activation, f′(ui) can be written simply as
f′(ui) = 1 − fi(ui)²
Therefore, at every iteration, the update
θi ← θi − α ∂J/∂θi
is performed to determine the training parameters, where α is the learning factor (α = 0.01); since θi comprises Wi and bi,
Wi ← Wi − α ∂J/∂Wi,  bi ← bi − α ∂J/∂bi
with ∂J/∂Wi = (∂J/∂ui) xiT and ∂J/∂bi = ∂J/∂ui.
After many training iterations the cost function shrinks further and further, meaning the actual output draws ever closer to the expected output; when the difference between them is small enough, the iteration stops, and the trained deep convolutional neural network model can then be used for hyperspectral image classification.
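The update rule can be exercised end-to-end on a softmax-regression toy, a deliberately reduced stand-in for the full network (no convolution or pooling; the data and the larger demo learning rate are invented for the example, while the text's α = 0.01 is kept as the default):

```python
import numpy as np

def train_softmax(X, labels, n_classes, alpha=0.01, iters=500):
    """Gradient-descent loop: minimise the cross-entropy cost J(theta) of a
    softmax output and apply theta <- theta - alpha * dJ/dtheta each pass."""
    m, n = X.shape
    W = np.zeros((n_classes, n))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]              # one-hot encoding of 1{j = Y^(i)}
    for _ in range(iters):
        U = X @ W.T + b
        E = np.exp(U - U.max(axis=1, keepdims=True))
        P = E / E.sum(axis=1, keepdims=True)   # predicted probabilities y^(i)
        G = (P - Y) / m                        # dJ/dU for the cross-entropy cost
        W -= alpha * G.T @ X                   # dJ/dW = (dJ/dU)^T X
        b -= alpha * G.sum(axis=0)             # dJ/db = column sums of dJ/dU
    return W, b

# Invented 2-feature "spectra": class 0 low-valued, class 1 high-valued
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]])
labels = np.array([0, 0, 1, 1])
W, b = train_softmax(X, labels, n_classes=2, alpha=0.5, iters=2000)
pred = (X @ W.T + b).argmax(axis=1)
print(pred)
```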
Compared with the prior art, the present invention, based on the spectral differences caused by tissue lesions, uses a personal computer to synchronously control the related modules, acquires spectral image sequences of unstained pathological tissue sections, and preprocesses and stacks them into the corresponding three-dimensional hyperspectral data; on this data, a spectral classification algorithm built on currently popular neural-network classification ideas identifies and segments the lesion regions, speeding up the recognition of pathological tissue sections. With this device, a pathologist preparing pathological sections no longer needs the traditional HE staining process, which avoids the manual errors the staining process may introduce and shortens section preparation; automatic judgment by a machine algorithm reduces the subjectivity of manual assessment and provides good assistance to pathologists examining pathological sections.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the overall structure of the device of the present invention.
Fig. 2 is a schematic diagram of the structure of the hyperspectral image acquisition module of the present invention.
Fig. 3 is a photograph of the device of the present invention.
Fig. 4 is a screenshot of the accompanying acquisition software of the present invention.
Fig. 5 is a flow chart of the section spectral-data recognition algorithm of the present invention.
Fig. 6 is a schematic diagram of the structure of the spectral-data discriminant neural network used by the present invention.
Fig. 7a shows original spectra extracted at multiple points of a tissue section according to the present invention; Fig. 7b shows the spectral curves after wide-interval derivative processing.
Fig. 8 compares the prediction image of the unstained-section model of the present invention with the segmentation result of a stained section of the same site.
Fig. 9 compares the prediction image of the unstained-section model of the present invention with the recognition result of a support vector machine algorithm.
具体实施方式Detailed ways
下面结合附图对本发明做进一步说明。The present invention will be further described below in conjunction with the accompanying drawings.
参见图1至图3,用于无染色病理切片高光谱图像采集及分割的装置,包括合金钢基板104,合金钢基板104上置有氙灯光源105,合金钢基板104上还垂直设置有固定支架101,固定支架101从上往下设置有高光谱图像采集模块102和样品平台103,高光谱图像采集模块102、样品平台103与氙灯光源105同轴心,高光谱图像采集模块102外接计算机106;Referring to Figures 1 to 3, the device for hyperspectral image acquisition and segmentation of non-stained pathological slices includes an alloy steel substrate 104, on which a xenon lamp light source 105 is placed, and a fixed bracket is vertically arranged on the alloy steel substrate 104 101, the fixed bracket 101 is provided with a hyperspectral image acquisition module 102 and a sample platform 103 from top to bottom, the hyperspectral image acquisition module 102, the sample platform 103 and the xenon light source 105 are coaxial, and the hyperspectral image acquisition module 102 is externally connected to a computer 106;
所述的高光谱图像采集模块102包括CCD相机110,通过CCD相机110侧面USB接口外接计算机106,CCD相机110通过中继镜头112、中继镜头调节环113与液晶可调谐滤光器114相连并保持同轴心,液晶可调谐滤光器114通过侧面USB接口115与计算机106相连实现与CCD相机110的同步控制,液晶可调谐滤光器114下部通过C口对焦环116、光圈117与物镜118相连,中继镜头112与物镜118均具备用以调节焦平面的对焦环与用以控制进光量大小的光圈环,可通过调节中继镜头调节环113、C口对焦环116、光圈117对切片样品平台104上置切片样品实现准确对焦与合适曝光,样品台侧方设置有氙灯光路镜头组,氙灯光路镜头组通过氙灯光源驱动电路控制,液晶可调谐滤光器和科研相机连接计算机。Described hyperspectral image acquisition module 102 comprises CCD camera 110, connects computer 106 through CCD camera 110 side USB interface, and CCD camera 110 is connected with liquid crystal tunable filter 114 by relay lens 112, relay lens adjustment ring 113 and Keeping the coaxial center, the liquid crystal tunable filter 114 is connected to the computer 106 through the side USB interface 115 to realize synchronous control with the CCD camera 110, and the lower part of the liquid crystal tunable filter 114 passes through the C-port focus ring 116, the aperture 117 and the objective lens 118 Connected, the relay lens 112 and the objective lens 118 are equipped with a focus ring for adjusting the focal plane and an aperture ring for controlling the amount of incoming light, and the slices can be adjusted by adjusting the relay lens adjustment ring 113, the C-mount focus ring 116, and the aperture 117. The sliced sample is placed on the sample platform 104 to achieve accurate focus and proper exposure. A xenon light path lens group is arranged on the side of the sample stage. The xenon light path lens group is controlled by a xenon light source drive circuit. The liquid crystal tunable filter and the scientific research camera are connected to the computer.
The hyperspectral image acquisition and segmentation method for unstained pathological sections comprises the following steps:
Step 1, device setup: place the unstained section on the sample platform 103 and adjust the illumination range of the xenon lamp 105 so that the section is evenly illuminated; set the parameters of the CCD camera 110 and the liquid crystal tunable filter 114 through the computer 106 so that the image gray levels are clear, while adjusting the relay lens adjustment ring 113, the C-mount focus ring 116, and the aperture 117 so that the section coincides with the imaging focal plane.
Step 2, referring to Figure 4, parameter setting and acquisition of training sample data: set the image acquisition parameters of the hyperspectral image acquisition module 102 on the computer 106, including acquisition mode, start and stop wavelengths, wavelength resolution, and exposure time, then acquire the two-dimensional image of the unstained section sample at each wavelength. After the acquisition is completed, keep the same acquisition parameters and separately acquire the spectral images of an unilluminated, completely dark background and of a completely blank field of view under the same illumination parameters, to serve as background references.
Step 3, data preprocessing: preprocess each spectral image with the black/white reference information obtained in step 2, then stack the images in spectral-band order to obtain a three-dimensional hyperspectral matrix, whose z-axis direction corresponds to the spectral information of each point in space; process the spectral curve of each point with the wide-interval derivative method to highlight its features.
The specific implementation of step 3 is as follows:
The spectral images are preprocessed with a pixel-wise calibration method. Under the same conditions as the acquisition of the section hyperspectral images, a set of completely dark hyperspectral images (no illumination) and a set of hyperspectral images of a completely blank field of view are used; the dark image at each wavelength is subtracted from the corresponding hyperspectral image, and this difference is divided by the difference between the blank image and the dark image, as shown in Formula 1:

R = (I_im − I_bl) / (I_wh − I_bl) (Formula 1)

where R is the transmittance spectral value obtained by the preprocessing, I_im is the gray value of the original hyperspectral image at each wavelength, I_bl is the gray value of the image at each wavelength under complete darkness, and I_wh is the gray value of the image at each wavelength for an illuminated blank field of view. A wide-interval derivative method is then applied, that is, the derivative is taken over an enlarged independent-variable interval, as shown in Formula 2:

R′(λ) = [R(λ + Δλ) − R(λ)] / Δλ (Formula 2)

where λ is the wavelength and Δλ is the wavelength interval.
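The black/white calibration of Formula 1 and the wide-interval derivative of Formula 2 can be sketched as follows (a minimal illustration in NumPy; array layout and the interval-to-band conversion are assumptions for the sketch, not taken from the patent):

```python
import numpy as np

def calibrate(raw, dark, white):
    """Pixel-wise black/white calibration (Formula 1).

    raw, dark, white: (bands, h, w) arrays of gray values for the sample,
    the completely dark reference, and the blank field of view.
    Returns the transmittance cube R.
    """
    denom = white.astype(float) - dark.astype(float)
    denom[denom == 0] = 1e-9                       # guard against division by zero
    return (raw.astype(float) - dark) / denom

def wide_interval_derivative(R, wavelengths, delta_idx):
    """Wide-interval derivative (Formula 2) along the spectral axis.

    delta_idx: interval expressed in bands, e.g. 177 nm / 3 nm = 59 bands
    for the validation experiment described later.
    """
    d_lambda = wavelengths[delta_idx:] - wavelengths[:-delta_idx]
    return (R[delta_idx:] - R[:-delta_idx]) / d_lambda[:, None, None]
```

A uniformly gray sample gives a flat transmittance cube and therefore a zero derivative, which is a quick sanity check for the pipeline.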
Step 4, training of the discriminative neural network: according to the stained-section results, select the wide-interval spectral curves of the lesion/non-lesion regions of the training sample sections obtained in step 3 as training data, and train the spectral-data discrimination network. The network parameter training algorithm is shown in Figure 5; the specific process is as follows:
(1) Establish the discriminative neural network model shown in Figure 6.
The input represents the spectral information of one pixel; it is followed by convolutional and max-pooling layers that compute a series of feature maps, and classifying the feature maps yields the output layer. The whole network consists of the input layer, convolutional layer C1, max-pooling layer S1, convolutional layer C2, max-pooling layer S2, fully connected layer F, and the output layer. The input sample size is (n1, 1), where n1 is the number of bands. The first hidden convolutional layer C1 filters the n1×1 input data with 24 kernels of size k1×1; C1 contains 24×n2×1 nodes, where n2 = n1 − k1 + 1, and there are 24×(k1+1) trainable parameters between the input layer and C1. Max-pooling layer S1 is the second hidden layer, with kernel size (k2, 1); S1 contains 24×n3×1 nodes, where n3 = n2/k2, and this layer has no parameters. Convolutional layer C2 contains 24×n4×1 nodes with kernel size (k3, 1), where n4 = n3 − k3 + 1, and there are 24×(k3+1) trainable parameters between S1 and C2. Max-pooling layer S2 contains 24×n5×1 nodes with kernel size (k4, 1), where n5 = n4/k4, and this layer has no parameters. The fully connected layer F contains n6 nodes, with (24×n6+1) trainable parameters between S2 and F. The final output layer contains n7 nodes; between this layer and F there are 24×n6×1 nodes and 24×n6×1×n7 trainable parameters. A convolutional neural network classifier with the above parameters is built to distinguish hyperspectral pixels, where n1 is the number of spectral channels, n7 is the number of output classes, n2, n3, n4, and n5 are the feature-map dimensions, and n6 is the dimension of the fully connected layer.
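As a quick check of the layer sizes above, the feature-map dimensions can be derived directly from the kernel sizes. The concrete values of n1 and k1–k4 below are illustrative assumptions (the patent does not fix them); n1 = 107 corresponds to the 400–718 nm range at 3 nm steps used in the later validation:

```python
def layer_dims(n1, k1, k2, k3, k4):
    """Feature-map lengths of the 1-D CNN described in the text."""
    n2 = n1 - k1 + 1      # after convolution C1
    n3 = n2 // k2         # after max pooling S1
    n4 = n3 - k3 + 1      # after convolution C2
    n5 = n4 // k4         # after max pooling S2
    return n2, n3, n4, n5

# Illustrative example with assumed kernel sizes k1=8, k2=2, k3=5, k4=2
print(layer_dims(107, 8, 2, 5, 2))  # -> (100, 50, 46, 23)
```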
(2) Forward propagation
The deep convolutional neural network used has a 5-layer structure; counting the input and output layers it can be regarded as having 7 layers, denoted as (L+1) layers with L = 6. The input layer contains n1 input units, the output layer contains n7 output units, and the hidden layers are C1, S1, C2, S2, and F. If x_i denotes the input of the i-th layer (that is, the output of the (i−1)-th layer), then x_{i+1} is computed as:
x_{i+1} = f_i(u_i) (Formula 3)
where

u_i = W_i^T x_i + b_i

in which W_i is the weight matrix of the i-th layer acting on the input data, b_i is the bias vector of the i-th layer, and f_i(·) is the activation function of the i-th layer. The hyperbolic tangent tanh(u) is chosen as the activation function of convolutional layers C1 and C2 and of the fully connected layer F, and the maximum function max(u) is used as the activation function of max-pooling layers S1 and S2. A classifier is used to perform multi-class classification of the data with n7 output classes; the n7-class softmax regression model is defined as:

y_j = exp(u_j) / Σ_k exp(u_k), j = 1, …, n7
The output vector y = x_{L+1} of the output layer gives the probabilities of all classes at the current iteration.
(3) Backpropagation
In the backpropagation stage, the trained parameters are adjusted and updated by gradient descent; each parameter is determined by minimizing the cost function and computing its partial derivatives. The loss function used is defined as:

J(θ) = −(1/m) Σ_{i=1}^{m} Σ_{j=1}^{n7} 1{j = Y^(i)} log y_j^(i)

where m is the number of training samples and Y^(i) is the expected output of the i-th sample. y_j^(i) is the j-th component of the actual output y^(i) of the i-th training sample; the vector size is n7. For the i-th sample, the probability of the labeled class is 1 and the probability of every other class is 0. The indicator 1{j = Y^(i)} equals 1 if j is the expected class label of the i-th training sample and 0 otherwise. The negative sign in front of J(θ) makes the computation more convenient.
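The cross-entropy cost J(θ) over a batch of softmax outputs can be computed compactly (a sketch; the small epsilon guarding log(0) is an implementation detail added here, not part of the patent):

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy loss J(theta).

    probs:  (m, n7) softmax outputs y^(i)
    labels: (m,) integer class labels Y^(i)
    """
    m = probs.shape[0]
    picked = probs[np.arange(m), labels]   # y_{Y^(i)}^{(i)} for each sample
    return -np.log(picked + 1e-12).mean()
```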
Taking the partial derivative of the loss function with respect to u_i gives

∂J/∂u_i = (W_{i+1} ∂J/∂u_{i+1}) ∘ f′(u_i)

where ∘ denotes element-wise multiplication. For the tanh activation, f′(u_i) can be expressed simply as

f′(u_i) = 1 − tanh²(u_i)
Therefore, at each iteration the update

θ_i ← θ_i − α ∂J(θ)/∂θ_i

is performed on the training parameters, where α is the learning rate (α = 0.01). Since θ_i contains W_i and b_i, this amounts to

W_i ← W_i − α ∂J/∂W_i, b_i ← b_i − α ∂J/∂b_i

where ∂J/∂b_i = ∂J/∂u_i and ∂J/∂W_i is obtained from ∂J/∂u_i and the layer input x_i.
After many training iterations, the cost function becomes smaller and smaller, meaning that the actual output gets closer and closer to the expected output. When the difference between the actual and expected output is small enough, the iteration stops; the trained deep convolutional neural network model can then be used for hyperspectral image classification.
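The iteration scheme of step 4 (forward pass, cross-entropy cost, gradient update with α = 0.01) can be illustrated on a toy softmax-regression model, which is the output stage of the network above with the convolutional layers stripped away. This is only a sketch of the training loop's shape, not the patent's network:

```python
import numpy as np

def softmax_rows(u):
    e = np.exp(u - u.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, Y, n_classes, alpha=0.01, iters=500):
    """Toy training loop: forward pass, cross-entropy cost, gradient step."""
    m, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    costs = []
    for _ in range(iters):
        P = softmax_rows(X @ W + b)                     # forward pass
        onehot = np.eye(n_classes)[Y]
        costs.append(-np.log(P[np.arange(m), Y]).mean())  # J(theta)
        dU = (P - onehot) / m                           # dJ/du at the output
        W -= alpha * X.T @ dU                           # theta <- theta - alpha*grad
        b -= alpha * dU.sum(axis=0)
    return W, b, costs
```

On separable data the recorded cost decreases monotonically toward zero, matching the stopping criterion described above.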
Step 5, acquisition and region identification of unstained-section hyperspectral data: acquire the hyperspectral data of the unstained section as in step 2, apply the same preprocessing as in step 3, and feed the data into the discriminative neural network trained in step 4 to obtain a lesion/non-lesion decision for each point; combining the per-point decisions yields the recognition result for the unstained section.
Validation of the device and method using stained/unstained sections from the same site
To verify the feasibility of the device and method, taking liver cancer sections as an example, we performed a system validation with stained/unstained sections of the same site provided by Mengchao Hepatobiliary Hospital of Fujian Medical University. The sections were prepared as follows: diseased tissue was taken from a liver cancer patient, fixed by soaking in formalin solution, completely dehydrated, and embedded in paraffin; thin slices were cut with a microtome, and two slices of the same thickness were taken from the same tissue block. One slice was stained with the conventional hematoxylin-eosin method to make an H&E-stained section (for theoretical comparison); the other was placed directly on a glass slide and dewaxed with xylene to make an unstained tissue section. The hyperspectral image acquisition conditions were: xenon lamp light intensity 30 mW/cm², wavelength range 400–718 nm, wavelength interval 3 nm.
For this batch of samples, we selected the derivative spectra at a wavelength interval of 177 nm for further discrimination between components; the original spectra are shown in Figure 7a and the curves after wide-interval derivative processing in Figure 7b. The processed standard data were used as input to train the neural network classifier parameters, after which the standard sections were segmented for verification; the prediction results are shown in Figure 8. The segmentation results were also compared with those of the traditional support vector machine algorithm, as shown in Figure 9. Using the accuracy formula shown in Formula 13, the cancer-region segmentation accuracy of the two algorithms was calculated; the results are shown in Table 1.
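Formula 13 itself is not reproduced in this text; a common per-pixel definition of segmentation accuracy (an assumption here, not necessarily the patent's exact formula) is the fraction of correctly classified pixels:

```python
import numpy as np

def segmentation_accuracy(pred, truth):
    """Per-pixel accuracy: correctly classified pixels / total pixels.

    pred, truth: 2-D label maps (e.g. 1 = lesion, 0 = non-lesion).
    """
    return float((pred == truth).mean())
```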
Table 1. Comparison of classification accuracy between the deep convolutional neural network model and the support vector machine
The experimental results show that the device can effectively acquire hyperspectral images of unstained liver cancer sections, and the accompanying software algorithm segments cancer regions well. Compared with the traditional hematoxylin-eosin-stained pathological section method, it eliminates the microscope and the complicated staining process, reduces the interference of human factors in the traditional method, and can provide good diagnostic assistance for pathologists.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710353158.0A CN107064019B (en) | 2017-05-18 | 2017-05-18 | The device and method for acquiring and dividing for dye-free pathological section high spectrum image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107064019A CN107064019A (en) | 2017-08-18 |
CN107064019B true CN107064019B (en) | 2019-11-26 |
Family
ID=59610992
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107064019B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107843593A (en) * | 2017-10-13 | 2018-03-27 | 上海工程技术大学 | A kind of textile material recognition methods and system based on high light spectrum image-forming technology |
KR102811548B1 (en) * | 2018-02-15 | 2025-05-21 | 고쿠리츠다이가쿠호진 니이가타 다이가쿠 | System, program and method for the identification of high-frequency variant cancer |
CN109034208B (en) * | 2018-07-03 | 2020-10-23 | 怀光智能科技(武汉)有限公司 | High-low resolution combined cervical cell slice image classification system |
CN109272492B (en) * | 2018-08-24 | 2022-02-15 | 深思考人工智能机器人科技(北京)有限公司 | Method and system for processing cytopathology smear |
CN109489816B (en) * | 2018-10-23 | 2021-02-26 | 华东师范大学 | Microscopic hyperspectral imaging platform and large-area data cube acquisition method |
CN110008836B (en) * | 2019-03-06 | 2023-04-25 | 华东师范大学 | A feature extraction method for hyperspectral images of pathological tissue slices |
CN109815945B (en) * | 2019-04-01 | 2024-04-30 | 上海徒数科技有限公司 | Respiratory tract examination result interpretation system and method based on image recognition |
AU2020284538B2 (en) | 2019-05-28 | 2022-03-24 | PAIGE.AI, Inc. | Systems and methods for processing images to prepare slides for processed images for digital pathology |
CN110600106B (en) * | 2019-08-28 | 2022-07-05 | 上海联影智能医疗科技有限公司 | Pathological section processing method, computer device and storage medium |
CN110517258A (en) * | 2019-08-30 | 2019-11-29 | 山东大学 | A device and system for cervical cancer image recognition based on hyperspectral imaging technology |
CN212432993U (en) * | 2019-09-11 | 2021-01-29 | 南京九川科学技术有限公司 | Pathological tissue section's full field of vision image device |
CN112734813B (en) * | 2019-10-14 | 2025-03-14 | 志诺维思(北京)基因科技有限公司 | Registration method, device, electronic device and computer readable storage medium |
TWI781408B (en) * | 2019-11-27 | 2022-10-21 | 靜宜大學 | Artificial intelligence based cell detection method by using hyperspectral data analysis technology |
CN111325757B (en) * | 2020-02-18 | 2022-12-23 | 西北工业大学 | A Point Cloud Recognition and Segmentation Method Based on Bayesian Neural Network |
CN113450305B (en) * | 2020-03-26 | 2023-01-24 | 太原理工大学 | Medical image processing method, system, equipment and readable storage medium |
CN112712877B (en) * | 2020-12-07 | 2024-02-09 | 西安电子科技大学 | Large-view-field high-flux high-resolution pathological section analyzer |
CN113065403A (en) * | 2021-03-05 | 2021-07-02 | 浙江大学 | Machine learning cell classification method and device based on hyperspectral imaging |
CN114820502B (en) * | 2022-04-21 | 2023-10-24 | 济宁医学院附属医院 | Coloring detection method for protein kinase CK2 in intestinal mucosa tissue |
CN115236015B (en) * | 2022-07-21 | 2024-05-03 | 华东师范大学 | Puncture sample pathology analysis system and method based on hyperspectral imaging technology |
CN115728236A (en) * | 2022-11-21 | 2023-03-03 | 山东大学 | A hyperspectral image acquisition and processing system and its working method |
CN117456222A (en) * | 2023-08-31 | 2024-01-26 | 台州安奇灵智能科技有限公司 | Hyperspectral microscopic imaging rapid identification, classification and counting method for mixed bacteria |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011089895A (en) * | 2009-10-22 | 2011-05-06 | Arata Satori | Device and method of hyperspectral imaging |
CN104316473A (en) * | 2014-10-28 | 2015-01-28 | 南京农业大学 | Gender determination method for chicken hatching egg incubation early embryo based on hyperspectral image |
US9345428B2 (en) * | 2004-11-29 | 2016-05-24 | Hypermed Imaging, Inc. | Hyperspectral imaging of angiogenesis |
CN106097355A (en) * | 2016-06-14 | 2016-11-09 | 山东大学 | The micro-Hyperspectral imagery processing method of gastroenteric tumor based on convolutional neural networks |
CN106226247A (en) * | 2016-07-15 | 2016-12-14 | 暨南大学 | A kind of cell detection method based on EO-1 hyperion micro-imaging technique |
Non-Patent Citations (2)
Title |
---|
Detection of basal cell carcinoma and liver cancer in pathological sections based on hyperspectral imaging; Zhou Xianglian; Xi'an Jiaotong University Institutional Repository; 2016-12-31; p. 1 *
Hyperspectral imaging for extracting cancerization information from liver sections; Yu Cuirong et al.; Science Technology and Engineering; 2015-09-30; Vol. 15, No. 27; pp. 106-107, Figs. 1-2 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107064019B (en) | The device and method for acquiring and dividing for dye-free pathological section high spectrum image | |
US12327362B2 (en) | Method and system for digital staining of label-free fluorescence images using deep learning | |
US12229959B2 (en) | Systems and methods for determining cell number count in automated stereology z-stack images | |
CN110033032B (en) | Tissue slice classification method based on microscopic hyperspectral imaging technology | |
JP7695256B2 (en) | A Federated Learning System for Training Machine Learning Algorithms and Maintaining Patient Privacy | |
Quinn et al. | Deep convolutional neural networks for microscopy-based point of care diagnostics | |
Rana et al. | Computational histological staining and destaining of prostate core biopsy RGB images with generative adversarial neural networks | |
CN113781455B (en) | Cervical cell image anomaly detection method, device, equipment and medium | |
Rashid et al. | NSGA-II-DL: metaheuristic optimal feature selection with deep learning framework for HER2 classification in breast cancer | |
CN109670510A (en) | A kind of gastroscopic biopsy pathological data screening system and method based on deep learning | |
WO2021146705A1 (en) | Non-tumor segmentation to support tumor detection and analysis | |
US20240079116A1 (en) | Automated segmentation of artifacts in histopathology images | |
CN115032196A (en) | Full-scribing high-flux color pathological imaging analysis instrument and method | |
CN115036011B (en) | A system for evaluating the prognosis of solid tumors based on digital pathology images | |
CN112990015A (en) | Automatic lesion cell identification method and device and electronic equipment | |
CN113450305A (en) | Medical image processing method, system, equipment and readable storage medium | |
CN106706643B (en) | A kind of liver cancer comparison slice detection method | |
US20040014165A1 (en) | System and automated and remote histological analysis and new drug assessment | |
JP2005331394A (en) | Image processor | |
CN105996986B (en) | A kind of devices and methods therefor based on multispectral detection human eye Meibomian gland model | |
Tsafas et al. | Application of a deep-learning technique to non-linear images from human tissue biopsies for shedding new light on breast cancer diagnosis | |
CN112330613A (en) | Method and system for evaluating quality of cytopathology digital image | |
US20240169534A1 (en) | Medical image analysis device, medical image analysis method, and medical image analysis system | |
Nair et al. | Deep learning based approaches for accurate automated nuclei segmentation of Pap smear images | |
CN118469845B (en) | A non-reference label-free cell microscopy image visual enhancement method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | |
Effective date of registration: 20240911 Address after: 11th Floor, 2043 Xizhou Road, Tong'an District, Xiamen City, Fujian Province 361001 Patentee after: TRIPLEX INTERNATIONAL BIOSCIENCES (CHINA) Co.,Ltd. Country or region after: China Address before: Beilin District Xianning West Road 710049, Shaanxi city of Xi'an province No. 28 Patentee before: XI'AN JIAOTONG University Country or region before: China |