
CN109102498A - Method for segmenting clustered nuclei in cervical smear images - Google Patents

Method for segmenting clustered nuclei in cervical smear images

Info

Publication number
CN109102498A
CN109102498A (application CN201810769112.1A)
Authority
CN
China
Prior art keywords
feature
features
low
segmentation
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810769112.1A
Other languages
Chinese (zh)
Other versions
CN109102498B (en)
Inventor
张见威
刘珍梅
黎官钊
何君婷
陈丹妮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201810769112.1A priority Critical patent/CN109102498B/en
Publication of CN109102498A publication Critical patent/CN109102498A/en
Application granted granted Critical
Publication of CN109102498B publication Critical patent/CN109102498B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for segmenting clustered nuclei in cervical smear images, comprising the steps of: (1) preparing a segmentation data set; (2) selecting the data set and dividing it into a test set and a training set; (3) defining the DeepHLF network: the original image is fed into the network, which progressively extracts features while retaining the features of every level, and then groups the features; DeepHLF fuses the features of each group with a high-low-coupling parallel fusion module to generate three types of features; the three types of features are combined in a cross cycle to generate multiple feature maps, and finally each feature map yields a segmentation result map; (4) proposing a mathematical method for correcting the nucleus and background classes together with a weighted loss function. The method of the present invention not only segments clustered nuclei, but also, during segmentation, misses neither lightly stained nuclei nor nuclei whose gray level is close to that of the cytoplasm.

Description

Method for segmenting clustered nuclei in cervical smear images
Technical field
The invention belongs to the technical field of image processing, and more particularly relates to a method for segmenting cells in cervical smear images.
Background technique
Early detection of cervical carcinoma is of great significance for reducing its mortality; screening is now typically performed on cervical smears. Current smear screening relies mainly on manual film reading, which is very inefficient. With the maturation of cervical cytodiagnosis techniques and the development of automatic slide-preparation technology, the development of a matching computer-aided diagnosis system has become inevitable, and such a system is of great value for mass screening of cervical carcinoma.
Many cell image segmentation methods currently exist. By the information or theory they use, they can be roughly divided into three categories: algorithms using region information, algorithms using cell edge information, and methods derived from related theories. Region-based algorithms assign each pixel to a class according to a given decision criterion based on the information of each pixel in the image; the main methods include thresholding, region growing, clustering, and watershed segmentation. Edge-based algorithms exploit the fact that in a cell image the gray level of boundary pixels usually differs greatly from that of non-boundary pixels, i.e., gray-level discontinuities generally mark boundaries; boundary-extraction methods based on this property mostly use gradients. Theories and algorithms commonly applied to cell segmentation include wavelet analysis, genetic algorithms, mathematical morphology, and neural networks. In recent years, with the deep development of deep learning, networks such as U-Net, shallow neural networks, and FCN have been derived in the field of nucleus segmentation, but like conventional methods they only segment isolated nuclei well; their segmentation of clustered nuclei is poor.
Region-based algorithms are sensitive to noise and uneven gray levels and easily miss nuclei. Edge-based algorithms rely mainly on the gray-level information of the picture and perform poorly on images whose foreground and background look much alike. Methods derived from related theories, such as existing neural network architectures, have limited segmentation ability: they can only segment isolated nuclei and perform poorly on clustered nuclei and on nuclei whose gray level is close to that of the cytoplasm.
Summary of the invention
The main object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a method for segmenting clustered nuclei in cervical smear images, which correctly segments the clustered nuclei in such images.
In order to achieve the above object, the invention adopts the following technical scheme:
The method for segmenting clustered nuclei in a cervical smear image according to the present invention comprises the following steps:
(1) preprocessing the segmentation data set, which comprises clustered-cell images and the corresponding segmentation GroundTruth;
(2) selecting the data set and dividing it into a test set and a training set, ensuring that the test set and the training set follow the same distribution;
(3) constructing the DeepHLF network: the original image is fed into the network, which progressively extracts features, retains the features of every level, and then groups the features; the DeepHLF network is an end-to-end network composed of a progressive feature-retention module, a high-low-coupling parallel fusion module, and a cross-cycle module;
the progressive feature-retention module feeds the original image into the network, progressively extracts features while retaining the features of every level, and then groups the retained features;
the high-low-coupling parallel fusion module processes features with a high-coupling method and a low-coupling method separately, and thereby fuses the features within each group to generate three types of features: the high-level semantic features High, the intermediate comprehensive features Middle, and the low-level detail features Low;
the cross-cycle module combines the three types of features High, Middle, and Low in a cross cycle to generate multiple feature maps; each feature map is classified by a softmax function to produce a segmentation map, so multiple segmentation maps are generated;
(4) proposing a mathematical method for correcting the nucleus and background class imbalance and a weighted loss function, so as to optimize nucleus boundaries and the segmentation result.
As a preferred technical solution, step (1) is specifically:
(1-1) pictures are first collected; those containing clustered nuclei are picked out and sorted, then cropped to a set size, and finally augmented;
(1-2) nucleus regions are cut out of the pictures in Photoshop under the guidance of a pathologist; nucleus regions are black and the remaining background is white.
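The cropping and augmentation of step (1-1) can be sketched as follows; the image size, patch size, and flip/rotation augmentations here are illustrative assumptions, since the invention only specifies cropping to a set size followed by data augmentation:

```python
import numpy as np

def crop_patches(img, size=256):
    """Tile an image (H, W, C) into non-overlapping size x size patches."""
    h, w = img.shape[:2]
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def augment(patch):
    """Simple augmentation: original, vertical flip, horizontal flip, 90-degree rotation."""
    return [patch, patch[::-1], patch[:, ::-1], np.rot90(patch)]

img = np.zeros((512, 768, 3))                          # toy stand-in for a smear picture
patches = crop_patches(img)                            # 2 x 3 = 6 patches of 256 x 256
augmented = [v for p in patches for v in augment(p)]   # 4 variants per patch
```

Each source picture thus yields several fixed-size training samples before augmentation multiplies them further.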
As a preferred technical solution, step (2) is specifically:
(2-1) the data set is divided into multiple classes by clustering;
(2-2) for the clustered pictures, thirty percent of each class is selected as the test set and the rest serves as the training set.
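The cluster-stratified split of steps (2-1) and (2-2) can be sketched as follows, assuming the clustering has already assigned each picture a class label (the labels and the thirty-percent ratio below are illustrative):

```python
import numpy as np

def stratified_split(labels, test_frac=0.3, seed=0):
    """Pick test_frac of each cluster as the test set; the rest is the training set."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    test_idx = []
    for c in np.unique(labels):
        members = rng.permutation(np.flatnonzero(labels == c))
        n_test = max(1, int(round(test_frac * len(members))))
        test_idx.extend(members[:n_test].tolist())
    test_idx = sorted(test_idx)
    train_idx = sorted(set(range(len(labels))) - set(test_idx))
    return train_idx, test_idx

# toy example: 10 pictures in 2 clusters
labels = [0] * 6 + [1] * 4
train, test = stratified_split(labels)
```

Because every cluster contributes proportionally to the test set, the test and training sets follow roughly the same distribution, which is what step (2) requires.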
As a preferred technical solution, in step (3) the progressive feature-retention module is composed of five residual blocks, and its implementation comprises the following steps:
the progressive feature-retention module of the DeepHLF network progressively extracts image features and retains the features of five levels as the network deepens; the first- and second-layer features form the low-level detail features, the third- and fourth-layer features form the intermediate comprehensive features, and the fifth residual block of the module, that is, the deepest block, forms the high-level semantic features;
the features of the five levels are divided into three groups: the high-level semantic feature group High, the intermediate comprehensive feature group Middle, and the low-level detail feature group Low.
As a preferred technical solution, in step (3) the progressive feature-retention module comprises the following steps: the first feature-extraction block of the module is expressed as:
Layer_1 = F_1(x)
where x is the input picture, F_1 is the first convolutional block function, and Layer_1 is the first retained shallow feature;
the second to fifth feature-extraction blocks of the module are expressed as:
Layer_i = F_i(Layer_{i-1})
where Layer_{i-1} is the feature retained after the previous convolutional block, F_i is the i-th convolutional block function, and Layer_i is the i-th retained feature;
the features are then grouped:
High group: Layer_5; Middle group: Layer_3, Layer_4; Low group: Layer_1, Layer_2.
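A minimal numpy sketch of progressive feature retention and grouping follows; the pooling-and-channel-doubling block is only an illustrative stand-in for the residual blocks F_i, whose internal structure is not fixed at this point:

```python
import numpy as np

def block(x):
    """Stand-in for a residual block F_i: 2x2 average pooling, channels doubled."""
    c, h, w = x.shape
    pooled = x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    return np.concatenate([pooled, pooled], axis=0)

x = np.random.rand(3, 64, 64)   # input picture in (C, H, W) layout
layers = []
feat = x
for i in range(5):              # compute and retain Layer_1 .. Layer_5
    feat = block(feat)
    layers.append(feat)

low = layers[0:2]     # Layer_1, Layer_2 -> low-level detail group
middle = layers[2:4]  # Layer_3, Layer_4 -> intermediate comprehensive group
high = layers[4:5]    # Layer_5         -> high-level semantic group
```

The key point is that every intermediate feature is kept, not just the final one, so the later fusion modules can draw on all five levels.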
As a preferred technical solution, in step (3) the high-low-coupling parallel fusion module specifically comprises the following processing steps:
high-coupling feature processing: let H_j be the high-coupling result of a group and W_i the weight of Layer_i within the group; for j = 1, 2, 3, H_j is the high-coupling result forming Low, Middle, and High respectively; the features in a group are added according to their weights and then passed through a convolutional block to form the high-coupling feature, specifically:
the low-level detail group forms the high-coupling result: H_1 = f_conv1(W_1 × Layer_1 + W_2 × Layer_2);
the intermediate comprehensive group forms the high-coupling result: H_2 = f_conv2(W_3 × Layer_3 + W_4 × Layer_4);
the high-level semantic group forms the high-coupling result: H_3 = f_conv3(W_5 × Layer_5);
low-coupling feature processing: let L_j be the low-coupling result of a group; for j = 1, 2, 3, L_j is the low-coupling result forming Low, Middle, and High respectively, and the Cat operator concatenates feature maps along the channel dimension, specifically:
the low-level detail group forms the low-coupling result: L_1 = Cat(Layer_1, Layer_2);
the intermediate comprehensive group forms the low-coupling result: L_2 = Cat(Layer_3, Layer_4);
the high-level semantic group forms the low-coupling result: L_3 = Layer_5;
high-low-coupling feature fusion: the sub-features of each group, processed above by the high-coupling and low-coupling methods respectively, are fused in the final step; for j = 1, 2, 3, Fusion_j corresponds to Low, Middle, and High, Wh_j is the weight of the high-coupling feature in group j, and Wl_j is the weight of the low-coupling feature in group j, specifically:
low-level detail features (j = 1): Fusion_j = Wh_j × H_j + Wl_j × L_j;
intermediate comprehensive features (j = 2): Fusion_j = Wh_j × H_j + Wl_j × L_j;
high-level semantic features (j = 3): Fusion_j = Wh_j × H_j + Wl_j × L_j.
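The fusion of one group can be sketched as follows; it assumes the two layers of the group have been brought to a common shape beforehand, and stands in for the convolutional blocks f_conv with a 1x1 channel-mixing projection (both are assumptions, not specified by the invention at this level of detail):

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
layer1 = rng.random((C, H, W))  # assumed already resized to a common shape
layer2 = rng.random((C, H, W))

w1, w2 = 0.6, 0.4   # learned weights W_1, W_2; illustrative values
wh, wl = 0.7, 0.3   # learned weights Wh_1, Wl_1; illustrative values

def conv1x1(x, out_c, key):
    """Stand-in for a conv block: 1x1 convolution as a channel-mixing matmul."""
    k = np.random.default_rng(key).random((out_c, x.shape[0]))
    return np.tensordot(k, x, axes=1)

# high coupling: weighted sum of the layers, then a conv block f_conv1
h1 = conv1x1(w1 * layer1 + w2 * layer2, C, key=1)

# low coupling: channel concatenation Cat, projected back to C channels so the
# two branches can be added (the projection is an assumption of this sketch)
l1 = conv1x1(np.concatenate([layer1, layer2], axis=0), C, key=2)

# final fusion: Fusion_1 = Wh_1 * H_1 + Wl_1 * L_1
fusion1 = wh * h1 + wl * l1
```

The weighted sum mixes the layers early (high coupling), while concatenation keeps them separate until the projection (low coupling); the final weighted addition combines both views.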
As a preferred technical solution, in step (3) the cross-cycle module is specifically as follows:
first, the high-level semantic feature High is convolved to give the first feature map, from which the first segmentation map can be produced; the first feature map is Predict_0, and F_cov0 is a convolutional block, expressed as follows:
Predict_0 = F_cov0(High);
then Predict_0 is concatenated with the value of Middle and convolved, and the feature map of the previous step, Predict_0, is added, so that the first feature map fuses intermediate comprehensive features carrying more detail than the high-level semantic features; the feature map regenerated after fusion, the second feature map, is Predict_1, expressed as follows:
Predict_1 = F_cov1(Cat(Predict_0, Middle)) + Predict_0;
next, Predict_1 is concatenated with the value of Low and convolved, and the feature map of the previous step, Predict_1, is added, learning the low-level details of the low-level detail feature layer; the Low feature is combined only once in the whole cross-cycle process, while Middle and High need to be combined repeatedly in the cross cycle; the third feature map, from which a segmentation result can be predicted directly, is Predict_2, expressed as follows:
Predict_2 = F_cov2(Cat(Predict_1, Low)) + Predict_1;
subsequently, the generated feature map keeps cycling through fusion with the Middle or High features, repeating the iteration formula Predict_m = F_covm(Cat(Predict_{m-1}, X)) + Predict_{m-1}, where X is Middle or High, until training reaches the best effect; the value of m starts from 3 because the feature map generated in the previous step is Predict_2.
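The cross-cycle recurrence can be sketched as follows; the convolutional blocks F_covm are again stood in for by 1x1 channel projections, the spatial sizes are assumed to match, and the strict alternation between Middle and High after m = 3 is one illustrative scheduling choice:

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, NCLS = 4, 8, 8, 2        # NCLS = 2 classes: nucleus and background
high = rng.random((C, H, W))      # fused High, Middle, Low features
middle = rng.random((C, H, W))
low = rng.random((C, H, W))

def f_cov(x, out_c, key):
    """Stand-in for a convolutional block F_covm: 1x1 channel mixing."""
    k = np.random.default_rng(key).random((out_c, x.shape[0])) - 0.5
    return np.tensordot(k, x, axes=1)

cat = lambda a, b: np.concatenate([a, b], axis=0)

predicts = [f_cov(high, NCLS, key=0)]                                      # Predict_0
predicts.append(f_cov(cat(predicts[-1], middle), NCLS, 1) + predicts[-1])  # Predict_1
predicts.append(f_cov(cat(predicts[-1], low), NCLS, 2) + predicts[-1])     # Predict_2

# from m = 3 on, keep cycling between Middle and High; Low is used only once
for m in range(3, 7):
    other = middle if m % 2 else high
    predicts.append(f_cov(cat(predicts[-1], other), NCLS, m) + predicts[-1])
```

Each step is residual: the previous feature map is concatenated with one feature group, convolved, and added back, so every Predict_m can serve as a prediction head.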
As a preferred technical solution, in step (3) each feature map Predict_i generated by a cross combination is passed through the classification function softmax to generate a two-dimensional segmentation result Segmentation_i:
Segmentation_i = softmax(Predict_i)
During training, the loss is the sum of the cross entropies between each of the multiple segmentation results and the GroundTruth; during testing, the loss is the cross entropy between the last segmentation result and the GroundTruth, and the last segmentation result serves as the result map of the test.
As a preferred technical solution, in step (4) the mathematical method for correcting the nucleus and background classes is specifically:
for each feature map Predict_i of step (3), the values of its first channel are multiplied by a weight W_p, which is learned during training, giving the corrected feature map Predict_i:
Predict_i[:, 0, :, :] = Predict_i[:, 0, :, :] × W_p
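The class-correction step can be illustrated as follows, assuming channel 0 of the feature map holds the background score and the learned weight W_p is below 1 (both assumptions of this sketch); scaling the background channel down pushes borderline pixels toward the nucleus class, which counters the network's tendency to label foreground as background:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# toy logits, shape (2 classes, 1, 2 pixels): background wins at both pixels
logits = np.array([[[2.0, 2.0]],
                   [[1.5, 1.0]]])

wp = 0.5                           # learned weight W_p; illustrative value
corrected = logits.copy()
corrected[0] = corrected[0] * wp   # scale the first (background) channel by W_p

before = softmax(logits).argmax(axis=0)     # both pixels -> background (0)
after = softmax(corrected).argmax(axis=0)   # first pixel flips to nucleus (1)
```

Only pixels whose scores are close flip; confidently background pixels keep their label, so the correction sharpens the nucleus/background boundary rather than inverting it.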
The emphasis loss function is specifically:
cross entropy is computed between every segmentation result map and the GroundTruth; Loss_sum is the sum of all these cross entropies, and W_seg_i is the weight of the segmentation result map Segmentation_i against the GroundTruth; the formula of the emphasis loss function is as follows:
Loss_sum = Σ_i W_seg_i × CrossEntropy(Segmentation_i, GroundTruth)
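The softmax segmentation and the emphasis loss can be sketched together; the per-map weights below are illustrative stand-ins for the learned W_seg_i:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(seg, gt):
    """Mean pixel-wise cross entropy; gt holds class indices 0 (background) / 1 (nucleus)."""
    probs = np.take_along_axis(seg, gt[None], axis=0)[0]
    return -np.log(probs + 1e-12).mean()

rng = np.random.default_rng(0)
predicts = [rng.random((2, 8, 8)) for _ in range(3)]  # toy Predict_i maps
gt = rng.integers(0, 2, (8, 8))                       # toy GroundTruth labels

segs = [softmax(p) for p in predicts]                 # Segmentation_i = softmax(Predict_i)
w = [0.2, 0.3, 0.5]                                   # W_seg_i; illustrative weights

# training: weighted sum of the cross entropies of all segmentation maps
loss_sum = sum(wi * cross_entropy(s, gt) for wi, s in zip(w, segs))

# testing: only the last segmentation map is scored
test_loss = cross_entropy(segs[-1], gt)
```

Giving the later maps larger weights emphasizes the maps that have already absorbed more feature groups, while still supervising every intermediate prediction.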
Compared with the prior art, the present invention has the following advantages and beneficial effects:
(1) The present invention segments clustered nuclei in cervical cell images and accurately handles clustered-cell images with inconsistent staining, low contrast, and uneven illumination; as a clinical auxiliary diagnosis tool it can effectively assist clinicians in diagnosis.
(2) The DeepHLF network proposed by the present invention progressively extracts features, retains the features of every module, and then groups them; through the high-low-coupling parallel method (processing features with high coupling and low coupling separately), the features extracted by the base network are fused into three classes, namely high-level semantic features, intermediate comprehensive features, and low-level detail features. The network can serve as a general-purpose network for nucleus segmentation in medical cell images, balances speed and accuracy, and segments clustered-nucleus images with inconsistent staining, low contrast, and uneven illumination well.
(3) The present invention proposes a cross-cycle feature fusion method applied to the DeepHLF network: the high-level semantic, intermediate comprehensive, and low-level detail features are combined in a cross cycle to generate multiple feature maps, from which multiple segmentation results are produced; the last segmentation result serves as the final test result. The network not only segments clustered nuclei, but also, during segmentation, misses neither lightly stained nuclei nor nuclei whose gray level is close to that of the cytoplasm.
(4) The present invention proposes a correction method for class imbalance. In the nucleus segmentation problem the pixels of a picture fall into two classes, nucleus and background, and the two classes are extremely unbalanced, so the network tends to classify foreground pixels as background. To solve this problem, a mathematical method is proposed to correct the network output before the pixels are classified with softmax, separating nucleus and background with good effect.
The present invention can be applied to the following fields:
(1) hospital pathology departments, where nuclei are segmented for auxiliary diagnosis;
(2) laboratory research, for pathological studies of cervical nucleus region segmentation;
(3) other segmentation fields outside medicine.
Detailed description of the invention
Fig. 1 is the overall flowchart of the DeepHLF cross-cycle segmentation proposed by the present invention.
Fig. 2 is the operational flowchart of the high-low-coupling parallel method proposed by the present invention (processing features with high coupling and low coupling separately and then fusing them), which fuses features into high-level semantic, intermediate comprehensive, and low-level detail features.
Fig. 3(a) and Fig. 3(b) are the high-coupling synthesis process and the low-coupling synthesis process involved in Fig. 2, respectively.
Fig. 4 is the calculation flowchart of the emphasis loss function during training.
Fig. 5 compares clustered-nucleus segmentation results.
Specific embodiment
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment
As shown in Fig. 1, in the method for segmenting clustered nuclei in a cervical smear image according to the present invention, the first step progressively extracts image features with the five residual blocks of DeepHLF and saves the feature extracted by each block, five kinds of features in total. The second step divides the five kinds of features into three groups: the high-level semantic group, the intermediate comprehensive group, and the low-level detail group; the first two shallow layers of the five features form the low-level detail features, the third- and fourth-layer features form the intermediate comprehensive features, and the extraction result of the fifth residual block of DeepHLF, that is, the deepest block, forms the high-level semantic features. The third step fuses each of the three groups with the high-low-coupling parallel fusion module (processing features with high coupling and low coupling separately and then fusing them) to generate the final high-level semantic, intermediate comprehensive, and low-level detail features. The fourth step combines the three groups of features generated in the third step in a cross cycle to generate multiple feature maps, from which multiple segmentation results are predicted; the cross-entropy losses of the segmentation results are weighted and summed to give the final loss. The final feature map output by the network is optimized with the mathematical method of class correction.
The method of the present invention for segmenting clustered nuclei in cervical smear images specifically includes the following main technical points:
1. Segmentation data set preparation, including clustered-cell images and the corresponding segmentation GroundTruth:
(1) pictures are first collected, and those containing clustered nuclei are picked out and sorted, 104 pictures in total; they are then cropped to 256*256 and finally augmented;
(2) nucleus regions are cut out of the pictures in Photoshop; nucleus regions are black, and the rest is white background;
2. The data set is selected and divided into a test set and a training set, ensuring that the test set and the training set follow the same distribution:
(1) the data set is divided into multiple classes by clustering;
(2) for the clustered pictures, thirty percent of each class is selected as the test set and the rest serves as the training set.
3. DeepHLF composition and training method
As shown in Fig. 1, for the clustered-nucleus segmentation task the present invention designs the DeepHLF network (Deep network based on High coupling and Low coupling Fusion), which is an end-to-end network. The network is composed of the progressive feature-retention module, the high-low-coupling parallel fusion module (processing features with high coupling and low coupling separately and then fusing them), and the cross-cycle module. The network designs a new method of high-low feature fusion; compared with traditional networks it is faster and needs fewer parameters, and it segments clustered nuclei well.
DeepHLF network is sequentially completed four tasks:
(1) The progressive feature-retention module progressively extracts image features and then groups them
As shown in the left part of Fig. 2, the progressive feature-retention module is composed of five residual blocks. Compared with the Inception series of networks and earlier versions of ResNet, these residual blocks improve accuracy without increasing parameter complexity while also reducing the number of hyperparameters. The first step progressively extracts image features with the progressive feature-retention module of the DeepHLF network and retains the features of five levels as the network deepens; the features of the five levels are then divided into three groups, namely the high-level semantic group (denoted High), the intermediate comprehensive group (denoted Middle), and the low-level detail group (denoted Low). The first two shallow layers form the low-level detail features, the third- and fourth-layer features form the intermediate comprehensive features, and the fifth residual block of the module, that is, the deepest block, forms the high-level semantic features. The purpose of this grouping is to let the high-low-coupling parallel fusion module of the DeepHLF network (processing features with high coupling and low coupling separately and then fusing them) fuse the features within each group, the fusion results being the high-level semantic features (High), the intermediate comprehensive features (Middle), and the low-level detail features (Low). Three kinds of features are generated in this way so that the cross-cycle module of DeepHLF can cyclically fuse the features crosswise and generate multiple feature maps. The cross-cycle module can, according to the actual situation, selectively emphasize one or more of the three kinds of features, where emphasizing means repeatedly combining the important features. Such fine-grained feature classification lets the DeepHLF network extract higher-level semantic features while selectively keeping the bottom-level detail features, prevents overly random low-level details from disturbing the segmentation result, and helps improve the clustered-nucleus segmentation effect.
The first feature-extraction block of the progressive feature-retention module is (where x is the input picture, F_1 is the first convolutional block function, and Layer_1 is the first retained shallow feature):
Layer_1 = F_1(x)
The second to fifth feature-extraction blocks of the progressive feature-retention module are (where Layer_{i-1} is the feature retained after the previous convolutional block, F_i is the i-th convolutional block function, and Layer_i is the i-th retained feature):
Layer_i = F_i(Layer_{i-1})
Next the features are grouped, where High = high-level semantic features, Middle = intermediate comprehensive features, and Low = low-level detail features:
High group: Layer_5; Middle group: Layer_3, Layer_4; Low group: Layer_1, Layer_2. Layer_5 forms High alone because high-level semantics matter most to the segmentation effect, and fusing Layer_5 with the features of other layers would disturb its high-level semantic information; fusing the lower-level features with each other lets the detail features complement one another and improves the segmentation effect.
The DeepHLF network converts the input three-dimensional image into three classes of multi-channel features, and finally the three classes of features are combined in a cross cycle to generate multiple feature maps. Grouping and decoupling the features in this way allows the important feature groups to be combined many times and the less important ones few times, saving network training time while improving the segmentation effect.
(2) The high-low-coupling parallel fusion module (processing features with high coupling and low coupling separately and then fusing them) fuses the features within each group
Features were extracted and grouped in (1) above; this part fuses the grouped features. As shown in the right part of Fig. 2, the high-low-coupling parallel fusion method is adopted; in plain terms, the high-coupling-processed features are fused with the low-coupling-processed features. The method has the following three sub-steps:
1) High-coupling feature processing, as shown in Fig. 3(a), where H_j is the high-coupling result of a group and W_i is the weight of Layer_i within the group. For j = 1, 2, 3, H_j is the high-coupling result forming Low, Middle, and High respectively. The features in a group are added according to their weights and then passed through a convolutional block to form the high-coupling feature.
The low-level detail group forms the high-coupling result: H_1 = f_conv1(W_1 × Layer_1 + W_2 × Layer_2)
The intermediate comprehensive group forms the high-coupling result: H_2 = f_conv2(W_3 × Layer_3 + W_4 × Layer_4)
The high-level semantic group forms the high-coupling result: H_3 = f_conv3(W_5 × Layer_5)
2) Low-coupling feature processing, as shown in Fig. 3(b), where L_j is the low-coupling result of a group; for j = 1, 2, 3, L_j is the low-coupling result forming Low, Middle, and High respectively, and the Cat operator concatenates feature maps along the channel dimension:
The low-level detail group forms the low-coupling result: L_1 = Cat(Layer_1, Layer_2)
The intermediate comprehensive group forms the low-coupling result: L_2 = Cat(Layer_3, Layer_4)
The high-level semantic group forms the low-coupling result: L_3 = Layer_5
3) High/low-coupling feature fusion, as shown on the right of Fig. 2. The sub-features of each group have been processed above by both the high-coupling and the low-coupling method; the final step fuses the two. In Fusion_j, j = 1, 2, 3 denotes Low, Middle, High respectively; Wh_j is the weight of the high-coupling result in group j and Wl_j is the weight of the low-coupling result in group j.
Low-level detail features (j = 1): Fusion_j = Wh_j × H_j + Wl_j × L_j
Mid-level comprehensive features (j = 2): Fusion_j = Wh_j × H_j + Wl_j × L_j
High-level semantic features (j = 3): Fusion_j = Wh_j × H_j + Wl_j × L_j
Processing is done this way because high coupling can extract global information while low coupling can retain detailed information; combining the two at the end takes both into account.
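The fusion arithmetic above can be sketched in a few lines. This is a minimal NumPy illustration, not the patent's implementation: the convolution block `f_conv` and the channel-matching `project` are hypothetical placeholders, and all weights are fixed here instead of learned.

```python
import numpy as np

def f_conv(x):
    # stand-in for the patent's learned convolution block
    return x

def high_coupling(layers, weights):
    # weighted sum of same-shape feature maps, then a conv block
    return f_conv(sum(w * l for w, l in zip(weights, layers)))

def low_coupling(layers):
    # Cat: concatenate along the second (channel) dimension
    return np.concatenate(layers, axis=1)

def project(x, channels):
    # hypothetical 1x1-conv stand-in that matches channel counts
    # (simple tiling, for illustration only)
    reps = -(-channels // x.shape[1])
    return np.tile(x, (1, reps, 1, 1))[:, :channels]

# two low-level feature maps Layer_1, Layer_2 of shape (N, C, H, W)
layer1 = np.ones((1, 4, 8, 8))
layer2 = np.ones((1, 4, 8, 8))

H1 = high_coupling([layer1, layer2], [0.5, 0.5])   # (1, 4, 8, 8)
L1 = low_coupling([layer1, layer2])                # (1, 8, 8, 8)

# Fusion_1 = Wh_1 * H_1 + Wl_1 * L_1, with shapes matched first
fusion1 = 0.5 * project(H1, L1.shape[1]) + 0.5 * L1
```

In the real network the convolution blocks bring H_j and L_j to compatible shapes, so the explicit projection used here would not be needed.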
(3) The features fused in (2) are combined by cross-cycling to generate multiple feature maps; each feature map passes through a softmax function to generate a segmentation map, so multiple segmentation maps are generated
The high/low-coupling parallel fusion module in (2) above produces the low-level detail features (Low), mid-level comprehensive features (Middle), and high-level semantic features (High). This step performs cross-cycle segmentation on these three feature groups, as shown in Fig. 1. First, High (the high-level semantic features) is convolved to form the first feature map, because the features extracted by the deep layers of the network carry high-level semantics and rich information that benefits segmentation. The first feature map is Predict_0, and F_cov0 is a convolution block (see the formula below):
Predict_0 = F_cov0(High)
Then Predict_0 is concatenated (Cat) with Middle and convolved, and the feature map Predict_0 of the previous step is added, so that the first feature map is fused with the mid-level comprehensive features, which carry more detail than the high-level semantic features, to generate a new feature map. The second feature map is Predict_1 (see the formula below):
Predict_1 = F_cov1(Cat(Predict_0, Middle)) + Predict_0
Next, Predict_1 is concatenated (Cat) with Low and convolved, and the feature map Predict_1 of the previous step is added, in order to learn the low-level detail features in the low-level detail feature layer. The Low features are combined only once in the entire cross-cycle combination process, because the information they contribute to segmentation is limited, and experiments show that combining them more than once worsens the result. Middle (mid-level comprehensive features) and High (high-level semantic features) are more important, so they need to be combined in the cross-cycle multiple times. The third feature map is Predict_2 (see the formula below):
Predict_2 = F_cov2(Cat(Predict_1, Low)) + Predict_1
Next, the feature map generated in the previous step cycles between the two processes of fusing Middle (mid-level comprehensive features) and fusing High (high-level semantic features), hence the name "cross-cycle" for this step. The following formula (reconstructed from the pattern of the preceding steps, since the original rendering is lost) is repeated iteratively until training reaches the best result; m starts from 3 because the feature map generated in the previous step is Predict_2:

Predict_m = F_covm(Cat(Predict_{m-1}, Middle or High)) + Predict_{m-1}
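The iteration can be sketched as a short NumPy toy. Everything below is hedged: `f_conv` is a hypothetical stand-in for the learned convolution blocks, all three feature groups are given the same shape for simplicity, and the Middle/High alternation order for m ≥ 3 is one plausible reading, since the text does not fix it explicitly.

```python
import numpy as np

def f_conv(x, out_channels):
    # hypothetical conv-block stand-in: channel-average, then
    # broadcast back to out_channels (a real block is learned)
    pooled = x.mean(axis=1, keepdims=True)
    return np.repeat(pooled, out_channels, axis=1)

def cross_cycle(high, middle, low, steps=5):
    c = high.shape[1]
    predicts = [f_conv(high, c)]               # Predict_0 = F_cov0(High)
    # Predict_1 fuses Middle, Predict_2 fuses Low (Low only once),
    # then the cycle alternates Middle/High for m >= 3 (assumed order)
    sources = [middle, low]
    sources += [middle if m % 2 else high for m in range(3, steps + 1)]
    for src in sources:
        prev = predicts[-1]
        cat = np.concatenate([prev, src], axis=1)   # Cat(Predict_{m-1}, src)
        predicts.append(f_conv(cat, c) + prev)      # residual add of prev map
    return predicts

high = middle = low = np.ones((1, 4, 8, 8))
maps = cross_cycle(high, middle, low, steps=5)      # Predict_0 .. Predict_5
```

Each element of `maps` corresponds to one Predict_m, and every step reuses the previous map through the residual addition, matching the formulas above.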
This is done because extensive experiments and theory show that higher-level information carries a larger weight in the segmentation result and can better optimize it.
(4) The feature map generated by each cross combination outputs a segmentation result map through a classification function
As in the formula below, the feature map Predict_i generated by each cross combination passes through the Softmax classification function to generate a two-dimensional segmentation result Segmentation_i:
Segmentation_i = softmax(Predict_i)
As in Fig. 1, during training the loss is generated from multiple segmentation results; during testing, the loss is generated from the last segmentation (the Segmentation_i with the largest i), and the last segmentation result serves as the test result for the original image.
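The classification step can be illustrated with a small NumPy softmax. This is a sketch under the assumption of two classes (background and nucleus) along the second dimension; the example logits are invented for illustration.

```python
import numpy as np

def softmax_segmentation(predict):
    # predict: (N, 2, H, W) logits; softmax over the class dimension,
    # then argmax gives the 2-D segmentation mask for each image
    e = np.exp(predict - predict.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1)   # 0 = background, 1 = nucleus

logits = np.zeros((1, 2, 4, 4))
logits[0, 1, 1:3, 1:3] = 5.0      # strong nucleus evidence in the centre
mask = softmax_segmentation(logits)
```

The subtraction of the per-pixel maximum before exponentiation is the usual numerically stable form of softmax and does not change the result.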
4. A mathematical method for nucleus/background class correction and a weighted loss function are proposed
Experiments show that in the clustered-cell segmentation problem in cervical smear images, the pixel counts of the nucleus and background classes differ greatly, so during segmentation the proportion of nucleus pixels misclassified as background is much larger than the proportion of background pixels misclassified as nucleus. Summarizing the experiments, this patent proposes a mathematical method named class correction to correct this nucleus/background imbalance, as shown in Fig. 4. For each feature map Predict_i of the previous step, the first value of its second dimension is multiplied by a weight W_p, which is learned during training. The resulting feature map value Predict_i is:
Predict_i = Predict_i[:, 0, :, :] × W_p
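A minimal sketch of this correction, reading the formula as scaling the first slice of the second (class) dimension, i.e. the background channel, by the learned scalar W_p. The value 0.5 below is only illustrative; in the patent W_p is learned.

```python
import numpy as np

def class_correction(predict, wp):
    # multiply the first channel of dim 1 (background logits)
    # by the learned weight Wp: Predict_i[:, 0, :, :] * Wp
    out = predict.copy()
    out[:, 0] = out[:, 0] * wp
    return out

pred = np.ones((1, 2, 4, 4))
corrected = class_correction(pred, 0.5)
```

Scaling down the background logits makes the softmax favor the nucleus class, which is the stated goal of counteracting the nucleus/background imbalance.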
Experimental results show that this method is generally applicable to the nucleus segmentation problem and fits many nucleus segmentation networks, including Unet, FCN, etc. Some clustered-cell segmentation results are shown in Fig. 5, where the left column of each row is the original image and the right column is the test segmentation result. The figure shows that the segmentation method of the present invention achieves a very good segmentation result.
A weighted loss function is proposed. Unlike previous loss functions, during training this patent does not take only one segmentation value to compute the cross entropy with the GroundTruth; instead, all segmentation result values are used. Loss_sum is the weighted sum of the cross entropies of all segmentation values with the GroundTruth, where w_i (whose original symbol is lost in this rendering) represents the weight of the total cross-entropy loss computed from Segmentation_i, as in Fig. 4. The formula is as follows:

Loss_sum = Σ_i w_i × CrossEntropy(Segmentation_i, GroundTruth)
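The weighted loss can be sketched as follows. This is a NumPy illustration assuming two classes, softmax probabilities as input, and per-map weights w_i; the patent treats these weights as part of training, whereas they are fixed constants here.

```python
import numpy as np

def cross_entropy(probs, gt):
    # mean pixel-wise cross entropy; probs: (N, 2, H, W) softmax
    # outputs, gt: (N, H, W) integer labels in {0, 1}
    picked = np.take_along_axis(probs, gt[:, None], axis=1)[:, 0]
    return float(-np.log(np.clip(picked, 1e-12, None)).mean())

def loss_sum(segmentations, gt, weights):
    # Loss_sum = sum_i w_i * CE(Segmentation_i, GroundTruth)
    return sum(w * cross_entropy(s, gt) for w, s in zip(weights, segmentations))

gt = np.zeros((1, 2, 2), dtype=int)            # all-background ground truth
perfect = np.zeros((1, 2, 2, 2)); perfect[:, 0] = 1.0
uniform = np.full((1, 2, 2, 2), 0.5)
total = loss_sum([perfect, uniform], gt, [0.5, 0.5])
```

A perfect prediction contributes zero loss and a uniform one contributes log 2 per pixel, so the weighted total here is 0.5·log 2; at test time only the last segmentation map would enter the loss, as stated above.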
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited by the above embodiment; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principles of the present invention shall be an equivalent substitution and is included within the scope of protection of the present invention.

Claims (9)

1. A method for segmenting clustered cell nuclei in a cervical smear image, characterized in that it comprises the following steps:
(1) preprocessing a segmentation dataset, the segmentation dataset comprising clustered-cell images and the segmented GroundTruth corresponding to the clustered-cell images;
(2) selecting the dataset and splitting it into a test set and a training set, ensuring that the distributions of the test set and the training set are consistent;
(3) composing the DeepHLF network, inputting the original image into the network, progressively extracting features while retaining the features of every level, and then grouping the features; the DeepHLF network is an end-to-end network composed of a progressive feature-retention module, a high/low-coupling parallel fusion module, and a cross-cycle module;
the progressive feature-retention module inputs the original image into the network, progressively extracts features while retaining the features of every level, and then groups the retained features;
the high/low-coupling parallel fusion module processes the features with a high-coupling method and a low-coupling method separately, thereby fusing the intra-group features of each group to generate three categories of features: high-level semantic features High, mid-level comprehensive features Middle, and low-level detail features Low;
the cross-cycle module cross-cyclically combines the High, Middle, and Low features to generate multiple feature maps; each feature map is classified with a softmax function to generate a segmentation map, so multiple feature maps generate multiple segmentation maps;
(4) applying the proposed mathematical method of nucleus/background class correction to correct the nucleus/background class imbalance, together with a weighted loss function, thereby refining the nucleus boundaries and optimizing the segmentation result.

2. The method for segmenting clustered cell nuclei in a cervical smear image according to claim 1, characterized in that step (1) is specifically:
(1-1) first collecting images, then selecting and organizing the images containing clustered nuclei, cutting them into images of a set size, and finally performing data augmentation;
(1-2) under the guidance of a pathologist, masking the images with PS to outline the nucleus regions, the nucleus regions being black and the rest being white background.

3. The method according to claim 1, characterized in that step (2) is specifically:
(2-1) dividing the dataset into multiple classes by clustering;
(2-2) for the clustered images, selecting 30% of each class as the test set and the rest as the training set.

4. The method according to claim 1, characterized in that in step (3), the progressive feature-retention module consists of five residual blocks, and its implementation comprises the following steps:
using the progressive feature-retention module of the DeepHLF network to progressively extract image features and retain the five levels of features produced as the network deepens; the first-level and second-level features form part of the low-level detail features, the third-level and fourth-level features form part of the mid-level comprehensive features, and the fifth residual block of the module, i.e. the deepest block, forms part of the high-level semantic features;
that is, the five levels of features are divided into three groups: the high-level semantic features High, the mid-level comprehensive features Middle, and the low-level detail features Low.

5. The method according to claim 1, characterized in that in step (3), the progressive feature-retention module is implemented as follows: the first feature-extraction block of the module is expressed as
Layer_1 = F_1(x)
where x is the input image, F_1(x) is the first convolutional network block function, and Layer_1 is the first retained shallow feature;
the second to fifth feature-extraction blocks of the module are expressed as
Layer_i = F_i(Layer_{i-1})
where Layer_{i-1} is the feature retained after the previous convolution block, F_i(x) is the i-th convolutional network block function, and Layer_i is the i-th retained feature;
the features are then grouped:
High: Layer_5; Middle: Layer_4, Layer_3; Low: Layer_1, Layer_2.

6. The method according to claim 1, characterized in that in step (3), the high/low-coupling parallel fusion module specifically comprises the following processing steps:
high-coupling feature processing: let H_j be the result of high-coupling processing of a certain feature group and W_i the weight of Layer_i within the feature group; j = 1, 2, 3 in H_j denotes the high-coupling result formed from the Low, Middle, and High features respectively; within a group, the features are summed according to their weights and then passed through a convolution block to form the high-coupling feature, specifically:
low-level detail features form the high-coupling result: H_1 = f_conv1(W_1 × Layer_1 + W_2 × Layer_2);
mid-level comprehensive features form the high-coupling result: H_2 = f_conv2(W_3 × Layer_3 + W_4 × Layer_4);
high-level semantic features form the high-coupling result: H_3 = f_conv3(W_5 × Layer_5);
low-coupling feature processing: let L_j be the result of low-coupling processing of a certain feature group; j = 1, 2, 3 in L_j denotes the low-coupling result formed from the Low, Middle, and High features respectively, and the Cat operator concatenates along the second dimension of the tensor, specifically:
low-level detail features form the low-coupling result: L_1 = Cat(Layer_1, Layer_2);
mid-level comprehensive features form the low-coupling result: L_2 = Cat(Layer_3, Layer_4);
high-level semantic features form the low-coupling result: L_3 = Layer_5;
high/low-coupling feature fusion: the sub-features of each group have been processed above with the high-coupling and the low-coupling method respectively, and the final step fuses the two; j = 1, 2, 3 in Fusion_j denotes Low, Middle, High respectively, Wh_j is the weight of the high-coupling result in group j, and Wl_j is the weight of the low-coupling result in group j, specifically:
low-level detail features (j = 1): Fusion_j = Wh_j × H_j + Wl_j × L_j;
mid-level comprehensive features (j = 2): Fusion_j = Wh_j × H_j + Wl_j × L_j;
high-level semantic features (j = 3): Fusion_j = Wh_j × H_j + Wl_j × L_j.

7. The method according to claim 1, characterized in that in step (3), the cross-cycle module specifically operates as follows:
first, the high-level semantic features High are convolved to form the first feature map; the first feature map from which a segmentation map can be generated is Predict_0, and F_cov0 is a convolution block, expressed as:
Predict_0 = F_cov0(High);
then Predict_0 is concatenated with Middle and convolved, and the previous feature map Predict_0 is added, so that the first feature map is fused with the mid-level comprehensive features, which carry more detail than the high-level semantic features; the second feature map is Predict_1, expressed as:
Predict_1 = F_cov1(Cat(Predict_0, Middle)) + Predict_0;
next, Predict_1 is concatenated with Low and convolved, and the previous feature map Predict_1 is added, learning the low-level detail features in the low-level detail feature layer; the Low features are combined only once in the entire cross-cycle combination process, while Middle and High need to be cross-cyclically combined multiple times; the third feature map, from which the segmentation result can be predicted directly, is Predict_2, expressed as:
Predict_2 = F_cov2(Cat(Predict_1, Low)) + Predict_1;
the generated feature map then cycles between the two processes of fusing Middle and fusing High, iteratively repeating the formula until training reaches the best result; the value of m starts from 3, because the feature map generated in the previous step is Predict_2.

8. The method according to claim 1, characterized in that in step (3), the feature map Predict_i generated by each cross combination passes through the Softmax classification function to generate a two-dimensional segmentation result Segmentation_i:
Segmentation_i = softmax(Predict_i)
during training, the loss consists of the cross entropies of multiple segmentation results with the GroundTruth respectively; during testing, the loss consists of the cross entropy of the last segmentation result with the GroundTruth, and the last segmentation result serves as the test result map.

9. The method according to claim 1, characterized in that in step (4), the mathematical method for nucleus/background class correction is specifically: for each feature value Predict_i of step (3), the first value of its second dimension is multiplied by a weight W_p, which is learned during training, giving the feature map value Predict_i:
Predict_i = Predict_i[:, 0, :, :] × W_p;
the emphasis loss function is specifically: the cross entropies of all segmentation result maps with the GroundTruth are taken, and Loss_sum is the weighted sum of the cross entropies of all segmentation result maps with the GroundTruth, where each weight represents the contribution of the segmentation result map Segmentation_i.
CN201810769112.1A 2018-07-13 2018-07-13 A method for segmentation of cluster nuclei in cervical smear images Expired - Fee Related CN109102498B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810769112.1A CN109102498B (en) 2018-07-13 2018-07-13 A method for segmentation of cluster nuclei in cervical smear images


Publications (2)

Publication Number Publication Date
CN109102498A true CN109102498A (en) 2018-12-28
CN109102498B CN109102498B (en) 2022-04-22

Family

ID=64846376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810769112.1A Expired - Fee Related CN109102498B (en) 2018-07-13 2018-07-13 A method for segmentation of cluster nuclei in cervical smear images

Country Status (1)

Country Link
CN (1) CN109102498B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109523018A (en) * 2019-01-08 2019-03-26 重庆邮电大学 A kind of picture classification method based on depth migration study
CN110675411A (en) * 2019-09-26 2020-01-10 重庆大学 Cervical squamous intraepithelial lesion recognition algorithm based on deep learning
CN111738310A (en) * 2020-06-04 2020-10-02 科大讯飞股份有限公司 Material classification method and device, electronic equipment and storage medium
CN112070722A (en) * 2020-08-14 2020-12-11 厦门骁科码生物科技有限公司 A kind of fluorescence in situ hybridization cell nucleus segmentation method and system
CN112085067A (en) * 2020-08-17 2020-12-15 浙江大学 Method for high-throughput screening of DNA damage response inhibitor
CN112365471A (en) * 2020-11-12 2021-02-12 哈尔滨理工大学 Cervical cancer cell intelligent detection method based on deep learning
CN113012167A (en) * 2021-03-24 2021-06-22 哈尔滨理工大学 Combined segmentation method for cell nucleus and cytoplasm

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682305A (en) * 2012-04-25 2012-09-19 深圳市迈科龙医疗设备有限公司 Automatic screening system and automatic screening method using thin-prep cytology test
CN102831607A (en) * 2012-08-08 2012-12-19 深圳市迈科龙生物技术有限公司 Method for segmenting cervix uteri liquid base cell image
KR101396308B1 (en) * 2013-05-14 2014-05-27 인하대학교 산학협력단 A method for extracting morphological features for grading pancreatic ductal adenocarcinomas
CN104732229A (en) * 2015-03-16 2015-06-24 华南理工大学 Segmentation method for overlapping cells in cervical smear image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WO YAN, HAN GUOQIANG, ZHANG JIANWEI: "Image segmentation method based on adaptive preprocessing", Journal of Electronics & Information Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109523018A (en) * 2019-01-08 2019-03-26 重庆邮电大学 A kind of picture classification method based on depth migration study
CN109523018B (en) * 2019-01-08 2022-10-18 重庆邮电大学 Image classification method based on deep migration learning
CN110675411A (en) * 2019-09-26 2020-01-10 重庆大学 Cervical squamous intraepithelial lesion recognition algorithm based on deep learning
CN110675411B (en) * 2019-09-26 2023-05-16 重庆大学 Cervical squamous intraepithelial lesion recognition algorithm based on deep learning
CN111738310A (en) * 2020-06-04 2020-10-02 科大讯飞股份有限公司 Material classification method and device, electronic equipment and storage medium
CN111738310B (en) * 2020-06-04 2023-12-01 科大讯飞股份有限公司 Material classification method, device, electronic equipment and storage medium
CN112070722A (en) * 2020-08-14 2020-12-11 厦门骁科码生物科技有限公司 A kind of fluorescence in situ hybridization cell nucleus segmentation method and system
CN112085067A (en) * 2020-08-17 2020-12-15 浙江大学 Method for high-throughput screening of DNA damage response inhibitor
CN112365471A (en) * 2020-11-12 2021-02-12 哈尔滨理工大学 Cervical cancer cell intelligent detection method based on deep learning
CN112365471B (en) * 2020-11-12 2022-06-24 哈尔滨理工大学 Intelligent detection method of cervical cancer cells based on deep learning
CN113012167A (en) * 2021-03-24 2021-06-22 哈尔滨理工大学 Combined segmentation method for cell nucleus and cytoplasm

Also Published As

Publication number Publication date
CN109102498B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
CN109102498A (en) A kind of method of cluster type nucleus segmentation in cervical smear image
CN112288706B (en) An automated karyotype analysis and abnormality detection method
CN107273490B (en) Combined wrong question recommendation method based on knowledge graph
CN111368896B (en) Hyperspectral Remote Sensing Image Classification Method Based on Dense Residual 3D Convolutional Neural Network
CN112700434B (en) Medical image classification method and classification device thereof
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
CN109166100A (en) Multi-task learning method for cell count based on convolutional neural networks
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN106980858A (en) The language text detection of a kind of language text detection with alignment system and the application system and localization method
CN107610123A (en) A kind of image aesthetic quality evaluation method based on depth convolutional neural networks
CN104573000B (en) Automatic call answering arrangement and method based on sequence study
CN107154043A (en) A kind of Lung neoplasm false positive sample suppressing method based on 3DCNN
CN109389129A (en) A kind of image processing method, electronic equipment and storage medium
CN108629369A (en) A kind of Visible Urine Sediment Components automatic identifying method based on Trimmed SSD
Chen et al. Cell nuclei detection and segmentation for computational pathology using deep learning
CN109344888A (en) A kind of image recognition method, device and equipment based on convolutional neural network
CN111783688B (en) A classification method of remote sensing image scene based on convolutional neural network
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN107909102A (en) A kind of sorting technique of histopathology image
CN113781385A (en) A joint attention map convolution method for automatic classification of medical images of the brain
CN110245249B (en) An Intelligent Retrieval Method for 3D CAD Models Based on Two-layer Deep Residual Network
CN116563640A (en) Mammary gland pathology image classification method based on multi-attention mechanism and migration learning
CN116862836A (en) System and computer equipment for detecting extensive organ lymph node metastasis cancer
Ye et al. Multitask classification of breast cancer pathological images using SE-DenseNet
CN110136113B (en) Vagina pathology image classification method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20220422)