A method for segmenting clustered nuclei in cervical smear images
Technical field
The invention belongs to the technical field of image processing, and more particularly relates to a method for cell segmentation in cervical smear images.
Background technique
Early detection of cervical carcinoma is of great significance for reducing its mortality, and screening is now typically carried out with cervical smears. Current smear screening relies mainly on manual reading of slides, which is very inefficient. With the maturation of cervical cytology diagnostic techniques and the development of automatic slide-preparation technology in China, the development of a matching computer-aided diagnosis system has become inevitable, and such a system is of great significance for the mass screening of cervical carcinoma.
At present there are many cell image segmentation methods. According to the information or theory they use, these algorithms can be roughly divided into three categories: segmentation algorithms using region information, segmentation algorithms using cell edge information, and cell segmentation methods derived from related theories. Segmentation algorithms using region information assign the pixels of the image to different classes according to the information of each pixel and a given decision criterion; the main methods include threshold segmentation, region growing, clustering and watershed-transform segmentation. Segmentation algorithms using cell edge information exploit the fact that in a cell image the gray values of boundary pixels usually differ greatly from those of non-boundary pixels, i.e., gray-level discontinuities generally correspond to boundaries; based on this property, boundaries are mostly extracted with gradient methods. Theories and algorithms commonly used for cell segmentation include wavelet analysis, genetic algorithms, mathematical morphology and neural networks. In recent years, with the deepening development of deep learning, networks such as U-Net, shallow neural networks and FCN have also been applied to nucleus segmentation; however, like the conventional methods, they only segment isolated nuclei well, and their segmentation of clustered nuclei is poor.
Segmentation algorithms using region information are sensitive to noise and uneven gray levels and tend to miss nuclei. Segmentation algorithms using the edge information of cells rely mainly on the gray-level information of the picture and work poorly on pictures whose foreground and background look very similar. Cell segmentation methods derived from related theories, such as existing neural network architectures, have limited segmentation ability: they can only segment isolated nuclei and perform poorly on clustered nuclei and on nuclei whose gray values are similar to those of the cytoplasm.
Summary of the invention
The primary object of the present invention is to overcome the shortcomings and deficiencies of the prior art and to provide a method for segmenting clustered nuclei in cervical smear images which correctly segments the clustered nuclei in such images.
In order to achieve the above object, the invention adopts the following technical scheme:
A method for segmenting clustered nuclei in a cervical smear image according to the present invention comprises the following steps:
(1) Preprocessing a segmentation data set, the segmentation data set comprising clustered-cell images and the corresponding segmentation GroundTruth;
(2) Selecting the data set and dividing it into a test set and a training set, while ensuring that the distributions of the test set and the training set are consistent;
(3) Building the DeepHLF network. The original picture is input into the network, which progressively extracts features, retains the features of each level, and then groups the retained features. The DeepHLF network is an end-to-end network composed of a progressive feature-retention module, a high/low-coupling parallel fusion module and a cross-circulation module.
The progressive feature-retention module receives the original picture, progressively extracts features, retains the features of each level, and then groups the retained features.
The high/low-coupling parallel fusion module processes the features with a high-coupling method and a low-coupling method respectively and fuses the features within each group, generating three kinds of features: high-level semantic features (High), intermediate comprehensive features (Middle) and low-level detail features (Low).
The cross-circulation module cross-circularly combines the High, Middle and Low features to generate multiple feature maps; each feature map is classified with a softmax function to produce a segmentation map, so that multiple segmentation maps are generated;
(4) Proposing a mathematical classification-correction method and a weighted loss function for the imbalance between the nucleus class and the background class, so as to optimize the nucleus boundaries and the segmentation result.
As a preferred technical solution, the step (1) is specifically:
(1-1) Pictures are first collected, the pictures containing clustered nuclei are picked out and sorted, the pictures are then cut into patches of a set size, and finally data augmentation is performed;
(1-2) The pictures are matted with PS under the guidance of a pathologist: the nuclear regions are outlined and rendered black, and everything else is white background.
As a preferred technical solution, the step (2) is specifically:
(2-1) The data set is divided into multiple classes by a clustering method;
(2-2) For the clustered pictures, thirty percent of each class is selected as the test set and the rest is used as the training set, as in the sketch below.
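A minimal sketch of such a clustering-based split, assuming each picture is summarized by its mean channel intensities and clustered with scikit-learn's KMeans; neither the descriptor nor the clustering algorithm is specified above, so both are assumptions.

import numpy as np
from sklearn.cluster import KMeans

def split_by_cluster(images, n_clusters=4, test_ratio=0.3, seed=0):
    # Describe every picture by its mean channel intensities (an assumed, very simple descriptor).
    feats = np.stack([img.reshape(-1, img.shape[-1]).mean(axis=0) for img in images])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(feats)
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in range(n_clusters):
        idx = rng.permutation(np.where(labels == c)[0])
        n_test = int(round(test_ratio * len(idx)))
        test_idx.extend(idx[:n_test].tolist())    # three tenths of each class form the test set
        train_idx.extend(idx[n_test:].tolist())   # the rest form the training set
    return train_idx, test_idx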
As a preferred technical solution, in the step (3), the progressive feature-retention module is composed of five residual blocks, and its specific implementation comprises the following steps:
The progressive feature-retention module of the DeepHLF network progressively extracts the image features and retains the five levels of features produced as the network deepens. The first-layer and second-layer features form the low-level detail features, the third-layer and fourth-layer features form the intermediate comprehensive features, and the fifth residual block of the progressive feature-retention module, i.e. the deepest block, forms the high-level semantic features;
The features of the five levels are thus divided into three groups: the high-level semantic feature group (High), the intermediate comprehensive feature group (Middle) and the low-level detail feature group (Low).
As a preferred technical solution, in the step (3), the specific implementation of the progressive feature-retention module comprises the following steps. The first feature-extraction block of the progressive feature-retention module is expressed as:
Layer_1 = F_1(x)
where x is the input picture, F_1(·) is the first convolutional network block function, and Layer_1 is the first retained shallow feature;
The second to fifth feature-extraction blocks of the progressive feature-retention module are expressed as:
Layer_i = F_i(Layer_{i-1})
where Layer_{i-1} is the feature retained after the previous convolutional block, F_i(·) is the i-th convolutional network block function, and Layer_i is the i-th retained feature;
The features are then grouped:
High group: Layer_5; Middle group: Layer_3, Layer_4; Low group: Layer_1, Layer_2.
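A minimal PyTorch sketch of the progressive feature-retention module described above; the residual-block design, strides and channel widths are assumptions, since only the number of blocks (five) and the grouping of their outputs are specified.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # A simple residual block: two 3x3 convolutions with a 1x1 projection shortcut.
    def __init__(self, in_ch, out_ch, stride=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride, 1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, 1, 1), nn.BatchNorm2d(out_ch))
        self.skip = nn.Conv2d(in_ch, out_ch, 1, stride)
    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class ProgressiveRetention(nn.Module):
    # Five residual blocks; every intermediate output Layer_1..Layer_5 is retained and grouped.
    def __init__(self, channels=(3, 32, 64, 128, 256, 512)):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualBlock(channels[i], channels[i + 1]) for i in range(5))
    def forward(self, x):
        layers = []
        for block in self.blocks:        # Layer_i = F_i(Layer_{i-1})
            x = block(x)
            layers.append(x)
        low = layers[0:2]                # Low group:    Layer_1, Layer_2
        middle = layers[2:4]             # Middle group: Layer_3, Layer_4
        high = layers[4:5]               # High group:   Layer_5
        return low, middle, high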
As a preferred technical solution, in the step (3), the high/low-coupling parallel fusion module specifically comprises the following processing steps:
High-coupling feature processing: let H_j be the high-coupling processing result of a group of features and W_i the weight of Layer_i inside the group; j = 1, 2, 3 in H_j corresponds to the high-coupling result of the Low, Middle and High groups respectively. The features inside a group are summed according to their weights and then passed through a convolutional block to form the high-coupling feature, specifically:
Low-level detail high-coupling result: H_1 = f_conv1(W_1 × Layer_1 + W_2 × Layer_2);
Intermediate comprehensive high-coupling result: H_2 = f_conv2(W_3 × Layer_3 + W_4 × Layer_4);
High-level semantic high-coupling result: H_3 = f_conv3(W_5 × Layer_5);
Low-coupling feature processing: let L_j be the low-coupling processing result of a group of features; j = 1, 2, 3 in L_j corresponds to the low-coupling result of the Low, Middle and High groups respectively, and Cat denotes the concatenation operation along the feature dimension, specifically:
Low-level detail low-coupling result: L_1 = Cat(Layer_1, Layer_2);
Intermediate comprehensive low-coupling result: L_2 = Cat(Layer_3, Layer_4);
High-level semantic low-coupling result: L_3 = Layer_5;
High/low-coupling feature fusion: the sub-features of each group have been processed with the high-coupling and the low-coupling method respectively, and the final step fuses the two. In Fusion_j, j = 1, 2, 3 corresponds to Low, Middle and High, Wh_j is the weight of the high-coupling feature in the j-th group, and Wl_j is the weight of the low-coupling feature in the j-th group, specifically:
Low-level detail features (j = 1): Fusion_j = Wh_j × H_j + Wl_j × L_j;
Intermediate comprehensive features (j = 2): Fusion_j = Wh_j × H_j + Wl_j × L_j;
High-level semantic features (j = 3): Fusion_j = Wh_j × H_j + Wl_j × L_j.
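A minimal PyTorch sketch of the fusion of one feature group under this scheme. Because the retained layers generally differ in resolution and channel count, the sketch adds 1x1 alignment convolutions and bilinear interpolation, and it places a convolution after the concatenation so that H_j and L_j can be summed; these alignment steps are assumptions not stated above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HighLowCouplingFusion(nn.Module):
    # One group: weighted sum + convolution (high coupling, H_j) in parallel with
    # concatenation (low coupling, L_j), then Fusion_j = Wh_j * H_j + Wl_j * L_j.
    def __init__(self, in_channels, out_channels):
        super().__init__()
        n = len(in_channels)
        self.align = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.w = nn.Parameter(torch.full((n,), 1.0 / n))                       # W_i inside the group
        self.conv_high = nn.Conv2d(out_channels, out_channels, 3, padding=1)   # f_convj
        self.conv_low = nn.Conv2d(n * out_channels, out_channels, 3, padding=1)
        self.wh = nn.Parameter(torch.tensor(0.5))                              # Wh_j
        self.wl = nn.Parameter(torch.tensor(0.5))                              # Wl_j
    def forward(self, feats):
        size = feats[0].shape[-2:]
        feats = [F.interpolate(a(f), size=size, mode='bilinear', align_corners=False)
                 for a, f in zip(self.align, feats)]
        high = self.conv_high(sum(w * f for w, f in zip(self.w, feats)))       # H_j
        low = self.conv_low(torch.cat(feats, dim=1))                           # L_j
        return self.wh * high + self.wl * low                                  # Fusion_j

With the channel widths assumed in the previous sketch, the three groups could be fused as HighLowCouplingFusion([32, 64], C), HighLowCouplingFusion([128, 256], C) and HighLowCouplingFusion([512], C) for a common width C.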
As a preferred technical solution, in the step (3), the specific method of the cross-circulation module is:
First the high-level semantic feature High is convolved to form the first feature map; the first feature map, from which a segmentation map can be produced, is Predict_0, with Fcov_0 being a convolutional block, expressed as:
Predict_0 = Fcov_0(High);
Then Predict_0 is concatenated with Middle, the result is convolved, and the feature map of the previous step, Predict_0, is added, so that the first feature map is fused with the intermediate comprehensive features, which carry more detail than the high-level semantic features; the second feature map generated after this fusion is Predict_1, expressed as:
Predict_1 = Fcov_1(Cat(Predict_0, Middle)) + Predict_0;
Next Predict_1 is concatenated with Low, the result is convolved, and the feature map of the previous step, Predict_1, is added, in order to learn the low-level details of the low-level detail feature layer. The Low features are combined only once in the whole cross-circulation process, whereas Middle and High need to be combined cross-circularly multiple times. The third feature map, from which a segmentation result can be predicted directly, is Predict_2, expressed as:
Predict_2 = Fcov_2(Cat(Predict_1, Low)) + Predict_1;
Next the generated feature map is circularly fused with the Middle or the High features, iterating the formula
Predict_m = Fcov_m(Cat(Predict_{m-1}, Middle or High)) + Predict_{m-1}
until training reaches the best effect; the value of m starts from 3, because the feature map generated in the previous step is Predict_2.
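A minimal PyTorch sketch of the cross-circulation module, assuming High, Middle and Low have already been brought to a common channel width and resolution (e.g. by the fusion step above); the number of iterated maps and the order in which Middle and High are revisited after Predict_2 are assumptions.

import torch
import torch.nn as nn

class CrossCirculation(nn.Module):
    # Predict_0 = Fcov_0(High); Predict_1 fuses Middle; Predict_2 fuses Low (only once);
    # later maps alternate between Middle and High, each step adding the previous map.
    def __init__(self, ch, n_maps=5):
        super().__init__()
        self.first = nn.Conv2d(ch, ch, 3, padding=1)                        # Fcov_0
        self.convs = nn.ModuleList(nn.Conv2d(2 * ch, ch, 3, padding=1)      # Fcov_m, m >= 1
                                   for _ in range(n_maps - 1))
    def forward(self, high, middle, low):
        preds = [self.first(high)]                                          # Predict_0
        for m, conv in enumerate(self.convs, start=1):
            other = low if m == 2 else (middle if m % 2 == 1 else high)
            fused = conv(torch.cat([preds[-1], other], dim=1)) + preds[-1]  # Predict_m
            preds.append(fused)
        return preds   # each entry yields a segmentation map after a softmax head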
As a preferred technical solution, in the step (3), each feature map Predict_i generated by cross-combination is passed through the classification function Softmax to generate a two-dimensional segmentation result Segmentation_i:
Segmentation_i = softmax(Predict_i)
During training, the loss is formed by computing the cross entropy between each of the multiple segmentation results and the GroundTruth; during testing, the loss is formed by the cross entropy between the last segmentation result and the GroundTruth, and the last segmentation result is taken as the result picture of the test.
As a preferred technical solution, in the step (4), the mathematical method for correcting the nucleus and background classes is specifically:
For each feature map Predict_i of step (3), the first channel of its two-dimensional output is multiplied by a weight Wp, which is learned during training, to obtain the corrected feature map values Predict_i:
Predict_i = Predict_i[:, 0, :, :] × Wp;
The weighted loss function is specifically:
The cross entropy between every segmentation result map and the GroundTruth is computed; Loss_sum is the sum of the cross entropies between each segmentation result map and the GroundTruth, and W_Seg_i denotes the weight given to the cross entropy between segmentation result map Segmentation_i and the GroundTruth. The formula of the weighted loss function is as follows:
Loss_sum = Σ_i W_Seg_i × CrossEntropy(Segmentation_i, GroundTruth).
Compared with the prior art, the present invention has the following advantages and beneficial effects:
(1) The present invention can accurately segment clustered nuclei in cervical cell images, including clustered-cell images with inconsistent staining, low contrast and uneven illumination, and can, as a clinical auxiliary diagnosis tool, effectively assist clinicians in diagnosis.
(2) The DeepHLF network proposed by the present invention progressively extracts features, retains the features of each module and then groups them, and proposes a high/low-coupling parallel method (high coupling and low coupling process the features separately) that fuses the features extracted by the base network into three kinds of features: high-level semantic features, intermediate comprehensive features and low-level detail features. It can serve as a universal network for nucleus segmentation in medical cell images, takes both speed and accuracy into account, and segments well clustered-nucleus images with inconsistent staining, low contrast and uneven illumination.
(3) The present invention proposes a cross-circulation feature fusion method applied to the DeepHLF network: the high-level semantic features, intermediate comprehensive features and low-level detail features are cross-circularly combined to generate multiple feature maps, from which multiple segmentation results are obtained; the last segmentation result is taken as the final test result. The network can not only segment clustered nuclei, but in the segmentation process it also does not miss lightly stained nuclei or nuclei whose gray values are similar to those of the cytoplasm.
(4) The present invention proposes a correction method for class imbalance. In the nucleus segmentation problem the pixels of a picture fall into two classes, nucleus and background, and the two classes are extremely imbalanced, so the network tends to classify foreground pixels as background. To solve this problem, a mathematical method is proposed to correct the network output before the pixels are classified with softmax, separating nucleus from background with good results.
The present invention can be applied in the following fields:
(1) Pathology departments of hospitals, to segment nuclei for auxiliary diagnosis;
(2) Laboratory research, for pathological studies based on the segmentation of cervical cell nucleus regions;
(3) Other segmentation fields outside medicine.
Brief description of the drawings
Fig. 1 is the overall flowchart of the cross-circulation segmentation of the DeepHLF network proposed by the present invention.
Fig. 2 is the operational flowchart of the high/low-coupling parallel method proposed by the present invention (high coupling and low coupling process the features separately and then fuse them), fusing the features into high-level semantic features, intermediate comprehensive features and low-level detail features.
Fig. 3(a) and Fig. 3(b) are respectively the high-coupling synthesis process and the low-coupling synthesis process involved in Fig. 2.
Fig. 4 is a flowchart of the calculation of the weighted loss function during training.
Fig. 5 is a comparison of clustered-nucleus segmentation results.
Specific embodiment
The present invention will now be described in further detail with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment
As shown in Fig. 1, in the method for segmenting clustered nuclei in a cervical smear image of the present invention, the first step uses the five residual blocks of DeepHLF to progressively extract image features and saves the feature extracted by each block, five kinds of features in total. The second step divides the five kinds of features extracted in the first step into three groups: the high-level semantic feature group, the intermediate comprehensive feature group and the low-level detail feature group; the first two shallow-level features of the five form the low-level detail features, the third-layer and fourth-layer features form the intermediate comprehensive features, and the extraction result of the fifth, deepest residual block of DeepHLF forms the high-level semantic features. The third step fuses each of these three groups with the high/low-coupling parallel fusion module (high coupling and low coupling process the features separately and then fuse them) to generate the final high-level semantic features, intermediate comprehensive features and low-level detail features. The fourth step cross-circularly combines the high-level semantic features, intermediate comprehensive features and low-level detail features generated in the third step to produce multiple feature maps, predicts multiple segmentation results from the multiple feature maps, and weights and sums the cross-entropy losses of the multiple segmentation results to generate the final loss. For the final feature map output by the network, the mathematical classification-correction method is used for optimization.
The method for segmenting clustered nuclei in cervical smear images of the present invention specifically comprises the following main technical points:
1. Preparation of the segmentation data set, comprising clustered-cell images and the corresponding segmentation GroundTruth;
(1) Pictures are first collected, and the pictures containing clustered nuclei, 104 pictures in total, are picked out and sorted; they are then cut into 256*256 patches, and finally data augmentation is performed (see the sketch after this list);
(2) The pictures are matted with PS: the nuclear regions are outlined and rendered black, and everything else is white background;
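A minimal sketch of the cropping and augmentation in (1), assuming the pictures are HxWxC NumPy arrays and that flips and 90-degree rotations are the augmentations used; the original does not name the augmentations.

import numpy as np

def crop_patches(image, size=256):
    # Cut an image into non-overlapping size x size patches; border remainders are dropped.
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def augment(patch):
    # Flip- and rotation-based augmentation of a single patch.
    return [patch, np.fliplr(patch), np.flipud(patch),
            np.rot90(patch, 1), np.rot90(patch, 2), np.rot90(patch, 3)]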
2. Selection of the data set, which is divided into a test set and a training set while ensuring that the distributions of the test set and the training set are consistent:
(1) The data set is divided into multiple classes by a clustering method;
(2) For the clustered pictures, thirty percent of each class is selected as the test set and the rest is used as the training set.
3. Composition and training method of DeepHLF
As shown in Fig. 1, for the clustered-nucleus segmentation task the present invention designs the DeepHLF network (Deep network based on High coupling and Low coupling Fusion), which is an end-to-end network. The network is composed of a progressive feature-retention module, a high/low-coupling parallel fusion module (high coupling and low coupling process the features separately and then fuse them) and a cross-circulation module. The network designs a new method of high/low feature fusion; compared with traditional networks it is faster and needs fewer parameters, and it segments clustered nuclei well.
The DeepHLF network completes four tasks in sequence:
(1) The progressive feature-retention module progressively extracts the image features and then groups them.
As shown in the left part of Fig. 2, the progressive feature-retention module is composed of five residual blocks. Compared with the structures of the Inception series of networks and the earlier versions of ResNet, these residual blocks improve accuracy without increasing parameter complexity while also reducing the number of hyperparameters. The first step progressively extracts the image features with the progressive feature-retention module of the DeepHLF network and retains the five levels of features produced as the network deepens; the features of the five levels are then divided into three groups: the high-level semantic feature group (denoted High), the intermediate comprehensive feature group (denoted Middle) and the low-level detail feature group (denoted Low). The first two shallow-level features form the low-level detail features, the third-layer and fourth-layer features form the intermediate comprehensive features, and the fifth, deepest residual block of the progressive feature-retention module forms the high-level semantic features. The purpose of grouping in this way is to let the high/low-coupling parallel fusion module of the DeepHLF network (high coupling and low coupling process the features separately and then fuse them) fuse the features within each group; the fusion results are the high-level semantic features (High), the intermediate comprehensive features (Middle) and the low-level detail features (Low). The three kinds of features are generated so that the cross-circulation module of DeepHLF can cross-circularly fuse them to generate multiple feature maps. The cross-circulation module can, according to the actual situation, selectively emphasize one or more of the three kinds of features, where emphasizing means repeatedly combining the important features. This fine-grained feature grouping allows the DeepHLF network to extract higher-level semantic features while selectively keeping the low-level details, without letting overly random low-level detail features disturb the segmentation result, which helps improve the segmentation of clustered nuclei.
The first feature-extraction block of the progressive feature-retention module is (where x is the input picture, F_1(·) is the first convolutional network block function, and Layer_1 is the first retained shallow feature):
Layer_1 = F_1(x)
The second to fifth feature-extraction blocks of the progressive feature-retention module are (where Layer_{i-1} is the feature retained after the previous convolutional block, F_i(·) is the i-th convolutional network block function, and Layer_i is the i-th retained feature):
Layer_i = F_i(Layer_{i-1})
The features are then grouped, where High = high-level semantic features, Middle = intermediate comprehensive features and Low = low-level detail features:
High group: Layer_5; Middle group: Layer_3, Layer_4; Low group: Layer_1, Layer_2. Layer_5 forms the High group on its own because the high-level semantics are the most important for the segmentation effect, and fusing them with the features of other layers would disturb the high-level semantic information; fusing the lower-level features with each other, on the other hand, lets the details complement one another and improves the segmentation effect.
The DeepHLF network converts the three-dimensional input image into three kinds of multichannel features, and finally the three kinds of features are cross-circularly combined to generate multiple feature maps. Grouping and decoupling the features in this way makes it possible to combine the important feature groups many times and the less important ones only a few times, which saves network training time while improving the segmentation effect.
(2) The high/low-coupling parallel fusion module (high coupling and low coupling process the features separately and then fuse them) fuses the features within each group.
In (1) above the features were extracted and grouped; this part fuses the grouped features. As shown in the right part of Fig. 2, the high/low-coupling parallel fusion method is adopted; in plain terms, the features processed by high coupling are fused with the features processed by low coupling. The method has the following three sub-steps:
1) High-coupling feature processing: as shown in Fig. 3(a), H_j is the high-coupling processing result of a group of features and W_i is the weight of Layer_i inside the group; j = 1, 2, 3 in H_j corresponds to the high-coupling result of the Low, Middle and High groups respectively. The features inside a group are summed according to their weights and then passed through a convolutional block to form the high-coupling feature.
Low-level detail high-coupling result: H_1 = f_conv1(W_1 × Layer_1 + W_2 × Layer_2)
Intermediate comprehensive high-coupling result: H_2 = f_conv2(W_3 × Layer_3 + W_4 × Layer_4)
High-level semantic high-coupling result: H_3 = f_conv3(W_5 × Layer_5)
2) Low-coupling feature processing: as shown in Fig. 3(b), L_j is the low-coupling processing result of a group of features; j = 1, 2, 3 in L_j corresponds to the low-coupling result of the Low, Middle and High groups respectively, and Cat denotes the concatenation operation along the feature dimension:
Low-level detail low-coupling result: L_1 = Cat(Layer_1, Layer_2)
Intermediate comprehensive low-coupling result: L_2 = Cat(Layer_3, Layer_4)
High-level semantic low-coupling result: L_3 = Layer_5
3) High/low-coupling feature fusion: as shown on the right of Fig. 2, the sub-features of each group have been processed with the high-coupling and the low-coupling method respectively, and the final step fuses the two. In Fusion_j, j = 1, 2, 3 corresponds to Low, Middle and High, Wh_j is the weight of the high-coupling feature in the j-th group, and Wl_j is the weight of the low-coupling feature in the j-th group.
Low-level detail features (j = 1): Fusion_j = Wh_j × H_j + Wl_j × L_j
Intermediate comprehensive features (j = 2): Fusion_j = Wh_j × H_j + Wl_j × L_j
High-level semantic features (j = 3): Fusion_j = Wh_j × H_j + Wl_j × L_j
The processing is done in this way because high coupling extracts global information while low coupling retains detail information, and combining the two at the end takes both into account.
(3) The features fused in (2) are cross-circularly combined to generate multiple feature maps, and each feature map produces a segmentation map through the softmax function, so that multiple segmentation maps are generated.
The high/low-coupling parallel fusion module in (2) above generates the low-level detail features (Low), the intermediate comprehensive features (Middle) and the high-level semantic features (High); this step performs cross-circulation segmentation on these three groups of features, as shown in Fig. 1. First High (the high-level semantic features) is convolved to form the first feature map, because the features extracted by the deep layers of the network carry high-level semantics and contain rich information that benefits segmentation. The first feature map is Predict_0, with Fcov_0 being a convolutional block (see the formula below):
Predict_0 = Fcov_0(High)
Then Predict_0 is concatenated (Cat) with Middle, the result is convolved, and the feature map of the previous step, Predict_0, is added; this is done so that the first feature map is fused with the intermediate comprehensive features, which carry more detail than the high-level semantic features, to generate a new feature map. The second feature map is Predict_1 (see the formula below):
Predict_1 = Fcov_1(Cat(Predict_0, Middle)) + Predict_0
Next Predict_1 is concatenated (Cat) with Low, the result is convolved, and the feature map of the previous step, Predict_1, is added, in order to learn the low-level details of the low-level detail feature layer. The Low features are combined only once in the whole cross-circulation process, because the information that the low-level detail features can contribute to segmentation is limited, and experiments show that combining them more than once gives worse results. Middle (the intermediate comprehensive features) and High (the high-level semantic features) are more important, so they need to be combined cross-circularly multiple times. The third feature map is Predict_2 (see the formula below):
Predict_2 = Fcov_2(Cat(Predict_1, Low)) + Predict_1
Next the feature map generated in the previous step is circularly fused with the Middle (intermediate comprehensive) or the High (high-level semantic) features, which is why this step is called "cross-circulation". The formula Predict_m = Fcov_m(Cat(Predict_{m-1}, Middle or High)) + Predict_{m-1} is iterated until training reaches the best effect; the value of m starts from 3, because the feature map generated in the previous step is Predict_2.
This is done because a large number of experiments and theory show that the higher-level information has a larger influence on the segmentation result and can better optimize it.
(4) Each feature map generated by cross-combination outputs a segmentation result map through the classification function.
As in the following formula, each feature map Predict_i generated by cross-combination passes through the classification function Softmax to generate a two-dimensional segmentation result Segmentation_i.
Segmentation_i = softmax(Predict_i)
As in Fig. 1, during training the loss is generated from the multiple segmentation results; during testing, the loss is generated from the last segmentation result (the Segmentation_i with the largest i), and the last segmentation result serves as the test result for the original image.
4. The proposed mathematical classification-correction method for the nucleus/background imbalance and the weighted loss function.
It was found in experiments that in the clustered-cell segmentation problem in cervical smear images the numbers of nucleus and background pixels differ greatly, so that during segmentation the proportion of nucleus pixels wrongly classified as background is much larger than the proportion of background pixels wrongly classified as nucleus. Through experimental summary, this patent devises a mathematical method, named classification correction, to correct this imbalance between the nucleus and background classes, as shown in Fig. 4. For each feature map Predict_i of the previous step, the first channel of its two-dimensional output is multiplied by a weight Wp, which is learned during training, to obtain the corrected feature map values Predict_i:
Predict_i = Predict_i[:, 0, :, :] × Wp
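A minimal PyTorch sketch of this classification correction, assuming the feature map has shape (batch, 2, H, W) and that channel 0 is the channel being re-weighted; which of the two channels is meant is not stated above.

import torch
import torch.nn as nn

class ClassCorrection(nn.Module):
    # Multiplies the first channel of each feature map by a learned scalar Wp before the softmax.
    def __init__(self):
        super().__init__()
        self.wp = nn.Parameter(torch.tensor(1.0))   # Wp, learned during training
    def forward(self, predict):
        scale = torch.ones(predict.shape[1], device=predict.device)
        scale = torch.cat([self.wp.reshape(1), scale[1:]])   # [Wp, 1, ...]
        return predict * scale.reshape(1, -1, 1, 1)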
Experimental results show that this method is generally applicable to the nucleus segmentation problem and can be applied to many nucleus segmentation networks, including Unet, FCN, etc. Part of the clustered-cell segmentation results are shown in Fig. 5, where the left column of each row is the original image and the right is the test segmentation result; as can be seen from the figure, the segmentation method of the present invention achieves a very good segmentation effect.
A weighted loss function is proposed. Unlike previous loss functions, during training this patent does not compute the cross entropy with the GroundTruth for only one segmentation result; instead, the cross entropy with the GroundTruth is computed for all segmentation results. Loss_sum is the weighted sum of the cross entropies between each segmentation result and the GroundTruth, where W_Seg_i denotes the weight given to the cross entropy of Segmentation_i, yielding the total cross-entropy loss as in Fig. 4; the formula is as follows:
Loss_sum = Σ_i W_Seg_i × CrossEntropy(Segmentation_i, GroundTruth)
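A minimal PyTorch sketch of this weighted loss, with the per-map weights W_Seg_i taken here as fixed hyperparameters (the text leaves open how they are chosen); F.cross_entropy applies the softmax internally, so the raw feature maps are passed in.

import torch.nn.functional as F

def weighted_loss(predicts, ground_truth, weights=None):
    # Loss_sum = sum_i W_Seg_i * CrossEntropy(Segmentation_i, GroundTruth)
    # predicts: list of (batch, 2, H, W) feature maps; ground_truth: (batch, H, W) class indices.
    if weights is None:
        weights = [1.0] * len(predicts)
    return sum(w * F.cross_entropy(p, ground_truth) for w, p in zip(weights, predicts))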
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited by the above embodiment; any other changes, modifications, substitutions, combinations and simplifications made without departing from the spirit and principles of the present invention shall be equivalent replacements and are included within the protection scope of the present invention.