CN110119710A - Cell sorting method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN110119710A (application CN201910393118.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- cell
- target cell
- target
- classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/698—Matching; Classification
Abstract
This application relates to a cell classification method, device, computer equipment and storage medium. The method includes: inputting an image to be analyzed into a pre-trained target detection model to obtain the location information and initial classification information of each target cell in the image to be analyzed; segmenting the image to be analyzed according to the location information to obtain multiple target cell images; inputting each target cell image into a classification model that includes a dilated convolutional network to obtain the secondary classification information of the corresponding target cell; and, when the initial classification information is identical to the secondary classification information, marking the secondary classification information as the classification result of the target cell. On the one hand, the target detection model achieves accurate localization and a first classification of the target cells; on the other hand, the classification model performs classification analysis on each single target cell image and obtains an accurate classification result. Through the mutual verification of the two classifications, target cells are located quickly and classified accurately, improving doctors' working efficiency.
Description
Technical field
This application relates to the technical field of medical image processing, and in particular to a cell classification method, device, computer equipment and storage medium.
Background technique
With the development of medical technology, the identification of various cell types in pathological section images plays an important role in medical treatment. In recent years, many medical research teams at home and abroad have begun to study the identification of cell types in pleural effusion. Pleural effusion has many causes, which can be summarized into two major classes. The first is lesions caused by inflammation, for example infectious inflammation of the pleura caused by viruses, fungi or bacteria, leading to pleural effusion, or pleural effusion caused by non-infectious diseases such as pulmonary embolism and connective tissue disease; the cells involved are referred to as non-malignant lesion cells in the thoracic cavity. The second is malignant tumor cells, where tumor cells growing in or metastasizing to the thoracic cavity invade the pleura and cause effusion, for example malignant pleural effusion caused by pleural mesothelioma, lung cancer, breast cancer or gastric cancer.
However, each pathological section image usually contains a large number of diseased cells of many types, with complex cell structures and varied morphologies, and doctors are required to mark out the class and position of each diseased cell in the pathological section image one by one. On the one hand, this increases the repetitiveness of doctors' routine work; on the other hand, the identification and classification of diseased cells depend on the doctor's professional skill, resulting in low efficiency in analyzing and processing pathological section images.
Summary of the invention
In view of the above technical problems, it is necessary to provide a cell classification method, device, computer equipment and storage medium that can improve the efficiency of analyzing and processing pathological section images (taken from in-vitro samples).
A cell classification method, the method comprising:
inputting an image to be analyzed into a target detection model to obtain the location information and initial classification information of each target cell in the image to be analyzed, where the target detection model is trained using a sample cell image set carrying annotation information as the training set, and the annotation information includes cell position and cell class;
segmenting the image to be analyzed according to the location information to obtain multiple target cell images;
inputting each target cell image into a classification model that includes a dilated convolutional network to obtain the secondary classification information of the target cell corresponding to the target cell image, where the classification model is trained on the sample cell image set; and
when the initial classification information is identical to the secondary classification information, marking the secondary classification information as the classification result of the target cell.
In one embodiment, the classification model includes a feature extraction network, a dilated convolutional network, a fully connected layer and an output layer. Inputting the target cell image into the classification model that includes the dilated convolutional network and obtaining the secondary classification information of the corresponding target cell includes:
inputting the target cell image into the feature extraction network, extracting the cell feature vector of the target cell image, and inputting the cell feature vector into the dilated convolutional network;
the dilated convolutional network performing dilated convolution on the cell feature vector, and inputting the processed cell feature vector into the fully connected layer;
the fully connected layer performing regression classification on the cell feature vector, and inputting the resulting regression classification data into the output layer; and
the output layer applying a preset activation function to the regression classification data to obtain the probability that the target cell belongs to each preset category, and marking the preset category with the largest probability as the secondary classification information of the target cell.
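The output-layer step above (a preset activation function over the regression classification data, then taking the most probable preset category) can be sketched as follows. This is a minimal illustration assuming softmax as the activation function; the category names are illustrative only and not from the patent.

```python
import math

def softmax(logits):
    """A common choice of output-layer activation: turns regression
    classification data into per-category probabilities."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def secondary_classification(logits, categories):
    """Mark the preset category with the largest probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return categories[best], probs[best]

label, p = secondary_classification(
    [0.2, 2.1, -0.5], ["non_malignant", "malignant", "normal"])
assert label == "malignant"
```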
In one embodiment, the feature extraction network includes a texture feature extraction network and a morphological feature extraction network. Inputting the target cell image into the feature extraction network and extracting the cell feature vector of the target cell image includes:
inputting the target cell image into the texture feature extraction network and extracting the texture feature vector of the target cell;
converting the target cell image to grayscale, inputting the grayscale image into the morphological feature extraction network, and extracting the morphological feature vector of the target cell;
converting the texture feature vector and the morphological feature vector into two-dimensional feature vectors; and
splicing the two-dimensional feature vectors according to preset weight parameters to obtain the cell feature vector of the target cell image.
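The grayscale conversion and the weighted splicing of the two feature vectors might look like the following sketch. The luminance weights and the 50/50 fusion weights are assumptions; the patent fixes neither the grayscale formula nor the preset weight parameters.

```python
def to_grayscale(rgb_pixel):
    """Luminance conversion applied before morphological feature extraction.
    Uses the common ITU-R BT.601 weighting (an assumption)."""
    r, g, b = rgb_pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def fuse_features(texture_vec, morph_vec, w_texture=0.5, w_morph=0.5):
    """Splice the texture and morphological feature vectors under preset
    weight parameters; the weights here are illustrative."""
    return [w_texture * t for t in texture_vec] + \
           [w_morph * m for m in morph_vec]

cell_vec = fuse_features([0.8, 0.1], [0.4, 0.9, 0.2])
assert len(cell_vec) == 5      # fused vector keeps both feature groups
```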
In one embodiment, before inputting the image to be analyzed into the target detection model and obtaining the location information and initial classification information of each target cell in the image to be analyzed, the method further includes:
obtaining the sample cell image set divided into K parts of data, and K initial target detection models;
successively choosing N of the K parts of data as the test set and combining the remaining K-N parts as the training set, obtaining K differently combined data sets, the K data sets corresponding to the K initial target detection models;
training each initial target detection model on the training set of its corresponding data set, and calculating the model evaluation index of the trained initial target detection model on the test set of the same data set;
calculating the average of the K model evaluation indexes, taking the average as the target evaluation index, and selecting from the K model evaluation indexes the one with the smallest error relative to the target evaluation index;
marking the initial target detection model corresponding to the model evaluation index with the smallest error as the selected target detection model; and
performing model training on the selected target detection model with the sample cell image set to obtain the trained target detection model.
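The cross-validation-style selection above can be sketched minimally as follows, assuming N=1 (one held-out part per data set) and a scalar evaluation index per fold; the scores are made-up numbers standing in for real evaluation results.

```python
def k_fold_splits(data, k):
    """Divide data into k parts; yield (training set, test set) pairs,
    each with a different part held out as the test set."""
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        yield train, test

def select_model(fold_scores):
    """fold_scores[i] is the evaluation index of the i-th initial model on
    its test set. Returns the index of the model whose score has the
    smallest error relative to the average (target evaluation index)."""
    target = sum(fold_scores) / len(fold_scores)
    return min(range(len(fold_scores)),
               key=lambda i: abs(fold_scores[i] - target))

scores = [0.90, 0.87, 0.91, 0.88, 0.94]   # illustrative per-fold indexes
assert select_model(scores) == 0          # 0.90 equals the mean exactly
```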
In one embodiment, the target detection model is an SSD detection model with a DenseNet network as its backbone.
In one embodiment, inputting the image to be analyzed into the target detection model and obtaining the location information and initial classification information of each target cell in the image to be analyzed includes:
inputting the image to be analyzed into the DenseNet network in the target detection model to obtain feature maps of the image to be analyzed; and
performing convolution calculations on the feature maps, determining the multiple target cells contained in the target image according to the convolution results, and obtaining the location information and initial classification information of the target cells.
In one embodiment, after inputting the target cell image into the classification model that includes the dilated convolutional network and obtaining the secondary classification information of the corresponding target cell, the method further includes:
when the initial classification information differs from the secondary classification information, marking the target cell as a classification anomaly.
In one embodiment, before inputting the image to be analyzed into the target detection model and obtaining the location information and initial classification information of each target cell in the image to be analyzed, the method further includes:
obtaining a section image and performing image preprocessing on the section image to obtain the image to be analyzed, where the image preprocessing includes image denoising, image enhancement, image scaling, and pixel-value and color normalization.
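The pixel-value and color normalization steps might be implemented as in the following sketch, which assumes min-max scaling to [0, 1] and a simple per-channel mean/standard-deviation adjustment; the patent does not specify the exact normalization scheme.

```python
def minmax_normalize(pixels):
    """Scale pixel values into [0, 1] (pixel-value normalization)."""
    lo, hi = min(pixels), max(pixels)
    span = (hi - lo) or 1
    return [(p - lo) / span for p in pixels]

def color_normalize(channel, target_mean, target_std):
    """Shift one color channel to a reference mean/std: a crude stand-in
    for stain/color normalization of pathology slides."""
    mu = sum(channel) / len(channel)
    var = sum((p - mu) ** 2 for p in channel) / len(channel)
    sd = (var ** 0.5) or 1
    return [(p - mu) / sd * target_std + target_mean for p in channel]

norm = minmax_normalize([0, 128, 255])
assert norm[0] == 0.0 and norm[-1] == 1.0
```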
A cell classification device, the device comprising:
a target detection module, configured to input an image to be analyzed into a target detection model to obtain the location information and initial classification information of each target cell in the image to be analyzed, where the target detection model is trained using a sample cell image set carrying annotation information as the training set, and the annotation information includes cell position and cell class;
an image segmentation module, configured to segment the image to be analyzed according to the location information to obtain multiple cell images;
a classification processing module, configured to input each target cell image into a classification model that includes a dilated convolutional network to obtain the secondary classification information of the corresponding target cell, where the classification model is trained on the sample cell image set; and
a classification result marking module, configured to mark the secondary classification information as the classification result of the target cell when the initial classification information is identical to the secondary classification information.
A computer equipment, including a memory and a processor, the memory storing a computer program, where the processor, when executing the computer program, performs the following steps:
inputting an image to be analyzed into a target detection model to obtain the location information and initial classification information of each target cell in the image to be analyzed, where the target detection model is trained using a sample cell image set carrying annotation information as the training set, and the annotation information includes cell position and cell class;
segmenting the image to be analyzed according to the location information to obtain multiple target cell images;
inputting each target cell image into a classification model that includes a dilated convolutional network to obtain the secondary classification information of the corresponding target cell, where the classification model is trained on the sample cell image set; and
when the initial classification information is identical to the secondary classification information, marking the secondary classification information as the classification result of the target cell.
A computer readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the following steps:
inputting an image to be analyzed into a target detection model to obtain the location information and initial classification information of each target cell in the image to be analyzed, where the target detection model is trained using a sample cell image set carrying annotation information as the training set, and the annotation information includes cell position and cell class;
segmenting the image to be analyzed according to the location information to obtain multiple target cell images;
inputting each target cell image into a classification model that includes a dilated convolutional network to obtain the secondary classification information of the corresponding target cell, where the classification model is trained on the sample cell image set; and
when the initial classification information is identical to the secondary classification information, marking the secondary classification information as the classification result of the target cell.
With the above cell classification method, device, computer equipment and storage medium, an image to be analyzed containing many cells is input into a pre-trained target detection model, which identifies the location information and initial classification information of each target cell in the image. Each target cell is then segmented out of the image according to the location information, yielding multiple cell images, which are input into a pre-trained classification model. The dilated convolutional network in the classification model performs dilated convolution on the feature data, obtaining a larger receptive field while the convolution kernel size, and hence the number of parameters, stays the same, so that classifying the cell images yields more accurate secondary classification information. Finally, verification is performed by checking whether the initial and secondary classification information of each target cell are identical, producing a more accurate cell classification result. On the one hand, the target detection model achieves accurate localization and a first classification of the target cells in the image to be analyzed; on the other hand, the classification model performs classification analysis on each single target cell image and obtains accurate classification results. Through the mutual verification of the two classifications, target cells in the image to be analyzed are located quickly and classified accurately, assisting doctors in diagnostic analysis and improving working efficiency.
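The overall flow just summarized (detect, segment, classify, cross-verify) can be sketched end to end. The two model functions below are stubs standing in for the trained detection and classification networks, and the label strings are illustrative assumptions.

```python
def classify_cells(image, detect, classify):
    """detect(image) -> list of (box, initial_label);
    classify(crop) -> secondary_label.
    Cross-verifies the two classifications per target cell."""
    results = []
    for box, initial in detect(image):
        crop = ("crop", box)              # stand-in for actual segmentation
        secondary = classify(crop)
        if initial == secondary:
            results.append((box, secondary))   # verified classification result
        else:
            results.append((box, "anomaly"))   # flagged for doctor review
    return results

# Stub models: the first detection agrees with the classifier, the second does not.
detect = lambda img: [((0, 0, 4, 4), "malignant"), ((5, 5, 9, 9), "normal")]
classify = lambda crop: "malignant" if crop[1] == (0, 0, 4, 4) else "benign"
out = classify_cells("slide", detect, classify)
assert out == [((0, 0, 4, 4), "malignant"), ((5, 5, 9, 9), "anomaly")]
```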
Brief description of the drawings
Fig. 1 is an application scenario diagram of the cell classification method in one embodiment;
Fig. 2 is a flow diagram of the cell classification method in one embodiment;
Fig. 3 is a flow diagram of the cell classification method in another embodiment;
Fig. 4 is a flow diagram of the sub-steps of step S420 in one embodiment;
Fig. 5 is a flow diagram of the cell classification method in yet another embodiment;
Fig. 6 is a flow diagram of the cell classification method in a still further embodiment;
Fig. 7 is a structural block diagram of the cell classification device in one embodiment;
Fig. 8 is an internal structure diagram of the computer equipment in one embodiment.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of this application clearer, this application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application, not to limit it.
The cell classification method provided by this application can be applied in the application environment shown in Fig. 1, in which the terminal 102 communicates with the server 104 through a network. The image to be analyzed is input through the terminal 102 into the target detection model pre-stored in the server 104, which obtains the location information and initial classification information of each target cell in the image to be analyzed; the target detection model is trained using a sample cell image set carrying annotation information as the training set, where the annotation information includes cell position and cell class. The server 104 segments the image to be analyzed according to the location information to obtain multiple target cell images, and inputs the target cell images into a classification model, pre-stored in the server 104, that includes a dilated convolutional network, obtaining the secondary classification information of the corresponding target cells, where the classification model is trained on the sample cell image set. When the initial classification information is identical to the secondary classification information, the secondary classification information is marked as the classification result of the target cell, and the image carrying the location information and secondary classification information is sent to the terminal 102. The terminal 102 can be, but is not limited to, a personal computer, laptop, smartphone, tablet computer or portable wearable device, and the server 104 can be implemented as an independent server or as a server cluster composed of multiple servers.
Take the analysis of pathological section images of pleural effusion by the cell classification method as an example. In the course of clinical diagnosis, pleural effusion, commonly called "hydrothorax", is one form of effusion in the human serous cavities. Under normal physiological conditions there should be a certain amount of liquid in the thoracic cavity, which mainly lubricates the internal organs and facilitates movement between them. A normal adult's pleural fluid should be below 20 ml; if the fluid in the thoracic cavity exceeds this value, pleural effusion is diagnosed. Since there are many causes of pleural effusion, and effusion is often a manifestation of complications of various diseases, the clinical pathologist needs to make a specific judgement about the cause and nature of the effusion. The causes of pleural effusion can be summarized into two major classes. The first is lesions caused by inflammation, for example infectious inflammation of the pleura caused by viruses, fungi or bacteria, leading to pleural effusion, or pleural effusion caused by non-infectious diseases such as pulmonary embolism and connective tissue disease; these are referred to as non-malignant lesion cells in the thoracic cavity. The second is malignant tumor cells, where tumor cells growing in or metastasizing to the thoracic cavity invade the pleura and cause effusion, for example malignant pleural effusion caused by pleural mesothelioma, lung cancer, breast cancer or gastric cancer. The pathologist therefore needs to classify, identify and confirm diseased cells of different natures. However, cytopathology examinations are numerous and difficult, pathologists are few, diagnostic levels are inconsistent and quality control is lacking; in addition, many primary care institutions in China have no pathologist and no pathology department, so this examination work can only be placed in the clinical laboratory and completed by laboratory technicians without pathology qualifications, which easily causes misdiagnosis and missed diagnosis and seriously affects the diagnosis and treatment of patients. Using artificial intelligence to assist cell classification can effectively help doctors classify the cells in pathological section images, reduce doctors' repetitive work, and help improve working efficiency.
In one embodiment, as shown in Fig. 2, a cell classification method is provided. Taking its application to the server in Fig. 1 as an example, the method includes the following steps:
Step S200: input the image to be analyzed into the target detection model to obtain the location information and initial classification information of each target cell in the image to be analyzed; the target detection model is trained using a sample cell image set carrying annotation information as the training set, and the annotation information includes cell position and cell class.
Target detection is the process of identifying all targets of interest in an image and determining their positions and sizes. Here the target detection model is used to detect the image to be analyzed, which contains many cells. An initial target detection model is trained and tested on a sample cell image set carrying cell-position and cell-class annotation information, so that it can recognize the target cells in the image to be analyzed; the classes of the target cells are the same as the cell classes annotated in the sample image set. The image to be analyzed can be a pathological section image after image preprocessing such as denoising, enhancement and normalization. The target detection model is an SSD detection model with DenseNet as its backbone, where the DenseNet network extracts features from the input image and obtains feature maps of different sizes; the feature maps of different sizes and their corresponding prior boxes are input into the regression network of the target detection model, which performs regression classification on the feature maps and identifies the target cells in the image to be analyzed together with their location information and initial classification information.
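SSD-style detectors such as the one described here score prior boxes against annotated boxes by their overlap, conventionally measured as intersection-over-union (IoU). A minimal sketch of that overlap computation follows; the (x1, y1, x2, y2) box format is an assumption, not specified in the patent.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0    # identical boxes
assert iou((0, 0, 2, 2), (1, 1, 3, 3)) == 1 / 7  # partial overlap
```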
Step S300: segment the image to be analyzed according to the location information to obtain multiple target cell images.
The location information includes the coordinates of each target cell's bounding box. According to these coordinates, the target cells are segmented out of the image to be analyzed, yielding multiple target cell images that each contain only a single target cell, so that the classification model can perform secondary classification analysis on the segmented target cell images.
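Cropping a single-cell image out of the full image given its bounding box can be sketched over a nested-list image; the row-major pixel layout and the (x1, y1, x2, y2) box format (exclusive on the right and bottom) are assumptions for illustration.

```python
def crop(image, box):
    """image[y][x] is a pixel; box = (x1, y1, x2, y2). Returns the sub-image
    containing the single target cell."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

# A 4x5 toy image whose pixel value encodes its (y, x) position.
image = [[y * 10 + x for x in range(5)] for y in range(4)]
cell = crop(image, (1, 1, 3, 3))
assert cell == [[11, 12], [21, 22]]
```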
Step S400: input each target cell image into the classification model that includes the dilated convolutional network to obtain the secondary classification information of the corresponding target cell, where the classification model is trained on the sample cell image set.
The classification model is obtained by training on the sample cell image set carrying classification annotation information; this set consists of the single-cell sample images obtained by segmenting each multi-cell image according to the location information. The classification model is trained with the class annotations of the single-cell sample images, so that the trained model can identify the probability that a target cell image belongs to each cell class, and thereby determine the classification result of the target cell image. Compared with ordinary convolution, dilated convolution has, besides the kernel size, one additional parameter, the dilation rate, which mainly indicates the amount of dilation. Dilated convolution is identical to ordinary convolution in that the kernel size, and hence the number of parameters in the network, stays the same; the difference is that dilated convolution has a larger receptive field, so classifying the cell images yields more accurate secondary classification information. In this embodiment, the classification model includes an input layer, a feature extraction network, a dilated convolutional network, a fully connected layer and an output layer. The classification model is a multi-input model: the color image and the grayscale image of the target cell are input into the input layer respectively, where the grayscale image is obtained by converting the segmented color target cell image to grayscale. The color image is used for texture feature extraction of the cell, and the grayscale image for morphological feature extraction; by fusing the texture and morphological features, a feature vector characterizing the cell is obtained. The dilated convolutional network enlarges the convolution receptive field to obtain more accurate vector data; the cell feature vector is then input into the fully connected layer for regression classification, and the output layer calculates the probability that the cell image belongs to each preset category, taking the category with the largest probability as the secondary classification information of the cell image, where the preset categories are the categories of the classification data obtained during training of the classification model.
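The receptive-field claim can be checked numerically: a 1-D dilated convolution with kernel size k and dilation rate d spans k + (k-1)(d-1) input positions while still using only k weights. A sketch under that standard definition:

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid 1-D convolution whose kernel taps are `dilation` apart."""
    k = len(kernel)
    span = k + (k - 1) * (dilation - 1)   # receptive field per output value
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

signal = [1, 2, 3, 4, 5, 6, 7]
kernel = [1, 1, 1]                        # 3 weights in both cases below
assert dilated_conv1d(signal, kernel, 1) == [6, 9, 12, 15, 18]
assert dilated_conv1d(signal, kernel, 2) == [9, 12, 15]   # span 5, same 3 params
```

With dilation 2 each output sees 5 input positions instead of 3, illustrating the larger receptive field at an unchanged parameter count.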
Step S500: when the initial classification information is identical to the secondary classification information, mark the secondary classification information as the classification result of the target cell.
The target detection model can accurately locate the target cells and perform a simple classification, but the accuracy of that classification is limited. The classification model obtains a more accurate classification result by extracting the cell features and classifying again, and the mutual verification of the first and secondary classifications ensures the accuracy of the classification result. In other embodiments, when the classes given by the initial classification information and the secondary classification information differ, the class of the corresponding target cell image is marked as an anomalous result.
With the above cell classification method, the image to be analyzed containing many cells is input into the pre-trained target detection model, which identifies the location information and initial classification information of each target cell in the image; each target cell is then segmented out of the image according to the location information, yielding multiple cell images, which are input into the pre-trained classification model. The dilated convolutional network in the classification model performs dilated convolution on the feature data, obtaining a larger receptive field while the convolution kernel, and hence the number of parameters, stays the same, so that classifying the cell images yields more accurate secondary classification information. Finally, verification is performed by checking whether the initial and secondary classification information of each target cell are identical, producing a more accurate cell classification result. On the one hand, the target detection model achieves accurate localization and a first classification of the target cells in the image to be analyzed; on the other hand, the classification model performs classification analysis on each single target cell image and obtains accurate classification results. Through the mutual verification of the two classifications, target cells in the image to be analyzed are located quickly and classified accurately, assisting doctors in diagnostic analysis and improving working efficiency.
Disaggregated model includes feature extraction network, expansion convolutional network, full articulamentum and defeated in one of the embodiments,
Layer out.As shown in figure 3, step S400, includes the disaggregated model for expanding convolutional network by the input of target cell image, obtains mesh
Mark cell image corresponds to the secondary classification information of target cell and includes:
Step S420: input the target cell image into the feature extraction network, extract the cell feature vector of the target cell image, and input the cell feature vector into the dilated convolutional network.
Step S440: the dilated convolutional network applies dilated convolution to the cell feature vector and passes the processed cell feature vector to the fully connected layer.
Step S460: the fully connected layer performs regression classification on the cell feature vector and feeds the resulting regression classification data into the output layer.
Step S480: the output layer applies a preset activation function to the regression classification data, obtaining, for each preset category, the probability that the target cell in the target cell image belongs to that category; the category with the largest probability is marked as the secondary classification information of the target cell.
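The output-layer computation of step S480 can be sketched as follows. The softmax activation and the category names are illustrative assumptions, since the patent only speaks of a preset activation function:

```python
import math

def softmax(scores):
    """Convert raw regression scores into class probabilities."""
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def secondary_classification(scores, categories):
    """Return (per-class probabilities, label of the most probable class)."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return probs, categories[best]

# Hypothetical regression scores for three preset categories.
probs, label = secondary_classification(
    [2.0, 0.5, -1.0], ["normal", "suspicious", "abnormal"])
```

The category with the largest probability becomes the secondary classification information, exactly as step S480 prescribes.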
After the image to be analyzed has been segmented into multiple target cell images, feature data representing each target cell must be obtained. The feature data comprise texture features and shape features. Texture features describe the surface properties of the target cell; because they are computed statistically over pixel regions rather than single pixels, they do not fail to match because of local differences in detail, and they also offer noise resistance and rotational invariance. Shape feature extraction covers region features over the whole shape area as well as contour features along the object edge; in embodiments, shape features may be extracted by boundary eigenvalue methods, geometric parameter methods, shape invariant moment methods, Fourier shape descriptors, and the like. Fusing the texture feature data with the shape feature data yields the cell feature vector that characterizes the cell feature data; applying dilated convolution to this vector enlarges the receptive field of the convolution. The fully connected layer performs regression classification on each element of the cell feature vector; the output layer stores the preset activation function, computes the probability that the target image belongs to each category, sorts the probabilities, and selects the category with the largest probability as the secondary classification information of the target cell image.
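The claimed benefit of dilated convolution — a larger receptive field at an unchanged parameter count — can be illustrated with a minimal 1-D sketch (the patent's network operates on 2-D feature maps; this is illustrative only):

```python
def dilated_conv1d(x, w, dilation):
    """1-D convolution with gaps of size `dilation` between kernel taps."""
    span = (len(w) - 1) * dilation + 1   # receptive field of one output value
    return [sum(w[k] * x[i + k * dilation] for k in range(len(w)))
            for i in range(len(x) - span + 1)]

x = [1, 2, 3, 4, 5, 6, 7]
w = [1, 1, 1]                                # 3 parameters in both cases
plain   = dilated_conv1d(x, w, dilation=1)   # receptive field 3
dilated = dilated_conv1d(x, w, dilation=2)   # receptive field 5, same 3 weights
```

With dilation 2, each output value is computed from a span of 5 input values, yet the kernel still holds only 3 weights — the "same parameters, bigger receptive field" property the text relies on.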
In one of the embodiments, the feature extraction network includes a texture feature extraction network and a morphological feature extraction network. As shown in figure 4, step S420, inputting the target cell image into the feature extraction network and extracting the cell feature vector of the target cell image, includes:
Step S422: input the cell image into the texture feature extraction network and extract the texture feature vector of the target cell.
Step S424: convert the target cell image to grayscale and input the resulting grayscale image into the morphological feature extraction network to extract the morphological feature vector of the target cell.
Step S426: convert the texture feature vector and the morphological feature vector into two-dimensional feature vectors.
Step S428: splice the two-dimensional feature vectors according to preset weight parameters to obtain the cell feature vector of the target cell image.
The texture feature extraction network consists of several convolutional layers and a fully connected layer, and its input is the target color cell image. The shape feature extraction network is a fully convolutional neural network; it differs from the texture feature extraction network in that the fully connected layer is replaced by a convolutional layer, which reduces the number of parameters. Its input is the grayscale target cell image produced by the graying step; graying removes the influence of color in the target cell image, so that the shape feature extraction network attends only to the shape information of the target cell image. Before vector splicing, the texture feature vector and the shape feature vector are each normalized; according to the preset weight coefficient, the normalized texture feature vector and shape feature vector are spliced, and the spliced vector is normalized once more to obtain the cell feature vector of the target cell image. Here, vector splicing means extending the vector dimension: for example, splicing a texture feature vector of dimension A with a shape feature vector of dimension A gives a cell feature vector of dimension 2A. For example, the normalization and fusion may take the form

f_fuse = λ·(f_rgb / ‖f_rgb‖) ⊕ (1 − λ)·(f_s / ‖f_s‖)

where f_fuse is the cell feature vector, f_rgb is the texture feature vector, f_s is the shape feature vector, ‖·‖ is the norm operation (the 2-norm is chosen here), ⊕ denotes concatenation of the two vectors, and the real number λ ∈ (0, 1] is the weight coefficient — an empirical value that can be determined by analyzing repeated experimental results and set as required.
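The normalize–weight–splice fusion described above can be sketched as follows. Since the formula itself survives only through its symbols, the exact placement of the weight (λ on the texture branch, 1 − λ on the shape branch) is an assumption:

```python
import math

def l2_normalize(v):
    """Divide a vector by its 2-norm, the norm chosen in the text."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def fuse(f_rgb, f_s, lam):
    """Normalize both vectors, weight by lam / (1 - lam), concatenate,
    then normalize the spliced vector again."""
    assert 0 < lam <= 1                    # weight coefficient from the text
    a = [lam * x for x in l2_normalize(f_rgb)]
    b = [(1 - lam) * x for x in l2_normalize(f_s)]
    fused = a + b                          # splicing doubles the dimension
    return l2_normalize(fused)

# Hypothetical 2-dimensional texture and shape vectors.
f_fuse = fuse([3.0, 4.0], [1.0, 0.0], lam=0.5)
```

Note how an A-dimensional texture vector and an A-dimensional shape vector yield a 2A-dimensional cell feature vector, matching the dimension rule in the text.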
In one of the embodiments, as shown in figure 5, before step S200 — inputting the image to be analyzed into the target detection model to obtain the location information and initial classification information of each target cell in the image — the method further includes:
Step S110 obtains the sample cell image set and K initial target detection model for being divided into K parts of data.
Step S120 successively chooses wherein N parts of data in K parts of data as test set, and K-N parts of data are as training set
Data combination is carried out, the K group data set for obtaining various combination closes, and K group data set closes opposite with K initial target detection model
It answers.
Step S130 trains initial target detection model corresponding with data acquisition system according to the training set in data acquisition system,
According to the test set in same sample image data, the model evaluation index for the initial target detection model that training is completed is calculated.
Step S140 calculates the average value of K group model evaluation index, and using average value as goal-based assessment index, screens K
In group model evaluation index with the smallest model evaluation index of goal-based assessment scale error.
The corresponding initial target detection model of the smallest model evaluation index of error is labeled as selected objective target by step S150
Detection model;
Step S160 carries out model training to selected objective target detection model, is trained according to sample cell image set
The target detection model of completion.
The hyperparameters of the K initial target detection models differ from one another; training and testing the K models on the sample image data serves precisely to tune the hyperparameters of the target detection model and to assess each model's ability. One way to perform this assessment is: compute the evaluation parameter of each initial target detection model, take the average of the K evaluation parameters as the target evaluation parameter, and select from the K initial target detection models the one whose evaluation parameter has the smallest error relative to the target evaluation parameter — this becomes the preferred target detection model. The preferred model is then trained on the sample cell image set to obtain the trained target detection model. For example, training the preferred target detection model may proceed as follows: obtain 5000 finely annotated sample cell images and divide them in the ratio 8:1:1 into training, validation and test sets; plan 1,000,000 iterations in the training stage, testing on the validation set every 1000 iterations so as to tune the hyperparameters of the target detection model and make an initial assessment of its ability. After the training stage, run model prediction on the test set to evaluate the target detection model and refine its hyperparameters in a targeted way. A sample cell image set carrying annotation information is one annotated manually: specifically, a pathologist can manually annotate the sample slice images to be processed by the server, digitized by a digital pathology scanner, for the server to use. The annotated content includes the cell class information and the cell location information of every sample cell image in each sample slice image.
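The selection procedure just described — averaging the K evaluation parameters and keeping the model whose parameter is closest to that average — can be sketched as:

```python
def select_model(metrics):
    """Average the K evaluation metrics and pick the model whose metric
    deviates least from that target value."""
    target = sum(metrics) / len(metrics)
    best = min(range(len(metrics)), key=lambda i: abs(metrics[i] - target))
    return best, target

# Hypothetical evaluation metrics for K = 4 initial detection models.
idx, target = select_model([0.90, 0.85, 0.88, 0.95])
```

The model closest to the group average is preferred over the single best scorer, on the idea that an extreme score on one fold may not generalize.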
In an embodiment, the sample cell image is input into the DenseNet network of the preferred target detection model to obtain feature maps of the sample cell image at multiple different sizes. These feature maps are input into the candidate-box processing module of the preferred target detection model, which regresses multiple candidate boxes over the differently sized feature maps of the sample cell image to determine the prior box corresponding to each feature map size. The differently sized feature maps of the sample cell image and their corresponding prior boxes are then input into the convolutional network of the preferred model's DenseNet, yielding the location information and initial classification information of each target cell in every sample cell image. The obtained location information and initial classification information are compared against the annotation information of the sample cell images, and the network architecture parameters of the preferred target detection model are adjusted accordingly; when the recognition accuracy of the preferred target detection model reaches the preset accuracy requirement, its training is complete and the target detection model used to analyze the image to be analyzed is obtained.
In one of the embodiments, the target detection model is an SSD detection model with a DenseNet network as its backbone.
The target detection model is an SSD detection model whose backbone is a DenseNet network. In a DenseNet network, every layer receives the outputs of all preceding layers as additional input, and each layer is concatenated with all preceding layers along the channel dimension (the feature maps of these layers share the same size) before serving as input to the next layer. Directly connecting feature maps from different layers in this way enables feature reuse and improves efficiency. In the present embodiment, the DenseNet network may use a DenseBlock + Convpool structure, where a DenseBlock is a module comprising several layers whose feature maps are all the same size and which are densely connected to one another, and a Convpool module connects two adjacent DenseBlocks and reduces the feature map size.
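The channel-dimension concatenation that characterizes a DenseBlock can be illustrated with simple channel arithmetic; the input channel count and growth rate below are illustrative assumptions, not values fixed by the patent:

```python
def dense_block_channels(c_in, num_layers, growth_rate):
    """Track input-channel counts inside a DenseBlock: every layer's output
    (growth_rate channels) is concatenated onto all earlier feature maps."""
    inputs = []
    c = c_in
    for _ in range(num_layers):
        inputs.append(c)      # this layer sees every previous feature map
        c += growth_rate      # its output is appended along the channel dim
    return inputs, c

# A hypothetical 4-layer DenseBlock with 64 input channels, growth rate 32.
seen, c_out = dense_block_channels(c_in=64, num_layers=4, growth_rate=32)
```

Each successive layer sees more channels (64, 96, 128, 160) because all earlier feature maps remain available — the feature-reuse property the text credits DenseNet with.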
In one of the embodiments, as shown in figure 6, step S200, inputting the image to be analyzed into the target detection model to obtain the location information and initial classification information of each target cell in the image, includes:
Step S220: input the image to be analyzed into the DenseNet network of the target detection model to obtain the feature maps of the image.
Step S240: apply convolutional computation to the feature maps and, from the convolutional result, determine the multiple target cells contained in the target image, obtaining the location information and initial category information of each target cell.
The image to be analyzed is input into the DenseNet network of the target detection model, producing feature maps of multiple different sizes. These are input into the candidate-box processing module of the target detection model to obtain the prior box corresponding to each feature map size; the differently sized feature maps and their corresponding prior boxes are then fed into the convolutional network of the target detection model, yielding the location information and initial category information of every target cell in the image to be analyzed.
After the server obtains the feature maps of the image to be analyzed, it can set prior boxes for the differently sized feature maps. From the feature maps and the prior boxes, convolutional computation determines each target cell's location in the feature map; from the mapping between the image to be analyzed and the feature map, together with the target cell's location in the feature map, the cell's location in the image to be analyzed is determined. The initial classification information of each target cell is determined according to the classification network in the trained target detection model.
In a deep neural network, the shallow feature maps contain more detail and are better suited to detecting small objects, while the deeper feature maps, whose receptive fields are enlarged, carry more global information and are better suited to detecting large objects. So that cells of different sizes are all detected well, candidate boxes of different sizes can be regressed on different feature maps during model training.
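One common way to realize "different candidate-box sizes on different feature maps" is the SSD scale schedule, sketched below. The patent does not fix a rule, so the linear schedule and the s_min/s_max values are assumptions borrowed from the standard SSD design:

```python
def prior_scales(num_maps, s_min=0.2, s_max=0.9):
    """SSD-style rule: give each of the feature maps a prior-box scale that
    grows linearly from s_min (shallow maps, small objects) to s_max
    (deep maps, large objects)."""
    if num_maps == 1:
        return [s_min]
    step = (s_max - s_min) / (num_maps - 1)
    return [s_min + k * step for k in range(num_maps)]

# Six feature maps of decreasing spatial size, shallow to deep.
scales = prior_scales(6)
```

Small scales land on the large, detail-rich shallow maps and large scales on the small, global deep maps — matching the shallow-small / deep-large observation in the text.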
In one of the embodiments, after step S400 — inputting the target cell image into the classification model containing the dilated convolutional network and obtaining the secondary classification information of the target cell — the method further includes:
marking the target cell image as an abnormal classification when the initial classification information and the secondary classification information differ.
After obtaining the initial classification information and the secondary classification information, the server can combine the two to determine the classification result of the target cell; when the initial and secondary classification information differ, the target cell is marked as an abnormal result. An abnormal result means a cell whose classification cannot be determined; the server can mark the cells carrying abnormal results and push them to the terminal.
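The double-verification logic — accept the secondary label when it agrees with the initial one, otherwise flag the cell — can be sketched as (function and label names are illustrative):

```python
def verify(initial_label, secondary_label):
    """Cross-check the detector's initial label against the classifier's
    secondary label; disagreement flags the cell for manual review."""
    if initial_label == secondary_label:
        return secondary_label      # verified classification result
    return "abnormal"               # pushed to the terminal for a doctor

agreed   = verify("suspicious", "suspicious")
disputed = verify("normal", "suspicious")
```

Agreement yields the final classification result; disagreement yields the abnormal marking that the server forwards for human review.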
In one of the embodiments, as shown in figure 6, before step S200 — inputting the image to be analyzed into the target detection model to obtain the location information and initial classification information of each target cell in the image — the method further includes:
Step S180: obtain a slice image and apply image preprocessing to it to obtain the image to be analyzed, where the image preprocessing includes image denoising, image enhancement, image scaling, and pixel value and color normalization.
The pathological slice image is preprocessed to remove the noise it contains, correct problems such as uneven brightness, and obtain clear pathological data for the next processing step. For image denoising, the pathological slice image can be processed by Gaussian filtering: convolving a preset Gaussian filter with the pathological slice image yields the denoised slice image. To bring out the local detail of the image, enlarge the difference between lesion-region and normal-region features, and suppress features of no interest, image enhancement can improve image quality, enrich the information content, and strengthen interpretation and recognition; specifically, a logarithmic (Log) transform can be used as the image enhancement algorithm for the pathological slice image. The logarithmic transform stretches the low gray values of the image, showing more detail in the low-gray regions, while compressing its high gray values and reducing their detail, thereby emphasizing the low-gray portion of the image. Normalizing the pixel values of the image means rescaling the brightness range from (0, 255) to (0, 1), which can be done with the formula y = (x − MinValue)/(MaxValue − MinValue), where x is the pixel value before normalization, y is the adjusted pixel value, MinValue is the minimum of the original image pixels, and MaxValue is their maximum.
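The min-max normalization formula y = (x − MinValue)/(MaxValue − MinValue) and the logarithmic enhancement can be sketched per pixel over a flat list of values (a minimal sketch; real preprocessing operates on 2-D images):

```python
import math

def min_max_normalize(pixels):
    """Map pixel values from their original range onto (0, 1):
    y = (x - MinValue) / (MaxValue - MinValue)."""
    lo, hi = min(pixels), max(pixels)
    return [(x - lo) / (hi - lo) for x in pixels]

def log_enhance(pixels, c=1.0):
    """Log transform: stretches low gray values, compresses high ones."""
    return [c * math.log(1 + x) for x in pixels]

norm = min_max_normalize([0, 51, 102, 255])
enhanced = log_enhance([0, 10, 100])
```

Note how the gaps between low gray values widen under the log transform while gaps between high values shrink, which is the detail-emphasis effect described above.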
It should be understood that, although the steps in the flowcharts of Figs. 2-6 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless expressly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-6 may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; nor need these sub-steps or stages be executed sequentially — they may be performed in turn or alternately with at least part of other steps, or of the sub-steps or stages of other steps.
In one embodiment, as shown in figure 7, a cell classification device is provided, comprising:
a target detection module 200 for inputting the image to be analyzed into the target detection model and obtaining the location information and initial classification information of each target cell in the image, the target detection model being trained with a sample cell image set carrying annotation information as the training set, the annotation information including cell position and cell class;
an image segmentation module 300 for dividing the image to be analyzed according to the location information to obtain multiple cell images;
a classification processing module 400 for inputting the target cell image into the classification model containing the dilated convolutional network and obtaining the secondary classification information of the target cell corresponding to the target cell image, the classification model being trained on the sample cell image set; and
a classification result marking module 500 for marking the secondary classification information as the classification result of the target cell when the initial classification information and the secondary classification information are identical.
In one of the embodiments, the classification processing module 400 includes an input layer, a feature extraction network, a dilated convolutional network, a fully connected layer and an output layer. The input layer feeds the target cell image into the feature extraction network; the feature extraction network extracts the cell feature vector of the target cell image and inputs it into the dilated convolutional network; the dilated convolutional network applies dilated convolution to the cell feature vector and passes the processed vector to the fully connected layer; the fully connected layer performs regression classification on the cell feature vector and feeds the resulting regression classification data into the output layer; and the output layer applies the preset activation function to the regression classification data, obtains the probability that the target cell in the target cell image belongs to each preset category, and marks the category with the largest probability as the secondary classification information of the target cell.
In one of the embodiments, the feature extraction network includes a texture feature extraction network, a morphological feature extraction network and a vector splicing network:
the input layer feeds the cell image into the texture feature extraction network, which extracts the texture feature vector of the target cell;
the input layer also converts the target cell image to grayscale and feeds the grayscale image into the morphological feature extraction network, which extracts the morphological feature vector of the target cell; and
the vector splicing network converts the texture feature vector and the morphological feature vector into two-dimensional feature vectors and splices them according to the preset weight parameters to obtain the cell feature vector of the target cell image.
In one of the embodiments, the cell classification device further includes a target detection model training module for: obtaining a sample cell image set divided into K parts of data and K initial target detection models; successively selecting N of the K parts as the test set and combining the remaining K−N parts as the training set, obtaining K differently combined data sets corresponding to the K initial target detection models; training the initial target detection model corresponding to each data set on its training set and computing the model evaluation index of the trained model on the test set of the same sample image data; computing the average of the K model evaluation indexes, taking the average as the target evaluation index, and finding, among the K model evaluation indexes, the one with the smallest error relative to it; marking the initial target detection model corresponding to the evaluation index with the smallest error as the preferred target detection model; and training the preferred target detection model on the sample cell image set to obtain the trained target detection model.
In one of the embodiments, the target detection model is an SSD detection model with a DenseNet network as its backbone.
In one of the embodiments, the target detection module 200 is further used to input the image to be analyzed into the DenseNet network of the target detection model, obtain the feature maps of the image, apply convolutional computation to the feature maps and, from the convolutional result, determine the multiple target cells contained in the target image, obtaining the location information and initial category information of each target cell.
In one of the embodiments, the classification result marking module 500 is further used to mark the target cell as an abnormal classification when the initial classification information and the secondary classification information differ.
In one of the embodiments, the cell classification device further includes an image preprocessing module for obtaining a slice image and applying image preprocessing to it to obtain the image to be analyzed, the image preprocessing including image denoising, image enhancement, image scaling, and pixel value and color normalization.
The cell classification device above inputs the acquired image to be analyzed, containing many cells, into the pre-trained target detection model, which identifies the location information and initial classification information of each target cell in the image; it then divides each target cell out of the image according to the location information, obtaining multiple cell images, and inputs each cell image into the pre-trained classification model. The dilated convolutional network in the classification model applies dilated convolution to the feature data, obtaining a larger receptive field while the convolution kernel, and therefore the number of parameters, stays unchanged, so that classifying the cell image yields more accurate secondary classification information. Finally, the result is verified by checking whether the initial and secondary classification information of the target cell agree, giving a more accurate cell classification result. On the one hand, the target detection model achieves accurate localization and a first classification of the target cells in the image to be analyzed; on the other hand, the classification model analyzes each single target cell image and produces an accurate classification result. The mutual verification of the two classifications improves both the rapid localization and the accurate classification of target cells in the image to be analyzed, assisting the doctor in diagnostic analysis and improving working efficiency.
For the specific limitations of the cell classification device, refer to the limitations of the cell classification method above, which are not repeated here. Each module in the cell classification device may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in hardware form in, or independent of, a processor in the computer equipment, or stored in software form in the memory of the computer equipment, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer equipment is provided, which may be a server whose internal structure may be as shown in figure 8. The computer equipment includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer equipment provides computing and control capability. The memory of the computer equipment includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer program and a database, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer equipment stores cell classification data. The network interface of the computer equipment communicates with external terminals over a network connection. The computer program, when executed by the processor, implements a cell classification method.
Those skilled in the art will understand that the structure shown in figure 8 is only a block diagram of the part of the structure relevant to the present solution and does not limit the computer equipment to which the solution is applied; a specific computer equipment may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
In one embodiment, a computer equipment is provided, including a memory storing a computer program and a processor that, when executing the computer program, performs the steps of:
inputting the image to be analyzed into the target detection model to obtain the location information and initial classification information of each target cell in the image, the target detection model being trained with a sample cell image set carrying annotation information as the training set, the annotation information including cell position and cell class;
dividing the image to be analyzed according to the location information to obtain multiple target cell images;
inputting the target cell image into the classification model containing the dilated convolutional network to obtain the secondary classification information of the target cell corresponding to the target cell image, the classification model being trained on the sample cell image set; and
marking the secondary classification information as the classification result of the target cell when the initial classification information and the secondary classification information are identical.
In one embodiment, the classification model includes a feature extraction network, a dilated convolutional network, a fully connected layer and an output layer, and the processor, when executing the computer program, further performs the steps of:
inputting the target cell image into the feature extraction network, extracting the cell feature vector of the target cell image, and inputting the cell feature vector into the dilated convolutional network;
the dilated convolutional network applying dilated convolution to the cell feature vector and passing the processed cell feature vector to the fully connected layer;
the fully connected layer performing regression classification on the cell feature vector and feeding the resulting regression classification data into the output layer; and
the output layer applying the preset activation function to the regression classification data, obtaining the probability that the target cell in the target cell image belongs to each preset category, and marking the category with the largest probability as the secondary classification information of the target cell.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
inputting the cell image into the texture feature extraction network and extracting the texture feature vector of the target cell;
converting the target cell image to grayscale, inputting the grayscale image into the morphological feature extraction network, and extracting the morphological feature vector of the target cell;
converting the texture feature vector and the morphological feature vector into two-dimensional feature vectors; and
splicing the two-dimensional feature vectors according to the preset weight parameters to obtain the cell feature vector of the target cell image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
obtaining a sample cell image set divided into K parts of data and K initial target detection models;
successively selecting N of the K parts as the test set and combining the remaining K−N parts as the training set, obtaining K differently combined data sets corresponding to the K initial target detection models;
training the initial target detection model corresponding to each data set on its training set, and computing the model evaluation index of the trained initial target detection model on the test set of the same sample image data;
computing the average of the K model evaluation indexes, taking the average as the target evaluation index, and finding, among the K model evaluation indexes, the one with the smallest error relative to it;
marking the initial target detection model corresponding to the evaluation index with the smallest error as the preferred target detection model; and
training the preferred target detection model on the sample cell image set to obtain the trained target detection model.
In one embodiment, the processor, when executing the computer program, further performs the following steps:
inputting the image to be analyzed into the DenseNet network in the target detection model to obtain a feature map of the image to be analyzed; and
performing convolution calculation on the feature map, determining, according to the convolution results, the plurality of target cells contained in the target image, and obtaining the location information and initial classification information of each target cell.
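A toy illustration of performing convolution calculation on a feature map and thresholding the result into candidate cell positions; the real detection head (SSD-style, per the embodiments) is far richer, so this is only a conceptual sketch with hypothetical values:

```python
def conv2d_valid(feature_map, kernel):
    """Naive 2-D valid convolution (cross-correlation) over a feature map
    given as a list of rows."""
    fh, fw = len(feature_map), len(feature_map[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(fh - kh + 1):
        row = []
        for j in range(fw - kw + 1):
            s = sum(feature_map[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def propose_cells(score_map, threshold):
    """Return (row, col) positions whose detection score exceeds the
    threshold -- a stand-in for deriving location information."""
    return [(i, j) for i, row in enumerate(score_map)
            for j, v in enumerate(row) if v > threshold]
```

In the actual model the convolution weights are learned and the outputs encode both box offsets and class scores; here a single score map stands in for both.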
In one embodiment, the processor, when executing the computer program, further performs the following step:
when the initial classification information differs from the secondary classification information, marking the target cell as an abnormal classification.
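The mutual-verification rule of the two classifiers reduces to a small decision function; the `status` labels and class names here are hypothetical, not terms from the specification:

```python
def verify_classification(initial, secondary):
    """Mutual verification of the two classifications: when they agree,
    the secondary classification becomes the final result; otherwise the
    target cell is flagged as an abnormal classification for review."""
    if initial == secondary:
        return {"result": secondary, "status": "confirmed"}
    return {"result": None, "status": "abnormal"}
```

Flagging disagreements rather than silently preferring one model is what lets the pipeline hand uncertain cells back to the doctor.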
In one embodiment, the processor, when executing the computer program, further performs the following step:
obtaining a slice image and performing image preprocessing on the slice image to obtain the image to be analyzed, wherein the image preprocessing includes image denoising, image enhancement, image scaling, and pixel-value and color normalization.
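Of the preprocessing steps listed, pixel-value normalization is the easiest to show concretely; a minimal min-max sketch (denoising, enhancement, scaling and color normalization would be separate passes, not covered here):

```python
def normalize_pixels(pixels, lo=0.0, hi=1.0):
    """Min-max normalize a flat list of pixel values into [lo, hi].
    Constant-valued input maps to lo to avoid division by zero."""
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:
        return [lo for _ in pixels]
    scale = (hi - lo) / (pmax - pmin)
    return [lo + (p - pmin) * scale for p in pixels]
```

Normalizing slide scans onto a common value range keeps the detection and classification models from being thrown off by scanner-to-scanner brightness differences.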
The above computer device implementing the cell classification method obtains an image to be analyzed that contains many cells and inputs it into a pre-trained target detection model, which identifies the location information and initial classification information of each target cell in the image to be analyzed. Each target cell is then segmented from the image to be analyzed according to the location information to obtain multiple cell images, and the cell images are input into a pre-trained classification model. The dilated convolutional network in the classification model applies dilated convolution to the features, obtaining a larger receptive field while the convolution kernel stays the same size, i.e. without increasing the number of parameters, so that classifying the cell images yields more accurate secondary classification information. Finally, the result is verified by checking whether the initial classification information and the secondary classification information of each target cell are identical, yielding a more accurate cell classification result. On the one hand, the target detection model achieves accurate localization and a first classification of the target cells in the image to be analyzed; on the other hand, the classification model performs classification analysis on each single target cell image to obtain an accurate classification result. Through the mutual verification of the two classifications, target cells in the image to be analyzed are located quickly and classified accurately, assisting doctors in diagnostic analysis and improving work efficiency.
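The receptive-field claim can be checked with standard dilated-convolution arithmetic: a k x k kernel with dilation rate d spans d*(k-1)+1 input positions per axis while keeping k*k parameters. A sketch:

```python
def effective_kernel_extent(k, dilation):
    """Effective spatial extent of a k x k kernel with the given dilation
    rate; the parameter count (k * k) is unchanged by dilation."""
    return dilation * (k - 1) + 1

# A 3x3 kernel: dilation 1 covers 3 positions per axis, dilation 2 covers
# 5, dilation 4 covers 9 -- a larger receptive field, same 9 parameters.
extents = [effective_kernel_extent(3, d) for d in (1, 2, 4)]
```

This is why the classification model can see more cellular context per layer without growing its convolution kernels.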
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, performs the following steps:
inputting an image to be analyzed into a target detection model to obtain the location information and initial classification information of each target cell in the image to be analyzed, the target detection model being trained with a sample cell image set carrying annotation information as a training set, the annotation information including cell positions and cell classes;
segmenting the image to be analyzed according to the location information to obtain multiple target cell images;
inputting the target cell images into a classification model comprising a dilated convolutional network to obtain the secondary classification information of the target cell corresponding to each target cell image, wherein the classification model is trained on the sample cell image set; and
when the initial classification information is identical to the secondary classification information, marking the secondary classification information as the classification result of the target cell.
In one embodiment, the classification model includes a feature extraction network, a dilated convolutional network, a fully connected layer and an output layer; the computer program, when executed by the processor, further performs the following steps:
inputting the target cell image into the texture feature extraction network to extract the texture feature vector of the target cell;
performing graying processing on the target cell image, and inputting the grayed image into the morphological feature extraction network to extract the morphological feature vector of the target cell;
converting the texture feature vector and the morphological feature vector into two-dimensional feature vectors respectively; and
concatenating the two-dimensional feature vectors according to preset weight parameters to obtain the cell feature vector of the target cell image.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
obtaining a sample cell image set divided into K parts of data, and K initial target detection models;
successively selecting N of the K parts of data as a test set and the remaining K-N parts as a training set, and combining them to obtain K differently combined data sets, the K data sets corresponding to the K initial target detection models;
training, for each data set, the corresponding initial target detection model on its training set, and computing a model evaluation metric of the trained initial target detection model on the test set of the same data set;
computing the average of the K model evaluation metrics, taking the average as a target evaluation metric, and selecting from the K model evaluation metrics the one with the smallest error relative to the target evaluation metric;
marking the initial target detection model corresponding to the model evaluation metric with the smallest error as a selected target detection model; and
performing model training on the selected target detection model with the sample cell image set to obtain the trained target detection model.
In one embodiment, the computer program, when executed by the processor, further performs the following steps:
inputting the image to be analyzed into the DenseNet network in the target detection model to obtain a feature map of the image to be analyzed; and
performing convolution calculation on the feature map, determining, according to the convolution results, the plurality of target cells contained in the target image, and obtaining the location information and initial classification information of each target cell.
In one embodiment, the computer program, when executed by the processor, further performs the following step:
when the initial classification information differs from the secondary classification information, marking the target cell as an abnormal classification.
In one embodiment, the computer program, when executed by the processor, further performs the following step:
obtaining a slice image and performing image preprocessing on the slice image to obtain the image to be analyzed, wherein the image preprocessing includes image denoising, image enhancement, image scaling, and pixel-value and color normalization.
The above computer-readable storage medium implementing the cell classification method obtains an image to be analyzed that contains many cells and inputs it into a pre-trained target detection model, which identifies the location information and initial classification information of each target cell in the image to be analyzed. Each target cell is then segmented from the image to be analyzed according to the location information to obtain multiple cell images, and the cell images are input into a pre-trained classification model. The dilated convolutional network in the classification model applies dilated convolution to the features, obtaining a larger receptive field while the convolution kernel stays the same size, i.e. without increasing the number of parameters, so that classifying the cell images yields more accurate secondary classification information. Finally, the result is verified by checking whether the initial classification information and the secondary classification information of each target cell are identical, yielding a more accurate cell classification result. On the one hand, the target detection model achieves accurate localization and a first classification of the target cells in the image to be analyzed; on the other hand, the classification model performs classification analysis on each single target cell image to obtain an accurate classification result. Through the mutual verification of the two classifications, target cells in the image to be analyzed are located quickly and classified accurately, assisting doctors in diagnostic analysis and improving work efficiency.
Those of ordinary skill in the art will appreciate that all or part of the processes of the above embodiment methods can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it shall be considered to be within the scope of this specification.
The above embodiments express only several implementations of this application, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of this application, all of which fall within the protection scope of this application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A cell classification method, the method comprising:
inputting an image to be analyzed into a target detection model to obtain location information and initial classification information of each target cell in the image to be analyzed, the target detection model being trained with a sample cell image set carrying annotation information as a training set, the annotation information comprising cell positions and cell classes;
segmenting the image to be analyzed according to the location information to obtain a plurality of target cell images;
inputting the target cell images into a classification model comprising a dilated convolutional network to obtain secondary classification information of the target cell corresponding to each target cell image, wherein the classification model is trained on the sample cell image set; and
when the initial classification information is identical to the secondary classification information, marking the secondary classification information as the classification result of the target cell.
2. The method according to claim 1, wherein the classification model comprises a feature extraction network, a dilated convolutional network, a fully connected layer and an output layer; and the inputting the target cell image into the classification model comprising the dilated convolutional network to obtain the secondary classification information of the target cell corresponding to the target cell image comprises:
inputting the target cell image into the feature extraction network, extracting a cell feature vector of the target cell image, and inputting the cell feature vector into the dilated convolutional network;
performing, by the dilated convolutional network, dilated convolution processing on the cell feature vector, and inputting the cell feature vector after the dilated convolution processing into the fully connected layer;
performing, by the fully connected layer, regression classification on the cell feature vector, and inputting the resulting regression classification data into the output layer; and
processing, by the output layer, the regression classification data according to a preset activation function to obtain probability data that the target cell corresponding to the target cell image belongs to each preset category, and marking the preset category with the largest probability data as the secondary classification information of the target cell.
3. The method according to claim 2, wherein the feature extraction network comprises a texture feature extraction network and a morphological feature extraction network; and the inputting the target cell image into the feature extraction network to extract the cell feature vector of the target cell image comprises:
inputting the target cell image into the texture feature extraction network to extract a texture feature vector of the target cell;
performing graying processing on the target cell image, and inputting the grayed image into the morphological feature extraction network to extract a morphological feature vector of the target cell;
converting the texture feature vector and the morphological feature vector into two-dimensional feature vectors respectively; and
concatenating the two-dimensional feature vectors according to preset weight parameters to obtain the cell feature vector of the target cell image.
4. The method according to claim 1, wherein before the inputting an image to be analyzed into a target detection model to obtain the location information and initial classification information of each target cell in the image to be analyzed, the method further comprises:
obtaining a sample cell image set divided into K parts of data, and K initial target detection models;
successively selecting N of the K parts of data as a test set and the remaining K-N parts as a training set, and combining them to obtain K differently combined data sets, the K data sets corresponding to the K initial target detection models;
training, for each data set, the initial target detection model corresponding to that data set on its training set, and computing a model evaluation metric of the trained initial target detection model on the test set of the same data set;
computing an average of the K model evaluation metrics, taking the average as a target evaluation metric, and selecting, from the K model evaluation metrics, the model evaluation metric with the smallest error relative to the target evaluation metric;
marking the initial target detection model corresponding to the model evaluation metric with the smallest error as a selected target detection model; and
performing model training on the selected target detection model with the sample cell image set to obtain the trained target detection model.
5. The method according to claim 1, wherein the target detection model is an SSD detection model with a DenseNet network as its backbone network.
6. The method according to claim 5, wherein the inputting an image to be analyzed into a target detection model to obtain the location information and initial classification information of each target cell in the image to be analyzed comprises:
inputting the image to be analyzed into the DenseNet network in the target detection model to obtain a feature map of the image to be analyzed; and
performing convolution calculation on the feature map, determining, according to the convolution results, the plurality of target cells contained in the target image, and obtaining the location information and initial classification information of each target cell.
7. The method according to claim 1, wherein before the inputting an image to be analyzed into a target detection model to obtain the location information and initial classification information of each target cell in the image to be analyzed, the method further comprises:
obtaining a slice image and performing image preprocessing on the slice image to obtain the image to be analyzed, wherein the image preprocessing comprises image denoising, image enhancement, image scaling, and pixel-value and color normalization.
8. A cell classification device, the device comprising:
a target detection module, configured to input an image to be analyzed into a target detection model to obtain location information and initial classification information of each target cell in the image to be analyzed, the target detection model being trained with a sample cell image set carrying annotation information as a training set, the annotation information comprising cell positions and cell classes;
an image segmentation module, configured to segment the image to be analyzed according to the location information to obtain a plurality of target cell images;
a classification processing module, configured to input the target cell images into a classification model comprising a dilated convolutional network to obtain secondary classification information of the target cell corresponding to each target cell image, wherein the classification model is trained on the sample cell image set; and
a classification result marking module, configured to mark, when the initial classification information is identical to the secondary classification information, the secondary classification information as the classification result of the target cell.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910393118.8A CN110119710A (en) | 2019-05-13 | 2019-05-13 | Cell sorting method, device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910393118.8A CN110119710A (en) | 2019-05-13 | 2019-05-13 | Cell sorting method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110119710A true CN110119710A (en) | 2019-08-13 |
Family
ID=67522223
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910393118.8A Pending CN110119710A (en) | 2019-05-13 | 2019-05-13 | Cell sorting method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110119710A (en) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110569769A (en) * | 2019-08-29 | 2019-12-13 | 浙江大搜车软件技术有限公司 | image recognition method and device, computer equipment and storage medium |
CN110797097A (en) * | 2019-10-11 | 2020-02-14 | 武汉兰丁医学高科技有限公司 | Artificial intelligence cloud diagnosis platform |
CN110807426A (en) * | 2019-11-05 | 2020-02-18 | 北京罗玛壹科技有限公司 | Parasite detection system and method based on deep learning |
CN110866931A (en) * | 2019-11-18 | 2020-03-06 | 东声(苏州)智能科技有限公司 | Image segmentation model training method and classification-based enhanced image segmentation method |
CN111079579A (en) * | 2019-12-02 | 2020-04-28 | 英华达(上海)科技有限公司 | Cell image recognition method, device and system |
CN111291667A (en) * | 2020-01-22 | 2020-06-16 | 上海交通大学 | Method for detecting abnormality in cell visual field map and storage medium |
CN111461220A (en) * | 2020-04-01 | 2020-07-28 | 腾讯科技(深圳)有限公司 | Image analysis method, image analysis device, and image analysis system |
CN111815633A (en) * | 2020-09-08 | 2020-10-23 | 上海思路迪医学检验所有限公司 | Medical image diagnosis apparatus, image processing apparatus and method, determination unit, and storage medium |
CN111968106A (en) * | 2020-08-28 | 2020-11-20 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112017730A (en) * | 2020-08-26 | 2020-12-01 | 东莞太力生物工程有限公司 | Cell screening method and device based on expression quantity prediction model |
CN112884055A (en) * | 2021-03-03 | 2021-06-01 | 歌尔股份有限公司 | Target labeling method and target labeling device |
CN112926612A (en) * | 2019-12-06 | 2021-06-08 | 中移(成都)信息通信科技有限公司 | Pathological image classification model training method, pathological image classification method and device |
CN113033389A (en) * | 2021-03-23 | 2021-06-25 | 天津凌视科技有限公司 | Method and system for image recognition by using high-speed imaging device |
CN113076909A (en) * | 2021-04-16 | 2021-07-06 | 重庆大学附属肿瘤医院 | Automatic cell detection method |
CN113095383A (en) * | 2021-03-30 | 2021-07-09 | 广州图匠数据科技有限公司 | Auxiliary sale material identification method and device |
CN113243018A (en) * | 2020-08-01 | 2021-08-10 | 商汤国际私人有限公司 | Target object identification method and device |
CN113257412A (en) * | 2021-06-16 | 2021-08-13 | 腾讯科技(深圳)有限公司 | Information processing method, information processing device, computer equipment and storage medium |
CN113409923A (en) * | 2021-05-25 | 2021-09-17 | 济南大学 | Error correction method and system in bone marrow image individual cell automatic marking |
CN113468936A (en) * | 2020-06-23 | 2021-10-01 | 青岛海信电子产业控股股份有限公司 | Food material identification method, device and equipment |
CN113744798A (en) * | 2021-09-01 | 2021-12-03 | 腾讯医疗健康(深圳)有限公司 | Tissue sample classification method, device, equipment and storage medium |
WO2022029482A1 (en) * | 2020-08-01 | 2022-02-10 | Sensetime International Pte. Ltd. | Target object identification method and apparatus |
CN114239678A (en) * | 2021-11-09 | 2022-03-25 | 杭州迪英加科技有限公司 | Pathological section image labeling method and system and readable storage medium |
WO2022083047A1 (en) * | 2020-10-23 | 2022-04-28 | 上海交通大学医学院附属新华医院 | Method and apparatus for obtaining cell classification model, and computer readable storage medium |
CN114496083A (en) * | 2022-01-26 | 2022-05-13 | 腾讯科技(深圳)有限公司 | Cell type determination method, device, equipment and storage medium |
CN114648527A (en) * | 2022-05-19 | 2022-06-21 | 赛维森(广州)医疗科技服务有限公司 | Urothelium cell slide image classification method, device, equipment and medium |
CN114743121A (en) * | 2022-03-14 | 2022-07-12 | 中国工商银行股份有限公司 | Image processing method, training method and device for image processing model |
CN114821046A (en) * | 2022-03-28 | 2022-07-29 | 深思考人工智能科技(上海)有限公司 | Method and system for cell detection and cell nucleus segmentation based on cell image |
CN115578598A (en) * | 2022-10-26 | 2023-01-06 | 北京大学第三医院(北京大学第三临床医学院) | Bone marrow cell identification method and system based on convolutional neural network |
CN118279912A (en) * | 2024-06-03 | 2024-07-02 | 深圳市合一康生物科技股份有限公司 | Stem cell differentiation degree assessment method and system based on image analysis |
Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1462884A (en) * | 2003-06-24 | 2003-12-24 | 南京大学 | Method of recognizing image of lung cancer cells with high accuracy and low rate of false negative |
CN101226155A (en) * | 2007-12-21 | 2008-07-23 | 中国人民解放军第八一医院 | Intelligent image recognition and processing method for early stage cytopathology of lung cancer |
CN101477630A (en) * | 2009-02-17 | 2009-07-08 | 吴俊� | System and method for intelligent water treatment micro-organism machine vision identification |
CN103208008A (en) * | 2013-03-21 | 2013-07-17 | 北京工业大学 | Fast adaptation method for traffic video monitoring target detection based on machine vision |
CN103745210A (en) * | 2014-01-28 | 2014-04-23 | 爱威科技股份有限公司 | Method and device for classifying white blood cells |
CN103927516A (en) * | 2014-04-09 | 2014-07-16 | 海南大学 | Seawater pearl authentication system based on digital image processing |
US20150213302A1 (en) * | 2014-01-30 | 2015-07-30 | Case Western Reserve University | Automatic Detection Of Mitosis Using Handcrafted And Convolutional Neural Network Features |
GB201618160D0 (en) * | 2016-10-27 | 2016-12-14 | Nokia Technologies Oy | A method for analysing media content |
CN106778788A (en) * | 2017-01-13 | 2017-05-31 | 河北工业大学 | The multiple features fusion method of aesthetic evaluation is carried out to image |
CN106778506A (en) * | 2016-11-24 | 2017-05-31 | 重庆邮电大学 | A kind of expression recognition method for merging depth image and multi-channel feature |
WO2017106645A1 (en) * | 2015-12-18 | 2017-06-22 | The Regents Of The University Of California | Interpretation and quantification of emergency features on head computed tomography |
CN106951863A (en) * | 2017-03-20 | 2017-07-14 | 贵州电网有限责任公司电力科学研究院 | A kind of substation equipment infrared image change detecting method based on random forest |
CN107292339A (en) * | 2017-06-16 | 2017-10-24 | 重庆大学 | High-resolution landform classification method for UAV low-altitude remote sensing images based on feature fusion |
CN107527029A (en) * | 2017-08-18 | 2017-12-29 | 卫晨 | A kind of improved Faster R CNN method for detecting human face |
CN107766820A (en) * | 2017-10-20 | 2018-03-06 | 北京小米移动软件有限公司 | Image classification method and device |
CN107944459A (en) * | 2017-12-09 | 2018-04-20 | 天津大学 | A kind of RGB D object identification methods |
CN108062559A (en) * | 2017-11-30 | 2018-05-22 | 华南师范大学 | A kind of image classification method based on multiple receptive field, system and device |
CN108629369A (en) * | 2018-04-19 | 2018-10-09 | 中南大学 | A kind of Visible Urine Sediment Components automatic identifying method based on Trimmed SSD |
CN108734211A (en) * | 2018-05-17 | 2018-11-02 | 腾讯科技(深圳)有限公司 | The method and apparatus of image procossing |
CN108932479A (en) * | 2018-06-06 | 2018-12-04 | 上海理工大学 | A kind of human body anomaly detection method |
CN109472784A (en) * | 2018-10-31 | 2019-03-15 | 安徽医学高等专科学校 | Recognition of mitotic cells in pathological images based on cascaded fully convolutional networks |
CN109635871A (en) * | 2018-12-12 | 2019-04-16 | 浙江工业大学 | A kind of capsule endoscope image classification method based on multi-feature fusion |
2019-05-13: CN CN201910393118.8A patent/CN110119710A/en active Pending
Non-Patent Citations (5)
Title |
---|
LILI ZHAO,KUAN LI,JIANPING YIN,QIANG LIU,SIQI WANG: "Complete three-phase detection framework for identifying abnormal cervical cells", 《IET IMAGE PROCESSING》 * |
YING HAN,SHENGYONG CHEN,MENG ZHAO,FAN SHI: "Suspected Abnormal Cervical Nucleus Screening Based on a Two-Cascade Classifier", 《2018 IEEE 4TH INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATIONS(ICCC)》 * |
ZHIQIANG SHEN ET AL.: "DSOD: Learning Deeply Supervised Object Detectors from Scratch", 《ARXIV.ORG》 * |
NING ZHENGYUAN: "Applications of Computers in Biological Science Research", 30 November 2006, Xiamen University Press *
WANG QIAN: "Research on Computer-Aided Diagnosis Methods for Lung Diseases in CT Images", 31 December 2015, Huazhong University of Science and Technology Press *
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110569769A (en) * | 2019-08-29 | 2019-12-13 | 浙江大搜车软件技术有限公司 | image recognition method and device, computer equipment and storage medium |
WO2021068857A1 (en) * | 2019-10-11 | 2021-04-15 | 武汉兰丁智能医学股份有限公司 | Artificial intelligence-based cloud diagnosis platform |
CN110797097A (en) * | 2019-10-11 | 2020-02-14 | 武汉兰丁医学高科技有限公司 | Artificial intelligence cloud diagnosis platform |
CN110807426A (en) * | 2019-11-05 | 2020-02-18 | 北京罗玛壹科技有限公司 | Parasite detection system and method based on deep learning |
CN110807426B (en) * | 2019-11-05 | 2023-11-21 | 苏州华文海智能科技有限公司 | Deep learning-based parasite detection system and method |
CN110866931A (en) * | 2019-11-18 | 2020-03-06 | 东声(苏州)智能科技有限公司 | Image segmentation model training method and classification-based enhanced image segmentation method |
CN111079579A (en) * | 2019-12-02 | 2020-04-28 | 英华达(上海)科技有限公司 | Cell image recognition method, device and system |
CN111079579B (en) * | 2019-12-02 | 2023-07-25 | 英华达(上海)科技有限公司 | Cell image identification method, device and system |
CN112926612A (en) * | 2019-12-06 | 2021-06-08 | 中移(成都)信息通信科技有限公司 | Pathological image classification model training method, pathological image classification method and device |
CN111291667A (en) * | 2020-01-22 | 2020-06-16 | 上海交通大学 | Method for detecting abnormality in cell visual field map and storage medium |
CN111461220B (en) * | 2020-04-01 | 2022-11-01 | 腾讯科技(深圳)有限公司 | Image analysis method, image analysis device, and image analysis system |
CN111461220A (en) * | 2020-04-01 | 2020-07-28 | 腾讯科技(深圳)有限公司 | Image analysis method, image analysis device, and image analysis system |
CN113468936A (en) * | 2020-06-23 | 2021-10-01 | 青岛海信电子产业控股股份有限公司 | Food material identification method, device and equipment |
AU2020403709B2 (en) * | 2020-08-01 | 2022-07-14 | Sensetime International Pte. Ltd. | Target object identification method and apparatus |
CN113243018B (en) * | 2020-08-01 | 2025-02-21 | 商汤国际私人有限公司 | Method and device for identifying target object |
CN113243018A (en) * | 2020-08-01 | 2021-08-10 | 商汤国际私人有限公司 | Target object identification method and device |
WO2022029482A1 (en) * | 2020-08-01 | 2022-02-10 | Sensetime International Pte. Ltd. | Target object identification method and apparatus |
CN112017730A (en) * | 2020-08-26 | 2020-12-01 | 东莞太力生物工程有限公司 | Cell screening method and device based on expression quantity prediction model |
CN111968106A (en) * | 2020-08-28 | 2020-11-20 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111815633A (en) * | 2020-09-08 | 2020-10-23 | 上海思路迪医学检验所有限公司 | Medical image diagnosis apparatus, image processing apparatus and method, determination unit, and storage medium |
WO2022083047A1 (en) * | 2020-10-23 | 2022-04-28 | 上海交通大学医学院附属新华医院 | Method and apparatus for obtaining cell classification model, and computer readable storage medium |
CN112884055B (en) * | 2021-03-03 | 2023-02-03 | 歌尔股份有限公司 | Target labeling method and target labeling device |
CN112884055A (en) * | 2021-03-03 | 2021-06-01 | 歌尔股份有限公司 | Target labeling method and target labeling device |
CN113033389A (en) * | 2021-03-23 | 2021-06-25 | 天津凌视科技有限公司 | Method and system for image recognition by using high-speed imaging device |
CN113033389B (en) * | 2021-03-23 | 2022-12-16 | 天津凌视科技有限公司 | Method and system for image recognition by using high-speed imaging device |
CN113095383A (en) * | 2021-03-30 | 2021-07-09 | 广州图匠数据科技有限公司 | Auxiliary sale material identification method and device |
CN113076909A (en) * | 2021-04-16 | 2021-07-06 | 重庆大学附属肿瘤医院 | Automatic cell detection method |
CN113076909B (en) * | 2021-04-16 | 2022-10-25 | 重庆大学附属肿瘤医院 | Automatic cell detection method |
CN113409923A (en) * | 2021-05-25 | 2021-09-17 | 济南大学 | Error correction method and system in bone marrow image individual cell automatic marking |
CN113409923B (en) * | 2021-05-25 | 2022-03-04 | 济南大学 | Error correction method and system in automatic labeling of individual cells in bone marrow images |
CN113257412A (en) * | 2021-06-16 | 2021-08-13 | 腾讯科技(深圳)有限公司 | Information processing method, information processing device, computer equipment and storage medium |
CN113744798A (en) * | 2021-09-01 | 2021-12-03 | 腾讯医疗健康(深圳)有限公司 | Tissue sample classification method, device, equipment and storage medium |
CN114239678A (en) * | 2021-11-09 | 2022-03-25 | 杭州迪英加科技有限公司 | Pathological section image labeling method and system and readable storage medium |
CN114496083A (en) * | 2022-01-26 | 2022-05-13 | 腾讯科技(深圳)有限公司 | Cell type determination method, device, equipment and storage medium |
CN114496083B (en) * | 2022-01-26 | 2024-09-27 | 腾讯科技(深圳)有限公司 | Cell type determination method, device, apparatus and storage medium |
CN114743121A (en) * | 2022-03-14 | 2022-07-12 | 中国工商银行股份有限公司 | Image processing method, training method and device for image processing model |
CN114821046A (en) * | 2022-03-28 | 2022-07-29 | 深思考人工智能科技(上海)有限公司 | Method and system for cell detection and cell nucleus segmentation based on cell image |
CN114648527B (en) * | 2022-05-19 | 2022-08-16 | 赛维森(广州)医疗科技服务有限公司 | Urothelial cell slide image classification method, device, equipment and medium |
CN114648527A (en) * | 2022-05-19 | 2022-06-21 | 赛维森(广州)医疗科技服务有限公司 | Urothelium cell slide image classification method, device, equipment and medium |
CN115578598A (en) * | 2022-10-26 | 2023-01-06 | 北京大学第三医院(北京大学第三临床医学院) | Bone marrow cell identification method and system based on convolutional neural network |
CN115578598B (en) * | 2022-10-26 | 2023-09-05 | 北京大学第三医院(北京大学第三临床医学院) | Bone marrow cell identification method and system based on convolutional neural network |
CN118279912A (en) * | 2024-06-03 | 2024-07-02 | 深圳市合一康生物科技股份有限公司 | Stem cell differentiation degree assessment method and system based on image analysis |
CN118279912B (en) * | 2024-06-03 | 2024-08-06 | 深圳市合一康生物科技股份有限公司 | Stem cell differentiation degree assessment method and system based on image analysis |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110119710A (en) | Cell sorting method, device, computer equipment and storage medium | |
Sun et al. | Computer-aided diagnosis in histopathological images of the endometrium using a convolutional neural network and attention mechanisms | |
CN110120040B (en) | Slice image processing method, slice image processing device, computer equipment and storage medium | |
CN112101451B (en) | Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block | |
CN110110799A (en) | Cell sorting method, device, computer equipment and storage medium | |
Chekkoury et al. | Automated malignancy detection in breast histopathological images | |
CN109003672A (en) | Integrated early-stage lung cancer detection and classification apparatus and system based on deep learning | |
CN112703531B (en) | Generating annotation data for tissue images | |
CN112990214A (en) | Medical image feature recognition prediction model | |
CN110189293A (en) | Cell image processing method, device, storage medium and computer equipment | |
Kanwal et al. | Quantifying the effect of color processing on blood and damaged tissue detection in whole slide images | |
CN110288613A (en) | An Ultra-High-Pixel Histopathological Image Segmentation Method | |
Beevi et al. | Detection of mitotic nuclei in breast histopathology images using localized ACM and Random Kitchen Sink based classifier | |
Kuse et al. | A classification scheme for lymphocyte segmentation in H&E stained histology images | |
CN116563647B (en) | Age-related maculopathy image classification method and device | |
Visalaxi et al. | Lesion extraction of endometriotic images using open computer vision | |
CN113870194A (en) | Deep layer characteristic and superficial layer LBP characteristic fused breast tumor ultrasonic image processing device | |
CN113689950A (en) | Method, system and storage medium for identifying vascular distribution pattern of liver cancer IHC staining map | |
Yonekura et al. | Glioblastoma multiforme tissue histopathology images based disease stage classification with deep CNN | |
CN111598144B (en) | Training method and device for image recognition model | |
Ali et al. | Optic Disc Localization in Retinal Fundus Images Based on You Only Look Once Network (YOLO). | |
Hassan et al. | A dilated residual hierarchically fashioned segmentation framework for extracting Gleason tissues and grading prostate cancer from whole slide images | |
CN110634118A (en) | Artificial intelligence-based mammary gland image recognition system and method | |
CN118230942A (en) | Tumor intraoperative auxiliary diagnosis system based on frozen section image feature fusion | |
Samsi et al. | Glomeruli segmentation in H&E stained tissue using perceptual organization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication
Application publication date: 20190813 |