
CN117876401B - Cervix liquid-based lamellar cell image segmentation method based on SAM segmentation model - Google Patents


Info

Publication number
CN117876401B
CN117876401B (application CN202410275835.1A)
Authority
CN
China
Prior art keywords
cell
area
image
mask
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410275835.1A
Other languages
Chinese (zh)
Other versions
CN117876401A (en)
Inventor
胡蕾
樊绍锋
周晨
刘捷发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Medical Zhichu Medical Pathological Diagnosis Management Co ltd
Original Assignee
Jiangxi Medical Zhichu Medical Pathological Diagnosis Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Medical Zhichu Medical Pathological Diagnosis Management Co ltd
Priority claimed from application CN202410275835.1A
Publication of CN117876401A (application publication)
Application granted
Publication of CN117876401B (granted publication)


Classifications

    All classifications fall under G (PHYSICS), G06 (COMPUTING; CALCULATING OR COUNTING):

    • G06T 7/11 — Image analysis; Segmentation; Region-based segmentation
    • G06T 5/30 — Image enhancement or restoration using local operators; Erosion or dilatation, e.g. thinning
    • G06T 5/40 — Image enhancement or restoration using histogram techniques
    • G06T 7/136 — Segmentation; Edge detection involving thresholding
    • G06T 7/194 — Segmentation; Edge detection involving foreground-background segmentation
    • G06V 10/28 — Image preprocessing; Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/763 — Recognition using pattern recognition or machine learning with clustering; Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 20/695 — Microscopic objects, e.g. biological cells or cellular parts; Preprocessing, e.g. image segmentation
    • G06T 2207/10056 — Image acquisition modality: Microscopic image
    • G06T 2207/30004 — Subject of image: Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a cervical liquid-based thin-layer cell image segmentation method based on a SAM segmentation model. After the cell area is determined, a colour-brightness histogram of the cell area is constructed by sampling to obtain the seed points of the cell nuclei, and auxiliary points of the cytoplasm around each nucleus seed point are obtained at the same time; the cervical liquid-based thin-layer cell image is then processed with the SAM segmentation model to obtain the mask area corresponding to each cell area, and the mask area is further optimised and updated according to its contour to remove holes and noise points, ensuring that each segmented cell area is complete. The sampling scheme effectively covers the potential areas of the cell nuclei, improves the accuracy of nucleus identification, and optimises the efficiency and effect of the cell segmentation process.

Description

Cervix liquid-based lamellar cell image segmentation method based on SAM segmentation model
Technical Field
The invention relates to the technical field of image segmentation, in particular to a cervical fluid-based lamellar cell image segmentation method based on a SAM segmentation model.
Background
With the rise of digitization and artificial intelligence, cervical fluid-based thin-layer cell slides can be converted into digital images by microscopic pathology scanners. Squamous epithelial cells in the cervical fluid-based lamellar cell image are a major class of cells, and accurate segmentation is an important basis for subsequent intelligent cell diagnosis.
The existing cell segmentation methods mainly comprise traditional image-feature segmentation methods, traditional machine learning segmentation methods and deep learning segmentation methods. Traditional image-feature methods segment mainly by colour and texture features; they are sensitive to the thresholds chosen for those features, adapt poorly to colour variation in cell images, and easily missegment light-coloured cytoplasmic areas. Traditional machine learning methods mostly adopt unsupervised or supervised learning, such as the K-means algorithm, the Expectation Maximization (EM) algorithm, decision trees and the Support Vector Machine (SVM) algorithm; because they under-exploit the spatial context and texture information of the segmented objects, their accuracy on adherent cells is generally low. Deep learning segmentation methods mainly extract image features with stacked convolutions and build multi-layer neural networks to learn sample image features, such as the CNN, FCN, LANet and SegNet networks; these networks adapt better to colour and texture changes of similar objects and outperform traditional machine learning segmentation, but they still fall short on the segmentation of adherent cells. The SAM (Segment Anything) method is a deep learning segmentation method that introduces an attention mechanism, among other components, and segments objects with unclear boundaries well in some images; however, it depends to a great extent on segmentation prompt information, and the more accurate the prompt information, the better the segmentation effect.
The squamous epithelial cells in a cervical liquid-based thin-layer cell image comprise a nucleus and cytoplasm, which differ in colour and texture; in image segmentation they are therefore easily split into two independent areas, whereas in reality the nucleus and cytoplasm of one cell form a whole and must be segmented together. In addition, cell-to-cell adhesion is common in cervical liquid-based thin-layer cell images; adhered cells are segmented as one region when they are in fact several cells. Addressing the prompt information required for cell segmentation when SAM segments cervical liquid-based thin-layer cell images, the invention provides a cervical liquid-based thin-layer cell image segmentation method based on a SAM segmentation model.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a cervical liquid-based lamellar cell image segmentation method based on a SAM segmentation model, and the method can adaptively provide nucleus central point prompt information of cells on the basis of the SAM segmentation model so as to realize automation of mask prediction, thereby efficiently and accurately segmenting the whole single cell.
In order to achieve the above purpose, the present invention provides the following technical solutions: the method for segmenting the cervical fluid-based lamellar cell image based on the SAM segmentation model comprises the following steps:
step S1: acquiring the cell area in a cell slide image and calculating the radius of the cell area;
step S2: constructing a brightness histogram of the cell area in the cell slide image by sampling, regarding the dark areas in the brightness histogram of the cell area as potential areas of cell nuclei, removing noise from the potential nucleus areas with erosion and dilation, and computing all nucleus seed points of the whole image with a density clustering algorithm;
step S3: searching, with each nucleus seed point as centre, an auxiliary point of non-background colour around the nucleus seed point;
step S4: constructing a SAM segmentation model and using each nucleus seed point together with its auxiliary point as the prompt points of the SAM segmentation model for mask segmentation, obtaining the cell area corresponding to that nucleus seed point, represented as a mask area;
the SAM segmentation model consists of an image encoder, a prompt encoder and a lightweight mask decoder;
step S5: optimising the mask area corresponding to each nucleus seed point and removing holes and noise points in the mask area by finding the maximum contour; the optimised mask area corresponds to the cell area.
Further, the specific process of obtaining the cell area in the cell slide image and calculating the radius of the cell area is as follows:
For each pixel in the cell slide image, calculate the difference between its RGB value and the RGB value of the background colour, the background colour being white or near white. Pixels whose difference is smaller than a threshold T are regarded as background colour and set to a non-zero value; pixels whose difference is larger than the threshold T are regarded as the cell area and set to zero.
From the zero-valued pixels in the cell slide image, take the maximum and minimum x-axis coordinates Xmax and Xmin and the maximum and minimum y-axis coordinates Ymax and Ymin, and compute the approximate radius r of the circular cell region:
r = ((Xmax − Xmin) + (Ymax − Ymin)) / 4 (1);
Let the height of the cell slide image be h and its width be w; the upper-left, upper-right, lower-left and lower-right corners of the cell region in the cell slide image are then (w/2 − r, h/2 − r), (w/2 + r, h/2 − r), (w/2 − r, h/2 + r) and (w/2 + r, h/2 + r) respectively.
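The radius computation of Eq. (1) can be sketched in Python as follows (an illustrative sketch; the function and variable names are not from the patent, and the input is assumed to be the list of zeroed cell-area pixel coordinates):

```python
def cell_region_radius(zero_pixels):
    """Approximate radius of the circular cell region (Eq. 1).

    `zero_pixels` is a list of (x, y) coordinates of pixels that were
    set to zero as cell-area pixels in the preceding thresholding step.
    """
    xs = [x for x, _ in zero_pixels]
    ys = [y for _, y in zero_pixels]
    # Average the horizontal and vertical extents and halve:
    # r = ((Xmax - Xmin) + (Ymax - Ymin)) / 4
    return ((max(xs) - min(xs)) + (max(ys) - min(ys))) / 4
```

Averaging the two extents makes the estimate robust when the zeroed region is not perfectly circular.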
Further, the specific process of constructing the brightness histogram of the cell area in the cell slide image by sampling is as follows:
The central circular region of the cell slide image is the cell area and its surroundings are blank. A square region S is selected at the centre of the cell area, the area of S being 1/4 of the area of the cell area; its side length L is:
L = sqrt(π r² / 4) = (r / 2) · sqrt(π) (2);
The total sampled area is chosen as a preset fraction k of the area of the square region S. Each sampling strip takes the nucleus width d as its height and the side length L of S as its width; n strips are selected at a uniform interval s so that S is sampled evenly:
n = k · L / d (3);
s = L / n (4);
The regions obtained by sampling inside S form the pixel set P used to construct the brightness histogram.
The RGB value of each pixel in P is converted to a V value, and, because nuclei are dark in colour, the V value is selected as the brightness to construct the luminance histogram H:
V = max(R, G, B) (5);
H = [c₀, c₁, …, c₂₅₅] (6);
where c₀, c₁, …, c₂₅₅ respectively denote the numbers of pixels whose normalised V values are 0/255, 1/255, …, 255/255, and max(R, G, B) denotes the largest channel value among the three channels.
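The V-channel histogram of Eqs. (5)-(6) can be sketched in Python as follows (an illustrative sketch; the function name is not from the patent, and the input is assumed to be the sampled RGB pixel set P):

```python
def luminance_histogram(pixels):
    """Build the 256-bin V-channel histogram from sampled RGB pixels.

    V is taken as the maximum of the three channels, matching Eq. (5);
    the returned list indexes bin v by the (unnormalised) V value v.
    """
    hist = [0] * 256
    for r, g, b in pixels:
        hist[max(r, g, b)] += 1
    return hist
```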
Further, the specific process of regarding the dark areas in the luminance histogram of the cell area as potential areas of the nucleus is as follows: the luminance histogram is equalised to obtain a luminance channel image, and the luminance segmentation threshold T is computed by maximising the inter-class variance σ² of the luminance channel image (the Otsu method), where σ² is expressed as:
σ² = ω₀ · ω₁ · (μ₀ − μ₁)² (7);
where ω₀ is the proportion of foreground pixels in the whole luminance channel image; μ₀ is the average grey level of the foreground pixels; ω₁ is the proportion of background pixels in the whole luminance channel image; μ₁ is the average grey level of the background pixels; and σ² is the inter-class variance. Pixels of the luminance channel image whose luminance value is smaller than the segmentation threshold T form the potential area of the nucleus and are set to white; all other pixels are set to black.
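The threshold selection of Eq. (7) can be sketched in Python as follows (an illustrative sketch of the standard Otsu criterion; the function name is not from the patent, and the input is the 256-bin histogram from the previous step):

```python
def otsu_threshold(hist):
    """Pick the threshold T maximising w0 * w1 * (mu0 - mu1)^2 (Eq. 7)."""
    total = sum(hist)
    if total == 0:
        return 0
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = sum(hist[:t])            # foreground pixel count (below t)
        w1 = total - w0               # background pixel count
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(v * hist[v] for v in range(t)) / w0
        mu1 = sum(v * hist[v] for v in range(t, 256)) / w1
        var = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a clearly bimodal histogram the maximiser falls between the two modes, separating dark nucleus pixels from the rest.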
Further, the specific process of removing noise from the potential nucleus areas with erosion and dilation, and computing all nucleus seed points of the whole image with a density clustering algorithm, is as follows. First the luminance channel image is binarised:
B(x, y) = 255, if L(x, y) < T (8);
B(x, y) = 0, otherwise (9);
where L(x, y) is the luminance value of pixel (x, y) in the luminance channel image, T is the segmentation threshold, and B is the binarised image obtained after segmentation.
Seed points are then screened in the white areas of the segmented binarised image. An erosion-dilation algorithm removes noise from the white areas: the erosion operation is performed first to eliminate noise in the image mask, then the dilation operation is applied to the eroded binary image so that the extent of the original binary image mask is retained; the white areas of the binarised image D obtained after dilation are the areas where the nuclei lie.
The erosion operation is expressed as:
E(x, y) = min over (i, j) in K of B(x + i, y + j) (10);
The dilation operation is expressed as:
D(x, y) = max over (i, j) in K of E(x + i, y + j) (11);
where B is the input segmented binarised image and K is the convolution kernel (structuring element); B(x + i, y + j) is the pixel value of the input image at the offset position; E(x, y) is the pixel value of pixel (x, y) in the binarised image after erosion; D(x, y) is the pixel value of pixel (x, y) in the binarised image after dilation; and (i, j) is the offset applied at pixel (x, y).
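The erosion and dilation of Eqs. (10)-(11) can be sketched in pure Python as follows (an illustrative sketch of the standard morphological operators; in practice a library such as OpenCV would be used, and the border handling here — taking the min/max over the in-image neighbourhood only — is one common convention, not necessarily the patent's):

```python
def erode(img, k=1):
    """Binary erosion with a (2k+1) x (2k+1) square kernel (Eq. 10)."""
    h, w = len(img), len(img[0])
    return [[min(img[y + j][x + i]
                 for j in range(-k, k + 1) for i in range(-k, k + 1)
                 if 0 <= y + j < h and 0 <= x + i < w)
             for x in range(w)] for y in range(h)]

def dilate(img, k=1):
    """Binary dilation: max over the same neighbourhood (Eq. 11)."""
    h, w = len(img), len(img[0])
    return [[max(img[y + j][x + i]
                 for j in range(-k, k + 1) for i in range(-k, k + 1)
                 if 0 <= y + j < h and 0 <= x + i < w)
             for x in range(w)] for y in range(h)]
```

Applying `dilate(erode(img))` performs the opening described above: small white specks vanish under erosion and are not restored by the following dilation.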
The centre points of the white areas of the binarised image D are taken as the nucleus seed points, as follows:
Examine the pixels of D one by one to judge whether each is white, and record the coordinates of every white pixel, giving the white-pixel coordinate set W.
The white-pixel coordinates are clustered with a density clustering algorithm. A pixel neighbourhood distance threshold z is set; the neighbourhood containing all white pixels whose coordinate distance to a pixel is at most z is called its z-neighbourhood, and a threshold MinPts is set on the number of white pixels in a z-neighbourhood. The specific process is:
Step 1: initialise the core-object set Ω = ∅, the cluster count k = 0, the unvisited sample set Γ = W (all white pixels), and the cluster partition C = ∅.
Step 2: for each white pixel p, find by distance measurement the set N(p) of white pixels in its z-neighbourhood; if the number of white pixels in N(p) satisfies |N(p)| ≥ MinPts, add p to the core-object set Ω.
Step 3: if Ω = ∅, the algorithm ends; otherwise continue to the next step.
Step 4: randomly select a core object o from Ω, initialise the current-cluster core-object queue Q = {o}, set the cluster number k = k + 1, initialise the current cluster sample set Cₖ = {o}, and update the unvisited sample set Γ = Γ − {o}.
Step 5: if Q = ∅, the current cluster Cₖ is complete; update the cluster partition C = {C₁, …, Cₖ}, update the core-object set Ω = Ω − Cₖ, and go to Step 3. Otherwise continue to the next step.
Step 6: take a core object q out of Q, find all white pixels in its z-neighbourhood, let Δ = N(q) ∩ Γ, update the current cluster sample set Cₖ = Cₖ ∪ Δ, update the unvisited sample set Γ = Γ − Δ, update Q = (Q ∪ (Δ ∩ Ω)) − {q}, and go to Step 5.
The density clustering yields one cluster per nucleus; the centre point coordinates of each cluster are taken as the seed point of that nucleus.
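The six steps above are the DBSCAN procedure followed by a per-cluster centroid; they can be sketched in Python as follows (an illustrative sketch; the parameter names `eps` and `min_pts` stand in for the patent's z-neighbourhood distance threshold and white-pixel count threshold):

```python
import math

def cluster_centers(points, eps=2.0, min_pts=3):
    """Density-cluster white-pixel coordinates (DBSCAN sketch) and
    return one centre per cluster as a nucleus seed point."""
    n = len(points)
    visited = [False] * n
    labels = [-1] * n          # -1 = noise / unassigned

    def neighbors(i):
        # z-neighbourhood: all points within eps (the point itself included)
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        nb = neighbors(i)
        if len(nb) < min_pts:
            continue           # not a core object; may join a cluster later
        labels[i] = cluster
        queue = list(nb)
        while queue:           # grow the cluster from its core objects
            j = queue.pop()
            if not visited[j]:
                visited[j] = True
                nb_j = neighbors(j)
                if len(nb_j) >= min_pts:
                    queue.extend(nb_j)
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1

    centers = []
    for c in range(cluster):
        member = [points[k] for k in range(n) if labels[k] == c]
        centers.append((sum(x for x, _ in member) / len(member),
                        sum(y for _, y in member) / len(member)))
    return centers
```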
Further, the specific process of searching, for each nucleus seed point, an auxiliary point of non-background colour around it is as follows: for each nucleus seed point in turn, take the seed point as the circle centre and consider candidate points on the circle of radius r around it. Starting directly above the seed point and proceeding clockwise in 30-degree steps, judge in turn whether each point on the circle is reasonable; as soon as the first reasonable point is found, the selection process for that seed point stops and this first reasonable point is taken as the auxiliary point of the nucleus seed point.
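The clockwise 30-degree scan can be sketched in Python as follows (an illustrative sketch; `is_reasonable` stands in for the patent's non-background-colour test and is supplied by the caller):

```python
import math

def auxiliary_point(seed, radius, is_reasonable):
    """Scan the circle around a nucleus seed point, starting straight up
    and stepping 30 degrees clockwise; return the first point accepted
    by the `is_reasonable` predicate, or None if no point qualifies."""
    sx, sy = seed
    for step in range(12):                    # 12 x 30 degrees = full circle
        ang = math.radians(90 - 30 * step)    # 90 deg = straight up
        # image y grows downward, so "up" subtracts from y
        p = (sx + radius * math.cos(ang), sy - radius * math.sin(ang))
        if is_reasonable(p):
            return p
    return None
```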
Further, step S4 also tiles the whole cell slide image: the image is divided in order into image blocks of a specified size, with spatial overlap between adjacent blocks. After tiling, the image blocks are screened: if a block contains a nucleus seed point, SAM segmentation model mask segmentation is performed on it; blocks containing no nucleus seed point are not processed.
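The overlapped tiling can be sketched in Python as follows (an illustrative sketch; the tile size of 256 and overlap of 32 are example values, not specified in the patent):

```python
def tile_origins(width, height, tile=256, overlap=32):
    """Top-left corners of overlapping tiles covering a slide image.

    Adjacent tiles overlap by `overlap` pixels; an extra edge tile is
    appended when the stride does not land exactly on the far border.
    """
    stride = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    # ensure the right and bottom edges are fully covered
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```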
Further, the SAM segmentation model mask segmentation flow is: the image encoder slices the image block into patches and produces the image embedding; the prompt encoder encodes the input prompt points to produce the point embedding; and the lightweight mask decoder decodes the image embedding together with the point embedding, producing the predicted mask and the predicted mask quality.
Further, the image encoder is composed of a plurality of Transformer Block modules and a neck structure (Neck). The specific process of producing the image embedding is: first the image block is partitioned into patch slices and slice-embedded, and the absolute position embedding from the SAM segmentation model weights is added to obtain the embedded features; the embedded features are then processed by the successive Block modules to obtain an embedded representation; finally the Neck further processes the embedded representation to produce the image embedding. The process is expressed as:
x_p = Slice(x) (12);
z = PE(x_p) (13);
z = z + E_pos (14);
z = Block_i(z), i = 1, 2, …, N (15);
F = Neck(z) (16);
where x_p denotes the patch slices of the image block; x is the input image block; PE denotes the slice (patch) embedding operation; (H, W) is the input image block size and P is the slice size; E_pos is the absolute position embedding; z is an intermediate variable storing the processing result; Block_i denotes processing of the embedded features by the i-th Block module; Neck denotes the processing of the embedded representation by the neck structure; and F, the image embedding, is the output of the image encoder.
The prompt encoder consists of random position encoding. The specific process of producing the point embedding is: first the coordinates of the input prompt point and auxiliary point are shifted by 0.5 pixel so that they refer to pixel centres; the shifted prompt points are normalised to [0, 1] and encoded by random position encoding, which realises the position encoding through a linear transformation followed by sine and cosine transforms of the normalised coordinates; the position-encoding weights of the corresponding labels in the SAM segmentation model weights are then added to the encoded points, finally producing the point embedding, i.e. the output of the prompt encoder. The process is expressed as:
p′ = Translate(p) (17);
e = RPE(p′) (18);
E = e + W_l (19);
where p is the input prompt point; Translate denotes the 0.5-pixel shift and normalisation; l is the input label; RPE denotes the random position encoding; p′ is the pre-processed prompt point; W_l is the position-encoding weight of the input label in the SAM segmentation model weights; and E is the point embedding.
The lightweight mask decoder consists of a TwoWayTransformer module and multilayer perceptrons (MLPs), namely an IOU MLP and a mask MLP. The specific process of producing the predicted mask and predicted mask quality is: the image embedding, the point embedding and the positional encoding of the point embedding are input; during processing, the embedding-layer weight of the IOU token and the embedding-layer weights of a group of mask tokens in the SAM segmentation model weights are concatenated, and the concatenated token weights are concatenated with the point embedding to form a new tensor; the image embedding, the positional encoding and the concatenated tensor are then fed into the TwoWayTransformer module, giving the hidden state and the feature representation after processing; from the hidden state, the feature at the first position of the second dimension is extracted as the IOU token output, and the features at the second to fifth positions of the second dimension are extracted as the group of mask token outputs; each extracted mask token output is passed through the mask MLP, the results are stacked and matrix-multiplied with the upsampled feature representation to obtain the predicted mask; the extracted IOU token output is passed through the IOU MLP to obtain the predicted mask quality. The process is expressed as:
W_t = Concat(W_iou, W_mask) (20);
T = Concat(W_t, E) (21);
(h, f) = TwoWayTransformer(F, PE, T) (22);
I_token = Extract(h, 1), M_token = Extract(h, 2…5) (23);
M = Stack(MLP_mask(M_token)) ⊗ Upsample(f) (24);
Q = MLP_iou(I_token) (25);
where W_iou and W_mask are respectively the embedding-layer weights of the IOU token and of the group of mask tokens in the SAM segmentation model weights; Concat denotes the concatenation operation; W_t is the concatenated weight; E is the point embedding; T is the point embedding after concatenation with the embedding-layer weights; F is the output of the image encoder; PE is the positional encoding; TwoWayTransformer denotes the TwoWayTransformer module; h and f are respectively the hidden state and the feature representation obtained after the module's processing; Extract denotes the extraction operation; M_token is the group of mask token outputs; I_token is the IOU token output; MLP denotes a multilayer perceptron; Upsample denotes the upsampling operation; ⊗ denotes matrix multiplication; M is the predicted mask; and Q is the predicted mask quality.
Further, the specific process of step S5 is as follows: for each mask region, all contours within it are detected by analysing the connectivity of adjacent pixels in the mask region. The area of each contour is computed; the contour of largest area is taken as the cell body, and the regions corresponding to all other contours are ignored. When the area of the region corresponding to the largest contour is smaller than half the area of a normal cell, the contour is treated as noise, the mask corresponding to it is discarded, and the mask region is updated accordingly; when the area is greater than or equal to half the area of a normal cell, the mask corresponding to the contour is the segmented cell region, the mask region is updated to the region of this contour, and holes inside the contour are automatically filled.
The contour area calculation computes the area of the region enclosed by a contour with the shoelace formula:
A = (1/2) · | Σᵢ₌₁ⁿ (xᵢ · yᵢ₊₁ − xᵢ₊₁ · yᵢ) | (26);
where n is the number of points on the contour; (xᵢ, yᵢ) are the abscissa and ordinate of the i-th point on the contour, with indices taken modulo n so that the contour closes; Σ accumulates the signed areas between consecutive vertices from the first to the last; and A is the area enclosed by the contour.
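The shoelace formula of Eq. (26) can be sketched in Python as follows (an illustrative sketch; the function name is not from the patent):

```python
def contour_area(contour):
    """Area enclosed by a polygonal contour via the shoelace formula (Eq. 26)."""
    n = len(contour)
    s = 0.0
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]   # wrap around to close the contour
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```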
The updated mask is circumscribed with a rectangle and the rectangle's upper-left coordinates, width and height are recorded; a target-region image of the circumscribed rectangle's size is cut from the image block; a white background image of the same size as the cut target region is created; and the mask part of the cut target-region image is applied to the white background image, yielding a single-cell image.
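The final cropping step can be sketched in Python as follows (an illustrative sketch using row-major nested lists in place of real image arrays; names are not from the patent):

```python
def cut_single_cell(image, mask, white=(255, 255, 255)):
    """Crop the mask's bounding rectangle from `image` and paste the
    masked pixels onto a white background of the same size, producing
    a single-cell image. `image` holds RGB tuples; `mask` holds 0/1."""
    coords = [(x, y) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    x0, y0, x1, y1 = min(xs), min(ys), max(xs), max(ys)  # bounding rectangle
    return [[image[y][x] if mask[y][x] else white
             for x in range(x0, x1 + 1)]
            for y in range(y0, y1 + 1)]
```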
Compared with the prior art, the invention has the following beneficial effects:
(1) The sampling mode effectively covers potential areas of cell nuclei, improves the accuracy of cell nucleus identification, and optimizes the efficiency and effect of a cell segmentation process.
(2) By applying equalisation to the brightness histogram of the cell area together with the maximum inter-class variance (Otsu) method, the invention identifies potential nucleus areas more accurately, improves the accuracy of nucleus localisation, and provides a basis for the independence of cell segmentation.
(3) The invention combines the cell nucleus seed point and the auxiliary point thereof as the prompting point through the SAM segmentation model, can more accurately segment the whole cell region containing the cell nucleus and the cytoplasm, and can overcome the problem of incomplete cell segmentation caused by low cytoplasm brightness.
(4) The invention obviously improves the accuracy, efficiency and automation level of cervical liquid-based lamellar cell image segmentation through an automatic flow and accurate algorithm processing.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of a sampling mode according to the present invention.
Fig. 3 is a diagram showing a SAM segmentation model structure according to the present invention.
Fig. 4 is a block diagram of an image encoder of the SAM segmentation model of the present invention.
Fig. 5 is a block diagram of a neck structure Neck of an image encoder of the present invention.
Fig. 6 is a block diagram of a hint encoder of the SAM segmentation model of the present invention.
Fig. 7 is a block diagram of a mask decoder of the SAM segmentation model of the present invention.
Detailed Description
As shown in fig. 1, the present invention provides the following technical solutions: the method for segmenting the cervical fluid-based lamellar cell image based on the SAM segmentation model comprises the following steps:
step S1: acquiring a cell area in a cell slide image, and calculating the radius of the cell area;
Step S1.1, zeroing and non-zeroing the cell slide image:
For each pixel in the cell slide image, calculate the difference between its RGB value and the RGB value of the background colour (white or near white). Pixels whose difference is smaller than a threshold T are regarded as background colour and set to a non-zero value; pixels whose difference is larger than the threshold T are regarded as the cell area and set to zero; in the recommended setting the near-white background has a mean value of 245.
Step S1.2, solving the cell area radius:
From the zero-valued pixels in the cell slide image, take the maximum and minimum x-axis coordinates Xmax and Xmin and the maximum and minimum y-axis coordinates Ymax and Ymin, and compute the approximate radius r of the circular cell region:
r = ((Xmax − Xmin) + (Ymax − Ymin)) / 4 (1);
Assume the height of the cell slide image is h and its width is w; the upper-left, upper-right, lower-left and lower-right corners of the cell region in the cell slide image are then (w/2 − r, h/2 − r), (w/2 + r, h/2 − r), (w/2 − r, h/2 + r) and (w/2 + r, h/2 + r) respectively.
Step S2: a brightness histogram of the cell area in the cell slide image is constructed by sampling, the dark areas in the brightness histogram are regarded as potential areas of cell nuclei, noise is removed from the potential nucleus areas by erosion and dilation, and all nucleus seed points of the whole image are computed with a density clustering algorithm, as shown in figure 2.
S2.1, selecting a brightness histogram construction area;
the middle circular area of the cell slide image is the cell area and the periphery is blank; a square region Q at the center of the cell area is selected such that the area of the square region is 1/4 of the area of the cell region, and its side length L is:
L = (√π / 2) · r (2);
S2.2, selecting a sampling area;
The total area of the sampling region is selected as 1/4 of the area of the square region Q. In order for the selected sampling region to cover the nuclei in the cell area, horizontal strips with the nucleus width w as height and the side length L of the square region Q as width are used: n strips are selected, at intervals of d, and sampled uniformly in the square region Q:
n = L / (4w) (3);
d = (L − n·w) / (n − 1) (4);
The regions sampled in the square region Q are taken as the pixel point set P for constructing the brightness histogram.
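The strip-sampling scheme of step S2.2 can be sketched as follows. The strip count `n = L / (4w)` (so the strips together cover roughly 1/4 of the square's area) and the even spacing are assumptions, since the original formulas are not fully recoverable.

```python
import numpy as np

def sample_strips(square, w=4):
    """Sketch of step S2.2: sample n horizontal strips of height w (the
    assumed nucleus width) from the central square region Q, evenly spaced,
    so the strips together cover about 1/4 of the square's area."""
    L = square.shape[0]
    n = max(1, L // (4 * w))       # total strip area ≈ n * w * L = L^2 / 4
    step = L // n                  # distance between the tops of adjacent strips
    rows = [square[i * step:i * step + w, :] for i in range(n)]
    return np.concatenate(rows, axis=0)
```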
S2.3, obtaining a brightness histogram segmentation threshold;
The RGB values of the pixel point set P are converted into HSV values; according to the characteristic that nuclei are dark in color, the brightness value V is selected to construct the brightness histogram Hist:
V = max(R, G, B) (5);
Hist = [n_0, n_1, …, n_255] (6);
In the formulas, n_0, n_1, …, n_255 respectively represent the numbers of pixels with V values of 0/255, 1/255, …, 255/255; V represents the maximum channel value of the three RGB channels;
the brightness histogram is equalized to obtain the brightness channel image, and the brightness segmentation threshold T_b is calculated by the maximum between-class variance (OTSU) method, wherein the between-class variance g can be expressed as:
g = ω₀ · ω₁ · (μ₀ − μ₁)² (7);
in the formula, ω₀ denotes the proportion of foreground pixels in the whole brightness channel image; μ₀ denotes the average gray level of the foreground pixels; ω₁ denotes the proportion of background pixels in the whole brightness channel image; μ₁ denotes the average gray level of the background pixels; g denotes the between-class variance. The method traverses all possible thresholds t, calculates the between-class variance g corresponding to each threshold, and then selects the threshold that maximizes g as the segmentation threshold T_b; in this way, the OTSU method finds an optimal segmentation threshold. Pixels in the brightness channel image whose brightness value V is smaller than the segmentation threshold T_b are regarded as potential nucleus regions and are set to white, and the other pixels are set to black.
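The exhaustive OTSU search described above can be sketched directly from formula (7). This is a plain re-implementation of the standard algorithm, not the patented code:

```python
import numpy as np

def otsu_threshold(v):
    """Sketch of step S2.3: score every candidate threshold t with the
    between-class variance g = w0 * w1 * (mu0 - mu1)^2 and return the
    threshold that maximizes g."""
    v = np.asarray(v).ravel()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        fg, bg = v[v < t], v[v >= t]          # foreground (dark) / background split
        if fg.size == 0 or bg.size == 0:
            continue
        w0, w1 = fg.size / v.size, bg.size / v.size
        g = w0 * w1 * (fg.mean() - bg.mean()) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t
```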
S2.4, binarizing the brightness channel image;
the binarization formula is as follows:
B(x, y) = 255, if V(x, y) < T_b (8);
B(x, y) = 0, if V(x, y) ≥ T_b (9);
in the formulas, V(x, y) represents the brightness value of pixel point (x, y) in the brightness channel image; T_b represents the segmentation threshold; B represents the binarized image; B_seg represents the binarized image obtained after segmentation.
S2.5, eliminating noise points by an erosion-dilation algorithm;
The white area after binarized-image segmentation contains not only potential nucleus regions but possibly also impurities and noise points caused by over-staining, so the seed points in the candidate areas need to be screened. The specific screening process is as follows: the noise generated during slide preparation and staining is eliminated with an erosion-dilation algorithm, so that it does not affect the screening of seed points. First, an erosion operation is performed to eliminate noise points in the image mask; then a dilation operation is performed on the eroded image to restore the original binary mask shape. This process helps optimize the mask of the binarized image, removing unnecessary tiny parts while keeping the structural integrity of the original mask. In the binarized image B_dil obtained after the dilation operation, the white areas are the areas where the nuclei are located;
The erosion operation can be expressed as:
(B ⊖ K)(x, y) = min_{(i, j) ∈ K} B(x + i, y + j) (10);
The dilation operation can be expressed as:
(B_ero ⊕ K)(x, y) = max_{(i, j) ∈ K} B_ero(x + i, y + j) (11);
in the formulas, B is the input segmented binarized image; B_ero is the binarized image after the erosion operation; K is the convolution kernel; B(x, y) is the pixel value of pixel point (x, y) in the input segmented binarized image; B_ero(x, y) is the pixel value of pixel point (x, y) in the binarized image after erosion; B_dil is the binarized image after the dilation operation; B_dil(x, y) is the pixel value of pixel point (x, y) in the binarized image after dilation; B(x + i, y + j) is the pixel value of the input segmented binarized image at the offset position; (i, j) denotes the offset applied to pixel point (x, y).
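A minimal NumPy sketch of binary erosion and dilation with a k×k square kernel, following formulas (10) and (11) (for 0/1 masks, min over the window is a logical AND and max is a logical OR; the 3×3 kernel size is an assumption):

```python
import numpy as np

def erode(binary, k=3):
    """Binary erosion: the output pixel is the minimum (AND) over the k×k window."""
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant", constant_values=0)
    out = np.ones_like(binary)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

def dilate(binary, k=3):
    """Binary dilation: the output pixel is the maximum (OR) over the k×k window."""
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant", constant_values=0)
    out = np.zeros_like(binary)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out
```

Applying `dilate(erode(x))` (a morphological opening) removes isolated noise pixels while restoring the bulk of each white region, which is the effect described in step S2.5.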
S2.6, obtaining seed points by a density clustering algorithm;
The center points of the white areas in the binarized image B_dil are obtained as the seed points of the nuclei; the specific process is as follows:
Judging one by one whether the pixels in the binarized image B_dil are white, thereby obtaining the coordinates of each white pixel, namely obtaining the white pixel coordinate set D;
Clustering the white pixel coordinates with a density clustering algorithm: a pixel neighborhood distance threshold is set, the neighborhood in which the distance between pixel coordinates is smaller than or equal to the distance threshold is denoted the z-neighborhood, and a threshold MinPts on the number of white pixels in the z-neighborhood is set; the specific process is as follows:
the first step: initialize the core object set Ω = ∅, initialize the cluster number k = 0, initialize all white pixels as the unvisited sample set Γ = D, and initialize the cluster division C = ∅;
the second step: for each white pixel x_j, find by distance measurement the white pixel set N_z(x_j) within its z-neighborhood; if the number of white pixels in N_z(x_j) satisfies |N_z(x_j)| ≥ MinPts, add x_j to the core object sample set Ω;
the third step: if the core object sample set Ω = ∅, end the algorithm; otherwise continue with the next step;
the fourth step: randomly select a core object o from the core object sample set Ω, initialize the current cluster core object queue Ω_cur = {o}, initialize the cluster number k = k + 1, initialize the current cluster sample set C_k = {o}, and update the unvisited sample set Γ = Γ − {o};
the fifth step: if the current cluster core object queue Ω_cur = ∅, the current cluster sample set C_k is completely generated; update the cluster division C = {C_1, C_2, …, C_k}, update the core object sample set Ω = Ω − C_k, and go to the third step; otherwise go to the sixth step;
the sixth step: take a core object o′ out of the current cluster core object queue Ω_cur, find all white pixels N_z(o′) within its z-neighborhood, let Δ = N_z(o′) ∩ Γ, update the current cluster sample set C_k = C_k ∪ Δ, update the unvisited sample set Γ = Γ − Δ, update Ω_cur = Ω_cur ∪ (Δ ∩ Ω) − {o′}, and go to the fifth step.
The clustered seed points are found through the density clustering algorithm and are located at the center point coordinates of the nuclei.
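The six steps of step S2.6 amount to standard DBSCAN over the white-pixel coordinates, with each cluster centroid taken as a nucleus seed point. A compact sketch of that standard algorithm (the `eps`/`min_pts` values are illustrative, and the brute-force neighborhood search is for clarity, not efficiency):

```python
import numpy as np

def dbscan_seeds(points, eps=2.0, min_pts=4):
    """Density-clustering sketch of step S2.6: cluster white-pixel
    coordinates DBSCAN-style and return one seed point (cluster centroid)
    per nucleus. Unclustered points are treated as noise."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    labels = np.full(n, -1)               # -1 = unvisited / noise
    k = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        neigh = np.where(np.linalg.norm(pts - pts[i], axis=1) <= eps)[0]
        if len(neigh) < min_pts:
            continue                      # not a core object
        labels[i] = k
        queue = list(neigh)
        while queue:                      # expand the current cluster
            j = queue.pop()
            if labels[j] != -1:
                continue
            labels[j] = k
            j_neigh = np.where(np.linalg.norm(pts - pts[j], axis=1) <= eps)[0]
            if len(j_neigh) >= min_pts:   # only core objects propagate the cluster
                queue.extend(j_neigh)
        k += 1
    return [pts[labels == c].mean(axis=0) for c in range(k)]
```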
Step S3: searching an auxiliary point corresponding to the cell nucleus seed point by taking each cell nucleus seed point as a center and using a non-background color around the cell nucleus seed point;
The specific process of step S3 is as follows: for each nucleus seed point, with the seed point as the circle center, suitable points on a circle of radius r are acquired as auxiliary points. Starting from the position directly above the seed point and proceeding clockwise at 30° intervals, the points on the circle are judged in turn for reasonableness; once the first reasonable point is acquired, the selection process of the auxiliary point for that seed point stops, and this first reasonable point is taken as the auxiliary point of the seed point. Whether a point on the circle is reasonable is judged on the principle that the point is not a background point of the cell slide image, i.e. its RGB color in the cell slide image is not white or near white. If no qualifying auxiliary point has been selected by the time the scan returns to the position directly above the seed point, it is judged that the cell is too small and has no adhesion, or that the cell has only a nucleus, and the seed point alone is used to prompt the segmentation.
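The clockwise scan of step S3 can be sketched as follows. The near-white test (all RGB channels ≥ 245) is an illustrative assumption; screen coordinates with y growing downward are assumed, so "directly above" is (x, y − r).

```python
import math

def find_auxiliary_point(img, seed, r):
    """Sketch of step S3: starting directly above the seed point and
    stepping clockwise in 30° increments, return the first point on the
    circle of radius r that is not (near-)white background; return None
    when every candidate is background (seed-only prompting)."""
    sx, sy = seed
    h, w = img.shape[:2]
    for step in range(12):                       # 12 * 30° = one full circle
        ang = math.radians(step * 30)
        x = int(round(sx + r * math.sin(ang)))   # clockwise from "up"
        y = int(round(sy - r * math.cos(ang)))
        if not (0 <= x < w and 0 <= y < h):
            continue
        if not all(c >= 245 for c in img[y, x][:3]):
            return (x, y)                        # first non-background point
    return None
```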
Step S4: constructing a SAM segmentation model, taking each cell nucleus seed point and an auxiliary point of the corresponding cell nucleus seed as a prompting point of the SAM segmentation model for masking segmentation, and obtaining a cell area corresponding to the cell nucleus seed point, wherein the cell area comprises cell nuclei and cytoplasm and is expressed in a masking area mode; the SAM segmentation model is composed of an image encoder, a hint encoder and a lightweight mask decoder, as shown in fig. 3;
s4.1, cutting and screening cell slide images;
Since the cervical liquid-based thin-layer cell slide image I_cell is very large, it is not suitable to perform SAM segmentation model mask segmentation on the whole cell slide image; the whole cell slide image therefore needs to be diced. The specific steps are as follows:
dividing the whole cell slide image sequentially into 1024×1024 image blocks, with adjacent image blocks overlapping spatially by 1–2 times the diameter of the largest cell, so as to ensure the integrity of the cells;
after dicing is completed, the image blocks are screened: SAM segmentation model mask segmentation is performed only when nucleus seed points exist in an image block; otherwise the image block is not processed.
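The dicing of step S4.1 can be sketched as a tile enumerator. The overlap of 128 pixels stands in for "1–2 largest-cell diameters" and is an assumption; the last row/column is clamped to the image border so the whole slide is covered.

```python
def tile_image(h, w, tile=1024, overlap=128):
    """Sketch of step S4.1: enumerate (row, col) top-left corners of
    tile×tile blocks with the given overlap, clamping the final tile to
    the image border so no region is missed."""
    def starts(size):
        step = tile - overlap
        s = list(range(0, max(size - tile, 0) + 1, step))
        if s[-1] + tile < size:          # make sure the border is covered
            s.append(size - tile)
        return s
    return [(y, x) for y in starts(h) for x in starts(w)]
```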
Step S4.2, SAM segmentation model mask segmentation;
a nucleus seed point and its corresponding auxiliary point are used as the prompt points of one SAM segmentation pass, and all prompt points are processed one by one;
The specific process of the SAM segmentation model for processing the prompt points is as follows: slicing the image block by using an image encoder to generate a picture to be embedded; encoding the input prompt points by using a prompt encoder to generate point embedding; the picture embedding and the point embedding are decoded with a lightweight mask decoder, generating a prediction mask and a prediction mask quality.
As shown in fig. 4, the image encoder is composed of a plurality of Block modules and a neck structure Neck; each Block module consists of self-attention and a multi-layer perceptron. As shown in fig. 5, the neck structure Neck is a four-layer structure in which the first and third layers are convolution layers and the second and fourth layers are layer normalization. The specific processing procedure for generating the picture embedding is as follows: first, the image block I_patch is slice-embedded and the absolute position embedding in the SAM segmentation model weights is added to obtain the embedded features; then the embedded features are processed by the plurality of Block modules to obtain an embedded representation; finally, the neck structure Neck further processes the embedded representation to generate the picture embedding. The processing is expressed as:
(12);
(13);
(14);
(15);
(16);
in the formulas, x_slice represents the image block slice; I_patch is the input image block; Slice(·) represents the slice embedding operation; N is the input image block size; n is the slice size; E_pos is the absolute-position embedding information; h represents an intermediate variable used to store the processing result; Block(·) indicates processing the embedded features with the Block modules; Neck(·) indicates processing the embedded representation with the neck structure; F is the result of picture embedding, i.e. the output of the image encoder.
As shown in fig. 6, the hint encoder is composed of random position codes. The specific processing procedure for generating the point embedding is as follows: first, the coordinates of the input prompt point and auxiliary point are translated by 0.5 pixel to align with the pixel center; the translated prompt points are normalized through random position coding, converting the specific positions of the prompt-point coordinates into values in the interval [0, 1]; position coding is then realized by linear transformation and sine-cosine transformation of the normalized prompt-point coordinates; the position-coding weights of different labels in the SAM segmentation model weights are added to the encoded prompt points, finally generating the point embedding, i.e. the output of the hint encoder. The processing is expressed as:
(17);
(18);
(19);
in the formulas, p is the input prompt point; Shift(·) represents the translation operation; l is the input label; PE(·) represents the random position coding; p′ is the pre-processed prompt point; W_l is the position-coding weight of the input label in the SAM segmentation model weights; E_point is the point embedding.
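The random position coding described above can be sketched in NumPy. The embedding dimension, the Gaussian projection scale and the remapping of [0, 1] to [−1, 1] follow common implementations of this scheme but are assumptions here, and the learned per-label weights W_l are omitted.

```python
import numpy as np

def point_embedding(coords, img_size, dim=8, seed=0):
    """Sketch of the hint encoder's random position coding: shift the
    point to the pixel center, normalize to [0, 1], apply a fixed random
    Gaussian linear map, then sine/cosine transforms."""
    rng = np.random.default_rng(seed)
    gauss = rng.normal(size=(2, dim // 2))      # random projection matrix
    pts = (np.asarray(coords, dtype=float) + 0.5) / np.asarray(img_size)
    pts = 2.0 * pts - 1.0                       # remap [0, 1] -> [-1, 1]
    proj = 2.0 * np.pi * (pts @ gauss)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)
```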
As shown in fig. 7, the lightweight mask decoder consists of a Transformer module and multi-layer perceptrons MLP, the latter comprising an IOU multi-layer perceptron and a mask multi-layer perceptron. The specific processing procedure for generating the prediction mask and the prediction mask quality is as follows: first, the picture embedding, the point embedding and the point-embedding position coding are input; during processing, the embedding-layer weight of the IOU token and the embedding-layer weights of a group of mask tokens in the SAM segmentation model weights are spliced, and the spliced IOU-token and mask-token embedding-layer weights are spliced with the point embedding to form a new tensor; the picture embedding, the point-embedding position coding and the spliced tensor are then input into the Transformer module to obtain the hidden state and the feature representation after Transformer processing, wherein the hidden state contains the information of the input sequence and the feature representation contains the semantic information of the input sequence. The features of the first position in the second dimension are extracted from the hidden state as the IOU token output, and the features of the second to fifth positions in the second dimension are extracted from the hidden state as a group of mask token outputs. Each extracted mask token output is passed through the mask multi-layer perceptron; the resulting outputs are stacked and matrix-multiplied with the upsampled feature representation to obtain the prediction mask. The extracted IOU token output is passed through the IOU multi-layer perceptron to obtain the prediction mask quality. The processing is expressed as:
(20);
(21);
(22);
(23);
(24);
(25);
in the formulas, w_iou and w_mask are respectively the embedding-layer weights of the IOU token and of a group of mask tokens in the SAM segmentation model weights; Concat(·) represents the splicing operation; w_cat is the spliced weight; E_point represents the point embedding; tokens is the point embedding after splicing with the embedding-layer weights; F is the output of the image encoder; PE_point is the position coding; Transformer(·) represents the Transformer module; hs and src are respectively the hidden state and the feature representation obtained after Transformer processing; Extract(·) represents the extraction operation; t_mask is the group of mask token outputs; t_iou is the IOU token output; MLP(·) represents a multi-layer perceptron; Upsample(·) represents the upsampling operation; ⊗ represents matrix multiplication; M_pred is the prediction mask; Q_pred is the prediction mask quality.
With a nucleus seed point and its corresponding auxiliary point as prompt points, the SAM segmentation model can segment the whole cell, and each cell area corresponds to one mask area. The segmented whole cell may, however, exhibit problems such as holes and partial noise, and the area of a noise region is clearly smaller than the area of a cell; mask-area optimization removes the holes in the cell segmentation area and removes the noise.
Step S5: and optimizing a mask area corresponding to the nucleus seed point, removing holes and small-area noise points in the mask area by solving a maximum outline mode, wherein the optimized mask area corresponds to the cell area.
S5.1, obtaining the maximum outline of the mask area;
if a hole exists in a certain mask area, contours with different areas exist; for each mask region, detecting all contours in the mask region by analyzing connectivity between adjacent pixels in the mask region; calculating the area of each contour, taking the contour with the largest area as the main body of the cell, and neglecting the areas corresponding to other contours; if the area of the area corresponding to the outline with the largest area is smaller than half of that of a normal cell, taking the outline as a noise point, neglecting a mask corresponding to the outline, and updating the mask area; if the area of the area corresponding to the outline with the largest area is more than or equal to half of the area of the normal cells, the mask corresponding to the outline is the segmented cell area, and the mask area is updated to be the area corresponding to the outline, and at the moment, the holes in the outline are automatically filled.
The contour area calculation computes the area of the region enclosed by the contour (regardless of orientation); according to Green's formula, the contour area can be expressed as:
S = (1/2) · |Σ_{i=1}^{n} (x_i · y_{i+1} − x_{i+1} · y_i)| (26);
in the formula, n represents the number of points on the contour; (x_i, y_i) represents the coordinates of the i-th point on the contour, with indices taken cyclically so that point n + 1 is point 1; Σ represents accumulating the cross terms from the 1st to the n-th vertex; S represents the area enclosed by the contour; x_i and y_i represent the abscissa and ordinate of the i-th point.
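Formula (26) is the shoelace form of Green's theorem and can be sketched directly:

```python
def contour_area(points):
    """Shoelace form of formula (26): the unsigned area enclosed by a
    closed contour given as a list of (x, y) vertices."""
    n = len(points)
    acc = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap around to close the contour
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0
```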
Step S5.2, visualizing cells corresponding to the updated mask;
The updated mask is circumscribed with a rectangle, and the upper-left coordinates, length and width of the rectangle are obtained; a target-area image of the circumscribed-rectangle size is cropped from the original image; a white background image of the same size as the cropped target-area image is created; and the mask part of the cropped target-area image is applied onto the white background image, thereby obtaining the single-cell map.
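Step S5.2 can be sketched with NumPy indexing alone (an illustrative sketch of the described crop-and-paste, with an assumed 0/1 mask array):

```python
import numpy as np

def single_cell_image(img, mask):
    """Sketch of step S5.2: bound the mask with a rectangle, crop that
    region from the original image, and paste the masked pixels onto a
    white background of the same size, yielding a single-cell map."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    crop = img[y0:y1, x0:x1]
    crop_mask = mask[y0:y1, x0:x1].astype(bool)
    out = np.full_like(crop, 255)          # white background
    out[crop_mask] = crop[crop_mask]       # keep only the masked cell pixels
    return out
```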
Experimental tests on multiple digitized cervical liquid-based thin-layer cell slide images show that, with the prompt points of the invention, SAM segmentation model mask segmentation achieves a good cell segmentation effect on cervical liquid-based cell pictures and can segment whole cells, thereby achieving accurate zero-shot segmentation of cervical liquid-based cells.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. The method for segmenting the cervical fluid-based lamellar cell image based on the SAM segmentation model is characterized by comprising the following steps:
step S1: acquiring a cell area in a cell slide image, and calculating the radius of the cell area;
Step S2: constructing a brightness histogram of a cell area in a cell slide image in a sampling mode, regarding a dark-colored area in the brightness histogram of the cell area as a potential area of a cell nucleus, removing noise points from the potential area of the cell nucleus by adopting corrosion expansion, and calculating all cell nucleus seed points of the whole image by adopting a density clustering algorithm;
The specific process of constructing the brightness histogram of the cell area in the cell slide image by adopting the sampling mode is as follows:
the middle circular area of the cell slide image is the cell area and the periphery is blank; a square region Q at the center of the cell area is selected such that the area of the selected square region is 1/4 of the area of the cell region;
the total area of the sampling region is selected as 1/4 of the area of the square region Q; with the nucleus width w as height and the side length L of the square region Q as width, n strips are selected, at intervals of d, and sampled uniformly in the square region Q;
the regions sampled in the square region Q are taken as the pixel point set P for constructing the brightness histogram;
the RGB values of the pixel point set P are converted into HSV values; according to the characteristic that nuclei are dark in color, the brightness value V is selected to construct the brightness histogram Hist;
Step S3: searching an auxiliary point corresponding to the cell nucleus seed point by taking each cell nucleus seed point as a center and using a non-background color around the cell nucleus seed point;
step S4: constructing a SAM segmentation model, taking each cell nucleus seed point and an auxiliary point of the corresponding cell nucleus seed as a prompting point of the SAM segmentation model for masking segmentation, obtaining a cell area corresponding to the corresponding cell nucleus seed point, and representing the cell area in a masking area mode;
the SAM segmentation model consists of an image encoder, a prompt encoder and a lightweight mask decoder;
step S5: and optimizing a mask area corresponding to the nucleus seed point, removing holes and noise points in the mask area by solving a maximum contour mode, wherein the optimized mask area corresponds to the cell area.
2. The SAM segmentation model-based cervical fluid-based thin-layer cell image segmentation method according to claim 1, wherein: the specific process of obtaining the cell area in the cell slide image and calculating the radius of the cell area is as follows:
calculating the difference between the RGB value of each pixel point in the cell slide image and the RGB value of the background color; pixel points whose difference is smaller than a threshold T are regarded as background color, and the values of the pixel points whose difference is smaller than the threshold T are set to a non-zero value; pixel points whose difference is larger than the threshold T are regarded as the cell area, and the values of the pixel points whose difference is larger than the threshold T are set to zero; wherein the background color is white or near white;
according to the zero-value pixel points in the cell slide image, the maximum x-axis coordinate x_max, the minimum x-axis coordinate x_min, the maximum y-axis coordinate y_max and the minimum y-axis coordinate y_min are obtained, and the approximate radius r of the circular cell region in the cell slide image is calculated as r = [(x_max − x_min) + (y_max − y_min)] / 4;
let the height of the cell slide image be H and the width be W; the upper-left, upper-right, lower-left and lower-right coordinates of the cell region in the cell slide image are (x_min, y_min), (x_max, y_min), (x_min, y_max) and (x_max, y_max), respectively.
3. The SAM segmentation model-based cervical fluid-based thin-layer cell image segmentation method according to claim 2, wherein: the luminance histogram is represented by the formula Hist = [n_0, n_1, …, n_255],
where n_0, n_1, …, n_255 respectively denote the numbers of pixels with V values of 0/255, 1/255, …, 255/255; V denotes the maximum channel value among the three RGB channels.
4. A method of dividing a cervical fluid-based thin-layer cell image based on a SAM segmentation model according to claim 3, characterized in that: the specific process of regarding the dark areas in the brightness histogram of the cell area as potential nucleus regions is as follows: the brightness histogram is equalized to obtain the brightness channel image, and the brightness segmentation threshold T_b is calculated by the maximum between-class variance (OTSU) method, wherein the between-class variance g is expressed as g = ω₀ · ω₁ · (μ₀ − μ₁)²,
where ω₀ denotes the proportion of foreground pixels in the whole brightness channel image; μ₀ denotes the average gray level of the foreground pixels; ω₁ denotes the proportion of background pixels in the whole brightness channel image; μ₁ denotes the average gray level of the background pixels; g denotes the between-class variance; pixels in the brightness channel image whose brightness value V is smaller than the segmentation threshold T_b are potential nucleus regions and are set to white, and the other pixels are set to black.
5. The SAM segmentation model-based cervical fluid-based thin-layer cell image segmentation method according to claim 4, wherein: the specific process of removing noise points from the potential nucleus regions by erosion and dilation and calculating all the nucleus seed points of the whole image by the density clustering algorithm is as follows: firstly, the brightness channel image is binarized with the formula B(x, y) = 255 if V(x, y) < T_b, and B(x, y) = 0 otherwise,
where V(x, y) represents the brightness value of pixel point (x, y) in the brightness channel image; T_b represents the segmentation threshold; B represents the binarized image; B_seg represents the binarized image obtained after segmentation;
screening the seed points in the white area after binarized-image segmentation, the specific screening process being as follows: the noise in the white area is eliminated with an erosion-dilation algorithm; firstly an erosion operation is performed to eliminate noise points in the image mask, then a dilation operation is performed on the eroded binarized image to restore the original binary mask, and the binarized image B_dil is obtained after the dilation operation, in which the white areas are the areas where the nuclei are located;
the erosion operation is expressed as (B ⊖ K)(x, y) = min_{(i, j) ∈ K} B(x + i, y + j);
the dilation operation is expressed as (B_ero ⊕ K)(x, y) = max_{(i, j) ∈ K} B_ero(x + i, y + j);
where B is the input segmented binarized image; B_ero is the binarized image after the erosion operation; K is the convolution kernel; B(x, y) is the pixel value of pixel point (x, y) in the input segmented binarized image; B_ero(x, y) is the pixel value of pixel point (x, y) in the binarized image after erosion; B_dil is the binarized image after the dilation operation; B_dil(x, y) is the pixel value of pixel point (x, y) in the binarized image after dilation; B(x + i, y + j) is the pixel value of the input segmented binarized image at the offset position; (i, j) denotes the offset applied to pixel point (x, y);
the center points of the white areas in the binarized image B_dil are obtained as the seed points of the nuclei; the specific process is as follows:
judging one by one whether the pixels in the binarized image B_dil are white, obtaining the coordinates of each white pixel, namely obtaining the white pixel coordinate set D;
clustering the white pixel coordinates with a density clustering algorithm: a pixel neighborhood distance threshold is set, the neighborhood in which the distance between pixel coordinates is smaller than or equal to the distance threshold is denoted the z-neighborhood, and a threshold MinPts on the number of white pixels in the z-neighborhood is set; the specific process is as follows:
the first step: initialize the core object set Ω = ∅, initialize the cluster number k = 0, initialize all white pixels as the unvisited sample set Γ = D, and initialize the cluster division C = ∅;
the second step: for each white pixel x_j, find by distance measurement the white pixel set N_z(x_j) within its z-neighborhood; if the number of white pixels in N_z(x_j) satisfies |N_z(x_j)| ≥ MinPts, add x_j to the core object sample set Ω;
the third step: if the core object sample set Ω = ∅, end the algorithm; otherwise continue with the next step;
the fourth step: randomly select a core object o from the core object sample set Ω, initialize the current cluster core object queue Ω_cur = {o}, initialize the cluster number k = k + 1, initialize the current cluster sample set C_k = {o}, and update the unvisited sample set Γ = Γ − {o};
the fifth step: if the current cluster core object queue Ω_cur = ∅, the current cluster sample set C_k is completely generated; update the cluster division C = {C_1, C_2, …, C_k}, update the core object sample set Ω = Ω − C_k, and go to the third step; otherwise go to the sixth step;
the sixth step: take a core object o′ out of the current cluster core object queue Ω_cur, find all white pixels N_z(o′) within its z-neighborhood, let Δ = N_z(o′) ∩ Γ, update the current cluster sample set C_k = C_k ∪ Δ, update the unvisited sample set Γ = Γ − Δ, update Ω_cur = Ω_cur ∪ (Δ ∩ Ω) − {o′}, and go to the fifth step;
and finding clustered seed points through a density clustering algorithm, and locating the clustered seed points to the center point coordinates of the cell nuclei.
6. The SAM segmentation model-based cervical fluid-based thin-layer cell image segmentation method according to claim 5, wherein: the specific process of searching an auxiliary point corresponding to the cell nucleus seed point by taking each cell nucleus seed point as the center and using the non-background color around the cell nucleus seed point is as follows: and respectively aiming at each cell nucleus seed point, taking the cell nucleus seed point as a circle center, acquiring proper points on a circle with the radius r as auxiliary points, sequentially judging whether the points on the circle are reasonable according to 30 degrees clockwise from the position right above the cell nucleus seed point, stopping the selection process of the auxiliary points of the cell nucleus seed point after the first reasonable points are acquired, and taking the first reasonable points as the auxiliary points of the cell nucleus seed point.
7. The SAM segmentation model-based cervical fluid-based thin-layer cell image segmentation method according to claim 6, wherein: step S4 dices the whole cell slide image, dividing it sequentially into image blocks of a specified size with spatial overlap between adjacent image blocks; after dicing is completed, the image blocks are screened: when nucleus seed points exist in an image block, SAM segmentation model mask segmentation is performed; when no nucleus seed points exist in an image block, the image block is not processed.
8. The SAM segmentation model-based cervical fluid-based thin-layer cell image segmentation method according to claim 7, wherein: the SAM segmentation model mask segmentation flow is as follows: slicing the image block by using an image encoder to generate a picture to be embedded; encoding the input prompt points by using a prompt encoder to generate point embedding; the picture embedding and the point embedding are decoded with a lightweight mask decoder, generating a prediction mask and a prediction mask quality.
9. The SAM segmentation model-based cervical fluid-based thin-layer cell image segmentation method according to claim 8, wherein: the image encoder is composed of a plurality of Block modules and a neck structure Neck, and the specific processing procedure for generating the picture embedding is as follows: first, the image block I_patch is slice-embedded and the absolute position embedding in the SAM segmentation model weights is added to obtain the embedded features; then the embedded features are processed by the plurality of Block modules to obtain an embedded representation; finally, the neck structure Neck further processes the embedded representation to generate the picture embedding;
where x_slice represents the image block slice; I_patch is the input image block; Slice(·) represents the slice embedding operation; N is the input image block size; n is the slice size; E_pos is the absolute-position embedding information; h represents an intermediate variable; Block(·) indicates processing the embedded features with the Block modules; Neck(·) indicates processing the embedded representation with the neck structure; F is the result of picture embedding, i.e. the output of the image encoder;
the prompt encoder is composed of a random position encoding, and the specific process of generating the point embedding is as follows: first, the coordinates of the input prompt points and auxiliary points are shifted to the pixel centers; the shifted prompt points are normalized by the random position encoding, converting the prompt point coordinates to values between 0 and 1; the position encoding is then realized by applying a linear transformation and sine/cosine transformations to the normalized prompt point coordinates; the position encoding weights of the different labels in the SAM segmentation model weights are added to the encoded prompt points, finally generating the point embedding, i.e., the output of the prompt encoder; the process is expressed as:

p' = Translate(p)
E_pt = PE(p') + W_l

where p is the input prompt point; Translate(·) represents the translation operation; l is the input label; PE(·) represents the random position encoding; p' is the preprocessed prompt point; W_l is the position encoding weight of the input label in the SAM segmentation model weights; E_pt is the point embedding;
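A minimal sketch of the random position encoding described above, following the publicly known SAM design (a fixed random Gaussian projection followed by sine/cosine transforms); the feature width and scaling are assumptions:

```python
import numpy as np

def random_position_encoding(points, image_size, num_feats=64, seed=0):
    """Encode 2D prompt points: shift to pixel centers, normalize to [0, 1],
    apply a fixed random linear map, then sine/cosine transforms."""
    rng = np.random.default_rng(seed)
    gaussian = rng.normal(size=(2, num_feats))           # fixed random projection
    pts = (np.asarray(points, dtype=float) + 0.5) / image_size  # pixel center, then [0, 1]
    pts = 2.0 * pts - 1.0                                # map to [-1, 1] as in SAM
    proj = 2.0 * np.pi * (pts @ gaussian)                # linear transformation
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

enc = random_position_encoding([[512, 512], [100, 200]], image_size=1024)
print(enc.shape)  # (2, 128)
```

The label-specific weight W_l would then simply be added to each encoded point before it enters the decoder.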
the lightweight mask decoder is composed of a two-way Transformer module and multi-layer perceptrons, the multi-layer perceptrons comprising an IOU multi-layer perceptron and a mask multi-layer perceptron; the specific process of generating the prediction mask and the prediction mask quality is as follows: the image embedding, the point embedding, and the position encoding of the point embedding are taken as input; during processing, the embedding-layer weight of the IOU token and the embedding-layer weights of a group of mask tokens in the SAM segmentation model weights are concatenated, and the concatenated IOU-token and mask-token embedding-layer weights are then concatenated with the point embedding to form a new tensor; the image embedding, the position encoding, and the concatenated tensor are fed into the two-way Transformer module to obtain the hidden state and the feature representation produced by the module; the feature at the first position along the second dimension of the hidden state is extracted as the IOU token output, and the features at the second through fifth positions along the second dimension are extracted as the group of mask token outputs; each extracted mask token output is passed through the mask multi-layer perceptron, the results are stacked and then matrix-multiplied with the upsampled feature representation to obtain the prediction mask; the extracted IOU token output is passed through the IOU multi-layer perceptron to obtain the prediction mask quality; the process is expressed as:

T = Concat(W_iou, W_mask)
tokens = Concat(T, E_pt)
hs, src = TwoWayTransformer(E_img, PE, tokens)
I_tok = Extract(hs, 1)
M_tok = Extract(hs, 2:5)
M = Stack(MLP_mask(M_tok)) ⊗ Upsample(src)
Q = MLP_iou(I_tok)

where W_iou and W_mask are respectively the embedding-layer weights of the IOU token and of the group of mask tokens in the SAM segmentation model weights; Concat(·) represents the concatenation operation; T is the concatenated weight; E_pt represents the point embedding; tokens is the point embedding concatenated with the embedding-layer weights; E_img is the output of the image encoder; PE is the position encoding; TwoWayTransformer(·) represents the two-way Transformer module; hs and src are respectively the hidden state and the feature representation obtained after the module's processing; Extract(·) represents the extraction operation; M_tok is the group of mask token outputs; I_tok is the IOU token output; MLP represents a multi-layer perceptron; Upsample(·) represents the upsampling operation; ⊗ represents matrix multiplication; M is the prediction mask; Q is the prediction mask quality.
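The token bookkeeping in the decoder (concatenating the IOU token and mask tokens with the point embedding, then splitting the hidden state back apart by position) can be illustrated with placeholder arrays; the embedding width of 256 and the token counts are assumptions consistent with the first-position / second-to-fifth-position description above:

```python
import numpy as np

D = 256                          # assumed embedding width
iou_token = np.zeros((1, D))     # embedding-layer weight of the IOU token
mask_tokens = np.zeros((4, D))   # embedding-layer weights of the group of mask tokens
point_embed = np.zeros((3, D))   # point embedding for 3 prompt/auxiliary points

# concatenate IOU token and mask tokens, then append the point embedding
tokens = np.concatenate([iou_token, mask_tokens, point_embed], axis=0)
print(tokens.shape)  # (8, 256)

# the two-way Transformer preserves the token order, so in its hidden state
# the IOU token output is position 0 and the mask token outputs are positions 1..4
hidden_state = tokens            # stand-in for the module's output
iou_out = hidden_state[0]        # first position  -> IOU token output
mask_out = hidden_state[1:5]     # second to fifth -> group of mask token outputs
print(iou_out.shape, mask_out.shape)
```

The mask token outputs would each pass through the mask MLP and be matrix-multiplied with the upsampled feature map; the IOU token output would pass through the IOU MLP to score mask quality.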
10. The SAM segmentation model-based cervical fluid-based thin-layer cell image segmentation method according to claim 9, wherein: the specific process of step S5 is: for each mask region, all contours in the mask region are detected by analyzing the connectivity between adjacent pixels in the mask region; the area of each contour is calculated, the contour with the largest area is taken as the body of the cell, and the regions corresponding to all other contours are ignored; when the area of the region corresponding to the largest contour is smaller than half the area of a normal cell, the contour is treated as a noise point, the mask corresponding to the contour is ignored, and the mask region is updated; when the area of the region corresponding to the largest contour is greater than or equal to half the area of a normal cell, the mask corresponding to the contour is a segmented cell region, the mask region is updated to the region corresponding to that contour, and holes inside the contour are automatically filled;
the contour area calculation computes the area of the region enclosed by a contour; the contour area is expressed as:

A = (1/2) | Σ_{i=1}^{n} (x_i · y_{i+1} − x_{i+1} · y_i) |, with (x_{n+1}, y_{n+1}) = (x_1, y_1)

where n represents the number of points on the contour; (x_i, y_i) represents the coordinates of the i-th point on the contour, with x_i and y_i its abscissa and ordinate; Σ represents the accumulation of the signed areas between successive vertices from point 1 to point n, wrapping back to point 1; A represents the area enclosed by the contour;
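The contour-area expression above is the standard shoelace (Gauss) formula; a direct implementation for an ordered list of contour points:

```python
def contour_area(points):
    """Shoelace formula: area of the region enclosed by a closed polygonal contour.
    points: list of (x, y) vertices in order; the contour wraps around to the start."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x_i, y_i = points[i]
        x_j, y_j = points[(i + 1) % n]  # wrap from the last point back to the first
        s += x_i * y_j - x_j * y_i
    return abs(s) / 2.0

# unit square -> area 1; a 10 x 20 rectangle -> area 200
print(contour_area([(0, 0), (1, 0), (1, 1), (0, 1)]))      # 1.0
print(contour_area([(0, 0), (10, 0), (10, 20), (0, 20)]))  # 200.0
```

In practice the same quantity is what OpenCV's `cv2.contourArea` returns for contours found in a binary mask.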
the updated mask is circumscribed with a rectangle, and the upper-left corner coordinates and the length and width of the rectangle are obtained; a target-region image of the size of the circumscribed rectangle is cropped from the image block; a white background image of the same size as the cropped target region is created; and the mask portion of the cropped target-region image is applied onto the white background image to obtain a single-cell image.
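The cropping-and-compositing step can be sketched with NumPy alone (a real pipeline would typically use `cv2.boundingRect` on the contour); the helper name and the toy image are illustrative:

```python
import numpy as np

def extract_single_cell(image, mask, background=255):
    """Crop the bounding rectangle of a binary mask from an image block and
    composite the masked pixels onto a white background of the same size."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1   # upper-left corner and extents of the
    x0, x1 = xs.min(), xs.max() + 1   # circumscribed rectangle
    crop = image[y0:y1, x0:x1]
    crop_mask = mask[y0:y1, x0:x1].astype(bool)
    out = np.full_like(crop, background)   # white background, same size as crop
    out[crop_mask] = crop[crop_mask]       # apply the masked cell pixels
    return out

img = np.arange(100, dtype=np.uint8).reshape(10, 10)
msk = np.zeros((10, 10), dtype=np.uint8)
msk[2:5, 3:7] = 1
msk[2, 3] = 0                # an irregular corner stays white in the output
cell = extract_single_cell(img, msk)
print(cell.shape)  # (3, 4)
```

Unmasked pixels inside the rectangle keep the white background value, so the single-cell image shows only the segmented cell body.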
CN202410275835.1A 2024-03-12 2024-03-12 Cervix liquid-based lamellar cell image segmentation method based on SAM segmentation model Active CN117876401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410275835.1A CN117876401B (en) 2024-03-12 2024-03-12 Cervix liquid-based lamellar cell image segmentation method based on SAM segmentation model

Publications (2)

Publication Number Publication Date
CN117876401A CN117876401A (en) 2024-04-12
CN117876401B true CN117876401B (en) 2024-05-03

Family

ID=90595243

Country Status (1)

Country Link
CN (1) CN117876401B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118470466B (en) * 2024-07-09 2024-09-27 腾讯科技(深圳)有限公司 Model processing method, device, equipment, medium and product
CN118781594B (en) * 2024-09-11 2024-11-12 江西医至初医学病理诊断管理有限公司 SAM-based method and system for segmentation and classification of pleural and peritoneal effusion cells

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11704808B1 (en) * 2022-02-25 2023-07-18 Wuxi Second People's Hospital Segmentation method for tumor regions in pathological images of clear cell renal cell carcinoma based on deep learning
CN116580203A (en) * 2023-05-29 2023-08-11 哈尔滨理工大学 An Unsupervised Cervical Cell Instance Segmentation Method Based on Visual Attention
CN117197808A (en) * 2023-10-17 2023-12-08 武汉呵尔医疗科技发展有限公司 Cervical cell image cell nucleus segmentation method based on RGB channel separation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170091948A1 (en) * 2015-09-30 2017-03-30 Konica Minolta Laboratory U.S.A., Inc. Method and system for automated analysis of cell images
US10402623B2 (en) * 2017-11-30 2019-09-03 Metal Industries Research & Development Centre Large scale cell image analysis method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhu Linlin; Han Lu; Du Hong; Fan Huijie. Research on a multi-active-contour cell segmentation method based on the U-Net network. Infrared and Laser Engineering, 2020, (S1), full text. *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant