
CN102831424B - Method for extracting visible component by microscope system - Google Patents

Method for extracting visible component by microscope system

Info

Publication number
CN102831424B
CN102831424B · CN201210268344.1A
Authority
CN
China
Prior art keywords
visible component
pixel
image
point
convex polygon
Prior art date
Legal status
Active
Application number
CN201210268344.1A
Other languages
Chinese (zh)
Other versions
CN102831424A (en)
Inventor
宋洁
沈继楠
唐松
Current Assignee
Dirui Medical Technology Co Ltd
Original Assignee
Changchun Dirui Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Changchun Dirui Medical Technology Co Ltd filed Critical Changchun Dirui Medical Technology Co Ltd
Priority to CN201210268344.1A priority Critical patent/CN102831424B/en
Publication of CN102831424A publication Critical patent/CN102831424A/en
Application granted
Publication of CN102831424B publication Critical patent/CN102831424B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for extracting visible components with a microscope system and belongs to the field of microscope-system image processing. The method comprises the following steps: first, a digital image containing a visible-component region is captured; a set of boundary pixels of the visible-component region is then obtained by image analysis; elements are added to or removed from the set and reordered by computation so that, connected in sequence, the elements of the pixel set form a closed convex polygon that completely encloses the visible-component region. This convex polygon is the minimum convex polygon containing the visible-component region, so the region can be obtained in its entirety. The advantage of the method is its good extraction performance for visible components with blurred boundaries, a certain degree of defocus, low contrast or fragmentation.

Description

Method for extracting visible components with a microscope system
Technical field
The present invention relates to methods for extracting visible components with a microscope system, and in particular to a method for locating and completely extracting visible-component regions within the field of view of a microscope system.
Background technology
Methods for extracting visible components with a microscope system are well known, such as the methods and systems disclosed in the patents with application numbers 200380108664.X and 201010568805. The approach generally adopted at present is to segment and extract the target region on the basis of the grey-level difference between background and visible components. Unfortunately, its extraction results are often poor for visible components with blurred boundaries, a certain degree of defocus, low contrast or fragmentation: the target region cannot be extracted completely, and even the number of visible components cannot be determined accurately. Since this affects all subsequent operations on the visible components, such as classification and identification, it leads directly to errors in the final result.
Summary of the invention
The present invention provides a method for extracting visible components with a microscope system, in order to solve the problems that, for visible components with blurred boundaries, a certain degree of defocus, low contrast or fragmentation, existing extraction methods perform poorly, cannot extract the target region completely and cannot even count the visible components accurately.
The technical scheme adopted by the present invention comprises the following steps:
(1) capturing a digital image f that contains a visible-component region, the digital image being composed of pixels, namely visible-component pixels and background pixels, each pixel having its own pixel value;
(2) computing the image gradient of the digital image f, where G is the image gradient of f and S is a gradient operator template obtained by selecting the Sobel, Roberts or Prewitt operator, G being obtained by convolving f with S;
(3) binarizing G, the binarization method being either a fixed-threshold method or the maximum between-class variance method on a one-dimensional histogram proposed by Otsu;
(4) performing boundary tracing on the binary image to obtain an ordered set P = {p_1, p_2, ..., p_n} of two-dimensional boundary-pixel coordinates of a visible-component region, where n is the number of elements in the boundary-pixel set and each element of P consists of an x component and a y component, e.g. p_1 = (p_1.x, p_1.y);
(5) computing the minimum-area convex polygon containing the visible-component region:
1) sorting the pixel set obtained in step (4) by polar angle in ascending order to obtain a new set P' = {p_1, p_2, ..., p_n}, n being the number of elements in the boundary-pixel set;
2) reading three elements p_1, p_2, p_3 from P' in order, forming two vectors from consecutive pairs and computing their cross product ψ; if ψ is greater than 0, reading p_4 from P' and continuing to compute cross products with the vectors formed pairwise from p_2, p_3, p_4; otherwise deleting p_3 from P', reading p_4 and computing the cross product for p_1, p_2, p_4; continuing to compute and delete elements of the set in this way until every element of P' has been traversed once, thereby obtaining a subset P'' of P'; connecting the elements of P'' end to end in turn gives the minimum convex polygon enclosing the particle region, i.e. the elements of P'' are the vertices of the minimum convex polygon;
(6) using the convex polygon obtained in step (5) as a mask to obtain the visible-component region in the digital image f.
In one embodiment of the present invention, in step (2) the gradient operator template S is used to perform a weighted summation of neighborhood pixel values of the grey-level image, the operator S supplying the weighting coefficients.
In one embodiment of the present invention, in step (4) boundary tracing is performed by a chain-code method or by neighborhood judgement. The chain-code method codes and orders the four- or eight-neighborhood of a pixel in clockwise or counter-clockwise traversal order: a boundary point is found and taken as the starting point, the boundary is traversed in that order, and returning to the starting point after one full circuit is the termination condition; all boundary points visited during the traversal are stored in the set P. The neighborhood-judgement method, based on the binarization result, checks for each pixel whether its four- or eight-neighborhood consists entirely of visible-component pixels; if so, the point is an interior point of the visible component and is not processed; otherwise the point is a boundary point of the visible component and is stored in the boundary-point set.
In one embodiment of the present invention, in step (5) 1) the polar-angle sorting is based on polar coordinates whose origin may be chosen at any of the four corners of the image, for example the lower-left corner, in which case the polar angle lies in the range [0°, 90°].
In one embodiment of the present invention, in step (5) 2) the vectors are computed from the elements of the set P': p_1p_2 = (p_2.x − p_1.x, p_2.y − p_1.y) and p_2p_3 = (p_3.x − p_2.x, p_3.y − p_2.y); the cross product of p_1p_2 and p_2p_3 is expressed as (p_2.x − p_1.x)(p_3.y − p_2.y) − (p_3.x − p_2.x)(p_2.y − p_1.y).
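For reference only (standard vector algebra, not additional matter from the patent), this sign test can be written as a 2×2 determinant; with the y axis in the usual mathematical orientation, ψ > 0 means that p_1, p_2, p_3 make a left (counter-clockwise) turn, so p_2 is retained as a hull vertex:

```latex
\psi = \overrightarrow{p_1 p_2} \times \overrightarrow{p_2 p_3}
     = \begin{vmatrix}
         p_2.x - p_1.x & p_2.y - p_1.y \\
         p_3.x - p_2.x & p_3.y - p_2.y
       \end{vmatrix}
     = (p_2.x - p_1.x)(p_3.y - p_2.y) - (p_3.x - p_2.x)(p_2.y - p_1.y)
```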
The microscope system adopted by the present invention comprises an imaging system, a visible-component container, a light source and an image processing system. The light source enhances the brightness and contrast of the visible components in the container; the imaging system photographs the visible components and transmits the resulting digital image to the image processing system for analysis. The visible-component container is transparent, and the light source and imaging system may be located on the same side or on opposite sides of the container.
Because the depth of field of the objective lens is small and the samples are affected by pathological and pharmaceutical factors, images obtained by the microscope system readily contain visible components that are locally unclear or have broken boundaries, and it is difficult to segment and extract complete visible-component regions by conventional methods.
The advantage of the present invention is that it provides a more accurate method for extracting visible components with a microscope system, one that also gives good extraction results for visible components with blurred boundaries, a certain degree of defocus, low contrast or fragmentation. The invention segments and extracts visible components by connecting convex boundary points, so broken boundaries can be bridged; while the visible-component region is extracted accurately, a solid foundation is laid for subsequent extraction of shape information and content analysis.
Brief description of the drawings
Fig. 1 is a schematic diagram of the structure of the microscope system adopted by the present invention, in which the light source may be placed at the position shown or at the position indicated by the dashed box;
Fig. 2a is a urine-particle source image taken by the microscope system, specifically an image of a single squamous cell;
Fig. 2b is a urine-particle source image taken by the microscope system, specifically an image of a single squamous cell;
Fig. 2c is a urine-particle source image taken by the microscope system, specifically an image of a clump of white blood cells;
Fig. 2d is a urine-particle source image taken by the microscope system, specifically an image of a clump of white blood cells;
Fig. 3a is a schematic diagram of the image gradient obtained by convolving the source image with the Sobel operator; the image corresponds to that in Fig. 2a;
Fig. 3b is a schematic diagram of the image gradient obtained by convolving the source image with the Sobel operator; the image corresponds to that in Fig. 2b;
Fig. 3c is a schematic diagram of the image gradient obtained by convolving the source image with the Sobel operator; the image corresponds to that in Fig. 2c;
Fig. 3d is a schematic diagram of the image gradient obtained by convolving the source image with the Sobel operator; the image corresponds to that in Fig. 2d;
Fig. 4a is a schematic diagram of the vertex coordinates of the minimum-area convex polygon of the visible-component region; the image corresponds to that in Fig. 3a;
Fig. 4b is a schematic diagram of the vertex coordinates of the minimum-area convex polygon of the visible-component region; the image corresponds to that in Fig. 3b;
Fig. 4c is a schematic diagram of the vertex coordinates of the minimum-area convex polygon of the visible-component region; the image corresponds to that in Fig. 3c;
Fig. 4d is a schematic diagram of the vertex coordinates of the minimum-area convex polygon of the visible-component region; the image corresponds to that in Fig. 3d;
Fig. 5a is a schematic diagram of the visible-component extraction result obtained with the method of the present invention; the image corresponds to that in Fig. 4a;
Fig. 5b is a schematic diagram of the visible-component extraction result obtained with the method of the present invention; the image corresponds to that in Fig. 4b;
Fig. 5c is a schematic diagram of the visible-component extraction result obtained with the method of the present invention; the image corresponds to that in Fig. 4c;
Fig. 5d is a schematic diagram of the visible-component extraction result obtained with the method of the present invention; the image corresponds to that in Fig. 4d.
Detailed description of the embodiments
(1) The structure of the microscope system is shown schematically in Fig. 1; the light source may be placed at the position shown or at the position indicated by the dashed box. The imaging system photographs the visible-component container and obtains a digital image f containing a visible-component region; the digital image is composed of a number of pixels, namely visible-component pixels and background pixels, each pixel having its own pixel value.
(2) The image gradient of the digital image f is computed, where G is the image gradient of f and S is a gradient operator template; S may be the Sobel, Roberts or Prewitt operator.
When the Sobel operator is chosen for S, G_x and G_y denote the gradient statistics of the image f in the horizontal and vertical directions respectively:
Horizontal Sobel operator: S_x = [ −1 0 1 ; −2 0 2 ; −1 0 1 ], G_x = f ⊗ S_x;
Vertical Sobel operator: S_y = [ 1 2 1 ; 0 0 0 ; −1 −2 −1 ], G_y = f ⊗ S_y;
G = √(G_x² + G_y²)
The computed result G is shown in Fig. 3a, Fig. 3b, Fig. 3c and Fig. 3d.
Alternatively, this step of the present invention may use:
Roberts operator: [ 0 1 ; −1 0 ] and [ 1 0 ; 0 −1 ];
Prewitt operator: [ −1 −1 −1 ; 0 0 0 ; 1 1 1 ] and [ 1 0 −1 ; 1 0 −1 ; 1 0 −1 ];
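Purely as an illustration (not part of the patent specification), a minimal Python sketch of this gradient computation might look as follows; the image is assumed to be a grayscale NumPy array, and the function and variable names are hypothetical:

```python
import numpy as np
from scipy.ndimage import convolve

def gradient_magnitude(f: np.ndarray) -> np.ndarray:
    """Return G = sqrt(Gx^2 + Gy^2) for a grayscale image f, using Sobel templates."""
    sx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)    # horizontal Sobel operator S_x
    sy = np.array([[ 1,  2,  1],
                   [ 0,  0,  0],
                   [-1, -2, -1]], dtype=float)  # vertical Sobel operator S_y
    gx = convolve(f.astype(float), sx)          # Gx: horizontal gradient component
    gy = convolve(f.astype(float), sy)          # Gy: vertical gradient component
    return np.sqrt(gx ** 2 + gy ** 2)
```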
(3) G is binarized; this example uses fixed-threshold segmentation with a binarization threshold of 40. Alternatively, the threshold computed by Otsu's method is 41; the difference between the two has no effect on the final result.
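Again as an illustrative sketch only (hypothetical names, not from the patent), both binarization options could be implemented as follows; the Otsu variant maximizes the between-class variance over a 256-bin histogram, clipping the gradient to the 8-bit range as a simplification:

```python
import numpy as np

def binarize_fixed(G: np.ndarray, threshold: float = 40.0) -> np.ndarray:
    """Fixed-threshold binarization of the gradient image G."""
    return (G > threshold).astype(np.uint8)

def binarize_otsu(G: np.ndarray) -> np.ndarray:
    """Otsu's method: choose the threshold maximizing the between-class variance."""
    g = np.clip(G, 0, 255).astype(np.uint8)          # simplification: 8-bit histogram
    prob = np.bincount(g.ravel(), minlength=256) / g.size
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()      # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0   # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return (g > best_t).astype(np.uint8)
```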
(4) Boundary tracing is performed by the chain-code method, which codes and orders the eight-neighborhood of a pixel in clockwise traversal order. The coding rule is:
0 1 2
7 p 3
6 5 4
where p is the boundary point, its eight neighbors are coded 0 to 7 in turn, direction 0 is the initial traversal direction and, following the clockwise traversal order, direction 7 is traversed last;
When traversal starts, the bottom-left-most boundary point in the binary image is found and taken as the starting point; traversal then proceeds in the above order, with return to the starting point as the termination condition, and the traversed points are recorded. This yields the ordered set P = {p_1, p_2, ..., p_n} of two-dimensional boundary-pixel coordinates of a visible-component region, where n is the number of elements in the boundary-pixel set and each element of P consists of an x component and a y component, i.e. p_1 = (p_1.x, p_1.y);
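The chain-code tracer described above is one of the two options named in the embodiments; as an illustrative sketch (hypothetical names, not the patent's own procedure), the shorter neighborhood-judgement variant, in which a foreground pixel with at least one background neighbor in its eight-neighborhood is a boundary point, could be written as:

```python
import numpy as np

def boundary_points(binary: np.ndarray) -> list[tuple[int, int]]:
    """Return (x, y) coordinates of boundary pixels of a 0/1 binary image."""
    pts = []
    h, w = binary.shape
    padded = np.pad(binary, 1)                   # zero border avoids edge checks
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 0:
                continue                         # background pixel: skip
            nb = padded[y:y + 3, x:x + 3]        # 3x3 window = pixel + 8-neighborhood
            if nb.sum() < 9:                     # at least one background neighbor
                pts.append((x, y))               # boundary point of the visible component
    return pts
```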
(5) The minimum-area convex polygon containing the visible-component region is computed:
1) Taking the lower-left corner of the image as the origin of coordinates, the elements of the pixel set P are sorted by polar angle in ascending order to give a new set P' = {p_1, p_2, ..., p_n}, n being the number of elements in the boundary-pixel set;
2) Three elements p_1, p_2, p_3 are read from P' in order, two vectors are formed from consecutive pairs and their cross product ψ is computed; if ψ is greater than 0, p_4 is read from P' and cross products continue to be computed with the vectors formed pairwise from p_2, p_3, p_4; otherwise p_3 is deleted from P', p_4 is read and the cross product is computed for p_1, p_2, p_4. Elements of the set are computed and deleted in this way until every element of P' has been traversed once, giving a subset P'' of P'; connecting the elements of P'' end to end in turn yields the minimum convex polygon enclosing the particle region, i.e. the elements of P'' are the vertices of the minimum convex polygon;
" the interior coordinate schematic diagram of point on image as Fig. 4 a, Fig. 4 b, Fig. 4 c, Fig. 4 d are depicted as P;
(6) The points of P'' are connected in sequence to give the minimum-area convex polygon containing the visible-component region; with the interior of the convex polygon as a mask, the visible-component region is obtained from the digital image f, as shown in Fig. 5a, Fig. 5b, Fig. 5c and Fig. 5d.
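A possible sketch of this masking step is given below; OpenCV's fillPoly is used only as one way of rasterizing the polygon interior, since the patent does not prescribe any particular library, and the names are hypothetical:

```python
import numpy as np
import cv2

def extract_region(f: np.ndarray, hull: list[tuple[int, int]]) -> np.ndarray:
    """Zero out every pixel of f outside the convex polygon defined by the hull vertices."""
    mask = np.zeros(f.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(hull, dtype=np.int32)], 255)  # polygon interior -> 255
    region = f.copy()
    region[mask == 0] = 0            # keep only the visible-component region
    return region
```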
In one embodiment of the present invention, in step (5) 2) the vectors are computed from the elements of the set P': p_1p_2 = (p_2.x − p_1.x, p_2.y − p_1.y) and p_2p_3 = (p_3.x − p_2.x, p_3.y − p_2.y); the cross product of p_1p_2 and p_2p_3 is expressed as (p_2.x − p_1.x)(p_3.y − p_2.y) − (p_3.x − p_2.x)(p_2.y − p_1.y).

Claims (4)

1. A method for extracting visible components with a microscope system, characterized by comprising the following steps:
(1) capturing a digital image f that contains a visible-component region, the digital image being composed of pixels, namely visible-component pixels and background pixels, each pixel having its own pixel value;
(2) computing the image gradient of the digital image f, where G is the image gradient of f and S is a gradient operator template obtained by selecting the Sobel, Roberts or Prewitt operator, G being obtained by convolving f with S;
(3) binarizing G, the binarization method being either a fixed-threshold method or the maximum between-class variance method on a one-dimensional histogram proposed by Otsu;
(4) performing boundary tracing on the binary image, the tracing being carried out by a chain-code method or by neighborhood judgement; the chain-code method codes and orders the four- or eight-neighborhood of a pixel in clockwise or counter-clockwise traversal order: a boundary point is found and taken as the starting point, traversal proceeds in that order, returning to the starting point after one full circuit is the termination condition, and all boundary points visited during the traversal are stored in the set P; the neighborhood-judgement method checks for each pixel whether its four- or eight-neighborhood consists entirely of visible-component pixels; if so, the point is an interior point of the visible component and is not processed; otherwise the point is a boundary point of the visible component and is stored in the boundary-point set; an ordered set P = {p_1, p_2, ..., p_n} of two-dimensional boundary-pixel coordinates of a visible-component region is thereby obtained, where n is the number of elements in the boundary-pixel set and each element of P consists of an x component and a y component, i.e. p_1 = (p_1.x, p_1.y);
(5) computing the minimum-area convex polygon containing the visible-component region:
1) sorting the pixel set obtained in step (4) by polar angle in ascending order to obtain a new set P' = {p_1, p_2, ..., p_n}, n being the number of elements in the boundary-pixel set;
2) reading three elements p_1, p_2, p_3 from P' in order, forming two vectors from consecutive pairs and computing their cross product ψ; if ψ is greater than 0, reading p_4 from P' and continuing to compute cross products with the vectors formed pairwise from p_2, p_3, p_4; otherwise deleting p_3 from P', reading p_4 and computing the cross product for p_1, p_2, p_4; continuing to compute and delete elements of the set in this way until every element of P' has been traversed once, thereby obtaining a subset P'' of P'; connecting the elements of P'' end to end in turn gives the minimum convex polygon enclosing the particle region, i.e. the elements of P'' are the vertices of the minimum convex polygon;
(6) using the convex polygon obtained in step (5) as a mask to obtain the visible-component region in the digital image f.
2. The method for extracting visible components with a microscope system according to claim 1, characterized in that in step (2) the gradient operator template S is used to perform a weighted summation of neighborhood pixel values of the grey-level image, the gradient operator template S supplying the weighting coefficients.
3. The method for extracting visible components with a microscope system according to claim 1, characterized in that in step (5) 1) the polar-angle sorting is based on polar coordinates whose origin is chosen at one of the four corners of the image.
4. The method for extracting visible components with a microscope system according to claim 1, characterized in that in step (5) 2) the vectors are computed from the elements of the set P': p_1p_2 = (p_2.x − p_1.x, p_2.y − p_1.y), p_2p_3 = (p_3.x − p_2.x, p_3.y − p_2.y); the cross product of p_1p_2 and p_2p_3 is expressed as (p_2.x − p_1.x)(p_3.y − p_2.y) − (p_3.x − p_2.x)(p_2.y − p_1.y).
CN201210268344.1A 2012-07-31 2012-07-31 Method for extracting visible component by microscope system Active CN102831424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210268344.1A CN102831424B (en) 2012-07-31 2012-07-31 Method for extracting visible component by microscope system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210268344.1A CN102831424B (en) 2012-07-31 2012-07-31 Method for extracting visible component by microscope system

Publications (2)

Publication Number Publication Date
CN102831424A CN102831424A (en) 2012-12-19
CN102831424B true CN102831424B (en) 2015-01-14

Family

ID=47334552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210268344.1A Active CN102831424B (en) 2012-07-31 2012-07-31 Method for extracting visible component by microscope system

Country Status (1)

Country Link
CN (1) CN102831424B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107305181A (en) * 2016-04-18 2017-10-31 重庆大学 A method for studying solvent penetration of transdermal administration

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1380543A (en) * 2001-04-12 2002-11-20 清华大学 Image segmentation and identification method in industrial radiation imaging system
CN101826209A (en) * 2010-04-29 2010-09-08 电子科技大学 Canny model-based method for segmenting three-dimensional medical image
CN102073876A (en) * 2011-01-10 2011-05-25 中国科学院光电技术研究所 Biochip sampling point identification method based on edge detection
CN102270233A (en) * 2011-07-29 2011-12-07 中国航天科技集团公司第五研究院第五一三研究所 Searching method for convex hull

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101713776B (en) * 2009-11-13 2013-04-03 长春迪瑞医疗科技股份有限公司 Neural network-based method for identifying and classifying visible components in urine


Also Published As

Publication number Publication date
CN102831424A (en) 2012-12-19

Similar Documents

Publication Publication Date Title
CN109934163B (en) Aerial image vehicle detection method based on scene prior and feature re-fusion
Chen et al. An end-to-end shape modeling framework for vectorized building outline generation from aerial images
CN108280450B (en) A method for detecting highway pavement based on lane lines
CN111310760B (en) Oracle Bone Inscription Text Detection Method Combining Local Prior Features and Deep Convolution Features
JP5775225B2 (en) Text detection using multi-layer connected components with histograms
WO2017041396A1 (en) Driving lane data processing method, device, storage medium and apparatus
CN104299009B (en) License plate character recognition method based on multi-feature fusion
CN104794479B (en) This Chinese detection method of natural scene picture based on the transformation of local stroke width
CN104134234A (en) Full-automatic three-dimensional scene construction method based on single image
CN103810716B (en) Move and the image partition method of Renyi entropy based on gray scale
CN104715238A (en) Pedestrian detection method based on multi-feature fusion
CN108171695A (en) A kind of express highway pavement detection method based on image procossing
CN113033390B (en) Dam remote sensing intelligent detection method based on deep learning
CN116486273B (en) A Method for Extracting Water Body Information from Small Sample Remote Sensing Images
CN114494283B (en) A method and system for automatic segmentation of farmland
CN110008900A (en) A Region-to-target Candidate Target Extraction Method for Visible Light Remote Sensing Images
CN117197686A (en) Satellite image-based high-standard farmland plot boundary automatic identification method
Xu et al. Pixel-level pavement crack detection using enhanced high-resolution semantic network
CN105374010A (en) A panoramic image generation method
Wang et al. A deep and multiscale network for pavement crack detection based on function-specific modules
CN104834926B (en) A kind of character zone extracting method and system
CN102831424B (en) Method for extracting visible component by microscope system
Farajzadeh et al. Automatic building extraction from uav-based images and dsms using deep learning
Tian et al. Robust text segmentation using graph cut
CN108875565A (en) The recognition methods of railway column, storage medium, electronic equipment, system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 130012 Changchun province high tech Zone, the River Street, No. 95, No.

Patentee after: Dirui Medical Technology Co., Ltd.

Address before: 130012 Changchun province high tech Zone, the River Street, No. 95, No.

Patentee before: Changchun Dirui Medical Technology Co., Ltd.

CP01 Change in the name or title of a patent holder