
CN110930423B - Object edge feature recognition and extraction method - Google Patents


Info

Publication number
CN110930423B
Authority
CN
China
Prior art keywords
edge
pixel
gradient
pixel points
length
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911171699.7A
Other languages
Chinese (zh)
Other versions
CN110930423A (en)
Inventor
王彦之
丘炜胜
石锡敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sharpvision Co ltd
Original Assignee
Sharpvision Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharpvision Co ltd filed Critical Sharpvision Co ltd
Priority to CN201911171699.7A priority Critical patent/CN110930423B/en
Publication of CN110930423A publication Critical patent/CN110930423A/en
Application granted granted Critical
Publication of CN110930423B publication Critical patent/CN110930423B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an object edge feature recognition and extraction method. A captured original image of an object is converted into a gradient-magnitude image, the gradient magnitudes are thresholded according to gradient direction, and the thresholded image is mapped into several edge maps in different directions according to the corresponding gradient directions. For each edge map, the angle range of an ideal edge line is defined, centered on the perpendicular to the gradient direction, together with a minimum threshold for edge-line length. Each pixel and the pixels within a certain range around it are then checked by bit operations: if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, it remains an edge pixel after processing; otherwise, an edge pixel that fails the check is no longer treated as an edge pixel. A pooling operation is then performed to highlight the edges of the object.

Description

Object edge feature recognition and extraction method
Technical Field
The invention relates to the technical field of object edge recognition, and in particular to an object edge feature recognition and extraction method.
Background
Edge detection techniques detect transitions between light and dark in optical imaging and have been widely used in simple image recognition applications since their introduction. In traditional object edge detection algorithms such as Canny, the core idea is to threshold the gradient magnitude: the larger the gradient magnitude, the more obvious the object edge is assumed to be. This screening criterion runs into serious problems against complex backgrounds such as leaves and branches. In the presence of image noise, leaves and branches exhibit large, dense brightness variations and hence large, dense gradient magnitudes, which cause great confusion in edge regions and can even invalidate the whole algorithm. Meanwhile, when the detected target is small and its edges are weak, the gradient threshold may need to be small, and the edges of the target object may then break up, so that the target cannot be identified.
Disclosure of Invention
The invention aims to provide an object edge feature recognition and extraction method so as to obtain a real distinguishable object edge.
The invention discloses an object edge feature recognition and extraction method, which comprises the following steps:
s1: converting the captured original picture of the object into a picture with gradient magnitudes;
s2: thresholding the gradient magnitudes of the picture according to its gradient directions;
s3: mapping the thresholded picture into several edge maps in different directions according to the corresponding gradient directions;
s4: according to the gradient direction corresponding to each edge map, defining the angle range of an ideal edge line centered on the perpendicular to the gradient direction, and defining a minimum threshold for edge-line length; checking each pixel and the pixels within a certain range around it by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, the pixel remains an edge pixel after processing; otherwise, if the pixel is an edge pixel but the above condition is not satisfied, it is no longer treated as an edge pixel;
s5: performing a pooling operation on the remaining edge pixels, and finally identifying the highlighted object edges.
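Steps s1–s3 can be sketched in code. The following is a minimal illustrative Python approximation (not from the patent; the Sobel operator and the four-bin direction quantization are our assumptions) that converts a grayscale image into gradient magnitudes and quantizes each gradient direction into one of four bins, from which per-direction edge maps could then be built:

```python
import math

def sobel_grad(img):
    # img: 2-D list of grayscale values.
    # Returns a gradient-magnitude map and a direction map whose values are
    # bins 0..3 (gradient angle quantized to 0, 45, 90, 135 degrees), one
    # edge map per bin -- a simplified sketch of steps s1-s3.
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    dirn = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Sobel responses in x and y
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            mag[y][x] = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180
            dirn[y][x] = int((ang + 22.5) // 45) % 4
    return mag, dirn
```

Thresholding `mag` per direction bin would then yield the direction-specific edge maps of step s3.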
Compared with the prior art, the invention has the following beneficial effects:
(1) Compared with the original scheme, edge-line judgment in the invention depends not only on the gradient magnitude but also on the length of the edge line. When thresholding the gradient magnitude, a smaller threshold can be used directly to accommodate more blurred edges. The method therefore adapts to illumination better than the original Canny scheme.
(2) According to the gradient direction corresponding to each of the edge maps in different directions, the angle range of an ideal edge line is defined centered on the perpendicular to the gradient direction, together with a minimum threshold for edge-line length; each pixel and the pixels within a certain range around it are checked by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, it remains an edge pixel after processing; conversely, an edge pixel that does not satisfy the condition is no longer treated as an edge pixel. A pooling operation is then performed on the remaining edge pixels. In this way the edges of real, distinguishable (sufficiently large) objects are obtained, while small, scattered edge pixels with light-dark variation, such as branches, leaves and shadows on the ground, are filtered out.
(3) Compared with the Canny scheme, the output of the method is not a single edge/non-edge bitmap. The edge bitmap is divided into several edge maps according to gradient direction, each corresponding to a different gradient direction. This preserves more edges and more information about the edge-normal direction, providing more basis for higher-level operations such as further image contour judgment.
Drawings
FIG. 1 is a flow chart of the object edge feature recognition and extraction method of the present invention;
FIG. 2 is an illustration of edge maps of the present invention in a plurality of different directions;
FIG. 3 is an illustration of a picture with gradient magnitudes, after non-maximum suppression and before gradient-magnitude thresholding, mapped into edge maps in a plurality of different directions;
FIG. 4 is an exemplary diagram of bit operations performed on edge-map memory blocks in memory.
Detailed Description
As shown in fig. 1 and 2, an object edge feature recognition and extraction method includes the following steps:
s1: converting the captured original picture of the object into a picture with gradient magnitudes;
s2: thresholding the gradient magnitudes of the picture according to its gradient directions;
s3: mapping the thresholded picture into several edge maps in different directions according to the corresponding gradient directions;
s4: according to the gradient direction corresponding to each edge map, defining the angle range of an ideal edge line centered on the perpendicular to the gradient direction, and defining a minimum threshold for edge-line length; checking each pixel and the pixels within a certain range around it by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, the pixel remains an edge pixel after processing; otherwise, if the pixel is an edge pixel but the above condition is not satisfied, it is no longer treated as an edge pixel;
s5: performing a pooling operation on the remaining edge pixels, and finally identifying the highlighted object edges.
Edge-line judgment in the invention depends not only on the gradient magnitude but also on the length of the edge line. When thresholding the gradient magnitude, a smaller threshold can be used directly to accommodate more blurred edges. The method therefore adapts to illumination better than the original Canny scheme.
According to the gradient direction corresponding to each of the edge maps in different directions, the angle range of an ideal edge line is defined centered on the perpendicular to the gradient direction, together with a minimum threshold for edge-line length; each pixel and the pixels within a certain range around it are checked by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, it remains an edge pixel after processing; conversely, an edge pixel that does not satisfy the condition is no longer treated as an edge pixel. A pooling operation is then performed on the remaining edge pixels. In this way the edges of real, distinguishable (sufficiently large) objects are obtained, while small, scattered edge pixels with light-dark variation, such as branches, leaves and shadows on the ground, are filtered out.
Compared with the Canny scheme, the output of the method is not a single edge/non-edge bitmap. The edge bitmap is divided into several edge maps according to gradient direction, each corresponding to a different gradient direction. This preserves more edges and more information about the edge-normal direction, providing more basis for higher-level operations such as further image contour judgment.
As shown in fig. 3, the captured original image of the object is converted into an image with gradient magnitudes, non-maximum suppression is applied according to the gradient direction, the gradient magnitudes are thresholded, and the thresholded image is mapped into several edge maps in different directions according to the corresponding gradient directions. Each pixel in an edge map is described by a single binary bit and classified as an edge pixel or a non-edge pixel; subsequent processing is then done by bit operations. This reduces the amount of computation and memory access, reduces the current and subsequent access costs across DDR and the L2 cache, reduces the space occupied by each image, and largely resolves the memory-bandwidth problem of large-scale full-image operations.
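The single-bit-per-pixel representation can be sketched as follows. This is an illustrative assumption, not the patent's actual memory layout: each row of an edge map is packed 8 pixels per byte (MSB first, a choice we make here), after which neighbor checks become shifts and masks:

```python
def pack_row(bits):
    # Pack a list of 0/1 pixel values into bytes, 8 pixels per byte,
    # most significant bit first (layout chosen for illustration).
    out = bytearray((len(bits) + 7) // 8)
    for i, b in enumerate(bits):
        if b:
            out[i // 8] |= 0x80 >> (i % 8)
    return bytes(out)
```

A 1920-pixel row then occupies 240 bytes instead of 1920, which is what makes whole working blocks fit in the L1 cache as described later.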
According to the gradient direction corresponding to the edge map, the angle range of an ideal edge line is defined centered on the perpendicular to the gradient direction, together with a minimum threshold for edge-line length; each pixel and the pixels within a certain range around it are checked by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, it remains an edge pixel after processing; conversely, an edge pixel that does not satisfy the condition is no longer treated as an edge pixel.
For example, suppose a single edge map needs to reject pixels whose edge line is shorter than 3, as shown in Table one.
Here the gradient direction is the horizontal direction:
Table one
0 1[1] 0 0 0 0
0 1[1] 0 0 0 0
0 1[1] 0 1[2] 1[2] 1[2]
0 0 0 0 0 0
0 0 1[3] 0 0 0
The edge line containing the pixels annotated [1] has length 3 and its direction meets the requirement, so these pixels are kept;
the edge line containing the pixels annotated [2] has length 3, but its direction does not meet the requirement, so these pixels are removed;
the edge line containing the pixel annotated [3] has length 1, so it is removed directly.
The result is shown in Table two:
Table two
0 1 0 0 0 0
0 1 0 0 0 0
0 1 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
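Under the simplifying assumption that the ideal edge lines for a horizontal gradient are strictly vertical (the patent's actual check allows a ±45° angle range, handled in a later example), the Table one to Table two filtering can be sketched in Python. The function name and plain nested-list representation are ours:

```python
def filter_vertical_runs(grid, min_len=3):
    # Keep only pixels belonging to a vertical run of edge pixels whose
    # length is at least min_len; a simplified version of step s4 for a
    # horizontal gradient direction.
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for c in range(cols):
        r = 0
        while r < rows:
            if grid[r][c]:
                start = r
                while r < rows and grid[r][c]:
                    r += 1            # scan to the end of this vertical run
                if r - start >= min_len:
                    for k in range(start, r):
                        out[k][c] = 1  # run is long enough: keep it
            else:
                r += 1
    return out
```

Applied to the Table one grid with `min_len=3`, this keeps the vertical run annotated [1] and drops the horizontal run [2] and the isolated pixel [3], reproducing Table two.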
For example, when pixels lie on an edge line that is not perpendicular to the gradient direction but makes an angle between 45 and 90 degrees with it, those pixels must still be kept, as shown in Table three:
Table three
0 0 1[1] 0 0 1[2]
0 1[1] 0 0 1[2] 0
0 1[1] 0 1[2] 0 0
0 0 0 0 0 0
0 0 1[3] 1[3] 1[3] 1[3]
0 1[3] 0 0 0 0
The edge lines of the pixels annotated [1] and [2] are not perpendicular to the gradient direction, but make an angle between 45 and 90 degrees with it, so these pixels are kept; the edge line of the pixels annotated [3] is longer, but its angle with the gradient direction is clearly smaller than 45 degrees, so it is removed directly.
The result is shown in Table four:
Table four
0 0 1 0 0 1
0 1 0 0 1 0
0 1 0 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Relying on multiple edge maps: according to the gradient direction corresponding to each edge map, the angle range of an ideal edge line is defined centered on the perpendicular to the gradient direction, together with a minimum threshold for edge-line length; each pixel and the pixels within a certain range around it are checked by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, it remains an edge pixel after processing; conversely, an edge pixel that does not satisfy the condition is no longer treated as an edge pixel. For example, when the gradient directions of the first direction edge map and the second direction edge map are similar, the two maps are combined, and the edge-line length requirement, here at least 5, is computed over the combined map.
The first direction edge map is shown in Table five:
Table five
0 0 0 0 0 0
0 1 0 0 0 0
0 0 1[1] 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
The second direction edge map is shown in Table six:
Table six
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 1[1] 0 1[2]
0 0 0 1 0 1[2]
0 0 0 1 0 1[2]
0 0 0 0 0 0
Annotation [1]: although the edge line containing this pixel has length only 3 in the second direction edge map, it has length 2 in the first direction edge map; combined, the total length is 5, so the pixel is kept.
Annotation [2]: the edge line containing this pixel has length only 3 in the second direction edge map, and the first direction edge map does not extend it, so the pixel is removed.
The specific calculation works as follows:
we bitwise-OR the first and second direction edge maps to obtain an intermediate map, as shown in Table seven:
Table seven
0 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 1
0 0 0 1 0 1
0 0 0 1 0 1
0 0 0 0 0 0
The single-edge-map calculation method is then applied to this intermediate map; since we are computing the second direction edge map, the default gradient direction is direction 2.
The result is shown in Table eight:
Table eight
0 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 0 0 0
This map is then bitwise-ANDed with the original second direction edge map, giving the map shown in Table nine:
Table nine
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 0 0 0
This is how the first direction edge map and the second direction edge map are combined to eliminate insufficiently obvious edges.
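The Table five through Table nine procedure can be sketched in Python. This is a hypothetical reading of the method (function name, grid representation, and the choice of vertical and ±45° diagonal steps as the "ideal angle range" for a horizontal gradient are our assumptions): OR the two maps, keep pixels whose connected chain within the allowed angle range reaches the minimum length, then AND with the second direction map:

```python
def combine_direction_maps(m1, m2, min_len=5):
    # m1, m2: two direction edge maps (0/1 nested lists, same shape).
    # A chain may step down/up vertically or diagonally (within +-45 degrees
    # of the perpendicular to a horizontal gradient).
    rows, cols = len(m2), len(m2[0])
    merged = [[m1[r][c] | m2[r][c] for c in range(cols)] for r in range(rows)]
    down = [[0] * cols for _ in range(rows)]
    up = [[0] * cols for _ in range(rows)]
    for r in range(rows - 1, -1, -1):      # longest chain extending downward
        for c in range(cols):
            if merged[r][c]:
                below = ([down[r+1][k] for k in (c-1, c, c+1) if 0 <= k < cols]
                         if r + 1 < rows else [0])
                down[r][c] = 1 + max(below)
    for r in range(rows):                  # longest chain extending upward
        for c in range(cols):
            if merged[r][c]:
                above = ([up[r-1][k] for k in (c-1, c, c+1) if 0 <= k < cols]
                         if r > 0 else [0])
                up[r][c] = 1 + max(above)
    # Keep pixels of the second map whose total chain length meets min_len
    # (the final AND with m2, as in Table nine).
    return [[1 if m2[r][c] and up[r][c] + down[r][c] - 1 >= min_len else 0
             for c in range(cols)] for r in range(rows)]
```

On the Table five and Table six grids with `min_len=5`, the column-5 run of length 3 is rejected, while the column-3 run survives because the first map extends its chain to length 5, reproducing Table nine.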
According to the gradient direction corresponding to the edge map, the angle range of an ideal edge line is defined centered on the perpendicular to the gradient direction, together with a minimum threshold for edge-line length; each pixel and the pixels within a certain range around it are checked by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, it remains an edge pixel after processing; conversely, an edge pixel that does not satisfy the condition is no longer treated as an edge pixel. Accordingly, each pixel and the surrounding pixels within a certain range are checked by bit operations; if one or more non-edge pixels are found to split a continuous edge line, and the two resulting segments each have a certain length in the extension direction, those non-edge pixels are set as edges, restoring the integrity of the edge line. Table ten shows an example:
Table ten
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 0[1] 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 0 0 0
Above and below the pixel annotated [1] there are two or more edge pixels in the same direction, perpendicular to the gradient direction, so under these strict conditions the connection is allowed. The result is shown in Table eleven:
Table eleven
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 0 0 0
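The Table ten to Table eleven gap-bridging step can be sketched as follows (function name and the "two edge pixels on each side" rule are taken from the example above; for a horizontal gradient we assume the edge line runs vertically):

```python
def bridge_gaps(grid, run=2):
    # Set a non-edge pixel to 1 if at least `run` consecutive edge pixels sit
    # directly above it and at least `run` directly below it in the same
    # column -- restoring a vertical edge line split by a one-pixel gap.
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(run, rows - run):
        for c in range(cols):
            if not grid[r][c]:
                above = all(grid[r - k][c] for k in range(1, run + 1))
                below = all(grid[r + k][c] for k in range(1, run + 1))
                if above and below:
                    out[r][c] = 1
    return out
```

Applied to the Table ten grid, the single zero annotated [1] is set back to 1, reproducing Table eleven.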
For both steps S4 and S5, the bit operations on the edge maps are performed over memory blocks, partitioning memory so that every region read and operated on fits within the L1 cache and all computation completes there. As shown in fig. 4, assume the L1 cache is 32KB; except for the pooling operation, data is written back only after all pixel operations are complete. Pooling adds extra space of 1/4 of the image size. Since each pixel is represented by one bit, for an image width of 1920 we can process about 32 × 1024 × 8 × 4/5 ÷ 1920 ≈ 109 rows of pixels at a time (i.e., the block size). With this arrangement, frequently operated data never needs to travel back to DDR or even the L2 cache, the wait for memory transfers is shortened, and CPU utilization is greatly improved.
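The block-size arithmetic above can be checked in a few lines. The 4/5 usable fraction follows the text's reservation of roughly 1/4 extra image size for the pooled output; the exact cache-budget split is our reading of the description:

```python
L1_CACHE_BYTES = 32 * 1024                   # assumed 32KB L1 cache
IMAGE_WIDTH = 1920                           # pixels per row, 1 bit per pixel
usable_bits = L1_CACHE_BYTES * 8 * 4 // 5    # reserve ~1/5 for pooling output
rows_per_block = usable_bits // IMAGE_WIDTH  # rows of the image per block
print(rows_per_block)                        # -> 109
```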
In addition, the SIMD facilities of the main control chip, such as NEON on ARM, can process 16 bytes at a time; since each pixel is represented by a single bit, a single ARM core can in effect process 128 pixels at once. Meanwhile, the instruction sequence is highly optimized to reduce interlocks between CPU instructions as much as possible and make full use of the CPU pipeline. Under these conditions, extremely high parallel computing capability can be obtained even without a GPU.
In step S5, after the pooling operation, the angle range of an ideal edge line is again defined according to the gradient direction corresponding to the edge map, centered on the perpendicular to the gradient direction, together with the minimum edge-line length threshold; each pixel and the pixels within a certain range around it are checked by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, it remains an edge pixel after processing; conversely, an edge pixel that does not satisfy the condition is no longer treated as an edge pixel. The pooling operation halves the image in each dimension, merging every four pixels into one.
The pooling logic is shown in the following tables.
Table twelve shows the image before pooling:
Table twelve
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Table thirteen shows the pooled image:
Table thirteen
0 1 0
0 1 0
0 1 0
0 0 0
Each pixel of the pooled image corresponds to two pixels (in each dimension, four in total) of the previous layer. After the pooling operation, the bit operations are repeated, checking each pixel and the surrounding pixels within a certain range; if one or more non-edge pixels are found to split a continuous edge line, and the two resulting segments each have a certain length in the extension direction, those non-edge pixels are set as edges to restore the integrity of the edge line; and
according to the gradient direction corresponding to the edge map, the angle range of an ideal edge line is defined centered on the perpendicular to the gradient direction, together with a minimum threshold for edge-line length; each pixel and the pixels within a certain range around it are checked by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, it remains an edge pixel after processing; otherwise, an edge pixel that does not satisfy the condition is no longer treated as an edge pixel. Based on these operations, pooling is performed again, and so on, building an image pyramid. Each level corresponds to the previous image at twice the scale; as pooling and edge culling proceed, the edges that survive at higher levels are in essence longer and more obvious, and are more likely to be the target edges. Moreover, the pooled images are greatly reduced in size, which lowers the complexity of subsequent computation and memory operations.
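The 2×2 OR-pooling described here (and in the claims: the pooled bit is zero only if all four source bits are zero) can be sketched as follows; the function name and nested-list representation are ours:

```python
def pool2x2(grid):
    # Halve the image in each dimension: each output bit is the OR of a
    # 2x2 block of input bits (zero only if all four inputs are zero).
    rows, cols = len(grid), len(grid[0])
    return [[1 if any(grid[2*r + dr][2*c + dc]
                      for dr in (0, 1) for dc in (0, 1)) else 0
             for c in range(cols // 2)]
            for r in range(rows // 2)]
```

Applied to the Table twelve grid, this yields exactly the Table thirteen grid; iterating `pool2x2` with the run-length filtering between levels builds the image pyramid described above.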
The foregoing is a further detailed description of the invention in connection with preferred embodiments, and it is not intended that the invention be limited to these specific embodiments. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered within the scope of the invention.

Claims (9)

1. The object edge feature recognition and extraction method is characterized by comprising the following steps of:
s1: converting the captured original picture of the object into a picture with gradient amplitude;
s2: thresholding the gradient amplitude of the picture with the gradient amplitude according to the gradient direction of the picture;
s3: the pictures obtained after thresholding are mapped into a plurality of edge pictures in different directions according to the corresponding gradient directions;
s4: according to the gradient direction corresponding to each edge map, defining the angle range of an ideal edge line centered on the perpendicular to the gradient direction, and defining a minimum threshold for edge-line length; checking each pixel and the pixels within a certain range around it by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, the pixel remains an edge pixel after processing; otherwise, if the pixel is an edge pixel but the above condition is not satisfied, the pixel is no longer treated as an edge pixel;
s5: and carrying out pooling operation on the edges serving as the edge pixel points, and finally identifying the edges of the highlighted objects.
2. The object edge feature recognition extraction method according to claim 1, wherein in step S2, the picture with the gradient magnitude is further subjected to non-maximum suppression before the gradient magnitude thresholding.
3. The method according to claim 2, wherein in the step S3, each pixel in the edge map is described by a single binary bit, and is divided into an edge pixel and a non-edge pixel.
4. The object edge feature recognition and extraction method according to claim 3, wherein both steps S4 and S5 are performed by bit operations on memory blocks in memory.
5. The method according to claim 4, wherein step S4 further comprises checking each pixel and the surrounding pixels within a certain range by bit operations; if one or more non-edge pixels are found to split a continuous edge line, and the two resulting segments each have a certain length in the extension direction, the non-edge pixels are set as edges, restoring the integrity of the edge line.
6. The method according to claim 5, wherein in step S5, after the pooling operation, the bit operations are repeated to check each pixel and the surrounding pixels within a certain range; if one or more non-edge pixels are found to split a continuous edge line, and the two resulting segments each have a certain length in the extension direction, the non-edge pixels are set as edges to restore the integrity of the edge line, and
according to the gradient direction corresponding to the edge map, the angle range of an ideal edge line is defined centered on the perpendicular to the gradient direction, together with a minimum threshold for edge-line length; each pixel and the pixels within a certain range around it are checked by bit operations; if a pixel is an edge pixel and the bit-operation check finds that its edge line extends within the ideal angle range for at least the minimum length, it remains an edge pixel after processing; otherwise, an edge pixel that does not satisfy the condition is no longer treated as an edge pixel;
based on the above operations, pooling is performed again, and so on, to construct an image pyramid.
7. The method according to claim 1, wherein the pooling operation scales the image down by a certain ratio, merging a plurality of pixels into one pixel.
8. The method according to claim 7, wherein the specific pooling ratio is one half in each dimension, merging every four pixels into one pixel.
9. The method according to claim 8, wherein if the four binary bits corresponding to the pooled pixels are all zero, the new pooled bit is zero; otherwise, the pooled bit is one.
CN201911171699.7A 2019-11-26 2019-11-26 Object edge feature recognition and extraction method Active CN110930423B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911171699.7A CN110930423B (en) 2019-11-26 2019-11-26 Object edge feature recognition and extraction method


Publications (2)

Publication Number Publication Date
CN110930423A CN110930423A (en) 2020-03-27
CN110930423B true CN110930423B (en) 2023-07-14

Family

ID=69851954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911171699.7A Active CN110930423B (en) 2019-11-26 2019-11-26 Object edge feature recognition and extraction method

Country Status (1)

Country Link
CN (1) CN110930423B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113467680A (en) * 2021-06-28 2021-10-01 网易(杭州)网络有限公司 Drawing processing method, drawing processing device, electronic equipment and storage medium
CN113643272A (en) * 2021-08-24 2021-11-12 凌云光技术股份有限公司 Target positioning modeling method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101465001A (en) * 2008-12-31 2009-06-24 昆山锐芯微电子有限公司 Method for detecting image edge based on Bayer RGB
CN109816673A (en) * 2018-12-27 2019-05-28 合肥工业大学 A method of non-maximum suppression, dynamic threshold calculation and image edge detection

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334263B (en) * 2008-07-22 2010-09-15 东南大学 A method for locating the center of a circular target
CN101620732A (en) * 2009-07-17 2010-01-06 南京航空航天大学 Visual detection method of road driving line
CN102194114B (en) * 2011-06-25 2012-11-07 电子科技大学 Method for recognizing iris based on edge gradient direction pyramid histogram
CN105719251B (en) * 2016-01-19 2018-06-19 浙江大学 A restoration method for compressed degraded images with large-frame linear motion blur
US10657635B2 (en) * 2017-03-16 2020-05-19 Ricoh Company, Ltd. Inspection apparatus, inspection method and storage medium


Also Published As

Publication number Publication date
CN110930423A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
CN110060237B (en) Fault detection method, device, equipment and system
CN106940816B (en) CT image pulmonary nodule detection system based on 3D full convolution neural network
US11443437B2 (en) Vibe-based three-dimensional sonar point cloud image segmentation method
US10510148B2 (en) Systems and methods for block based edgel detection with false edge elimination
CN109978807B (en) Shadow removing method based on generating type countermeasure network
CN108510451B (en) Method for reconstructing license plate based on double-layer convolutional neural network
CN110146791A (en) A corona detection method based on image processing
Singh et al. Contrast enhancement and brightness preservation using global-local image enhancement techniques
CN110930423B (en) Object edge feature recognition and extraction method
CN113012068A (en) Image denoising method and device, electronic equipment and computer readable storage medium
CN111353371A (en) Shoreline extraction method based on spaceborne SAR images
CN107705313A (en) A kind of remote sensing images Ship Target dividing method
CN114862889A (en) Road edge extraction method and device based on remote sensing image
CN113570652B (en) Sandstone reservoir mineral intercrystalline pore quantitative analysis method based on SEM image
CN117253150A (en) Ship contour extraction method and system based on high-resolution remote sensing image
CN111882565B (en) Image binarization method, device, equipment and storage medium
CN115841629A (en) SAR image ship detection method based on convolutional neural network
CN110930361A (en) A method for occlusion detection of virtual and real objects
CN119477709B (en) Method for enhancing contour of fluorescence in-situ hybridization image by means of illumination fusion and local binary
CN112215104A (en) A method, device and device for sea ice extraction based on superpixel segmentation
CN117788829B (en) Image recognition system for invasive plant seed detection
CN112652004B (en) Image processing method, device, equipment and medium
CN117765287A (en) Image target extraction method combining LWR and density clustering
CN117095275A (en) Asset inventory method, system, device and storage medium for data center
Jia Fabric defect detection based on open source computer vision library OpenCV

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 101, Building 6, No.1 Ruihua Road, Tianhe District, Guangzhou, Guangdong 510660

Patentee after: SHARPVISION CO.,LTD.

Address before: 510660 1st floor, building 6, Huangzhou Industrial Zone, west of chebei Road, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: SHARPVISION CO.,LTD.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A method for identifying and extracting object edge features

Granted publication date: 20230714

Pledgee: Agricultural Bank of China Limited Guangzhou Tianhe sub branch

Pledgor: SHARPVISION CO.,LTD.

Registration number: Y2024980038189