CN110793564A - Visual inspection apparatus and visual inspection method - Google Patents
- Publication number: CN110793564A (application CN201810873358.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01D—MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
- G01D21/00—Measuring or testing not otherwise provided for
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
Abstract
The invention discloses a visual inspection apparatus and a visual inspection method. The visual inspection apparatus comprises a detection module, a processing module, and a fusion module. The detection module acquires image data of a target object in a plurality of detected regions, together with color and/or brightness information of those regions, and sends both to the processing module. The processing module determines the position of each point on the target object in the detected regions from the image data and the color and/or brightness information. The fusion module stitches the positions of the points on the target object across the detected regions to obtain an image of the inspected target object. With this technical solution, embodiments of the invention achieve high-speed, high-efficiency visual inspection without manual intervention.
Description
Technical Field
The invention relates to the technical field of image detection, and in particular to a visual inspection apparatus and a visual inspection method.
Background
A typical machine vision inspection system comprises one or more cameras mounted at different angles, a fixed frame, a computing host, and a platform on which the inspected object is placed. Inspection proceeds in three steps. First, the sharpness of the target picture captured by the camera is adjusted and a test calibration is performed; images of the inspected target object are then captured under this unified calibration. Second, the captured image is transmitted to a computer through the camera's signal output interface, and the computer performs noise elimination and filtering by comparing the calibration image with the currently captured image. Third, the processed image is classified using image processing methods and pattern recognition algorithms, the classification consisting of problem-area identification and problem-area categorization.
The prior art described above has the following disadvantages. First, real-time calibration is impossible, so noise accumulates as the camera ages and detection accuracy deteriorates over time. Second, the type of inspected target cannot be changed at will: because calibration must be performed in advance, all information about the target (environmental conditions, target contour, body thickness, and so on) must be captured by prior calibration. Third, for self-luminous targets such as flat panel display products, the brightness must be calibrated in advance, typically with a luminance/chromaticity test instrument before machine vision inspection. Fourth, because the light-emitting regions of a self-luminous target differ from one another, single-point calibration cannot cover the whole target, so a processing threshold valid for the entire area is difficult to find. Fifth, during processing every camera photographs the entire inspected object; side-view shots are coarse and can only detect appearance defects such as scratches and foreign matter, without describing specific geometric attributes such as depth and height. Sixth, a single computing host runs the machine vision processing software: as camera resolution rises, the volume of information to analyze grows, computing pressure increases, and real-time performance drops. Moreover, because multiple camera streams feed one host, the per-second sampling frame rate of an uncompressed image sequence above 1080p resolution, or of a losslessly compressed data stream, is very low; for example, a compressed video stream from a 29-megapixel camera transfers at 5 frames per second, so the inspection rate is no better than one complete object every 7 seconds, which is too slow for product lines requiring high-speed inspection, even compared with manual inspection.
Therefore, there is an urgent need for a high-speed, high-efficiency visual inspection method.
Disclosure of Invention
The invention aims to provide a visual inspection apparatus and a visual inspection method that achieve high-speed, high-efficiency visual inspection.
In order to achieve the above object, according to a first aspect of the present invention, the following technical solutions are provided:
a visual inspection apparatus comprises a detection module, a processing module and a fusion module; wherein:
the detection module is configured to acquire image data of a target object in a plurality of detected regions and color and/or brightness information of the plurality of detected regions, and to send the image data and the color and/or brightness information to the processing module;
the processing module is configured to determine the position of each point on the target object in the detected regions according to the image data and the color and/or brightness information;
and the fusion module is configured to stitch the positions of the points on the target object in the detected regions to obtain an image of the inspected target object.
Further, the detection module comprises an image acquisition module and a color analysis module; wherein:
the image acquisition module is configured to acquire image data of the target object in the detected regions and send it to the processing module;
the color analysis module is configured to acquire the color and/or brightness information of the detected regions and send it to the processing module.
Further, the processing module comprises a calculation processing unit and a color processing unit; wherein:
the calculation processing unit is used for carrying out real-time calibration, image recognition and image feature classification on the image data of the target object in the detected areas according to the color and/or brightness data of the detected areas, and determining the position of each point on the target object in the detected areas;
and the color processing unit is used for determining the color and/or brightness data of the detected areas according to the color and/or brightness information of the detected areas based on the arrangement mode of the color analysis module, and sending the color and/or brightness data of the detected areas to the calculation processing unit.
Furthermore, there are a plurality of image acquisition modules and a plurality of color analysis modules, arranged in a staggered manner.
Further, the staggered arrangement mode is a horizontal staggered arrangement mode, a vertical staggered arrangement mode or an oblique staggered arrangement mode.
Further, the visual inspection apparatus further comprises a display module;
and the display module is connected with the fusion module and is used for displaying the image of the detected target object.
Further, the visual inspection device further comprises a platform and a storage and transmission module; wherein:
the platform is used for bearing the detected target object;
the storage and transmission module is used for storing the processing results of the detection module, the processing module and the fusion module and sending the processing results to the platform so as to adjust the platform in six degrees of freedom.
In order to achieve the above object, according to a second aspect of the present invention, the following technical solutions are further provided:
a method of visual inspection, comprising:
acquiring image data of a target object in a plurality of detected regions and color and/or brightness information of the plurality of detected regions;
determining the position of each point on the target object in the detected areas according to the image data and the color and/or brightness information;
and stitching the positions of the points on the target object in the detected regions to obtain an image of the inspected target object.
Further, the step of determining the position of each point on the target object in the detected regions according to the image data and the color and/or brightness information specifically includes:
determining the color and/or brightness data of the detected areas according to the color and/or brightness information of the detected areas;
and according to the color and/or brightness data of the detected areas, carrying out real-time calibration, image recognition and image feature classification on the image data of the target object in the detected areas, and determining the position of each point on the target object in the detected areas.
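The three steps of the claimed method can be sketched as a toy pipeline. This is an illustrative model only: the thresholding rule, the data layout, and all names are assumptions, not the patent's algorithm.

```python
import numpy as np

def inspect(regions):
    """Toy model of the claimed method. Each region dict carries:
    `image`      - 2-D grey-level array for that detected region,
    `offset`     - the region's origin in global (row, col) coordinates,
    `brightness` - the reference value from the region's color module."""
    all_points = []
    for r in regions:
        # Step 2a: normalize the image with the per-region brightness
        # reference (the "real-time calibration" of the claims).
        norm = r["image"] / r["brightness"]
        # Step 2b: locate points of interest in region coordinates;
        # here, simply every pixel brighter than a fixed threshold.
        ys, xs = np.nonzero(norm > 0.5)
        # Step 3: map the points into global coordinates so the
        # per-region results can be stitched together.
        for y, x in zip(ys, xs):
            all_points.append((int(y) + r["offset"][0],
                               int(x) + r["offset"][1]))
    return sorted(all_points)
```

With two 2x2 regions placed side by side, each region contributes its bright pixels in global coordinates, and the stitched point set covers both regions.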
In order to achieve the above object, according to a third aspect of the present invention, the following technical solutions are further provided:
a visual inspection device comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions, such that when executing them the processor implements the visual inspection method described in the second aspect.
Embodiments of the invention provide a visual inspection apparatus and a visual inspection method. The visual inspection apparatus comprises a detection module, a processing module, and a fusion module. The detection module acquires image data of a target object in a plurality of detected regions together with color and/or brightness information of those regions, and sends both to the processing module; the processing module determines the position of each point on the target object in the detected regions from the image data and the color and/or brightness information; and the fusion module stitches those positions to obtain an image of the inspected target object. By processing the image data and the color and/or brightness information with the detection, processing, and fusion modules, embodiments of the invention achieve high-speed, high-efficiency visual inspection without manual intervention.
The foregoing is merely an overview of the technical solutions of the present invention. So that the technical means of the invention may be understood more clearly and implemented in accordance with this description, and so that the above and other objects, features, and advantages of the invention may be more readily apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of a visual inspection apparatus according to one embodiment of the present invention;
FIG. 2a is a schematic diagram of an image capture module and a color analysis module being arranged alternately according to an embodiment of the present invention;
FIG. 2b is a schematic diagram of an image capture module and a color analysis module being arranged alternately according to another embodiment of the present invention;
FIG. 2c is a schematic diagram of an image capture module and a color analysis module being arranged alternately according to yet another embodiment of the present invention;
FIG. 2d is a schematic diagram of an image capture module and a color analysis module being arranged alternately according to another embodiment of the present invention;
FIG. 3 is a schematic side view of a visual inspection apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic view of a field of view of an image acquisition module according to one embodiment of the invention;
FIG. 5 is a schematic diagram of a display module displaying color and/or brightness data according to one embodiment of the invention;
FIG. 6 is a flow chart illustrating a visual inspection method according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a visual inspection apparatus according to another embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided with the following embodiments merely illustrate the basic idea of the invention: they show only the components relevant to the invention, not the number, shape, and size of the components in an actual implementation. In practice the type, quantity, and proportion of the components may vary freely, and the component layout may be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
To achieve high-speed, high-efficiency visual inspection, an embodiment of the invention provides a visual inspection apparatus. As shown in fig. 1, the apparatus mainly includes: a detection module 11, a processing module 12, and a fusion module 13.
The detection module 11, the processing module 12 and the fusion module 13 are described in detail below.
The detecting module 11 is configured to obtain image data of a target object in a plurality of detected regions and color and/or brightness information of the plurality of detected regions, and send the image data of the target object in the detected regions and the color and/or brightness information to the processing module 12.
The detection module 11 may specifically include an image acquisition module (e.g., a camera lens, etc.) and a color analysis module. The image acquisition module is used for acquiring image data of a detected target object in the detected area. The color analysis module is used for acquiring color and/or brightness information of the detected area.
The image acquisition module may include an image sensor (e.g., a CCD type image sensor, a CMOS type image sensor) and several first lenses. The color analysis module includes a color sensor and a number of second lenses. The color sensor is used for acquiring color and/or brightness information of a target object on the detected area. The color sensor may be, for example, a silicon-based sensor.
There may be one or more color analysis modules; the invention does not limit their number. If multiple color analysis modules are used, they form a distributed color analysis module whose units process simultaneously, enabling real-time calibration of the image acquisition module: calibration can be performed after each inspection of the target object is completed. Compared with existing off-line detection and calibration methods, this overcomes the influence of ambient light on the inspected target, achieves high-speed visual inspection, and yields more accurate color and/or brightness information.
In practical application, the image acquisition module and the color analysis module are fixed on a mounting frame; the frame is connected to a printed circuit board, and the image acquisition module and the color analysis module are soldered onto the printed circuit board.
The detection module is described in detail below with specific embodiments.
According to the stereoscopic vision principle, three adjacent image acquisition modules with mutually parallel optical axes form a complete stereoscopic detection unit (two modules form only one triangulation model, whereas three modules form three triangulation models, so a spatial point can be uniquely determined). When three-dimensional information about the inspected target must be measured, accurate spatial measurement can be performed using the stereo matching information of each group of three image acquisition modules. Because the image acquisition modules are distributed, a non-self-luminous object may be affected by ambient light or by inconsistent brightness and/or chromaticity of the active illumination; for a self-luminous object, the brightness and/or chromaticity of each detected region's emission is likewise inconsistent.
To eliminate the influence of this brightness/chromaticity inconsistency, embodiments of the invention arrange the color analysis modules and the image acquisition modules in a staggered manner. For example, a color analysis module may be placed between two horizontally or vertically adjacent image acquisition modules whose center-to-center distance is 20 mm.
Fig. 2a-2d schematically show the case of interleaving the image acquisition module and the color analysis module. The image acquisition modules 21 and the color analysis modules 22 are arranged in an interlaced manner, and are carried by the carrying structure 23. The staggered arrangement includes, but is not limited to: horizontal staggered arrangement, vertical staggered arrangement, oblique staggered arrangement, and the like. The image acquisition module 21 includes an image sensor and several lenses. The color analysis module 22 includes a color sensor and several lenses.
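For illustration, the positions of a horizontally staggered layout such as the one in fig. 2a can be generated as follows. The module counts, the function name, and the use of the 20 mm center-to-center figure as the grid pitch are assumptions for the sketch.

```python
def staggered_layout(rows, cols, pitch=20.0):
    """Generate (x, y) centers (in mm) for a horizontally staggered
    arrangement: color analysis modules sit midway between each pair of
    horizontally adjacent image acquisition modules."""
    image_pos = [(c * pitch, r * pitch)
                 for r in range(rows) for c in range(cols)]
    color_pos = [(c * pitch + pitch / 2, r * pitch)
                 for r in range(rows) for c in range(cols - 1)]
    return image_pos, color_pos
```

A 2x3 grid of image acquisition modules, for example, yields four interleaved color analysis modules, the first centered at (10, 0), midway between the first two image modules.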
Fig. 3 schematically shows a side view of the visual inspection apparatus, illustrating the relative positions of the image sensor, the color sensor, and the lenses. As shown in fig. 3, there are a color sensor 31, an image sensor 32, lenses 33, a printed circuit board 34, an inspected object 35, and a light ray 36. The color sensor 31 and the image sensor 32 are mounted on the printed circuit board 34. The color sensor 31 together with several lenses 33 forms one lens assembly; the image sensor 32 together with several lenses 33 forms another. The inspected object 35 is placed on the platform. According to the processing results of the color analysis module, the platform can be adjusted automatically in six degrees of freedom through its pitching, horizontal, rotary, and vertical motion mechanisms, including automatic raising and lowering.
In the image acquisition module, the first lenses are arranged in sequence below the image sensor (here, "below" means toward the inspected object). They admit light within ±2.5 degrees, allowing observation of a square area with a 20 mm side on the inspected object.
In the color analysis module, the second lenses are likewise arranged in sequence. If the color analysis module has an acceptance angle of ±2.5 degrees and a diameter of 5-10 mm, its detection area can be determined, according to a standard colorimetric system (e.g., the CIE1931 colorimetric system), to be a region 8-15 mm in diameter.
Specifically, the color analysis module determines the color (or chromaticity) and/or luminance information of the detected region based on the color space defined by the CIE1931 chromaticity system. When the color analysis module receives incident light, it performs photoelectric conversion and quantization of the resulting current and converts it into the tristimulus values (X, Y, Z) defined by the CIE1931 system, thereby determining accurate color and/or brightness measurements for the detected region.
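The final step, from CIE1931 tristimulus values to chromaticity coordinates and luminance, can be sketched as below. The sensor-specific scaling from photocurrent to X, Y, Z is assumed to have been applied already; the function name is illustrative.

```python
def xyz_to_chromaticity(X, Y, Z):
    """Convert CIE1931 tristimulus values to (x, y) chromaticity
    coordinates plus luminance. In the CIE1931 system,
    x = X / (X + Y + Z), y = Y / (X + Y + Z), and Y itself carries
    the luminance."""
    s = X + Y + Z
    if s == 0:
        raise ValueError("tristimulus values sum to zero")
    return X / s, Y / s, Y
```

Equal-energy white (X = Y = Z) maps to the chromaticity point (1/3, 1/3), a standard sanity check for this conversion.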
Fig. 4 shows an example field-of-view diagram for the image acquisition modules, depicting modules 401 to 409 and the first to fourth regions. The area enclosed by the image acquisition modules (the outermost dashed box in fig. 4) is the combined field of view; modules 401-404 and 406-409 each have the same field of view as module 405. The fields of view of modules 404, 405, and 402 share a common overlap, the first region; those of modules 402, 405, and 406 share the second region; those of modules 407, 405, and 408 share the third region; and those of modules 408, 405, and 406 share the fourth region. Of course, those skilled in the art will understand that the invention is not limited to the nine-module embodiment of fig. 4; twelve modules may also be used, for example by placing one additional module horizontally outside each of modules 403, 406, and 409. The same method then determines the conditions of the areas around module 406.
The three-dimensional coordinates of any point on the inspected target within any of the first to fourth regions can be uniquely determined by three image acquisition modules. According to the stereo vision calculation method, the three-dimensional coordinates of points on the target can be computed from each pair of modules; any two adjacent modules among the three form such a pair, so three groups of three-dimensional coordinate data are obtained, and from these three groups the three-dimensional coordinates of any point on the target are uniquely determined. If the resolution of the image acquisition module is 1 micron, the detection precision for any spatial point in each region is ±1 micron.
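The pairwise-triangulation idea above can be illustrated with a simplified 1-D rectified model: parallel optical axes, collinear modules, and depth recovered from disparity. Fusing the three pairwise estimates by averaging is an assumption of this sketch; the patent states only that the three pairs uniquely determine the point.

```python
def depth_from_pair(x_i, x_j, baseline, focal):
    """Depth of a point from one module pair via the classic disparity
    relation Z = f * b / d, valid for parallel optical axes."""
    return focal * baseline / (x_i - x_j)

def depth_from_triple(xs, positions, focal):
    """Fuse the three pairwise depth estimates available from three
    collinear modules (simple mean as the assumed fusion rule).

    `xs` are the image x-coordinates of the same point in each module;
    `positions` are the module centers along the baseline."""
    pairs = [(0, 1), (1, 2), (0, 2)]
    estimates = [depth_from_pair(xs[i], xs[j],
                                 positions[j] - positions[i], focal)
                 for i, j in pairs]
    return sum(estimates) / len(estimates)
```

With noise-free inputs all three pairs agree; with real measurements the redundancy of the third module lets inconsistent matches be detected or averaged out.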
As shown in fig. 2a, chrominance and/or luminance data for each of the first to fourth regions are acquired by the color analysis module 22 within that region, and the image acquisition modules 21 are calibrated accordingly.
As shown in fig. 2b, the data for each region are acquired by the two color analysis modules 22 above and below that region, and the image acquisition modules 21 are calibrated accordingly.
As shown in fig. 2c, the data for each region are acquired by the four color analysis modules 22 above, below, to the left of, and to the right of that region, and the image acquisition modules 21 are calibrated accordingly.
As shown in fig. 2d, the data for each region are acquired by the two color analysis modules 22 to the left and right of that region, and the image acquisition modules 21 are calibrated accordingly.
If multiple color analysis modules are used to calibrate a region for the image acquisition modules, a weighted-averaging method may be adopted. For example, in the case shown in fig. 2b, if the two color analysis modules 22 above and below the first region report color coordinates (x1, y1) and (x2, y2) and luminance values L1 and L2, then with equal weights the color coordinates of the first region are ((x1+x2)/2, (y1+y2)/2) and its luminance is (L1+L2)/2.
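A small sketch of this fusion rule follows. The generalization to arbitrary weights is an assumption; the patent's example uses equal weights, which this reproduces as the default.

```python
def fuse_color_readings(readings, weights=None):
    """Weighted average of several color-module readings for one region.
    Each reading is a tuple (x, y, L): chromaticity coordinates plus
    luminance. With equal weights this reproduces the two-module example:
    ((x1+x2)/2, (y1+y2)/2, (L1+L2)/2)."""
    if weights is None:
        weights = [1.0] * len(readings)  # equal weights by default
    total = sum(weights)
    return tuple(sum(w * r[k] for w, r in zip(weights, readings)) / total
                 for k in range(3))
```

Unequal weights could, for instance, favor the color analysis module closest to the center of the region, but the patent does not specify such a scheme.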
The processing module 12 is configured to determine a position of each point on the target object in the multiple detected regions according to the image data and the color and/or brightness information of the target object in the multiple detected regions.
In practical application, the image sensor and the color sensor are soldered on one surface of the printed circuit board 34, with structural members mechanically connecting the image sensor to its lenses and the color sensor to its lenses; the other surface of the printed circuit board 34 is connected to the processing module via high-speed signal lines or a board-to-board connector.
Specifically, the processing module 12 includes a calculation processing unit and a color processing unit connected to the calculation processing unit. Wherein the number of the calculation processing units is the same as the number of the image sensors.
The calculation processing unit carries out real-time calibration, image recognition and image feature classification on the image data of the target object in the detected areas according to the color and/or brightness data of the detected areas, and determines the position of each point on the target object in the detected areas.
Specifically, the calculation processing unit is used for receiving the image data of the detected area sent by the image sensor and the color and/or brightness data sent by the color processing unit, and carrying out real-time calibration, image recognition and image feature classification on the image data of the detected area according to the color and/or brightness data.
The image data are calibrated in real time so that they are referenced to predetermined color and/or brightness information. In this way, when the image analysis threshold is set, all images are guaranteed to be processed with the same threshold, which ensures consistency across subsequent processing (e.g., image segmentation, image smoothing, image sharpening, image filtering, image binarization, image classification, etc.).
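As an illustration of why this real-time calibration matters, the sketch below (hypothetical Python; the function names, the simple gain model, and the numbers are assumptions, not the patent's algorithm) scales a region's pixel values toward a predetermined reference luminance so that one binarization threshold can be applied to every region:

```python
def calibrate_to_reference(pixels, measured_lum, reference_lum):
    """Scale a region's pixel values so its measured luminance matches a
    predetermined reference; afterwards one global threshold fits all regions."""
    gain = reference_lum / measured_lum
    return [[min(255, int(p * gain)) for p in row] for row in pixels]

def binarize(pixels, threshold=128):
    """Apply the same image analysis threshold to any calibrated region."""
    return [[1 if p >= threshold else 0 for p in row] for row in pixels]

region = [[100, 140], [120, 160]]   # raw pixel values of one detected region
calibrated = calibrate_to_reference(region, measured_lum=180.0, reference_lum=225.0)
mask = binarize(calibrated)          # the same threshold works for every region
```

Without the calibration step, a region imaged under a dimmer module would need its own threshold, breaking the consistency of segmentation and classification across regions.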
The image recognition is to extract image features in a detected region when image features (for example, points, lines, curves, brightness, etc.) are defined in advance.
The image feature classification is to classify the image recognition result according to a predetermined category (for example, point defect, line defect, scratch, bright spot, dark spot, uneven chromaticity, uneven brightness, etc.).
The color processing unit is used for determining the color and/or brightness data of the detected areas according to the color and/or brightness information of the detected areas based on the arrangement mode of the color analysis module, and sending the color and/or brightness data of the detected areas to the calculation processing unit.
Specifically, the color processing unit is connected with the color sensor and receives the color and/or brightness information of the detected object in the detected areas sent by the color sensor. Based on the arrangement mode of the color analysis module, it determines the color and/or brightness data of the plurality of detected regions from that information and sends the data to the calculation processing unit.
The fusion module 13 is configured to splice the positions of each point on the target object in the multiple detected regions to obtain an image of the detected target object.
The fusion module 13 may also perform position labeling on the recognition result of the processing module 12 (as shown in fig. 4). Specifically, according to a stereo vision calculation method, the three-dimensional coordinates of every point on the detected target object can be calculated from any pair of image acquisition modules; with three image acquisition modules, the three pairs of adjacent modules yield three groups of three-dimensional data. From these three groups of stereo data, based on a stereo geometric space-point calculation method, the fusion module can label the three-dimensional spatial position of every point on the target object in the detected area.
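For a rectified, parallel two-camera pair, the stereo vision calculation mentioned above reduces to triangulation from disparity. The following is a minimal Python sketch under those assumptions (the focal length, baseline, and image coordinates are illustrative, not values from the patent):

```python
def triangulate(xl, yl, xr, f, baseline):
    """Recover the 3-D coordinates of a point seen by two rectified image
    acquisition modules; disparity = xl - xr, image coordinates in the
    same units as the focal length f."""
    disparity = xl - xr
    if disparity == 0:
        raise ValueError("point at infinity: zero disparity")
    z = f * baseline / disparity          # depth from disparity
    return (xl * z / f, yl * z / f, z)    # back-project to 3-D

# Illustrative numbers: f = 1000 (pixels), baseline = 50 (mm)
X, Y, Z = triangulate(xl=120.0, yl=40.0, xr=100.0, f=1000.0, baseline=50.0)
# disparity = 20  ->  Z = 1000 * 50 / 20 = 2500 mm
```

With three modules, running this per adjacent pair yields the three groups of three-dimensional data described above, which the fusion module can then reconcile into a single labeled position per point.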
In a preferred embodiment, the visual inspection device may further include a display module. The display module is connected with the fusion module. The display module is used for displaying the image of the detected target object.
Further, the display module may also display the image feature classification results of the first to fourth regions as shown in fig. 2a-2 d.
For example, assume that there is a scratch horizontally penetrating the surface of the target object in the entire detected area; if a scratch is extracted in each of the first to fourth regions; then, the image feature classification results of the first area to the fourth area are displayed through the display module, so that whether the scratch extracted from each area is the same scratch or whether multiple scratches exist in the detected target object can be determined.
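One simple way to decide whether scratches extracted in adjacent regions belong to the same scratch is to test whether the segments meet at the shared region boundary with a similar slope. The Python sketch below is a hypothetical heuristic (the function name, tolerances, and coordinates are assumptions, not the patent's method):

```python
def same_scratch(seg_a, seg_b, y_tol=5.0, slope_tol=0.1):
    """Heuristic: two scratch segments from horizontally adjacent regions are
    'the same scratch' if they meet at the shared boundary (similar y) with a
    similar slope.  seg = ((x0, y0), (x1, y1)); seg_a ends where seg_b begins."""
    (ax0, ay0), (ax1, ay1) = seg_a
    (bx0, by0), (bx1, by1) = seg_b
    slope_a = (ay1 - ay0) / (ax1 - ax0)
    slope_b = (by1 - by0) / (bx1 - bx0)
    return abs(ay1 - by0) <= y_tol and abs(slope_a - slope_b) <= slope_tol

# A scratch crossing from the first region into the second:
a = ((0.0, 100.0), (640.0, 102.0))
b = ((640.0, 103.0), (1280.0, 105.0))
result = same_scratch(a, b)   # continuous across the boundary -> same scratch
```

If the endpoints disagree in height or slope, the segments are reported as distinct scratches, matching the "multiple scratches" case the display module helps the operator distinguish.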
In addition, taking the arrangement of the image acquisition module and the color analysis module shown in fig. 2b as an example, the display module may further display the color and/or luminance data, for example, the color and luminance coordinate values of each region. Fig. 5 exemplarily shows the coordinate values (x1, y1, L1), (x2, y2, L2), (x3, y3, L3), (x4, y4, L4), (x5, y5, L5) and (x6, y6, L6).
In a preferred embodiment, the visual inspection apparatus may further include a platform and a storage and transmission module. The storage and transmission module is used for storing the processing results of the detection module, the processing module and the fusion module and sending them to the platform so that the platform can be adjusted in six degrees of freedom; the platform is used for bearing the target object.
Specifically, the platform adjusts its six degrees of freedom according to the processing result using a pitching motion mechanism, a horizontal motion mechanism, a rotating motion mechanism and a vertical motion mechanism.
By adopting this technical scheme, automatic adjustment of the platform is realized and manual intervention is avoided.
In addition, in order to realize high-speed and high-efficiency visual detection, the embodiment of the invention also provides a visual detection method. As shown in fig. 6, the method mainly includes:
s600: acquiring image data of a target object in a plurality of detected regions and color and/or brightness information of the plurality of detected regions;
s610: determining the position of each point on the target object in the detected areas according to the image data and the color and/or brightness information;
s620: and splicing the positions of each point on the target object in the detected areas to obtain an image of the detected target object.
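Steps S600 to S620 can be sketched end-to-end as follows (hypothetical Python; the stand-in locating, calibration, and stitching logic are illustrative assumptions, not the patent's actual algorithms):

```python
def locate_points(image_data, color_info):
    # S610 stand-in: treat each nonzero pixel as a point of interest,
    # calibrated against the region's color/brightness information.
    gain = color_info["reference"] / color_info["measured"]
    return [(x, y, v * gain) for (x, y, v) in image_data if v > 0]

def stitch(located, region_width=640):
    # S620: shift each region's points into the global coordinate frame
    # and splice them into one image of the detected target object.
    merged = []
    for rid in sorted(located):
        merged += [(x + rid * region_width, y, v) for (x, y, v) in located[rid]]
    return merged

def visual_inspection(regions):
    # S600 (acquisition) is assumed done; `regions` maps a region id to
    # (image_data, color_info) as delivered by the detection module.
    return stitch({rid: locate_points(img, ci) for rid, (img, ci) in regions.items()})

regions = {
    0: ([(10, 20, 100), (30, 40, 0)], {"measured": 200.0, "reference": 200.0}),
    1: ([(5, 20, 80)], {"measured": 160.0, "reference": 200.0}),
}
result = visual_inspection(regions)
# [(10, 20, 100.0), (645, 20, 100.0)]
```

Note how the per-region calibration makes the two detections comparable (both reach 100.0) even though the second region was imaged dimmer, which is exactly the consistency the method's calibration step is meant to provide.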
In a preferred embodiment, step S600 specifically includes:
s601: acquiring image data of target objects in a plurality of detected areas through an image acquisition module, and sending the image data to a processing module;
s602: and acquiring the color and/or brightness information of the detected areas through a color analysis module, and sending the color and/or brightness information to a processing module.
The image acquisition module may consist of an image sensor and a lens assembly composed of a plurality of lens elements. The image sensor may be, for example, a CCD image sensor (charge-coupled device image sensor) or a CMOS image sensor (complementary metal-oxide-semiconductor image sensor).
The color analysis module may consist of a color sensor and a lens assembly composed of a plurality of lens elements.
Preferably, the image acquisition module and the color analysis module are both multiple and are arranged in a staggered manner. The staggered arrangement includes, but is not limited to, a horizontal staggered arrangement, a vertical staggered arrangement, and an oblique staggered arrangement.
In a preferred embodiment, the step of determining the position of each point on the target object in the plurality of detected regions according to the image data and the color and/or brightness information specifically includes:
determining color and/or brightness data of the detected areas according to the color and/or brightness information of the detected areas;
and according to the color and/or brightness data of the detected areas, carrying out real-time calibration, image recognition and image feature classification on the image data of the target object in the detected areas, and determining the position of each point on the target object in the detected areas.
For the above detailed description of the method embodiment, reference may be made to the related description of the foregoing embodiment of the visual inspection apparatus, and further description is omitted here.
Although the steps in the embodiment of the visual inspection method are described above in the given sequence, it should be clear to those skilled in the art that they need not be performed in that sequence; they may also be performed in other orders, such as in reverse, in parallel, or interleaved. Furthermore, on the basis of the above steps, those skilled in the art may add other steps; these obvious modifications or equivalents should also be included in the protection scope of the present invention and are not described herein again.
For convenience of description, only the parts relevant to the embodiments of the present invention are shown; for specific technical details that are not disclosed, refer to the apparatus embodiments of the present invention.
In addition, in order to realize high-speed and high-efficiency visual detection, the embodiment of the invention also provides visual detection equipment.
Fig. 7 is a hardware block diagram illustrating a visual inspection apparatus according to an embodiment of the present disclosure. As shown in fig. 7, a visual inspection apparatus 70 according to an embodiment of the present disclosure includes a memory 71 and a processor 72.
The memory 71 is used to store non-transitory computer readable instructions. In particular, memory 71 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
The processor 72 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the visual inspection hardware device 70 to perform desired functions. In one embodiment of the present disclosure, the processor 72 is configured to execute the computer readable instructions stored in the memory 71, so that the visual inspection apparatus 70 performs all or part of the steps of the visual inspection method of the embodiments of the present disclosure described above.
Those skilled in the art should understand that, in order to solve the technical problem of how to obtain a good user experience, the present embodiment may also include well-known structures such as a communication bus, an interface, and the like, and these well-known structures should also be included in the protection scope of the present invention.
Various embodiments of the visual inspection method presented in this disclosure may be implemented using a computer-readable medium, such as computer software, hardware, or any combination thereof. For a hardware implementation, various embodiments of the visual detection method proposed by the present disclosure may be implemented by using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, various embodiments of the visual detection method proposed by the present disclosure may be implemented in the controller. For software implementation, various embodiments of the visual inspection method presented in the present disclosure may be implemented with a separate software module that allows at least one function or operation to be performed. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in memory and executed by the controller.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
Also, as used herein, "or" in a list of items beginning with "at least one of" indicates a disjunctive list, such that, for example, a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be decomposed and/or re-combined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
Claims (10)
1. A visual inspection device comprises a detection module, a processing module and a fusion module; wherein:
the detection module is used for acquiring image data of a target object in a plurality of detected areas and color and/or brightness information of the plurality of detected areas, and sending the image data of the target object in the detected areas and the color and/or brightness information to the processing module;
the processing module is used for determining the position of each point on the target object in the detected areas according to the image data and the color and/or brightness information;
and the fusion module is used for splicing the positions of each point on the target object in the plurality of detected areas to obtain an image of the detected target object.
2. The visual inspection device of claim 1, wherein the detection module includes an image acquisition module and a color analysis module; wherein:
the image acquisition module is used for acquiring image data of the target objects in the detected areas and sending the image data to the processing module;
the color analysis module is used for acquiring the color and/or brightness information of the detected areas and sending the color and/or brightness information to the processing module.
3. The visual inspection device of claim 2, wherein the processing module includes a computational processing unit and a color processing unit; wherein:
the calculation processing unit is used for carrying out real-time calibration, image recognition and image feature classification on the image data of the target object in the detected areas according to the color and/or brightness data of the detected areas, and determining the position of each point on the target object in the detected areas;
and the color processing unit is used for determining the color and/or brightness data of the detected areas according to the color and/or brightness information of the detected areas based on the arrangement mode of the color analysis module, and sending the color and/or brightness data of the detected areas to the calculation processing unit.
4. The visual inspection device of claim 2, wherein there are a plurality of the image acquisition modules and a plurality of the color analysis modules, arranged in a staggered manner.
5. The visual inspection device of claim 4, wherein the staggered arrangement is a horizontally staggered arrangement, a vertically staggered arrangement, or an obliquely staggered arrangement.
6. The visual inspection device of claim 1, wherein the visual inspection device further comprises a display module;
and the display module is connected with the fusion module and is used for displaying the image of the detected target object.
7. The visual inspection device of claim 1, wherein the visual inspection device further comprises a platform and a storage transport module; wherein:
the platform is used for bearing the detected target object;
the storage and transmission module is used for storing the processing results of the detection module, the processing module and the fusion module and sending the processing results to the platform so as to adjust the platform in six degrees of freedom.
8. A method of visual inspection, comprising:
acquiring image data of a target object in a plurality of detected regions and color and/or brightness information of the plurality of detected regions;
determining the position of each point on the target object in the detected areas according to the image data and the color and/or brightness information;
and splicing the positions of each point on the target object in the detected areas to obtain an image of the detected target object.
9. The method of claim 8, wherein determining the position of each point on the object in the plurality of detected regions based on the image data and the color and/or brightness information comprises:
determining the color and/or brightness data of the detected areas according to the color and/or brightness information of the detected areas;
and according to the color and/or brightness data of the detected areas, carrying out real-time calibration, image recognition and image feature classification on the image data of the target object in the detected areas, and determining the position of each point on the target object in the detected areas.
10. A visual inspection device comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that the processor when executing performs the visual inspection method of any one of claims 8-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810873358.3A CN110793564A (en) | 2018-08-02 | 2018-08-02 | Visual inspection apparatus and visual inspection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110793564A true CN110793564A (en) | 2020-02-14 |
Family
ID=69425334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810873358.3A Pending CN110793564A (en) | 2018-08-02 | 2018-08-02 | Visual inspection apparatus and visual inspection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110793564A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1361503A (en) * | 2000-12-29 | 2002-07-31 | 南开大学 | Color multi-objective fusion identifying technology and system based on neural net |
CN1527243A (en) * | 2003-03-05 | 2004-09-08 | — | Image reading device
KR20080060041A (en) * | 2006-12-26 | 2008-07-01 | 엘지디스플레이 주식회사 | Smear test method of liquid crystal display |
CN101701916A (en) * | 2009-12-01 | 2010-05-05 | 中国农业大学 | A method for rapid identification and identification of corn varieties |
CN101872421A (en) * | 2010-06-03 | 2010-10-27 | 李华 | Colorimetry color feature vector automatic extracting method based on machine vision |
CN102968761A (en) * | 2011-08-30 | 2013-03-13 | 佳能株式会社 | Image processing apparatus and method |
CN103163132A (en) * | 2011-12-08 | 2013-06-19 | 常州中港纺织智能科技有限公司 | System and method for real-time appearance digital analysis for yarn |
CN103268493A (en) * | 2013-05-29 | 2013-08-28 | 上海电机学院 | Vehicle license plate image location method in RGB format |
CN104012081A (en) * | 2011-12-19 | 2014-08-27 | 日产自动车株式会社 | Object detection device |
CN107024476A (en) * | 2016-03-10 | 2017-08-08 | 上海帆声图像科技有限公司 | Display panel detecting system and its detection means and detection method |
CN107037130A (en) * | 2017-06-09 | 2017-08-11 | 长春理工大学 | Monocular vision three-D ultrasonic nondestructive detection system and detection method |
CN107909575A (en) * | 2017-12-30 | 2018-04-13 | 煤炭科学研究总院唐山研究院 | For the binocular vision on-line measuring device and detection method of vibrating screen operating status |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101121034B1 (en) | System and method for obtaining camera parameters from multiple images and computer program products thereof | |
WO2021259151A1 (en) | Calibration method and apparatus for laser calibration system, and laser calibration system | |
CN105279372B (en) | A kind of method and apparatus of determining depth of building | |
CN104574350B (en) | three-dimensional data acquisition method and system thereof | |
CN102413354B (en) | Automatic optical detection method, device and system of mobile phone camera module | |
US20070126735A1 (en) | Method and apparatus for 3-D data input to a personal computer with a multimedia oriented operating system | |
CN106600648A (en) | Stereo coding target for calibrating internal parameter and distortion coefficient of camera and calibration method thereof | |
CN113077523B (en) | Calibration method, calibration device, computer equipment and storage medium | |
US9613465B1 (en) | Method for suturing 3D coordinate information and the device using the same | |
JP2003244521A (en) | Information processing method and apparatus, and recording medium | |
CN111539311B (en) | Living body judging method, device and system based on IR and RGB double shooting | |
CN110166648B (en) | Camera detection and locking method and device based on optical imaging | |
CN106524909B (en) | Three-dimensional image acquisition method and device | |
WO2022126871A1 (en) | Defect layer detection method and system based on light field camera and detection production line | |
JP2021135300A (en) | Substrate measuring system and substrate measuring method | |
CN109920003A (en) | Camera calibration detection method, device and equipment | |
CN107635135A (en) | Method and system for testing relative tilt angle before assembly of dual-camera module | |
CN114764779A (en) | Computing device and defect detection method for near-eye display device | |
CN101520309A (en) | Imaging device | |
JP2022013913A (en) | Wire measuring system for board and method for the same | |
CN109357637B (en) | Method for measuring curvature radius and thickness of plate rolling machine plate rolling based on depth camera | |
CN116416913A (en) | Method, device, equipment and medium for obtaining corresponding relationship | |
CN111536895B (en) | Appearance recognition device, appearance recognition system, and appearance recognition method | |
CN117078666B (en) | Two-dimensional and three-dimensional combined defect detection method, device, medium and equipment | |
TWI742391B (en) | Three-dimensional image surface defect detection system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20200214 |