
CN114648589B - Method, device and storage medium for determining parameters of phase height conversion mapping model - Google Patents


Info

Publication number
CN114648589B
CN114648589B
Authority
CN
China
Prior art keywords
determining
area
areas
pixel
phase
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011511827.0A
Other languages
Chinese (zh)
Other versions
CN114648589A (en)
Inventor
李玉成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202011511827.0A priority Critical patent/CN114648589B/en
Publication of CN114648589A publication Critical patent/CN114648589A/en
Application granted granted Critical
Publication of CN114648589B publication Critical patent/CN114648589B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
      • G06 — COMPUTING OR CALCULATING; COUNTING
        • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00 — Image analysis
            • G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
              • G06T7/85 — Stereo camera calibration
          • G06T2207/00 — Indexing scheme for image analysis or image enhancement
            • G06T2207/10 — Image acquisition modality
              • G06T2207/10028 — Range image; Depth image; 3D point clouds
            • G06T2207/30 — Subject of image; Context of image processing
              • G06T2207/30244 — Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a method, a device and a storage medium for determining parameters of a phase-height conversion mapping model, and belongs to the technical field of image processing. The method comprises: receiving image information of a calibration plate sent by an image pickup device; determining a plurality of first areas in the image information, wherein the first areas are shadow areas corresponding to calibration blocks included in the calibration plate; determining second areas corresponding to the first areas, wherein the second areas are the areas where the calibration blocks are located, and the heights of the target areas corresponding to the pixels included in at least two second areas are different; and determining the parameters of the phase-height conversion mapping model according to the heights of the target areas corresponding to the pixels in the second areas, the pixel coordinates and the phase values corresponding to the pixels, wherein the phase-height conversion mapping model is used for determining the height of the target area corresponding to each pixel of an object to be detected. The application can not only improve the efficiency of determining the parameters of the phase-height conversion mapping model, but also save calculation resources.

Description

Method, device and storage medium for determining phase height conversion mapping model parameters
Technical Field
The present application relates to image processing technologies, and in particular, to a method, an apparatus, and a storage medium for determining parameters of a phase height conversion mapping model.
Background
At present, three-dimensional automatic optical inspection (3D Automatic Optical Inspection, 3D AOI) systems are generally adopted in the processing industry: three-dimensional point cloud data of a target object in a target area are acquired using the structured-light imaging principle, so that the target area can be inspected. Specifically, sinusoidal fringe phase values generated by a phase-shift method are first used as coding information to mark the object: a projector projects the phase values onto the object as light intensity, the modulated object is captured as gray values by the corresponding pixels of a camera, and the corresponding sinusoidal phase can be recovered after decoding. Once the sinusoidal phase is obtained, the height information of the corresponding position is generated through a phase-height conversion mapping model, and the three-dimensional data of the target object can be obtained by combining the pixel coordinates.
In the prior art, when determining parameters of a phase height conversion mapping model, standard calibration blocks with known heights are generally adopted for calibration, phase diagrams corresponding to different calibration blocks are acquired one by one during calibration, and then the parameters of the phase height conversion mapping model are obtained by utilizing a least square method according to the phase diagrams.
However, when the parameters of the phase-height conversion mapping model are determined in the above manner to complete the calibration of the model, the collected data volume is large, which not only occupies more calculation resources but also lowers the efficiency of parameter determination.
Disclosure of Invention
In order to solve the problems in the prior art, the application provides a method, a device and a storage medium for determining parameters of a phase-to-height conversion mapping model, which can reduce occupied computing resources.
In a first aspect, an embodiment of the present application provides a method for determining parameters of a phase-height conversion mapping model, including:
receiving image information of a calibration plate sent by camera equipment;
determining a plurality of first areas in the image information, wherein the first areas are shadow areas corresponding to calibration blocks included in the calibration plate;
determining a second area corresponding to the first area, wherein the second area is an area where the calibration block is located, and the heights of target areas corresponding to pixels included in at least two second areas are different;
and determining parameters of a phase-height conversion mapping model according to the height of the target area corresponding to the pixel in the second area, the pixel coordinates and the phase value corresponding to the pixel, wherein the phase-height conversion mapping model is used for determining the height of the target area corresponding to each pixel of the object to be detected.
In the embodiment of the application, the heights of the target areas corresponding to the pixels included in the at least two second areas are different, so that the parameters of the phase height conversion mapping model can be determined only by carrying out single scanning on the calibration plate, thereby simplifying the parameter determination process, improving the parameter determination efficiency, reducing the data volume and saving the calculation resources of the system. Furthermore, the area where the calibration block is located can be accurately positioned through the shadow area corresponding to the calibration block, so that the determination of the phase height conversion mapping model parameters is completed, and the accuracy and the robustness of the area determination can be improved.
In a second aspect, an embodiment of the present application provides a device for determining a parameter of a phase-height conversion mapping model, including:
The receiving module is used for receiving the image information of the calibration plate sent by the camera equipment;
The determining module is used for determining a plurality of first areas in the image information, wherein the first areas are shadow areas corresponding to calibration blocks included in the calibration plate;
The determining module is further configured to determine a second area corresponding to the first area, where the second area is an area where the calibration block is located, and the heights of target areas corresponding to pixels included in at least two second areas are different;
The determining module is further configured to determine parameters of a phase-height conversion mapping model according to the height of the target area corresponding to the pixel in the second area, the pixel coordinates and the phase value corresponding to the pixel, where the phase-height conversion mapping model is used to determine the height of the target area corresponding to each pixel in the object to be detected.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor;
a memory; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor, the computer program comprising instructions for performing the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program, the computer program causing a server to perform the method according to the first aspect.
According to the method, the device and the storage medium for determining the parameters of the phase-height conversion mapping model, image information of a calibration plate sent by an image pickup device is received; a plurality of first areas are determined in the image information, the first areas being shadow areas corresponding to calibration blocks included in the calibration plate; second areas corresponding to the first areas are determined, the second areas being the areas where the calibration blocks are located, with the heights of the target areas corresponding to the pixels included in at least two second areas being different; and the parameters of the phase-height conversion mapping model are then determined according to the heights of the target areas corresponding to the pixels in the second areas, the pixel coordinates and the phase values corresponding to the pixels, so that the phase-height conversion mapping model is calibrated. The phase-height conversion mapping model is used for determining the height of the target area corresponding to each pixel of an object to be detected. Because the heights of the target areas corresponding to the pixels included in at least two second areas are different, the parameters of the model can be determined with only a single scan of the calibration plate, which simplifies the parameter determination process, improves its efficiency, reduces the data volume and saves the calculation resources of the system. Furthermore, the area where each calibration block is located can be accurately positioned through its shadow area, so that the determination of the model parameters is completed, and the accuracy and robustness of area determination are improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1a is a front view of a calibration block of the prior art;
FIG. 1b is a top view of a calibration block of the prior art;
fig. 2 is a schematic diagram of a system architecture of a method for determining parameters of a phase-height conversion mapping model according to an embodiment of the present application;
Fig. 3 is a flowchart illustrating a method for determining parameters of a phase-height conversion mapping model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of processing image information;
FIG. 5 is a flowchart illustrating another method for determining parameters of a phase-height conversion mapping model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a device for determining parameters of a phase-height conversion mapping model according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an embodiment of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method for determining the phase height conversion mapping model parameters is applied to a detection scene of a target area or a target object in the processing industry, for example, can be applied to a scene of high-speed and high-precision quantitative defect detection on a printed circuit board (Printed Circuit Board, PCB) production line.
At present, a 3D AOI system generally uses sinusoidal fringe phase values generated by a phase-shift method as coding information to mark the object: a projection device projects a sinusoidal fringe pattern onto the calibration plate as light intensity, an imaging device acquires the fringe pattern and decodes it to recover the corresponding sinusoidal phase, the height information of the corresponding position is then generated through the phase-height conversion mapping model shown in the following formula (1), and the three-dimensional data of the target object are obtained by combining the pixel coordinates:
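Formula (1) can be written, for example, as a full cubic polynomial in u, v and the phase φ; this expanded form is an assumption, chosen because a full cubic polynomial in three variables has exactly 20 terms, matching the 20 parameters a_1 … a_20 described below:

```latex
Z(u,v) = a_1 + a_2 u + a_3 v + a_4 \varphi
       + a_5 u^2 + a_6 v^2 + a_7 \varphi^2 + a_8 u v + a_9 u \varphi + a_{10} v \varphi
       + a_{11} u^3 + a_{12} v^3 + a_{13} \varphi^3
       + a_{14} u^2 v + a_{15} u^2 \varphi + a_{16} u v^2 + a_{17} v^2 \varphi
       + a_{18} u \varphi^2 + a_{19} v \varphi^2 + a_{20} u v \varphi
       \qquad (1)
```

where φ is shorthand for the phase φ(u, v) at pixel (u, v).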
Wherein Z is the height of the target area corresponding to a target pixel in an image, captured by the image pickup device, that contains the target object; the target pixel can be any pixel in the image, and the target area corresponding to the target pixel can be understood as the area of the target object corresponding to that pixel, so the area is an area in the target object; a_1, a_2, …, a_20 are the parameters of the phase-height conversion mapping model; (u, v) are the coordinates of the target pixel; and φ(u, v) is the sinusoidal phase corresponding to the target pixel. As can be seen from formula (1), the height of the target area corresponding to each pixel has a one-to-one correspondence with the phase corresponding to that area, and the height satisfies the polynomial model shown in formula (1); this polynomial model is the phase-height conversion mapping model.
In the prior art, in order to obtain the parameters a_1, a_2, …, a_20 of the phase-height conversion mapping model, standard calibration blocks with known heights are generally used for calibration; a phase map corresponding to each calibration block must be acquired during calibration, and the parameters of the model are then solved by the least square method. Fig. 1a is a front view of calibration blocks in the prior art and fig. 1b is a top view. As shown in figs. 1a-1b, the standard calibration blocks with known heights include calibration block A, calibration block B, calibration block C and calibration block D; the image capturing device needs to photograph each of them separately so as to acquire the phase map corresponding to each calibration block, and the parameters of the phase-height conversion mapping model are solved by the least square method from the acquired phase maps.
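The least-squares solution for the parameters can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation: the 20-term cubic monomial basis is an assumption consistent with the parameter count a_1 … a_20, and the synthetic data merely demonstrate that the fit recovers known coefficients from noise-free (u, v, φ, Z) calibration samples.

```python
import numpy as np

def monomials(u, v, p):
    """All 20 monomials of a full cubic polynomial in (u, v, phi) --
    an assumed basis consistent with the model's 20 parameters."""
    return np.array([
        np.ones_like(u), u, v, p,
        u*u, v*v, p*p, u*v, u*p, v*p,
        u**3, v**3, p**3, u*u*v, u*u*p, u*v*v, v*v*p, u*p*p, v*p*p, u*v*p,
    ]).T

def fit_parameters(u, v, phi, z):
    """Least-squares fit of the 20 model parameters from calibration
    samples: pixel coordinates (u, v), phase phi, and known height z."""
    A = monomials(u, v, phi)
    params, *_ = np.linalg.lstsq(A, z, rcond=None)
    return params

# Synthetic calibration data generated from known parameters
rng = np.random.default_rng(0)
true = rng.normal(size=20)
u = rng.uniform(0, 1, 200)
v = rng.uniform(0, 1, 200)
phi = rng.uniform(0, 1, 200)
z = monomials(u, v, phi) @ true
est = fit_parameters(u, v, phi, z)
```

Because the system is noise-free and overdetermined (200 samples for 20 unknowns), the fitted coefficients match the generating coefficients to numerical precision.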
However, in the above-mentioned method for determining the parameters of the phase-height conversion mapping model, because four calibration blocks are used for calibration, four field-of-view alignment and image acquisition are required, so that the whole operation process is very complicated, and the parameter determination efficiency is low. In addition, because four image acquisitions are needed, more data are acquired and processed, so that the requirements on the memory and the video memory are higher, and more calculation resources are occupied.
In the embodiment of the application, in order to simplify the calibration process, a plurality of calibration blocks with different heights, spaced apart from one another, can be arranged on one calibration plate, all within the field of view of the image pickup device. The calibration blocks at multiple heights can then be acquired by scanning the calibration plate only once during calibration, instead of the four scans required in the prior art. In this embodiment, every calibration block of the calibration plate has a different height, and the calibration blocks of different heights are separated from one another by a certain distance, so that the shadow produced by projecting onto one calibration block is not affected by the adjacent blocks. With such a calibration plate, since one plate includes a plurality of calibration blocks with different heights spaced apart from each other, it becomes very important to effectively distinguish the areas of different heights on the plate. Those skilled in the art will appreciate that in a 3D AOI system the image capturing device usually captures the image from directly above the calibration plate, while the projection device projects from above and to the side of the plate, so calibration blocks of different heights generate corresponding shadow areas due to occlusion. After the different calibration blocks are distinguished, the parameters of the phase-height conversion mapping model can be determined based on formula (1) according to the height of the target area corresponding to the pixels in each calibration block, the coordinates of the pixels and the phase values corresponding to the pixels.
By using this method, only one set of image data needs to be acquired, which on the one hand simplifies the operation flow and improves the efficiency of parameter determination, and on the other hand greatly reduces the data volume and saves a large amount of calculation resources. In addition, since the calibration blocks are positioned via their shadow areas, the positioning accuracy of the calibration blocks is improved.
Before describing the scheme of the method for determining the parameters of the phase-height conversion mapping model of the present application, the system architecture to which the present application is applicable will be explained with reference to fig. 2.
Fig. 2 is a schematic diagram of a system architecture of a method for determining parameters of a phase-height conversion mapping model according to an embodiment of the present application, as shown in fig. 2, where the system includes a projection device 201, an image capturing device 202, and a translation stage 203 disposed horizontally, an optical axis of the image capturing device 202 is perpendicular to the translation stage 203, an included angle is formed between the optical axis of the projection device 201 and the translation stage 203, and the optical axis of the image capturing device 202 is controlled to be in the same plane with the optical axis of the projection device 201.
The projection device 201 may be a digital projector, such as a liquid crystal display (LCD) projector, a digital micromirror device (DMD) projector, or a liquid crystal on silicon (LCoS) projector; a gray-scale fringe pattern can be conveniently generated by a computer image processing system and written to the digital projection device.
The image capturing apparatus 202 may be a camera, which may be a charge-coupled device (CCD), a liquid crystal device, a spatial light modulation device, a complementary metal oxide semiconductor (CMOS) device, or a digital camera.
In addition, the system further includes an electronic device (not shown) with a processor, and the electronic device may communicate with the projection device 201 and the image capturing device 202 in a wired manner or a wireless manner, so as to control the projection device 201 to project the gray stripe pattern onto the calibration plate, or receive the image information of the calibration plate sent by the image capturing device 202, so as to process the image information to determine the parameters of the phase height conversion mapping model.
When the system is used for detecting three-dimensional information of an object, the calibration plate 204 is placed on the translation stage 203. The translation stage 203 is placed horizontally and can move up and down; it may be an electrically controlled translation stage whose stepping distance is controlled by a computer, or a manually controlled translation stage.
In addition, the calibration plate 204 includes a plurality of calibration blocks with different heights, and in the embodiment of the present application, the calibration blocks with 8 different heights are taken as an example for illustration. The calibration plate 204 is placed on the translation stage 203, and the gray-scale fringe pattern can be projected onto the calibration plate 204 by controlling the projection device 201. In addition, by controlling the translation table 203 to move, all the calibration blocks of the calibration plate 204 are in the field of view of the image pickup device 202, so that the calibration blocks with multiple heights can be acquired only by scanning the calibration plate 204 once through the image pickup device 202 during calibration, thereby simplifying the operation, reducing the data quantity and saving the calculation resources.
After understanding the system architecture of the present application, a scheme of the method for determining the phase height conversion mapping model parameter of the present application will be described in detail with reference to fig. 3.
Fig. 3 is a flowchart of a method for determining a phase-height conversion mapping model parameter according to an embodiment of the present application, where the method may be performed by any device that performs the method for determining a phase-height conversion mapping model parameter, and the device may be implemented by software and/or hardware. In this embodiment, the apparatus may be integrated in an electronic device. As shown in fig. 3, the method for determining the phase-height conversion mapping model parameter provided by the embodiment of the application includes the following steps:
Step 301, receiving image information of a calibration plate sent by an image pickup device.
In this step, as shown in fig. 2, the image pickup apparatus 202 may acquire image information of the calibration plate 204 placed on the translation stage 203, and may transmit the acquired image information to the electronic apparatus.
Step 302, determining a plurality of first areas in the image information, wherein the first areas are shadow areas corresponding to calibration blocks included in the calibration plate.
In this step, since the calibration plate includes a plurality of calibration blocks having different heights and spaced apart from each other, and the optical axis of the projection apparatus 201 forms an angle with the translation stage 203, the projection apparatus 201 inevitably generates a shadow area corresponding to each calibration block on the calibration plate 204 when projecting the gray stripe pattern to the calibration plate 204 disposed on the translation stage 203. After the electronic device obtains the image information of the calibration plate, the shadow area corresponding to each calibration block can be determined based on the image information.
In one possible implementation, when determining the plurality of first regions in the image information, the image information may be binarized to obtain a binary image, and a connected region labeling algorithm (also called connected component analysis/labeling) may then be used to determine the plurality of first regions in the binary image.
Specifically, fig. 4 is a schematic diagram of processing image information, and as shown in fig. 4, since the photographed scene and the background illumination condition are both fixed, the obtained image information may be subjected to binarization processing using a fixed threshold value, so that a corresponding binary image may be obtained.
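The fixed-threshold binarization described above can be sketched as follows. This is a minimal NumPy sketch; the threshold value 128 and the toy image are illustrative assumptions, not values from the application.

```python
import numpy as np

def binarize(gray_image, threshold=128):
    """Fixed-threshold binarization: pixels at or above the threshold
    become white (255), all others become black (0)."""
    return np.where(gray_image >= threshold, 255, 0).astype(np.uint8)

# Illustrative 3x3 "grayscale image"
gray = np.array([[10, 200, 50],
                 [130, 90, 255],
                 [0, 128, 60]], dtype=np.uint8)
binary = binarize(gray)
```

A fixed threshold is workable here precisely because, as noted above, the scene and background illumination are fixed during calibration.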
For example, in order to improve the positioning accuracy of the calibration blocks, the binary image may further be denoised to obtain a denoised binary image, and the plurality of first areas may then be determined in the denoised binary image. In the embodiment of the application, the binary image can be denoised by a morphological opening operation.
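A morphological opening (erosion followed by dilation) as the denoising step can be sketched as follows. This is a minimal NumPy sketch with a 3×3 square structuring element; the kernel size and the toy image are illustrative assumptions.

```python
import numpy as np

def erode(binary, k=3):
    """Binary erosion with a k x k square structuring element:
    a pixel stays white only if its whole neighborhood is white."""
    pad = k // 2
    padded = np.pad(binary, pad, constant_values=0)
    out = np.zeros_like(binary)
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def dilate(binary, k=3):
    """Binary dilation: a pixel becomes white if any neighbor is white."""
    pad = k // 2
    padded = np.pad(binary, pad, constant_values=0)
    out = np.zeros_like(binary)
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def opening(binary, k=3):
    """Morphological opening (erosion then dilation): removes white
    specks smaller than the structuring element, keeps large regions."""
    return dilate(erode(binary, k), k)

# A 5x5 white block (a shadow-like region) plus a single noise pixel
img = np.zeros((9, 9), dtype=np.uint8)
img[2:7, 2:7] = 255   # calibration-block shadow-like region
img[0, 8] = 255       # isolated noise pixel
clean = opening(img)
```

After opening, the isolated noise pixel is gone while the 5×5 region survives unchanged, which is why opening suits this denoising step.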
After the binary image is obtained, a connected region labeling algorithm is used to label the connected regions of the binary image. In addition, in order to avoid interference among the plurality of areas to be calibrated, a divide-then-detect strategy is adopted: after the connected regions of the binary image are labeled, the outline rectangular frames of the areas of the plurality of calibration blocks are separated.
In order to improve the accuracy of positioning the calibration block regions, each outline rectangular frame may be expanded outward by a preset offset value mask_offset, whose specific value may be set according to experience or the actual situation, for example 100 pixels. The expanded rectangular frame is then cropped from the binary image, so that a plurality of images each containing the shadow area of a calibration block, i.e. a plurality of images including a first area, are obtained.
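The divide-then-detect steps just described — connected region labeling, outline rectangle extraction, outward expansion by mask_offset, and cropping — can be sketched as follows. This is a simplified pure-Python/NumPy sketch: a 4-connected BFS stands in for the labeling algorithm, and mask_offset=2 is a toy value rather than the 100 pixels suggested above.

```python
import numpy as np
from collections import deque

def connected_regions(binary):
    """4-connected component labeling on a binary image; returns a
    list of bounding boxes (top, left, bottom, right), inclusive."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    boxes = []
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                top, left, bottom, right = sy, sx, sy, sx
                labels[sy, sx] = next_label
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes

def expand_and_crop(binary, box, mask_offset=2):
    """Expand a bounding box outward by mask_offset pixels (clipped to
    the image borders) and crop that region from the binary image."""
    h, w = binary.shape
    top, left, bottom, right = box
    top, left = max(0, top - mask_offset), max(0, left - mask_offset)
    bottom = min(h - 1, bottom + mask_offset)
    right = min(w - 1, right + mask_offset)
    return binary[top:bottom + 1, left:right + 1]

img = np.zeros((10, 12), dtype=np.uint8)
img[1:4, 1:4] = 255    # first shadow-like region
img[6:9, 7:11] = 255   # second shadow-like region
boxes = connected_regions(img)
crop = expand_and_crop(img, boxes[0], mask_offset=2)
```

Each crop then contains one first area (and the adjoining calibration block area), so the later per-region line detection cannot be disturbed by the other regions.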
In this embodiment, after the image information is binarized to obtain the binary image, a connected region marking algorithm is adopted to determine a plurality of first regions in the binary image, so that the determination of the shadow regions of the calibration block is simpler, and the determined shadow regions are more accurate.
Step 303, determining a second area corresponding to the first area, wherein the second area is an area where the calibration block is located, and the heights of target areas corresponding to pixels included in at least two second areas are different.
In this step, after determining the shadow area of each calibration block in the image information, the area where each calibration block is located, that is, the second area corresponding to the first area, may be determined according to the shadow area. The first areas and the second areas are in one-to-one correspondence.
In addition, the calibration plate at least comprises two calibration blocks with different heights, so that the heights of target areas corresponding to pixels included in the second areas where the two calibration blocks with different heights are located are different. It will be appreciated that the heights of the target areas corresponding to the pixels included in the same second area are the same. For example, in order to improve the accuracy of the parameters of the phase-height conversion mapping model, the heights of all the calibration blocks in the calibration plate may be set to be different, where the heights of the target areas corresponding to the pixels included in the second area where all the calibration blocks are located are different.
For example, as shown in fig. 4, the calibration plate includes 8 calibration blocks with different heights, so that the height of the target area corresponding to the pixel included in the second area where each calibration block is located is different from the height of the target area corresponding to the pixel included in the second area where the other calibration block is located.
It will be appreciated that since the calibration blocks have different heights, the area of the shadow area corresponding to each calibration block is also different.
In one possible implementation, when determining the second area corresponding to a first area, a connected region labeling algorithm may be used to determine a plurality of outline rectangular frames in the image information, where each outline rectangular frame includes a first area and the second area corresponding to it. Straight-line detection is then performed on the image in each outline rectangular frame to determine a plurality of effective straight lines in the frame, where the number of white pixel points on an effective straight line is greater than a preset threshold. Finally, the second area corresponding to the first area is determined in the outline rectangular frame according to the plurality of effective straight lines.
Specifically, as described in the foregoing embodiment, the electronic device may separate the outline rectangular frames of the plurality of calibration block areas, where each outline rectangular frame includes a first area and the second area corresponding to it, that is, the shadow area of a calibration block and the area where that calibration block is located. The electronic device performs edge detection and Hough line detection on the image in each outline rectangular frame to obtain a plurality of straight lines, and then determines the effective straight lines among them using prior direction information, where the number of white pixel points on an effective straight line is greater than a preset threshold. For example, straight lines a, b, c and d in fig. 4 are the determined effective straight lines. The preset threshold may be set according to the actual situation or experience, for example 200 pixels; its specific value is not limited here.
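The effective-line filtering step can be sketched as follows. Note the simplification: the patent uses edge detection plus Hough line detection, while this sketch only counts white pixels along axis-aligned candidate lines, which is enough here because the effective lines of interest are perpendicular to the coordinate axes; the threshold of 5 and the toy image are illustrative assumptions.

```python
import numpy as np

def effective_lines(binary, threshold):
    """Find 'effective' axis-aligned lines: columns (vertical lines)
    and rows (horizontal lines) whose white-pixel count exceeds the
    threshold.  Returns (column_indices, row_indices)."""
    col_counts = (binary == 255).sum(axis=0)
    row_counts = (binary == 255).sum(axis=1)
    cols = [x for x in range(binary.shape[1]) if col_counts[x] > threshold]
    rows = [y for y in range(binary.shape[0]) if row_counts[y] > threshold]
    return cols, rows

# Toy frame: strong white edges on column 6 and row 6, plus one
# stray white pixel that should not count as an effective line.
img = np.zeros((10, 10), dtype=np.uint8)
img[:, 6] = 255   # vertical edge
img[6, :] = 255   # horizontal edge
img[2, 2] = 255   # stray white pixel
cols, rows = effective_lines(img, threshold=5)

# First line: effective vertical line with the smallest abscissa;
# second line: effective horizontal line with the smallest ordinate.
intersection = (min(cols), min(rows))
```

The intersection of those two lines is the corner point used below to delimit the second area within the frame.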
In an exemplary embodiment of the present application, the first coordinate system is constructed with the upper left corner of the image information transmitted by the image capturing apparatus as the origin, the horizontal direction as the horizontal axis, and the vertical direction as the vertical axis. When the electronic equipment adopts the connected region marking algorithm to determine each outline rectangular frame, coordinate values of four vertexes of each outline rectangular frame can be determined. Based on this, the electronic device can extract the first coordinates of the upper left corner of each outline rectangular frame in the above-described first coordinate system.
In addition, for each outline rectangular frame, after determining a plurality of effective straight lines in the outline rectangular frame, the electronic device may determine a first straight line with the smallest abscissa among the effective straight lines perpendicular to the horizontal axis of the first coordinate system, and a second straight line with the smallest ordinate among the effective straight lines perpendicular to the vertical axis of the first coordinate system, and determine an intersection point of the first straight line and the second straight line.
It can be understood that the rectangular area formed by taking the connecting line between the upper left corner of the outline rectangular frame and the intersection point as a diagonal line is the second area in the outline rectangular frame.
For the second coordinates of the intersection point of the first straight line and the second straight line, it can be determined as follows:
For each outline rectangular frame, a second coordinate system may be constructed with the upper left corner of the outline rectangular frame as the origin, the horizontal direction as the horizontal axis, and the vertical direction as the vertical axis. While determining the plurality of effective straight lines in the outline rectangular frame, the electronic device can determine the length of each effective straight line through the Hough straight line detection algorithm. Based on the length of each effective straight line, the coordinates of the intersection point of the first straight line and the second straight line in the second coordinate system can be determined. Further, the coordinates in the second coordinate system need to be converted into coordinates in the first coordinate system; specifically, the coordinates of the intersection point of the first straight line and the second straight line in the second coordinate system may be added to the first coordinates of the upper left corner of the outline rectangular frame in the first coordinate system, so as to obtain the coordinates of the intersection point of the first straight line and the second straight line in the first coordinate system.
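The offset addition from the box-local (second) coordinate system to the image (first) coordinate system can be sketched in one line; the function name and tuple layout are assumptions for illustration only.

```python
def to_first_coords(local_pt, box_top_left):
    """Convert a point from a box-local coordinate system (origin at the
    outline rectangular frame's upper-left corner) to the image coordinate
    system (origin at the image's upper-left corner) by adding the offset
    of the frame's upper-left corner."""
    lx, ly = local_pt
    ox, oy = box_top_left
    return (lx + ox, ly + oy)
```

For example, an intersection at (40, 40) inside a frame whose upper-left corner sits at (10, 20) in the image maps to (50, 60) in the first coordinate system.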
It should be understood that the rectangular region constructed with the first coordinate and the second coordinate as the vertex coordinates of a diagonal is determined as the second region within the outline rectangular frame.
Specifically, as shown in fig. 4, a first coordinate system is constructed with the upper left corner of the original image as the origin, the horizontal direction as the horizontal axis, and the vertical direction as the vertical axis. When the electronic device determines each outline rectangular frame by adopting the connected region marking algorithm, the coordinate values of the four vertices of each outline rectangular frame can be determined, so that the first coordinates of the upper left corner of the outline rectangular frame under the first coordinate system, such as the coordinates of P left_top, can be extracted. In addition, the first line with the smallest abscissa among the effective lines perpendicular to the horizontal axis of the first coordinate system in the outline rectangular frame and the second line with the smallest ordinate among the effective lines perpendicular to the vertical axis can be determined, and the intersection point of the first line and the second line, such as P right_bottom, is determined.
Taking a certain outline rectangular frame segmented after connected region detection in the binary image as an example, a second coordinate system is constructed with the upper left corner of the outline rectangular frame as the origin, the horizontal direction as the horizontal axis, and the vertical direction as the vertical axis. The electronic device may determine the coordinates of P right_bottom in the second coordinate system based on the length of each effective line. Further, the coordinates of P right_bottom in the second coordinate system are added to the first coordinates of P left_top in the first coordinate system, so as to obtain the coordinates of P right_bottom in the first coordinate system.
And determining a rectangular area constructed by taking P left_top and P right_bottom as vertexes of diagonal lines as a second area in the outline rectangular frame, namely an area where the calibration block in the outline rectangular frame is located, and determining the position of the second area in the original image according to the coordinates of P left_top and P right_bottom, so that the positioning of the calibration block area can be completed.
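Deriving the second area's position from the two diagonal vertices can be sketched as below; `rect_from_diagonal` is a hypothetical helper returning (x, y, width, height) in image coordinates, which is not a name used in the source.

```python
def rect_from_diagonal(p_left_top, p_right_bottom):
    """Return (x, y, width, height) of the axis-aligned rectangle whose
    diagonal runs from p_left_top to p_right_bottom, both in image
    coordinates (the first coordinate system)."""
    x0, y0 = p_left_top
    x1, y1 = p_right_bottom
    return (x0, y0, x1 - x0, y1 - y0)
```

With P left_top at (10, 20) and P right_bottom at (50, 60), the located calibration block region is the 40×40 rectangle anchored at (10, 20).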
In this embodiment, after coarse positioning of the area is performed using the shadow area produced by the imaging characteristics of the system, precise positioning of the calibration block area is achieved by combining algorithms such as connected area marking and Hough straight line detection, so that automatic positioning of the calibration block area can be realized while robustness and accuracy are improved.
For example, considering that noise is easily generated at the edge of the determined calibration block region, the edge of the determined rectangular region may be indented by a preset number of pixels to update the second region.
Specifically, as shown in fig. 4, after determining the rectangular region, in order to eliminate noise generated at the edges of the region, the edges of the rectangular region determined by P left_top and P right_bottom may be indented by a preset number of pixels, thereby determining the indented rectangular region as the second region. The preset number may be set according to actual situations or experience, and may be set to 70 pixels, for example.
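The indentation step can be sketched as follows; the helper name `shrink_rect` and the guard against over-shrinking are illustrative assumptions, not from the source.

```python
def shrink_rect(p_left_top, p_right_bottom, margin):
    """Indent every edge of the rectangle by `margin` pixels to suppress
    edge noise; raise if the rectangle is too small to shrink that far."""
    x0, y0 = p_left_top
    x1, y1 = p_right_bottom
    if x1 - x0 <= 2 * margin or y1 - y0 <= 2 * margin:
        raise ValueError("rectangle too small for the requested margin")
    return (x0 + margin, y0 + margin), (x1 - margin, y1 - margin)
```

With the example margin of 70 pixels, a 200×200 region anchored at the origin shrinks to the 60×60 region from (70, 70) to (130, 130).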
And 304, determining parameters of a phase height conversion mapping model according to the height of the target area corresponding to the pixel in the second area, the pixel coordinates and the phase value corresponding to the pixel.
The phase height conversion mapping model is used for determining the height of a target area corresponding to each pixel in the object to be detected.
In this step, since the heights of the respective calibration blocks included in the calibration plate are preset, the electronic device may acquire, for the second area corresponding to each calibration block, the height of the target area corresponding to each pixel in the second area.
It should be understood that, after the electronic device acquires the image information, the image information is stored in the form of a pixel matrix, and when the pixel matrix is processed, the coordinates of each pixel point are usually determined under the aforementioned first coordinate system. Therefore, after determining the second area, the electronic device traverses all pixels in the second area under the first coordinate system, so that the coordinates of each pixel in the second area can be obtained. For example, if the coordinates of P left_top are (10, 20) and the coordinates of P right_bottom are (50, 60), the coordinates of the other pixels can be obtained by traversing all the pixels in the second region: the coordinates of the next pixel in the horizontal direction of P left_top are (11, 20), the coordinates of the next pixel in the vertical direction are (10, 21), and so on.
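The traversal described above can be sketched as a row-by-row generator; the function name is a hypothetical helper introduced here for illustration.

```python
def region_pixels(p_left_top, p_right_bottom):
    """Yield the image-coordinate (x, y) of every pixel inside the second
    region, traversed row by row with both corner pixels inclusive."""
    x0, y0 = p_left_top
    x1, y1 = p_right_bottom
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            yield (x, y)
```

Starting from (10, 20), the traversal visits (11, 20) next in the horizontal direction, then wraps to (10, 21) at the start of the following row, matching the example in the text.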
Furthermore, the electronic device may further determine a phase value corresponding to each pixel in the second area according to a phase shift method. After the heights of the target areas corresponding to the pixels in each second area, the pixel coordinates and the phase values corresponding to the pixels are obtained, parameters of the phase height conversion mapping model can be determined according to the formula (1), and therefore calibration of the phase height conversion mapping model is completed.
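Formula (1) itself is not reproduced in this excerpt, so the fitting step below assumes one commonly used per-pixel phase-height mapping, 1/h = a + b/φ, and recovers (a, b) by least squares from the (phase, height) samples that the calibration blocks of different heights provide at a given pixel. Both the model form and the function name are stand-in assumptions, not the patent's actual formula (1).

```python
def fit_phase_height(samples):
    """Least-squares fit of (a, b) in the assumed per-pixel mapping
    1/h = a + b/phi, given samples of (phi, h) collected at one pixel
    across calibration blocks of different known heights."""
    # Rewrite as the line y = a + b*x with x = 1/phi, y = 1/h and solve
    # the 2x2 normal equations.
    xs = [1.0 / phi for phi, h in samples]
    ys = [1.0 / h for phi, h in samples]
    n = len(samples)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    a = (sy * sxx - sx * sxy) / det
    b = (n * sxy - sx * sy) / det
    return a, b
```

Because each calibration block contributes one (phase, height) sample per pixel, a single scan of the multi-height calibration plate already supplies enough equations to solve for the two unknowns at every pixel.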
It should be noted that, since the calibration board includes a plurality of calibration blocks with different heights, the parameter of the phase height conversion mapping model is determined according to the height of the target area corresponding to the pixel in the second area where each calibration block is located, the pixel coordinates and the phase value corresponding to the pixel, so that the accuracy of the parameter can be improved.
According to the method for determining the parameters of the phase height conversion mapping model, the image information of the calibration plate sent by the image pickup device is received, a plurality of first areas are determined in the image information, the first areas are shadow areas corresponding to calibration blocks included in the calibration plate, the second areas corresponding to the first areas are determined, the second areas are areas where the calibration blocks are located, the heights of target areas corresponding to pixels included in at least two second areas are different, and then the parameters of the phase height conversion mapping model are determined according to the heights of the target areas corresponding to the pixels in the second areas, pixel coordinates and phase values corresponding to the pixels, so that calibration of the phase height conversion mapping model is completed, wherein the phase height conversion mapping model is used for determining the heights of the target areas corresponding to the pixels in an object to be detected. Because the heights of the target areas corresponding to the pixels included in the at least two second areas are different, the parameters of the phase height conversion mapping model can be determined only by carrying out single scanning on the calibration plate, so that the parameter determination process can be simplified, the parameter determination efficiency can be improved, the data volume can be reduced, and the calculation resources of the system can be saved. Furthermore, the area where the calibration block is located can be accurately positioned through the shadow area corresponding to the calibration block, so that the determination of the phase height conversion mapping model parameters is completed, and the accuracy and the robustness of the area determination can be improved.
Fig. 5 is a flowchart of another method for determining parameters of a phase-height conversion mapping model according to an embodiment of the present application, and the embodiment is based on the embodiment shown in fig. 3, and details of how to determine three-dimensional data of an object to be detected after determining parameters of the phase-height conversion mapping model. As shown in fig. 5, the method includes:
step 501 of receiving a target image containing an object to be detected sent by an image capturing apparatus.
In this step, as shown in fig. 1, an object to be detected may be placed on the translation stage 203, a target image of the object to be detected may be captured by the image capturing apparatus 202, and the image capturing apparatus 202 may transmit the target image to the electronic apparatus.
Step 502, determining the height of a target area corresponding to each pixel according to the coordinates of each pixel of the target image, the phase corresponding to each pixel and the parameters of the phase height conversion mapping model.
In this step, after receiving the target image, the electronic device stores the target image in the form of a pixel matrix, and when processing the pixel matrix, a coordinate system is generally constructed by using the upper left corner of the target image as the origin, the horizontal direction as the horizontal axis, and the vertical direction as the vertical axis, and the coordinates of each pixel point are determined under the coordinate system. Therefore, after the electronic device determines the target image and constructs the coordinate system in the above manner, the electronic device traverses all pixels in the target image, so that coordinates of each pixel in the target image can be obtained, and the phase corresponding to each pixel is determined by a phase shift method. The parameters of the phase-height conversion mapping model are determined according to the embodiment shown in fig. 3. After the electronic device obtains the parameters, the height of the target area corresponding to each pixel can be determined according to the formula (1).
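Applying the calibrated model to a measured phase map can be sketched as below, under the same assumed mapping 1/h = a + b/φ used in the calibration sketch (the real formula (1) is not reproduced in this excerpt); the dict-based data layout is likewise illustrative.

```python
def height_map(phases, params):
    """Apply the fitted per-pixel mapping to a phase map.

    phases: dict (x, y) -> phase value from the phase shift method;
    params: dict (x, y) -> (a, b) produced by the calibration step.
    Returns dict (x, y) -> height of the corresponding target area.
    """
    heights = {}
    for pt, phi in phases.items():
        a, b = params[pt]
        heights[pt] = 1.0 / (a + b / phi)
    return heights
```

Each pixel uses its own (a, b), which is why the calibration traverses every pixel of every second area rather than fitting a single global pair.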
Step 503, determining three-dimensional data of the object to be detected according to the height of the target area corresponding to each pixel.
In this step, after determining the height of the target area corresponding to each pixel, the electronic device may determine three-dimensional point cloud data of the object to be detected, thereby completing detection or test of the object to be detected.
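Assembling the point cloud from the per-pixel heights can be sketched as follows; the lateral pixel pitch (mm per pixel) is an assumed parameter introduced here, since the excerpt does not specify how pixel indices map to physical lateral positions.

```python
def to_point_cloud(heights, pixel_pitch=1.0):
    """Turn the per-pixel height map into sorted (x, y, z) triples,
    scaling pixel indices by an assumed lateral pixel pitch."""
    return [(x * pixel_pitch, y * pixel_pitch, z)
            for (x, y), z in sorted(heights.items())]
```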
According to the method for determining the parameters of the phase height conversion mapping model provided by the embodiment of the application, the target image containing the object to be detected sent by the image pickup device is received, the height of the target area corresponding to each pixel is determined according to the coordinates of each pixel of the target image, the phase corresponding to each pixel, and the parameters of the phase height conversion mapping model, and then the three-dimensional data of the object to be detected is determined according to the height of the target area corresponding to each pixel.
Fig. 6 is a schematic diagram of a device for determining a phase height conversion mapping model parameter according to an embodiment of the present application, as shown in fig. 6, the device 60 for determining a phase height conversion mapping model parameter includes:
A receiving module 601, configured to receive image information of a calibration board sent by an image capturing apparatus;
a determining module 602, configured to determine a plurality of first areas in the image information, where the first areas are shadow areas corresponding to calibration blocks included in the calibration board;
the determining module 602 is further configured to determine a second area corresponding to the first area, where the second area is an area where the calibration block is located, and the heights of the target areas corresponding to pixels included in at least two second areas are different;
the determining module 602 is further configured to determine parameters of a phase-height conversion mapping model according to the height of the target area corresponding to the pixel in the second area, the pixel coordinates, and the phase value corresponding to the pixel, where the phase-height conversion mapping model is used to determine the height of the target area corresponding to each pixel in the object to be detected.
Optionally, the determining module 602 is specifically configured to:
Performing binarization processing on the image information to obtain a binary image;
and determining the plurality of first areas in the binary image by adopting a connected area marking algorithm.
Optionally, the determining module 602 is specifically configured to:
determining a plurality of outline rectangular frames in the image information by adopting the connected region marking algorithm, wherein each outline rectangular frame comprises a first region and a second region corresponding to the first region;
performing straight line detection on the image in each outline rectangular frame, and determining a plurality of effective straight lines in the outline rectangular frame, wherein the number of white pixel points on the effective straight lines is larger than a preset threshold value;
and determining a second area corresponding to the first area in the outline rectangular frame according to the effective straight lines.
Optionally, the determining module 602 is specifically configured to:
Constructing a coordinate system by taking the upper left corner of the image information as an origin, taking the horizontal direction as a horizontal axis and taking the vertical direction as a vertical axis, and determining a first coordinate of the upper left corner of the outline rectangular frame in the coordinate system;
In the coordinate system, determining a second coordinate of an intersection point of a first straight line with the smallest abscissa among the effective straight lines perpendicular to the horizontal axis and a second straight line with the smallest ordinate among the effective straight lines perpendicular to the vertical axis;
And determining a rectangular region constructed by using the first coordinate and the second coordinate as vertex coordinates of diagonal lines as the second region in the outline rectangular frame.
Optionally, the device further comprises an updating module 603 configured to retract the edge of the rectangular area by a preset number of pixels to update the second area.
Optionally, the determining module 602 is specifically configured to:
denoising the binary image to obtain a denoised binary image;
and determining the plurality of first areas in the denoised binary image by adopting a connected area marking algorithm.
Optionally, the receiving module 601 is further configured to receive a target image including an object to be detected sent by the image capturing apparatus;
The determining module 602 is further configured to determine a height of a target area corresponding to each pixel according to coordinates of each pixel of the target image, a phase corresponding to each pixel, and parameters of the phase-height conversion mapping model;
The determining module 602 is further configured to determine three-dimensional data of the object to be detected according to the height of the target area corresponding to each pixel.
The determining device 60 for phase height conversion mapping model parameters provided by the embodiment of the present application may execute the technical scheme of the method for determining phase height conversion mapping model parameters shown in any of the embodiments described above; its implementation principle and beneficial effects are similar to those of the method and are not repeated herein.
It should be noted that the division of the modules of the above device is merely a division of logical functions; in actual implementation they may be fully or partially integrated into one physical entity or may be physically separated. Some of the modules may be implemented in the form of software invoked by a processing element, and some may be implemented in the form of hardware. For example, the determining module may be a separately arranged processing element, may be integrated in a chip of the above device, or may be stored in a memory of the above device in the form of program code and be invoked by a processing element of the above device to execute the functions of the above module. The implementation of the other modules is similar. In addition, all or part of the modules can be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, each step of the above method or each of the above modules may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field programmable gate arrays (FPGAs). For another example, when one of the above modules is implemented in the form of a processing element scheduling program code, the processing element may be a general purpose processor, such as a central processing unit (CPU) or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), etc.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic device may include a processor 701, a memory 702, a communication interface 703, and a system bus 704. The memory 702 and the communication interface 703 are connected to the processor 701 through the system bus 704 and communicate with each other; the memory 702 is used to store computer-executable instructions, the communication interface 703 is used to communicate with other devices, and the processor 701 implements the technical scheme of the method for determining phase height conversion mapping model parameters shown in the foregoing embodiments when executing the computer program.
In fig. 7, the processor 701 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The memory 702 may include random access memory (RAM), read-only memory (ROM), and non-volatile memory, such as at least one disk memory.
The communication interface 703 is used to enable communication between the database access apparatus and other devices (e.g., clients, read-write libraries, and read-only libraries).
The system bus 704 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus is represented by only one bold line in the figure, but this does not mean that there is only one bus or only one type of bus.
Optionally, an embodiment of the present application further provides a computer readable storage medium, where computer instructions are stored, where when the computer instructions run on a computer, the computer is caused to execute the technical scheme of the method for determining the phase height conversion mapping model parameter as shown in the foregoing embodiment.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (9)

1. A method for determining parameters of a phase height conversion mapping model, comprising:
receiving image information of a calibration plate sent by camera equipment;
determining a plurality of first areas in the image information, wherein the first areas are shadow areas corresponding to calibration blocks included in the calibration plate;
Determining a second area corresponding to the first area, wherein the second area is an area where the calibration block is located, and the heights of target areas corresponding to pixels included in at least two second areas are different;
Determining parameters of a phase height conversion mapping model according to the height of a target area corresponding to the pixel in the second area, the pixel coordinates and the phase value corresponding to the pixel, wherein the phase height conversion mapping model is used for determining the height of the target area corresponding to each pixel in the object to be detected;
The determining a second region corresponding to the first region includes:
determining a plurality of outline rectangular frames in the image information by adopting a connected region marking algorithm, wherein each outline rectangular frame comprises a first region and a second region corresponding to the first region;
performing straight line detection on the image in each outline rectangular frame, and determining a plurality of effective straight lines in the outline rectangular frame, wherein the number of white pixel points on the effective straight lines is larger than a preset threshold value;
and determining a second area corresponding to the first area in the outline rectangular frame according to the effective straight lines.
2. The method of claim 1, wherein the determining a plurality of first regions in the image information comprises:
Performing binarization processing on the image information to obtain a binary image;
and determining the plurality of first areas in the binary image by adopting a connected area marking algorithm.
3. The method of claim 2, wherein the determining a second region within the outline rectangular box corresponding to the first region from the plurality of effective straight lines comprises:
Constructing a coordinate system by taking the upper left corner of the image information as an origin, taking the horizontal direction as a horizontal axis and taking the vertical direction as a vertical axis, and determining a first coordinate of the upper left corner of the outline rectangular frame in the coordinate system;
In the coordinate system, determining a second coordinate of an intersection point of a first straight line with the smallest abscissa among the effective straight lines perpendicular to the horizontal axis and a second straight line with the smallest ordinate among the effective straight lines perpendicular to the vertical axis;
And determining a rectangular region constructed by using the first coordinate and the second coordinate as vertex coordinates of diagonal lines as the second region in the outline rectangular frame.
4. A method according to claim 3, characterized in that the method further comprises:
And retracting the edge of the rectangular area by a preset number of pixels so as to update the second area.
5. The method of claim 2, wherein determining the plurality of first regions in the binary image using a connected region labeling algorithm comprises:
denoising the binary image to obtain a denoised binary image;
and determining the plurality of first areas in the denoised binary image by adopting a connected area marking algorithm.
6. The method according to any one of claims 1-5, wherein after determining parameters of a phase-height conversion mapping model according to the height of the target area corresponding to the pixel in the second area, the pixel coordinates, and the phase value corresponding to the pixel, the method further comprises:
Receiving a target image which is sent by the camera equipment and contains an object to be detected;
determining the height of a target area corresponding to each pixel according to the coordinates of each pixel of the target image, the phase corresponding to each pixel and the parameters of the phase height conversion mapping model;
And determining the three-dimensional data of the object to be detected according to the height of the target area corresponding to each pixel.
7. A phase height conversion mapping model parameter determining apparatus, comprising:
The receiving module is used for receiving the image information of the calibration plate sent by the camera equipment;
the determining module is used for determining a plurality of first areas in the image information, wherein the first areas are shadow areas corresponding to calibration blocks included in the calibration plate;
The determining module is further configured to determine a second area corresponding to the first area, where the second area is an area where the calibration block is located, and heights of target areas corresponding to pixels included in at least two second areas are different;
The determining module is further configured to determine parameters of a phase-height conversion mapping model according to the height of the target area corresponding to the pixel in the second area, the pixel coordinates and the phase value corresponding to the pixel, where the phase-height conversion mapping model is used to determine the height of the target area corresponding to each pixel in the object to be detected;
the determining module is specifically configured to:
determining a plurality of outline rectangular frames in the image information by adopting a connected region marking algorithm, wherein each outline rectangular frame comprises a first region and a second region corresponding to the first region;
performing straight line detection on the image in each outline rectangular frame, and determining a plurality of effective straight lines in the outline rectangular frame, wherein the number of white pixel points on the effective straight lines is larger than a preset threshold value;
and determining a second area corresponding to the first area in the outline rectangular frame according to the effective straight lines.
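The connected-region labeling ("communication region marking") and valid-straight-line steps above can be sketched as follows. This is a minimal pure-Python illustration on a binary image; production code would typically use library routines such as OpenCV's `connectedComponentsWithStats` and `HoughLinesP`. Function names and the restriction to horizontal candidate lines are assumptions for the sketch:

```python
import numpy as np
from collections import deque

def bounding_boxes(binary):
    """4-connected component labeling; return one (x, y, w, h) box per white region."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    boxes, next_label = [], 1
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                # Flood-fill one component with BFS, tracking its extent.
                q = deque([(sy, sx)])
                labels[sy, sx] = next_label
                ys, xs = [sy], [sx]
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                            ys.append(ny)
                            xs.append(nx)
                boxes.append((min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
                next_label += 1
    return boxes

def valid_horizontal_lines(binary, box, threshold):
    """Rows inside the box whose white-pixel count exceeds the preset threshold."""
    x, y, w, h = box
    return [y + r for r in range(h) if binary[y + r, x:x + w].sum() > threshold]
```

Each bounding box plays the role of a contour rectangular frame, and the rows whose white-pixel count exceeds the threshold correspond to the claimed valid straight lines used to separate the calibration-block area from its shadow.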
8. An electronic device, comprising:
a processor;
a memory; and
a computer program,
wherein the computer program is stored in the memory and configured to be executed by the processor, the computer program comprising instructions for performing the method of any one of claims 1-6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program that causes a server to perform the method of any one of claims 1-6.
CN202011511827.0A 2020-12-18 2020-12-18 Method, device and storage medium for determining parameters of phase height conversion mapping model Active CN114648589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011511827.0A CN114648589B (en) 2020-12-18 2020-12-18 Method, device and storage medium for determining parameters of phase height conversion mapping model

Publications (2)

Publication Number Publication Date
CN114648589A CN114648589A (en) 2022-06-21
CN114648589B (en) 2024-12-03

Family

ID=81990835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011511827.0A Active CN114648589B (en) 2020-12-18 2020-12-18 Method, device and storage medium for determining parameters of phase height conversion mapping model

Country Status (1)

Country Link
CN (1) CN114648589B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101074869A (en) * 2007-04-27 2007-11-21 东南大学 Method for measuring three-dimensional contour based on phase method
CN103528543A (en) * 2013-11-05 2014-01-22 东南大学 System calibration method for grating projection three-dimensional measurement

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4002919B2 (en) * 2004-09-02 2007-11-07 技研トラステム株式会社 Moving body height discrimination device
CN107610077A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method and device, electronic device, and computer-readable storage medium
CN110246185B (en) * 2018-03-07 2023-10-27 阿里巴巴集团控股有限公司 Image processing method, device, system, storage medium and calibration system

Similar Documents

Publication Publication Date Title
CN108683907B (en) Optical module pixel defect detection method, device and equipment
CN111080662A (en) Lane line extraction method, device and computer equipment
WO2020010945A1 (en) Image processing method and apparatus, electronic device and computer-readable storage medium
CN108537834A (en) A kind of volume measuring method, system and depth camera based on depth image
CN109427046B (en) Distortion correction method, device and computer-readable storage medium for three-dimensional measurement
CN115861351A (en) Edge detection method, defect detection method and detection device
JP6115214B2 (en) Pattern processing apparatus, pattern processing method, and pattern processing program
CN112270719A (en) Camera calibration method, device and system
JP2015203652A (en) Information processing apparatus and information processing method
CN115797359B (en) Detection method, equipment and storage medium based on solder paste on circuit board
CN112767498A (en) Camera calibration method and device and electronic equipment
WO2019001164A1 (en) Optical filter concentricity measurement method and terminal device
CN111311671B (en) Workpiece measuring method and device, electronic equipment and storage medium
CN115511718A (en) PCB image correction method and device, terminal equipment and storage medium
CN116205993A (en) Double-telecentric lens high-precision calibration method for 3D AOI
CN115423783A (en) Quality inspection method and device for photovoltaic module frame and junction box glue filling
CN117115233B (en) Dimension measurement method and device based on machine vision and electronic equipment
CN117036364B (en) Image processing method and device, storage medium and computing equipment
CN114494316A (en) Corner marking method, parameter calibration method, medium, and electronic device
CN115082552B (en) Marking hole positioning method and device, assembly equipment and storage medium
CN114648589B (en) Method, device and storage medium for determining parameters of phase height conversion mapping model
CN107941241A (en) A kind of resolving power test target and its application method for aerophotogrammetry quality evaluation
CN114792343B (en) Calibration method for image acquisition equipment, method and device for acquiring image data
CN118037538A (en) Mapping method and device for three-dimensional space coordinate data to two-dimensional image
JP6906177B2 (en) Intersection detection device, camera calibration system, intersection detection method, camera calibration method, program and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant