CN114170314B - Intelligent 3D vision processing-based 3D glasses process track execution method - Google Patents
Intelligent 3D vision processing-based 3D glasses process track execution method
- Publication number: CN114170314B
- Application number: CN202111512132.9A
- Authority: CN (China)
- Prior art keywords: glasses; track; intelligent; standard model; execution
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/18—Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
- G05B19/19—Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by positioning or contouring control systems, e.g. to control position from one programmed point to another or to control movement along a programmed continuous path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The application provides a process track execution method for 3D glasses based on intelligent 3D vision, comprising the following steps: calculating the spatial position relation between the execution device and the camera; scanning the 3D glasses and generating a corresponding standard model; dividing the standard model into a plurality of regions, each spanning 1 mm to 5 mm, and grabbing a feature point in each region; connecting the feature points of the regions to form the process track of the 3D glasses; and controlling the execution device to run along the process track. Because the standard model is divided into regions small enough that the 3D glasses are approximately planar within each region, a taught path can be applied region by region, improving the product yield.
Description
[ field of technology ]
The application relates to the technical field of 3D vision, in particular to a 3D glasses process track execution method based on intelligent 3D vision processing.
[ background Art ]
3D glasses employ a "time division method", relying on signals that synchronize the glasses with the display. When the display outputs the left-eye image, the left lens is transparent and the right lens is opaque; when the display outputs the right-eye image, the right lens transmits light and the left lens is opaque, so that the two eyes see different pictures. In production, to obtain glasses that match the standard model without deviation, the existing approach is an overall deviation-correction scheme: a 3D camera with a large field of view captures one image of the product, this first image is taken as a template once the overall spatial position of the product is confirmed, and the process execution track is taught on that template. The product is then imaged a second time, the second image is compared with the first, the three-dimensional positional deviation between the two is calculated, and that deviation is applied to the taught template track to correct it. However, most 3D glasses are made of non-metallic materials such as plastics and deform, wholly or locally, after manufacturing. Because the taught path is rigid, it cannot be applied to products with many curved surfaces or local deformation, so the yield of products made by this process is low.
[ invention ]
In view of the foregoing, it is necessary to provide a process track execution method for 3D glasses based on intelligent 3D vision processing that ensures the product yield, so as to solve the above problems.
The embodiment of the application provides a process track execution method for processing 3D glasses based on intelligent 3D vision, which comprises the following steps:
calculating the spatial position relation between the execution equipment and the camera;
scanning the 3D glasses and generating a corresponding standard model;
dividing the standard model into a plurality of regions, each spanning 1 mm to 5 mm, and grabbing a feature point in each region; selecting features of the 3D glasses, each feature being a curved surface or a plane, and dividing the regions on the features;
connecting the feature points in each region to form a process treatment track of the 3D glasses;
and controlling the execution equipment to run according to the process treatment track.
In at least one embodiment of the present application, the step of "dividing the standard model into a plurality of regions and capturing feature points in each region, wherein each region ranges from 1mm to 5mm" includes the steps of:
generating a three-dimensional space measurement box in each divided region;
and calculating the corresponding feature points with the three-dimensional space measurement box according to the geometric features of the 3D glasses.
In at least one embodiment of the present application, the geometric features of the 3D glasses are a combination of one or more of triaxial features, normal vectors, state quantities, and speed quantities.
In at least one embodiment of the present application, the step of "connecting the feature points within each region to form the process trajectory of the 3D glasses" further includes the steps of:
and replacing the coordinate system of the process treatment track to the coordinate system of the execution equipment.
In at least one embodiment of the present application, the step of replacing the coordinate system of the process track with the coordinate system of the execution device further includes the following steps:
and replacing the process track to the coordinates in the execution equipment and uploading the process track to the execution equipment.
In at least one embodiment of the present application, the step of "calculating the spatial positional relationship of the execution device and the camera" includes the steps of:
mounting the calibration block of the execution device on the robot;
moving the robot to right below the camera to collect image information;
and calculating the spatial position relation between the camera and the execution equipment according to the image information.
In at least one embodiment of the present application, the step of "moving the robot to directly under the camera to collect image information" includes the steps of:
moving the robot to different positions under the 3D camera for multiple times;
and respectively acquiring image information of the robots at a plurality of different positions.
In at least one embodiment of the present application, the number of robot movements is 4 to 9.
In the above method, after the 3D glasses are scanned and the corresponding standard model is generated, the standard model is divided into a plurality of regions small enough that the 3D glasses are approximately planar within each region; a taught path can therefore be applied region by region, improving the product yield.
[ description of the drawings ]
Fig. 1 is a flow chart of a method for performing a process track based on intelligent 3D vision processing on 3D glasses in an embodiment of the present application.
[ detailed description ] of the invention
Embodiments of the present application will now be described with reference to the accompanying drawings, in which it is apparent that the embodiments described are merely some, but not all embodiments of the present application.
It is noted that when one component is considered to be "connected" to another component, it may be directly connected to the other component or intervening components may also be present. When an element is referred to as being "disposed" on another element, it can be directly on the other element or intervening elements may also be present. The terms "top," "bottom," "upper," "lower," "left," "right," "front," "rear," and the like are used herein for illustrative purposes only.
The embodiment of the application provides a process track execution method for processing 3D glasses based on intelligent 3D vision, which comprises the following steps:
calculating the spatial position relation between the execution equipment and the camera;
scanning the 3D glasses and generating a corresponding standard model;
dividing the standard model into a plurality of regions, each spanning 1 mm to 5 mm, and grabbing a feature point in each region;
connecting the feature points in each region to form a process treatment track of the 3D glasses;
and controlling the execution equipment to run according to the process treatment track.
In the above method, after the 3D glasses are scanned and the corresponding standard model is generated, the standard model is divided into a plurality of regions small enough that the 3D glasses are approximately planar within each region; a taught path can therefore be applied region by region, improving the product yield.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, a method for executing a process track of a 3D glasses based on intelligent 3D vision processing includes the following steps:
s10: the spatial positional relationship of the execution device and the camera is calculated.
S20: the 3D glasses are scanned and a corresponding standard model is generated.
S30: dividing the standard model into a plurality of areas, and grabbing characteristic points in each area, wherein each area ranges from 1mm to 5mm.
S40: the feature points within each region are connected to form a process trace of the 3D glasses.
S50: and controlling the execution equipment to run according to the process treatment track.
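The five steps above can be sketched end to end. The following is a minimal illustrative sketch in Python that assumes the standard model is available as a point cloud; the grid-based region keying, the use of a centroid as each region's feature point, and the toy data are all assumptions, not the patent's implementation.

```python
import numpy as np

def divide_into_regions(points, region_size=3.0):
    """Partition model points (N x 3, in mm) into cubic regions of the given
    edge length (the patent specifies 1 mm to 5 mm) by grid indexing."""
    keys = np.floor(points / region_size).astype(int)
    regions = {}
    for key, p in zip(map(tuple, keys), points):
        regions.setdefault(key, []).append(p)
    return {k: np.array(v) for k, v in regions.items()}

def grab_feature_point(region_points):
    """Use the centroid as the region's feature point (one simple choice)."""
    return region_points.mean(axis=0)

def build_trajectory(points, region_size=3.0):
    """Divide, grab one feature point per region, connect them into a path."""
    regions = divide_into_regions(points, region_size)
    # Sort region keys so the connected path is deterministic.
    feats = [grab_feature_point(regions[k]) for k in sorted(regions)]
    return np.array(feats)

model = np.random.default_rng(0).uniform(0.0, 30.0, size=(500, 3))  # toy scan, mm
traj = build_trajectory(model, region_size=5.0)
```

A real implementation would order the feature points along the workpiece geometry rather than by grid key, but the divide / grab / connect structure is the same.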
It should be noted that, in the production of 3D glasses, a process execution track must first be made, and the corresponding execution device is operated to process the product along that track. Existing practice completes this by teaching, and a taught path is rigid: when the product deforms locally, the path cannot follow the deformed curved surface, so curved products made by teaching exhibit shape defects and the reject rate rises. In this scheme, a plurality of regions are divided on the generated standard model and each region is kept within a certain size, so that within each region the product is close to planar; normal teaching can then be carried out within each region, reducing the product defects caused by a taught path that does not fit the product.
In one embodiment, the 3D glasses include, but are not limited to, AR and VR glasses; any glasses having a 3D viewing angle may be used.
In one embodiment, each region spans 3 mm, but this is obviously not limiting; in other embodiments a region may span 1 mm, 2 mm, 4 mm, 5 mm, and so on.
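Why the small region size matters can be checked numerically: on a curved surface, the deviation from a best-fit plane shrinks rapidly as the patch gets smaller. The sketch below uses an assumed spherical lens surface of 50 mm radius, not data from the application.

```python
import numpy as np

rng = np.random.default_rng(1)

def planarity_error(points):
    """RMS distance of points from their best-fit plane (smallest PCA axis)."""
    centered = points - points.mean(axis=0)
    # The right-singular vector with the smallest singular value is the normal.
    *_, vt = np.linalg.svd(centered, full_matrices=False)
    return float(np.sqrt(np.mean((centered @ vt[-1]) ** 2)))

def sphere_patch(extent_mm, radius=50.0, n=400):
    """Points on a sphere of the given radius over a square patch — an
    assumed stand-in for a curved lens surface."""
    xy = rng.uniform(-extent_mm / 2, extent_mm / 2, size=(n, 2))
    z = np.sqrt(radius**2 - (xy**2).sum(axis=1))
    return np.column_stack([xy, z])

small = planarity_error(sphere_patch(3.0))    # a 3 mm region: nearly flat
large = planarity_error(sphere_patch(30.0))   # a 30 mm region: clearly curved
```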
Step S10 includes the steps of:
s11: and installing the calibration block on the execution equipment on the robot.
S12: the robot is moved to just below the camera to collect image information.
S13: and calculating the spatial position relation between the camera and the execution equipment according to the image information.
It should be noted that the calibration block is provided on the execution device. To detect the spatial position relation between the execution device and the 3D camera, the calibration block is mounted on the robot, and the robot is moved to directly below the camera to collect image information. At the same time the execution device records a set of position information; combining it with the image information, the spatial position relation between the execution device and the camera can be calculated with a corresponding algorithm. The robot is then controlled to move in a specific direction according to this spatial relation to manufacture the 3D glasses.
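The "corresponding algorithm" that recovers the camera-to-device relation from matched calibration-block positions can be sketched as a least-squares rigid alignment (the Kabsch algorithm); the patent does not name its algorithm, so this choice, and the synthetic poses below, are assumptions.

```python
import numpy as np

def rigid_transform(cam_pts, robot_pts):
    """Least-squares rotation R and translation t with robot = R @ cam + t
    (Kabsch algorithm) from matched calibration-block positions."""
    ca, cb = cam_pts.mean(axis=0), robot_pts.mean(axis=0)
    H = (cam_pts - ca).T @ (robot_pts - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

# Synthetic check with 6 block poses (the patent moves the robot 4 to 9 times).
rng = np.random.default_rng(2)
cam = rng.uniform(-100.0, 100.0, size=(6, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -5.0, 250.0])
robot = cam @ R_true.T + t_true
R_est, t_est = rigid_transform(cam, robot)
```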
Further, step S12 includes the steps of:
s121: the robot is moved multiple times to different positions under the 3D camera.
S122: respectively acquiring image information of the robot at a plurality of different positions,
It should be noted that, to ensure accurate acquisition of the calibration block's image information, the robot is moved to different positions under the 3D camera several times, the position of the calibration block is acquired at each position, and these observations are combined to determine the specific position of the calibration block, yielding higher accuracy and reducing acquisition error.
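One simple way to combine the observations from several robot positions is averaging; the noise level and averaging scheme below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(3)
true_pos = np.array([12.0, 34.0, 56.0])   # calibration-block position (mm), assumed
sigma = 0.1                               # per-image measurement noise (mm), assumed

def estimate(n_views):
    """Average n noisy camera observations of the block position."""
    obs = true_pos + rng.normal(0.0, sigma, size=(n_views, 3))
    return obs.mean(axis=0)

err_single = np.linalg.norm(estimate(1) - true_pos)
err_nine = np.linalg.norm(estimate(9) - true_pos)   # patent: 4 to 9 moves
```

Averaging n views reduces the expected error by roughly a factor of sqrt(n), which is why several moves give a more reliable block position than one.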
Further, the number of movements of the robot is 4 to 9. In one embodiment, the number of robot movements is 4, but obviously not limited thereto, as in another embodiment, the number of robot movements is 9.
Preferably, in step S20, the 3D glasses are divided into regions, a high-precision camera collects high-precision images of the different regions, and the images are stitched, improving the precision and definition of the generated standard model, as described below:
in the 3D vision calibration process, a 3D camera is required to perform scanning photographing on a device to be calibrated. When the number of pictures is less than or equal to four, the conventional splicing process does not generate errors exceeding project requirements, so that the splicing of the images is normally completed. And 3D glasses in this application have many curved surfaces, when the quantity that the control scanning was shot is below four, can cause the unclear problem of image that the scanning was shot, and when the quantity that the scanning was shot is above four, adopt conventional concatenation method can produce the error that exceeds the project demand to influence the precision of 3D vision calibration. In this regard, the above scheme is to create a standard model and divide the standard model into different scanning areas, so as to perform high-precision scanning photographing on the different scanning areas through multiple independent scanning, and to realize the stitching of multiple images by using the characteristic points, thereby not only ensuring the precision of image acquisition, but also preventing the problem that the multiple images generate errors exceeding project requirements during stitching.
The standard model includes a base (not shown) and a plurality of positioning posts (not shown) secured to the base. The base is used for bearing a plurality of positioning columns, the positioning columns are fixed at corresponding positions on the base according to the specific shape of the 3D glasses, so that the positioning columns fixed on the base are surrounded to form a model consistent with the shape of the 3D glasses, and equipment executed on the 3D glasses is calibrated.
In an embodiment, the plurality of positioning posts are adhered to the base by glue, and wherein a portion of the positioning posts are perpendicular to the base and another portion of the positioning posts are disposed at an angle to the base. It is understood that the fixing manner of the positioning posts and the base is not limited thereto, for example, in another embodiment, a plurality of mounting holes are formed in the base, and the positioning posts are inserted and fixed in the plurality of mounting holes.
In one embodiment, the base is a substantially rectangular block structure, but obviously, the base is not limited thereto, and in another embodiment, the base may also be an ellipsoid structure, etc.
In order to obtain a model consistent with the shape of the 3D glasses, the external contour of each part of the 3D glasses and the height and size information of each part need to be selected, so that the end faces of the positioning posts far away from the base are commonly surrounded to form the model consistent with the shape of the 3D glasses by fixing the positioning posts with different lengths at different positions on the base.
To ensure the accuracy of the model formed by the plurality of positioning columns, the formed model must be inspected, preventing calibration errors of the execution device that would arise from model deviation. Preferably, the central hole of each positioning column is selected as a feature point, so hole sites are formed in the columns and the positions of the columns can be better detected through the central holes. Furthermore, in this scheme a central hole is formed in one part of the positioning columns and a cross mark is placed on the centerline of the other part, to locate and mark the columns. When checking model precision, the standard model is placed in a designated calibration instrument, and a computer algorithm grabs the central holes and cross marks to obtain the specific positions of the columns, so the precision of the standard model, and hence of the execution-device calibration, is better ensured.
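Grabbing a center hole with a computer algorithm can be as simple as taking the centroid of thresholded pixels; the synthetic image and plain-centroid detector below are a stand-in for whatever detector the calibration instrument actually uses.

```python
import numpy as np

def hole_center(img, threshold=128):
    """Centroid of dark pixels — a minimal stand-in for locating a
    positioning column's center hole in a thresholded grayscale image."""
    ys, xs = np.nonzero(img < threshold)
    return xs.mean(), ys.mean()

# Synthetic 100x100 image: white background, dark circular hole at (40, 60).
img = np.full((100, 100), 255, dtype=np.uint8)
yy, xx = np.mgrid[0:100, 0:100]
img[(xx - 40) ** 2 + (yy - 60) ** 2 <= 5 ** 2] = 0
cx, cy = hole_center(img)
```

A cross mark would need a different detector (e.g. line intersection), but the output in both cases is the same: one sub-pixel position per positioning column.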
It is to be understood that the arrangement of the feature points is not limited thereto, and in another embodiment, marks such as "-" or other shapes may be disposed on the plurality of positioning columns to obtain the positions of the plurality of positioning columns in cooperation with the corresponding algorithm.
It should be noted that, the area is divided through a plurality of positioning columns arranged on the base, so that the complicated operation of marking on the base can be reduced, and the area is conveniently divided. And through the mode of dividing the locating columns, after the areas are divided, obvious locating column distribution differences exist in each area, so that the area division is more obvious, and the problem of poor detection precision caused by the fact that the areas are not obvious is solved.
Further, by dividing the standard model into a plurality of different regions and scanning each of the different regions by the 3D camera to form a corresponding model, the display range of the formed model is relatively small, so that the definition of the model can be maximally increased in a state that the 3D camera has the same focal length.
Furthermore, the scanning area comprises at least three groups of positioning columns, so that more overlapped parts exist in different formed images, and the quality of each image splice can be ensured to the greatest extent when the overlapped parts with more overlapped features are spliced in the subsequent image splice process.
It should be noted that, in order to ensure that there are at least three positioning columns in the field of view to facilitate the stitching of two images, and reduce the number of times of scanning and photographing as much as possible, it is preferable that the diameter of the positioning columns be reduced so that the three positioning columns simultaneously fall into the scanning range of the 3D camera.
It should be noted that, during the 3D camera's scanning and photographing, the image obtained by the first shot is image A (not shown); in the second shot, a partial area of image A is used as an overlapping region and is scanned into the second image, so the second image shares a certain overlapping region with the first, facilitating subsequent stitching.
The two images are compared over the same overlapping part and merged at that part, so that the different parts of images scanned from different angles can be fully aligned; by combining multiple images in this way, the stitching of the calibration image is achieved, and the stitched calibration image is consistent with the 3D glasses in shape and size.
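Merging two scans at their shared overlap can be sketched as aligning the matched overlap points and concatenating the clouds; the translation-only alignment and toy coordinates below are simplifying assumptions (a full implementation would also solve for rotation, as in the calibration step).

```python
import numpy as np

def stitch(cloud_a, cloud_b, overlap_a, overlap_b):
    """Shift cloud_b so its overlap points coincide with the matching
    overlap points of cloud_a, then merge (translation-only alignment)."""
    offset = overlap_a.mean(axis=0) - overlap_b.mean(axis=0)
    return np.vstack([cloud_a, cloud_b + offset])

# Two scans of the same part in different scan-local frames; three
# positioning columns (the patent's minimum for stitching) are shared.
cols = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
scan_a = np.vstack([cols, [[5.0, 5.0, 1.0]]])
scan_b = np.vstack([cols + [100.0, 0.0, 0.0], [[107.0, 3.0, 2.0]]])  # shifted frame
merged = stitch(scan_a, scan_b, cols, cols + [100.0, 0.0, 0.0])
```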
To facilitate the construction of a more accurate process trajectory, step S30 includes the steps of:
s31: and selecting the characteristics of the 3D glasses, and dividing the areas on the characteristics.
It should be noted that the above feature is an outline of the 3D glasses, and in one embodiment, the above feature is a curved surface, but it is obviously not limited thereto, and in another embodiment, the above feature may be a plane.
Step S30 includes the steps of:
s32: a three-dimensional spatial measurement box is generated within each of the divided regions.
S33: and calculating the corresponding feature points with the three-dimensional space measurement box according to the geometric features of the 3D glasses.
It should be noted that a three-dimensional space measurement box is generated in each divided region to measure a specific point within that region; once each region yields a feature point, connecting all the feature points gives the process track of the product.
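A three-dimensional measurement box can be modeled as an axis-aligned crop followed by a centroid-and-normal estimate; the box bounds, the planar toy patch, and the use of SVD for the normal are illustrative assumptions.

```python
import numpy as np

def measurement_box_feature(points, box_min, box_max):
    """Keep the points inside an axis-aligned 3D measurement box and return
    their centroid plus a surface-normal estimate (smallest PCA axis)."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    selected = points[inside]
    centroid = selected.mean(axis=0)
    *_, vt = np.linalg.svd(selected - centroid, full_matrices=False)
    return centroid, vt[-1]            # feature point + unit normal

rng = np.random.default_rng(4)
# Toy planar patch (z = 0) inside the box, plus outliers far outside it.
patch = np.column_stack([rng.uniform(0.0, 4.0, (200, 2)), np.zeros(200)])
outliers = rng.uniform(10.0, 20.0, (50, 3))
feature, normal = measurement_box_feature(
    np.vstack([patch, outliers]),
    box_min=np.array([0.0, 0.0, -1.0]),
    box_max=np.array([4.0, 4.0, 1.0]),
)
```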
Preferably, the geometric features of the 3D glasses are one or a combination of three axial features, normal vectors, state quantities and speed quantities.
Further, step S40 is followed by the following steps:
s41: and replacing the coordinate system of the process treatment track to the coordinate system of the execution equipment.
It should be noted that the coordinate system of the process track is transformed into the coordinate system of the execution device, so that the device can accurately locate the specific process position, facilitating the processing of the product and increasing its processing precision.
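Re-expressing the track in the device's coordinate system is a rigid transform applied to every track point; the rotation and offset below are assumed calibration results, not values from the application.

```python
import numpy as np

def to_device_frame(track, R, t):
    """Re-express a process track (N x 3, camera frame) in the execution
    device's frame, given the calibrated rotation R and translation t."""
    return track @ R.T + t

# Assumed calibration result: a 90-degree yaw plus an offset.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([100.0, 0.0, 50.0])
track_cam = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
track_dev = to_device_frame(track_cam, R, t)
```

This is the same (R, t) pair produced by the calibration step, which is why the spatial-relation calculation must precede track execution.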
Still further, step S41 further includes the steps of:
s411: and replacing the process track to the coordinates in the execution equipment and uploading the process track to the execution equipment.
It should be noted that the process track is transformed into the execution device's coordinates and uploaded to the device so that the data are saved and not lost; in the next run, the device can process the product by reusing the previous data, avoiding the tedious operation of re-teaching, simplifying the process steps, and increasing processing efficiency.
Further, after the process track is uploaded to the execution device, the device runs according to the process track, so that it is controlled to produce the corresponding product along the track.
According to the intelligent 3D vision processing-based 3D glasses process track execution method, after the 3D glasses are scanned and the corresponding standard model is generated, the standard model is divided into a plurality of areas, so that the 3D glasses are planar in the areas, a teaching path can be applied to the areas, and the yield of products is improved.
The foregoing is merely exemplary of the present application and it should be noted herein that modifications may be made by those skilled in the art without departing from the inventive concept herein, which fall within the scope of the present application.
Claims (8)
1. The method for executing the process track of the 3D glasses based on intelligent 3D vision processing is characterized by comprising the following steps of:
calculating the spatial position relation between the execution equipment and the camera;
scanning the 3D glasses and generating a corresponding standard model;
dividing the standard model into a plurality of regions and grabbing feature points in each region, wherein each region spans 1 mm to 5 mm, so that the 3D glasses are planar within each region and the teaching path can be applied to each region; selecting features of the 3D glasses and dividing the regions on the features, wherein the features are curved surfaces or planes;
connecting the feature points in each region to form a process treatment track of the 3D glasses;
controlling the execution equipment to run according to the process treatment track;
the standard model comprises a base and a plurality of positioning columns fixed on the base, wherein the external contour of each part of the 3D glasses and the height and size information of each part are required to be selected, so that the end faces of the positioning columns far away from the base are jointly surrounded to form the standard model consistent with the shape of the 3D glasses by fixing the positioning columns with different lengths on different positions on the base, a central hole is formed in one part of the positioning columns, a cross mark is arranged on the central line position of the other part of the positioning columns, so that positioning marks are carried out on the positioning columns, and when model precision is detected, the standard model is placed in a specified calibration instrument, and the specific setting positions of the positioning columns are obtained by grabbing the central hole and the cross mark through a computer algorithm, so that the precision of the standard model is judged;
the step of dividing the standard model into a plurality of regions includes:
the area is divided through the plurality of positioning columns arranged on the base, so that after the area is divided, obvious positioning column distribution differences exist in each area, wherein the area comprises at least three groups of positioning columns, and different formed images have more overlapped parts.
2. The intelligent 3D vision processing-based 3D glasses process trajectory execution method according to claim 1, wherein the step of dividing the standard model into a plurality of regions and capturing feature points in each region, wherein each region ranges from 1mm to 5mm, comprises the steps of:
generating a three-dimensional space measurement box in each divided region;
and calculating the corresponding feature points with the three-dimensional space measurement box according to the geometric features of the 3D glasses.
3. The method for performing 3D glasses process track based on intelligent 3D vision processing according to claim 2, wherein the geometric features of the 3D glasses are one or a combination of three axial features, normal vectors, state quantities and speed quantities.
4. The intelligent 3D vision processing 3D glasses-based process trajectory execution method according to claim 2, wherein after the step of connecting the feature points in each region to form the process trajectory of the 3D glasses, further comprising the steps of:
and replacing the coordinate system of the process treatment track to the coordinate system of the execution equipment.
5. The intelligent 3D vision processing-based 3D glasses process track execution method according to claim 4, wherein the step of transforming the coordinate system of the process track into the coordinate system of the execution device further comprises the step of:
transforming the process track into coordinates of the execution device and uploading the process track to the execution device.
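Claims 4 and 5 amount to re-expressing the track points in the execution device's frame. A minimal sketch using a 4x4 homogeneous transform `T_dev_cam` (the patent does not specify the representation, so this matrix form and the names are assumptions):

```python
import numpy as np

def transform_trajectory(points_cam, T_dev_cam):
    """Re-express trajectory points given in the camera frame in the
    execution device's frame via a 4x4 homogeneous transform."""
    pts = np.asarray(points_cam, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # Nx4 homogeneous
    return (T_dev_cam @ homog.T).T[:, :3]

# Example: a device frame shifted +10 mm along x relative to the camera.
T = np.eye(4)
T[0, 3] = 10.0
track_cam = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
track_dev = transform_trajectory(track_cam, T)
```

The transformed `track_dev` is what claim 5 would upload to the execution device.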
6. The intelligent 3D vision processing-based 3D glasses process track execution method according to claim 1, wherein the step of calculating the spatial position relationship between the execution device and the camera comprises the steps of:
mounting a calibration block on the execution device of the robot;
moving the robot to directly below the camera to collect image information;
and calculating the spatial position relationship between the camera and the execution device from the image information.
7. The intelligent 3D vision processing-based 3D glasses process track execution method according to claim 6, wherein the step of moving the robot to directly below the camera to collect image information comprises the steps of:
moving the robot multiple times to different positions under the 3D camera;
and collecting image information of the robot at each of the different positions.
8. The intelligent 3D vision processing-based 3D glasses process track execution method according to claim 7, wherein the number of robot movements is 4 to 9.
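Claims 6–8 describe a hand-eye-style calibration from image information collected at 4–9 robot positions. One standard way to recover the camera-to-device spatial relation from corresponding 3D points is the Kabsch/SVD rigid-transform fit sketched below; the patent does not name this method, so treat it as an illustrative assumption:

```python
import numpy as np

def rigid_transform(P, Q):
    """Kabsch/SVD fit: find rotation R and translation t with Q = R @ P + t,
    from N corresponding 3D points (N >= 3; the claim suggests 4-9 poses)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate 90 degrees about z, shift, then recover exactly.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
Q = (Rz @ P.T).T + t_true
R_est, t_est = rigid_transform(P, Q)
```

With noisy real measurements, more poses (the claim's 4–9 movements) average out per-measurement error in the least-squares sense.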
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111512132.9A CN114170314B (en) | 2021-12-07 | 2021-12-07 | Intelligent 3D vision processing-based 3D glasses process track execution method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114170314A CN114170314A (en) | 2022-03-11 |
CN114170314B true CN114170314B (en) | 2023-05-26 |
Family
ID=80485665
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111512132.9A Active CN114170314B (en) | 2021-12-07 | 2021-12-07 | Intelligent 3D vision processing-based 3D glasses process track execution method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114170314B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104634276A (en) * | 2015-02-12 | 2015-05-20 | 北京唯创视界科技有限公司 | Three-dimensional measuring system, photographing device, photographing method, depth calculation method and depth calculation device |
CN105118088A (en) * | 2015-08-06 | 2015-12-02 | 曲阜裕隆生物科技有限公司 | 3D imaging and fusion method based on pathological slice scanning device |
CN107767414A (en) * | 2017-10-24 | 2018-03-06 | 林嘉恒 | The scan method and system of mixed-precision |
CN111462253A (en) * | 2020-04-23 | 2020-07-28 | 深圳群宾精密工业有限公司 | Three-dimensional calibration board, system and calibration method suitable for laser 3D vision |
CN111823734A (en) * | 2020-09-10 | 2020-10-27 | 季华实验室 | Positioning calibration component, device, printer and method for positioning and calibrating printing point coordinates |
CN112489195A (en) * | 2020-11-26 | 2021-03-12 | 新拓三维技术(深圳)有限公司 | Rapid machine adjusting method and system for pipe bender |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4338672A (en) * | 1978-04-20 | 1982-07-06 | Unimation, Inc. | Off-line teach assist apparatus and on-line control apparatus |
US20030035100A1 (en) * | 2001-08-02 | 2003-02-20 | Jerry Dimsdale | Automated lens calibration |
CA2819956C (en) * | 2013-07-02 | 2022-07-12 | Guy Martin | High accuracy camera modelling and calibration method |
CN104281098B (en) * | 2014-10-27 | 2017-02-15 | 南京航空航天大学 | Modeling method for dynamic machining features of complex curved surface |
CN104408408A (en) * | 2014-11-10 | 2015-03-11 | 杭州保迪自动化设备有限公司 | Extraction method and extraction device for robot spraying track based on curve three-dimensional reconstruction |
CN105354880B (en) * | 2015-10-15 | 2018-02-06 | 东南大学 | A kind of sand blasting machine people's automatic path generation method based on line laser structured light |
CN106354098B (en) * | 2016-11-04 | 2018-09-04 | 大连理工大学 | A Method for Generating Tool Machining Trajectories on NURBS Composite Surfaces |
CN108527319B (en) * | 2018-03-28 | 2024-02-13 | 广州瑞松北斗汽车装备有限公司 | Robot teaching method and system based on vision system |
CN109454642B (en) * | 2018-12-27 | 2021-08-17 | 南京埃克里得视觉技术有限公司 | Automatic production method of robot gluing trajectory based on 3D vision |
CN109822550B (en) * | 2019-02-21 | 2020-12-08 | 华中科技大学 | A high-efficiency and high-precision teaching method for complex curved robots |
CN210452170U (en) * | 2019-06-18 | 2020-05-05 | 蓝点触控(北京)科技有限公司 | Flexible intelligent polishing system of robot based on six-dimensional force sensor |
CN111496786A (en) * | 2020-04-15 | 2020-08-07 | 武汉海默机器人有限公司 | Point cloud model-based mechanical arm operation processing track planning method |
CN112435350B (en) * | 2020-11-19 | 2025-02-11 | 群滨智造科技(苏州)有限公司 | A machining trajectory deformation compensation method and system |
CN112486098A (en) * | 2020-11-20 | 2021-03-12 | 张均 | Computer-aided machining system and computer-aided machining method |
CN112497192A (en) * | 2020-11-25 | 2021-03-16 | 广州捷士电子科技有限公司 | Method for improving teaching programming precision by adopting automatic calibration mode |
CN113103226A (en) * | 2021-03-08 | 2021-07-13 | 同济大学 | Visual guide robot system for ceramic biscuit processing and manufacturing |
- 2021-12-07: CN CN202111512132.9A, patent CN114170314B (en), status Active
Non-Patent Citations (1)
Title |
---|
Zhao Wei. Research on 3D Reconstruction and Evaluation Technology of Aero-Engine Blades. China Master's Theses Full-text Database, Engineering Science and Technology II. 2017, C031-45. * |
Also Published As
Publication number | Publication date |
---|---|
CN114170314A (en) | 2022-03-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109859272B (en) | Automatic focusing binocular camera calibration method and device | |
US7479982B2 (en) | Device and method of measuring data for calibration, program for measuring data for calibration, program recording medium readable with computer, and image data processing device | |
US12358145B2 (en) | System and method for three-dimensional calibration of a vision system | |
CN105716582B (en) | Measurement method, device and the camera field of view angle measuring instrument at camera field of view angle | |
CN107121093A (en) | A kind of gear measurement device and measuring method based on active vision | |
CN110133853B (en) | Method for adjusting adjustable speckle pattern and projection method thereof | |
WO1995034996A1 (en) | Method and apparatus for transforming coordinate systems in an automated video monitor alignment system | |
CN107534715B (en) | Camera production method and advanced driving assistance system | |
JP2010528318A (en) | 3D assembly inspection with 2D images | |
JP2019052983A (en) | Calibration method and calibrator | |
CN110695982A (en) | Mechanical arm hand-eye calibration method and device based on three-dimensional vision | |
JP2005509877A (en) | Computer vision system calibration method and system | |
CN102538707B (en) | Three dimensional localization device and method for workpiece | |
CN116087922A (en) | Laser calibration method, device, system, equipment and medium | |
US20150286075A1 (en) | 3D Tracer | |
CN114170314B (en) | Intelligent 3D vision processing-based 3D glasses process track execution method | |
CN110519586A (en) | A kind of optical device calibration device and method | |
CN111145247B (en) | Position degree detection method based on vision, robot and computer storage medium | |
CN112468801B (en) | Optical center testing method of wide-angle camera module, testing system and testing target thereof | |
CN115866383B (en) | Side chip active alignment assembly method and device, electronic equipment and medium | |
CN112927305A (en) | Geometric dimension precision measurement method based on telecentricity compensation | |
CN118071840A (en) | Telecentric lens calibration method and 3D measurement system | |
CN115289997B (en) | Binocular camera three-dimensional contour scanner and application method thereof | |
CN114155229B (en) | Calibration method based on intelligent 3D vision in 3D glasses part production | |
CN110455180B (en) | Full-path precision calibration method and system for multi-degree-of-freedom two-dimensional adjusting mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | Effective date of registration: 20230509. Address after: Building A3, 3E Digital Smart Building, No. 526, Fangqiao Road, Caohu Street, Suzhou City, Jiangsu Province, 215000. Applicant after: Qunbin Intelligent Manufacturing Technology (Suzhou) Co.,Ltd. Address before: 518000, Room 314, 3/F, 39 Queshan New Village, Gaofeng Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province. Applicant before: SHENZHEN QB PRECISION INDUSTRIAL CO.,LTD. |
| GR01 | Patent grant | |