CN118196356B - Reconstruction analysis method and system for irregular object image based on point cloud - Google Patents
- Publication number
- CN118196356B (granted publication of application CN202410605133.5A)
- Authority
- CN
- China
- Prior art keywords
- camera
- axis
- point
- image
- coordinate system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention relates to a point-cloud-based reconstruction and analysis method and system for images of irregular objects, and belongs to the technical field of graphics visualization. The method comprises the following steps: importing an image; performing point cloud computation on the image; obtaining the center point B of the object in the image in the world coordinate system; constructing an object coordinate system whose origin is the object center point B and which exists relative to the object; obtaining the maximum diameter of the object; calculating the camera coordinate point; calculating the camera rotation angle; and assigning the obtained camera coordinate point and camera rotation angle to a virtual three-dimensional camera, whose view is then output as the front view of the object. Through point cloud analysis of the two-dimensional picture and simulation of the spatial relationship between the camera and the object, the front image of the required object can be obtained quickly, while the information points of the image are better preserved and restored.
Description
Technical Field
The invention relates to a point-cloud-based reconstruction and analysis method and system for images of irregular objects, and belongs to the technical field of graphics visualization.
Background
In real life, because of viewing-angle limitations, a complete front image of an object in the physical world often cannot be obtained. In the prior art, under such viewing-angle limitations, the front view of an object can only be restored by relying on the captured image and then stretching its pixels, so problems such as pixel loss and angular deviation easily arise when parts of the object are not observed or the operation is not precise enough.
Conventional methods for restoring the front view of an image depend heavily, at the pixel level, on the sharpness and angle of the original photograph and on surrounding reference objects; their accuracy relies largely on manual adjustment and operator experience and is therefore constrained by many factors.
In view of the foregoing, a method and a system for reconstructing and analyzing images of irregular objects based on point clouds are needed.
Disclosure of Invention
To solve the problems of the prior art, the invention provides a point-cloud-based reconstruction and analysis method and system for images of irregular objects.
In a first aspect, the invention provides a point-cloud-based reconstruction and analysis method for images of irregular objects, comprising the following steps:
Step 1, importing an image:
The image is imported into software based on the PCL (Point Cloud Library) architecture.
Step 2, image point cloud computing processing:
Point cloud computation is performed on the image to obtain point cloud coordinate information of the object in the image in the world coordinate system.
Step 3, obtaining an object center point:
Based on the obtained point cloud coordinate information of the object, the center point B of the object in the image in the world coordinate system is calculated with an averaging formula.
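As a minimal sketch of this averaging step, assuming the object's point cloud has been exported as an N×3 NumPy array of world coordinates (the method itself runs inside PCL-architecture software, so the array form and the helper name are illustrative assumptions):

```python
import numpy as np

def object_center(points: np.ndarray) -> np.ndarray:
    """Center point B as the per-axis mean of the object's point cloud."""
    # points has shape (n, 3); each row is (xi, yi, zi) in the world coordinate system
    return points.mean(axis=0)   # (x0, y0, z0) = (sum xi / n, sum yi / n, sum zi / n)

# Hypothetical three-point cloud for illustration
cloud = np.array([[1.0, 2.0, 0.5],
                  [1.2, 2.1, 0.4],
                  [0.8, 1.9, 0.6]])
B = object_center(cloud)         # -> array([1. , 2. , 0.5])
```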
Step 4, constructing an object coordinate system:
A vertical line to the ground is drawn through the object center point B to obtain the object's relative Y axis, denoted the Yk axis; a straight line parallel to the ground and perpendicular to the Yk axis is drawn through the center point B and denoted the Xk axis; the straight line through the center point B perpendicular to the XkBYk plane is denoted the Zk axis. An object coordinate system with its origin at the object center point B, existing relative to the object, is thus obtained.
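A sketch of this axis construction follows. The text does not fix the direction of the Xk axis within the ground plane, so the example assumes, purely for illustration, that Xk follows the dominant horizontal direction of the point cloud; the ground is taken as the world XOZ plane (Y up), as in the embodiment below:

```python
import numpy as np

def object_frame(points: np.ndarray, B: np.ndarray):
    """Axes (Xk, Yk, Zk) of the object coordinate system with origin B."""
    yk = np.array([0.0, 1.0, 0.0])                 # vertical line through B (ground normal)
    flat = (points - B)[:, [0, 2]]                 # project the cloud onto the ground plane
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    xk = np.array([vt[0, 0], 0.0, vt[0, 1]])       # assumed in-plane Xk direction
    xk /= np.linalg.norm(xk)
    zk = np.cross(xk, yk)                          # perpendicular to the XkBYk plane
    return xk, yk, zk
```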
Step 5, obtaining the maximum diameter of the object:
A bounding box of the object is created and, through three-view projection in the object coordinate system, converted into rectangular two-dimensional projections on the XY, YZ and XZ planes; from these three projection rectangles, the maximum diameter of the object on each of the three planes is obtained, and the largest of these values is assigned to M, the maximum diameter of the object.
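A minimal sketch of this step, assuming the points are already expressed in the object coordinate system and that the per-plane maximum diameter is taken as the diagonal of the corresponding projection rectangle (one reasonable reading; the text does not spell this out):

```python
import numpy as np

def max_diameter(points_obj: np.ndarray) -> float:
    """M: largest of the three projection-rectangle diagonals of the bounding box."""
    mins, maxs = points_obj.min(axis=0), points_obj.max(axis=0)
    dx, dy, dz = maxs - mins                       # bounding-box edge lengths
    a = np.hypot(dx, dy)                           # diagonal of the XY-plane projection
    b = np.hypot(dy, dz)                           # diagonal of the YZ-plane projection
    c = np.hypot(dx, dz)                           # diagonal of the XZ-plane projection
    return float(max(a, b, c))
```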
Step 6, calculating camera coordinate points:
The absolute distance L between the camera and the object center point B is calculated with a trigonometric formula; a circle of radius L is drawn about the object center point B, yielding two intersection points with the Zk axis of the object coordinate system, and the intersection lying in the positive Zk direction is determined as the camera coordinate point cam (cam1, cam2, cam3).
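A sketch of this placement, using the distance formula L = [(M/2) / tan(63°/2)] × 1.5 given further below; the field-of-view angle and margin are exposed as parameters so other values can be substituted:

```python
import numpy as np

def camera_position(B: np.ndarray, zk: np.ndarray, M: float,
                    fov_deg: float = 63.0, margin: float = 1.5) -> np.ndarray:
    """Camera coordinate point on the positive Zk axis at distance L from B."""
    L = (M / 2.0) / np.tan(np.radians(fov_deg / 2.0)) * margin   # absolute distance L
    return B + L * (zk / np.linalg.norm(zk))                     # cam = B + L * Zk_unit
```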
Step 7, calculating a camera rotation angle:
A straight line perpendicular to the XkBYk plane is drawn through the origin of the world coordinate system, and the included angle α between this line and the negative Z-axis direction of the world coordinate system is obtained.
The camera rotation angles are denoted R (R1, R2, R3), where R1, R2 and R3 are the counter-clockwise rotation angles of the camera about the X, Y and Z axes of the world coordinate system, respectively. Since the camera faces perpendicular to the XkBYk plane of the object coordinate system and parallel to the XOZ plane of the world coordinate system, a front view of the object is obtained, and the camera rotation angles are therefore R1 = 0°, R2 = α, R3 = 0°.
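A sketch of the angle computation. It assumes the un-rotated virtual camera looks along the world −Z axis, so α is measured between the camera's viewing direction (−Zk, toward the object) and −Z; the rotation sense ultimately follows the host software's convention:

```python
import numpy as np

def camera_rotation(zk: np.ndarray):
    """Camera rotation angles R = (R1, R2, R3) in degrees about the world X, Y, Z axes."""
    view_dir = -zk / np.linalg.norm(zk)            # camera on +Zk looks back toward B
    neg_z = np.array([0.0, 0.0, -1.0])             # assumed default viewing direction
    cos_a = float(np.dot(view_dir, neg_z))
    alpha = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))   # included angle alpha
    return (0.0, alpha, 0.0)                       # arccos gives only the magnitude of alpha
```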
Step 8, outputting a front view of the object:
The obtained camera coordinate point cam (cam1, cam2, cam3) and camera rotation angles R (R1, R2, R3) are assigned to a virtual three-dimensional camera; the camera's view is then output as the front view of the object.
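A short usage sketch tying steps 6 to 8 together, with hypothetical values and reusing the camera_position() and camera_rotation() sketches above; the actual virtual three-dimensional camera object depends on the host software:

```python
import numpy as np

B  = np.array([1.0, 2.0, 0.5])   # object center point from step 3 (hypothetical)
zk = np.array([0.0, 0.0, 1.0])   # positive Zk direction from step 4 (hypothetical)
M  = 2.0                         # maximum diameter from step 5 (hypothetical, metres)

cam = camera_position(B, zk, M)  # camera coordinate point cam (cam1, cam2, cam3)
R   = camera_rotation(zk)        # camera rotation angles R (R1, R2, R3)
print(cam, R)                    # cam ~ [1.0, 2.0, 2.95], R = (0, 0, 0) for these values
```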
Further, in step 1, the image is a single-point image or a multi-point image; a single-point image is obtained by shooting from a single angle, and a multi-point image consists of a plurality of images obtained by shooting from a plurality of angles.
Further, in step 6, the absolute distance L between the camera and the object center point B is calculated as follows:
L = [(M/2) / tan(63°/2)] × 1.5;
where M is the maximum diameter of the object, 63° is the camera field-of-view angle determined from the camera's conventional 35 mm focal length, and 1.5 is a fault-tolerance factor.
In another aspect, the invention also provides a point-cloud-based reconstruction and analysis system for images of irregular objects, comprising:
an import module, configured to import the image into software based on the PCL architecture;
a point cloud computing module, configured to perform point cloud computation on the image to obtain point cloud coordinate information of the object in the image in the world coordinate system;
an object center point acquisition module, configured to calculate, with an averaging formula, the center point B of the object in the image in the world coordinate system, based on the obtained point cloud coordinate information of the object;
an object coordinate system construction module, configured to draw a vertical line to the ground through the object center point B to obtain the object's relative Y axis, denoted the Yk axis; draw a straight line parallel to the ground and perpendicular to the Yk axis through the center point B, denoted the Xk axis; and denote the straight line through the center point B perpendicular to the XkBYk plane as the Zk axis, thereby obtaining an object coordinate system whose origin is the object center point B and which exists relative to the object;
a maximum diameter acquisition module, configured to create a bounding box of the object, convert it through three-view projection in the object coordinate system into rectangular two-dimensional projections on the XY, YZ and XZ planes, obtain the maximum diameter of the object on each of the three planes from these projection rectangles, and assign the largest value to M, the maximum diameter of the object;
a camera coordinate point acquisition module, configured to calculate the absolute distance L between the camera and the object center point B with a trigonometric formula, draw a circle of radius L about the object center point B, obtain the two intersection points of the circle with the Zk axis of the object coordinate system, and determine the intersection lying in the positive Zk direction as the camera coordinate point cam (cam1, cam2, cam3);
a camera rotation angle acquisition module, configured to draw a straight line perpendicular to the XkBYk plane through the origin of the world coordinate system and obtain the included angle α between this line and the negative Z-axis direction of the world coordinate system; the camera rotation angles are denoted R (R1, R2, R3), where R1, R2 and R3 are the counter-clockwise rotation angles of the camera about the X, Y and Z axes of the world coordinate system, respectively, and since the camera faces perpendicular to the XkBYk plane of the object coordinate system and parallel to the XOZ plane of the world coordinate system, giving a front view of the object, the camera rotation angles are R1 = 0°, R2 = α, R3 = 0°; and
a view output module, configured to assign the obtained camera coordinate point cam (cam1, cam2, cam3) and camera rotation angles R (R1, R2, R3) to a virtual three-dimensional camera and output the camera's view as the front view of the object.
Compared with the prior art, the invention has the beneficial effects that:
With the point-cloud-based reconstruction and analysis method and system for images of irregular objects, the relative coordinate information of the object is obtained by resolving the existing spatial information, the absolute coordinate information of the object is reconstructed from the related information, and finally the position of a camera facing the front of the object is derived from the absolute coordinates. The front view of the object is thus obtained quickly for downstream needs, visual errors caused by viewing-angle limitations and manual image analysis are reduced, and production efficiency is improved. At the same time, the method can compensate for the pixel limitations and losses caused by the original image resolution during manual processing, and a higher-definition image can be obtained from the point cloud information.
Drawings
Fig. 1 is a flowchart of a reconstruction and analysis method for an irregular object image based on point cloud according to an embodiment of the present invention;
Fig. 2 is a structural diagram of a reconstruction analysis system for an irregular object image based on point cloud according to a second embodiment of the present invention;
Fig. 3 is an interface view after importing an image in step 1 of the first embodiment;
Fig. 4 is a schematic diagram of the object point cloud coordinate information obtained in step 2 of the first embodiment;
FIG. 5 is a schematic view of the object coordinate system obtained in step 4 of the first embodiment;
FIG. 6 is a diagram showing the relationship between the focal length and the line of sight of the camera in step 6 according to the first embodiment;
Fig. 7 is a schematic diagram of a relationship between a camera and an object after assigning coordinate points and rotation angles to the camera in step 8 of the first embodiment;
Fig. 8 is a front view of the output object of step 8 of the first embodiment.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention but are not intended to limit the scope of the invention.
Example 1
The embodiment provides a reconstruction analysis method for an irregular object image based on point cloud, and the flow is shown in fig. 1, and the method comprises the following steps:
Step 1, importing an image:
The available single-point image or multi-point image is imported into RealityCapture software; the imported image is shown in fig. 3. A single-point image is obtained by shooting from a single angle, and a multi-point image consists of a plurality of images obtained by shooting from a plurality of angles.
Step 2, image point cloud computing processing:
Point cloud computation is performed on the single-point image or multi-point image using standard industry methods, obtaining point cloud coordinate information of the object in the image in the world coordinate system, as shown in fig. 4.
Step 3, obtaining an object center point:
The point cloud information within the required object region is selected as needed. Based on the point cloud coordinate information of the object acquired in step 2, (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), ..., (xn, yn, zn), and according to the averaging formula x0 = (x1 + x2 + x3 + ... + xn)/n, y0 = (y1 + y2 + y3 + ... + yn)/n, z0 = (z1 + z2 + z3 + ... + zn)/n, the center point B (x0, y0, z0) of the object in the world coordinate system is found.
Step 4, constructing an object coordinate system:
The origin of the world coordinate system is denoted O (0, 0, 0), and the XOZ plane is the ground in the world coordinate system; see fig. 4 for details.
Through the object center point B (x0, y0, z0), a vertical line to the ground, i.e. the XOZ plane, is drawn, yielding the object's relative Y axis, denoted the Yk axis; likewise, a straight line through the center point B (x0, y0, z0) parallel to the ground (the XOZ plane) and perpendicular to the Yk axis is drawn and denoted the Xk axis; the straight line through the center point B perpendicular to the XkBYk plane is defined as the Zk axis. An object coordinate system with origin B (x0, y0, z0), existing relative to the object, is thus obtained, as shown in fig. 5.
Step 5, obtaining the maximum diameter of the object:
A bounding box of the whole object is established and, through three-view projection in the object coordinate system, converted into two-dimensional projections on the XY (top view), YZ (left view) and XZ (front view) planes, all of which are rectangles. From the three projection rectangles, the maximum diameters of the object on the three planes are obtained and denoted a, b and c, respectively; the maximum of a, b and c is assigned to M, which is the maximum diameter of the object.
Step 6, calculating camera coordinate points:
The camera's field-of-view angle of 63° is determined from the camera's conventional focal length of 35 mm, as shown in fig. 6.
According to a trigonometric formula, the absolute distance between the camera cam and the object center point B is L = [(M/2) / tan(63°/2)] × 1.5, where M is the maximum diameter of the object and 1.5 is a fault-tolerance factor: to ensure that the object lies entirely within the camera's shooting range and to prevent pixel loss at the object's edges, the absolute distance between the camera cam and the object center is increased by a further 0.5 times over the baseline value.
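As a worked illustration with a hypothetical maximum diameter of M = 2 m (not a value taken from the embodiment):
L = [(M/2) / tan(63°/2)] × 1.5 = [1 m / tan 31.5°] × 1.5 ≈ (1 m / 0.613) × 1.5 ≈ 2.45 m,
so the camera would be placed about 2.45 m from the object center along the positive Zk axis.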
A circle of radius L is drawn about the object center point B, yielding two intersection points of the circle with the Zk axis of the object coordinate system; the intersection lying in the positive Zk direction is determined as the camera coordinate point and denoted cam (cam1, cam2, cam3).
Step 7, calculating a camera rotation angle:
A straight line perpendicular to the XkBYk plane is drawn through the origin of the world coordinate system, and the included angle α between this line and the negative Z-axis direction of the world coordinate system is obtained.
The camera rotation angles are denoted R (R1, R2, R3), where R1, R2 and R3 are the counter-clockwise rotation angles of the camera about the world X, Y and Z axes, respectively. Since the camera faces perpendicular to the XkBYk plane of the object coordinate system and parallel to the world XOZ plane, a front view of the desired object can be obtained, so R1 = 0°, R2 = α, R3 = 0°.
Step 8, outputting a front view of the object:
The obtained camera coordinate point cam (cam1, cam2, cam3) and camera rotation angles R (R1, R2, R3) are assigned to a virtual three-dimensional camera; the relationship between the three-dimensional camera and the object is then as shown in fig. 7, and the front view of the object can be obtained from the camera's viewpoint, as shown in fig. 8.
Example two
This embodiment provides a point-cloud-based reconstruction and analysis system for images of irregular objects, shown in fig. 2, which mainly comprises the following functional modules:
an import module, configured to import the image into software based on the PCL architecture;
a point cloud computing module, configured to perform point cloud computation on the image to obtain point cloud coordinate information of the object in the image in the world coordinate system;
an object center point acquisition module, configured to calculate, with an averaging formula, the center point B of the object in the image in the world coordinate system, based on the obtained point cloud coordinate information of the object;
an object coordinate system construction module, configured to draw a vertical line to the ground through the object center point B to obtain the object's relative Y axis, denoted the Yk axis; draw a straight line parallel to the ground and perpendicular to the Yk axis through the center point B, denoted the Xk axis; and denote the straight line through the center point B perpendicular to the XkBYk plane as the Zk axis, thereby obtaining an object coordinate system whose origin is the object center point B and which exists relative to the object;
a maximum diameter acquisition module, configured to create a bounding box of the object, convert it through three-view projection in the object coordinate system into rectangular two-dimensional projections on the XY, YZ and XZ planes, obtain the maximum diameter of the object on each of the three planes from these projection rectangles, and assign the largest value to M, the maximum diameter of the object;
a camera coordinate point acquisition module, configured to calculate the absolute distance L between the camera and the object center point B with a trigonometric formula, draw a circle of radius L about the object center point B, obtain the two intersection points of the circle with the Zk axis of the object coordinate system, and determine the intersection lying in the positive Zk direction as the camera coordinate point cam (cam1, cam2, cam3);
a camera rotation angle acquisition module, configured to draw a straight line perpendicular to the XkBYk plane through the origin of the world coordinate system and obtain the included angle α between this line and the negative Z-axis direction of the world coordinate system; the camera rotation angles are denoted R (R1, R2, R3), where R1, R2 and R3 are the counter-clockwise rotation angles of the camera about the X, Y and Z axes of the world coordinate system, respectively, and since the camera faces perpendicular to the XkBYk plane of the object coordinate system and parallel to the XOZ plane of the world coordinate system, giving a front view of the object, the camera rotation angles are R1 = 0°, R2 = α, R3 = 0°; and
a view output module, configured to assign the obtained camera coordinate point cam (cam1, cam2, cam3) and camera rotation angles R (R1, R2, R3) to a virtual three-dimensional camera and output the camera's view as the front view of the object.
It should be noted that: the above embodiments are only for illustrating the technical aspects of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the invention without departing from the spirit and scope of the invention, and any modifications and equivalents are intended to be included within the scope of the invention.
Claims (3)
1. A point-cloud-based reconstruction and analysis method for images of irregular objects, characterized by comprising the following steps:
Step 1: importing the image into software based on the PCL architecture;
Step 2: performing point cloud computation on the image to obtain point cloud coordinate information of the object in the image in the world coordinate system;
Step 3: based on the obtained point cloud coordinate information of the object, calculating the center point B of the object in the image in the world coordinate system with an averaging formula;
Step 4: drawing a vertical line to the ground through the object center point B to obtain the object's relative Y axis, denoted the Yk axis; drawing a straight line parallel to the ground and perpendicular to the Yk axis through the center point B, denoted the Xk axis; denoting the straight line through the center point B perpendicular to the XkBYk plane as the Zk axis; thereby obtaining an object coordinate system whose origin is the object center point B and which exists relative to the object;
Step 5: creating a bounding box of the object and converting it, through three-view projection in the object coordinate system, into rectangular two-dimensional projections on the XY, YZ and XZ planes; obtaining the maximum diameter of the object on each of the three planes from the three projection rectangles, and assigning the largest value to M, the maximum diameter of the object;
Step 6: calculating the absolute distance L between the camera and the object center point B with a trigonometric formula; drawing a circle of radius L about the object center point B, obtaining the two intersection points of the circle with the Zk axis of the object coordinate system, and determining the intersection lying in the positive Zk direction as the camera coordinate point cam (cam1, cam2, cam3);
the absolute distance L between the camera and the object center point B being calculated as L = [(M/2) / tan(63°/2)] × 1.5, where M is the maximum diameter of the object, 63° is the camera field-of-view angle determined from the camera's conventional 35 mm focal length, and 1.5 is a fault-tolerance factor;
Step 7: drawing a straight line perpendicular to the XkBYk plane through the origin of the world coordinate system and obtaining the included angle α between this line and the negative Z-axis direction of the world coordinate system;
the camera rotation angles being denoted R (R1, R2, R3), where R1, R2 and R3 are the counter-clockwise rotation angles of the camera about the X, Y and Z axes of the world coordinate system, respectively; since the camera faces perpendicular to the XkBYk plane of the object coordinate system and parallel to the XOZ plane of the world coordinate system, i.e. gives a front view of the object, the camera rotation angles are R1 = 0°, R2 = α, R3 = 0°;
Step 8: outputting a front view of the object:
assigning the obtained camera coordinate point cam (cam1, cam2, cam3) and camera rotation angles R (R1, R2, R3) to a virtual three-dimensional camera, the camera's view then being output as the front view of the object.
2. The point-cloud-based reconstruction and analysis method for images of irregular objects according to claim 1, wherein the image in step 1 is a single-point image or a multi-point image; a single-point image is obtained by shooting from a single angle, and a multi-point image consists of a plurality of images obtained by shooting from a plurality of angles.
3. A point-cloud-based reconstruction and analysis system for images of irregular objects, for implementing the method of any one of claims 1-2, comprising:
an import module, configured to import the image into software based on the PCL architecture;
a point cloud computing module, configured to perform point cloud computation on the image to obtain point cloud coordinate information of the object in the image in the world coordinate system;
an object center point acquisition module, configured to calculate, with an averaging formula, the center point B of the object in the image in the world coordinate system, based on the obtained point cloud coordinate information of the object;
an object coordinate system construction module, configured to draw a vertical line to the ground through the object center point B to obtain the object's relative Y axis, denoted the Yk axis; draw a straight line parallel to the ground and perpendicular to the Yk axis through the center point B, denoted the Xk axis; and denote the straight line through the center point B perpendicular to the XkBYk plane as the Zk axis, thereby obtaining an object coordinate system whose origin is the object center point B and which exists relative to the object;
a maximum diameter acquisition module, configured to create a bounding box of the object, convert it through three-view projection in the object coordinate system into rectangular two-dimensional projections on the XY, YZ and XZ planes, obtain the maximum diameter of the object on each of the three planes from these projection rectangles, and assign the largest value to M, the maximum diameter of the object;
a camera coordinate point acquisition module, configured to calculate the absolute distance L between the camera and the object center point B with a trigonometric formula, draw a circle of radius L about the object center point B, obtain the two intersection points of the circle with the Zk axis of the object coordinate system, and determine the intersection lying in the positive Zk direction as the camera coordinate point cam (cam1, cam2, cam3);
a camera rotation angle acquisition module, configured to draw a straight line perpendicular to the XkBYk plane through the origin of the world coordinate system and obtain the included angle α between this line and the negative Z-axis direction of the world coordinate system; the camera rotation angles are denoted R (R1, R2, R3), where R1, R2 and R3 are the counter-clockwise rotation angles of the camera about the X, Y and Z axes of the world coordinate system, respectively, and since the camera faces perpendicular to the XkBYk plane of the object coordinate system and parallel to the XOZ plane of the world coordinate system, i.e. gives a front view of the object, the camera rotation angles are R1 = 0°, R2 = α, R3 = 0°; and
a view output module, configured to assign the obtained camera coordinate point cam (cam1, cam2, cam3) and camera rotation angles R (R1, R2, R3) to a virtual three-dimensional camera and output the camera's view as the front view of the object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410605133.5A CN118196356B (en) | 2024-05-16 | 2024-05-16 | Reconstruction analysis method and system for irregular object image based on point cloud |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410605133.5A CN118196356B (en) | 2024-05-16 | 2024-05-16 | Reconstruction analysis method and system for irregular object image based on point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118196356A CN118196356A (en) | 2024-06-14 |
CN118196356B (en) | 2024-08-02
Family
ID=91406849
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410605133.5A Active CN118196356B (en) | 2024-05-16 | 2024-05-16 | Reconstruction analysis method and system for irregular object image based on point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118196356B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108876852A (en) * | 2017-05-09 | 2018-11-23 | 中国科学院沈阳自动化研究所 | A kind of online real-time object identification localization method based on 3D vision |
CN112967379A (en) * | 2021-03-03 | 2021-06-15 | 西北工业大学深圳研究院 | Three-dimensional medical image reconstruction method for generating confrontation network based on perception consistency |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3092748A1 (en) * | 2019-02-18 | 2020-08-21 | Sylorus Robotics | Image processing methods and systems |
TWI720447B (en) * | 2019-03-28 | 2021-03-01 | 財團法人工業技術研究院 | Image positioning method and system thereof |
CN116129059B (en) * | 2023-04-17 | 2023-07-07 | 深圳市资福医疗技术有限公司 | Three-dimensional point cloud set generation and reinforcement method, device, equipment and storage medium |
CN117115272A (en) * | 2023-09-08 | 2023-11-24 | 中国人民解放军国防科技大学 | Telecentric camera calibration and three-dimensional reconstruction method for precipitation particle multi-angle imaging |
2024-05-16: CN application CN202410605133.5A granted as CN118196356B (Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108876852A (en) * | 2017-05-09 | 2018-11-23 | 中国科学院沈阳自动化研究所 | A kind of online real-time object identification localization method based on 3D vision |
CN112967379A (en) * | 2021-03-03 | 2021-06-15 | 西北工业大学深圳研究院 | Three-dimensional medical image reconstruction method for generating confrontation network based on perception consistency |
Also Published As
Publication number | Publication date |
---|---|
CN118196356A (en) | 2024-06-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |