CN114782550B - Camera calibration method, device, electronic equipment and program product - Google Patents
Camera calibration method, device, electronic equipment and program product
- Publication number
- CN114782550B · Application CN202210443441.3A
- Authority
- CN
- China
- Prior art keywords
- camera
- image
- precision map
- points
- dimensional space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Abstract
The embodiment of the disclosure discloses a camera calibration method, a camera calibration device, electronic equipment and a program product. The method comprises the following steps: acquiring multi-frame images in a high-precision map coverage area through a camera; for each frame of image, acquiring a high-precision map image matched with the image; determining, according to the high-precision map image matched with the image, the three-dimensional space points corresponding to the feature points in the image; and determining the camera internal parameters of the camera according to the feature points in the multi-frame images, the three-dimensional space points corresponding to the feature points, the projection equation of the camera, and the initial internal parameter values of the camera.
Description
Technical Field
The disclosure relates to the technical field of camera calibration, in particular to a camera calibration method, a camera calibration device, electronic equipment and a program product.
Background
High-precision maps play a key role in fields such as autonomous driving and smart cities, and the main high-precision map acquisition and update pipelines at present still depend on laser acquisition vehicles. To address the problems of acquisition capacity and timeliness, low-cost vision-based updating has become an active research direction. At present, a number of companies are building high-precision maps based on monocular vision, and crowdsourced update services have emerged; building and updating high-precision maps from monocular vision extends the coverage of the high-precision map and shortens the update cycle of the map.
Vision-based reconstruction relies on the internal parameter calibration of the camera of the high-precision map acquisition device, and the accuracy of the internal parameter calibration directly affects the accuracy of the finally generated high-precision map. Moreover, owing to factors such as the vehicle windshield, manufacturing errors of the acquisition device, and the installation position, calibration of the camera cannot be completed at the factory. In crowdsourced acquisition scenarios the cameras are very widely distributed; a traditional off-line calibration scheme would require operators to carry calibration equipment (such as a checkerboard or a QR-code calibration board) to each camera for on-site calibration, which is costly, inefficient, and difficult to implement.
Disclosure of Invention
In order to solve the problems in the related art, embodiments of the present disclosure provide a camera calibration method, apparatus, electronic device, medium, and program product.
In a first aspect, an embodiment of the present disclosure provides a camera calibration method, including:
acquiring multi-frame images in a high-precision map coverage area through a camera;
For each frame of image, acquiring a high-precision map image matched with the image;
according to the high-precision map image matched with the image, determining three-dimensional space points corresponding to the feature points in the image;
and determining the camera internal parameters of the camera according to the characteristic points in the multi-frame images, the three-dimensional space points corresponding to the characteristic points, the projection equation of the camera and the internal parameter initial values of the camera.
According to an embodiment of the present disclosure, the acquiring a high-precision map image matched with the image includes:
acquiring a high-precision map image similar to the image, wherein the acquisition place of the high-precision map image is close to the acquisition place of the image; and/or
And acquiring a plurality of high-precision map images matched with the images.
According to an embodiment of the present disclosure, the acquiring a high-precision map image similar to the image, and the acquisition site of the high-precision map image is adjacent to the acquisition site of the image, includes: and acquiring a high-precision map image similar to the image according to the acquisition place of the image and/or through similar image retrieval, wherein the acquisition place of the high-precision map image is close to the acquisition place of the image.
According to an embodiment of the disclosure, the determining, according to a high-precision map image matched with the image, a three-dimensional space point corresponding to a feature point in the image includes:
obtaining homonymous feature points of feature points in the image in a high-precision map image matched with the image;
And determining the three-dimensional space points corresponding to the same-name feature points as three-dimensional space points corresponding to the feature points in the image.
According to an embodiment of the present disclosure, the acquiring, in a high-precision map image matched with the image, a homonymous feature point of a feature point in the image includes:
extracting a first characteristic point set in the image and a second characteristic point set in a high-precision map image matched with the image;
And performing feature matching on the first feature point set and the second feature point set to obtain homonymous feature points of the feature points in the image.
According to an embodiment of the disclosure, the determining the camera internal parameter of the camera according to the feature point in the multi-frame image, the three-dimensional space point corresponding to the feature point, the projection equation of the camera, and the internal parameter initial value of the camera includes:
according to the characteristic points in the multi-frame images, the three-dimensional space points corresponding to the characteristic points and the projection equation of the camera, an optimization equation of the camera internal parameters is constructed;
calculating the pose initial value of the camera when the multi-frame image is acquired according to the characteristic points in the multi-frame image, the three-dimensional space points corresponding to the characteristic points and the internal reference initial value of the camera;
and solving an optimization equation of the camera internal parameters based on the initial values of the camera internal parameters and the initial values of the pose of the camera when the multi-frame images are acquired, and determining the camera internal parameters of the camera.
According to an embodiment of the disclosure, the determining the camera internal parameter of the camera according to the feature point in the multi-frame image, the three-dimensional space point corresponding to the feature point, the projection equation of the camera, and the internal parameter initial value of the camera includes:
Constructing an optimization equation of a camera internal reference according to the characteristic points in the multi-frame image, the three-dimensional space points corresponding to the characteristic points, the projection equation of the camera, and the observed quantity and observation equation of a sensor fixedly connected with the camera;
calculating the pose initial value of the camera when the multi-frame image is acquired according to the characteristic points in the multi-frame image, the three-dimensional space points corresponding to the characteristic points and the internal reference initial value of the camera;
and solving an optimization equation of the camera internal parameters based on the initial values of the camera internal parameters and the initial values of the pose of the camera when the multi-frame images are acquired, and determining the camera internal parameters of the camera.
In a second aspect, in an embodiment of the present disclosure, a camera calibration apparatus is provided.
Specifically, the camera calibration device includes:
the acquisition module is configured to acquire multi-frame images in a high-precision map coverage area through a camera;
An acquisition module configured to acquire, for each frame of image, a high-precision map image that matches the image;
A first determining module configured to determine three-dimensional space points corresponding to feature points in the image according to a high-precision map image matched with the image;
and the second determining module is configured to determine the camera internal parameters of the camera according to the characteristic points in the multi-frame image, the three-dimensional space points corresponding to the characteristic points, the projection equation of the camera and the internal parameter initial values of the camera.
According to an embodiment of the present disclosure, the acquiring a high-precision map image matched with the image includes:
acquiring a high-precision map image similar to the image, wherein the acquisition place of the high-precision map image is close to the acquisition place of the image; and/or
And acquiring a plurality of high-precision map images matched with the images.
According to an embodiment of the present disclosure, the acquiring a high-precision map image similar to the image, and the acquisition site of the high-precision map image is adjacent to the acquisition site of the image, includes: acquiring a high-precision map image similar to the image according to the acquisition place of the image and/or through similar-image retrieval, wherein the acquisition place of the high-precision map image is close to the acquisition place of the image.

According to an embodiment of the disclosure, the determining, according to a high-precision map image matched with the image, a three-dimensional space point corresponding to a feature point in the image includes:
obtaining homonymous feature points of feature points in the image in a high-precision map image matched with the image;
And determining the three-dimensional space points corresponding to the same-name feature points as three-dimensional space points corresponding to the feature points in the image.
According to an embodiment of the present disclosure, the acquiring, in a high-precision map image matched with the image, a homonymous feature point of a feature point in the image includes:
extracting a first characteristic point set in the image and a second characteristic point set in a high-precision map image matched with the image;
And performing feature matching on the first feature point set and the second feature point set to obtain homonymous feature points of the feature points in the image.
According to an embodiment of the disclosure, the determining the camera internal parameter of the camera according to the feature point in the multi-frame image, the three-dimensional space point corresponding to the feature point, the projection equation of the camera, and the internal parameter initial value of the camera includes:
according to the characteristic points in the multi-frame images, the three-dimensional space points corresponding to the characteristic points and the projection equation of the camera, an optimization equation of the camera internal parameters is constructed;
calculating the pose initial value of the camera when the multi-frame image is acquired according to the characteristic points in the multi-frame image, the three-dimensional space points corresponding to the characteristic points and the internal reference initial value of the camera;
and solving an optimization equation of the camera internal parameters based on the initial values of the camera internal parameters and the initial values of the pose of the camera when the multi-frame images are acquired, and determining the camera internal parameters of the camera.
According to an embodiment of the disclosure, the determining the camera internal parameter of the camera according to the feature point in the multi-frame image, the three-dimensional space point corresponding to the feature point, the projection equation of the camera, and the internal parameter initial value of the camera includes:
Constructing an optimization equation of a camera internal reference according to the characteristic points in the multi-frame image, the three-dimensional space points corresponding to the characteristic points, the projection equation of the camera, and the observed quantity and observation equation of a sensor fixedly connected with the camera;
calculating the pose initial value of the camera when the multi-frame image is acquired according to the characteristic points in the multi-frame image, the three-dimensional space points corresponding to the characteristic points and the internal reference initial value of the camera;
and solving an optimization equation of the camera internal parameters based on the initial values of the camera internal parameters and the initial values of the pose of the camera when the multi-frame images are acquired, and determining the camera internal parameters of the camera.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor, wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method of any one of the first aspects.
In a fourth aspect, in an embodiment of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement a method according to any of the first aspects.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising computer instructions which, when executed by a processor, implement the method steps as described in any of the first aspects.
According to the technical scheme provided by the embodiment of the disclosure, the high-precision map simultaneously contains image information and accurate three-dimensional point cloud data, and effective support data can be provided for online self-calibration of the camera. When the camera is positioned in the coverage area of the high-precision map, the association between the camera acquisition image and the high-precision map image is realized based on the image information in the high-precision map, and the rapid and accurate calibration of the camera is realized by combining the three-dimensional point cloud data of the high-precision map.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments, taken in conjunction with the accompanying drawings. In the drawings:
fig. 1 shows a flowchart of a camera calibration method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic flow diagram of a camera calibration method according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of a camera calibration apparatus according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an electronic device according to an embodiment of the disclosure.
Fig. 5 shows a schematic diagram of a computer system suitable for use in implementing methods according to embodiments of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. In addition, for the sake of clarity, portions irrelevant to description of the exemplary embodiments are omitted in the drawings.
In this disclosure, it should be understood that terms such as "comprises" or "comprising," etc., are intended to indicate the presence of features, numbers, steps, acts, components, portions, or combinations thereof disclosed in this specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, acts, components, portions, or combinations thereof are present or added.
In addition, it should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
In the present disclosure, the acquisition of user information or user data is an operation that is authorized, confirmed, or actively selected by the user.
As described above, high-precision maps play a key role in fields such as autonomous driving and smart cities, and the main high-precision map acquisition and update pipelines at present still depend on laser acquisition vehicles. To address the problems of acquisition capacity and timeliness, low-cost vision-based updating has become an active research direction. At present, a number of companies are building high-precision maps based on monocular vision, and crowdsourced update services have emerged; building and updating high-precision maps from monocular vision extends the coverage of the high-precision map and shortens the update cycle of the map.
Vision-based reconstruction relies on the internal parameter calibration of the camera of the high-precision map acquisition device, and the accuracy of the internal parameter calibration directly affects the accuracy of the finally generated high-precision map. Moreover, owing to factors such as the vehicle windshield, manufacturing errors of the acquisition device, and the installation position, calibration of the camera cannot be completed at the factory. In crowdsourced acquisition scenarios the cameras are very widely distributed; a traditional off-line calibration scheme would require operators to carry calibration equipment (such as a checkerboard or a QR-code calibration board) to each camera for on-site calibration, which is costly, inefficient, and difficult to implement.
Online self-calibration needs no calibration equipment and can complete online calibration of the camera internal parameters during acquisition. However, online self-calibration schemes based on SfM (Structure from Motion) are limited in the calibration accuracy of the camera internal parameters, and can hardly support a high-accuracy reconstruction task.
The embodiment of the disclosure provides a camera calibration method, which comprises the following steps: collecting multi-frame images {I'_t}, t = 0, 1, …, T, over a high-precision map coverage area with a camera, where I'_t denotes the t-th frame image and T > 1; for each frame I'_t, acquiring a high-precision map image I_n matched with I'_t; determining, from the matched high-precision map image I_n, the three-dimensional space points x_{t,i} corresponding to the feature points p_{t,i} in I'_t; and determining the camera internal parameters K from the feature points p_{t,i} in the multi-frame images, their corresponding three-dimensional space points x_{t,i}, the projection equation of the camera, and the initial internal parameter value K_0.
According to the embodiment of the disclosure, the high-precision map simultaneously contains image information and accurate three-dimensional point cloud data, and can provide effective support data for online self-calibration of the camera. When the camera is positioned in the coverage area of the high-precision map, the association between the camera acquisition image and the high-precision map image is realized based on the image information in the high-precision map, and the rapid and accurate calibration of the camera is realized by combining the three-dimensional point cloud data of the high-precision map.
Fig. 1 shows a flowchart of a camera calibration method according to an embodiment of the present disclosure.
As shown in fig. 1, the camera calibration method includes the following steps S101 to S104:
in step S101, acquiring a plurality of frames of images in a high-precision map coverage area by a camera;
in step S102, for each frame of image, a high-precision map image matching the image is acquired;
in step S103, determining three-dimensional space points corresponding to feature points in the image according to the high-precision map image matched with the image;
in step S104, a camera internal parameter of the camera is determined according to the feature points in the multi-frame image, the three-dimensional space points corresponding to the feature points, the projection equation of the camera, and the internal parameter initial value of the camera.
According to an embodiment of the present disclosure, the multi-frame images are acquired by the camera in the high-precision map coverage area; for example, the camera may acquire the images {I'_t}, t = 0, 1, …, T, where I'_t denotes the t-th frame image and T > 1. For each frame, a high-precision map image matching that frame is acquired; for example, a high-precision map image I_n matching I'_t may be acquired for each frame I'_t. The three-dimensional space points corresponding to the feature points in each image are determined from the matched high-precision map image; for example, the three-dimensional space points x_{t,i} corresponding to the feature points p_{t,i} in I'_t may be determined from the matched map image I_n. The camera internal parameters are then determined from the feature points in the multi-frame images, the corresponding three-dimensional space points, the projection equation of the camera, and the initial internal parameter value; for example, the camera internal parameters K may be determined from the feature points p_{t,i} in {I'_t}, the corresponding three-dimensional space points x_{t,i}, the projection equation of the camera, and the initial internal parameter value K_0.
According to embodiments of the present disclosure, the camera to be calibrated may be a monocular RGB camera. The camera acquires multiple frames of images, e.g. T consecutive frames, in the high-precision map coverage area; the set of images is denoted {I'_t}, t = 0, 1, …, T, where I'_t denotes the t-th frame image and T > 1.
The high-precision map data of the high-precision map coverage area include the high-precision map images {I_n} and three-dimensional point cloud data {x_i}, where n is the index number of the image data in the high-precision map data and x_i ∈ R³ denotes a point in three-dimensional Euclidean space. The high-precision map should also contain a mapping relation Ξ between pixel points in the high-precision map images and the three-dimensional space points: through this mapping, the three-dimensional point x corresponding to a pixel coordinate u on the n-th high-precision map image can be found, x = Ξ(n, u). In practice, the mapping relation can be obtained in various ways during high-precision map production, such as camera extrinsic parameter calibration or point cloud–image registration.
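The mapping relation Ξ can be pictured, in a minimal sketch, as a lookup table from (map image index, pixel coordinate) to a 3D point. The table entries and helper name below are illustrative assumptions, not data from the patent:

```python
# Minimal sketch of the pixel-to-3D mapping (Xi) of a high-precision map.
# All entries here are synthetic illustration data.

# Xi maps (image index n, pixel coordinate (u, v)) -> 3D point (x, y, z).
xi_table = {
    (0, (412, 305)): (12.7, -3.1, 1.8),   # e.g. a lane-marking corner
    (0, (640, 360)): (15.0, 0.0, 0.0),
    (1, (100, 200)): (30.2, 4.5, 0.6),
}

def lookup_3d_point(n, pixel):
    """Return the 3D point for a pixel of the n-th map image, or None."""
    return xi_table.get((n, pixel))
```

In a real system the table would be populated during map production, e.g. from camera extrinsic calibration or point cloud–image registration.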
According to an embodiment of the disclosure, the acquiring of the high-precision map image I_n matched with the image I'_t includes acquiring a high-precision map image I_n similar to I'_t, where the acquisition location of I_n is adjacent to the acquisition location of I'_t; and/or acquiring a plurality of high-precision map images {I_n} matched with I'_t, where I_n denotes the n-th image in the set of high-precision map images matched with I'_t.
According to an embodiment of the present disclosure, the acquiring of a high-precision map image I_n similar to the image I'_t, with the acquisition location of I_n adjacent to that of I'_t, includes acquiring such an image according to the acquisition location of I'_t and/or through similar-image retrieval.
According to the embodiment of the disclosure, each frame image I'_t may have multiple similar high-precision map images; for example, images collected on the same short section of road are similar to the corresponding high-precision map images: their acquisition positions are adjacent, the photographed contents are substantially the same, the pictures differ little, and many identical feature points can be extracted. The high-precision map image I_n matched with I'_t can be obtained from the acquisition-location positioning information of I'_t and of the map images, such as Global Positioning System (GPS) positioning information, or by similar-image retrieval.
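Retrieval by acquisition location can be sketched as a proximity search over the GPS positions of the map images. The coordinates and the `max_dist_m` threshold below are illustrative assumptions (the patent does not specify a distance criterion):

```python
import math

def nearby_map_images(query_latlon, map_image_locs, max_dist_m=50.0):
    """Return indices of map images acquired within max_dist_m of the query.

    map_image_locs: list of (lat, lon) per high-precision map image.
    Uses an equirectangular approximation, adequate at road-segment scale.
    """
    lat0, lon0 = query_latlon
    m_per_deg = 111_320.0  # metres per degree of latitude (approx.)
    hits = []
    for n, (lat, lon) in enumerate(map_image_locs):
        dy = (lat - lat0) * m_per_deg
        dx = (lon - lon0) * m_per_deg * math.cos(math.radians(lat0))
        if math.hypot(dx, dy) <= max_dist_m:
            hits.append(n)
    return hits
```

A production system would typically combine this location filter with similar-image retrieval, as the text describes.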
According to an embodiment of the disclosure, the determining, from the high-precision map image I_n matched with the image I'_t, of the three-dimensional space points x_{t,i} corresponding to the feature points p_{t,i} in I'_t comprises: acquiring, in the matched high-precision map image I_n, the homonymous feature points q_{t,i} of the feature points p_{t,i} in I'_t; and determining the three-dimensional space points corresponding to the homonymous feature points q_{t,i} as the three-dimensional space points x_{t,i} corresponding to the feature points p_{t,i} in the image.
According to an embodiment of the disclosure, the acquiring, in the high-precision map image I_n matched with the image I'_t, of the homonymous feature points q_{t,i} of the feature points p_{t,i} comprises: extracting a first feature point set from the image I'_t and a second feature point set from the matched high-precision map image I_n; and performing feature matching between the first feature point set and the second feature point set to obtain the homonymous feature points q_{t,i} of the feature points p_{t,i} in I'_t.
According to an embodiment of the present disclosure, feature points of types such as SIFT, ORB, or R2D2 may be used; the present disclosure does not limit the type of feature points. For example, SuperPoint feature points may be used, with descriptor dimension d = 256.
After extracting the first feature point set from the image I'_t and the second feature point set from the matched high-precision map image I_n, feature matching between the two sets yields the set of homonymous feature point pairs {(p_{t,i}, q_{t,i})}, i = 1, …, M_n, where M_n is the number of homonymous feature points between I'_t and I_n. Based on the mapping relation Ξ in the high-precision map, the three-dimensional space point matching result {(p_{t,i}, x_{t,i})} between the image I'_t and the high-precision map can then be obtained.
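The patent does not fix a particular matcher; mutual nearest-neighbour search under L2 descriptor distance is one common choice and can be sketched as follows (function name is mine):

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Return index pairs (i, j) where desc_a[i] and desc_b[j] are mutual
    nearest neighbours under L2 distance. desc_*: (N, d) arrays."""
    # Pairwise squared L2 distances, shape (Na, Nb).
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    nn_ab = d2.argmin(axis=1)          # best match in b for each a
    nn_ba = d2.argmin(axis=0)          # best match in a for each b
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

The mutual-consistency check rejects one-sided matches, a cheap way to suppress wrong homonymous pairs before the 3D lookup via Ξ.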
According to an embodiment of the present disclosure, the determining of the camera internal parameters K from the feature points p_{t,i} in the multi-frame images {I'_t}, the corresponding three-dimensional space points x_{t,i}, the projection equation of the camera, and the initial internal parameter value K_0 comprises: constructing an optimization equation for the camera internal parameters from the feature points p_{t,i}, the corresponding three-dimensional space points x_{t,i}, and the projection equation of the camera; calculating, from the feature points p_{t,i}, the corresponding three-dimensional space points x_{t,i}, and the initial internal parameter value K_0, the initial pose values {T_t^0} of the camera at the acquisition of the multi-frame images; and solving the optimization equation for the camera internal parameters based on K_0 and {T_t^0}, thereby determining the camera internal parameters K.
According to an embodiment of the present disclosure, the optimization equation may be, for example, the following equation (1):

(K*, {T_t}*) = argmin_{K, {T_t}} Σ_{t=0..T} Σ_{i=0..M_t} ‖ p_i^t − π(K, T_t, x_i^t) ‖²    (1)

wherein p_i^t is the position coordinates of the ith feature point in the t-th frame image, x_i^t is the position coordinates of the three-dimensional space point corresponding to p_i^t, M_t is the number of homonymous feature points in the t-th frame image, T_t is the pose of the camera when the t-th frame image is acquired, K is the intrinsic parameter of the camera, K* is the optimal solution of the camera intrinsic parameters, and {T_t}* is the optimal solution of the camera poses. π(·) is the projection equation of the camera; the camera model may be, for example, a pinhole model or a fisheye model according to the practical situation, which is not particularly limited in the present disclosure. Using π(K, T_t, x) to represent the projection process of the camera, the projection equation maps a three-dimensional space point into the two-dimensional image space when the intrinsic parameter K is known, i.e., p = π(K, T_t, x).
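For the pinhole case, the projection equation π(K, T_t, x) used in equation (1) can be sketched as follows (lens distortion omitted; a simplified illustration, not the full camera model of the disclosure):

```python
import numpy as np

def project(K, T, X):
    """Pinhole projection pi(K, T, x): world point -> pixel.
    K is the 3x3 intrinsic matrix, T = (R, t) the camera pose
    (world-to-camera), X a 3-vector in world coordinates."""
    R, t = T
    Xc = R @ X + t              # world frame -> camera frame
    uvw = K @ Xc                # camera frame -> homogeneous pixel
    return uvw[:2] / uvw[2]     # perspective division

def reprojection_residual(K, T, X, p):
    """Per-point residual p - pi(K, T, x) from equation (1)."""
    return p - project(K, T, X)
```

Equation (1) then sums the squared norm of `reprojection_residual` over all frames t and all matched points i.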
When solving the optimization equation, initial values of the camera intrinsic parameters and of the camera poses are needed. The intrinsic parameter initial value K_0 can be obtained in various ways, such as from factory parameters, calibration results of cameras of the same model, or online SfM calibration results. Based on K_0, the initial pose value T_t^0 of the camera at the time each frame image I'_t was acquired can be calculated with the PnP algorithm, yielding {T_t^0}. The PnP (Perspective-n-Point) algorithm solves for the camera pose by minimizing the re-projection error over multiple pairs of matched 3D and 2D points, with the camera intrinsic parameters known or unknown.
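The PnP initialisation step can be illustrated with a minimal DLT (direct linear transform) solver for the pose given known intrinsics. This is a sketch only; production systems would use a robust P3P/EPnP solver inside RANSAC (for example OpenCV's `solvePnPRansac`):

```python
import numpy as np

def dlt_pnp(K, pts3d, pts2d):
    """Rough DLT-style PnP: recover the pose (R, t) from n >= 6
    non-coplanar 2D-3D correspondences with known intrinsics K."""
    Kinv = np.linalg.inv(K)
    rows = []
    for X, p in zip(pts3d, pts2d):
        # Normalised image coordinates: [x, y, 1] ~ R X + t.
        x, y, _ = Kinv @ np.array([p[0], p[1], 1.0])
        Xh = np.append(X, 1.0)
        rows.append(np.concatenate([Xh, np.zeros(4), -x * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -y * Xh]))
    # Smallest right singular vector gives [R|t] up to scale and sign.
    P = np.linalg.svd(np.asarray(rows))[2][-1].reshape(3, 4)
    if np.linalg.det(P[:, :3]) < 0:   # resolve the sign ambiguity
        P = -P
    U, S, Vt = np.linalg.svd(P[:, :3])
    R = U @ Vt                        # nearest proper rotation matrix
    t = P[:, 3] / S.mean()            # undo the unknown DLT scale
    return R, t
```

The recovered (R, t) serves as the pose initial value fed into the subsequent nonlinear refinement.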
Then, the optimization equation (1) of the camera intrinsic parameters is solved as an unconstrained nonlinear optimization problem (for example, with the Levenberg-Marquardt algorithm), where the initial values of the camera intrinsic parameters and poses in the optimization iterations are K_0 and {T_t^0}, respectively. The optimal solution K* of the camera intrinsic parameters is finally obtained, completing the online self-calibration of the camera intrinsic parameters.
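To see why such an optimization recovers the intrinsics, consider a toy case where the poses are fixed at identity and only a single focal length f is unknown: the pinhole model u = f·X/Z + cx, v = f·Y/Z + cy is linear in f, so the least-squares optimum of the equation-(1) cost is closed-form. (Purely illustrative; the disclosure optimises the full K and all poses iteratively with Levenberg-Marquardt.)

```python
def refine_focal(matches, cx, cy):
    """Toy intrinsics refinement: pose fixed at identity, square pixels
    assumed, principal point (cx, cy) known.  The reprojection cost of
    equation (1) is then a 1-D least-squares problem in f with a
    closed-form minimiser."""
    num = den = 0.0
    for (X, Y, Z), (u, v) in matches:     # 3D point and observed pixel
        a, b = X / Z, Y / Z               # normalised image coordinates
        num += (u - cx) * a + (v - cy) * b
        den += a * a + b * b
    return num / den                      # argmin_f of the squared error
```

With noise-free observations this returns the true focal length exactly; with noise it returns the least-squares estimate.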
According to an embodiment of the disclosure, determining the camera intrinsic parameter K of the camera according to the feature points {p_i^t} in the multi-frame image {I'_t}_{t=0,1,…,T}, the three-dimensional space points {x_i^t} corresponding to the feature points, the projection equation of the camera, and the intrinsic parameter initial value K_0 of the camera may alternatively comprise: constructing an optimization equation of the camera intrinsic parameters according to the feature points {p_i^t}, the three-dimensional space points {x_i^t} corresponding to the feature points, the projection equation of the camera, and the observed quantities and observation equations of a sensor fixedly connected with the camera; calculating the pose initial values {T_t^0} of the camera at the times the multi-frame images {I'_t}_{t=0,1,…,T} were acquired, according to the feature points {p_i^t}, the three-dimensional space points {x_i^t} corresponding to the feature points, and the intrinsic parameter initial value K_0 of the camera; and solving the optimization equation of the camera intrinsic parameters based on the intrinsic parameter initial value K_0 and the pose initial values {T_t^0}, and determining the camera intrinsic parameters K of the camera.
According to an embodiment of the present disclosure, the optimization equation may also be, for example, the following equation (2):

(K*, {T_t}*) = argmin_{K, {T_t}} ( Σ_{t=0..T} Σ_{i=0..M_t} ‖ p_i^t − π(K, T_t, x_i^t) ‖² + Σ_{k=0..E} ‖ z_k − Z_k(K, {T_t}, {I'_t}, {I_n}) ‖² )    (2)
Wherein Z_k is the observation equation corresponding to the observed quantity z_k. The acquisition device may include a sensor fixedly connected to the camera, such as one or more of a speed sensor, an acceleration sensor, and a positioning sensor. In particular, the sensors may include inertial measurement units (IMUs), Global Positioning System (GPS) units, and the like. The observed quantities of these sensors can be denoted {z_k}_{k=0,1,…,E}, where E is the number of observed quantities.
Parameters in formula (2) that also appear in formula (1) have the same physical meanings as in formula (1) and are not described again here.
The optimization equation (2) of the camera intrinsic parameters is likewise solved as an unconstrained nonlinear optimization problem (for example, with the Levenberg-Marquardt algorithm), with the initial values of the camera intrinsic parameters and poses in the optimization iterations being K_0 and {T_t^0}, respectively. The optimal solution K* of the camera intrinsic parameters is finally obtained, completing the online self-calibration of the camera intrinsic parameters.
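The shape of the joint objective in equation (2) can be sketched by adding a sensor residual to the reprojection cost. Here the sensor is a hypothetical GPS-like observation of the camera centre, and `sigma` is a hypothetical noise weight; real systems would add IMU and speed observations with proper covariance weighting:

```python
import numpy as np

def joint_cost(K, poses, obs, z_pos, sigma=1.0):
    """Evaluate a joint objective of the shape of equation (2):
    reprojection terms plus per-frame sensor residual terms."""
    cost = 0.0
    for t, (R, trans) in enumerate(poses):
        # Visual term: sum over the 2D-3D matches of frame t.
        for X, p in obs[t]:
            uvw = K @ (R @ np.asarray(X) + trans)
            cost += np.sum((np.asarray(p) - uvw[:2] / uvw[2]) ** 2)
        # Sensor term: camera centre C = -R^T t vs. observed position.
        if t in z_pos:
            centre = -R.T @ trans
            cost += np.sum((np.asarray(z_pos[t]) - centre) ** 2) / sigma ** 2
    return cost
```

A nonlinear least-squares solver would minimise this cost jointly over K and the poses; perturbing either the intrinsics or the sensor observations away from their consistent values increases the cost.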
By introducing sensor observations into the optimization equation, the accuracy of the camera calibration result can be further improved.
Fig. 2 shows a schematic flow diagram of a camera calibration method according to an embodiment of the present disclosure.
As shown in fig. 2, in step S1, the set of images {I'_t}_{t=0,1,…,T} collected by the camera and the camera intrinsic parameter initial value K_0 are obtained, along with the other constraint data {z_k}_{k=0,1,…,E}, the high-precision map image data {I_n}, the three-dimensional space point data {x_i}, and the mapping relation Ξ.
In step S2, feature points are extracted from the image collected by the camera and from the matching high-precision map image and feature-matched to obtain the homonymous feature point set {(p_i^t, q_i^n)}.
In step S3, the matching result {(p_i^t, x_i^t)} between the image I'_t and the three-dimensional space points in the high-precision map is obtained according to the mapping relation Ξ in the high-precision map.
In step S4, the pose initial values {T_t^0} of the camera are solved based on the PnP algorithm.
In step S5, the maximum a posteriori estimation optimization equation (1) or (2) is constructed.
In step S6, solving the unconstrained nonlinear optimization problem to obtain a self-calibration result K *.
Fig. 3 shows a block diagram of a camera calibration apparatus according to an embodiment of the present disclosure. The apparatus may be implemented as part or all of an electronic device by software, hardware, or a combination of both.
As shown in fig. 3, the camera calibration apparatus 300 includes a collection module 310, an acquisition module 320, a first determination module 330, and a second determination module 340.
The collection module 310 collects multi-frame images in a high-precision map coverage area through a camera;
The acquisition module 320 acquires a high-precision map image matched with the image for each frame of image;
The first determining module 330 determines three-dimensional space points corresponding to feature points in the image according to the high-precision map image matched with the image;
The second determining module 340 determines a camera internal parameter of the camera according to the feature points in the multi-frame image, the three-dimensional space points corresponding to the feature points, the projection equation of the camera, and the internal parameter initial value of the camera.
According to an embodiment of the present disclosure, the acquiring of the high-precision map image I_n matched with the image I'_t includes: acquiring a high-precision map image I_n that is similar to the image I'_t and whose acquisition location is adjacent to the acquisition location of the image I'_t; and/or
acquiring a plurality of high-precision map images {I_n} matched with the image I'_t, wherein I_n represents the nth high-precision map image in the set of high-precision map images matched with the image I'_t.
According to an embodiment of the present disclosure, acquiring the high-precision map image I_n that is similar to the image I'_t and whose acquisition location is adjacent to that of the image I'_t includes: acquiring such a high-precision map image I_n according to the acquisition location of the image I'_t and/or by similar-image retrieval.
According to an embodiment of the disclosure, determining the three-dimensional space point x_i^t corresponding to the feature point p_i^t in the image I'_t according to the high-precision map image I_n matched with the image I'_t comprises:
acquiring the homonymous feature point q_i^n, in the high-precision map image I_n matched with the image I'_t, of the feature point p_i^t in the image I'_t; and
determining the three-dimensional space point corresponding to the homonymous feature point q_i^n as the three-dimensional space point x_i^t corresponding to the feature point p_i^t in the image.
According to an embodiment of the disclosure, acquiring the homonymous feature point q_i^n of the feature point p_i^t in the image I'_t in the high-precision map image I_n matched with the image I'_t comprises:
extracting a first feature point set in the image I'_t and a second feature point set in the high-precision map image I_n matched with the image I'_t; and
performing feature matching on the first feature point set and the second feature point set to obtain the homonymous feature point q_i^n of the feature point p_i^t in the image I'_t.
According to an embodiment of the disclosure, determining the camera intrinsic parameter K of the camera according to the feature points {p_i^t} in the multi-frame image {I'_t}_{t=0,1,…,T}, the three-dimensional space points {x_i^t} corresponding to the feature points, the projection equation of the camera, and the intrinsic parameter initial value K_0 of the camera comprises:
constructing an optimization equation of the camera intrinsic parameters according to the feature points {p_i^t} in the multi-frame image {I'_t}_{t=0,1,…,T}, the three-dimensional space points {x_i^t} corresponding to the feature points, and the projection equation of the camera;
calculating the pose initial values {T_t^0} of the camera at the times the multi-frame images {I'_t}_{t=0,1,…,T} were acquired, according to the feature points {p_i^t}, the three-dimensional space points {x_i^t} corresponding to the feature points, and the intrinsic parameter initial value K_0 of the camera; and
solving the optimization equation of the camera intrinsic parameters based on the intrinsic parameter initial value K_0 of the camera and the pose initial values {T_t^0} of the camera, and determining the camera intrinsic parameters K of the camera.
According to an embodiment of the disclosure, determining the camera intrinsic parameter K of the camera according to the feature points {p_i^t} in the multi-frame image {I'_t}_{t=0,1,…,T}, the three-dimensional space points {x_i^t} corresponding to the feature points, the projection equation of the camera, and the intrinsic parameter initial value K_0 of the camera may alternatively comprise:
constructing an optimization equation of the camera intrinsic parameters according to the feature points {p_i^t} in the multi-frame image {I'_t}_{t=0,1,…,T}, the three-dimensional space points {x_i^t} corresponding to the feature points, the projection equation of the camera, and the observed quantities and observation equations of a sensor fixedly connected with the camera;
calculating the pose initial values {T_t^0} of the camera at the times the multi-frame images {I'_t}_{t=0,1,…,T} were acquired, according to the feature points {p_i^t}, the three-dimensional space points {x_i^t} corresponding to the feature points, and the intrinsic parameter initial value K_0 of the camera; and
solving the optimization equation of the camera intrinsic parameters based on the intrinsic parameter initial value K_0 of the camera and the pose initial values {T_t^0} of the camera, and determining the camera intrinsic parameters K of the camera.
The present disclosure also discloses an electronic device, and fig. 4 shows a block diagram of the electronic device according to an embodiment of the present disclosure.
As shown in fig. 4, the electronic device 400 comprises a memory 401 and a processor 402, wherein the memory 401 is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor 402 to implement a method according to an embodiment of the disclosure. According to embodiments of the present disclosure, the electronic device 400 may be a high-precision map acquisition device or a mobile terminal with a camera, or may be a server host or cloud server in communication with the high-precision map acquisition device.
The embodiment of the disclosure provides a camera calibration method, which comprises the following steps:
Collecting multi-frame images {I'_t}_{t=0,1,…,T} in a high-precision map coverage area by a camera, wherein I'_t represents the t-th frame image, and T>1;
For each frame of image I'_t, acquiring a high-precision map image I_n matched with the image I'_t;
determining the three-dimensional space points x_i^t corresponding to the feature points p_i^t in the image I'_t according to the high-precision map image I_n matched with the image I'_t; and
determining the camera intrinsic parameter K of the camera according to the feature points {p_i^t} in the multi-frame image {I'_t}_{t=0,1,…,T}, the three-dimensional space points {x_i^t} corresponding to the feature points, the projection equation of the camera, and the intrinsic parameter initial value K_0 of the camera.
According to an embodiment of the present disclosure, the acquiring of the high-precision map image I_n matched with the image I'_t includes: acquiring a high-precision map image I_n that is similar to the image I'_t and whose acquisition location is adjacent to the acquisition location of the image I'_t; and/or
acquiring a plurality of high-precision map images {I_n} matched with the image I'_t, wherein I_n represents the nth high-precision map image in the set of high-precision map images matched with the image I'_t.
According to an embodiment of the present disclosure, acquiring the high-precision map image I_n that is similar to the image I'_t and whose acquisition location is adjacent to that of the image I'_t includes: acquiring such a high-precision map image I_n according to the acquisition location of the image I'_t and/or by similar-image retrieval.
According to an embodiment of the disclosure, determining the three-dimensional space point x_i^t corresponding to the feature point p_i^t in the image I'_t according to the high-precision map image I_n matched with the image I'_t comprises:
acquiring the homonymous feature point q_i^n, in the high-precision map image I_n matched with the image I'_t, of the feature point p_i^t in the image I'_t; and
determining the three-dimensional space point corresponding to the homonymous feature point q_i^n as the three-dimensional space point x_i^t corresponding to the feature point p_i^t in the image.
According to an embodiment of the disclosure, acquiring the homonymous feature point q_i^n of the feature point p_i^t in the image I'_t in the high-precision map image I_n matched with the image I'_t comprises:
extracting a first feature point set in the image I'_t and a second feature point set in the high-precision map image I_n matched with the image I'_t; and
performing feature matching on the first feature point set and the second feature point set to obtain the homonymous feature point q_i^n of the feature point p_i^t in the image I'_t.
According to an embodiment of the disclosure, determining the camera intrinsic parameter K of the camera according to the feature points {p_i^t} in the multi-frame image {I'_t}_{t=0,1,…,T}, the three-dimensional space points {x_i^t} corresponding to the feature points, the projection equation of the camera, and the intrinsic parameter initial value K_0 of the camera comprises:
constructing an optimization equation of the camera intrinsic parameters according to the feature points {p_i^t} in the multi-frame image {I'_t}_{t=0,1,…,T}, the three-dimensional space points {x_i^t} corresponding to the feature points, and the projection equation of the camera;
calculating the pose initial values {T_t^0} of the camera at the times the multi-frame images {I'_t}_{t=0,1,…,T} were acquired, according to the feature points {p_i^t}, the three-dimensional space points {x_i^t} corresponding to the feature points, and the intrinsic parameter initial value K_0 of the camera; and
solving the optimization equation of the camera intrinsic parameters based on the intrinsic parameter initial value K_0 of the camera and the pose initial values {T_t^0} of the camera, and determining the camera intrinsic parameters K of the camera.
According to an embodiment of the disclosure, determining the camera intrinsic parameter K of the camera according to the feature points {p_i^t} in the multi-frame image {I'_t}_{t=0,1,…,T}, the three-dimensional space points {x_i^t} corresponding to the feature points, the projection equation of the camera, and the intrinsic parameter initial value K_0 of the camera may alternatively comprise:
constructing an optimization equation of the camera intrinsic parameters according to the feature points {p_i^t} in the multi-frame image {I'_t}_{t=0,1,…,T}, the three-dimensional space points {x_i^t} corresponding to the feature points, the projection equation of the camera, and the observed quantities and observation equations of a sensor fixedly connected with the camera;
calculating the pose initial values {T_t^0} of the camera at the times the multi-frame images {I'_t}_{t=0,1,…,T} were acquired, according to the feature points {p_i^t}, the three-dimensional space points {x_i^t} corresponding to the feature points, and the intrinsic parameter initial value K_0 of the camera; and
solving the optimization equation of the camera intrinsic parameters based on the intrinsic parameter initial value K_0 of the camera and the pose initial values {T_t^0} of the camera, and determining the camera intrinsic parameters K of the camera.
Fig. 5 shows a schematic diagram of a computer system suitable for use in implementing methods according to embodiments of the present disclosure.
As shown in fig. 5, the computer system 500 includes a processing unit 501, which can execute various processes in the above-described embodiments in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the system 500 are also stored. The processing unit 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom is installed into the storage section 508. The processing unit 501 may be implemented as a CPU, GPU, TPU, FPGA, NPU, or the like.
In particular, according to embodiments of the present disclosure, the methods described above may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising computer instructions which, when executed by a processor, implement the method steps described above. In such embodiments, the computer program product may be downloaded and installed from a network via the communications portion 509, and/or installed from the removable media 511.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules referred to in the embodiments of the present disclosure may be implemented in software or in programmable hardware. The units or modules described may also be provided in a processor, the names of which in some cases do not constitute a limitation of the unit or module itself.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be a computer-readable storage medium included in the electronic device or the computer system in the above-described embodiments; or may be a computer-readable storage medium, alone, that is not assembled into a device. The computer-readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description covers only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in this disclosure is not limited to the specific combinations of the features described above, and also encompasses other embodiments formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, embodiments formed by substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Claims (8)
1. A camera calibration method, comprising:
Acquiring multiple frames of camera images in a high-precision map coverage area through a camera;
For each frame of camera image, acquiring a high-precision map image matched with the camera image;
Determining three-dimensional space points corresponding to the feature points in the camera image according to the high-precision map image matched with the camera image; the high-precision map data of the high-precision map coverage area comprises a high-precision map image, three-dimensional point cloud data and a mapping relation between pixel points in the high-precision map image and three-dimensional space points in the three-dimensional point cloud data;
According to the characteristic points in the multi-frame camera image, the three-dimensional space points corresponding to the characteristic points, the projection equation of the camera, and the error between the observed quantity and the observation equation of the sensor fixedly connected with the camera, an optimization equation of the camera internal parameters is constructed;
calculating the pose initial value of the camera when the multi-frame camera image is acquired according to the feature points in the multi-frame camera image, the three-dimensional space points corresponding to the feature points and the internal reference initial value of the camera;
And solving an optimization equation of the camera internal parameters based on the initial values of the camera internal parameters and the initial values of the pose of the camera when the multi-frame camera images are acquired, and determining the camera internal parameters of the camera.
2. The method of claim 1, wherein the acquiring a high-precision map image that matches the camera image comprises:
Acquiring a high-precision map image similar to the camera image, wherein the acquisition place of the high-precision map image is close to the acquisition place of the camera image; and/or
And acquiring a plurality of high-precision map images matched with the camera images.
3. The method of claim 2, wherein the acquiring a high-precision map image similar to the camera image and having a collection location adjacent to the collection location of the camera image comprises:
and acquiring a high-precision map image similar to the camera image according to the acquisition place of the camera image and/or through similar image retrieval, wherein the acquisition place of the high-precision map image is close to the acquisition place of the camera image.
4. The method of claim 1, wherein the determining three-dimensional spatial points corresponding to feature points in the camera image from a high-precision map image that matches the camera image comprises:
obtaining homonymous feature points of feature points in the camera image in a high-precision map image matched with the camera image;
and determining the three-dimensional space points corresponding to the same-name feature points as the three-dimensional space points corresponding to the feature points in the camera image.
5. The method of claim 4, wherein the obtaining homonymous feature points of feature points in the camera image in a high-precision map image that matches the camera image comprises:
extracting a first characteristic point set in the camera image and a second characteristic point set in a high-precision map image matched with the camera image;
and performing feature matching on the first feature point set and the second feature point set to obtain homonymous feature points of the feature points in the camera image.
6. A camera calibration apparatus comprising:
a collection module configured to collect a plurality of frames of camera images in a high-precision map coverage area through a camera;
an acquisition module configured to acquire, for each frame of camera image, a high-precision map image that matches the camera image;
a first determining module configured to determine three-dimensional space points corresponding to feature points in the camera image according to a high-precision map image matched with the camera image; the high-precision map data of the high-precision map coverage area comprises a high-precision map image, three-dimensional point cloud data and a mapping relation between pixel points in the high-precision map image and three-dimensional space points in the three-dimensional point cloud data;
The second determining module is configured to construct an optimization equation of the internal parameters of the camera according to the characteristic points in the multi-frame camera image, the three-dimensional space points corresponding to the characteristic points, the projection equation of the camera, and the error between the observed quantity and the observation equation of the sensor fixedly connected with the camera; calculating the pose initial value of the camera when the multi-frame camera image is acquired according to the feature points in the multi-frame camera image, the three-dimensional space points corresponding to the feature points and the internal reference initial value of the camera; and solving an optimization equation of the camera internal parameters based on the initial values of the camera internal parameters and the initial values of the pose of the camera when the multi-frame camera images are acquired, and determining the camera internal parameters of the camera.
7. An electronic device includes a memory and a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method of any of claims 1-5.
8. A computer program product comprising computer instructions which, when executed by a processor, implement the method of any of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210443441.3A CN114782550B (en) | 2022-04-25 | 2022-04-25 | Camera calibration method, device, electronic equipment and program product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114782550A CN114782550A (en) | 2022-07-22 |
CN114782550B true CN114782550B (en) | 2024-09-03 |
Family
ID=82433869
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210443441.3A Active CN114782550B (en) | 2022-04-25 | 2022-04-25 | Camera calibration method, device, electronic equipment and program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114782550B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112136137A (en) * | 2019-10-29 | 2020-12-25 | 深圳市大疆创新科技有限公司 | A kind of parameter optimization method, device and control equipment, aircraft |
CN112184890A (en) * | 2020-10-14 | 2021-01-05 | 佳都新太科技股份有限公司 | Camera accurate positioning method applied to electronic map and processing terminal |
CN114283201A (en) * | 2021-04-26 | 2022-04-05 | 阿波罗智联(北京)科技有限公司 | Camera calibration method, device and roadside equipment |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10970878B2 (en) * | 2018-12-13 | 2021-04-06 | Lyft, Inc. | Camera calibration using reference map |
CN110009681B (en) * | 2019-03-25 | 2021-07-30 | 中国计量大学 | A monocular visual odometry pose processing method based on IMU assistance |
CN112444242B (en) * | 2019-08-31 | 2023-11-10 | 北京地平线机器人技术研发有限公司 | Pose optimization method and device |
KR102095226B1 (en) * | 2019-12-26 | 2020-04-01 | 한국항공촬영 주식회사 | Image processing system to improve the accuracy of captured images |
CN111156997B (en) * | 2020-03-02 | 2021-11-30 | 南京航空航天大学 | Vision/inertia combined navigation method based on camera internal parameter online calibration |
CN114022561B (en) * | 2021-10-18 | 2024-07-30 | 武汉中海庭数据技术有限公司 | Urban area monocular mapping method and system based on GPS constraint and dynamic correction |
CN113989450B (en) * | 2021-10-27 | 2023-09-26 | 北京百度网讯科技有限公司 | Image processing method, device, electronic equipment and medium |
- 2022-04-25: Application CN202210443441.3A filed in China; patent CN114782550B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN114782550A (en) | 2022-07-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6862409B2 (en) | | Map generation and moving subject positioning methods and devices |
CN108319655B (en) | | Method and device for generating grid map |
JP2020528134A (en) | | Calibration of integrated sensors in natural scenes |
CN111932627B (en) | | Marker drawing method and system |
CN114840703B (en) | | Pose information acquisition method, device, equipment, medium and product |
CN113312435B (en) | | High-precision map updating method and equipment |
CN111190199B (en) | | Positioning method, positioning device, computer equipment and readable storage medium |
CN114494466B (en) | | External parameter calibration method, device, equipment and storage medium |
CN115164918B (en) | | Semantic point cloud map construction method, device and electronic equipment |
CN111080682A (en) | | Point cloud data registration method and device |
US20160169662A1 (en) | | Location-based facility management system using mobile device |
CN111353453A (en) | | Obstacle detection method and apparatus for vehicle |
CN114140533A (en) | | Method and device for calibrating external parameters of camera |
CN113223064A (en) | | Method and device for estimating scale of visual inertial odometer |
CN112233149B (en) | | Method and device for determining scene flow, storage medium and electronic device |
CN112446915A (en) | | Map building method and device based on image groups |
CN111523409B (en) | | Method and device for generating position information |
CN113536854A (en) | | High-precision map guideboard generation method, device and server |
CN114820769A (en) | | Vehicle positioning method and device, computer equipment, storage medium and vehicle |
CN114782550B (en) | | Camera calibration method, device, electronic equipment and program product |
CN114595238A (en) | | Vector-based map processing method and device |
CN113009533A (en) | | Vehicle positioning method and device based on visual SLAM, and cloud server |
CN115620264B (en) | | Vehicle positioning method and device, electronic equipment and computer readable medium |
CN114858156B (en) | | Real-time positioning and map construction method and unmanned mobile device |
CN116878486A (en) | | Map construction method, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||