CN112884832A - Intelligent trolley track prediction method based on multi-view vision - Google Patents
Intelligent trolley track prediction method based on multi-view vision
- Publication number
- CN112884832A (application CN202110270322.8A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- intelligent trolley
- intelligent
- pose
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
The invention discloses an intelligent trolley trajectory prediction method based on multi-view vision, which overcomes the drawbacks of existing indoor intelligent trolley positioning: unstable real-time positioning, frequent signal loss, limited applicability, low working efficiency, and high cost.
Description
Technical Field
The invention relates to an indoor positioning technology, in particular to an intelligent trolley track prediction method based on multi-view vision.
Background
In the prior art, outdoor intelligent trolleys achieve real-time positioning and trajectory prediction with the Global Positioning System (GPS). In areas where the GPS signal is weak or entirely absent, however — indoor environments being the typical case — the trolley cannot be positioned accurately. Alternatives such as WiFi positioning, Bluetooth positioning, and radio-frequency identification (RFID) positioning are used in some places GPS cannot cover, but these sensors are strongly affected by the environment: when the signal is interfered with, accurate positioning fails. They also become unstable when the indoor intelligent trolley operates far from the receiving source. The existing technology therefore suffers from high cost, limited applicability, and low working efficiency.
Disclosure of Invention
The invention aims to provide an intelligent trolley trajectory prediction method based on multi-view vision that achieves real-time positioning with machine vision at low cost and high stability.
The technical purpose of the invention is realized by the following technical scheme:
an intelligent trolley track prediction method based on multi-view vision comprises the following steps:
S1, fixedly install a plurality of cameras, capture a set number of checkerboard images in different poses, and calibrate the cameras with Zhang Zhengyou's calibration method to obtain each camera's intrinsic parameters and distortion parameters;
S2, attach a visual label to the intelligent trolley; the cameras shoot in real time and locate the pose of the visual label to obtain the trolley's real-time two-dimensional coordinates;
S3, establish the world coordinate origin with a PnP algorithm, define the correspondence between the world-space coordinates and two-dimensional coordinates of several points, and obtain the cameras' extrinsic parameters;
S4, transform the two-dimensional pose output by the visual label between coordinate systems, converting the two-dimensional coordinates into three-dimensional space coordinates;
S5, construct a pose measurement model from the multi-view stereo vision model, solve the three-dimensional space pose with a least-squares method, and refine the intelligent trolley's space pose with a triangle centroid method;
and S6, obtain the trolley's pose information from the three-dimensional space coordinates, plot the trolley's movement trajectory, and perform error analysis.
Preferably, the information of the visual label comprises four corner pixels, a central pixel, a homography matrix and an ID corresponding to each label.
Preferably, the visual label employs the AprilTag visual system.
Preferably, the conversion to three-dimensional space coordinates is specifically:
the visual labels on the intelligent trolley are captured simultaneously in the same scene by three cameras placed at different positions;
given the coordinates of N 3D world points and their two-dimensional image coordinates, the camera extrinsic parameters are solved by a PnP algorithm;
and a series of two-dimensional coordinates is converted into three-dimensional coordinates from the cameras' intrinsic and extrinsic parameters.
Preferably, the three cameras are installed in different positions and at different angles in space.
In conclusion, the invention has the following beneficial effects:
the pose of the intelligent trolley is positioned by a multi-vision machine vision and vision labeling technology, so that the indoor real-time positioning of the sensorless intelligent trolley is realized, and the existing intelligent trolley positioning technology is improved; the three-dimensional pose is calculated through multi-view vision, the problem of monocular camera depth calculation can be solved, the binocular camera has higher precision, more accurate indoor intelligent trolley positioning can be realized, and the requirements of intelligent trolley multi-angle and large-range real-time positioning are met.
Drawings
FIG. 1 is a schematic block flow diagram of the process;
FIG. 2 is a schematic diagram of multi-view intelligent vehicle pose measurement;
FIG. 3 is a multi-view vision pose measurement model diagram.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
According to one or more embodiments, an intelligent trolley trajectory prediction method based on multi-view vision is disclosed, as shown in FIG. 1 and FIG. 2, comprising the following steps:
and S1, fixedly installing a plurality of cameras, shooting pictures with different poses of a set number of checkerboard grids, and calibrating the cameras by a Zhang Zhengyou calibration method to obtain internal parameters and distortion parameters of the cameras.
The cameras are installed as follows: the number of cameras is preferably three, installed at different positions in the indoor space with each camera at a different angle. Multi-view vision realized with three cameras covers a larger measurement area; compared with a single binocular setup, a trinocular measurement system has better robustness and wider applicability in complex real-world scenes.
The cameras' intrinsic parameters are calibrated as follows: a chessboard calibration board is made, and the three fixed cameras synchronously capture 20 images of the board at different positions and rotation angles. With the multi-view data acquired synchronously, each of the three cameras is calibrated with Zhang Zhengyou's method to obtain its intrinsic parameters and distortion coefficients.
From the camera calibration, the translation matrix and rotation matrix in the camera coordinate system can be computed, and from the translation matrix the three-dimensional space coordinates of the object's center can be obtained, realizing six-degree-of-freedom pose estimation.
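As an illustrative sketch of what step S1's intrinsic calibration computes (NumPy; the function names are my own, and corner detection plus distortion estimation — which a full pipeline such as OpenCV's `calibrateCamera` also performs — are omitted), the closed-form core of Zhang Zhengyou's method recovers the intrinsic matrix from the homographies of at least three checkerboard views:

```python
import numpy as np

def v_ij(H, i, j):
    # Constraint row for b = [B11, B12, B22, B13, B23, B33], with B = K^-T K^-1.
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0]*hj[0],
                     hi[0]*hj[1] + hi[1]*hj[0],
                     hi[1]*hj[1],
                     hi[2]*hj[0] + hi[0]*hj[2],
                     hi[2]*hj[1] + hi[1]*hj[2],
                     hi[2]*hj[2]])

def intrinsics_from_homographies(Hs):
    """Closed-form intrinsic matrix from >= 3 plane homographies H ~ K [r1 r2 t]."""
    V = []
    for H in Hs:
        V.append(v_ij(H, 0, 1))                   # h1^T B h2 = 0
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))   # h1^T B h1 = h2^T B h2
    _, _, vt = np.linalg.svd(np.asarray(V))
    b = vt[-1]
    if b[0] < 0:                                  # B is positive definite up to scale
        b = -b
    B11, B12, B22, B13, B23, B33 = b
    v0 = (B12*B13 - B11*B23) / (B11*B22 - B12**2)
    lam = B33 - (B13**2 + v0*(B12*B13 - B11*B23)) / B11
    alpha = np.sqrt(lam / B11)
    beta = np.sqrt(lam * B11 / (B11*B22 - B12**2))
    gamma = -B12 * alpha**2 * beta / lam
    u0 = gamma*v0/beta - B13*alpha**2/lam
    return np.array([[alpha, gamma, u0],
                     [0.0,   beta,  v0],
                     [0.0,   0.0,  1.0]])
```

Each checkerboard view contributes two linear constraints on the symmetric matrix B; with three cameras each shooting 20 views, the system is heavily overdetermined and the SVD gives the least-squares solution.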
S2, the visual label is attached to the intelligent trolley, and the cameras shoot in real time, obtaining the trolley's real-time two-dimensional coordinates by locating the pose of the visual label.
The information of the visual label comprises four corner pixels, a central pixel, a homography matrix and an ID corresponding to each label.
The AprilTag visual system is adopted for the visual label. This fiducial system is widely used in robotics, AR, and camera calibration; similar to QR-code technology but with reduced complexity, it can quickly detect markers and compute their relative positions. Through the tag, the two-dimensional pose of the intelligent trolley can be accurately estimated, realizing remote real-time positioning.
Several visual labels are attached to the body of the indoor intelligent trolley, and their positions on the trolley are captured in real time. While the trolley operates, the three cameras record video in real time and transmit it to a computer, which recognizes the two-dimensional coordinates of the center point of each AprilTag label on the trolley body.
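The relationship between the label information listed earlier — four corner pixels, a center pixel, and a homography matrix — can be illustrated with a small NumPy sketch: a direct linear transform (DLT) estimates the homography from the tag's canonical square to the image, and mapping the canonical center through it yields the center pixel. This is an illustrative reconstruction of the geometry, not the AprilTag library's internal code:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src points (N >= 4) to dst via the DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u*x, -u*y, -u])
        rows.append([0, 0, 0, x, y, 1, -v*x, -v*y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def tag_center(corner_pixels):
    """Center pixel of a tag whose canonical corners form the unit square."""
    canonical = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
    H = homography_dlt(canonical, corner_pixels)
    c = H @ np.array([0.0, 0.0, 1.0])   # canonical center (0, 0) in homogeneous form
    return c[:2] / c[2]
```

The same homography is what a fiducial detector reports alongside the corner and center pixels, so the tag ID plus these quantities fully describe one detection.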
S3, establish the world coordinate origin with the PnP algorithm, define the correspondence between the world-space coordinates and two-dimensional coordinates of several points, and obtain the cameras' extrinsic parameters. Using the PnP algorithm on the image coordinates and space coordinates of fixed points — the four known corner points together with the calibrated camera intrinsic parameters and distortion — the camera extrinsic parameters are solved and the origin of the world coordinate system is determined.
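For the four coplanar tag corners, the PnP step reduces to the planar case: with the intrinsic matrix known, the extrinsic rotation and translation follow from the world-plane-to-image homography. A minimal NumPy sketch of that planar decomposition (an illustrative stand-in for a general PnP solver such as OpenCV's `solvePnP`):

```python
import numpy as np

def extrinsics_from_homography(K, H):
    """Recover R, t for points on the plane Z_W = 0 from H ~ K [r1 r2 t]."""
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    if M[2, 2] < 0:                 # enforce that the plane lies in front of the camera
        lam = -lam
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    U, _, Vt = np.linalg.svd(R)     # project onto the nearest true rotation matrix
    return U @ Vt, t
```

With noisy corners, the raw columns r1, r2, r3 are only approximately orthonormal; the final SVD projection snaps them to a valid rotation.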
S4, transform the two-dimensional pose output by the visual tag between coordinate systems, converting the two-dimensional coordinates into three-dimensional space coordinates, as shown in FIG. 3.
From the camera information entered into the computer in advance and the spatial relationship of the devices, the two-dimensional coordinates can be converted to camera coordinates, and the camera coordinates to spatial position coordinates in the world coordinate system. The conversion to three-dimensional space coordinates is specifically:
The visual labels on the intelligent trolley are captured simultaneously in the same scene by the three cameras placed at different positions; given the coordinates of N 3D world points and their two-dimensional image coordinates, the camera extrinsic parameters are solved by the PnP algorithm; then a series of two-dimensional coordinates is converted into three-dimensional coordinates from the cameras' intrinsic and extrinsic parameters, as in Equation 1: Z_C [u, v, 1]^T = A · B · [X_W, Y_W, Z_W, 1]^T, where matrix A is the camera intrinsic matrix, matrix B is the camera extrinsic matrix, (u, v) are the two-dimensional coordinates, (X_W, Y_W, Z_W) the three-dimensional coordinates, and Z_C a scale factor.
S5, construct a pose measurement model from the multi-view stereo vision model, solve the three-dimensional space pose with the least-squares method as shown in FIG. 3, and refine the intelligent trolley's space pose with the triangle centroid method.
In practice the data are always noisy, so the three-dimensional coordinates of the measured object obtained by least-squares fusion of the three views are three mutually disjoint points; the optimal three-dimensional coordinate is then obtained as their centroid.
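The least-squares fusion and centroid step can be sketched as follows: each camera's viewing ray through the tag center is intersected pairwise in the least-squares sense (the midpoint of the closest points of two skew rays), and the centroid of the three pairwise points is taken as the position. The function names and the exact pairing are illustrative assumptions consistent with the description above:

```python
import numpy as np
from itertools import combinations

def closest_point_between_rays(p1, d1, p2, d2):
    """Least-squares midpoint between rays p_i + s * d_i (d_i need not be unit)."""
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    e, f = d1 @ r, d2 @ r
    denom = a * c - b * b              # approaches 0 only for parallel rays
    s = (b * f - c * e) / denom
    u = (a * f - b * e) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + u * d2))

def fuse_by_centroid(origins, directions):
    """Centroid of the pairwise midpoints of all camera rays (three rays -> three points)."""
    pts = [closest_point_between_rays(origins[i], directions[i],
                                      origins[j], directions[j])
           for i, j in combinations(range(len(origins)), 2)]
    return np.mean(pts, axis=0)
```

With noise-free rays the three pairwise points coincide; with noise they form the small triangle whose center of gravity the method returns.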
S6, obtain the trolley's pose information from the three-dimensional space coordinates, plot the trolley's movement trajectory, and perform error analysis.
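The error analysis of S6 can be sketched as a comparison between the estimated track and a reference track — for example, per-point Euclidean error and RMSE. The metric is an illustrative assumption; the patent does not specify which error statistic is used:

```python
import numpy as np

def trajectory_errors(estimated, reference):
    """Per-point Euclidean error and RMSE between two (N, 3) trajectories."""
    est = np.asarray(estimated, float)
    ref = np.asarray(reference, float)
    per_point = np.linalg.norm(est - ref, axis=1)
    rmse = float(np.sqrt(np.mean(per_point ** 2)))
    return per_point, rmse
```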
The method realizes real-time trajectory prediction of the intelligent trolley, meets the accuracy requirements of indoor trolley positioning, and improves flexibility and controllability.
Pose positioning through multi-view machine vision and a visual label system realizes sensorless indoor real-time positioning of the intelligent trolley and improves on existing trolley positioning technology. AprilTag is a visual fiducial system used for tasks such as robot positioning and camera calibration; it can compute the precise position and orientation of a tag in a two-dimensional coordinate system. The cost of an expensive sensor-equipped trolley is therefore saved, with good robustness and economy, and users are freed from the inconvenience and low working efficiency of positioning an indoor trolley where the Global Positioning System cannot reach. The visual label system is economical and reliable, and multi-view vision is more precise than binocular vision, greatly improving the efficiency of indoor trolley estimation and prediction. Computing the three-dimensional pose from multiple views solves the monocular depth-estimation problem, achieves higher precision than a binocular camera, and realizes more accurate indoor positioning, meeting the trolley's multi-angle, wide-range real-time positioning requirements.
This embodiment merely explains the invention and does not limit it; those skilled in the art may, after reading this specification, modify the embodiment as needed without inventive contribution, and all such modifications are protected by patent law within the scope of the claims of the invention.
Claims (5)
1. An intelligent trolley trajectory prediction method based on multi-view vision, characterized by comprising the following steps:
S1, fixedly installing a plurality of cameras, capturing a set number of checkerboard images in different poses, and calibrating the cameras with Zhang Zhengyou's calibration method to obtain the cameras' intrinsic parameters and distortion parameters;
S2, attaching a visual label to the intelligent trolley; the cameras shoot in real time and locate the pose of the visual label to obtain the trolley's real-time two-dimensional coordinates;
S3, establishing the world coordinate origin with a PnP algorithm, defining the correspondence between the world-space coordinates and two-dimensional coordinates of several points, and obtaining the cameras' extrinsic parameters;
S4, transforming the two-dimensional pose output by the visual label between coordinate systems, converting the two-dimensional coordinates into three-dimensional space coordinates;
S5, constructing a pose measurement model from a multi-view stereo vision model, solving the three-dimensional space pose with a least-squares method, and refining the intelligent trolley's space pose with a triangle centroid method;
and S6, obtaining the trolley's pose information from the three-dimensional space coordinates, plotting the trolley's movement trajectory, and performing error analysis.
2. The intelligent trolley trajectory prediction method based on multi-view vision according to claim 1, wherein the information of the visual label comprises four corner pixels, a central pixel, a homography matrix, and an ID corresponding to each label.
3. The intelligent trolley trajectory prediction method based on multi-view vision according to claim 2, wherein the visual label employs the AprilTag visual system.
4. The intelligent trolley trajectory prediction method based on multi-view vision according to claim 3, wherein the conversion to three-dimensional space coordinates is specifically:
the visual labels on the intelligent trolley are captured simultaneously in the same scene by three cameras placed at different positions;
given the coordinates of N 3D world points and their two-dimensional image coordinates, the camera extrinsic parameters are solved by a PnP algorithm;
and a series of two-dimensional coordinates is converted into three-dimensional coordinates from the cameras' intrinsic and extrinsic parameters.
5. The intelligent trolley trajectory prediction method based on multi-view vision according to claim 4, wherein the three cameras are installed at different positions and different angles in space.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110270322.8A CN112884832B (en) | 2021-03-12 | 2021-03-12 | Intelligent trolley track prediction method based on multi-view vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112884832A true CN112884832A (en) | 2021-06-01 |
CN112884832B CN112884832B (en) | 2022-10-21 |
Family
ID=76042455
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110270322.8A Active CN112884832B (en) | 2021-03-12 | 2021-03-12 | Intelligent trolley track prediction method based on multi-view vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112884832B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113781576A (en) * | 2021-09-03 | 2021-12-10 | 北京理工大学 | Binocular vision detection system, method and device for real-time adjustment of multi-degree-of-freedom pose |
CN118470099A (en) * | 2024-07-15 | 2024-08-09 | 济南大学 | Object space pose measurement method and device based on monocular camera |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106527426A (en) * | 2016-10-17 | 2017-03-22 | 江苏大学 | Indoor multi-target track planning system and method |
CN108571971A (en) * | 2018-05-17 | 2018-09-25 | 北京航空航天大学 | A kind of AGV visual positioning system and method |
CN108827316A (en) * | 2018-08-20 | 2018-11-16 | 南京理工大学 | Mobile robot visual orientation method based on improved Apriltag label |
CN109018591A (en) * | 2018-08-09 | 2018-12-18 | 沈阳建筑大学 | A kind of automatic labeling localization method based on computer vision |
CN109658461A (en) * | 2018-12-24 | 2019-04-19 | 中国电子科技集团公司第二十研究所 | A kind of unmanned plane localization method of the cooperation two dimensional code based on virtual simulation environment |
US20200364482A1 (en) * | 2019-05-15 | 2020-11-19 | Matterport, Inc. | Arbitrary visual features as fiducial elements |
CN112364677A (en) * | 2020-11-23 | 2021-02-12 | 盛视科技股份有限公司 | Robot vision positioning method based on two-dimensional code |
Non-Patent Citations (2)
Title |
---|
GUO ZHENGLONG ET AL.: "Pose Estimation for Multicopters Based on Monocular Vision and AprilTag", 《PROCEEDINGS OF THE 37TH CHINESE CONTROL CONFERENCE》 * |
何浩楠 等: "基于AprilTag的智能小车拓展定位追踪应用", 《现代信息科技》 * |
Also Published As
Publication number | Publication date |
---|---|
CN112884832B (en) | 2022-10-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104200086B (en) | Wide-baseline visible light camera pose estimation method | |
CN106291278B (en) | A kind of partial discharge of switchgear automatic testing method based on more vision systems | |
CN102155923B (en) | Splicing measuring method and system based on three-dimensional target | |
CN110728715A (en) | Camera angle self-adaptive adjusting method of intelligent inspection robot | |
CN110345937A (en) | Appearance localization method and system are determined in a kind of navigation based on two dimensional code | |
CN111220126A (en) | Space object pose measurement method based on point features and monocular camera | |
CN111127568A (en) | A camera pose calibration method based on spatial point information | |
CN109242915A (en) | Multicamera system scaling method based on multi-face solid target | |
CN107990940A (en) | A kind of moving object method for tracing based on stereo vision measuring technology | |
CN102072706A (en) | Multi-camera positioning and tracking method and system | |
Aliakbarpour et al. | An efficient algorithm for extrinsic calibration between a 3d laser range finder and a stereo camera for surveillance | |
CN108007456A (en) | A kind of indoor navigation method, apparatus and system | |
CN115830142A (en) | Camera calibration method, camera target detection and positioning method, camera calibration device, camera target detection and positioning device and electronic equipment | |
CN106370160A (en) | Robot indoor positioning system and method | |
CN112884832A (en) | Intelligent trolley track prediction method based on multi-view vision | |
CN114413958A (en) | Monocular visual ranging and speed measurement method for unmanned logistics vehicles | |
CN114370871A (en) | A tightly coupled optimization method for visible light positioning and lidar inertial odometry | |
Jung et al. | A novel 2.5 D pattern for extrinsic calibration of tof and camera fusion system | |
CN111199576A (en) | Outdoor large-range human body posture reconstruction method based on mobile platform | |
JP4132068B2 (en) | Image processing apparatus, three-dimensional measuring apparatus, and program for image processing apparatus | |
Chen et al. | Low cost and efficient 3D indoor mapping using multiple consumer RGB-D cameras | |
CN116824067B (en) | Indoor three-dimensional reconstruction method and device thereof | |
CN117310627A (en) | Combined calibration method applied to vehicle-road collaborative road side sensing system | |
CN113345017B (en) | A method for visual SLAM using landmarks | |
CN109410272A (en) | A kind of identification of transformer nut and positioning device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||