CN111583338B - Positioning method and device for unmanned equipment, medium and unmanned equipment - Google Patents
- Publication number: CN111583338B (application CN202010340694.9A)
- Authority: CN (China)
- Prior art keywords: target, position information, feature point, target feature, dimensional position
- Legal status: Active (assumed; not a legal conclusion)
Classifications
- G06T7/73 (Image analysis) — Determining position or orientation of objects or cameras using feature-based methods
- G05D1/101 — Simultaneous control of position or course in three dimensions, specially adapted for aircraft
- G05D1/12 — Target-seeking control
- G06F18/22 (Pattern recognition) — Matching criteria, e.g. proximity measures
- G06T7/246 (Analysis of motion) — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
Abstract
The disclosure relates to a positioning method and apparatus for an unmanned device, a medium, and an unmanned device. The method includes: acquiring a target feature point in an image acquired at the current moment; determining whether the target feature point exists in the image acquired at the previous moment; if the target feature point does not exist in the image acquired at the previous moment, determining whether it exists in the images acquired at a plurality of moments closest to the previous moment; if it does, determining target three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of a plurality of target images; and positioning the unmanned device according to the target three-dimensional position information. In this way, the accuracy of the target three-dimensional position information of the target feature point can be improved, and the positioning precision of the unmanned device ensured.
Description
Technical Field
The present disclosure relates to the field of positioning, and in particular, to a positioning method and apparatus for an unmanned device, a medium, and an unmanned device.
Background
Unmanned devices, such as drones, are increasingly used in fields such as delivery, logistics, and aerial photography. Such devices depend on high-precision positioning technology: only with accurate positioning information can their safe and accurate operation be ensured.
In the related art, an unmanned device is generally positioned by visual positioning, in which a camera integrated in the device acquires images of the surrounding environment and the positioning information of the device is determined from feature points in those images. If the positioning precision is low, the travel track of the unmanned device cannot be accurately judged, and its safe and accurate operation cannot be ensured.
Disclosure of Invention
The purpose of the present disclosure is to provide a positioning method, an apparatus, a medium, and an unmanned device for an unmanned device, which improve the accuracy of target three-dimensional position information of a target feature point when the target feature point is not continuously tracked, thereby ensuring the positioning accuracy of the unmanned device.
To achieve the above object, in a first aspect, the present disclosure provides a positioning method for an unmanned device, the method comprising: acquiring a target feature point in an image acquired at the current moment; determining whether the target feature point exists in the image acquired at the previous moment; when the target feature point does not exist in the image acquired at the previous moment, determining whether it exists in the images acquired at a plurality of moments closest to the previous moment; when the target feature point exists in the images acquired at the plurality of moments closest to the previous moment, determining target three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of a plurality of target images, wherein a target image is an image that was acquired earlier than the previous moment and includes the target feature point; and positioning the unmanned device according to the target three-dimensional position information.
Optionally, when it is determined that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment, the method further includes: acquiring first three-dimensional position information of the target feature point; when the first three-dimensional position information is acquired, determining whether it satisfies a preset condition; and, when it is determined that the first three-dimensional position information does not satisfy the preset condition, determining the target three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of a plurality of target images.
Optionally, the method further includes: taking the first three-dimensional position information as the target three-dimensional position information when it is determined that the first three-dimensional position information satisfies the preset condition.
Optionally, the determining the target three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of a plurality of target images includes: determining second three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of the plurality of target images; determining whether the second three-dimensional position information satisfies a preset condition; and, when it is determined that the second three-dimensional position information satisfies the preset condition, determining it as the target three-dimensional position information.
Optionally, the preset condition includes: the variance of the differences between the position information of the target feature point in each target image and the projection position information obtained by projecting the three-dimensional position information of the target feature point into the corresponding target image being smaller than a preset threshold.
Optionally, the determining whether the target feature point exists in the image acquired at the previous moment includes: determining a target area in the image acquired at the previous moment according to the position information of the target feature point in the image acquired at the current moment; determining whether a feature point matching the target feature point exists in the target area; and, when a feature point matching the target feature point exists in the target area, determining that the target feature point exists in the image acquired at the previous moment.
Optionally, the determining whether the target feature point exists in the images acquired at the plurality of moments closest to the previous moment includes: traversing the images acquired at the plurality of moments closest to the previous moment and determining whether a feature point matching the target feature point exists in the currently traversed image; and, when a feature point matching the target feature point exists in the currently traversed image, determining that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment and stopping the traversal.
In a second aspect, the present disclosure provides a positioning apparatus for an unmanned device, the apparatus comprising: a target feature point acquisition module configured to acquire a target feature point in an image acquired at the current moment; a first determining module configured to determine whether the target feature point exists in the image acquired at the previous moment; a second determining module configured to determine, when the first determining module determines that the target feature point does not exist in the image acquired at the previous moment, whether the target feature point exists in the images acquired at a plurality of moments closest to the previous moment; a third determining module configured to determine, when the second determining module determines that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment, target three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of a plurality of target images, wherein a target image is an image that was acquired earlier than the previous moment and includes the target feature point; and a positioning module configured to position the unmanned device according to the target three-dimensional position information.
Optionally, the apparatus further comprises: a first three-dimensional position information acquisition module configured to acquire first three-dimensional position information of the target feature point when the second determining module determines that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment; and a fourth determining module configured to determine, when the first three-dimensional position information is acquired, whether the first three-dimensional position information satisfies a preset condition; wherein the third determining module is configured to determine, when the fourth determining module determines that the first three-dimensional position information does not satisfy the preset condition, the target three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of the plurality of target images.
Optionally, the apparatus further comprises: a fifth determining module configured to take the first three-dimensional position information as the target three-dimensional position information when the fourth determining module determines that the first three-dimensional position information satisfies the preset condition.
Optionally, the third determining module includes: a first determining submodule configured to determine second three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of the plurality of target images; a second determining submodule configured to determine whether the second three-dimensional position information satisfies a preset condition; and a third determining submodule configured to determine the second three-dimensional position information as the target three-dimensional position information when the second determining submodule determines that it satisfies the preset condition.
Optionally, the first determining module includes: a target area determination submodule configured to determine a target area in the image acquired at the previous time according to the position information of the target feature point in the image acquired at the current time; a fourth determination submodule configured to determine whether there is a feature point matching the target feature point in the target region; a fifth determination submodule configured to determine that the target feature point exists in the image acquired at the previous time, in a case where the fourth determination submodule determines that a feature point matching the target feature point exists in the target region.
Optionally, the second determining module is configured to traverse the images acquired at a plurality of moments closest to the previous moment and determine whether a feature point matching the target feature point exists in the currently traversed image; and, when a feature point matching the target feature point exists in the currently traversed image, to determine that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment and stop the traversal.
In a third aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method provided by the first aspect of the present disclosure.
In a fourth aspect, the present disclosure provides a positioning apparatus for an unmanned device, the apparatus comprising: a memory having a computer program stored thereon; a processor for executing the computer program in the memory to implement the steps of the method provided by the first aspect of the present disclosure.
In a fifth aspect, the present disclosure provides an unmanned device comprising the positioning apparatus provided in the fourth aspect of the present disclosure.
Through the above technical solution, if the target feature point does not exist in the image acquired at the previous moment but does exist in the images acquired at a plurality of moments closest to the previous moment, the target feature point was occluded at the previous moment, that is, not continuously tracked, and has reappeared in the image acquired at the current moment. When the target three-dimensional position information of such a target feature point is determined, both its position information in the image acquired at the current moment and its position information in each of the plurality of target images are used. In this way, rather than discarding the earlier observations of the target feature point as in the related art, its position information in the current image is integrated with its position information in the earlier target images, so the accuracy of the determined target three-dimensional position information can be improved, and with it the positioning accuracy of the unmanned device.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
Fig. 1 is a flow chart illustrating a positioning method for an unmanned device according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a positioning method for an unmanned device according to another exemplary embodiment.
Fig. 3 is a flow chart illustrating a method of determining whether a target feature point exists in the image acquired at the previous moment, according to another exemplary embodiment.
Fig. 4 is a flow chart illustrating a method of determining whether a target feature point exists in the images acquired at a plurality of moments closest to the previous moment, according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a positioning apparatus for an unmanned device according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating a positioning apparatus for an unmanned device according to another exemplary embodiment.
Detailed Description
When positioning an unmanned device, a camera integrated in the device is generally used to acquire images of the surrounding environment. A number of feature points exist in each image, and the position information of these feature points in the images can be used to position the device.
When the camera captures images, the time interval between two adjacent moments is usually short, the distance the unmanned device moves across two or more adjacent moments is also short, and the surrounding environment changes little. Therefore, images acquired by the camera at two or more adjacent moments share many of the same feature points. For example, consider three successive moments t0, t1, and t2. If a feature point A exists near the center of the image acquired at t0 and also exists in the images acquired at t1 and t2, feature point A can be regarded as continuously tracked, that is, tracked well. The purpose of tracking feature points is to determine their three-dimensional position information in the world coordinate system, such as three-dimensional coordinates, from their position information in the images acquired at different moments.
However, while the unmanned device operates, moving objects, such as people or vehicles, are generally present in its environment, so feature points are easily occluded for short periods while the camera is shooting. For example, suppose feature point A exists in the images acquired at t0 and t2 but is occluded at t1 and therefore absent from the image acquired at t1; tracking of feature point A is then lost at t1, and A is not continuously tracked. In the related art, when such a feature point A is detected again at t2, it is treated as a new feature point, and its position information in images acquired earlier (for example, the image acquired at t0) is not used when its three-dimensional position information is determined.
Thus, in the related-art scheme, when a feature point that was not continuously tracked is detected again at the current moment, its three-dimensional position information is determined only from the image acquired at the current moment. Less information is available, the three-dimensional position information cannot be determined accurately, and the positioning accuracy of the unmanned device suffers.
In view of this, the present disclosure provides a positioning method and apparatus for an unmanned device, a medium, and an unmanned device, to solve the problems in the related art: improving the accuracy of the target three-dimensional position information of a target feature point that has not been continuously tracked, and thereby ensuring the positioning accuracy of the unmanned device.
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a flow chart illustrating a positioning method for an unmanned device according to an exemplary embodiment; the method may be applied to a controller in the unmanned device. As shown in Fig. 1, the method may include S101 to S105.
In S101, a target feature point in an image acquired at the current time is acquired.
The target feature point may be any feature point in the image acquired at the current time.
In S102, it is determined whether the target feature point exists in the image acquired at the previous time.
For example, whether a feature point matching the target feature point exists in the image acquired at the previous moment may be determined by feature point matching. Feature point matching may follow the related art: matching may be performed with an optical flow tracking method, with description information of the feature points (for example, the inverse depth information of a feature point), or with distance information between feature points. The distance information may be, for example, a Hamming distance: in the image acquired at the previous moment, the feature point with the smallest Hamming distance to the target feature point is found, and if that distance is smaller than a preset distance threshold, the feature point can be regarded as matching the target feature point.
If one feature point matches another, the two can be regarded as the same feature point. If a feature point matching the target feature point exists in the image acquired at the previous moment, it is determined that the target feature point exists in that image; otherwise, it is determined that the target feature point does not exist in that image.
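As an informal illustration of the Hamming-distance matching just described (not part of the patent), the sketch below compares binary descriptors such as ORB's; the packing format, the `find_match` name, and the 40-bit threshold are assumptions.

```python
import numpy as np

def hamming_distance(d1: np.ndarray, d2: np.ndarray) -> int:
    # d1, d2: binary descriptors packed into uint8 arrays (e.g., 32 bytes for ORB).
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def find_match(target_desc, candidate_descs, max_dist=40):
    # Return the index of the candidate closest to the target descriptor in
    # Hamming distance, provided the distance is below the preset threshold.
    best_idx, best_dist = None, max_dist
    for idx, desc in enumerate(candidate_descs):
        dist = hamming_distance(target_desc, desc)
        if dist < best_dist:
            best_idx, best_dist = idx, dist
    return best_idx  # None means no feature point matches the target
```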
In S103, when it is determined that the target feature point does not exist in the image acquired at the previous time, it is determined whether or not the target feature point exists in the images acquired at a plurality of times closest to the previous time.
In the case that it is determined that the target feature point does not exist in the image acquired at the previous time, it may be indicated that the target feature point is a feature point that newly appears at the current time, or the target feature point is occluded at the previous time and is not continuously tracked. In this case, it may be determined whether the target feature point exists in the images respectively acquired at the time before the previous time, and if the target feature point exists, the target feature point is not a feature point newly appearing at the current time but is occluded at the previous time.
In one embodiment, whether the target feature point exists could be checked in the images acquired at all moments before the previous moment. Preferably, to speed up the determination and keep the positioning information real-time, the present disclosure searches only the images acquired at a plurality of moments closest to the previous moment. These moments may be preset, for example as a number of consecutive moments, and their count may also be preset. The images acquired at the plurality of moments closest to the previous moment may be the original images captured by the camera or key frame images, where a key frame image is an image at which a key action occurs in the motion or change of an object.
In S104, when it is determined that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment, the target three-dimensional position information of the target feature point is determined based on the position information of the target feature point in the image acquired at the current moment and its position information in each of a plurality of target images.
In the present disclosure, if the target feature point was occluded at the previous moment, that is, not continuously tracked, then when it is detected again at the current moment, its position information in the image acquired at the current moment and its position information in each of the plurality of target images may be integrated to determine its target three-dimensional position information. The position information of the target feature point in a target image may be its two-dimensional coordinates in that image, and a target image is an image that was acquired earlier than the previous moment and includes the target feature point.
It is known that, when determining the three-dimensional position information of a feature point, the more images whose position information of that feature point is used, the better errors can be suppressed and the more accurate the final three-dimensional position information. Therefore, in the present disclosure, rather than discarding the position information of the target feature point in earlier images and using only the image acquired at the current moment as in the related art, the position information in the current image and in the earlier target images is integrated, which improves the accuracy of the target three-dimensional position information.
For the specific manner of determining the three-dimensional position information of the target feature point from its position information in images, reference may be made to the related art; for example, triangulation may be used. Triangulation observes the same feature point from different positions and, using trigonometric relationships, determines its three-dimensional position information from its position information in the different images.
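As a sketch only: one common way to realize the triangulation named above is the linear (direct linear transform) formulation below. The interface, and the assumption that calibrated 3x4 projection matrices are available for each image, are not details fixed by the patent.

```python
import numpy as np

def triangulate(proj_mats, pixels):
    # proj_mats: 3x4 camera projection matrices, one per image observing the
    # feature point (at least two distinct views are required).
    # pixels: the (u, v) position information of the feature point per image.
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])  # u * (p3 . X) - (p1 . X) = 0
        rows.append(v * P[2] - P[1])  # v * (p3 . X) - (p2 . X) = 0
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]                        # null vector of the DLT system
    return X[:3] / X[3]               # homogeneous -> 3D point (world frame)
```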
In S105, the unmanned device is positioned based on the target three-dimensional position information.
For example, the positioning information of the unmanned device may be determined from the target three-dimensional position information of the target feature point by filtering (such as Kalman filtering or extended Kalman filtering), thereby positioning the unmanned device. The positioning information may include the position information and attitude information of the unmanned device.
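The patent only names Kalman-style filtering, so the following is a generic single predict/update cycle of a linear Kalman filter, offered purely as orientation; the state layout, the motion model F/Q, and the measurement model H/R are all assumptions that a real visual estimator would replace.

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    # x, P: device state (e.g., position/attitude) and its covariance.
    # z: a measurement derived from the target 3D position information.
    x_pred = F @ x                              # predict state
    P_pred = F @ P @ F.T + Q                    # predict covariance
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)       # correct with measurement
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```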
Through the above technical solution, if the target feature point does not exist in the image acquired at the previous moment but does exist in the images acquired at a plurality of moments closest to the previous moment, the target feature point was occluded at the previous moment, that is, not continuously tracked, and has reappeared in the image acquired at the current moment. When the target three-dimensional position information of such a target feature point is determined, both its position information in the image acquired at the current moment and its position information in each of the plurality of target images are used. In this way, rather than discarding the earlier observations of the target feature point as in the related art, its position information in the current image is integrated with its position information in the earlier target images, so the accuracy of the determined target three-dimensional position information can be improved, and with it the positioning accuracy of the unmanned device.
Fig. 2 is a flowchart illustrating a positioning method for an unmanned device according to another exemplary embodiment. As shown in Fig. 2, the method may include S201 to S211, where S104 above may be implemented by S208 to S210.
In S201 (S101), a target feature point in the image acquired at the current moment is acquired.
In S202 (S102), it is determined whether the target feature point exists in the image acquired at the previous moment. If not, S203 is executed; if so, S204 is executed.
In S203 (S103), it is determined whether the target feature point exists in the images acquired at a plurality of moments closest to the previous moment. If so, S204 is executed; if not, the process returns to S201 and the next moment is taken as the new current moment.
In S204, first three-dimensional position information of the target feature point is acquired.
The first three-dimensional position information may be the three-dimensional position information of the target feature point that was most recently determined before the current moment. Once determined, it may be stored in a memory or controller of the unmanned device.
In S205, it is determined whether the first three-dimensional position information has been acquired. If acquired, S206 is executed; if not, S208 is executed.
For example, the first three-dimensional position information of the target feature point may be determined by triangulation. If the target feature point previously appeared in only one image, or its positions in different images are identical or differ very little, triangulation cannot work well in either case and the first three-dimensional position information may not be determinable, in which case it cannot be acquired; and even if it was determined before the current moment, its accuracy may be low.
If the first three-dimensional position information of the target feature point is acquired, its three-dimensional position information was already determined before the current moment. To improve positioning accuracy, the precision of the first three-dimensional position information may be further judged, and only when the precision is high is it used to position the unmanned device.
In S206, it is determined whether the first three-dimensional position information satisfies a preset condition. If satisfied, S207 is executed; if not, S208 is executed.
The preset condition may include: the variance of the differences between the position information of the target feature point in each target image and the projection position information obtained by projecting the three-dimensional position information of the target feature point into the corresponding target image is smaller than a preset threshold.
Here, the position information of the target feature point in each target image is the position at which the target feature point is actually located in that target image.
In this step, for each target image, the first three-dimensional position information of the target feature point is projected into the target image to obtain the corresponding projection position information, that is, the two-dimensional position corresponding to the first three-dimensional position information in that image, and the difference between the actual position information of the target feature point in the image and the projection position information is calculated. After a difference has been determined for every target image, if the variance of these differences is smaller than the preset threshold, the projection of the first three-dimensional position information deviates little from the actual positions of the target feature point in the images, so the first three-dimensional position information has high precision. The preset threshold can be calibrated in advance.
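The preset condition can be sketched as follows, using the same projection-matrix inputs as the triangulation sketch above. Reading "difference" as the Euclidean pixel residual is our assumption; the patent does not fix the exact metric.

```python
import numpy as np

def reprojection_ok(point_3d, proj_mats, pixels, threshold):
    # Project the candidate 3D position into every target image and compare
    # with the position where the feature point was actually observed.
    diffs = []
    for P, (u, v) in zip(proj_mats, pixels):
        ph = P @ np.append(point_3d, 1.0)      # homogeneous projection
        pu, pv = ph[0] / ph[2], ph[1] / ph[2]  # projected pixel position
        diffs.append(np.hypot(u - pu, v - pv)) # residual magnitude
    return np.var(diffs) < threshold           # the preset condition
```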
In S207, the first three-dimensional position information is taken as the target three-dimensional position information.
If it is determined that the first three-dimensional position information satisfies the preset condition, its precision is high; that is, the previously determined three-dimensional position information of the target feature point is accurate, and the first three-dimensional position information can be used as the target three-dimensional position information for positioning the unmanned device.
In S208, second three-dimensional position information of the target feature point is determined according to the position information of the target feature point in the image acquired at the current moment and its position information in each of the plurality of target images.
When it is determined that the first three-dimensional position information does not satisfy the preset condition, its accuracy is low and it cannot be used directly as the target three-dimensional position information for positioning the unmanned device. In that case, the second three-dimensional position information of the target feature point is determined from the position information of the target feature point in the image acquired at the current moment and its position information in each of the plurality of target images, that is, by integrating the current observation with the earlier ones.
In one embodiment, to ensure the accuracy of the target three-dimensional position information of the target feature point, after the second three-dimensional position information is determined its precision may likewise be judged, and only when the precision is high is it determined as the target three-dimensional position information for positioning the unmanned device.
In S209, it is determined whether the second three-dimensional position information satisfies the preset condition. If satisfied, S210 is executed; if not, the process returns to S201.
The preset condition has been described above, and this step may be performed like the check on the first three-dimensional position information in S206: for each target image, the second three-dimensional position information of the target feature point is projected into that image to obtain the corresponding projection position information, and the difference between the actual position information of the target feature point in the image and the projection position information is calculated. After a difference has been determined for every target image, if the variance of these differences is smaller than the preset threshold, the second three-dimensional position information satisfies the preset condition, that is, its precision is high.
In S210, the second three-dimensional position information is determined as the target three-dimensional position information.
If the second three-dimensional position information satisfies the preset condition, its precision is high; that is, the three-dimensional position information obtained by integrating the position information of the target feature point across multiple images is accurate, and it can be used as the target three-dimensional position information for positioning the unmanned device.
In S211 (S105), the unmanned device is positioned based on the target three-dimensional position information.
With the above scheme, if the first three-dimensional position information of the target feature point determined before the current moment satisfies the preset condition, that is, its precision is high, it can be used directly as the target three-dimensional position information. If the first three-dimensional position information is not acquired, or does not satisfy the preset condition, the position information of the target feature point in multiple images is integrated to determine the second three-dimensional position information, which is taken as the target three-dimensional position information if it satisfies the preset condition. This guarantees the accuracy of the target three-dimensional position information of the target feature point and thus the positioning precision of the unmanned device.
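Tying the steps together, here is a hypothetical sketch of the S204-S210 decision flow, reusing the triangulate() and reprojection_ok() sketches above; every name and the exact argument layout are illustrative, not the patent's.

```python
def target_position(first_3d, cur_proj, cur_pixel, tgt_projs, tgt_pixels, thr):
    # S205/S206: use the previously determined position if it checks out.
    if first_3d is not None and reprojection_ok(first_3d, tgt_projs, tgt_pixels, thr):
        return first_3d                                        # S207
    # S208: integrate the current observation with the earlier target images.
    second_3d = triangulate(tgt_projs + [cur_proj], tgt_pixels + [cur_pixel])
    if reprojection_ok(second_3d, tgt_projs, tgt_pixels, thr):  # S209
        return second_3d                                       # S210
    return None  # no reliable position this frame; retry at the next moment
```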
In the present disclosure, in the above S102, an exemplary embodiment of determining whether the target feature point exists in the image acquired at the previous time may be as shown in fig. 3, including S301 to S303.
In S301, a target area in an image acquired at a previous time is determined according to position information of a target feature point in the image acquired at the current time.
As described above, when the camera captures images, the time interval between two adjacent moments is generally short and the target feature point moves only a small distance, so its position information in the two images acquired at adjacent moments should not differ greatly.
In the present disclosure, in order to improve positioning efficiency, when determining whether the target feature point exists in the image acquired at the previous moment, the target area in that image may be determined according to the position information of the target feature point in the image acquired at the current moment. The target area may be an area around that position, and its extent may be preset.
For example, if the position information of the target feature point in the image acquired at the current moment is the center of the image, the target area may be the region of the image acquired at the previous moment formed by taking that center position as the center and a preset distance as the radius.
In S302, it is determined whether there is a feature point matching the target feature point in the target region.
After the target area in the image acquired at the previous moment is determined in S301, feature point matching in this step only needs to compare the feature points inside the target area with the target feature point, rather than every feature point in the whole image, which speeds up matching and thus improves positioning efficiency.
In S303, when it is determined that a feature point matching the target feature point exists in the target region, it is determined that the target feature point exists in the image acquired at the previous time.
Feature point matching has been described in detail above and may be performed as in S102. Since two matching feature points can be regarded as the same feature point, if a feature point matching the target feature point exists in the target area, the target feature point can be considered to exist in the image acquired at the previous moment.
With this scheme, when determining whether the target feature point exists in the image acquired at the previous moment, the target area in that image is first determined from the position information of the target feature point in the image acquired at the current moment, and feature point matching is then restricted to that area. This narrows the matching range and improves positioning efficiency.
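A minimal sketch of the target-area matching of S301-S303, reusing the find_match() sketch from S102; the circular area, the 30-pixel radius, and the helper names are illustrative assumptions.

```python
import numpy as np

def match_in_target_area(target_pos, target_desc, prev_keypoints, prev_descs,
                         radius=30.0, max_dist=40):
    # Compare the target feature point only against feature points of the
    # previous image that lie inside the target area, not the whole image.
    u, v = target_pos
    in_area = [i for i, (pu, pv) in enumerate(prev_keypoints)
               if np.hypot(pu - u, pv - v) <= radius]
    local = find_match(target_desc, [prev_descs[i] for i in in_area], max_dist)
    return in_area[local] if local is not None else None
```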
Fig. 4 is a flowchart illustrating a method for determining whether a target feature point exists in images acquired at a plurality of moments closest to an immediately previous moment, according to an exemplary embodiment, and as shown in fig. 4, the method may include S401 and S402.
In S401, images acquired at a plurality of moments closest to the previous moment are traversed, and it is determined whether a feature point matching the target feature point exists in the currently traversed image.
In S402, when it is determined that a feature point matching the target feature point exists in the currently traversed image, it is determined that the target feature point exists in the images acquired at a plurality of times closest to the previous time, and the traversal is stopped.
The present disclosure does not limit the specific traversal order. For example, the traversal may be random, or may proceed from near to far in time, that is, it is first determined whether a feature point matching the target feature point exists in the image acquired at the moment closest to the previous moment. If such a feature point exists, it can be determined that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment; if not, the traversal continues.
If no feature point matching the target feature point exists in any of the images acquired at the plurality of moments closest to the previous moment, it can be determined that the target feature point does not exist in those images.
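And a sketch of the near-to-far traversal of S401-S402, again reusing find_match(); the list-of-descriptor-lists layout is an assumption.

```python
def search_recent_images(target_desc, recent_desc_lists, max_dist=40):
    # recent_desc_lists: descriptors of the images acquired at the moments
    # closest to the previous moment, ordered from nearest to farthest.
    for t, descs in enumerate(recent_desc_lists):
        idx = find_match(target_desc, descs, max_dist)
        if idx is not None:
            return t, idx   # target feature point found; stop the traversal
    return None             # not present in any of the recent images
```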
Based on the same inventive concept, the present disclosure also provides a positioning apparatus for an unmanned device. Fig. 5 is a block diagram illustrating such an apparatus according to an exemplary embodiment; as shown in Fig. 5, the apparatus 500 may include:
a target feature point obtaining module 501 configured to acquire a target feature point in the image acquired at the current moment; a first determining module 502 configured to determine whether the target feature point exists in the image acquired at the previous moment; a second determining module 503 configured to determine, when the first determining module 502 determines that the target feature point does not exist in the image acquired at the previous moment, whether the target feature point exists in the images acquired at a plurality of moments closest to the previous moment; a third determining module 504 configured to determine, when the second determining module 503 determines that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment, target three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of a plurality of target images, wherein a target image is an image that was acquired earlier than the previous moment and includes the target feature point; and a positioning module 505 configured to position the unmanned device according to the target three-dimensional position information.
Through the above technical solution, if the target feature point does not exist in the image acquired at the previous moment but does exist in the images acquired at a plurality of moments closest to the previous moment, the target feature point was occluded at the previous moment, that is, not continuously tracked, and has reappeared in the image acquired at the current moment. When the target three-dimensional position information of such a target feature point is determined, both its position information in the image acquired at the current moment and its position information in each of the plurality of target images are used. In this way, rather than discarding the earlier observations of the target feature point as in the related art, its position information in the current image is integrated with its position information in the earlier target images, so the accuracy of the determined target three-dimensional position information can be improved, and with it the positioning accuracy of the unmanned device.
Optionally, the apparatus 500 may further include: a first three-dimensional position information acquisition module configured to acquire first three-dimensional position information of the target feature point when the second determining module 503 determines that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment; and a fourth determining module configured to determine, when the first three-dimensional position information is acquired, whether the first three-dimensional position information satisfies a preset condition; wherein the third determining module is configured to determine, when the fourth determining module determines that the first three-dimensional position information does not satisfy the preset condition, the target three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of the plurality of target images.
Optionally, the apparatus 500 may further include: a fifth determining module configured to take the first three-dimensional position information as the target three-dimensional position information when the fourth determining module determines that the first three-dimensional position information satisfies the preset condition.
Optionally, the third determining module 504 may include: a first determining submodule configured to determine second three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and its position information in each of the plurality of target images; a second determining submodule configured to determine whether the second three-dimensional position information satisfies a preset condition; and a third determining submodule configured to determine the second three-dimensional position information as the target three-dimensional position information when the second determining submodule determines that it satisfies the preset condition.
Optionally, the first determining module 502 may include: a target area determination submodule configured to determine a target area in the image acquired at the previous time according to the position information of the target feature point in the image acquired at the current time; a fourth determination submodule configured to determine whether there is a feature point matching the target feature point in the target region; a fifth determination submodule configured to determine that the target feature point exists in the image acquired at the previous time, in a case where the fourth determination submodule determines that a feature point matching the target feature point exists in the target region.
Optionally, the second determining module 503 is configured to traverse the images acquired at a plurality of moments closest to the previous moment and determine whether a feature point matching the target feature point exists in the currently traversed image; and, when a feature point matching the target feature point exists in the currently traversed image, to determine that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment and stop the traversal.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram illustrating a positioning apparatus 600 for an unmanned device according to another exemplary embodiment. For example, the positioning apparatus 600 for an unmanned device may be provided as a controller. Referring to fig. 6, a positioning apparatus 600 for an unmanned device includes a processor 622, which may be one or more in number, and a memory 632 for storing computer programs executable by the processor 622. The computer program stored in memory 632 may include one or more modules that each correspond to a set of instructions. Further, the processor 622 may be configured to execute the computer program to perform the positioning method for the unmanned device described above.
Additionally, the positioning apparatus 600 for an unmanned device may also include a power component 626 that may be configured to perform power management of the positioning apparatus 600 for an unmanned device and a communication component 650 that may be configured to enable communication, e.g., wired or wireless communication, of the positioning apparatus 600 for an unmanned device. Further, the positioning apparatus 600 for an unmanned device may also include input/output (I/O) interfaces 658.
In another exemplary embodiment, a computer-readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the above-described positioning method for an unmanned device is also provided. For example, the computer readable storage medium may be the memory 632 described above that includes program instructions executable by the processor 622 of the positioning apparatus 600 for an unmanned device to perform the positioning method for an unmanned device described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned positioning method for an unmanned aerial device when executed by the programmable apparatus.
The present disclosure also provides an unmanned device comprising the positioning apparatus 600 for an unmanned device provided above. The unmanned device may be an unmanned aerial vehicle, a robot, an unmanned delivery vehicle, an unmanned ship, or the like.
The preferred embodiments of the present disclosure have been described in detail with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within its technical concept, and such simple modifications all fall within its protection scope.
It should further be noted that the specific features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not separately described.
In addition, the various embodiments of the present disclosure may be combined arbitrarily, and such combinations should likewise be regarded as disclosed herein, provided they do not depart from the spirit of the present disclosure.
Claims (11)
1. A positioning method for an unmanned device, the method comprising:
acquiring a target feature point in an image acquired at a current moment;
determining whether the target feature point exists in an image acquired at a previous moment;
under the condition that the target feature point does not exist in the image acquired at the previous moment, determining whether the target feature point exists in images acquired at a plurality of moments closest to the previous moment;
under the condition that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment, determining target three-dimensional position information of the target feature point according to position information of the target feature point in the image acquired at the current moment and position information of the target feature point in each of a plurality of target images, wherein the target images are images which are acquired earlier than the previous moment and which comprise the target feature point;
and positioning the unmanned device according to the target three-dimensional position information.
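By way of illustration only (the sketch below is editorial commentary, not part of the claims), the decision flow of claim 1 can be rendered in Python roughly as follows. Every identifier is a hypothetical stand-in (`Frame`, `match_in_frame`, `K_RECENT`, the descriptor-distance threshold), and linear DLT triangulation is used here as one standard way of computing a three-dimensional position, not necessarily the method of the embodiments:

```python
# Illustrative sketch of claim 1; all names and thresholds are assumptions.
from dataclasses import dataclass

import numpy as np

K_RECENT = 5  # how many frames "closest to the previous moment" to search


@dataclass
class Frame:
    keypoints: np.ndarray    # (N, 2) pixel positions of feature points
    descriptors: np.ndarray  # (N, D) feature descriptors
    projection: np.ndarray   # (3, 4) camera projection matrix at this moment


def match_in_frame(descriptor, frame, max_dist=0.7):
    """Return the index of the best-matching feature in `frame`, or None."""
    dists = np.linalg.norm(frame.descriptors - descriptor, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < max_dist else None


def triangulate(observations):
    """Linear (DLT) triangulation from [(pixel_xy, projection_matrix), ...]."""
    rows = []
    for (u, v), P in observations:
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean


def target_positions(current, history):
    """history is ordered oldest to newest; history[-1] is the previous frame."""
    points = []
    for kp, desc in zip(current.keypoints, current.descriptors):
        # Point already visible at the previous moment: ordinary tracking path.
        if match_in_frame(desc, history[-1]) is not None:
            continue
        # Otherwise search the frames closest to the previous moment for it.
        obs = [(tuple(kp), current.projection)]
        for frame in history[:-1][-K_RECENT:]:
            idx = match_in_frame(desc, frame)
            if idx is not None:  # this frame is a "target image"
                obs.append((tuple(frame.keypoints[idx]), frame.projection))
        if len(obs) >= 3:  # current observation plus a plurality of target images
            points.append(triangulate(obs))
    return points  # target 3D positions used to position the device
```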
2. The method according to claim 1, wherein, in a case where it is determined that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment, the method further comprises:
acquiring first three-dimensional position information of the target feature point;
under the condition that the first three-dimensional position information is obtained, determining whether the first three-dimensional position information meets a preset condition;
and under the condition that it is determined that the first three-dimensional position information does not meet the preset condition, determining the target three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and the position information of the target feature point in each of the plurality of target images.
3. The method according to claim 2, further comprising:
and taking the first three-dimensional position information as the target three-dimensional position information under the condition that the first three-dimensional position information is determined to meet the preset condition.
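Claims 2 and 3 together describe a reuse-or-recompute fallback: previously stored three-dimensional position information is kept when it passes the preset condition and recomputed otherwise. A minimal illustrative sketch, with the condition check and the re-triangulation supplied as callables (both names are hypothetical):

```python
# Illustrative composition of claims 2 and 3; all names are assumptions.
from typing import Callable, Optional

import numpy as np


def resolve_target_xyz(
    first_xyz: Optional[np.ndarray],
    meets_condition: Callable[[np.ndarray], bool],
    retriangulate: Callable[[], np.ndarray],
) -> np.ndarray:
    # Claim 3: keep the first 3D position information if it satisfies
    # the preset condition.
    if first_xyz is not None and meets_condition(first_xyz):
        return first_xyz
    # Claim 2: otherwise recompute it from the current image and the
    # target images.
    return retriangulate()
```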
4. The method according to claim 1, wherein the determining the target three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and the position information of the target feature point in each of the plurality of target images comprises:
determining second three-dimensional position information of the target feature point according to the position information of the target feature point in the image acquired at the current moment and the position information of the target feature point in each of the plurality of target images;
determining whether the second three-dimensional position information meets a preset condition;
and under the condition that the second three-dimensional position information is determined to meet the preset condition, determining the second three-dimensional position information as the target three-dimensional position information.
5. The method according to any one of claims 2 to 4, wherein the preset condition comprises:
a variance of differences between the position information of the target feature point in each target image and projection position information of the three-dimensional position information of the target feature point in the corresponding target image being smaller than a preset threshold.
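Read plainly, the preset condition is a reprojection-consistency test: project the candidate 3D point back into every target image and require the spread of the residuals to stay small. The sketch below is one plausible reading; the threshold value and all names are assumptions:

```python
# Illustrative check for the preset condition of claim 5.
import numpy as np


def reproject(P, xyz):
    """Project a 3D point through a (3, 4) projection matrix to pixel coordinates."""
    uvw = P @ np.append(xyz, 1.0)
    return uvw[:2] / uvw[2]


def meets_preset_condition(xyz, observations, threshold=2.0):
    """observations: [(observed_pixel_xy, projection_matrix), ...], one per target image."""
    residuals = [np.linalg.norm(np.asarray(uv) - reproject(P, xyz))
                 for uv, P in observations]
    # Variance of the observation-vs-reprojection differences must be small.
    return float(np.var(residuals)) < threshold
```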
6. The method according to claim 1, wherein the determining whether the target feature point exists in the image acquired at the previous moment comprises:
determining a target area in the image acquired at the previous moment according to the position information of the target feature point in the image acquired at the current moment;
determining whether a feature point matching the target feature point exists in the target area;
and under the condition that a feature point matching the target feature point exists in the target area, determining that the target feature point exists in the image acquired at the previous moment.
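Claim 6 confines the search for a match to a target area around where the point appears in the current image. A minimal sketch, under the assumption (not stated in the claim) that the target area is a fixed-radius pixel window, which presumes small inter-frame motion:

```python
# Illustrative windowed matching for claim 6; radius and threshold are assumptions.
import numpy as np


def exists_in_previous(kp, desc, prev_keypoints, prev_descriptors,
                       radius=30.0, max_dist=0.7):
    """kp: (2,) current pixel position; desc: (D,) descriptor of the target point."""
    # Target area: previous-frame features within `radius` pixels of kp.
    in_area = np.linalg.norm(prev_keypoints - kp, axis=1) < radius
    if not np.any(in_area):
        return False
    # Match descriptors only inside the target area.
    dists = np.linalg.norm(prev_descriptors[in_area] - desc, axis=1)
    return bool(np.min(dists) < max_dist)
```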
7. The method according to claim 1, wherein the determining whether the target feature point exists in the images acquired at the plurality of moments closest to the previous moment comprises:
traversing the images acquired at the plurality of moments closest to the previous moment, and determining whether a feature point matching the target feature point exists in a currently traversed image;
and under the condition that a feature point matching the target feature point exists in the currently traversed image, determining that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment, and stopping the traversing.
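Claim 7 is an early-exit search: the frames closest to the previous moment are traversed in turn, and the traversal stops at the first frame containing a match. A sketch, with the `matcher` callable assumed (for instance a descriptor-distance test like the one sketched after claim 1):

```python
# Illustrative early-stopping traversal for claim 7.
def exists_in_recent(desc, recent_frames, matcher):
    """recent_frames: images acquired at the moments closest to the previous one."""
    for frame in recent_frames:
        if matcher(desc, frame) is not None:
            return True  # match found: stop traversing immediately
    return False
```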
8. A positioning apparatus for an unmanned device, the apparatus comprising:
a target feature point acquiring module configured to acquire a target feature point in an image acquired at a current moment;
a first determining module configured to determine whether the target feature point exists in an image acquired at a previous moment;
a second determining module configured to determine, in a case where the first determining module determines that the target feature point does not exist in the image acquired at the previous moment, whether the target feature point exists in images acquired at a plurality of moments closest to the previous moment;
a third determining module configured to determine, in a case where the second determining module determines that the target feature point exists in the images acquired at the plurality of moments closest to the previous moment, target three-dimensional position information of the target feature point according to position information of the target feature point in the image acquired at the current moment and position information of the target feature point in each of a plurality of target images, wherein the target images are images which are acquired earlier than the previous moment and which comprise the target feature point;
and a positioning module configured to position the unmanned device according to the target three-dimensional position information.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. A positioning apparatus for an unmanned device, the apparatus comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
11. An unmanned device, characterized in that the unmanned device comprises the positioning apparatus according to claim 10.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010340694.9A (CN111583338B) | 2020-04-26 | 2020-04-26 | Positioning method and device for unmanned equipment, medium and unmanned equipment |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111583338A | 2020-08-25 |
| CN111583338B | 2023-04-07 |
Family

ID=72111686

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010340694.9A (CN111583338B, Active) | Positioning method and device for unmanned equipment, medium and unmanned equipment | 2020-04-26 | 2020-04-26 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN111583338B (en) |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112393723B (en) * | 2020-11-27 | 2023-10-24 | 北京三快在线科技有限公司 | Positioning method, positioning device, medium and unmanned equipment |
| CN113689485B (en) * | 2021-08-25 | 2022-06-07 | 北京三快在线科技有限公司 | Method and device for determining depth information of unmanned aerial vehicle, unmanned aerial vehicle and storage medium |
Patent Citations (11)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106488139A (en) * | 2016-12-27 | 2017-03-08 | 深圳市道通智能航空技术有限公司 | Image compensation method and device for images shot by an unmanned aerial vehicle, and unmanned aerial vehicle |
| WO2018120351A1 (en) * | 2016-12-28 | 2018-07-05 | 深圳市道通智能航空技术有限公司 | Method and device for positioning unmanned aerial vehicle |
| JP2018174461A (en) * | 2017-03-31 | 2018-11-08 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
| CN107798691A (en) * | 2017-08-30 | 2018-03-13 | 西北工业大学 | Vision-based real-time landmark detection and tracking method for autonomous landing of an unmanned aerial vehicle |
| CN109668551A (en) * | 2017-10-17 | 2019-04-23 | 杭州海康机器人技术有限公司 | Robot localization method, apparatus and computer-readable storage medium |
| WO2019093532A1 (en) * | 2017-11-07 | 2019-05-16 | 공간정보기술 주식회사 | Method and system for acquiring three-dimensional position coordinates without ground control points by using stereo camera drone |
| CN107907131A (en) * | 2017-11-10 | 2018-04-13 | 珊口(上海)智能科技有限公司 | Positioning system and method, and robot to which they are applicable |
| CN108898624A (en) * | 2018-06-12 | 2018-11-27 | 浙江大华技术股份有限公司 | Moving object tracking method and apparatus, electronic device and storage medium |
| CN109974693A (en) * | 2019-01-31 | 2019-07-05 | 中国科学院深圳先进技术研究院 | UAV positioning method, device, computer equipment and storage medium |
| CN110111364A (en) * | 2019-04-30 | 2019-08-09 | 腾讯科技(深圳)有限公司 | Motion detection method and device, electronic device and storage medium |
| CN110705575A (en) * | 2019-09-27 | 2020-01-17 | Oppo广东移动通信有限公司 | Positioning method and device, equipment, and storage medium |

Non-Patent Citations (1)

| Title |
|---|
| Wang Dan. Research on vision-based UAV detection and tracking systems. China Master's Theses Full-text Database, 2017, C031-287. * |
Similar Documents

| Publication | Title |
|---|---|
| CN111442722B | Positioning method, positioning device, storage medium and electronic equipment |
| CN107990899B | Positioning method and system based on SLAM |
| CN111209978B | Three-dimensional visual repositioning method and device, computing equipment and storage medium |
| US10789719B2 | Method and apparatus for detection of false alarm obstacle |
| CN112258567A | Visual positioning method and device for object grabbing point, storage medium and electronic equipment |
| CN108279670B | Method, apparatus and computer readable medium for adjusting point cloud data acquisition trajectory |
| CN112106111A | Calibration method, calibration equipment, movable platform and storage medium |
| CN112549034B | Robot task deployment method, system, equipment and storage medium |
| EP3852065A1 | Data processing method and apparatus |
| CN110361005B | Positioning method, positioning device, readable storage medium and electronic equipment |
| CN111123912B | Calibration method and device for travelling crane positioning coordinates |
| US20220277480A1 | Position estimation device, vehicle, position estimation method and position estimation program |
| CN110648363A | Camera posture determining method and device, storage medium and electronic equipment |
| KR101544021B1 | Apparatus and method for generating 3D map |
| WO2021195939A1 | Calibrating method for external parameters of binocular photographing device, movable platform and system |
| CN111932611B | Object position acquisition method and device |
| CN111583338B | Positioning method and device for unmanned equipment, medium and unmanned equipment |
| CN115366097A | Robot following method, device, robot and computer-readable storage medium |
| CN110243339A | Monocular camera localization method and device, readable storage medium and electronic terminal |
| CN110634183A | Map construction method and device and unmanned equipment |
| CN116958452A | Three-dimensional reconstruction method and system |
| WO2022147655A1 | Positioning method and apparatus, spatial information acquisition method and apparatus, and photographing device |
| Yang et al. | Simultaneous estimation of ego-motion and vehicle distance by using a monocular camera |
| CN112652018B | External parameter determining method, external parameter determining device and electronic equipment |
| CN112313707B | Tracking methods and movable platforms |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |