
CN114096931A - Control method and device for movable platform - Google Patents


Info

Publication number
CN114096931A
CN114096931A (application CN202080029224.9A)
Authority
CN
China
Prior art keywords
target object
distance
movable platform
image
shooting device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202080029224.9A
Other languages
Chinese (zh)
Other versions
CN114096931B (en)
Inventor
许中研
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN114096931A
Application granted
Publication of CN114096931B
Legal status: Active (anticipated expiration not listed)

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/12: Target-seeking control

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Studio Devices (AREA)

Abstract



A control method and device for a movable platform. The method includes: determining whether a target object is in a stationary state (S301); when the target object is stationary, running a multi-view geometry algorithm on multiple frames of images collected by a photographing device to determine a first position of the target object, and controlling the movable platform to perform a work task on the target object according to the first position (S302); and when the target object is in a moving state, determining a second position of the target object according to the size of the image area occupied by the target object in an image collected by the photographing device, and controlling the movable platform to perform the work task on the target object according to the second position (S303). Determining the position in a manner suited to each state improves the accuracy of the determined position, and controlling the movable platform according to that position yields better execution of the work task.

Figure 202080029224

Description

Control method and device for movable platform

Technical Field
Embodiments of the present application relate to the technical field of movable platforms, and in particular to a control method and device for a movable platform.
Background
After an unmanned aerial vehicle selects a target object, it can fly with reference to that target object; for example, it can move along a fixed trajectory relative to the target object. Common scenarios include automatic surround shooting, automatic inspection, and autonomous monitoring.
Taking automatic surround shooting as an example, the unmanned aerial vehicle needs to observe the position of the target object and fly around the target object according to that position. The position of the target object can be observed as follows: the unmanned aerial vehicle runs a multi-view geometry algorithm on multiple frames of images acquired by its photographing device to determine the position of the target object. However, this method is only suitable when the target object is stationary; if the target object is moving, the position determined in this way is inaccurate.
Disclosure of Invention
Embodiments of the present application provide a control method and device for a movable platform, used to accurately determine the position of a target object.
In a first aspect, an embodiment of the present application provides a method for controlling a movable platform, where the movable platform includes a camera, and the method includes:
determining whether a target object is in a stationary state;
when the target object is in a stationary state, running a multi-view geometry algorithm on a plurality of frames of images collected by the photographing device to determine a first position of the target object, and controlling the movable platform to perform a work task on the target object according to the first position; and
when the target object is in a moving state, determining a second position of the target object according to the size of an image area of the target object in an image collected by the photographing device, and controlling the movable platform to perform the work task on the target object according to the second position.
In a second aspect, an embodiment of the present application provides a control device for a movable platform, where the movable platform includes a camera, and the control device for the movable platform includes: a memory and at least one processor;
the memory for storing program code;
the at least one processor configured to execute the program code to:
determining whether a target object is in a stationary state;
when the target object is in a stationary state, running a multi-view geometry algorithm on a plurality of frames of images collected by the photographing device to determine a first position of the target object, and controlling the movable platform to perform a work task on the target object according to the first position; and
when the target object is in a moving state, determining a second position of the target object according to the size of an image area of the target object in an image collected by the photographing device, and controlling the movable platform to perform the work task on the target object according to the second position.
In a third aspect, embodiments of the present application provide a movable platform, which includes a photographing device and the control device of a movable platform according to the second aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed, the computer program implements the method for controlling a movable platform according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a program product including a computer program stored in a readable storage medium. At least one processor can read the computer program from the readable storage medium and execute it to implement the method for controlling a movable platform according to the first aspect.
To sum up, according to the control method and device for a movable platform provided by the embodiments of the present application, a first position of the target object is determined by running a multi-view geometry algorithm on multiple frames of images acquired by the photographing device when the target object is stationary, and a second position of the target object is determined from the size of the target object's image area in the acquired image when the target object is moving. Determining the position in a different manner for each state of the target object improves the accuracy of the determined position. Controlling the movable platform to perform the work task on the target object according to that position then yields a better execution result.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the following drawings show some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic architecture diagram of an unmanned flight system according to an embodiment of the present application;
Fig. 2 is a schematic view of an application scenario provided in an embodiment of the present application;
Fig. 3 is a flowchart of a method for controlling a movable platform according to an embodiment of the present application;
Fig. 4 is a schematic diagram of running a multi-view geometry algorithm on multiple frames of images acquired by a photographing device to determine the position of a stationary target object according to an embodiment of the present application;
Fig. 5 is a schematic diagram of running a multi-view geometry algorithm on multiple frames of images acquired by a photographing device to determine the position of a moving target object according to an embodiment of the present application;
Fig. 6 is a schematic diagram of determining that a target object is in a stationary state according to an embodiment of the present application;
Fig. 7 is a schematic diagram of determining that a target object is in a moving state according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a control device of a movable platform according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a movable platform according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a movable platform according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Embodiments of the present application provide a control method and device for a movable platform. The movable platform may be an unmanned aerial vehicle, an unmanned boat, a robot, a handheld gimbal, or the like. The following description uses a drone as an example of the movable platform. It will be apparent to those skilled in the art that other types of drones may be used without limitation, and the embodiments of the present application may be applied to various types of drones. For example, the drone may be small or large. In certain embodiments, the drone may be a rotorcraft, for example a multi-rotor drone propelled through the air by a plurality of propulsion devices; the embodiments of the present application are not limited thereto.
Fig. 1 is a schematic architecture diagram of an unmanned flight system according to an embodiment of the present application. The present embodiment is described by taking a rotor unmanned aerial vehicle as an example.
The unmanned flight system 100 can include a drone 110, a display device 130, and a control terminal 140. The drone 110 may include a power system 150, a flight control system 160, a frame, and a pan/tilt head 120 carried on the frame. The drone 110 may be in wireless communication with the control terminal 140 and the display device 130. The drone 110 further includes a battery (not shown) that provides electrical energy to the power system 150. The drone 110 may be an agricultural drone or an industrial-application drone that needs to operate in repeated cycles; accordingly, the battery must also support cyclic operation.
The airframe may include a fuselage and a foot rest (also referred to as a landing gear). The fuselage may include a central frame and one or more arms connected to the central frame, the one or more arms extending radially from the central frame. The foot rest is connected to the fuselage and supports the drone 110 when it lands.
The power system 150 may include one or more electronic speed controllers (ESCs) 151, one or more propellers 153, and one or more motors 152 corresponding to the one or more propellers 153. Each motor 152 is connected between an electronic speed controller 151 and a propeller 153, with the motors 152 and propellers 153 disposed on the arms of the drone 110. The electronic speed controller 151 receives a drive signal generated by the flight control system 160 and provides a drive current to the motor 152 based on that signal to control the motor's rotational speed. The motor 152 drives the propeller in rotation, providing power for the flight of the drone 110 and enabling one or more degrees of freedom of motion. In certain embodiments, the drone 110 may rotate about one or more axes of rotation, for example a roll axis, a yaw axis, and a pitch axis. It should be understood that the motor 152 may be a DC motor or an AC motor, and may be brushless or brushed.
Flight control system 160 may include a flight controller 161 and a sensing system 162. The sensing system 162 measures attitude information of the drone, i.e., position and state information of the drone 110 in space, such as three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration, and three-dimensional angular velocity. The sensing system 162 may include, for example, at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (IMU), a vision sensor, a global navigation satellite system, and a barometer. For example, the global navigation satellite system may be the Global Positioning System (GPS). The flight controller 161 controls the flight of the drone 110, for example according to the attitude information measured by the sensing system 162. It should be understood that the flight controller 161 may control the drone 110 according to preprogrammed instructions, or in response to one or more remote control signals from the control terminal 140.
The pan/tilt head 120 may include a motor 122. The pan/tilt head carries a load, which may be, for example, the photographing device 123. The flight controller 161 may control the movement of the pan/tilt head 120 via the motor 122. Optionally, as another embodiment, the pan/tilt head 120 may further include a controller for controlling its movement by controlling the motor 122. It should be understood that the pan/tilt head 120 may be separate from the drone 110 or part of it, and that the motor 122 may be a DC motor or an AC motor, brushless or brushed. It should also be understood that the pan/tilt head may be located at the top of the drone as well as at its bottom.
The photographing device 123 may be, for example, a device for capturing images, such as a camera or a video camera. The photographing device 123 may communicate with the flight controller and take photographs under its control. The photographing device 123 of this embodiment at least includes a photosensitive element, such as a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor. It is understood that the photographing device 123 may also be fixed directly to the drone 110, in which case the pan/tilt head 120 may be omitted.
The display device 130 is located at the ground end of the unmanned aerial vehicle system 100, can communicate with the unmanned aerial vehicle 110 in a wireless manner, and can be used for displaying attitude information of the unmanned aerial vehicle 110. In addition, an image photographed by the photographing device 123 may also be displayed on the display apparatus 130. It should be understood that the display device 130 may be a stand-alone device or may be integrated into the control terminal 140.
The control terminal 140 is located at the ground end of the unmanned aerial vehicle system 100, and can communicate with the unmanned aerial vehicle 110 in a wireless manner, so as to remotely control the unmanned aerial vehicle 110.
It should be understood that the above-mentioned nomenclature for the components of the unmanned flight system is for identification purposes only, and should not be construed as limiting the embodiments of the present application.
Fig. 2 is a schematic view of an application scenario provided in an embodiment of the present application. As shown in fig. 2, the figure shows an unmanned aerial vehicle 201 and a control terminal 202 of the unmanned aerial vehicle. The control terminal 202 of the drone 201 may be one or more of a remote controller, a smartphone, a desktop computer, a laptop computer, and a wearable device (e.g., a watch or wristband). The embodiment of the present application takes as an example the control terminal 202 consisting of the remote controller 2021 and the terminal device 2022. The terminal device 2022 is, for example, a smartphone, a wearable device, or a tablet computer, but the embodiment of the present application is not limited thereto.
The drone 201 includes a fuselage 2011 and a gimbal 2012 connected to the fuselage 2011; the gimbal 2012 carries a load 2013. The load 2013 includes a photographing device. The drone transmits the images captured by the photographing device to the control terminal 202, and the control terminal 202 displays them. In a scenario where the drone 201 tracks a target object, or flies around one, the drone 201 needs to determine the location of the target object. The present application therefore proposes observing the position of the target object differently depending on whether the target object is stationary or moving: when the target object is stationary, its position is determined with a multi-view geometry algorithm; when it is moving, its position is determined from the size of its image area in the acquired image. The position of the target object can thus be determined accurately in both states.
Fig. 3 is a flowchart of a method for controlling a movable platform according to an embodiment of the present application. The method of this embodiment may be applied to a control device of the movable platform. The control device may be arranged on the movable platform; alternatively, part of the control device may be arranged on the movable platform and the rest on the control terminal of the movable platform. This embodiment takes as an example a control device arranged on the movable platform. As shown in fig. 3, the method of this embodiment may include:
S301, determining whether the target object is in a stationary state.
In this embodiment, the movable platform includes a camera for capturing the target object so that the target object is located in the image captured by the camera. The movable platform may determine whether a target object photographed by the photographing device is in a stationary state. If the target object is determined to be in a static state, S302 is performed. If it is determined that the target object is not in a stationary state, for example, the target object is in a moving state, S303 is executed.
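The section above does not spell out the stationarity test (it is illustrated in figs. 6 and 7, which are not reproduced here). One plausible sketch, assuming the platform keeps a short history of estimated target positions, is to treat the target as stationary when those positions stay within a small radius; the function name and threshold below are illustrative assumptions, not the patent's actual criterion.

```python
import numpy as np

def is_stationary(position_history, threshold_m=0.5):
    # Hypothetical criterion: the target counts as stationary if its recent
    # position estimates all stay within threshold_m of their mean position.
    pts = np.asarray(position_history, dtype=float)
    spread = np.linalg.norm(pts - pts.mean(axis=0), axis=1).max()
    return spread < threshold_m
```

The threshold would in practice be tuned against sensor noise, since even a truly stationary target produces slightly scattered estimates.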
S302, when the target object is in a stationary state, running a multi-view geometry algorithm on multiple frames of images acquired by the photographing device to determine a first position of the target object, and controlling the movable platform to perform a work task on the target object according to the first position.
In this embodiment, when the target object is stationary, the photographing device can capture images containing the target object. The movable platform runs a multi-view geometry algorithm on the multiple frames acquired by the photographing device to determine the position of the target object; this position is referred to as the first position. The first position may be used to direct the movable platform to perform the corresponding work task, and the movable platform is then controlled accordingly.
One implementation of determining the position of the target object by running multi-view geometry on multiple frames of images acquired by the photographing device is as follows. When the target object is stationary, one frame acquired by the photographing device is taken as the 1st frame, following the epipolar geometry principle, as shown in A in fig. 4. From a single frame, the photographing device can determine that the target object lies on a ray connecting the photographing device and the target object; that is, the direction of the target object can be determined, but its distance from the photographing device cannot be determined from the 1st frame alone. As shown in B in fig. 4, the photographing device then acquires the next frame, called the 2nd frame, from which another ray representing the direction of the target object is determined. The intersection of the two rays gives the position of the target object as determined from the 1st and 2nd frames. The photographing device continues to capture the 3rd frame; the target object is projected into the 1st, 2nd, and 3rd frames according to its currently determined position to obtain 3 projected positions, and the position is optimized with respect to them (for example, by minimizing the sum of the corresponding re-projection errors) to obtain an optimized position of the target object.
The photographing device then captures the 4th frame; the target object is projected into the 1st through 4th frames according to the optimized position to obtain 4 projected positions, and the position is optimized again (for example, by minimizing the sum of the 4 re-projection errors). By analogy, the position estimate is repeatedly refined, so that an accurate position of a stationary target object can be obtained from the multi-frame images. Optionally, this method may be referred to as triangulation.
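The ray-intersection step above can be sketched as a least-squares intersection of viewing rays. This is a generic, noise-free triangulation sketch, not the patent's implementation (which refines the estimate by minimizing re-projection error in the image plane); the function name is illustrative.

```python
import numpy as np

def triangulate(cam_positions, ray_dirs):
    # Least-squares point closest to all viewing rays. Each ray is
    # cam_positions[i] + t * ray_dirs[i]; for a stationary target the rays
    # (nearly) meet at the target's 3D position. Needs >= 2 non-parallel rays.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(cam_positions, ray_dirs):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        # Projector onto the plane normal to the ray: measures the
        # perpendicular offset of a candidate point from this ray.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ np.asarray(c, dtype=float)
    return np.linalg.solve(A, b)
```

Feeding in the 3rd, 4th, ... frames as further (camera position, ray) pairs plays the same role as the re-projection refinement described above: each extra observation adds one more constraint to the least-squares solution.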
S303, when the target object is in a moving state, determining a second position of the target object according to the size of an image area of the target object in an image acquired by the photographing device, and controlling the movable platform to perform the work task on the target object according to the second position.
When the target object is moving, the multi-view geometry approach breaks down: as shown in fig. 5, two rays are determined from two frames acquired by the photographing device, but because the target object moved between frames, their intersection no longer lies at the target object's position. The method used in the stationary case therefore cannot recover the position of a moving target object.
In this embodiment, when the target object is moving, the photographing device can still capture images containing the target object, which occupies part of the image area in each captured image. The movable platform determines the position of the target object according to the size of that image area; this position is referred to as the second position. The second position may be used to direct the movable platform to perform the corresponding work task, and the movable platform is then controlled accordingly.
One implementation of determining the position of the target object according to the size of its image area in the acquired image is as follows: determine a ray representing the direction of the target object from a single frame captured by the photographing device; determine the distance between the target object and the photographing device from the size of the target object's image area together with prior information (such as a prior size; different categories of target object, e.g., a vehicle versus a person, have different prior sizes); and determine the position of the target object from that distance and direction.
In this embodiment, the first position of the target object is determined by running a multi-view geometry algorithm on multiple frames of images acquired by the photographing device when the target object is stationary, and the second position is determined from the size of the target object's image area in the acquired image when the target object is moving. Determining the position in a different manner for each state of the target object improves the accuracy of the determined position, and controlling the movable platform to perform the work task according to that position yields a better execution result.
In some embodiments, the work task is at least one of a surround flight task and a tracking task. If the work task is a surround flight task, the movable platform is controlled to fly around the target object according to the first position or the second position. The trajectory of the movable platform around the target object may be an involute, a spiral, or a constant-radius circular arc (or full circle). This embodiment can accurately orbit the target object and quickly follow it.
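As an illustration of the constant-radius case, a surround path can be discretized into waypoints on a circle around the target's horizontal position. This is a sketch of the geometry only, not the patent's trajectory planner; the waypoint count is an arbitrary choice.

```python
import math

def circle_waypoints(center, radius, n=12):
    # n waypoints evenly spaced on a constant-radius circle around the
    # target position center = (x, y, z), keeping the target's altitude.
    cx, cy, cz = center
    return [(cx + radius * math.cos(2.0 * math.pi * k / n),
             cy + radius * math.sin(2.0 * math.pi * k / n),
             cz)
            for k in range(n)]
```

For a tracking task, the same idea degenerates to regenerating the waypoints whenever the target position estimate is updated.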
In some embodiments, when the target object is stationary, the first position is not updated once it has been obtained. That is, the multi-view geometry algorithm is no longer run on newly collected frames to update the first position; even if the photographing device continues to collect images, the first position is not re-optimized from them. This saves processing resources, avoids letting noise in any single frame affect the shooting result, and lets the movable platform perform the work task against one fixed first position while the target object is stationary, which improves execution. For example, the movable platform orbits a fixed target center, which keeps the surround path smooth.
In some embodiments, when the target object is moving, the second position is updated after it is obtained. This improves the accuracy of the determined position while the target object moves, and hence the execution of the work task.
In some embodiments, one possible implementation of S302 is: running a multi-view geometry algorithm on the multiple frames of images acquired by the photographing device to determine a first distance between the target object and the photographing device, and determining the first position of the target object from the first distance.
In this embodiment, a distance between the target object and the camera may be determined by running a multi-view geometric algorithm on the basis of the multi-frame image acquired by the camera, and the distance is referred to as a first distance.
Optionally, the first distance is taken as the first position of the target object, the position being represented by a distance. Alternatively, since the target object occupies an image area in the image captured by the photographing device, the direction of the target object may be determined from that image area, and the first position determined from the direction together with the first distance.
In addition, when the target object is in a static state, the distance between the target object and the shooting device may also be determined according to the size of the image area that the target object occupies in the image acquired by the shooting device; this distance is referred to as the second distance. The smaller the image area of the target object in the acquired image, the farther the target object is from the shooting device; conversely, the larger the image area, the closer the target object. Because this determination relies on a prior size of the target object, and the prior size may deviate from the actual size of the target object, the first distance is closer to the actual distance between the target object and the shooting device than the second distance. A distance deviation therefore exists between the first distance and the second distance obtained in these different ways.
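The size-to-distance relation described above follows the pinhole camera model. A minimal sketch, in which the focal length and the prior size of the target are hypothetical values:

```python
def size_based_distance(focal_px, prior_size_m, image_size_px):
    """Pinhole-model estimate: the target's prior (assumed) physical size
    projects to an image size inversely proportional to its distance."""
    return focal_px * prior_size_m / image_size_px

# With a 900 px focal length, a car of ~1.8 m prior width spanning 90 px
# is estimated at 18 m; doubling the image size halves the distance.
far = size_based_distance(900.0, 1.8, 90.0)
near = size_based_distance(900.0, 1.8, 180.0)
```

The estimate inherits any error in the prior size, which is exactly why the disclosure introduces the distance correction deviation below.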
Therefore, the movable platform of the present embodiment may further acquire a distance correction deviation for characterizing a distance deviation existing between the first distance and the second distance.
Accordingly, one possible implementation manner of the foregoing S303 is: and determining a third distance between the target object and the shooting device according to the size of an image area of the target object in the image acquired by the shooting device, and determining a second position of the target object according to the third distance and the distance correction deviation.
In this embodiment, when the target object is in a motion state, the distance between the target object and the shooting device is determined according to the size of the image area of the target object in the image acquired by the shooting device; this distance is referred to as the third distance. As described above, determining the third distance relies on the prior size of the target object, so there is an error between the third distance and the actual distance between the target object and the shooting device. Because the distance determined by running the multi-view geometric algorithm on the multi-frame images is closer to the actual distance, the third distance is converted, according to the distance correction deviation, into a corrected distance, which is equivalent to the distance the multi-view geometric algorithm would have determined from the multi-frame images while the target object is in motion. The corrected distance is closer to the actual distance between the target object and the shooting device, and the second position of the target object is then determined according to the corrected distance. Since this embodiment corrects the deviation of the third distance, the obtained second position is closer to the actual position of the target object, which improves the accuracy of the position determined while the target object is in a motion state.
Optionally, the corrected distance is determined as the second position of the target object, in which case the position is represented by a distance. Alternatively, since the target object is captured by the shooting device, it occupies an image area in the acquired image; the direction of the target object may be determined according to that image area, and the second position may be determined according to the direction of the target object together with the corrected distance.
Several possible implementations of obtaining the distance correction offset are exemplified below.
In one possible implementation, when the target object is in a static state, a second distance between the target object and the shooting device is determined according to the size of the image area of the target object in the image acquired by the shooting device, and the distance correction deviation is determined according to the first distance and the second distance. When the target object is static, the first distance determined by running the multi-view geometric algorithm on the multi-frame images collected by the shooting device is close to the actual distance between the target object and the shooting device. To obtain the deviation of the size-based distance relative to the algorithm-based distance, the second distance is determined under the same static state, also according to the size of the image area of the target object in the image acquired by the shooting device. The distance correction deviation is then determined according to the two distances obtained in different ways while the target object is in the same state, that is, according to the first distance and the second distance.
Optionally, the distance correction deviation is obtained by subtracting the second distance from the first distance. Accordingly, the corrected distance is, for example, equal to the sum of the third distance and the distance correction deviation.
Optionally, the distance correction deviation is obtained as the ratio of the first distance to the second distance. Accordingly, the corrected distance is, for example, equal to the product of the third distance and the distance correction deviation.
Optionally, the distance correction deviation is obtained as the ratio of the difference between the first distance and the second distance to the second distance. Accordingly, the corrected distance is, for example, equal to the third distance plus the product of the third distance and the distance correction deviation.
The expression of the distance correction deviation is not limited to these forms.
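The three optional forms above can be sketched as follows, with hypothetical distance values; each form of the deviation must be paired with its matching correction rule.

```python
def corrected_distance(d1, d2, d3, form="difference"):
    """Correct the size-based third distance d3 using a deviation derived,
    while the target was static, from the triangulated first distance d1
    and the size-based second distance d2."""
    if form == "difference":   # deviation = d1 - d2, applied additively
        return d3 + (d1 - d2)
    if form == "ratio":        # deviation = d1 / d2, applied multiplicatively
        return d3 * (d1 / d2)
    if form == "relative":     # deviation = (d1 - d2) / d2, applied as d3 + d3*dev
        return d3 + d3 * (d1 - d2) / d2
    raise ValueError(form)

# With d1 = 20 m and d2 = 18 m observed while static, a later d3 = 9 m
# corrects to 11 m (difference) or 10 m (ratio and relative forms).
```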
Optionally, when the target object is in a static state, multiple frames of images acquired by the shooting device are obtained, a second distance between the target object and the shooting device is determined according to the size of the image area of the target object in each frame of image, and the distance correction deviation is determined according to the first distance and the plurality of second distances. For example, a mean value of the plurality of second distances is determined, and the distance correction deviation is then determined according to the first distance and the mean value. Alternatively, a distance correction deviation is determined from the first distance and each second distance, and the average of the resulting plurality of distance correction deviations is determined as the final distance correction deviation.
In this embodiment, the distance correction deviation is calculated while the target object is in a static state, so it accurately reflects the deviation between the distances to the shooting device obtained in different ways under the same state of the target object. When the target object is later in a motion state, a more accurate position of the target object is obtained by applying this distance correction deviation.
In another possible implementation, one or more reference distance correction deviations are pre-stored in the local storage device of the movable platform. The movable platform of this embodiment may obtain at least one reference distance correction deviation from its local storage device and then determine the distance correction deviation from what is obtained. If one reference distance correction deviation is retrieved, it is determined as the distance correction deviation. If a plurality of reference distance correction deviations are retrieved, the average value, maximum value, minimum value, or median of them is determined as the distance correction deviation.
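The reduction of one or several stored reference deviations into a single distance correction deviation can be sketched as follows; the statistic names mirror the options listed above.

```python
from statistics import mean, median

def combine_reference_deviations(refs, how="mean"):
    """One stored deviation is used as-is; several are reduced by the
    chosen statistic (average, maximum, minimum, or median)."""
    if len(refs) == 1:
        return refs[0]
    return {"mean": mean, "max": max, "min": min, "median": median}[how](refs)

single = combine_reference_deviations([2.5])             # used directly
med = combine_reference_deviations([1.0, 2.0, 6.0], "median")
avg = combine_reference_deviations([1.0, 2.0, 6.0], "mean")
```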
Since the distance correction deviation of the embodiment is stored in the local storage device of the movable platform in advance, the distance correction deviation can be acquired from the local storage device, and the distance correction deviation does not need to be calculated and acquired when the target object is in a static state, so that the processing resource is saved.
Several implementations are illustrated below for obtaining at least one reference range correction bias from a local storage of a movable platform and determining a range correction bias based on the at least one reference range correction bias.
In one possible implementation, the local storage device of the movable platform stores, in advance, reference distance correction deviations corresponding to a plurality of object types, such as cars and people. Each object type may correspond to one reference distance correction deviation or, alternatively, to a plurality of them, and the reference distance correction deviations of different object types may differ. In this embodiment, the movable platform determines the object type of the target object according to the image acquired by the shooting device, acquires from its local storage device the reference distance correction deviation matching that object type, and determines it as the distance correction deviation. Taking a car as the object type of the target object, the reference distance correction deviation corresponding to cars is obtained from the deviations pre-stored for cars, people, and so on. If one reference distance correction deviation corresponding to cars is acquired, it is determined as the distance correction deviation; if a plurality are acquired, the average value, maximum value, minimum value, or median of them is determined as the distance correction deviation.
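A minimal sketch of the per-object-type lookup; the table contents are hypothetical, and averaging stands in for whichever statistic is chosen when several deviations are stored.

```python
from statistics import mean

# Hypothetical pre-stored table: object type -> reference deviation(s) in metres.
REFERENCE_DEVIATIONS_BY_TYPE = {
    "car": [1.5, 2.5],
    "person": [0.4],
}

def deviation_for_object_type(object_type):
    """Look up the deviations matching the recognised object type; a single
    stored value is used directly, several are averaged here."""
    refs = REFERENCE_DEVIATIONS_BY_TYPE[object_type]
    return refs[0] if len(refs) == 1 else mean(refs)
```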
In another possible implementation manner, the local storage device of the movable platform stores reference distance correction deviations corresponding to a plurality of reference distances in advance. Each reference distance corresponds to one reference distance correction offset, alternatively it is also possible to correspond to a plurality of reference distance correction offsets per reference distance. The reference distance correction offset may be different for different reference distances.
Optionally, when the target object is in a static state, the movable platform determines the first distance between the target object and the shooting device through the above schemes. And then acquiring a reference distance correction deviation corresponding to at least one reference distance from a local storage device of the movable platform according to the first distance, and determining the distance correction deviation according to the reference distance correction deviation. If the first distance is included in the plurality of reference distances, one or more reference distance correction offsets corresponding to the first distance, which are stored in advance, are obtained from a local storage device of the movable platform. If the first distance is not included in the plurality of reference distances, one or more reference distance correction offsets corresponding to the reference distance closest to the first distance, which are stored in advance, are acquired from the local storage device of the movable platform in the present embodiment. Alternatively, if the first distance is not included in the plurality of reference distances, one or more reference distance correction offsets, which are stored in advance and correspond to two reference distances adjacent to the first distance, respectively, are acquired from the local storage device of the movable platform in the present embodiment.
If one reference distance correction deviation is acquired from the local storage device of the movable platform according to the first distance, it is determined as the distance correction deviation. If a plurality of reference distance correction deviations are acquired according to the first distance, the average value, maximum value, minimum value, or median of them is determined as the distance correction deviation.
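The reference-distance lookup described above (exact hit, otherwise the nearest stored reference distance) can be sketched as follows; the table values are hypothetical, and the two-adjacent-neighbour variant would instead take the entries bracketing the query.

```python
def deviation_for_distance(table, query):
    """Exact hit returns the stored deviation; otherwise the entry for the
    reference distance closest to `query` is used."""
    if query in table:
        return table[query]
    nearest = min(table, key=lambda ref: abs(ref - query))
    return table[nearest]

# Hypothetical table mapping reference distance (m) -> deviation (m).
TABLE = {10.0: 1.0, 20.0: 1.8, 30.0: 2.5}
```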
Optionally, in this embodiment, when the target object is in a stationary state, the movable platform determines the second distance between the target object and the shooting device according to the size of an image area of the target object in the image acquired by the shooting device. And acquiring a reference distance correction deviation corresponding to at least one reference distance from the local storage device of the movable platform according to the second distance, and determining the distance correction deviation according to the reference distance correction deviation. For a specific implementation process, reference may be made to the above-mentioned related description about the first distance determination distance correction deviation, and details are not described herein again.
Optionally, in this embodiment, when the target object is in a motion state, the movable platform determines the third distance between the target object and the shooting device in the above manner. And then acquiring a reference distance correction deviation corresponding to at least one reference distance from the local storage device of the movable platform according to the third distance, and determining the distance correction deviation according to the reference distance correction deviation. For a specific implementation process, reference may be made to the above-mentioned related description about the first distance determination distance correction deviation, and details are not described herein again.
In any of the above embodiments, one possible implementation of determining whether the target object is in a static state is as follows: multiple frames of images containing the target object are acquired by the shooting device, and whether the target object is in a static state is determined according to these frames. In this way, whether the target object is static or moving can be detected quickly and stably, while the influence of noise in any single frame on the result is avoided.
The following illustrates how to determine whether the target object is in a static state according to the multi-frame images.
In one possible implementation, a multi-view geometric algorithm is run on the multi-frame images to determine fourth positions of the target object at multiple moments; it is determined whether the plurality of fourth positions satisfy a preset position convergence condition; if so, the target object is determined to be in a static state, and otherwise in a motion state.
Optionally, each frame of image may correspond to one moment, and one possible implementation of determining the fourth positions of the target object at multiple moments by running the multi-view geometric algorithm on multiple frames of images is as follows. The shooting device sequentially collects multiple frames of images. After the shooting device collects the 2nd frame at the 2nd moment, the fourth position of the target object at the 2nd moment can be determined according to the 1st and 2nd frames. After the 3rd frame is collected at the 3rd moment, the fourth position at the 3rd moment can be determined according to the 1st, 2nd, and 3rd frames. After the 4th frame is collected at the 4th moment, the fourth position at the 4th moment can be determined according to the 1st through 4th frames, and so on, yielding fourth positions of the target object at multiple moments. For how each fourth position is obtained, reference may be made to the description of S302 in the embodiment shown in fig. 3, which is not repeated here. It is then determined whether the plurality of fourth positions satisfy a preset position convergence condition, that is, whether they converge to the same position; for example, whether they all lie within a preset range area, where the preset range area may represent the covariance of the convergence of the fourth positions. When the plurality of fourth positions satisfy the preset position convergence condition, the position of the target object is essentially unchanged over these moments, and the target object is determined to be in a static state.
When the plurality of fourth positions do not satisfy the preset position convergence condition, the position of the target object has changed, and the target object is determined to be in a motion state.
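A minimal sketch of the convergence test, assuming 2D positions and a scalar radius in place of the preset range area (the disclosure's covariance-based region would generalise this):

```python
import math

def positions_converged(positions, radius):
    """Static if every fourth position lies within `radius` of the centroid
    of the estimates; `radius` plays the role of the preset range area."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    return all(math.hypot(px - cx, py - cy) <= radius for px, py in positions)

# Tightly clustered estimates -> static; spread-out estimates -> moving.
static = positions_converged([(5.0, 5.0), (5.1, 4.9), (4.9, 5.1)], 0.5)
moving = positions_converged([(5.0, 5.0), (7.0, 5.0), (9.0, 5.0)], 0.5)
```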
Optionally, each frame of image may correspond to one moment, and another possible implementation of determining the fourth positions of the target object at multiple moments is as follows. This implementation provides a rolling triangulation scheme: a plurality of triangulators are instantiated at a time, and as time advances the oldest triangulator is deleted while the newest triangulator is added. Specifically, one triangulator is instantiated from two adjacent frames of images.
For example, one position is obtained by running the multi-view geometric algorithm on at least two frames of images, and a plurality of positions can be obtained in the same manner. The following takes obtaining one position from two frames of images as an example, but the embodiment is not limited to two frames.
The shooting device sequentially collects multiple frames of images; take 8 frames of images shot at 8 moments as an example. After the shooting device captures the 1st frame at the 1st moment and the 2nd frame at the 2nd moment, a fourth position of the target object may be determined according to the 1st and 2nd frames (see, for example, B in fig. 4). After the 3rd frame is acquired at the 3rd moment and the 4th frame at the 4th moment, a fourth position can be determined according to the 3rd and 4th frames. After the 5th frame is acquired at the 5th moment and the 6th frame at the 6th moment, a fourth position can be determined according to the 5th and 6th frames. After the 7th frame is acquired at the 7th moment and the 8th frame at the 8th moment, a fourth position can be determined according to the 7th and 8th frames.
When the target object is in a static state, the position of the target object is substantially unchanged. As shown in fig. 6, it may be determined that the above-mentioned 4 fourth positions of the target object converge within the same preset range area, indicating that the target object is in a stationary state.
When the target object is in a motion state, the position of the target object changes. As shown in fig. 7, it can be determined that the above 4 fourth positions of the target object do not converge within the same preset range area, indicating that the target object is in a motion state.
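The rolling buffer of pairwise estimates can be sketched as follows. The multi-view step itself is elided: `add_pair` takes the position that triangulating one adjacent-frame pair would produce, and the capacity of 4 matches the 8-frame example above.

```python
from collections import deque

class RollingTriangulation:
    """Keeps the N most recent pairwise position estimates; as each new
    adjacent-frame pair is triangulated, the oldest estimate is dropped."""
    def __init__(self, capacity=4):
        self.estimates = deque(maxlen=capacity)  # deque discards the oldest

    def add_pair(self, position):
        self.estimates.append(position)

    def snapshot(self):
        return list(self.estimates)

rt = RollingTriangulation(capacity=4)
for p in [(0, 0), (1, 1), (1, 1), (1, 1), (2, 2)]:  # 5 pair estimates
    rt.add_pair(p)
# Only the 4 newest survive; (0, 0) was discarded automatically.
```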
In another possible implementation, after multiple frames of images are acquired, each frame contains the target object, which occupies an image area in each frame. This embodiment may determine whether the target object is in a static state according to the image area size of the target object in each frame: if the target object is in motion, its image area size differs between frames. Therefore, it can be judged whether the differences between the image area sizes of the target object across the frames are smaller than a preset difference; if so, the target object is determined to be in a static state, and otherwise in a motion state.
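A minimal sketch of this size-based check, with pixel areas and the preset difference as hypothetical values:

```python
def static_by_image_area(areas_px, max_diff_px):
    """Static if the target's image-area sizes across the frames differ by
    less than the preset difference; a target moving toward or away from
    the shooting device changes apparent size."""
    return max(areas_px) - min(areas_px) < max_diff_px

# Nearly constant area -> static; shrinking area (target receding) -> moving.
near_static = static_by_image_area([400, 402, 399, 401], 10)
receding = static_by_image_area([400, 320, 250, 190], 10)
```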
In any of the above embodiments, the movable platform determines whether the target object is switched from the stationary state to the moving state during the process of controlling the movable platform to perform the work task on the target object according to the first position.
In this embodiment, when the target object is in a static state, after the first position is determined, the movable platform is controlled to execute the work task on the target object according to the first position. The first position is only valid for executing the work task while the target object remains static; once the target object is in a motion state, the first position can no longer be used to control the movable platform, as the movable platform would otherwise fail to execute the work task on the target object. Therefore, while controlling the movable platform to execute the work task according to the first position, it is determined whether the target object has switched from the static state to the motion state. If not, the target object is still static, and the movable platform continues to be controlled according to the first position. If the target object has switched to the motion state, a second position of the target object is determined according to the size of the image area of the target object in the image acquired by the shooting device, and the movable platform is controlled to execute the work task on the target object according to the second position.
One possible implementation of determining whether the target object is switched from the static state to the motion state is as follows: acquiring multiple frames of images collected by the shooting device; identifying the position of the target object in each frame; projecting the target object into each frame according to the first position to obtain the projection position of the target object in the image; and determining whether the target object has switched from the static state to the motion state according to the identified positions and the projection positions.
In this embodiment, while the movable platform is controlled to execute the work task on the target object according to the first position, multiple frames of images acquired by the shooting device are obtained, and the position of the target object in each frame is identified (how to identify the position of a target object in an image may refer to descriptions of related technologies such as neural network models or image tracking, which are not repeated here). The target object is also projected into each frame according to the first position, yielding a projection position of the target object in each frame. For each frame, the deviation between the identified position and the projection position is determined; this deviation may be referred to as a reprojection error. Whether the target object has switched from the static state to the motion state is then determined according to the reprojection errors corresponding to the multiple frames.
For example, it is determined whether the plurality of reprojection errors satisfy a preset error convergence condition, that is, whether they converge to the same error; for instance, whether they all fall within a preset error range. If the reprojection errors satisfy the preset error convergence condition, the position of the target object has essentially not changed, and the target object is determined to still be in a static state. If the reprojection errors do not satisfy the preset error convergence condition, the position of the target object has started to change, the target object is determined to have started moving, and the target object is switched from the static state to the motion state.
For example, if the reprojection errors are determined to be growing larger and larger, the target object is determined to have started moving. Or, for example, if some of the reprojection errors are small while others are large, the target object is determined to have started moving. In either case, the target object is determined to have switched from the static state to the motion state.
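The two example criteria above can be sketched as follows; the error range and the growth rule are illustrative thresholds, not values from the disclosure.

```python
def motion_started(reprojection_errors, max_error):
    """Motion begins once the errors stop satisfying the convergence
    condition: any error leaves the preset range, or the errors keep
    growing frame over frame."""
    out_of_range = any(e > max_error for e in reprojection_errors)
    growing = all(b > a for a, b in
                  zip(reprojection_errors, reprojection_errors[1:]))
    return out_of_range or growing

still = motion_started([0.8, 1.1, 0.9, 1.0], max_error=3.0)   # converged
moving = motion_started([0.9, 2.2, 4.7, 8.1], max_error=3.0)  # diverging
```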
Therefore, through the above manners, the switch of the target object from the static state to the motion state can be detected rapidly and in real time. Seamless switching can then be performed: a second position of the target object is determined according to the size of the image area of the target object in the image acquired by the shooting device, and the movable platform is controlled to execute the work task on the target object according to the second position. This avoids an abrupt change in the determined position when switching from the static state to the motion state, so the path remains smooth while the movable platform executes the work task on the target object, without sudden jitter.
Optionally, the shooting device in each of the above embodiments may be a monocular camera.
The embodiment of the present application also provides a computer storage medium, in which program instructions are stored, and when the program is executed, the program may include some or all of the steps of the control method for a movable platform according to any one of the above embodiments.
Fig. 8 is a schematic structural diagram of a control device of a movable platform according to an embodiment of the present application, where the movable platform includes a camera, and as shown in fig. 8, the control device 800 of the movable platform includes: at least one processor 801, one processor 801 being shown as an example.
The at least one processor 801 configured to:
determining whether the target object is in a static state;
when the target object is in a static state, a multi-view geometric algorithm is operated according to a plurality of frames of images collected by the shooting device to determine a first position of the target object, and a movable platform is controlled to execute a work task on the target object according to the first position;
and when the target object is in a motion state, determining a second position of the target object according to the size of an image area of the target object in an image acquired by the shooting device, and controlling the movable platform to execute the work task on the target object according to the second position.
Optionally, when the target object is in a static state, the first position of the target object is not updated.
Optionally, when the at least one processor 801 runs a multi-view geometric algorithm according to the multiple frames of images acquired by the shooting device to determine the first position of the target object, the at least one processor is specifically configured to:
and running a multi-view geometric algorithm according to the multi-frame images acquired by the shooting device to determine a first distance between the target object and the shooting device, and determining a first position of the target object according to the first distance.
The at least one processor 801 is further configured to: and acquiring a distance correction deviation, wherein the distance correction deviation is used for representing a distance deviation between the first distance and a second distance, and the second distance is the distance between the target object and the shooting device which is determined according to the size of an image area of the target object in the image acquired by the shooting device when the target object is in a static state.
The at least one processor 801, when determining the second position of the target object according to the size of the image area of the target object in the image acquired by the shooting device, is specifically configured to:
and determining a third distance between the target object and the shooting device according to the size of an image area of the target object in the image acquired by the shooting device, and determining a second position of the target object according to the third distance and the distance correction deviation.
Optionally, the at least one processor 801 is specifically configured to: when the target object is in a static state, determining a second distance between the target object and the shooting device according to the size of an image area of the target object in an image acquired by the shooting device; determining the distance correction offset from the first distance and the second distance.
Optionally, the local storage device of the movable platform stores one or more reference distance correction offsets in advance, and the at least one processor 801 is specifically configured to:
at least one reference range correction offset is retrieved from a local storage of the movable platform, and the range correction offset is determined based on the at least one reference range correction offset.
Optionally, the local storage device of the movable platform stores reference distance correction deviations corresponding to a plurality of object types in advance, and the at least one processor 801 is further configured to: determining the object type of the target object according to the image acquired by the shooting device.
The at least one processor 801, when acquiring at least one reference distance correction deviation from the local storage device of the movable platform and determining the distance correction deviation according to the at least one reference distance correction deviation, is specifically configured to:
acquiring, from the local storage device of the movable platform according to the object type of the target object, a reference distance correction deviation matched with the object type of the target object, and determining the reference distance correction deviation as the distance correction deviation.
Optionally, the local storage device of the movable platform stores reference distance correction deviations corresponding to a plurality of reference distances in advance. The at least one processor 801 is specifically configured to: acquiring, from the local storage device of the movable platform according to the first distance, a reference distance correction deviation corresponding to at least one reference distance, and determining the distance correction deviation according to the reference distance correction deviation.
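One plausible way to determine the deviation from such pre-stored reference entries is linear interpolation between the two reference distances nearest the query distance; the table layout and the interpolation scheme are assumptions, since the embodiment only requires that the deviation be determined according to the reference values:

```python
import bisect

def lookup_correction(reference_table, distance):
    """reference_table: (reference_distance, correction_deviation) pairs,
    sorted by reference distance; returns an interpolated deviation."""
    dists = [d for d, _ in reference_table]
    i = bisect.bisect_left(dists, distance)
    if i == 0:                       # below the smallest reference distance
        return reference_table[0][1]
    if i == len(reference_table):    # above the largest reference distance
        return reference_table[-1][1]
    (d0, c0), (d1, c1) = reference_table[i - 1], reference_table[i]
    t = (distance - d0) / (d1 - d0)  # linear blend between the neighbours
    return c0 + t * (c1 - c0)
```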
Optionally, the local storage device of the movable platform stores reference distance correction deviations corresponding to a plurality of reference distances in advance. The at least one processor 801 is further configured to: when the target object is in a static state, determining a second distance between the target object and the shooting device according to the size of an image area of the target object in the image acquired by the shooting device.
The at least one processor 801, when acquiring at least one reference distance correction deviation from the local storage device of the movable platform and determining the distance correction deviation according to the at least one reference distance correction deviation, is specifically configured to:
acquiring, from the local storage device of the movable platform according to the second distance, a reference distance correction deviation corresponding to at least one reference distance, and determining the distance correction deviation according to the reference distance correction deviation.
Optionally, the local storage device of the movable platform stores reference distance correction deviations corresponding to a plurality of reference distances in advance.
The at least one processor 801 is specifically configured to: acquiring, from the local storage device of the movable platform according to the third distance, a reference distance correction deviation corresponding to at least one reference distance, and determining the distance correction deviation according to the reference distance correction deviation.
Optionally, the at least one processor 801 is specifically configured to: acquiring multi-frame images containing the target object acquired by the shooting device; and determining whether the target object is in a static state according to the multi-frame images.
Optionally, the at least one processor 801 is specifically configured to: running a multi-view geometric algorithm according to the multi-frame images to determine fourth positions of the target object at multiple moments; determining whether the plurality of fourth positions meet a preset position convergence condition; when the condition is met, determining that the target object is in a static state; otherwise, determining that the target object is in a motion state.
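The position convergence condition itself is not specified in the embodiment; one simple hypothetical realization is to require every fourth position to lie within a small radius of the mean of the positions (the radius is an assumed tuning parameter):

```python
def is_static(positions, radius=0.2):
    """positions: (x, y, z) triangulated target positions at several moments.
    Static if every position lies within `radius` metres of their mean."""
    n = len(positions)
    mean = tuple(sum(p[k] for p in positions) / n for k in range(3))
    def dist(p, q):
        # Euclidean distance between two 3D points
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return all(dist(p, mean) <= radius for p in positions)
```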
Optionally, the at least one processor 801 is further configured to:
determining, in the process of controlling the movable platform to execute the work task on the target object according to the first position, whether the target object is switched from a static state to a motion state.
Optionally, the at least one processor 801 is specifically configured to: acquiring multi-frame images acquired by the shooting device; identifying the position of the target object in the multi-frame images; projecting the target object into the multi-frame images according to the first position to acquire the projection position of the target object in the images; and determining, according to the identified position and the projection position, whether the target object is switched from a static state to a motion state.
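Projecting the first position into the image and comparing it with the detected position can be sketched with an assumed pinhole intrinsic model (focal lengths fx, fy and principal point cx, cy in pixels); the pixel tolerance is an assumed tuning parameter, not a value from the embodiment:

```python
def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    x, y, z = point_cam
    return (fx * x / z + cx, fy * y / z + cy)

def switched_to_motion(detected_px, point_cam, fx, fy, cx, cy, tol_px=15.0):
    """True if the detected pixel position of the target deviates from the
    projection of its (static) first position by more than tol_px pixels."""
    u, v = project(point_cam, fx, fy, cx, cy)
    du, dv = detected_px[0] - u, detected_px[1] - v
    return (du * du + dv * dv) ** 0.5 > tol_px
```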
Optionally, the work task is at least one of a surrounding flight task and a tracking task.
Optionally, the control device 800 of the movable platform of this embodiment may further include a memory 802. The memory 802 is configured to store program code. The at least one processor 801 invokes the program code, and when the program code is executed, implements the methods described above.
The control device of the movable platform of this embodiment may be configured to execute the technical solutions of the method embodiments of the present application, and the implementation principles and technical effects thereof are similar and will not be described herein again.
Fig. 9 is a schematic structural diagram of a movable platform provided in an embodiment of the present application, where the movable platform 900 includes a camera 901 and at least one processor 902, and one processor 902 is illustrated as an example in the drawing.
The at least one processor 902 configured to: determining whether the target object is in a static state; when the target object is in a static state, a multi-view geometric algorithm is run according to the multi-frame images collected by the shooting device 901 to determine a first position of the target object, and the movable platform 900 is controlled to execute a work task on the target object according to the first position; when the target object is in a motion state, determining a second position of the target object according to the size of an image area of the target object in the image acquired by the photographing device 901, and controlling the movable platform 900 to execute the work task on the target object according to the second position.
Optionally, when the target object is in a static state, the first position of the target object is not updated.
Optionally, when the at least one processor 902 operates a multi-view geometric algorithm according to the multiple frames of images acquired by the capturing device 901 to determine the first position of the target object, the at least one processor is specifically configured to:
a multi-view geometric algorithm is executed according to the multi-frame images collected by the shooting device 901 to determine a first distance between the target object and the shooting device 901, and a first position of the target object is determined according to the first distance.
The at least one processor 902 is further configured to: obtaining a distance correction deviation, where the distance correction deviation is used to represent a distance deviation between the first distance and a second distance, where the second distance is a distance between the target object and the photographing device 901, which is determined according to an image area size of the target object in the image acquired by the photographing device 901 when the target object is in a stationary state.
The at least one processor 902, when determining the second position of the target object according to the size of the image area of the target object in the image acquired by the shooting device 901, is specifically configured to:
determining a third distance between the target object and the photographing device 901 according to the size of an image area of the target object in the image acquired by the photographing device 901, and determining a second position of the target object according to the third distance and the distance correction deviation.
Optionally, the at least one processor 902 is specifically configured to: when the target object is in a static state, determining a second distance between the target object and the photographing device 901 according to the size of an image area of the target object in the image acquired by the photographing device 901; and determining the distance correction deviation according to the first distance and the second distance.
Optionally, the removable platform 900 of the present embodiment locally further includes a storage device 903. The local storage 903 of the movable platform 900 stores one or more reference distance correction offsets in advance, and the at least one processor 902 is specifically configured to:
at least one reference range correction offset is retrieved from the local storage 903 of the moveable platform 900 and the range correction offset is determined from the at least one reference range correction offset.
Optionally, the local storage device 903 of the movable platform 900 stores reference distance correction deviations corresponding to a plurality of object types in advance, and the at least one processor 902 is further configured to: determining the object type of the target object according to the image acquired by the photographing device 901.
The at least one processor 902, when obtaining at least one reference distance correction offset from the local storage 903 of the moveable platform 900 and determining the distance correction offset according to the at least one reference distance correction offset, is specifically configured to:
acquiring, from the local storage device 903 of the movable platform 900 according to the object type of the target object, a reference distance correction deviation matched with the object type of the target object, and determining the reference distance correction deviation as the distance correction deviation.
Optionally, the local storage device 903 of the movable platform 900 stores reference distance correction deviations corresponding to a plurality of reference distances in advance.
The at least one processor 902 is specifically configured to: acquiring, from the local storage device 903 of the movable platform 900 according to the first distance, a reference distance correction deviation corresponding to at least one reference distance, and determining the distance correction deviation according to the reference distance correction deviation.
Optionally, the local storage device 903 of the movable platform 900 stores reference distance correction deviations corresponding to a plurality of reference distances in advance, and the at least one processor 902 is further configured to:
when the target object is in a static state, a second distance between the target object and the photographing device 901 is determined according to the size of an image area of the target object in the image acquired by the photographing device 901.
The at least one processor 902, when obtaining at least one reference distance correction offset from the local storage 903 of the moveable platform 900 and determining the distance correction offset according to the at least one reference distance correction offset, is specifically configured to:
acquiring, from the local storage device 903 of the movable platform 900 according to the second distance, a reference distance correction deviation corresponding to at least one reference distance, and determining the distance correction deviation according to the reference distance correction deviation.
Optionally, the local storage device 903 of the movable platform 900 stores reference distance correction deviations corresponding to a plurality of reference distances in advance. The at least one processor 902 is specifically configured to: acquiring, from the local storage device 903 of the movable platform 900 according to the third distance, a reference distance correction deviation corresponding to at least one reference distance, and determining the distance correction deviation according to the reference distance correction deviation.
Optionally, the at least one processor 902 is specifically configured to: acquiring multi-frame images containing the target object acquired by the photographing device 901; and determining whether the target object is in a static state according to the multi-frame images.
Optionally, the at least one processor 902 is specifically configured to: running a multi-view geometric algorithm according to the multi-frame images to determine fourth positions of the target object at multiple moments; determining whether the plurality of fourth positions meet a preset position convergence condition; when the condition is met, determining that the target object is in a static state; otherwise, determining that the target object is in a motion state.
Optionally, the at least one processor 902 is further configured to:
in the process of controlling the movable platform 900 to execute the work task on the target object according to the first position, it is determined whether the target object is switched from a stationary state to a moving state.
Optionally, the at least one processor 902 is specifically configured to: acquiring multi-frame images acquired by the photographing device 901; identifying the position of the target object in the multi-frame images; projecting the target object into the multi-frame images according to the first position to acquire the projection position of the target object in the images; and determining, according to the identified position and the projection position, whether the target object is switched from a static state to a motion state.
Optionally, the work task is at least one of a surrounding flight task and a tracking task.
Optionally, the movable platform 900 of this embodiment may further include a memory (not shown in the figure). The memory is configured to store program code. The at least one processor 902 invokes the program code, and when the program code is executed, implements the methods described above.
The memory may be the same as or different from the storage device 903.
The movable platform of this embodiment may be configured to implement the technical solutions of the method embodiments of the present application, and the implementation principles and technical effects thereof are similar, and are not described herein again.
Fig. 10 is a schematic structural diagram of a movable platform according to another embodiment of the present application, where the movable platform 1000 includes a camera 1001 and a control device 1002 of the movable platform.
The control device 1002 of the movable platform may adopt the structure of the device embodiment shown in fig. 8, and accordingly, may execute the technical solution provided by any of the above method embodiments, which is not described herein again.
Optionally, the movable platform 1000 further comprises a storage device 1003. The storage device 1003 is used to store one or more reference distance correction deviations in advance.
Those of ordinary skill in the art will understand that all or part of the steps of the foregoing method embodiments may be implemented by program instructions executed by relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, performs the steps of the method embodiments. The foregoing storage medium includes various media capable of storing program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (30)

PCT national-phase application; the claims have been published.
CN202080029224.9A 2020-04-27 2020-04-27 Control method and device for movable platform Active CN114096931B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/087326 WO2021217372A1 (en) 2020-04-27 2020-04-27 Control method and device for movable platform

Publications (2)

Publication Number Publication Date
CN114096931A true CN114096931A (en) 2022-02-25
CN114096931B CN114096931B (en) 2025-02-18

Family

ID=78373878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080029224.9A Active CN114096931B (en) 2020-04-27 2020-04-27 Control method and device for movable platform

Country Status (2)

Country Link
CN (1) CN114096931B (en)
WO (1) WO2021217372A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115686018A (en) * 2022-11-04 2023-02-03 智道网联科技(北京)有限公司 Method and device for verifying lane center line precision

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115219439A (en) * 2022-06-06 2022-10-21 北京市农林科学院信息技术研究中心 Method and device for detecting working conditions of plant high-throughput phenotyping platform

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156481A (en) * 2011-01-24 2011-08-17 广州嘉崎智能科技有限公司 Intelligent tracking control method and system for unmanned aircraft
CN106295459A (en) * 2015-05-11 2017-01-04 青岛若贝电子有限公司 Based on machine vision and the vehicle detection of cascade classifier and method for early warning
CN108351653A (en) * 2015-12-09 2018-07-31 深圳市大疆创新科技有限公司 System and method for UAV flight controls
CN108762310A (en) * 2018-05-23 2018-11-06 深圳市乐为创新科技有限公司 A kind of unmanned plane of view-based access control model follows the control method and system of flight
CN109407697A (en) * 2018-09-20 2019-03-01 北京机械设备研究所 A kind of unmanned plane pursuit movement goal systems and method based on binocular distance measurement

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106813649A (en) * 2016-12-16 2017-06-09 北京远特科技股份有限公司 A kind of method of image ranging localization, device and ADAS
CN110222581B (en) * 2019-05-13 2022-04-19 电子科技大学 Binocular camera-based quad-rotor unmanned aerial vehicle visual target tracking method
CN110262565B (en) * 2019-05-28 2023-03-21 深圳市吉影科技有限公司 Target tracking motion control method and device applied to underwater six-push unmanned aerial vehicle


Also Published As

Publication number Publication date
CN114096931B (en) 2025-02-18
WO2021217372A1 (en) 2021-11-04

Similar Documents

Publication Publication Date Title
US20250045949A1 (en) Uav control method, device and uav
WO2020172800A1 (en) Patrol control method for movable platform, and movable platform
US11798172B2 (en) Maximum temperature point tracking method, device and unmanned aerial vehicle
WO2021081774A1 (en) Parameter optimization method and apparatus, control device, and aircraft
CN108521864B (en) Imaging control method, imaging device and unmanned aerial vehicle
CN113795805B (en) Unmanned aerial vehicle flight control method and unmanned aerial vehicle
US20210229810A1 (en) Information processing device, flight control method, and flight control system
WO2019227289A1 (en) Time-lapse photography control method and device
WO2021052334A1 (en) Return method and device for unmanned aerial vehicle, and unmanned aerial vehicle
WO2021217371A1 (en) Control method and apparatus for movable platform
WO2021168819A1 (en) Return control method and device for unmanned aerial vehicle
WO2020042159A1 (en) Rotation control method and apparatus for gimbal, control device, and mobile platform
CN113079698A (en) Control device and control method for controlling flight of aircraft
WO2020019260A1 (en) Calibration method for magnetic sensor, control terminal and movable platform
WO2020062089A1 (en) Magnetic sensor calibration method and movable platform
CN114096931B (en) Control method and device for movable platform
WO2019227287A1 (en) Data processing method and device for unmanned aerial vehicle
CN112985359A (en) Image acquisition method and image acquisition equipment
US20210256732A1 (en) Image processing method and unmanned aerial vehicle
WO2021168821A1 (en) Mobile platform control method and device
CN116745722A (en) Unmanned aerial vehicle control method and device, unmanned aerial vehicle and storage medium
WO2020062255A1 (en) Photographing control method and unmanned aerial vehicle
WO2022227096A1 (en) Point cloud data processing method, and device and storage medium
WO2020150974A1 (en) Photographing control method, mobile platform and storage medium
WO2021035746A1 (en) Image processing method and device, and movable platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant