WO2018214078A1 - Photographing control method and apparatus (拍摄控制方法及装置) - Google Patents
Photographing control method and apparatus
- Publication number
- WO2018214078A1 (PCT/CN2017/085791)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- drone
- target object
- flight
- video stream
- angle
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U20/00—Constructional aspects of UAVs
- B64U20/80—Arrangement of on-board electronics, e.g. avionics systems or wiring
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B64U2101/30—UAVs specially adapted for particular uses or applications for imaging, photography or videography
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2201/00—UAVs characterised by their flight controls
- B64U2201/10—UAVs characterised by their flight controls autonomous, i.e. by navigating independently from ground or air stations, e.g. by using inertial navigation systems [INS]
- B64U2201/104—UAVs characterised by their flight controls autonomous, i.e. by navigating independently from ground or air stations, e.g. by using inertial navigation systems [INS] using satellite radio beacon positioning systems, e.g. GPS
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2201/00—UAVs characterised by their flight controls
- B64U2201/20—Remote controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U70/00—Launching, take-off or landing arrangements
- B64U70/10—Launching, take-off or landing arrangements for releasing or capturing UAVs by hand
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Definitions
- the present invention relates to the field of image acquisition, and in particular, to a shooting control method and apparatus.
- drone aerial photography involves a series of operations such as camera setting, pan/tilt control, joystick control, and composition framing. If the user wants to use the drone to shoot a smooth, well-composed video, a series of parameters such as the camera, the pan/tilt, the joystick, and the composition framing must be coordinated, so the control process is relatively complicated. Users who are not skilled in aerial photography find it difficult to coordinate all of these parameters in a short period of time.
- the invention provides a shooting control method and device.
- a photographing control method for use in a drone, wherein the drone is equipped with a pan/tilt, and the pan/tilt is equipped with an imaging device, the method comprising:
- the start command including a flight mode of the drone
- a photographing control device for use in a drone, wherein the drone is equipped with a pan/tilt, the pan/tilt is equipped with an image pickup apparatus, and the device includes a first processor, wherein the first processor is configured to:
- the start command including a flight mode of the drone
- a photographing control method comprising:
- the start instruction includes a flight mode of the drone, and the start instruction is used to trigger the drone to fly autonomously according to the flight mode;
- a photographing control apparatus comprising a second processor, wherein the second processor is configured to:
- the start instruction includes a flight mode of the drone, and the start instruction is used to trigger the drone to fly autonomously according to the flight mode;
- the present invention enables the drone to fly autonomously according to the set flight mode and the position information of the target object, so that the drone can realize a relatively complicated flight trajectory, in particular a flight trajectory with strong regularity. The orientation information of the target object relative to the drone is obtained through image recognition and used to control the attitude of the gimbal, so that the target object stays in the captured picture. Control of the drone and the pan/tilt is thus realized without manual operation by the operator, and the captured picture is smoother with a richer and more precise composition.
- FIG. 1 is a flow chart of a photographing control method on a drone side according to an embodiment of the present invention
- FIG. 2a is a schematic diagram of a screen coordinate system and a field of view according to an embodiment of the present invention;
- FIG. 2b is a schematic diagram of a field of view of an image pickup apparatus according to an embodiment of the present invention.
- FIG. 3 is a schematic diagram showing position coordinates between a drone and a target object according to an embodiment of the present invention
- FIG. 4 is a schematic diagram of a screen composition according to an embodiment of the present invention.
- FIG. 5 is a schematic diagram of a screen composition according to another embodiment of the present invention.
- FIG. 6a is a schematic diagram showing a positional relationship between a shooting scene and an image capturing apparatus according to an embodiment of the present invention;
- FIG. 6b is a schematic diagram showing a positional relationship between a shooting scene and an image capturing apparatus according to another embodiment of the present invention;
- FIG. 6c is a schematic diagram showing a positional relationship between a shooting scene and an image capturing apparatus according to still another embodiment of the present invention.
- FIG. 7 is a schematic structural diagram of a remote control device according to an embodiment of the present invention.
- FIG. 8 is a flowchart of a photographing control method on an intelligent terminal side according to an embodiment of the present invention.
- FIG. 9 is a flowchart of a photographing control method on a smart terminal side according to another embodiment of the present invention.
- FIG. 10 is a flowchart of a shooting control method according to still another embodiment of the present invention on an intelligent terminal side;
- FIG. 11 is a schematic structural diagram of a photographing control device according to an embodiment of the present invention.
- FIG. 12 is a block diagram showing the structure of the photographing control device on the drone side according to an embodiment of the present invention.
- FIG. 13 is a block diagram showing the structure of the photographing control device on the drone side according to another embodiment of the present invention.
- FIG. 14 is a block diagram showing the structure of the photographing control device on the smart terminal side according to an embodiment of the present invention.
- FIG. 15 is a block diagram showing the structure of the photographing control device on the smart terminal side according to another embodiment of the present invention.
- the shooting control method and apparatus can be used to control aerial photography by a drone, or shooting by other movable equipment provided with a pan/tilt, such as an unmanned vehicle or a mobile robot.
- the drone can include a carrier and a load.
- the carrier may allow the load to rotate about one, two, three or more axes.
- the carrier may allow the load to move along one, two, three or more axes.
- the axes for the rotational or translational motion may or may not be orthogonal to one another.
- the load may be rigidly mounted or attached to the drone to maintain a relatively static state of the load relative to the drone.
- a carrier connected to a drone and a load may not allow the load to move relative to the drone.
- the load can be carried directly on the drone without the need for a carrier.
- the load may include one or more sensors for monitoring or tracking one or more target objects.
- the load may include an image capture device (such as a camera, camcorder, infrared imaging device, ultraviolet imaging device, or the like), an audio capture device (e.g., a parabolic reflector microphone), and the like.
- Any suitable sensor can be integrated onto the load to capture a visual signal, an audio signal, an electromagnetic signal, or any other desired signal.
- the sensor can provide static sensing data (such as pictures) or dynamic sensing data (such as video).
- the sensor can continuously capture the sensing data in real time or at high frequency.
- the target object tracked by the drone may include any natural or artificially manufactured object or texture, such as a geographic landscape (e.g., mountains, vegetation, valleys, lakes, rivers, etc.), buildings, or transportation means (such as airplanes, ships, cars, trucks, buses, or motorcycles).
- the target object may include an organism such as a human or an animal.
- the target object can be either moving or stationary relative to any suitable reference.
- the reference may be a relatively fixed reference (such as the surrounding environment or the earth).
- the reference may be a moving reference (such as a moving vehicle).
- the target object can include a passive target object or an active target object.
- the active target object may transmit information of the target object, such as the GPS location of the target object, to the drone.
- the information can be transmitted wirelessly from a communication unit in the active target object to a communication unit of the drone.
- Active target objects can be friendly vehicles, buildings, troops, and the like.
- a passive target object cannot transmit information about a target object.
- Passive target objects can include neutral or hostile vehicles, buildings, troops, and the like.
- the drone can be used to receive control data, and the smart terminal 2 can be used to provide control data.
- the control data is used to directly or indirectly control aspects of the drone.
- the control data may include flight instructions that control drone flight parameters, such as the position, speed, direction, or attitude of the drone.
- the control data can be used to control the flight of the unmanned aerial vehicle.
- the control data may enable operation of one or more power units to effect flight of the unmanned aerial vehicle.
- the control data may include instructions to control individual components of the drone.
- the control data includes information that controls the operation of the carrier.
- the control data can be used to control the actuation mechanism of the carrier to cause angular or linear motion of the load relative to the drone.
- control data is used to control the motion of the carrier that does not carry the load.
- control data is used to adjust one or more operational parameters of the load, such as capturing still or moving images, zooming the lens, turning the load on/off, switching imaging modes, changing image resolution, changing focus, changing depth of field, changing exposure time, changing lens speed, or changing the viewing angle or field of view.
- control data can be used to control a sensing system (not shown), a communication system (not shown), and the like of the drone.
- the control data of the smart terminal 2 may include target object information.
- the target object information includes characteristics of the specified target object, such as an initial position (such as coordinates) and/or a size of the target object captured in one or more images captured by the camera device mounted on the drone.
- the target object information may include type information of the target object, such as a type of the target object or a classified feature, including color, texture, style, size, shape, dimension, and the like.
- the target object information may include data representing an image of the target object, including an image of the target object within the field of view. The field of view may be defined by or composed of the images that the camera device can capture.
- the target object information may include expected target object information.
- the expected target object information specifies a feature that the tracked target object is expected to satisfy in the image captured by the imaging device.
- the expected target object information is used to adjust the drone, the carrier, and/or the imaging device to maintain the tracked target object in one or more images according to the expected target object information.
- the target object can be tracked such that the target object maintains an expected position or size in one or more images captured by the camera device.
- the expected location of the tracked target object can be near the center of the image or off center.
- the expected size of the target object being tracked may refer to the target object occupying approximately a certain number of pixels in the image.
- the expected target object information and the initial target object information can be the same or different.
- the expected target object information may or may not be provided by the smart terminal 2.
- the intended target object information may be recorded in hard-coded form in a logic circuit executed by a processing unit of the drone, stored in a data storage unit local and/or remote to the drone, or obtained from other suitable sources.
- the target object information may be generated by user input of the smart terminal 2. Additionally or alternatively, the target object information may be generated by other sources.
- the target object type information may be from previous images or data in a local or remote data storage unit.
- the image may be an image previously captured by an imaging device mounted on a drone or other device.
- the image may be computer generated.
- the type information of the target object may be selected by the user or may be provided by default by the drone.
- the drone can utilize the target object information to track one or more target objects.
- the tracking or other related data processing may be performed at least in part by one or more processors of the drone.
- the target object information can be used by the drone to identify the target object to be tracked.
- the identification of the target object may be performed based on initial target object information including specified features of a specific target object (e.g., initial coordinates of the target object in an image captured by the drone), or based on a common feature of a type of target object (such as the color or texture of the target object to be tracked).
- the identification of the target object can include any suitable image recognition and/or matching algorithm.
- the identifying of the target object includes comparing two or more images to determine, extract, or match features in the image.
- the expected target object information can be used to detect deviations of the target object from the expected features, such as deviations in expected location and/or size.
- the current target object feature or information may be known by one or more images captured by the drone.
- the current target object information is compared with the expected target object information provided by the smart terminal 2 to determine the deviation between the two.
- the change of the target object position can be obtained by comparing the coordinates of the target object in the image (such as the coordinates of the center point of the target object) with the coordinates of the expected target object position.
- the change in the size of the target object can be obtained by comparing the size of the area covered by the target object (such as a pixel) with the preset target object size.
- the change in size can be derived by detecting the direction, boundary, or other characteristics of the target object.
- a control signal (eg, by one or more processors of the drone) may be generated based on at least a portion of the deviations, and an adjustment to substantially correct the deviation is performed in accordance with the control signal.
- the adjustment can be used to substantially maintain one or more desired target object features (e.g., the position or size of the target object) in the images captured by the drone.
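As a minimal sketch of such a correction, assuming a simple proportional control law with illustrative gains and field names (the patent does not specify a particular control law), the deviation between the current and expected target-object position and size in the image can be mapped to adjustment commands:

```python
def correction_command(cur, exp, k_ang=0.01, k_size=0.5):
    """Proportional control signal from the deviation between the current
    and expected target-object features in the image.
    cur/exp: (center_x_px, center_y_px, area_px); gains are illustrative."""
    dx = exp[0] - cur[0]   # horizontal position deviation -> yaw rotation
    dy = exp[1] - cur[1]   # vertical position deviation -> pitch rotation
    ds = exp[2] - cur[2]   # size deviation -> approach/retreat or zoom
    return {
        "yaw_rate": k_ang * dx,
        "pitch_rate": k_ang * dy,
        "forward": k_size * ds / max(exp[2], 1),
    }
```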
- the adjustment may be performed in real time when the drone performs a flight command (circling or moving) provided by the user or a preset flight path.
- the adjustment may also be performed in real time when the imaging device captures one or more images.
- adjustments may also be performed based on other data, such as sensed data acquired by one or more sensors (eg, proximity sensors or GPS sensors) on the drone.
- the location information of the tracked target object may be obtained by a proximity sensor and/or provided by the target object itself (eg, GPS location). The location information can be used to perform the adjustment in addition to detecting deviations.
- the adjustment may be with respect to the drone, the carrier, and/or the load (eg, an imaging device).
- the adjustment may cause the drone and/or load (such as an imaging device) to change position, posture, direction, angular velocity, or line speed, and the like.
- the adjustment may cause the carrier to move the load (such as an imaging device) about or along one, two, three or more axes relative to the drone.
- the adjusting may include adjusting a zoom, focus or other operational parameter of the load (eg, the imaging device) itself.
- the adjustment may be generated based at least in part on the type of the deviation. For example, deviations from the expected target object position may require rotating the drone and/or load (eg, through a carrier) about one, two, or three axes of rotation. As another example, deviations from the expected target size may require the drone to perform translational motion along the appropriate axis and/or change the focal length of the imaging device (eg, zooming in or out of the lens). For example, if the current or actual target object size is smaller than the expected target object size, the drone may need to be close to the target object, and/or the camera device may need to zoom in on the target object. On the other hand, if the current or actual target object size is larger than the expected target object size, the drone may need to be away from the target object, and/or the camera device may need to zoom out of the target object.
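A sketch of the zoom-versus-move decision described in the passage above, with assumed zoom limits and return values (the patent leaves the decision logic to the drone's processor):

```python
def size_correction(cur_area, exp_area, zoom, zoom_min=1.0, zoom_max=4.0):
    """Choose between zooming and translating to correct a size deviation.
    If the required zoom level is reachable, zoom; otherwise move the drone
    toward or away from the target object. Limits are illustrative."""
    scale = (exp_area / cur_area) ** 0.5   # linear magnification needed
    if zoom_min <= zoom * scale <= zoom_max:
        return ("zoom", zoom * scale)      # camera can reach the level
    return ("move", "approach" if scale > 1.0 else "retreat")
```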
- the adjustment of the deviation from the expected target object information may be achieved by using a control signal to control one or more controllable objects, such as the movable device, the carrier, the imaging equipment, or any combination thereof.
- the controllable object can be selected to perform the adjustment, and the control signal can be generated based at least in part on the configuration and settings of the controllable object. For example, if the image pickup apparatus is stably mounted on the drone and is not movable relative to the drone, adjustment including rotation about the corresponding two axes can be realized only by rotating the drone around the two axes.
- the image pickup apparatus may be directly mounted on the drone, or the image pickup apparatus may be mounted on the drone through the carrier, and the carrier does not allow relative movement between the image pickup apparatus and the drone.
- the carrier allows the imaging device to rotate relative to the drone about at least one axis
- the adjustment of the two axes can also be achieved by combining adjustments to the drone and the carrier.
- the carrier can be controlled to perform rotation about one or both of the two axes that need to be adjusted, and the drone can be controlled to perform rotation about one or both of those two axes.
- the carrier may include a single-axis pan/tilt to allow the imaging device to rotate about one of the two axes that need to be adjusted, while the drone performs the rotation about the other of the two axes.
- if the carrier allows the imaging device to rotate relative to the drone about two or more axes, the adjustment of the two axes described above can also be performed by the carrier alone.
- the carrier comprises a two-axis or three-axis gimbal.
- the adjustment to correct the size of the target object may be by controlling the zooming operation of the imaging device (if the imaging device is capable of achieving the desired zoom level), or by controlling the motion of the drone (to approach or away from the target object), Or a combination of the two ways to achieve.
- the processor of the drone can determine which way to choose or a combination of the two. For example, if the imaging apparatus does not have the zoom level required to achieve the desired size of the target object in the image, the movement of the drone can be controlled instead of or in addition to the zooming operation on the imaging apparatus.
- the adjustments may take into account other constraints.
- the adjustment can be performed by the carrier and/or the imaging device without affecting the movement of the drone.
- if the remote user is controlling the flight of the drone through the smart terminal 2, or if the drone is flying (autonomously or semi-autonomously) according to a pre-stored flight line, the flight path of the drone can be preset.
- constraints may include maximum and/or minimum thresholds, operating parameters, or the like of the angle of rotation, angular velocity, and/or linear velocity of the drone, carrier, and/or load (eg, imaging device).
- the maximum and/or minimum threshold can be used to limit the range of the adjustment.
- the angular velocity of a drone and/or imaging device about a particular axis may be limited by the maximum angular velocity of the drone, carrier, and/or load (eg, imaging device).
- the line speed of the drone and/or carrier may be limited by the maximum line speed of the drone, carrier, and/or load (eg, imaging device).
- the adjustment of the focal length of the camera device may be limited by the maximum and/or minimum focal length of the particular camera device.
- such limitations may be presets, or may depend on the particular configuration of the drone, carrier, and/or load (eg, imaging device).
- the configuration is adjustable (eg, by manufacturer, administrator, or user).
- the drone can be used to provide data
- the smart terminal 2 can be used to receive data, such as sensing data acquired by sensors of the drone, and tracking data or information used to indicate features of one or more target objects tracked by the drone.
- the sensing data may include image data captured by an imaging device mounted on the drone or data sensed by other sensors. For example, a real-time or near real-time video stream from the drone and/or the load (such as the camera device) can be transmitted to the smart terminal 2.
- the sensed data may also include data obtained by a global positioning system (GPS) sensor, a motion sensor, an inertial sensor, a proximity sensor, or other sensor.
- the tracking data includes relative or absolute coordinates or dimensions of the target object in the image frame received by the drone, changes in the target image in successive image frames, GPS coordinates, or other location information of the target object.
- the smart terminal 2 can utilize the tracking data to display the tracked target object (eg, by a graphical tracking indicator, such as with a box surrounding the target object).
- the data received by the smart terminal 2 may be unprocessed data (unprocessed sensing data acquired by each sensor) and/or processed data (e.g., tracking data produced by one or more processors of the drone).
- the location of the smart terminal 2 can be remote from the drone, carrier, and/or load.
- the smart terminal 2 can be placed or affixed on a support platform.
- the smart terminal 2 may be a handheld or wearable device.
- the smart terminal 2 can include a smartphone, tablet, laptop, computer, glasses, gloves, helmet, microphone, or any suitable combination.
- the smart terminal 2 is configured to display display data received from the drone through the display device.
- the display data includes sensing data, such as an image acquired by an imaging device mounted on the drone.
- the display data further includes tracking information that is displayed separately from the image data or superimposed on top of the image data.
- the display device can be used to display an image in which a target object is indicated or highlighted by a tracking indicator.
- the tracking indicator may mark the tracked target object with a box, a circle, or another geometric figure.
- the image and tracking indicator can be displayed in real time as the image and tracking data is received from the drone and/or when the image data is acquired. In some embodiments, the display can be delayed.
- the smart terminal 2 can be configured to receive an input of a user through an input device.
- the input device may include a joystick, a keyboard, a mouse, a stylus, a microphone, an image or motion sensor, an inertial sensor, and the like. Any suitable user input can interact with the terminal, such as manual input commands, sound control, gesture control, or position control (eg, by motion, position, or tilt of the terminal).
- the smart terminal 2 can be used to allow a user to interact with a graphical user interface by operating a joystick, changing the direction or posture of the smart terminal 2, using a keyboard, mouse, finger or stylus, or using other methods to control the drone, The state of the carrier, the load, or any combination thereof.
- the smart terminal 2 can also be used to allow a user to enter target object information using any suitable method.
- the smart terminal 2 enables the user to select the target object directly from one or more images displayed on the display (such as a video or a snapshot).
- the user can directly touch the screen with a finger to select a target object, or use a mouse or a joystick to select.
- the user can trace around the target object, draw over the target object on the image, or otherwise select the target object. Computer vision or other techniques can be used to identify the boundary of the target object.
- one or more target objects can be selected at a time.
- the selected target object can be displayed with a selection indication to indicate that the user has selected the target object to be tracked.
- the smart terminal 2 may allow the user to select or input target object information such as color, texture, shape, dimension, or other characteristics of the desired target object.
- the user can enter target object type information, select such information through a graphical user interface, or use other methods.
- the target object information may be obtained from a data source rather than from the user, such as a remote or local data storage unit, or other computing equipment connected to or communicating with the smart terminal 2.
- the smart terminal 2 may allow a user to select between a manual tracking mode and an automatic tracking mode.
- in the manual tracking mode, the user needs to specify the specific target object to be tracked.
- the user can manually select a target object from the image displayed by the smart terminal 2.
- the specific target object information (coordinates or dimensions) of the selected target object is supplied to the drone as the initial target object information of the target object.
- in the automatic tracking mode, the user does not need to specify a specific target object to be tracked.
- the user can specify descriptive information about the type of target object to be tracked, for example, through a user interface provided by the smart terminal 2.
- the drone automatically identifies the target object to be tracked using the initial target object information of the specific target object or the descriptive information of the target object type, and then tracks the identified target object.
- providing specific target object information (such as initial target object information) requires more user control of target object tracking but less automated processing or computation (such as image or target object recognition) by the processing system disposed on the drone; providing only descriptive information of the target object type requires less user control but more computation performed by the processing system disposed on the drone.
- the reasonable distribution of user control and processing system control during the tracking process can be adjusted according to various factors, such as the environment of the drone, the speed or attitude of the drone, user preferences, or computing power within or outside the drone (such as CPU or memory).
- when a drone is flying in a relatively complex environment (such as one with many buildings or obstacles, or indoors), more user control may be allocated than for a drone in a relatively simple environment (open space or outdoors).
- when the drone is flying at a lower altitude, relatively more user control may be allocated than when it is flying at a higher altitude.
- when the drone is equipped with a high-speed processor able to perform complex calculations faster, more automation may be assigned to the drone.
- the user control and the automatic allocation of the drone during the tracking process can be dynamically adjusted in accordance with the factors described above.
- the control data can be generated by a smart terminal 2, a drone, a third device, or any combination thereof.
- a user's operation of a joystick or of the smart terminal 2, or interaction with a graphical user interface, can be converted into preset control commands that change the state or parameters of the drone, carrier, or load.
- the target object selection performed by the user in the image displayed by the terminal may generate initial and/or expected target object information of the desired tracking, such as the initial and/or expected location and/or size of the target object.
- the control data may be generated based on non-user operated information, such as a remote or local data storage unit, or other computing device connected to the smart terminal 2.
- the drone is equipped with a pan/tilt, and the pan/tilt is equipped with an imaging device.
- an unmanned aerial vehicle such as the drone can continuously capture the target object while moving to certain positions or azimuths.
- the image captured by the camera device including the target object can be transmitted back to a certain terrestrial device through the wireless link.
- the image including the target object captured by the drone can be transmitted to the smartphone or tablet through the wireless link.
- such smart terminals 2 establish a communication link with the drone, or directly with the camera device, before receiving the picture including the target object.
- the target object can be an object specified by the user, such as an environmental object.
- the picture captured by the camera device can be displayed in a user interface, and the user selects an object as the target object by a click operation on the picture displayed in the user interface.
- a user can select a tree, an animal, or an object of a certain area as a target object.
- the user can also input only the picture features of certain objects, such as a face feature or a shape feature of an object; the corresponding processing module 204 then processes the picture to find the person or object matching the picture feature, and the found person or object is shot as the target.
- the target object may be a stationary object, an object that does not move during a period of continuous shooting, or an object whose moving speed during continuous shooting is much smaller than that of the movable device such as the drone, for example, such that the speed difference between the two is less than a preset threshold.
- in order to better achieve continuous shooting at multiple angles, the pan/tilt can be a three-axis pan/tilt capable of rotating about the yaw, pitch, and roll axes.
- the pan/tilt head may also be a two-axis pan/tilt head rotatable about the pitch and roll axes; that is, the pan/tilt head itself includes pitch and roll degrees of freedom.
- in this case, the yaw direction of the drone can be controlled to change the attitude of the two-axis pan/tilt in the yaw direction; that is, the pan/tilt effectively includes another degree of freedom, namely the yaw angle of the drone.
- the imaging device may be a device having an image capturing function such as a camera or an image sensor.
- the shooting control method and apparatus of the present invention will be further described below by taking the shooting control method and apparatus for controlling the aerial photography of the drone as an example.
- the embodiment of the invention provides a shooting control method, which can be applied to the drone side 1.
- the method may be implemented by a dedicated control device, or may be implemented by a flight controller of the drone, or may be implemented by a pan/tilt controller.
- the shooting control method may include the following steps:
- Step S101 receiving a start instruction, where the start instruction includes a flight mode of the drone;
- the start command can be transmitted from the smart terminal 2 to the drone side 1.
- the smart terminal 2 may include a user interface.
- the user interface is provided with an operation button for generating a start instruction, and the operation button may be a physical button or a virtual button.
- the flight mode includes at least one of: an oblique line mode, a surround mode, a spiral mode, a skyrocket mode, and a comet surround mode. Each flight mode includes a corresponding flight strategy (the flight strategy corresponding to each flight mode is specifically explained in step S104) that is used to direct the flight of the drone, thereby realizing one-touch control of the drone to fly according to the required flight strategy.
- this way of controlling the drone's flight is more precise and convenient, without the need for complex joystick control to achieve the flight.
- the flight mode may also include other flight modes, such as a straight line mode.
- the flight mode may be a default flight mode, wherein the default flight mode may be a preset one flight mode or a preset combination of multiple flight modes. Specifically, after pressing the operation button that generates the start instruction, the smart terminal 2 selects the default flight mode and generates the start instruction according to the default flight mode.
- the flight mode may be a flight mode input by a user.
- the user can select the flight mode of the drone as needed.
- the smart terminal 2 presets a plurality of flight modes for the user to select from; the user can select one or more of the selectable flight modes provided by the smart terminal 2 as needed, thereby instructing the drone to fly in different flight modes and obtain shooting pictures from different viewing angles.
- each flight mode may further include at least one of a corresponding flight path and a flight speed, thereby instructing the drone to automatically complete the flight under each flight mode according to the flight path and/or the flight speed. The flight path and flight speed corresponding to each flight mode can be set according to actual needs, thereby meeting the diverse needs of users, as in the sketch below.
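A minimal sketch of how a flight mode with its optional path and speed might be represented in the start instruction (field names and defaults are illustrative assumptions, not the patent's format):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FlightMode:
    """One-touch flight mode carried in the start instruction."""
    name: str                               # e.g. "oblique", "surround", "spiral"
    speed_mps: float = 5.0                  # preset flight speed, if any
    path: List[Tuple[float, float, float]] = field(default_factory=list)

# Default mode the terminal could fall back to when the user presses the
# one-touch start button without explicitly choosing a flight mode.
DEFAULT_MODE = FlightMode(name="surround")
```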
- Step S102 Control the drone to fly autonomously according to the flight mode
- step S102 is performed after the execution of step S101, thereby implementing automatic control of the drone to realize a relatively complicated flight trajectory.
- the drone is autonomously flying according to a flight strategy in the flight mode.
- the method further includes: controlling the imaging device to record a video in the flight mode, and transmitting the video data to the smart terminal 2, thereby acquiring video data of the aerial photography of the drone.
- in each flight mode, the drone stores the video data captured by the current camera device (i.e., the original data stream) in real time, compresses the original data stream in real time to generate a backhaul video stream, and transmits the backhaul video stream to the smart terminal 2, so that the smart terminal 2 displays the image currently taken by the drone in real time.
- Step S103 acquiring, in the flight mode, location information of the target object, and obtaining, according to the target object identified in the image captured by the imaging device, orientation information of the target object relative to the drone;
- the position information of the target object refers to the absolute position information of the target object, for example, the coordinate value of the target object in the northeast coordinate system.
- the orientation information of the target object relative to the drone is the direction of the target object relative to the drone. In an embodiment, the orientation information may not include distance information of the target object and the drone.
- the physical coordinate system of the image captured by the imaging device is set to XOY, wherein the origin of the physical coordinate system is at the optical axis position of the photographing device;
- the physical coordinate system XOY includes an X axis and a Y axis.
- the X axis corresponds to a yaw direction of the pan/tilt
- the Y axis corresponds to a pitch direction of the pan/tilt.
- the acquiring of the location information of the target object may include the following steps: acquiring an information set including at least two sets of shooting information, and determining the location information of the target object based on at least two sets of shooting information selected from the information set. The positions corresponding to the shooting position information in the selected sets of shooting information differ from one another, and each set of shooting information includes the shooting position information and shooting angle information recorded when the target object was captured.
- the captured image may be analyzed and identified by image recognition technology, which may recognize each captured picture based on features such as grayscale and texture, in order to find the target object and continuously shoot the target object.
- in the process of continuous shooting, the target object may be lost, and there are multiple possible reasons for the loss. Specifically, after the target object is occluded by another object, image recognition based on features such as grayscale and texture may be unable to find the target object, resulting in its loss; or, if the distance between the drone and the target object becomes large, the grayscale, texture, and other features of the target object in the captured image may be insufficient for the target object to be recognized from the image, again resulting in its loss. There may also be other cases in which the target object is lost, for example, the lens of the imaging device is exposed to strong light so that features such as grayscale and texture in the captured image are weak, or the module performing the image recognition processing is faulty.
- the above-mentioned loss of the target object means that the target object cannot be determined on the screen.
- the photographing information at the time of capturing a picture that satisfies the condition is recorded.
- a picture satisfying the condition means a captured image in which the target object can be accurately recognized based on image recognition technology.
- the shooting information recorded at the time of shooting includes shooting position information and shooting angle information. The shooting position information indicates the position of the imaging device when it captures the target object and may be the positioning information of the drone, such as GPS coordinates. The shooting angle information indicates the orientation of the target object relative to the imaging device when the target object is captured; this orientation can be comprehensively calculated from the attitude angles of the gimbal (the yaw angle and the pitch angle) and the display position of the target object in the captured picture.
- the embodiment of the present invention detects at least two screens that satisfy the condition, and records corresponding photographing information.
- the recorded shooting information constitutes an information set, so that the position information of the target object can be calculated from the shooting information; when the target object is lost, or when shooting needs to be performed directly based on position, the user's continuous shooting needs can thus be satisfied to some extent.
- the positions corresponding to the shooting position information included in each group of shooting information in the information set are different.
- the shooting position information includes the collected position coordinates of the drone, and the shooting angle information includes an angle calculated from the attitude information of the pan/tilt and the position of the target object in the captured image.
- referring to FIG. 2a and FIG. 2b, if the target object is in the central area of the picture, the pitch angle in the shooting angle information may simply be the pitch angle of the gimbal, and the yaw angle in the shooting angle information the yaw angle of the gimbal. If the target object is not in the central area, the offset angles of the target object with respect to the X axis and the Y axis of the picture can be determined from the corresponding pixel distances (e.g., dp1, i.e., d_rows in FIG. 2a); the pitch angle in the shooting angle information is then the pitch angle of the gimbal plus the offset angle with respect to the X axis of the picture, and the yaw angle in the shooting angle information is the yaw angle of the gimbal plus the offset angle with respect to the Y axis of the picture.
- FIG. 2a and FIG. 2b show the physical coordinate system of the picture and the horizontal field of view (HFOV) and vertical field of view (VFOV) of the imaging device. Based on the ratios of the pixel distances of the target object's center point from the X axis and from the Y axis, together with the corresponding angles of view, the offset angle with respect to the X axis of the picture and the offset angle with respect to the Y axis of the picture can be obtained.
- FIGS. 6a, 6b, and 6c show the positional relationship between the imaging apparatus and the shooting scene, from which the relationship between the target object and the field of view (FOV) of the imaging apparatus can be seen.
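Under a linear pixel-to-angle approximation, the shooting angle information can be computed roughly as follows (a sketch: sign conventions and the small-angle mapping are assumptions; the patent only states that the offsets come from the pixel-distance ratios and the corresponding angles of view):

```python
def shooting_angles(gimbal_yaw, gimbal_pitch,
                    target_px, target_py,   # target centre, pixels
                    width, height,          # image size, pixels
                    hfov_deg, vfov_deg):
    """Gimbal attitude plus the angular offset of the target centre from
    the optical axis, mapped linearly from pixel-distance ratios."""
    d_cols = target_px - width / 2.0    # offset from the picture's Y axis
    d_rows = target_py - height / 2.0   # offset from the picture's X axis
    yaw = gimbal_yaw + (d_cols / width) * hfov_deg
    pitch = gimbal_pitch + (d_rows / height) * vfov_deg
    return yaw, pitch
```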
- the selection rule for selecting at least two sets of shooting information from the information set includes: selecting shooting information based on the separation distance calculated from the shooting position information in the shooting information; and/or selecting shooting information based on the separation angle calculated from the shooting angle information in the shooting information.
- the condition for performing continuous shooting based on location may include: receiving a control instruction issued by the user for position-based continuous shooting, or being able to calculate the position coordinates of the target object sufficiently accurately based on the information in the already recorded information set.
- the calculation of the location information of the target object is described below by taking only two sets of shooting information as an example.
- let the coordinates of the target object be t(tx, ty); the shooting position information in the first selected group of shooting information is d1(d1x, d1y), with yaw angle yaw1 in its shooting angle information, and the shooting position information in the second group is d2(d2x, d2y), with yaw angle yaw2 in its shooting angle information.
- the pitch angle of the shooting angle information of the first group of shooting information is pitch1
- the pitch angle of the shooting angle information of the second group of shooting information is pitch2.
- the location information of the target object includes the calculated coordinate t.
- d1 and d2 may be positioning coordinates collected by the positioning module in the drone, for example, GPS coordinates obtained by the GPS positioning module in the drone.
- the yaw angle and the pitch angle in the shooting angle information are calculated, respectively, from the yaw angle of the pan/tilt together with the distance of the target object's picture position from the Y axis of the picture, and from the pitch angle of the pan/tilt together with the distance of the target object's picture position from the X axis of the picture, at the moment a picture in which the target object can be recognized is captured.
- for the specific calculation manner, reference may be made to the corresponding description of FIG. 2a and FIG. 2b.
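A sketch of the two-bearing calculation, intersecting the rays from d1 and d2 toward the target object (the yaw convention here, counter-clockwise from the x-axis of the local ground frame, is an assumption; a north-referenced convention would swap sin and cos):

```python
import math

def bearing_to_unit(yaw_deg):
    """Unit direction vector for a yaw bearing (assumed convention)."""
    rad = math.radians(yaw_deg)
    return math.cos(rad), math.sin(rad)

def triangulate(d1, yaw1_deg, d2, yaw2_deg):
    """Intersect the bearing rays d1->target and d2->target."""
    u1x, u1y = bearing_to_unit(yaw1_deg)
    u2x, u2y = bearing_to_unit(yaw2_deg)
    denom = u1x * u2y - u1y * u2x        # zero when the bearings are parallel
    if abs(denom) < 1e-9:
        return None                      # baseline too short, no stable fix
    r1 = ((d2[0] - d1[0]) * u2y - (d2[1] - d1[1]) * u2x) / denom
    return d1[0] + r1 * u1x, d1[1] + r1 * u1y

# Two shooting-information records (hypothetical values):
print(triangulate((0.0, 0.0), 45.0, (10.0, 0.0), 135.0))  # -> (5.0, 5.0)
```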
- in an embodiment, determining the location information of the target object based on the at least two sets of shooting information selected from the information set comprises: determining at least two pieces of initial position estimation information of the target object based on at least three sets of shooting information; and determining the location information of the target object based on the pieces of initial position estimation information.
- when the initial position estimation information is determined based on the at least three sets of shooting information, one piece of initial position estimation information may be determined from any two of the at least three sets of shooting information, and the calculation of each piece of initial position estimation information may follow the calculation of location information described in the embodiment above.
- the determined location information of the target object may be one piece randomly selected from the plurality of pieces of initial position estimation information, or an average obtained by averaging the position coordinates corresponding to the multiple pieces of initial position estimation information. It may also be position information determined according to other rules; for example, the initial position estimation information calculated from the two sets of shooting information with the largest separation distance and/or the largest separation angle may be taken as the position information.
- when the position change amplitude between the positions corresponding to at least two pieces of the determined initial position estimation information meets the preset change amplitude requirement, it is determined that the stability condition is satisfied.
- the position change amplitude mainly refers to the separation distance between the positions, and meeting the position change amplitude requirement mainly means that the multiple separation distances are all within a preset numerical range. Based on the position change amplitude between two or more pieces of initial position estimation information, it can be determined whether the calculated position estimate of the target object is stable; the smaller the position change amplitude, the more accurate the calculated initial position estimation information, and vice versa.
- if the selected shooting information is inaccurate, the obtained initial position estimation information will be inaccurate, accurate position information cannot be determined, and consequently the shooting angle cannot be adjusted based on the position information, nor can the target object be continuously shot based on the position information.
- there are several cases in which the position change between the pieces of initial position estimation information is large; for example, the target object is stationary, but when the information set was acquired, one or more sets of shooting information contained inaccurate shooting position information or shooting angle information, making the resulting location information inaccurate. Therefore, when determining the location information of the target object, the calculation is performed based on the calculated plurality of pieces of initial position estimation information; for example, the multiple pieces may be averaged as described above, and the average taken as the location information of the target object, as in the sketch below.
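A sketch of the stability check and averaging described above (the spread threshold is an assumed value in metres):

```python
import itertools
import math

def fuse_estimates(estimates, max_spread=5.0):
    """Average several initial position estimates of the target object,
    accepting the result only when every pairwise separation distance is
    within the preset range."""
    for (x1, y1), (x2, y2) in itertools.combinations(estimates, 2):
        if math.hypot(x2 - x1, y2 - y1) > max_spread:
            return None   # unstable: keep collecting shooting information
    n = len(estimates)
    return (sum(p[0] for p in estimates) / n,
            sum(p[1] for p in estimates) / n)
```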
- in another embodiment, acquiring the location information of the target object includes: acquiring the location information of the smart terminal 2, where the smart terminal 2 is a terminal that communicates with the drone, and taking that location information as the location information of the target object.
- for example, the smart terminal 2 is a GPS positioning device worn by the target object; the GPS positioning device may send the positioning information of the target object that it detects to the drone side 1 at a certain frequency, or the drone side 1 may query the GPS positioning device as needed to obtain the positioning information of the target object.
- obtaining the orientation information of the target object relative to the drone according to the target object identified in the image captured by the imaging device comprises: acquiring feature information of the target object to be tracked; and, according to the feature information, identifying the target object in the captured image based on image recognition technology and obtaining the orientation information of the target object relative to the drone.
- the description of identifying the target object in the captured image based on the image recognition technology may refer to the description in the foregoing, and details are not described herein again.
- Step S104 Control a flight trajectory of the drone according to the location information and the flight mode
- the drone can fly according to the flight mode in the start instruction and the actual position information of the target object, thereby realizing different flight trajectories and obtaining pictures from angles that are otherwise difficult to capture, better suiting the user's needs.
- this embodiment is especially suitable for flight trajectories with strong regularity; it is difficult to realize a relatively complicated and highly regular flight trajectory by manually operating the joystick to control the drone.
- in an embodiment, the flight strategy corresponding to the oblique line mode may include: controlling, according to the position information, the drone to first fly along a horizontal plane (i.e., parallel to the ground) and then fly along a plane at a certain angle to the horizontal plane. The size of the angle can be set as needed, for example 45°, so that the target object is photographed at different angles and a more richly composed shooting picture is obtained.
- controlling the UAV to fly along the horizontal plane means that the UAV only has a flying speed in a horizontal direction, and there is no flying speed in a vertical direction (ie, a direction perpendicular to the ground).
- the step of controlling the drone to first fly along a horizontal plane and then fly along a plane at a certain angle to the horizontal plane may include: controlling the drone to fly along the horizontal plane according to the position information; and, upon determining that the angle between the line connecting the lowest point of the target object to the center of the drone and the line connecting the highest point of the target object to the center of the drone is less than a preset multiple of the angle of view of the camera device, controlling the drone to fly along a plane at a certain angle to the horizontal plane according to the position information, wherein the preset multiple is ≤ 1, thereby capturing a picture with a more pleasing composition.
- the controlling of the drone to fly along a plane at a certain angle to the horizontal plane includes: controlling the drone to fly away from the target object along the direction of the line connecting the target object and the drone.
- the line connecting the target object and the drone can refer to a line from any position on the target object to any position on the drone.
- the connection of the target object to the drone refers to the connection between the center position of the target object and the center position of the drone.
- the rule for determining the center position of the target object and the center position of the drone can be set as needed. Taking the center position of the target object as an example, the target object can be enclosed in a regular shape (for example, a rectangle, a square, a pentagon, a circle, etc.), and the center of that regular shape is taken as the center position of the target object.
- in another embodiment, the flight strategy corresponding to the oblique line mode includes: controlling, according to the position information, the drone to fly away from the target object along an S-shaped curve, thereby capturing a picture with a more pleasing composition.
- the degree of bending of the S-shaped curve can be set as needed to meet the needs of shooting.
- the lowest point and the highest point of the target object are the position on the target object closest to the ground and the position on the target object farthest from the ground.
- the angle between the line connecting the lowest point of the target object to the center of the drone and the line connecting the highest point of the target object to the center of the drone can also be referred to as the angle of the target object relative to the drone. For example, if the target object is a person, the angle of the person relative to the drone is the angle between the line from the person's lowest point to the center of the drone and the line from the person's highest point to the center of the drone.
- for example, with the preset multiple set to 1/3 and the target object located on the ground, when the angle of the target object relative to the drone is less than 1/3 of the angle of view of the imaging device, the drone flies away from the target object along the direction of the line connecting the target object and the drone. This makes the horizon in the picture appear in the upper 1/3 of the picture (i.e., the pixel distance of the horizon from the top edge is 1/3 of the total pixel distance in the Y direction of the picture's physical coordinate system) while the target object still appears in the captured picture, resulting in a more pleasing composition; see the sketch below.
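A sketch of the angle test that triggers the oblique leg, working in the vertical plane containing the drone and the target object (a simplifying assumption; points are (horizontal distance, height) pairs):

```python
import math

def below_fov_fraction(drone, target_low, target_high,
                       vfov_deg, preset_multiple=1.0 / 3.0):
    """True once the angle subtended by the target object, seen from the
    drone centre, drops below preset_multiple * vertical field of view."""
    def elevation(p):
        return math.atan2(p[1] - drone[1], p[0] - drone[0])
    subtended = abs(elevation(target_high) - elevation(target_low))
    return subtended < preset_multiple * math.radians(vfov_deg)

# Drone 40 m away at 30 m height; a 1.8 m person standing on the ground:
print(below_fov_fraction((0.0, 30.0), (40.0, 0.0), (40.0, 1.8), vfov_deg=60.0))
```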
- the flight strategy corresponding to the surround mode comprises: controlling, according to the position information, the drone to fly around the target object at a specified distance.
- the drone of the present embodiment centers on the target object and performs a circular motion around it, thereby shooting the target object through 360°.
- the shape of the flight trajectory around the target object can be selected as needed.
- the flight path of the surrounding target object may be circular.
- the flight path of the surrounding target object may be elliptical.
- the flight around the target object may also follow other trajectories similar to a circle or an ellipse.
- the specified distance is used to indicate the distance of the drone from the target object at each position.
- In some examples, the specified distance is a default distance, that is, the flight strategy corresponding to the surround mode includes a default distance.
- the specified distance is distance information input by the user, that is, the distance information of the drone around the target object is set by the user according to actual needs, thereby satisfying different user requirements.
- the user may input a specified distance corresponding to the surround mode on the smart terminal 2 to indicate distance information of the drone flying around the target object.
- the specified distance is the distance between the drone and the target object at the current time.
- the distance between the drone and the target object at the current moment is calculated according to the position information of the target object and the positioning information of the drone at the current moment, thereby further improving the intelligence of the drone.
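- As an illustrative sketch of the surround strategy (a circular trajectory at the specified distance is assumed; the function and parameter names are not from the patent):

```python
import math

def surround_waypoints(target_xy, specified_distance, height, num_points=36):
    """Evenly spaced waypoints on a circle of the specified radius around the target."""
    tx, ty = target_xy
    return [(tx + specified_distance * math.cos(2 * math.pi * i / num_points),
             ty + specified_distance * math.sin(2 * math.pi * i / num_points),
             height)
            for i in range(num_points)]
```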
- the flight strategy corresponding to the spiral mode comprises: controlling, according to the position information, the drone to fly around the target object along a trajectory of a Fibonacci spiral, a proportional spiral, an equiangular spiral, an Archimedes spiral, or a spiral of another shape.
- the drone of the present embodiment is centered on the target object and shoots while flying along a Fibonacci spiral, a proportional spiral, an equiangular spiral, an Archimedes spiral, or a spiral of another shape.
- the flight strategy corresponding to the spiral mode further includes: controlling, according to the position information, the drone to fly around the target object along a Fibonacci spiral, a proportional spiral, an equiangular spiral, an Archimedes spiral, or a spiral of another shape, while controlling the drone to rise or fall vertically at a preset rate.
- by controlling the drone to rise or fall in the direction vertical to the ground during flight, the target object is photographed from more angles, improving the content richness of the captured picture.
- the flying speed of the drone rising or falling can be set according to actual needs.
- In some examples, the drone, according to the position information, flies around the target object along the horizontal plane in a Fibonacci spiral, a proportional spiral, an equiangular spiral, or an Archimedes spiral. The drone then has only a horizontal flying speed and zero vertical flying speed, thereby changing the size of the target object in the picture and increasing the richness of the shot.
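- A minimal sketch of one such trajectory, using the Archimedes spiral case (r = r0 + k*angle) with an optional vertical rate; logarithmic variants such as the proportional or equiangular spiral would use r = r0*exp(k*angle) instead (all names are illustrative):

```python
import math

def archimedes_spiral_waypoints(target_xy, r0, k, turns, z0=0.0,
                                climb_per_step=0.0, points_per_turn=36):
    """Waypoints on r = r0 + k * angle around the target, optionally
    rising (climb_per_step > 0) or falling (climb_per_step < 0) per step."""
    tx, ty = target_xy
    pts = []
    for i in range(int(turns * points_per_turn)):
        a = 2 * math.pi * i / points_per_turn
        r = r0 + k * a
        pts.append((tx + r * math.cos(a), ty + r * math.sin(a),
                    z0 + climb_per_step * i))
    return pts
```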
- the flight strategy corresponding to the skyrocket mode includes: controlling, according to the position information, the drone to fly at a preset angle to a first designated position relative to the target object, and then controlling the drone to rise vertically from the ground.
- the preset angle, the first designated position, and the flying speed of the drone can be set according to actual needs, thereby capturing a variety of pictures.
- the first designated position is at a specific distance from a specified position of the target object and in a specific orientation relative to that specified position. In this embodiment, the first designated position may be set by the user as needed.
- In some examples, controlling the drone to fly at a preset angle to a first designated position relative to the target object comprises: controlling the drone to fly toward the target object to the first designated position. In some examples, it comprises: controlling the drone to fly away from the target object to the first designated position.
- In the skyrocket mode, the drone may be controlled to fly from an arbitrary starting point (i.e., the current position of the drone) to the first designated position, or the drone may first be controlled to fly to a specific starting point and then from that specific starting point to the first designated position. It should be noted that, in the latter case, the imaging device on the drone does not start recording until the drone reaches the specific starting point.
- the flight strategy corresponding to the comet surround mode includes: controlling, according to the position information, the drone to fly to a second designated position near the target object, fly around the target object from the second designated position, and then fly away from the target object.
- the second designated position may be set as needed; for example, the second designated position is at a specific distance from a specified position of the target object and in a specific orientation relative to that specified position, thereby capturing a variety of pictures.
- the number of laps the drone flies around the target object after reaching the second designated position may be set as needed, for example, one full circle, several circles, or less than one circle.
- In the comet surround mode, the drone may be controlled to fly from an arbitrary starting point (i.e., the current position of the drone) toward the target object to the second designated position, fly around the target object from the second designated position, and then fly away from the target object.
- Alternatively, in the comet surround mode, the drone may first be controlled to fly to a specific starting point, then fly from that specific starting point toward the target object to the second designated position, fly around the target object from the second designated position, and then fly away from the target object.
- the imaging device on the drone starts recording after the drone is located at the specific starting point.
- the flight trajectory of the drone may be controlled with the target object as the base point, or according to coordinates in the world coordinate system.
- controlling the drone to fly the trajectory corresponding to the flight strategy in real time is based on the premise that the target object is in the picture. In some examples, the aircraft may also first fly a trajectory that does not keep the target object in view, and then fly toward the position of the target object, to meet different flight requirements.
- Step S105: Control the attitude of the gimbal according to the orientation information, so that the target object is in the picture captured by the imaging device.
- the target object being located at a preset position in the captured picture means that a specified position of the target object is at the preset position in the captured picture.
- In some examples, the specified position of the target object refers to the center position of the target object; the target object being located at the preset position in the captured picture then means that the center position of the target object is at the preset position in the captured picture.
- the preset position may be generated by the user directly clicking on any location in the user interface of the smart terminal 2 that displays the captured picture; that is, the preset position is the click position input by the user on the user interface.
- the preset position may also be a default position at which the specified position of the target object is displayed in the captured picture.
- In some examples, the size of the target object displayed in the captured picture is always the preset size.
- the orientation information may be the center position information of the preset size, or other position information (e.g., vertex position information) in the area corresponding to the preset size.
- the size of the target object displayed in the captured picture refers to the product of the pixel height and the pixel width displayed by the target object in the captured picture.
- the preset size may be a size box that the user inputs directly on the user interface of the smart terminal 2. In some examples, the preset size can be a default size box.
- the target object is located in the size frame in the captured picture during subsequent shooting.
- the size frame is sized to be able to surround the target object in the captured picture, thereby obtaining a composition that satisfies the user's needs.
- the size frame is a regular shape such as a rectangle, a square, or the like.
- the posture of the pan/tilt may be controlled according to the deviation of the actual position of the target object in the screen from the position to be displayed, so that the target object is in the captured picture.
- the position to be displayed (i.e., the preset position) is the position at which the target object should be displayed in the captured picture.
- If the actual position of the target object in the picture deviates left or right from the position to be displayed, the yaw angle of the gimbal is controlled to keep the target object at the position to be displayed; if the actual position deviates up or down from the position to be displayed, the pitch angle of the gimbal is controlled to keep the target object at the position to be displayed.
- For example, the position at which the center of the target object to be tracked is to be displayed in the captured picture is P(u, v), where u is the pixel coordinate on the X-axis and v is the pixel coordinate on the Y-axis; the picture size is (W, H), where W is the picture pixel width and H is the picture pixel height. If the upper-left corner of the picture is taken as the origin and the actual pixel position of the target center is denoted (u', v'), the angular velocity Yx of the yaw-axis rotation of the gimbal may take the proportional form Yx = α*(u' - u)/W, where α is a constant and α ∈ R (R represents the real numbers); the angular velocity Yy of the pitch-axis rotation of the gimbal is correspondingly Yy = α*(v' - v)/H.
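- The original formula images are not reproduced in this text; the sketch below is a reconstruction under the stated definitions, with the actual on-screen position (u', v') of the target center introduced as an assumption:

```python
def gimbal_angular_rates(actual_px, desired_px, frame_size, alpha):
    """Yaw/pitch angular velocities that drive the target's actual pixel
    position (u', v') toward its desired display position P(u, v),
    proportional to the normalized pixel error (alpha is a real constant)."""
    (u_act, v_act), (u, v) = actual_px, desired_px
    w, h = frame_size
    yaw_rate = alpha * (u_act - u) / w    # Yx
    pitch_rate = alpha * (v_act - v) / h  # Yy
    return yaw_rate, pitch_rate
```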
- the focal length of the imaging device can be adjusted according to the size of the target object in the captured picture.
- For example, at an initialization moment (a certain time before the smart terminal 2 sends the start command to the drone side 1), the pixel area (i.e., the preset size) of the target object in the captured picture is S, where S is defined as the pixel height of the target object multiplied by the pixel width of the target object.
- the adjustment speed F of the focal length of the imaging device may take the form F = β*(S - s)/S, where β is a constant, β ∈ R (R represents the real numbers), and s is the pixel area of the target object in the current picture. When s is smaller than S, the focal length of the imaging device is lengthened; otherwise, the focal length of the imaging device is shortened.
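- Again as a reconstruction (the formula image is not reproduced; s, the current pixel area of the target, is an introduced symbol), the zoom-speed relation can be sketched as:

```python
def focal_adjust_speed(preset_area_s_cap, current_area_s, beta):
    """Zoom speed F: positive (lengthen the focal length) when the target's
    current pixel area s has shrunk below the preset area S, negative
    (shorten it) when s has grown beyond S; beta is a real constant."""
    return beta * (preset_area_s_cap - current_area_s) / preset_area_s_cap
```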
- the controlling the attitude of the pan/tilt includes controlling at least one of a pitch angle, a yaw angle, and a roll angle of the pan/tilt, thereby controlling a position of the target object in the captured picture.
- the pan/tilt is a three-axis pan/tilt, and the attitude of the gimbal can be changed by controlling at least one of a pitch axis, a yaw axis, and a roll axis of the three-axis pan/tilt.
- In some examples, the gimbal and the drone are fixed to each other on the heading axis; controlling the attitude of the gimbal then includes: controlling the pitch angle and/or roll angle of the gimbal, and controlling the heading angle of the drone to control the yaw angle of the gimbal.
- That is, the gimbal itself provides two degrees of freedom, the pitch angle and the roll angle, and the remaining degree of freedom, the yaw angle, is replaced by the heading angle of the drone; controlling the heading angle of the drone thus achieves control of the yaw angle of the gimbal.
- In some examples, determining the pitch angle and/or yaw angle of the gimbal includes determining at least one of the pitch angle and the yaw angle of the gimbal according to the position at which a specified position of the background identifier is to be displayed in the captured picture.
- the position at which the specified position of the background identifier is to be displayed in the captured picture can be set, thereby satisfying diverse composition requirements and enhancing the richness and aesthetics of the captured image.
- the background identifier may include at least one of the ground, the sky, the sea surface, a building, and other background identifiers. Referring to FIG., the user can set the horizon to be approximately perpendicular to the Y-axis of the captured picture's physical coordinate system and located at the upper 1/3 of the Y-axis; the drone can then calculate the pitch angle of the gimbal according to the position at which the horizon is to be displayed in the captured picture, thereby controlling the pitch angle of the gimbal so that the horizon is displayed at the upper 1/3 of the Y-axis for better composition.
- determining at least one of the pitch angle and the yaw angle of the gimbal according to the position at which the specified position of the background identifier is to be displayed in the captured picture includes: acquiring a first total pixel distance of the captured picture in a first direction, and the pixel distance, in the first direction, from the position at which the specified position of the background identifier is to be displayed to the edge of the picture, wherein the first direction corresponds to the pitch direction or the yaw direction of the gimbal; and determining the pitch angle and/or yaw angle of the gimbal according to the first total pixel distance, the pixel distance, and the vertical field-of-view angle or horizontal field-of-view angle of the imaging device.
- For example, in the first direction, the pixel distance from the horizon to the upper edge of the captured picture along the Y-axis is row, the first total pixel distance (i.e., the picture height) is row_size, and the vertical field-of-view angle of the imaging device is VFOV; the pitch angle pitch of the gimbal may then be calculated as pitch = (row/row_size - 1/2)*VFOV.
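- A code transcription of this horizon-placement rule (the sign convention, negative pitch meaning nose down, is an assumption consistent with the later worked example):

```python
def pitch_for_horizon(row, row_size, vfov):
    """Gimbal pitch placing the horizon `row` pixels below the top edge;
    row = row_size / 3 puts the horizon at the upper third of the picture.
    Negative result = camera pitched down (assumed convention); radians."""
    return (row / row_size - 0.5) * vfov
```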
- In some examples, the process of determining the pitch angle and/or yaw angle of the gimbal includes: acquiring a preset height angle and/or horizontal angle of the shooting position; determining an offset angle of the target object relative to the center line of the captured picture in a first direction (i.e., the X-axis or Y-axis of the picture's physical coordinate system), wherein the first direction corresponds to the pitch direction or the yaw direction of the gimbal; and determining the pitch angle and/or yaw angle of the gimbal according to the offset angle and the height angle and/or horizontal angle.
- the height angle and/or horizontal angle of the shooting position is directly set by the user, which determines the shooting position information (x, y, z) corresponding to the shooting position and the direction angle from the shooting position to the target object. The height angle is defined as arctan(z/x) and the horizontal angle as arctan(y/x); by setting the height angle the user sets the ratio of z to x, and by setting the horizontal angle the user sets the ratio of y to x.
- the determining of the offset angle of the target object relative to the center line of the captured picture in the first direction comprises: acquiring a first total pixel distance of the captured picture in the first direction and the vertical field-of-view angle or horizontal field-of-view angle of the imaging device; determining a first offset pixel distance of the target object from the center line of the captured picture in the first direction; and determining, according to the first total pixel distance, the vertical or horizontal field-of-view angle, and the first offset pixel distance, the offset angle of the target object relative to the center line of the captured picture in the first direction.
- If the first direction corresponds to the pitch direction of the gimbal, the offset angle of the target object relative to the center line of the captured picture in the first direction is determined according to the vertical field-of-view angle;
- if the first direction corresponds to the yaw direction, the offset angle is determined according to the horizontal field-of-view angle.
- For example, the first offset pixel distance of the center of the target object from the X-axis is d_row, the height angle of the shooting position set by the user is theta = arctan(z/x), the first total pixel distance of the picture (i.e., the picture height) is row_size, and the vertical field-of-view angle of the imaging device is VFOV. The pitch angle pitch of the gimbal is then calculated as: pitch = -arctan(z/x) - d_row/row_size*VFOV.
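- The same formula in code form (a direct transcription of pitch = -arctan(z/x) - d_row/row_size*VFOV; angles in radians):

```python
import math

def gimbal_pitch(x, z, d_row, row_size, vfov):
    """pitch = -arctan(z / x) - d_row / row_size * VFOV: the user-set height
    angle of the shooting position plus the angular offset of the target
    from the picture's horizontal center line (angles in radians)."""
    return -math.atan2(z, x) - d_row / row_size * vfov
```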
- In some examples, controlling the flight trajectory of the drone according to the position information and the flight mode comprises: determining the distance between the target object and the imaging device; and controlling the flight trajectory of the drone according to the position information, the flight mode, and the distance between the target object and the imaging device, so that the height at which the target object is displayed in the picture is a specific height, meeting the user's composition requirements.
- the determining of the distance between the target object and the imaging device includes: acquiring the actual height of the target object and a first total pixel distance of the captured picture in the first direction; acquiring the pixel distance, in the first direction, at which the actual height of the target object is to be displayed in the captured picture, wherein the first direction corresponds to the pitch direction of the gimbal; and determining the distance between the target object and the imaging device according to the actual height of the target object, the first total pixel distance, and the pixel distance corresponding to the actual height of the target object in the first direction of the captured picture.
- For example, the actual height of the target object is h, the height angle of the shooting position set by the user is theta, the picture height is row_size, and the vertical field-of-view angle of the imaging device is VFOV. If the user wants the height at which the target object is displayed in the captured picture (i.e., its height in the Y-axis direction) to be tar_rows, the distance d satisfies approximately: d = h*cos(theta)*row_size/(tar_rows*VFOV).
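- In code (a small-angle reconstruction of the missing formula; the cos(theta) foreshortening term is an assumption):

```python
import math

def distance_for_target_pixel_height(h, theta, row_size, vfov, tar_rows):
    """Distance d at which a target of real height h, viewed at height angle
    theta, spans tar_rows pixels of a row_size-pixel picture (small-angle
    approximation; angles in radians)."""
    return h * math.cos(theta) * row_size / (tar_rows * vfov)
```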
- In some examples, the method further includes: acquiring the height angle of the preset shooting position, the horizontal field-of-view angle of the imaging device, and a second total pixel distance of the captured picture in a second direction, wherein the second direction corresponds to the yaw direction of the gimbal; determining a second pixel offset distance of the target object from the center line of the captured picture in the second direction; determining a moving distance of the gimbal in the pitch direction according to the second pixel offset distance, the height angle, the horizontal field-of-view angle, the second total pixel distance, and the distance between the target object and the imaging device; and controlling the attitude of the gimbal according to the moving distance of the gimbal in the pitch direction.
- For example, the second pixel offset distance of the target object from the center line is d_col, the second total pixel distance of the picture in the X-axis direction is col_size, the horizontal field-of-view angle of the imaging device is HFOV, the distance between the drone and the target object is d, and the height angle of the shooting position set by the user is theta. The moving distance y of the gimbal in the pitch direction may then take the form y = d*cos(theta)*tan(d_col/col_size*HFOV).
- In some examples, controlling the attitude of the gimbal includes: acquiring the horizontal field-of-view angle of the imaging device and a second total pixel distance of the captured picture in the second direction, wherein the second direction corresponds to the yaw direction of the gimbal; determining a second pixel offset distance of the target object from the center line of the captured picture in the second direction; determining the yaw angle of the gimbal according to the second total pixel distance, the horizontal field-of-view angle, and the second pixel offset distance; and controlling the attitude of the gimbal according to the yaw angle.
- For example, the second pixel offset distance of the target object from the center line is d_col, the second total pixel distance of the picture in the X-axis direction is col_size, and the horizontal field-of-view angle of the imaging device is HFOV; the yaw angle of the gimbal is then d_col/col_size*HFOV.
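- This yaw correction is simple enough to state directly (a transcription of d_col/col_size*HFOV; angles in radians):

```python
def gimbal_yaw(d_col, col_size, hfov):
    """Yaw angle that turns the target, currently offset d_col pixels from
    the picture's vertical center line, back onto that center line."""
    return d_col / col_size * hfov
```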
- the composition in the above embodiments is based on the position at which the target object or the background identifier is to be displayed in the captured picture. In some examples, background identifiers such as the sky, buildings, and the sea surface may also be recognized by a CNN (Convolutional Neural Network) classification algorithm for better composition.
- In some examples, the user operating the drone is the target object and the drone takes off from the palm of the user; the position difference between the drone and the user at takeoff can then be ignored, and the takeoff position of the drone can be taken as the position of the target object.
- In some examples, the user operating the drone is the target object, the drone takes off from the user's palm, and the imaging device of the drone is installed directly in front of the fuselage. The user then needs to extend the arm so that the imaging device faces the user's face; by assuming a normal arm length, the size of the face can be used to infer the positional relationship between the aircraft and the user.
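- A rough sketch of that inference (pinhole small-angle model; the assumed real face height of 0.24 m and all names are illustrative assumptions, not values from the patent):

```python
def distance_from_face(face_px_height, row_size, vfov, real_face_h=0.24):
    """Infer the camera-to-user distance from the face's pixel height,
    assuming a typical real face height (meters) and the small-angle
    relation angular_size = face_px_height / row_size * VFOV (radians)."""
    angular_size = face_px_height / row_size * vfov
    return real_face_h / angular_size
```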
- By setting the flight mode, the drone can fly autonomously according to the set flight mode and the position information of the target object, so that the drone can realize relatively complicated flight trajectories, especially highly regular ones. The orientation information of the target object relative to the drone is obtained through image recognition, and the attitude of the gimbal is controlled accordingly so that the target object stays in the captured picture. Control of the drone and the gimbal is thus achieved without manual control by the operator, making the picture smoother and the composition richer and more precise.
- In some examples, the method further includes: when it is determined that the target object cannot be identified in the picture, replacing the step of controlling the attitude of the gimbal according to the orientation information with the step of controlling the attitude of the gimbal according to the position information.
- In some examples, controlling the flight trajectory of the drone according to the position information and the flight mode further comprises: controlling the drone to move to a reset position.
- After the drone completes the flight according to the flight strategy of the flight mode, it automatically moves to the reset position, so that the drone always returns to the same takeoff position.
- the reset position may be a positioning coordinate obtained by the drone through GPS positioning. It should be noted that if, during the movement to the reset position, the drone receives a control signal (for example, a stick operation signal) sent by an external device (such as a remote control device that controls the drone), it immediately terminates the current movement to the reset position.
- In some examples, the method further includes: if a stick operation signal sent by the external device is received, controlling at least one of the flight of the drone and the attitude of the gimbal according to the stick operation signal.
- the stick operation signal is generated by the user operating a remote control device that controls the drone.
- the stick operation signal may include a signal for controlling the drone to rise or fall vertically, a signal for controlling the drone to move away from or close to the target object, a signal for controlling the flight speed of the drone, and a signal for controlling the yaw angle of the gimbal.
- For example, the remote control device of the drone includes two joysticks that together provide four adjustment directions. One joystick provides up/down and left-rotation/right-rotation operations, and the other provides forward/backward and left/right operations. By default, up/down corresponds to raising/lowering the altitude of the drone, left-rotation/right-rotation corresponds to the yaw of the gimbal, left/right corresponds to the roll of the gimbal, and forward/backward corresponds to the pitch of the gimbal.
- When the drone is in the surround mode: controlling left-rotation/right-rotation adjusts the left-right composition of the target object in the captured picture, that is, the left-right position of the target object in the picture; controlling forward/backward respectively expands and reduces the surround radius of the drone relative to the target object; controlling left/right respectively accelerates and slows the flying speed of the drone around the target object; controlling up/down respectively raises and lowers the height of the drone around the target object.
- When the drone is in the oblique line mode: controlling left-rotation/right-rotation adjusts the left-right composition of the target object in the captured picture, that is, the left-right position of the target object in the picture; controlling forward/backward respectively accelerates and slows the flight speed of the drone; controlling left/right and up/down are invalid operations.
- When the drone is in the skyrocket mode: controlling left-rotation/right-rotation rotates the drone body, thereby rotating the lens of the imaging device to obtain a more beautiful rotating-lens picture centered on the target object; forward/backward and left/right controls are invalid; controlling up/down respectively accelerates and slows the rising speed of the drone.
- When the drone is in the spiral mode: controlling left-rotation/right-rotation adjusts the left-right composition of the target object in the captured picture, that is, the left-right position of the target object in the picture; controlling forward/backward respectively enlarges and reduces the spiral radius; controlling left/right respectively accelerates and slows the lateral speed of the spiral flight (i.e., in the direction parallel to the ground); controlling up/down respectively accelerates and slows the rising speed of the drone's spiral, or accelerates and slows the drone's spiral speed.
- the embodiment of the invention provides a shooting control method, which can be applied to the smart terminal 2 with the APP installed.
- the smart terminal can be communicatively connected to the drone.
- the shooting control method may include the following steps:
- Step S801: Receive a user instruction;
- the user instruction can be directly input by the user at the smart terminal 2.
- the smart terminal 2 includes an APP (application software) for the user to input a user instruction.
- the APP can be used to display a picture returned by the drone.
- the user instructions include determining a target object to be identified.
- In some examples, after receiving the user instruction, the method further includes: identifying feature information of the target object to be tracked in the currently displayed picture, wherein the feature information indicates how the target object is to be displayed in the captured picture.
- the user interface of the smart terminal displays in real time the picture taken by the imaging device at the current moment; the user directly clicks the target object to be identified on the current picture, and the smart terminal can then select the target object based on image recognition technology.
- the feature information of the target object may be a preset position of the target object, or a preset size of the target object, or information such as grayscale and texture, thereby facilitating subsequent tracking of the target object.
- the manner in which the user selects the target object to be identified includes: the user directly clicks on an object in the current moment of the user interface of the smart terminal, and the object is the target object to be identified.
- the manner in which the user selects the target object to be identified includes: the user encloses an object in the picture at the current moment of the user interface of the smart terminal in the form of a size box, and the enclosed object is the target object to be identified.
- the size box just surrounds the target object to be identified, or the size box is the smallest regular graphic frame (e.g., a rectangular or circular box) capable of surrounding the target object to be identified.
- the feature information may include a preset position or a preset size of the target object in the captured picture, thereby instructing the drone side 1 to control the gimbal attitude so that the target object is always at the preset position in the captured picture, or is always displayed at the preset size in the captured picture, to obtain a better composition effect.
- the preset position of the target object in the captured picture means that, when the user selects the target object to be identified, the center position of the target object (or another position of the target object) is at a preset position in the picture at the current moment (i.e., the moment the user selects the target object to be identified); the preset size of the target object in the captured picture refers to the product of the pixel height and the pixel width of the target object in the captured picture at the current moment.
- In some examples, the user instruction further includes: the position at which a specified position of the background identifier is to be displayed in the captured picture.
- the position at which the specified position of the background identifier is to be displayed in the captured picture can be set, thereby satisfying diverse composition requirements and enhancing the richness and aesthetics of the captured image.
- the background identifier may include at least one of the ground, the sky, the sea surface, a building, and other background identifiers.
- In some examples, the user instruction further includes: the height angle or horizontal angle of the shooting position, to further determine the pitch angle or yaw angle of the gimbal for better composition, so that the target object is at the preset position in the captured picture.
- In some examples, the user instruction further includes: at least one of the flight trajectory and the flight speed of the drone, thereby instructing the drone to automatically complete the flight of each flight mode at that flight trajectory and/or flight speed.
- the flight trajectory and flight speed corresponding to each flight mode can be set according to actual needs, thereby meeting the diverse needs of users.
- Step S802: Generate a start instruction according to the user instruction, where the start instruction includes the flight mode of the drone, and the start instruction is used to trigger the drone to fly autonomously according to the flight mode;
- the flight mode is a default flight mode.
- the default flight mode may be a preset one flight mode or a preset combination of multiple flight modes. Specifically, after receiving the user instruction (for example, the user presses an operation button or inputs some instruction information), the smart terminal 2 selects the default flight mode and generates the start instruction according to the default flight mode.
- In some examples, the user instruction includes a mode selection instruction, and the mode selection instruction includes the flight mode for instructing the flight of the drone.
- the user can select the flight mode of the drone as needed.
- the smart terminal 2 presets a plurality of flight modes for the user to select, and the user can select one or more of the selectable flight modes provided by the smart terminal 2, thereby instructing the drone to fly in different flight modes to obtain different viewing angles.
- the flight mode may include at least one of an oblique line mode, a surround mode, a spiral mode, a skyrocket mode, a comet surround mode, and other flight modes (e.g., a straight line mode). Each flight mode includes a corresponding flight strategy, and the flight strategy is used to instruct the flight of the drone. For the flight strategy corresponding to each flight mode, refer to the description in the first embodiment above.
- Step S803: Send the start instruction to the drone;
- Step S804: Receive and store the backhaul video stream returned by the drone in the flight mode.
- the drone stores in real time the video data captured by the imaging device at the current moment (that is, the original data stream), compresses the original data stream in real time, and generates a backhaul video stream to send to the smart terminal 2, so that the smart terminal 2 displays in real time the picture currently captured by the drone.
- In step S804, the smart terminal 2 buffers the backhaul video stream after receiving it, thereby obtaining the complete backhaul video stream of the drone in the flight mode.
- The user sets the flight mode on the smart terminal 2, so that the drone can fly autonomously according to the set flight mode and the position information of the target object; the drone can thus realize relatively complicated flight trajectories, especially highly regular ones, and can obtain the orientation information of the target object relative to the drone through image recognition, thereby controlling the attitude of the gimbal so that the target object is in the captured picture. Control of the drone and the gimbal is achieved without the operator manually controlling the remote control device, and the captured picture is smoother and the composition richer and more precise.
- The backhaul video stream transmitted during the flight of the drone is generally intended for the user to watch directly. Because the video stream transmitted by the drone to ground equipment (for example, smart terminals such as smartphones and tablet computers) during flight is generally large, it is difficult for users to directly share the backhaul video stream transmitted by the drone on a social network such as a circle of friends. As a result, most users need to manually cut the backhaul video stream transmitted by the drone to obtain a small video that is easy to share, and the way the user manually cuts the small video may not be professional enough, so the small video turns out poorly.
- Step S901: Process the backhaul video stream to generate a video picture of a first preset duration, where the first preset duration is less than the duration of the backhaul video stream.
- In this way, the backhaul video stream is converted into a small video that is easy to share (a video picture of the first preset duration) without the user having to cut it manually, and the user can quickly and easily share it on social media such as a circle of friends.
- the first preset duration may be set as needed, for example, 10 seconds, to obtain a small video for sharing.
- the small video in the embodiment of the present invention refers to a video whose duration is less than a specific duration (which can be set as needed).
- In some examples, a small video can also be a video whose size is smaller than a specific capacity (which can be set as needed).
- Step S901 is performed after it is determined that the drone meets the specified condition.
- the specified condition includes the drone completing flight in the flight mode.
- the smart terminal 2 thus receives the complete backhaul video stream of the drone in the flight mode, which facilitates the user's choice of how to process the entire backhaul video stream according to its information.
- the smart terminal 2 determines whether the drone completes the flight of the flight mode according to the returned video stream returned by the drone.
- the drone adds flight state information corresponding to the flight mode to the picture captured while flying in the flight mode, and performs real-time compression and the like on the original data stream carrying the flight state information before transmitting it to the smart terminal 2; the backhaul video stream obtained by the smart terminal 2 therefore also carries the flight state information.
- the smart terminal 2 can then determine whether the drone has completed the flight of the flight mode according to the flight state information in the backhaul video stream.
- The smart terminal 2 determines that the drone has completed the flight of the flight mode when the flight state information in the backhaul video stream changes from the flight state information corresponding to the flight mode to the flight state information of another flight mode, or when the backhaul video stream changes from carrying the flight state information of the flight mode to carrying no flight state information.
- In some examples, after the drone completes the flight of the flight mode, the smart terminal 2 receives information on the end of the flight mode sent by the drone, thereby determining that the drone has completed the flight of the flight mode.
- In some examples, the specified condition includes: receiving the backhaul video stream returned by the drone while flying according to the flight mode.
- the smart terminal 2 performs step S901 immediately after receiving the backhaul video stream transmitted by the drone while flying according to the flight mode, without waiting for the drone to finish executing the flight mode, thereby saving small-video generation time; the smart terminal 2 can thus finish generating the small video around the time the drone ends the flight of the flight mode.
- In some examples, step S901 includes: performing frame extraction processing on the backhaul video stream to generate a video picture of the first preset duration.
- In some examples, the performing of frame extraction processing on the backhaul video stream to generate a video picture of the first preset duration includes: performing frame extraction processing on the backhaul video stream according to at least one of the flight mode, flight speed, and flight direction of the drone, to generate a video picture of the first preset duration.
- In some examples, the performing of frame extraction processing on the backhaul video stream to generate a video picture of the first preset duration includes: performing frame extraction processing on the backhaul video stream according to the duration and the number of frames of the backhaul video stream, to generate a video picture of the first preset duration.
- According to the duration and number of frames of the backhaul video stream, a small video that fits the backhaul video stream more closely can be obtained, presenting a more complete picture of the drone's flight.
- the performing of frame extraction processing on the backhaul video stream according to its duration and number of frames to generate a video picture of the first preset duration includes: splitting the backhaul video stream into multiple segments to obtain a multi-segment backhaul video stream; performing frame extraction processing on part of the segments to obtain frame-extracted images of the corresponding segments; and generating a video picture of the first preset duration from the remaining segments together with the frame-extracted images of the corresponding segments.
- In some examples, the splitting of the backhaul video stream into multiple segments includes: splitting the backhaul video stream into at least three segments in order of shooting time. Performing frame extraction processing on part of the segments then includes: performing frame extraction processing on the segment(s) whose shooting time lies in the middle of the at least three segments, to obtain the frame-extracted images of the corresponding segments.
- In some examples, the performing of frame extraction processing on part of the segments includes: performing frame extraction on the corresponding segment of the backhaul video stream at a preset frame extraction rate to obtain the frame-extracted image of that segment.
- the corresponding segment of the video stream is frame-extracted uniformly, thereby avoiding unevenness of extraction that would make the video picture discontinuous.
- In some examples, the frame extraction rate of each segment of the multi-segment backhaul video stream is the same, further ensuring the continuity of the generated video picture and thus its smoothness.
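- A minimal sketch of this segment-and-extract scheme (three segments by shooting time, uniform extraction from the middle segment; the frame-list representation and all names are assumptions):

```python
def make_short_clip(frames, fps, target_seconds=10):
    """Split the backhaul frames into three segments by shooting time and
    uniformly extract frames from the middle segment so the result fits the
    target duration; the head and tail segments are kept intact."""
    target_frames = int(target_seconds * fps)
    if len(frames) <= target_frames:
        return list(frames)
    third = len(frames) // 3
    head, middle, tail = frames[:third], frames[third:2 * third], frames[2 * third:]
    keep = target_frames - len(head) - len(tail)
    if keep <= 0:
        # Head and tail alone exceed the budget; sample the whole stream uniformly.
        step = len(frames) / target_frames
        return [frames[int(i * step)] for i in range(target_frames)]
    step = len(middle) / keep  # > 1, so the extraction is uniform and even
    return head + [middle[int(i * step)] for i in range(keep)] + tail
```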
- In some examples, step S901 may further include: performing further compression processing on the backhaul video stream, thereby reducing the size of the backhaul video stream and obtaining a video picture that is easy to share.
- the method further comprises: transmitting the video picture to a remote terminal server to enable sharing of the small video.
- the remote terminal server may be a third-party website such as a video website (e.g., Youku or Tudou) or a social media network such as a circle of friends.
- In some examples, the sending of the video picture to the remote terminal server is performed immediately after step S901 is completed, thereby enabling fast sharing of the small video.
- In some examples, before the sending of the video picture to the remote terminal server, the method further includes: receiving a sharing instruction input by the user, where the sharing instruction includes the corresponding remote terminal server; and sending, according to the sharing instruction, the video picture to the remote terminal server, so that the small video can be shared flexibly according to the actual needs of the user.
- In addition, the quality of the backhaul video stream sent by the drone to the smart terminal 2 by means of image transmission may be poor, and accordingly the quality of the small video generated from it is also poor.
- the video picture generating method may further include the following steps:
- Step S1001: Acquire the original data stream captured by the drone;
- the drone stores in real time the video data captured by the imaging device at the current moment (that is, the original data stream), compresses the original data stream in real time, and generates a backhaul video stream sent to the smart terminal 2 through image transmission, so that the smart terminal 2 displays in real time the picture currently captured by the drone.
- the smart terminal 2 can also acquire the original data stream stored by the drone and use it to generate the small video.
- the original data stream captured by the drone is stored in a storage unit of the drone or the imaging device.
- the smart terminal 2 can directly read the original data stream captured by the drone stored in the storage unit.
- In step S901, the drone transmits the captured video stream to the smart terminal 2 by wireless communication during flight; because the communication distance between the drone and the smart terminal is long, the communication quality between the drone and the smart terminal 2 may be poor. In step S1001, by contrast, the smart terminal 2 can read the original data stream in the storage unit through wired communication, or read it directly while good wireless communication quality is ensured, thereby ensuring that the smart terminal 2 obtains an original data stream with good picture quality.
- the storage unit is a device capable of storing data, such as an SD card, a hard disk, or a magnetic disk.
- In some examples, the original data stream also carries a corresponding video tag, and the smart terminal 2 finds the corresponding original data stream in the storage unit according to the video tag.
- step S1001 is performed after the drone meets the specified condition.
- the specified condition includes the drone completing flight in the flight mode.
- the smart terminal 2 directly reads the original data stream captured by the drone stored in the storage unit, thereby obtaining an original data stream with good picture quality for processing, and generating a small video with good picture quality.
- Step S1002: Determine, according to the original data stream, the original video stream captured by the drone in the flight mode;
- In some examples, step S1002 includes: determining, in the original data stream according to the video stream tag corresponding to the flight mode, the original video stream captured by the drone in the flight mode. Through the video tag, the original video stream captured in the flight mode can be obtained from a large number of video streams more accurately and quickly, thereby generating the small video of the flight mode more quickly.
- Step S1003: Process the original video stream to generate a new video picture of a second preset duration, where the second preset duration is less than the duration of the original video stream.
- In some examples, step S1001, step S1002, and step S1003 are performed after determining that the resolution of the video picture obtained from the backhaul video stream is less than a preset resolution, thereby obtaining a new video picture of higher quality.
- In some examples, the method further comprises: sending the new video picture to a remote terminal server to enable sharing of the small video.
- the remote terminal server may be a third-party website such as a video website (e.g., Youku or Tudou) or a social media network such as a circle of friends.
- In some examples, the sending of the new video picture to the remote terminal server is performed immediately after step S1003 is completed, thereby enabling fast sharing of the small video.
- In some examples, before the sending of the new video picture to the remote terminal server, the method further includes: receiving a sharing instruction input by the user, where the sharing instruction includes the corresponding remote terminal server; and sending, according to the sharing instruction, the new video picture to the remote terminal server, thereby flexibly sharing the small video according to the actual needs of the user.
- In some examples, the smart terminal 2 performs step S1001, step S1002, and step S1003 in parallel with step S804 and step S901, thereby obtaining two video pictures for the user to select from, increasing the richness of the selection.
- In some examples, the method further comprises: sending at least one of the video picture generated in step S901 and the new video picture generated in step S1003 to the remote terminal server.
- In some examples, whichever of the video picture generated in step S901 and the new video picture generated in step S1003 has the higher resolution may be sent to the remote terminal server.
- In some examples, before the sending of at least one of the video picture generated in step S901 and the new video picture generated in step S1003 to the remote terminal server, the method further includes: receiving a sharing instruction input by the user, where the sharing instruction includes the corresponding remote terminal server and the identifier of the video to be shared, the identifier corresponding to at least one of the video picture generated in step S901 and the new video picture generated in step S1003; and sending, according to the sharing instruction, the corresponding one of the video picture generated in step S901 and the new video picture generated in step S1003 to the remote terminal server, thereby flexibly sharing the small video according to the actual needs of the user.
- the second preset duration may be set according to requirements, and optionally the second preset duration is equal to the first preset duration.
- the strategy used to process the original video stream in step S1003 is similar to the strategy used to process the backhaul video stream in step S901; for details, refer to the description of processing the backhaul video stream in step S901, which is not repeated here.
- the embodiment of the invention provides a photographing control device, which can be applied to the drone side 1.
- the photographing control apparatus may include a first processor 11, wherein the first processor 11 is configured to perform the steps of the photographing control method described in the first embodiment.
- the first processor 11 is configured to be communicatively connected to the smart terminal 2, so that the first processor 11 can receive the start instruction from the smart terminal 2 and can send the pictures taken by the drone, other data information of the drone, and the like to the smart terminal 2.
- the first processor 11 may be a controller in a dedicated control device, or the flight controller of the drone, or a gimbal controller.
- the embodiment of the invention provides a shooting control device, which can be applied to the smart terminal 2 with the APP installed.
- the photographing control apparatus may include a second processor 21, wherein the second processor 21 is configured to perform the steps of the photographing control method described in the second embodiment.
- the second processor 21 is configured to be communicatively connected to the control device of the drone side 1, where the control device of the drone side 1 may be implemented by a dedicated control device, by the flight controller of the drone, or by a gimbal controller. The second processor 21 can thus send the start instruction to the drone side 1 to instruct the aerial photography of the drone, and can receive the pictures taken by the drone, other data information of the drone, and the like.
- the embodiment of the invention provides a photographing control device, which can be applied to the drone side 1.
- the apparatus can include:
- a first receiving module 101 configured to receive a start instruction, where the start instruction includes a flight mode of the drone;
- a first control module 102 configured to control the drone to fly autonomously according to the flight mode
- a location calculation module 103 configured to acquire the position information of the target object in the flight mode, and to obtain the orientation information of the target object relative to the drone according to the target object identified in the picture captured by the imaging device;
- a second control module 104 configured to control the flight trajectory of the drone according to the position information and the flight mode; and
- a third control module 105 configured to control the attitude of the gimbal according to the orientation information, so that the target object is in the picture captured by the imaging device.
- the apparatus further includes a shooting control module 106, configured to control the imaging device to record video in the flight mode, and send the video data to the smart terminal.
- In some examples, the apparatus further includes a first determining module 107; when the first determining module 107 determines that the location calculation module 103 cannot identify the target object in the picture, the third control module 105 replaces the step of controlling the attitude of the gimbal according to the orientation information with the step of controlling the attitude of the gimbal according to the position information.
- controlling the posture of the pan/tilt includes: controlling at least one of a pitch angle, a yaw angle, and a roll angle of the pan/tilt.
- the location calculation module 103 is further configured to determine a pitch angle and/or a yaw angle of the pan/tilt.
- In some examples, the process of determining the pitch angle and/or yaw angle of the gimbal includes: determining at least one of the pitch angle and the yaw angle of the gimbal according to the position at which a specified position of the background identifier is to be displayed in the captured picture.
- the determining of at least one of the pitch angle and the yaw angle of the gimbal according to the position at which the specified position of the background identifier is to be displayed in the captured picture includes: acquiring a first total pixel distance of the captured picture in a first direction, and the pixel distance, in the first direction, from the position at which the specified position of the background identifier is to be displayed to the edge of the picture, wherein the first direction corresponds to the pitch direction or the yaw direction of the gimbal; and determining the pitch angle and/or yaw angle of the gimbal according to the first total pixel distance, the pixel distance, and the vertical or horizontal field-of-view angle of the imaging device.
- the background identifier comprises at least one of the following: ground, sky, sea surface, and building.
- In some examples, the determination of the pitch angle and/or yaw angle of the gimbal includes: acquiring a preset height angle and/or horizontal angle of the shooting position; determining an offset angle of the target object relative to the center line of the captured picture in the first direction; and determining the pitch angle and/or yaw angle of the gimbal according to the offset angle and the height angle and/or horizontal angle.
- the determining of the offset angle of the target object relative to the center line of the captured picture in the first direction comprises: acquiring a first total pixel distance of the captured picture in the first direction and the vertical or horizontal field-of-view angle of the imaging device; determining a first offset pixel distance of the target object from the center line of the captured picture in the first direction; and determining the offset angle according to the first total pixel distance, the vertical or horizontal field-of-view angle, and the first offset pixel distance.
- In some examples, the controlling of the flight trajectory of the drone according to the position information and the flight mode includes: determining the distance between the target object and the imaging device; and controlling the flight trajectory of the drone according to the position information, the flight mode, and the distance between the target object and the imaging device.
- the determining of the distance between the target object and the imaging device includes: acquiring the actual height of the target object and a first total pixel distance of the captured picture in the first direction; acquiring the pixel distance, in the first direction, at which the actual height of the target object is to be displayed in the captured picture; and determining the distance between the target object and the imaging device according to the actual height of the target object, the first total pixel distance, and the pixel distance corresponding to the actual height of the target object in the first direction of the captured picture.
- the location calculation module 103 is further configured to: after determining the distance between the target object and the imaging device, acquire the height angle of the preset shooting position, the horizontal field-of-view angle of the imaging device, and a second total pixel distance of the captured picture in the second direction, wherein the second direction corresponds to the yaw direction of the gimbal; determine a second pixel offset distance of the target object from the center line of the captured picture in the second direction; and determine a moving distance of the gimbal in the pitch direction according to the second pixel offset distance, the height angle, the horizontal field-of-view angle, the second total pixel distance, and the distance between the target object and the imaging device. The third control module 105 controls the attitude of the gimbal according to the moving distance of the gimbal in the pitch direction.
- In some examples, the location calculation module 103 is further configured to: acquire the horizontal field-of-view angle of the imaging device and a second total pixel distance of the captured picture in the second direction, wherein the second direction corresponds to the yaw direction of the gimbal; determine a second pixel offset distance of the target object from the center line of the captured picture in the second direction; and determine the yaw angle of the gimbal according to the second total pixel distance, the horizontal field-of-view angle, and the second pixel offset distance. The third control module 105 controls the attitude of the gimbal according to the yaw angle.
- In some examples, the gimbal and the drone are fixed to each other on the heading axis; the third control module 105 is further configured to control the pitch angle and/or roll angle of the gimbal, and to control the heading angle of the drone to control the yaw angle of the gimbal.
- In some examples, the flight mode includes at least one of: an oblique line mode, a surround mode, a spiral mode, a skyrocket mode, and a comet surround mode; each flight mode includes a corresponding flight strategy, and the flight strategy is used to instruct the flight of the drone.
- the flight strategy corresponding to the oblique line mode includes: controlling, by the second control module 104 according to the position information, the drone to first fly along a horizontal plane and then fly along a plane at a certain angle with the horizontal plane.
- the second control module 104 controlling the drone to first fly along a horizontal plane and then fly along a plane at a certain angle with the horizontal plane includes: controlling the drone to fly along a horizontal plane; and, when the angle between the line connecting the lowest point of the target object to the center of the drone and the line connecting the highest point of the target object to the center of the drone is less than a preset multiple of the field-of-view angle of the imaging device, controlling the drone according to the position information to fly along a plane at a certain angle with the horizontal plane, wherein the preset multiple is ≤ 1.
- the second control module 104 controlling the drone to fly along a plane at a certain angle with the horizontal plane includes: controlling the drone to fly away from the target object along the direction of the line connecting the target object and the drone.
- the flight strategy corresponding to the oblique line mode includes: controlling, by the second control module 104, the UAV to fly away from the target object in a S-shaped curve according to the location information.
- the flight strategy corresponding to the surround mode includes: controlling, by the second control module 104, the drone to fly around the target object according to the specified distance according to the location information.
- the specified distance is a default distance, or the specified distance is distance information input by the user, or the specified distance is a distance between the drone and the target object at the current time.
- the flight strategy corresponding to the spiral mode includes: controlling, by the second control module 104, the ⁇ spiral, the equal ratio spiral, the equiangular spiral according to the position information Line or Aki The Mead spiral lines the track around the target object.
- the flight strategy corresponding to the spiral mode further includes: the second control module 104 controls, according to the location information, the UAV with a ⁇ ⁇ ⁇ spiral, an equal ratio spiral, an isometric
- the spiral or Archimedes spiral also controls the drone to rise or fall vertically at a preset rate while the trajectory is flying around the target object.
- the flight strategy corresponding to the sky-shaking mode includes: controlling, by the second control module 104, the flight to the first designation relative to the target object according to the preset angle according to the position information. After the position, the drone is controlled to rise vertically.
- the flight strategy corresponding to the comet surround mode includes: controlling, by the second control module 104, the flying machine to fly to a second designated position according to the position information, from the second The specified position flies away from the target object after flying around the target object.
- each flight mode further includes at least one of a corresponding flight path and flight speed.
- the location calculation module 103 acquires location information of the target object, including:
- the location information of the target object is determined based on the at least two sets of shooting information selected from the information set, wherein the location corresponding to the shooting location information in each selected group of shooting information is different.
- determining the location information of the target object based on the at least two sets of shooting information selected from the information set includes: determining, based on at least three sets of shooting information, initial positions of at least two of the target objects Estimating information; determining location information of the target object based on each location initial estimation information.
- the shooting location information is positioning information of the drone.
- the location calculation module 103 acquires location information of the target object, including: acquiring location information of the smart terminal 2, the smart terminal is a terminal that communicates with the drone, and the location information is Positioning information.
- the location calculation module 103 obtains the orientation information of the target object relative to the drone according to the image captured by the imaging device, including: acquiring feature information of the target object to be tracked; The feature information identifies a target object in the captured image based on the image recognition technology, and obtains orientation information of the target object relative to the drone.
- the apparatus further includes a reset module 108, after the second control module 104 controls the flight path of the drone according to the location information and the flight mode, for controlling The drone moves to a reset position.
- the apparatus further includes a fourth control module 109, where the first determining module 107 determines that the first receiving module 101 receives a striking operation signal sent by an external device, according to The striking operation signal controls at least one of a flight of the drone and a posture of the gimbal.
- the striking operation signal includes at least one of: controlling a signal that the drone rises or falls vertically on the ground, controls a signal of the drone to move away from or close to the target object, controls a flying speed of the drone, and controls the cloud.
- the signal of the yaw angle and the signal for controlling the rotation of the drone body includes at least one of: controlling a signal that the drone rises or falls vertically on the ground, controls a signal of the drone to move away from or close to the target object, controls a flying speed of the drone, and controls the cloud.
- the embodiment of the invention provides a shooting control device, which can be applied to the smart terminal 2 with the APP installed.
- the apparatus can include:
- a second receiving module 201 configured to receive a user instruction
- the command generating module 202 generates a start command according to the user instruction, where the start command includes an airplane mode of the drone, and the start command is used to trigger the drone to fly autonomously according to the flight mode;
- the sending module 203 sends the start command to the drone; wherein the start command is used to trigger the drone to fly autonomously according to the flight mode.
- the second receiving module 201 receives and stores the backhaul video stream returned by the drone in the flight mode.
- the user instruction comprises: determining a target object to be tracked.
- the method further includes: identifying feature information of the target object to be tracked in the current display screen, where the feature information is a preset position or preset of the target object to be displayed in the captured image. size.
- the user instruction further includes: a specified location of the background identifier, where the value to be displayed is captured in the picture position.
- the background identifier comprises at least one of the following: ground, sky, sea surface and building.
- the user instruction further includes: an elevation angle or a horizontal angle of the shooting position.
- the user instruction further includes: at least one of a flight path and a flight speed of the drone.
- the flight mode is a default flight mode; or the user command includes a mode selection instruction, the mode selection instruction including an airplane mode for indicating that the drone is flying.
- the flight mode includes at least one of: a diagonal mode, a surround mode, a spiral mode, a skyrocket mode, and a comet surround mode, each flight mode including a corresponding flight strategy, the flight strategy being used for Instructing the flight of the drone.
- the apparatus further includes a processing module 204, configured to process the backhaul video stream to generate a video picture of a first specified duration, where the first specified duration is less than the backhaul The length of the video stream.
- a processing module 204 configured to process the backhaul video stream to generate a video picture of a first specified duration, where the first specified duration is less than the backhaul The length of the video stream.
- the apparatus further includes a second determining module 205, where the processing module 204 processes the backhaul video stream, and the step of generating a video picture is determined by the second determining module 205.
- the drone is executed after the specified conditions are met.
- the specifying condition includes: the second determining module 205 determines that the drone completes the flight of the flight mode.
- the processing module 204 processes the backhaul video stream to generate a first preset duration video image, including: performing frame extraction processing on the backhaul video stream to generate a video of a first preset duration Picture.
- the processing module 204 performs frame drawing processing on the backhaul video stream to generate a video image of a first preset duration, including: according to at least one of a flight mode, a flight speed, and a flight direction of the drone Performing frame drawing processing on the video stream to generate a video picture of a first preset duration.
- the processing module 204 performs frame drawing processing on the backhaul video stream to generate a video picture of the first preset duration, including: according to the duration of the backhaul video stream and the number of frames, the back The video stream is subjected to frame drawing processing to generate a video picture of a first preset duration.
- the processing module 204 performs frame extraction processing on the backhaul video stream according to the total duration of the backhaul video stream and the number of frames, to generate a video picture of the first preset duration, including:
- the backhaul video stream is split into multiple segments to obtain a multi-segment return video stream;
- the partial return video stream in the multi-segment backhaul video stream is subjected to frame extraction processing to obtain a framed image of the corresponding segment return video stream;
- Said another part of the multi-segment return video stream The video stream and the obtained framed image of the corresponding segment return video stream are returned to generate a video picture of a first preset duration.
- the processing module 204 splits the backhaul video stream into multiple segments, including: splitting the backhaul video stream into at least three segments according to a sequence of shooting times; The part of the backhaul video stream in the multi-segment return video stream is subjected to frame-splitting processing to obtain the framed image of the corresponding segment of the returned video stream, including: the shooting time in the at least three-segment return video stream is in the middle time period The video stream is returned to perform frame drawing processing, and the framed image corresponding to the segment of the returned video stream is obtained.
- the processing module 204 performs frame extraction processing on a part of the backhaul video stream in the multi-segment return video stream, and obtains a framed image of the corresponding segment of the returned video stream, including: corresponding to the preset frame rate
- the segment return video stream performs frame drawing processing to obtain a framed image corresponding to the corresponding segment back video stream.
- the apparatus further includes a reading module 206 and a determining module 207, the reading module 206 is configured to acquire an original data stream captured by the drone; and the determining module 207 is configured to Determining, by the original data stream, an original video stream captured by the drone in the flight mode; the processing module 204 processing the original video stream to generate a new video image of a second preset duration, The second preset duration is less than the duration of the backhaul video stream.
- the determining module 207 determines, according to the original data stream, the original video stream captured by the drone in the flight mode, including: determining, according to the video stream label corresponding to the flight mode, The original video stream captured by the drone in the flight mode in the original data stream.
- the step of the reading module 206 acquiring the original data stream captured by the drone is that the second determining module 205 determines that the resolution of the video image obtained according to the returned video stream is smaller than a preset resolution. After the rate is executed.
- the apparatus further includes a sharing module 208, configured to send at least one of the video picture and the new video picture to a remote terminal server.
- the second receiving module 201 receives a sharing instruction input by the user, where the sharing instruction The corresponding remote terminal server and the video identifier to be shared, the video identifier to be shared is an identifier corresponding to at least one of the video screen and the new video screen; the sharing module 208 is configured according to the sharing instruction Sending at least one of the video picture and the new video picture to a remote terminal server.
- An embodiment of the present invention provides a computer storage medium having stored therein program instructions, wherein the computer storage medium stores program instructions, and the program executes the shooting control method of the first embodiment or the second embodiment.
- a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
- computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
- the computer readable medium may even be a paper or other suitable medium on which the program can be printed, as it may be optically scanned, for example by paper or other medium, followed by editing, interpretation or, if appropriate, other suitable The method is processed to obtain the program electronically and then stored in computer memory.
- portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
- multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
- a suitable instruction execution system For example, if implemented in hardware, as in another embodiment, it can be implemented with any one or combination of the following techniques well known in the art: having logic gates for implementing logic functions on data signals. Discrete logic circuits, application specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), etc.
- each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
- the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
- the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Remote Sensing (AREA)
- Aviation & Aerospace Engineering (AREA)
- Theoretical Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Signal Processing (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Studio Devices (AREA)
Abstract
本发明提供一种视频画面生成方法及装置,应用于无人机,所述无人机搭载有云台,所述云台搭载一摄像设备,其特征在于,所述方法包括:接收开始指令,所述开始指令包含无人机的飞行模式;控制所述无人机依据所述飞行模式自主飞行;在所述飞行模式中,获取目标对象的位置信息,并根据所述摄像设备拍摄到的画面中所识别到的目标对象,获得所述目标对象相对所述无人机的方位信息;根据所述位置信息和所述飞行模式,控制所述无人机的飞行轨迹;根据所述方位信息,控制所述云台的姿态,使得所述目标对象处于所述摄像设备所拍摄的画面中。无需操作者手动控制即可实现对无人机和云台的控制,拍摄出的画面更加流畅、构图更加丰富和精确。
Description
本发明涉及图像采集领域,尤其涉及一种拍摄控制方法及装置。
随着无人机航拍技术的发展,市面上无人机种类越来越繁多。无人机航拍涉及到相机设置、云台控制、摇杆控制和构图取景等一系列的操作,若用户想要利用无人机拍摄出流畅的、构图漂亮的视频,需要配合好相机、云台、摇杆和构图取景等一系列参数,控制的过程较为复杂。然而,对航拍操作不熟练的用户在段时间内很难配合好上述一系列参数。
发明内容
本发明提供一种拍摄控制方法及装置。
根据本发明的第一方面,提供一种拍摄控制方法,应用于无人机,所述无人机搭载有云台,所述云台搭载一摄像设备,所述方法包括:
接收开始指令,所述开始指令包含无人机的飞行模式;
控制所述无人机依据所述飞行模式自主飞行;
在所述飞行模式中,获取目标对象的位置信息,并根据所述摄像设备拍摄到的画面中所识别到的目标对象,获得所述目标对象相对所述无人机的方位信息;
根据所述位置信息和所述飞行模式,控制所述无人机的飞行轨迹;
根据所述方位信息,控制所述云台的姿态,使得所述目标对象处于所述摄像设备所拍摄的画面中。
根据本发明的第二方面,提供一种拍摄控制装置,应用于无人机,所述无人机搭载有云台,所述云台搭载一摄像设备,所述装置包括第一处理器,其中所述第一处理器被配置为:
接收开始指令,所述开始指令包含无人机的飞行模式;
控制所述无人机依据所述飞行模式中自主飞行;
在所述飞行模式中,获取目标对象的位置信息,并根据所述摄像设备拍摄到的画面中所识别到的目标对象,获得所述目标对象相对所述无人机的方位信息;
根据所述位置信息和所述飞行模式,控制所述无人机的飞行轨迹;
根据所述方位信息,控制所述云台的姿态,使得所述目标对象处于所述摄像设备所拍摄的画面中。
根据本发明的第三方面,提供一种拍摄控制方法,所述方法包括:
接收用户指令;
根据所述用户指令生成开始指令,所述开始指令包含无人机的飞行模式,所述开始指令用于触发所述无人机依据所述飞行模式自主飞行;
发送所述开始指令至无人机;
接收并存储所述无人机在所述飞行模式下回传的回传视频流。
根据本发明的第四方面,提供一种拍摄控制装置,所述装置包括第二处理器,其中所述第二处理器被配置为:
接收用户指令;
根据所述用户指令生成开始指令,所述开始指令包含无人机的飞行模式,所述开始指令用于触发所述无人机依据所述飞行模式自主飞行;
发送所述开始指令至无人机;
接收并存储所述无人机在所述飞行模式下回传的回传视频流。
由以上本发明实施例提供的技术方案可见,本发明通过设置飞行模式,使得无人机能够按照设置的飞行模式和目标对象的位置信息自主飞行,从而无人机可实现较为复杂的飞行轨迹,特别是规律性较强的飞行轨迹;并通过图像识别获得目标对象相对于无人机的方位信息,从而控制云台的姿态,使得目标对象处于所拍摄的画面中;无需操作者手动控制即可实现对无人机和云台的控制,拍摄出的画面更加流畅、构图更加丰富和精确。
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本发明一实施例的拍摄控制方法在无人机侧的流程图;
图2a是本发明一实施例的画面坐标系和视场角的示意图;
图2b是本发明一实施例的摄像设备的视场角的示意图;
图3是本发明一实施例的无人机和目标对象之间的位置坐标示意图;
图4是本发明一实施例的画面构图示意图;
图5是本发明另一实施例的画面构图示意图;
图6a是本发明一实施例的拍摄场景与摄像设备的位置关系示意图;
图6b是本发明另一实施例的拍摄场景与摄像设备的位置关系示意图;
图6c是本发明又一实施例的拍摄场景与摄像设备的位置关系示意图;
图7是本发明一实施例的遥控设备的结构示意图;
图8是本发明一实施例的拍摄控制方法在智能终端侧的流程图;
图9是本发明另一实施例的拍摄控制方法在智能终端侧的流程图;
图10是本发明又一实施例的拍摄控制方法在智能终端侧的流程图;
图11是本发明一实施例的拍摄控制装置的结构示意图;
图12是本发明一实施例的拍摄控制方装置在无人机侧的结构框图;
图13是本发明另一实施例的拍摄控制方装置在无人机侧的构框图;
图14是本发明一实施例的拍摄控制方装置在智能终端侧的结构框图;
图15是本发明另一实施例的拍摄控制方装置在智能终端侧的结构框图。
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、
完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
下面结合附图,对本发明的拍摄控制方法及装置进行详细说明。在不冲突的情况下,下述的实施例及实施方式中的特征可以相互组合。
所述拍摄控制方法及装置可用于控制无人机的航拍或者其他航拍设备的拍摄例如设置有云台的无人汽车、可移动机器人等。
以无人机为例,所述无人机可以包括承载体以及负载。所述承载体可以允许负载绕着一个、两个、三个或者更多的轴旋转。可选地或者额外地,所述承载体可以允许负载沿着一个、两个、三个或者更多的轴线性运动。用于旋转或者平移运动的轴可以彼此正交也可以不是正交。
在某些实施例中,所述负载可以刚性地搭载或者连接于无人机上,以使得负载相对于无人机维持相对静止的状态。例如,连接到无人机及负载的承载体可以不允许负载相对于无人机移动。可选地,所述负载可直接搭载在无人机上而不需要承载体。
在某些实施例中,所述负载可以包括一个或者多个传感器,用于监控或者追踪一个或者多个目标对象。所述负载可以包括影像捕获设备或者摄像设备(如相机、摄录机、红外线摄像设备、紫外线摄像设备或者类似的设备),音频捕获装置(例如,抛物面反射传声器),红外线摄像设备等。任何适合的传感器都可以集成到所述负载上,以捕获可视信号、音频信号、电磁信号、或则任何其它期望的信号。所述传感器可以提供静态感应数据(如图片)或者动态感应数据(如视频)。所述传感器可以实时地或者高频率地持续捕获感应数据。
在各种实施例中,所述被无人机追踪的目标对象可以包括任何自然的或者人工制造的物体或者纹理,例如,地理景观(如山川、植被、山谷、湖泊、河流等),建筑物,运输工具(如飞机、轮船、小轿车、卡车、公交车、货车或者摩托车)。所述目标对象可以包括生物体,如人或者动物。所述目标对象相对任何合适的参照物可以是运动的或者静止的。所述参照物可以是相对固定的参照物(如周围环境或者地球)。可选地,所述参照物可以是运动的参照物(如移动的运输工具)。在各种实施例中,所述目标对象可以包括被动目标对象或者主动目标对象。所述主动目标对象可以传送该目标对象的信息,如该目标对象的GPS位置,给无人机。所述信息可以通过无线传输方式
从主动目标对象中的通讯单元传送给无人机的通讯单元。主动目标对象可以是环保的运输工具、建筑物、军队等。被动目标对象不能传送目标对象的信息。被动目标对象可以包括中立的或者敌对的运输工具、建筑物、军队等。
所述无人机可以用于接收控制数据,以及所述智能终端2可以用于提供控制数据。所述控制数据用于直接或者间接地控制无人机的各方面。在某些实施例中,所述控制数据可以包括控制无人机飞行参数的飞行指令,所述飞行参数如无人机的位置、速度、方向、或者姿态。所述控制数据可以用于控制无人飞行器的飞行。所述控制数据可以实现一个或者多个动力单元的操作,以实现无人飞行器的飞行。在其它实施例中,所述控制数据可以包括控制无人机的个别部件的指令。例如,所述控制数据包括控制承载体操作的信息。例如,所述控制数据可以用于控制承载体的致动机构,以使负载相对于无人机产生角运动或者线运动。其它实施例中,所述控制数据用于控制不承载负载的承载体的运动。其它实施例中,所述控制数据用于调整负载的一个或者多个操作参数,如捕获静止的或者运动的图像、镜头的变焦、开启/关闭、切换成像模式、改变影像分辨率、改变焦点、改变景深、改变曝光时间、改变镜头速度、改变可视角度或者视场等。在其它实施例中,所述控制数据可用于控制无人机的传感系统(未图示)、通讯系统(未图示)等。
在某些实施例中,所述智能终端2的控制数据可以包括目标对象信息。在某些情况下,所述目标对象信息包括指定的目标对象的特征,如初始位置(如坐标)及/或者目标对象在无人机所搭载的摄像设备所捕获一个或者多个影像中的尺寸。额外地或者可选地,所述目标对象信息可以包括目标对象的类型信息,如目标对象的类型或者分类的特征,包括颜色、纹理、样式、尺寸、形状、维度等。目标对象信息可以包括代表目标对象的影像的数据,包括目标对象在视场内的影像。视场可以由摄像设备所能捕获到的影像所定义或者组成。
目标对象信息可以包括预期目标对象信息。所述预期目标对象信息指定所追踪的目标对象在摄像设备所捕获的影像中所预期需满足的特征。所述预期目标对象信息用于调整无人机、承载体及/或摄像设备,以依据该预期目标对象信息使所追踪的目标对象在一个或者多个影像中维持一个样态。例如,可以追踪所述目标对象,使得目标对象在摄像设备所捕获的一个或者多个影像中维持一个预期位置或者尺寸。例如,所追踪的目标对象的预期位置可以是接近影像的中心或者偏离中心。所追踪的目标对象的预期尺寸可以指包含大概某一数量的像素。所述预期目标对象信息与初始目标对象
信息可以相同也可以不相同。在各种实施例中,所述预期目标对象信息可以由所述智能终端2提供,也可以不是由所述智能终端2提供。例如,预期目标对象信息可以以硬编码的形式记录在无人机的处理单元所执行的逻辑电路中,存储在无人机本地及/或者远程的数据存储单元中,或者从其它合适的来源获取。
在某些实施例中,所述目标对象信息(包括指定的目标对象信息及目标对象的类型信息)中的至少一部分可以通过智能终端2的用户输入产生。额外地或者可选地,所述目标对象信息可以通过其它的来源产生。例如,所述目标对象类型信息可以来自于本地或者远程的数据存储单元中以前的影像或者数据。所述影像可以是无人机或者其它设备所搭载的摄像设备以前所捕获的影像。所述影像可以是计算机产生的。所述目标对象的类型信息可以是用户选择的,也可以是无人机所默认提供的。
无人机可以利用所述目标对象信息以追踪一个或者多个目标对象。所述追踪或者其它相关的数据处理可以至少一部分是通过无人机的一个或者多个处理器所执行。在某些实施例中,所述目标对象信息可以用于无人机识别要追踪的目标对象。所述目标对象的识别可以基于初始目标对象信息而执行,所述初始目标对象信息包括特殊目标对象的指定特征(例如目标对象在无人机所捕获的影像中的初始坐标),或者一类目标对象的通用特征(例如所要追踪的目标对象的颜色或者纹理)。在某些实施例中,所述目标对象的识别可以包括任何合适的影像识别及/或匹配算法。在某些实施例中,所述目标对象的识别包括比较两个或者更多的影像,以确定、提取或者匹配影像中的特征。
一旦识别到目标对象,预期目标对象信息可以用于侦测该目标对象与预期特征的偏离,如预期位置及/或尺寸的偏离。在某些实施例中,当前的目标对象特征或者信息可以通过无人机所捕获的一个或者多个影像所获知。当前的目标对象信息与智能终端2所提供的预期目标对象信息相比较,以确定两者的偏差。目标对象位置的改变可以通过将目标对象在影像中的坐标(如目标对象的中心点坐标)与预期目标对象位置的坐标进行比较得出。目标对象尺寸的改变可以将目标对象所覆盖的面积(如像素)的尺寸与预设目标对象尺寸进行比较得出。在某些实施例中,尺寸的改变可以通过侦测目标对象的方向、边界或者其它特征得出。
可以基于所述偏差中至少一部分而产生控制信号(例如通过无人机的一个或者多个处理器),根据该控制信号执行大致地校正所述偏差的调整。所述调整可以用于在无人机所捕获的影像中,大致的维持一个或者多个预期的目标对象特征(如目标对象
位置或者尺寸)。在某些实施例中,当无人机执行用户提供的飞行指令(盘旋或者移动)或者预设的飞行线路时,所述调整可以实时进行。当所述摄像设备捕获一个或者多个影像时,所述调整也可以实时进行。在某些实施例中,也可以根据其它数据,如无人机上的一个或者多个传感器(例如近程传感器或者GPS传感器)获取的感测数据执行调整。例如,所追踪的目标对象的位置信息可以通过近程传感器获得,及/或者由目标对象本身所提供(如GPS位置)。所述位置信息除了用于侦测偏差,还可以用于执行所述调整。
所述调整可以是关于所述无人机、承载体、及/或者负载(例如摄像设备)的。例如,所述调整可以导致所述无人机及/或者负载(如摄像设备)改变位置、姿态、方向、角速度或者线速度等。所述调整可以导致承载体相对于无人机绕着或者沿着一个、两个、三个或者更多的轴移动所述负载(如摄像设备)。进一步地,所述调整可以包括调整负载(如摄像设备)本身的变焦、焦点或者其它操作参数。
在某些实施例中,所述调整可以至少一部分基于所述偏差的类型所产生。例如,与预期目标对象位置的偏差可能需要绕着一个、两个、或者三个旋转轴旋转所述无人机及/或者负载(如通过承载体)。又如,与预期目标对象尺寸的偏差可能需要无人机沿着合适的轴做平移运动,及/或者改变摄像设备的焦距(如镜头的拉近或者拉远)。例如,如果当前的或者实际的目标对象尺寸小于预期目标对象尺寸,无人机可能需要靠近目标对象,及/或者摄像设备可能需要放大目标对象。另一方面,如果当前的或者实际的目标对象尺寸大于预期目标对象尺寸,无人机可能需要远离目标对象,及/或者摄像设备可能需要缩小目标对象。
在各种实施例中,所述校正与预期目标对象信息的偏差的调整可以通过利用控制信号控制一个或者多个可控制物体实现,所述可控制物体如所述可移动设备、承载体、摄像设备或者其中的任意结合。在某些实施例中,可以选择所述可控制物体执行所述调整,以及所述控制信号可以至少一部分基于对所述可控制物体的配置与设置而产生。例如,若摄像设备稳固地搭载在无人机上而不能够相对于无人机运动,则仅仅通过将无人机绕着两个轴旋转就能够实现包括绕着对应两个轴旋转的调整。此时,摄像设备可以直接搭载在无人机上,或者摄像设备通过承载体搭载在无人机上,所述承载体不允许摄像设备与无人机之间的相对运动。如果所述承载体允许摄像设备相对于无人机绕着至少一个轴旋转,对上述两个轴的调整也可以通过结合对无人机和承载体的调整来实现。这种情况下,可以控制所述承载体以执行绕着需要调整的两个轴中的
一个或者两个轴旋转,以及可以控制所述无人机以执行绕着需要调整的两个轴中的一个或者两个轴旋转。例如,所述承载体可以包括一轴云台,以允许摄像设备绕着需要调整的两个轴中的一个轴旋转,而无人机执行绕着需要调整的两个轴中的另一个轴旋转。可选地,如果承载体允许摄像设备相对于无人机绕着两个或者更多的轴旋转,则对上述两个轴的调整也可以单独地通过承载体完成。例如,所述承载体包括两轴或者三轴云台。
在其它实施例中,调整以校正目标对象的尺寸可以通过控制摄像设备的变焦操作(如果摄像设备能够达到需要的变焦水平),或者通过控制无人机的运动(以靠近或者远离目标对象),或者该两种方式的结合来实现。在执行调整时,无人机的处理器可以确定选择哪一种方式或者选择该两种方式的结合。例如,如果摄像设备不具备达到目标对象在影像中维持需要的尺寸所需要的变焦水平,则可以控制无人机的移动而取代或者附加在对摄像设备的变焦操作。
在某些实施例中,所述调整可以考虑到其它的约束条件。例如,在无人机的飞行线路是预设的情况下,所述调整可以通过承载体及/或者摄像设备来执行,而不影响到无人机的运动。例如,如果远程终端正通过智能终端2自主地控制无人机的飞行,或者如果无人机正根据预先存储的飞行线路在飞行(自主地或者半自主地),所述无人机的飞行线路可以是预设的。
其它的约束条件可能包括无人机、承载体及/或者负载(如摄像设备)的旋转角度、角速度及/或者线速度的最大及/或者最小的阈值,操作参数,或其它。所述最大及/或者最小的阈值可以用于显示调整的范围。例如,无人机及/或摄像设备绕着一特定轴的角速度可能被无人机、承载体及/或负载(如摄像设备)的最大角速度所限制。又如,无人机及/或承载体的线速度可能被无人机、承载体及/或负载(如摄像设备)的最大线速度所限制。再如,摄像设备焦距的调整可能受到特定摄像设备的最大及/或最小焦距所限制。在某些实施例中,这样的限制可以是预设,也可以依赖于无人机、承载体、及/或负载(如摄像设备)的特殊配置。在某些情况下,该配置是可调的(例如通过制造商、管理员或者用户)。
在某些实施例中,所述无人机可以用于提供数据,智能终端2可以用于接收数据,如无人机的传感器获取的感测数据,以及用于指示无人机追踪的一个或者多个目标对象的特征的追踪数据或信息。所述感测数据可以包括无人机所搭载的摄像设备所捕获的影像数据,或者其它传感器所感测的数据。例如,来自于无人机及/或者负载(如
摄像设备)的实时或者接近实时的视频流可以传送给智能终端2。所述感测数据也可以包括全球定位系统(GPS)传感器、运动传感器、惯性传感器、近程传感器或者其它传感器获得的数据。所述追踪数据包括目标对象在无人机接收的影像帧中的相对的或者绝对的坐标或者尺寸,目标对象在连续的影像帧中的变化,GPS坐标,或者目标对象的其它位置信息。在某些实施例中,所述智能终端2可以利用所述追踪数据显示所追踪的目标对象(例如,通过一个图形化的追踪指示标识,如用一个围绕目标对象周围的方框)。在各种实施例中,智能终端2接收的数据可以是未处理的数据(各传感器获取的未处理的感测数据)及/或者处理后的数据(例如无人机的一个或者多个处理器所处理得到的追踪数据)。
在某些实施例中,所述智能终端2所处的位置可以远离所述无人机、承载体及/或者负载。所述智能终端2可以放置或者粘贴在一个支撑平台上。或者,所述智能终端2可以是手持式的或者穿戴式设备。例如,所述智能终端2可以包括智能手机、平板电脑、笔记本电脑、计算机、眼镜、手套、头盔、麦克风或者任何适合的结合。
所述智能终端2用于通过显示设备显示从无人机接收的显示数据。所述显示数据包括感测数据,例如无人机所搭载的摄像设备所获取的影像。所述显示数据还包括追踪信息,所述追踪信息与影像数据分开显示或者叠加在影像数据的顶部。例如,所述显示设备可以用于显示影像,在该影像中的目标对象被追踪指示标识所指示或者突出显示。其中,所述追踪指示标识可以是利用方框、圆圈、或者其它几何图形包围所追踪的目标对象。在某些实施例中,当影像和追踪数据从所述无人机接收,及/或者当影像数据获取的时候,所述影像和追踪指示标识就可以实时地显示出来。在某些实施例中,所述显示可以是有延迟的。
所述智能终端2可以用于接收用户通过输入设备的输入。所述输入设备可以包括摇杆、键盘、鼠标、触控笔、麦克风、影像或者运动传感器,惯性传感器等。任何合适的用户输入都可以与终端交互,例如人工输入指令,声音控制、手势控制或者位置控制(如通过终端的运动、位置或者倾斜)。例如,智能终端2可以用于允许用户通过操作摇杆、改变智能终端2的方向或者姿态、利用键盘、鼠标、手指或者触控笔与图形用户界面交互,或者利用其它方法,控制无人机、承载体、负载或者其中任何结合的状态。
所述智能终端2也可以用于允许用户利用任何合适的方法输入目标对象信息。在某些实施例中,所述智能终端2能够使用户从所显示的一个或者多个影像(如视频
或者快照)中直接选择目标对象。例如,用户可以用手指直接触摸屏幕选择目标对象,或者利用鼠标或摇杆选择。用户可以划线包围所述目标对象、在影像上触摸该目标对象或者选择该目标对象。计算机视觉或者其它技术可用于识别目标对象的边界。一次可以选择一个或者多个目标对象。在某些实施例中,所选择的目标对象可以用选择指示标识显示,以指示用户已经选择了所要追踪的目标对象。在某些其它的实施例中,智能终端2可以允许用户选择或者输入目标对象信息,如颜色、纹理、形状、维度、或者所希望的目标对象的其它特征。例如,用户可以输入目标对象类型信息,通过图形用户界面选择这样的信息,或者使用其它的方法。在某些其它的实施例中,所述目标对象信息可以从一些数据源获取而非从用户获取,所述数据源如远程的或者本地的数据存储单元、与智能终端2连接或者通信的其它计算设备等。
在某些实施例中,所述智能终端2可以允许用户在人工追踪模式和自动追踪模式之间进行选择。当选择了人工追踪模式,用户需要指定需要追踪的特定目标对象。例如,用户可以从智能终端2所显示的影像中手动的选择目标对象。将所选择的目标对象的特定目标对象信息(坐标或者尺寸)提供给无人机作为目标对象的初始目标对象信息。另一方面,当选择了自动追踪模式,用户不需要指定需要追踪的特定目标对象。用户可以,例如,通过智能终端2提供的用户界面,指定关于所要追踪的目标对象类型的描述性信息。无人机利用所述特定目标对象的初始目标对象信息或者目标对象类型的描述性信息自动识别所要追踪的影像,随后追踪该识别的影像。
一般来说,提供特定的目标对象信息(如初始目标对象信息)需要更多目标对象追踪的用户控制以及较少自动处理或者计算(影像或者目标对象识别),所述自动处理或者计算通过设置在无人机上的处理系统执行。另一方面,提供目标对象类型的描述性信息需要较少的目标对象追踪的用户控制,但是需要较多设置在无人机上的处理系统所执行的计算。在追踪过程中用户控制以及处理系统控制的合理分配可以根据各种因素调整,例如无人机的环境、无人机的速度或者姿态、用户偏好、或者无人机之内或者外围的计算能力(如CPU或者存储器)等。例如,当无人机在相对复杂的环境中(如有很多的建筑物、障碍物或者在室内)飞行时,会比无人机在相对简单的环境(开放的空间或者户外)飞行分配相对多的用户控制。在另一个实施例中,当无人机在海拔较低的位置飞行时会比在海拔较高的位置飞行时分配相对多的用户控制。另一个实施例中,当无人机配备了高速处理器以能够更快地执行复杂计算时,会分配无人机更多的自控。在某些实施例中,在追踪过程中,用户控制以及无人机的自控的分配可以根据上述描述的因素而动态调整。
至少一部分的用户输入可以产生控制数据。所述控制数据可以由智能终端2、无人机、第三设备或者其中任意的结合所产生。例如,用户操作一个摇杆或智能终端2,或者与图形用户界面的交互都可以转换为预设的控制指令,以改变无人机、承载体或者负载的状态或者参数。在另一个实施例中,用户在终端所显示的影像中执行的目标对象选择可以产生所需追踪的初始及/或者预期目标对象信息,例如目标对象的初始及/或者预期位置及/或者尺寸。可选地或者额外的,所述控制数据可以根据非用户操作的信息所产生,例如,远程的或者本地的数据存储单元,或者与智能终端2连接的其它的计算设备等。
本发明实施例中。所述无人机搭载有云台,所述云台搭载一摄像设备。通过控制云台在一个或者多个转动轴上的转动角度,可以较好地保证无人机等无人机向某些地点或者方位移动的过程中,能够持续拍摄到目标对象。摄像设备拍摄到的包括目标对象的画面可以通过无线链路传回到某个地面端设备,例如,对于无人机拍摄得到的包括目标对象的画面可以通过无线链路传输给智能手机、平板电脑等智能终端2,这些智能终端2在接收到包括目标对象的画面之前,已经与无人机或者直接与摄像设备建立了通信链路。
目标对象可以是用户指定的某个物体,例如某个环境物体。可以将摄像设备拍摄得到的画面在一个用户界面中显示,用户通过针对该用户界面中显示的画面的点击操作,来选择一个物体作为目标对象。例如,用户可以选择某棵树、某个动物、或者某一片区域的物体作为目标对象。当然,用户也可以仅输入某些物体的画面特征,例如输入一张人脸特征、或者某种物体的外形特征,由相应的处理模块204进行画面处理,找到画面特征对应的人物或者物体,进而将找到的人物或者物体作为目标对象进行拍摄。
在本发明实施例中,目标对象可以是一个静止的物体,或者在持续拍摄的一段时间内该物体是不移动的,或者在持续拍摄的过程中移动的速度相对于无人机等无人机的移动速度小很多,例如两者的速度差值小于预设的阈值。
在一些例子中,为了更好地实现在在多个角度的持续拍摄,所述云台可以是一个三轴云台,该云台能够在偏航yaw、俯仰pitch以及横滚roll三个转动轴上转动。在一些实施例中,所述云台可以为两轴云台,所述云台能够在俯仰pitch以及横滚roll两个转动轴上转动,即所述云台自身包括俯仰角以及横滚角两个自由度。为了控制两轴云台的偏航yaw方向上的姿态,可通过控制无人机的偏航yaw方向,以实现所述两
轴云台在偏航方向的姿态变化。即所述云台还包括另一个自由度,为所述无人机的偏航角。
所述摄像设备可为相机或者图像传感器等具备图像采集功能的设备。
以下将以所述拍摄控制方法及装置用于控制无人机的航拍为例进一步阐述本发明的拍摄控制方法及装置。
实施例一
本发明实施例提供了一种拍摄控制方法,所述方法可应用于无人机侧1。本实施例中,在无人机侧1,所述方法可以由一个专用的控制设备实现,也可以由无人机的飞行控制器来实现,也可以由一个云台控制器来实现。
参见图1,所述拍摄控制方法可以括以下步骤:
步骤S101:接收开始指令,所述开始指令包含无人机的飞行模式;
本实施例中,开始指令可由智能终端2发送至无人机侧1。所述智能终端2可包括一用户界面,可选地,所述用户界面上设有用于产生开始指令的操作按钮,所述操作按钮可为实体按钮或者虚拟按钮。具体地,用户在需要控制无人机自主飞行时,按下所述操作按钮即可,方便、快捷地控制无人机自主飞行,无需操作遥控无人机飞行的摇杆。
在一些例子中,所述飞行模式包括以下中的至少一种:斜线模式、环绕模式、螺旋模式、冲天模式和彗星环绕模式,每一种飞行模式包括对应的飞行策略(每一种飞行模式对应的飞行策略将在下述步骤S104中具体阐述),所述飞行策略用于指示所述无人机的飞行,从而实现一键式控制无人机按照所需的飞行策略飞行的功能,这种方式控制无人机的飞行更为精确、方便,无需通过复杂的摇杆控制来实现无人机的飞行。在一些例子中,所述飞行模式还可包括其他飞行模式,例如直线模式。
在一实现方式中,所述飞行模式可为默认飞行模式,其中,所述默认飞行模式可为预设的一种飞行模式或者预设的多种飞行模式的组合。具体地,用户在按下产生开始指令的操作按钮后,智能终端2即选择所述默认飞行模式并根据所述默认飞行模式来生成所述开始指令。
在一实现方式中,所述飞行模式可为用户输入的飞行模式。本实施例中,用户可根据需要选择无人机的飞行模式。具体地,智能终端2预设设定有多种飞行模式供
用户选择,用户可根据需要选择智能终端2所提供的多种可选择的飞行模式中的一种或多种,从而指示无人机实现不同飞行模式的飞行,以获得不同视角的拍摄画面。
本实施例中,每一种飞行模式还包括对应的飞行路程和飞行速度中的至少一种,从而指示无人机按照所述飞行路程和/或所述飞行速度来自动完成每一种飞行模式下的飞行。其中,每一种飞行模式对应的飞行路程和飞行速度可根据实际需要设定,从而满足用户多样化的需求。
步骤S102:控制所述无人机依据所述飞行模式自主飞行;
本实施例中,步骤S102是在步骤S101执行完之后执行的,从而实现无人机自动控制,以实现较为复杂的飞行轨迹。具体地,所述无人机依据所述飞行模式中的飞行策略自主飞行的。
本发明实施例中,所述方法还包括:控制所述摄像设备在所述飞行模式中录制视频,并将视频数据发送至智能终端2,从而获取无人机航拍的视频数据。在一些实施例中,在各飞行模式中,无人机会将当前摄像设备拍摄的视频数据(也即原始数据流)进行实时存储,并对该原始数据流进行实时压缩,生成回传视频流发送给智能终端1,以便智能终端1对该无人机当前拍摄的图像实时显示。
步骤S103:在所述飞行模式中,获取目标对象的位置信息,并根据所述摄像设备拍摄到的画面中所识别到的目标对象,获得所述目标对象相对所述无人机的方位信息;
其中,目标对象的位置信息是指目标对象的绝对位置信息,例如目标对象在北东地坐标系下的坐标值。目标对象相对无人机的方位信息是该目标对象相对于无人机的方向,在一实施例中,该方位信息可以不包括目标对象与无人机的距离信息。
本发明实施例中,结合图2a、图4和图5,设定摄像设备拍摄到的画面的物理坐标系为XOY,其中,物理坐标系的心为所述拍摄设备的光轴位置,所述物理坐标系XOY包括X轴和Y轴。所述X轴与所述云台的偏航方向相对应,所述Y轴则与所述云台的俯仰方向相对应。
在一些实现方式中,所述获取目标对象的位置信息可包括以下步骤:获取包括至少两组拍摄信息的信息集合,并基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同,所述拍摄信息包括拍摄到目标对象时的拍摄位置信息和拍摄角度信息。
本发明的一种实施例中,在确定了拍摄到的画面中的目标对象后,在无人机的移动拍摄过程中,可以通过图像识别技术对画面进行分析识别,具体可以基于灰度、纹理等特征对拍摄得到的每一张画面进行图片识别,以找到目标对象并对该目标对象进行持续拍摄。
在对目标对象进行持续拍摄的过程中,可能存在目标对象丢失的情况,导致丢失原因包括多种,具体的,在目标对象被某个物体遮挡后,基于灰度、纹理等特征的图像识别可能无法找到目标对象,导致丢失该目标对象;或者,无人机移动后如果与目标对象的距离较远,使得目标对象在拍摄到的画面中的灰度、纹理等特征已经不足以从画面中识别出该目标对象,导致丢失该目标对象。当然还可能存在其他丢失目标对象的情况,例如摄像设备的镜头受到强光的照射,使得拍摄的画面中灰度、纹理等特征很弱,或者进行图像识别处理的模块出现故障等因素。需要说明的是,上述的丢失目标对象是指无法在画面中确定目标对象。
本发明实施例中,在检测到对目标对象的画面满足条件时,会记录在拍摄该满足条件的画面时的拍摄信息。具体的,对目标对象的画面满足条件是指:针对某次拍摄到的画面,若基于图像识别技术在该画面中能够准确地识别出目标对象。记录的此次拍摄时的拍摄信息包括:拍摄位置信息和拍摄角度信息,其中拍摄位置信息用于指示在摄像设备拍摄到目标对象时摄像设备的位置信息,该拍摄位置信息可以是无人机的定位信息,例如GPS坐标;本发明实施例的所述拍摄角度信息用于指示在摄像设备拍摄到目标对象时,目标对象相对摄像设备的方位,该方位可以基于云台的姿态角度(云台偏航角度yaw,俯仰角度pitch)和目标对象在拍摄到的画面中的显示位置综合进行计算确定的。
在无人机移动过程中,本发明实施例至少要检测出两次满足条件的画面,并记录对应的拍摄信息。记录的拍摄信息构成一个信息集合,以便于能够基于这些拍摄信息计算出目标对象的位置信息,方便在目标对象丢失时,或者在需要直接基于位置进行对象拍摄时,也能够在一定程度上满足用户的持续拍摄需求。在优选实施例中,所述信息集合中每组拍摄信息中包括的拍摄位置信息所对应的位置均不相同。
优选地,所述拍摄位置信息包括采集到的所述无人机的位置坐标,所述拍摄角度信息包括根据所述云台的姿态信息和所述目标对象在拍摄得到的画面中的位置信息计算得到的角度。具体的,针对其中的拍摄角度信息,如果拍摄到目标对象时目标对象是位于拍摄到的画面的中心区域,则对于拍摄角度信息中的俯仰角,可以是由云台
的俯仰角pitch,而拍摄角度信息中的偏航角则为云台的偏航角yaw。结合图2a和图2b,如果不在中心区域,则可以根据目标对象的中心点相对于画面物理坐标系的X轴的像素距离dp1(即图5中的d_rows)和水平视场角的大小,确定目标对象相对于画面中心的相对于画面X轴的偏移角度,并根据目标对象的中心点相对于画面物理坐标系的Y轴的像素距离dp2(即图5中的d_cols)和垂直视场角的大小确定目标对象相对于画面Y轴的偏移角度,对于拍摄角度信息中的俯仰角,可以是由云台的俯仰角pitch加上所述的相对于画面X轴的偏移角度,而拍摄角度信息中的偏航角则为云台的偏航角yaw加上相对于画面Y轴的偏移角度。具体的,如图2a和图2b所示,示出了画面的物理坐标系,摄像设备的水平视场角(HFOV)和垂直视场角(VFOV),基于目标对象的中心点相对于X轴和Y轴的像素距离所占的像素距离比例和对应的视场角,可以得到关于画面X轴的偏移角度和画面Y轴的偏移角度。另外,结合图6a、图6b和图6c,示出了摄像设备与拍摄场景的位置关系,可以了解目标对象与摄像设备的视场角(FOV)的关系。
在得到了信息集合后,如果需要基于位置来实现对控制无人机的飞行轨迹时,例如图像识别无法识别出目标对象,或者满足基于位置进行持续控制无人机的飞行轨迹的条件,则从信息集合中选取至少两组拍摄信息,从所述信息集合选取至少两组拍摄信息所采用的选取规则包括:基于拍摄信息中的拍摄位置信息计算得到的间隔距离来选取拍摄信息;和/或,基于拍摄信息中的拍摄角度信息计算得到的间隔角度来选取拍摄信息。其中,满足基于位置进行持续拍摄的条件可以包括:接收到用户发出的基于位置进行持续拍摄的控制指令,或者基于已经记录的信息集合中的信息能够较为准确地计算出目标对象的位置坐标。
本发明实施例以仅选取两组拍摄信息为例,来对计算目标对象的位置信息进行说明。具体的,如图3所示,在北东地坐标系上,目标对象的坐标为t(tx,ty),选取的第一组拍摄信息中的拍摄位置信息d1(d1x,d1y),拍摄角度信息中的偏航角为yaw1,第二组拍摄信息中的拍摄位置信息d2(d2x,d2y),拍摄角度信息中的偏航角为yaw2。基于两个拍摄位置的拍摄角度信息,计算得到k1=1/tan(yaw1),k2=1/tan(yaw2),进而得到d1到目标对象所在平面的距离为L1=d1x-k1*d1y,d2到目标对象所在平面的距离为L2=d2x-k2*d2y。进一步可以计算得到,所述目标对象t的坐标为:tx=k1*ty+L1,ty=(L1-L2)/(k2-k1)。同时,第一组拍摄信息的拍摄角度信息的俯仰角为pitch1,第二组拍摄信息的拍摄角度信息的俯仰角为pitch2。估计目标对象的高度为e1z,e2z,其中,e1z=d1z-L1*tan(pitch1),e2z=d1z-L2*tan(pitch2),基于估计的高度,可以计算得
到目标对象的高度tz=(e1z+e2z)/2。因此,最终得到的目标对象的三维坐标为t(tx,ty,tz)。
在本发明实施例中,目标对象的位置信息包括所述计算得到的坐标t。其中,d1和d2可以是无人机中的定位模块采集到的定位坐标,例如,无人机中的GPS定位模块得到的GPS坐标。而拍摄角度信息中的偏航角和俯仰角则是基于在拍摄到能够识别出目标对象的画面时,云台的偏航角和目标对象的画面位置相对于画面Y轴的距离、云台的俯仰角和目标对象的画面位置相对于画面X轴的距离分别计算得到,具体的计算方式可参考上述针对图2a和图2b的对应描述。
在一些实施例中,所述基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置信息,包括:基于至少三组拍摄信息,确定出至少两个所述目标对象的位置初始估计信息;根据各个位置初始估计信息确定出所述目标对象的位置信息。具体的,基于至少三组拍摄信息确定位置初始估计信息时,根据该至少三组拍摄信息中的任意两组拍摄信息可以确定一个位置初始估计信息,其中位置初始估计信息的计算可参考上述实施例中关于位置信息的计算方式。在本发明实施例中,确定的目标对象相对无人机的位置信息可以是从多个位置初始估计信息中随机选择的一个信息,或者是对多个位置初始估计信息所对应的位置坐标进行平均计算后的一个平均值。也可以是按照其他一些规则确定的位置信息,例如,将间隔距离最远、和/或间隔角度最大的两组拍摄信息计算得到的位置初始估计信息确定为位置信息。
其中可选地,当已经确定的各个位置初始估计信息中至少两个位置初始估计信息所对应位置之间的位置变化幅度满足预置变化幅度要求时,确定满足所述稳定条件。所述位置变化幅度主要是指位置之间的间隔距离,满足位置变化幅度要求主要包括:多个间隔距离均在一个预设的数值范围内。基于两个或者多个位置初始估计信息之间的位置变化幅度,可以确定计算得到的关于目标对象的位置估计是否稳定,位置变化幅度越小,说明计算得到的位置初始估计信息较为准确,反之,则表明选取的拍摄信息存在不准确的情况,得到的位置初始估计信息存在不准确的量,无法确定出准确的位置信息,进而不能基于该位置信息对拍摄角度进行调整,不能基于位置信息对目标对象进行持续拍摄。
进一步地,导致多个位置初始估计信息之间的位置变化幅度较大的情况包括多种,例如,目标对象处于静止状态,在获取上述的信息集合时,其中的一个或多组拍摄信息的拍摄位置信息或拍摄角度信息不准确,进而导致计算得到的位置信息不准确。
因此,在确定所述目标对象的位置信息时,基于计算得到的多个位置初始估计信息进行计算,例如上述的可以对多个位置初始估计信息进行平均计算后,得到的一个平均值作为所述目标对象的位置信息。
上面描述了获取目标对象的位置信息的一些实现方式。在一些例子中,所述获取目标对象的位置信息,包括:智能终端2的定位信息,所述智能终端2为与所述无人机进行通信的终端,所述位置信息为所述定位信息。其中可选地,该智能终端2为目标对象佩戴的GPS定位设备,所述GPS定位设备可以是以一定频率将其检测到的目标对象的定位信息发送至无人机侧1,也可以是无人机侧1在需要时询问所述GPS定位设备,从而获得所述目标对象的定位信息。
在某些实施例中,所述根据所述摄像设备拍摄到的画面中所识别到的目标对象,获得所述目标对象相对所述无人机的方位信息,包括:获取待跟踪的目标对象的特征信息;根据所述特征信息,基于图像识别技术在拍摄到的画面中识别目标对象,获得所述目标对象相对无人机的方位信息。
其中,基于图像识别技术在拍摄到的画面中识别目标对象的描述可参考前文中的描述,在此不再赘述。
步骤S104:根据所述位置信息和所述飞行模式,控制所述无人机的飞行轨迹;
在该步骤中,无人机能够根据目标对象的实际位置信息来按照开始指令中的飞行模式进行飞行,从而实现不同的飞行轨迹,进而获得难以拍摄到的角度的画面,更贴合用户需求。本实施例尤其适用于规律性较强的飞行轨迹,而通过手动操作摇杆难以控制无人机实现较为复杂尤其是规律性较强的飞行轨迹。
在某些实施例中,所述斜线模式对应的飞行策略可包括:根据所述位置信息,控制所述无人机先沿着水平面(即平行于地面的方向)飞行再沿着与水平面呈一定夹角的平面飞行。其中,所述夹角的大小可根据需要设定,例如,45°,从而在不同角度对目标对象进行拍摄,获得内容较为丰富的拍摄画面。另外需要说明的是,控制所述无人机先沿着水平面飞行是指所述无人机只存在水平方向的飞行速度,不存在垂直方向(即垂直于地面的方向)的飞行速度。在一具体地实现方中,所述控制所述无人机先沿着水平面飞行再沿着与水平面呈一定夹角的平面飞行的步骤可包括:控制所述无人机沿着水平面飞行;当确定出所述目标对象的最低点与无人机中心的连线以及目标对象的最高点分别与无人机中心的连线之间的夹角小于摄像设备的视场角的预设倍
数时,则根据所述位置信息,控制所述无人机沿着与水平面呈一定夹角的平面飞行,其中所述预设倍数<1,从而拍摄到构图更加美观的画面。在一具体实现方式中,所述控制所述无人机沿着与水平面呈一定夹角的平面飞行,包括:控制所述无人机沿着目标对象与无人机的连线方向远离所述目标对象飞行。其中,目标对象与无人机的连线可以指目标对象上任一位置与无人机上任一位置的连线。优选地,目标对象与无人机的连线是指目标对象的中心位置与无人机的中心位置的连线。其中,目标对象的中心位置和无人机的中心位置的确定规则可根据需要设定,以目标对象的中心位置为例,可使用一个规则的形状(例如长方形、正方形五边形、圆形等)包围目标对象,所述规则的形状的中心位置即为所述目标对象的中心位置。
在某些实施例中,所述斜线模式对应的飞行策略包括:根据所述位置信息,控制所述无人机远离所述目标对象以S形曲线飞行,从而拍摄到构图更加美观的画面。其中,S形曲线的弯曲程度可根据需要设定,以满足拍摄的需求。
在一具体实现方式中,以地面作为基准,目标对象的最低点、最高点即为目标对象上距离地面最近的位置和目标对象上距离地面远的位置。而所述目标对象的最低点与无人机中心的连线以及目标对象的最高点分别与无人机中心的连线之间的夹角也可称作目标对象相对无人机的角度,例如,目标对象为人物,则人物相对无人机的角度即为人物的最低点与无人机中心连线和人物最高点。在一优选地实现方式中,所述预设倍数为1/3,目标对象位于地面上。当目标对象相对无人机的角度小于1/3的摄像设备的视场角时,无人机则会沿着所述目标对象与无人机的连线方向远离所述目标对象飞行,从而能够使得画面中的地平线出现在画面的上1/3处(即地平线距离画面的顶部边缘的像素距离占画面物理坐标系Y方向的总像素距离的1/3),同时目标对象也能够出现在所拍摄的画面中,从而获得构图更为美观的拍摄画面。
在某些实施例中,所述环绕模式对应的飞行策略包括:根据所述位置信息,控制所述无人机按照指定距离环绕目标对象飞行。本实施例的无人机以目标对象为中心,环绕目标对象作圆周运动,从而实现360°方向对目标对象的拍摄。其中,环绕目标对象飞行的飞行轨迹的形状可根据需要进行选择。在一些例子中,所述环绕目标对象飞行的飞行轨迹可以为圆形。在一些例子中,所述环绕目标对象飞行的飞行轨迹可以为椭圆形。在一些例子中,所述环绕目标对象的飞行也可以为其它类似于圆形或者椭圆形的飞行轨迹。而所述指定距离用于指示无人机在每一位置处距离目标对象的距离。在一些例子中,所述指定距离为默认距离,可选地,环绕模式对应的飞行策略中包含
一默认距离。在一些例子中,所述指定距离为用户输入的距离信息,即由用户根据实际需要来设定无人机环绕目标对象飞行的距离信息,从而满足不同的用户需求。可选地,用户在智能终端2选择环绕模式后,可在智能终端2上输入环绕模式对应的指定距离,以指示无人机环绕目标对象飞行的距离信息。在一些例子中,所述指定距离为当前时刻所述无人机与目标对象之间的距离。可选地,可根据目标对象的位置信息以及无人机当前时刻的定位信息来计算当前时刻无人机与目标对象之间的距离,进一步提高无人机的智能化程度。
在某些实施例中,所述螺旋模式对应的飞行策略包括:根据所述位置信息,控制所述无人机以裴波那契螺旋线、等比螺旋线、等角螺旋线、阿基米德螺旋线或者其他形状的螺旋线为轨迹环绕目标对象飞行。本实施例的无人机以目标对象为中心,以裴波那契螺旋线、等比螺旋线、等角螺旋线、阿基米德螺旋线或者其他形状的螺旋线为轨迹飞行,从而拍摄到内容更加丰富的画面。在一些实现方式中,为从更多角度方向来拍摄目标对象,所述螺旋模式对应的飞行策略还包括:在根据所述位置信息,控制所述无人机以裴波那契螺旋线、等比螺旋线、等角螺旋线、阿基米德螺旋线或者其他形状的螺旋线为轨迹环绕目标对象飞行的同时,还控制无人机按照预设速率垂直地面上升或下降。本实施例通过控制无人机在垂直地面方向的飞行上升或者下降,从而从更多角度来拍摄目标对象,以提高所拍摄画面的内容丰富性。而无人机上升或者下降的飞行速度可根据实际需要设定。在一些实现方式中,所述无人机是根据所述位置信息,以裴波那契螺旋线、等比螺旋线、等角螺旋线或者阿基米德螺旋线为轨迹环绕目标对象沿着水平面飞行的,即无人机只存在水平方向的飞行速度,垂直方向的飞行速度为零,从而改变目标对象在画面中的大小,增加拍摄画面的丰富性。
在某些实施例中,所述冲天模式对应的飞行策略包括:根据所述位置信息,控制所述无人机按照预设角度倾斜飞行至相对所述目标对象的第一指定位置后,控制所述无人机垂直地面上升。其中,所述预设角、所述第一指定位置以及无人机上升的飞行速度均可根据实际需要设定,从而拍摄出多样化的画面。其中,所述第一指定位置是指距离所述目标对象的指定位置特定距离处,且所述第一指定位置位于所述目标对象的指定位置的特定方位。本实施例中,所述第一指定位置可由用户根据需要设定。在一些例子中,所述控制所述无人机按照预设角度倾斜飞行至相对所述目标对象的第一指定位置,包括:控制所述无人机沿着靠近所述目标对象的方向飞行到第一指定位置。在一些例子中,所述控制所述无人机按照预设角度倾斜飞行至相对所述目标对象的第一指定位置,包括:控制所述无人机沿着远离所述目标对象的方向飞行到第一指
定位置。
另外,在冲天模式下,可以控制所述无人机从任意起点(即无人机当前位置)飞行至第一指定位置,也可以先控制无人机飞行至一个特定起始点,再控制所述无人机从所述特定起始点飞行至第一指定位置。需要说明的是,在先控制无人机飞行至一个特定起始点,再控制所述无人机从所述特定起始点飞行至第一指定位置这种情况中,无人机上的摄像设备是在无人机位于所述特定起始点后才开始录像的。
在某些实施例中,所述彗星环绕模式对应的飞行策略包括:根据所述位置信息,控制所述无人机靠近目标对象飞行至第二指定位置,并从所述第二指定位置围绕目标对象飞行之后,远离目标对象飞行。其中,所述第二指定位置可根据需要设定,例如,第二指定位置为距离目标对象的指定位置特定距离处,且第二指定位置位于所述目标对象的指定位置的特定方位,从而拍摄出多样化的画面。另外,本实施例中,无人机飞行至第二指定位置后环绕目标对象飞行的圈数可根据需要设定,例如,一周、多周或者不足一周。
在一些实现方式中,在彗星环绕模式下,可以控制所述无人机从任意起点(即无人机的当前位置)靠近所述目标对象飞行至第二指定位置,从第二指定位置围绕所述目标对象飞行后远离所述目标对象飞行,
在一些实现方式中,在彗星环绕模式下,可以先控制所述无人机飞行至一个特定起始点,再控制所述无人机从所述特定起始点靠近所述目标对象飞行至第二指定位置,从第二指定位置围绕所述目标对象飞行后远离所述目标对象飞行。本实施例中,无人机上的摄像设备是在无人机位于所述特定起始点后才开始录像的。另外,本实施例中,无人机的飞行轨迹可以以目标对象作为基点,也可以以世界坐标系下的坐标进行控制。
上述实施例中,控制所述无人机实时相应飞行策略对应的飞行轨迹均是以目标对象在画面中为前提。在其他实施例中,当获得目标对象的位置信息后,飞行器也可以预先飞行一段不看向目标对象的飞行轨迹,再朝向目标对象的位置信息飞行,从而满足不同的飞行需求。
步骤S105:根据所述方位信息,控制所述云台的姿态,使得所述目标对象处于所述摄像设备所拍摄的画面中。
在某些实施例中,需要通过控制云台姿态使得目标对象在所拍摄的画面中始终
处于预设位置,从而使得所述目标对象始终出现在所拍摄的画面中。所述方位信息则可为所述预设位置。在一些例子中,所述目标对象位于所拍摄的画面中的预设位置处是指所述目标对象的指定位置位于所拍摄的画面中的预设位置。优选地,所述目标对象的指定位置是指所述目标对象的中心位置,所述目标对象位于所拍摄的画面中的预设位置处是指所述目标对象的中心位置在所拍摄的画面中的预设位置。在一些例子中,所述预设位置可由用户直接点击智能终端2的用户界面上用于显示所拍摄画面的区域中的任一位置而产生,即所述指定位置为所述用户坐用户界面上输入的点击位置。在一些例子中,所述预设位置可选择为默认的所述目标对象的指定位置显示在所拍摄的画面中的位置。
在某些实施例中,为获得更好的构图效果,需要通过控制云台姿态使得目标对象显示在所拍摄的画面中的尺寸大小始终为预设尺寸的大小。所述方位信息则可为所述预设尺寸的中心位置信息或者所述预设尺寸对应区域中的其他位置信息(例如顶角位置信息)。本实施例中,所述目标对象显示在所拍摄的画面中的尺寸大小是指所述目标对象显示在所述拍摄的画面中的像素高度和像素宽度的乘积大小。在一些例子中,所述预设尺寸可由用户直接在智能终端2的用户界面上输入的尺寸框。在一些例子中,所述预设尺寸可以为默认的尺寸框。无论是用户设定的尺寸框还是默认的尺寸框,在后续的拍摄过程中,所述目标对象在所拍摄的画面中是位于所述尺寸框内的。本实施例中,所述尺寸框的尺寸大小设计成能够刚好将所拍摄的画面中的目标对象包围住即可,从而获得满足用户需求的构图。可选地,所述尺寸框为长方形、正方形等规则的形状。
本发明实施例中,在无人机的飞行轨迹确定后,可根据目标对象在画面中的实际位置相对待显示位置的偏差来控制云台的姿态,从而使得目标对象在所拍摄的画面中处于待显示位置(即预设位置),即保持目标对象显示在所拍摄画面中的位置为预设位置。具体地,若目标对象在画面中的实际位置相对待显示位置发生左右偏移,则通过控制云台的偏航角以使得目标对象保持在待显示位置;若目标对象在画面中的实际位置相对待显示位置发生上下偏移,则通过控制云台的俯仰角以使得目标对象保持在待显示位置。
在一具体实现方式中,待跟踪的目标对象的中心位置待显示在所拍摄画面中的位置坐标为P(u,v),其中u为X轴的像素坐标,v为Y轴的像素坐标,画面的大小为(W,H),W为画面像素宽度,H为画面像素高度。若设定画面的左上角为原点,
则云台的偏航轴转动的角速度Yx为:
Yx=μ*(u-w/2),
其中μ为常数,且μ∈R(R代表实数);
云台的俯仰轴转动的角速度Yy为:
Yy=ω*(v-h/2),
其中ω为常数,且ω∈R。
为保持目标对象在所拍摄画面中的尺寸为预设尺寸,在无人机的飞行轨迹确定后,可根据目标对象在所拍摄的画面中的大小来调节摄像设备的焦距。在一具体实施例中,设定初始化时刻(智能终端2未发送开始指令至无人机侧1之前的某一时刻),目标对象在所拍摄画面中的像素面积(即预设尺寸)为S(S定义为目标对象的像素高度乘以目标对象的像素宽度),跟踪目标对象的过程中,目标对象在所拍摄的画面中的像素面积为s,则摄像设备的焦距的调节速度F为:
F=γ*(1-s/S),
其中γ为常数,且γ∈R(R代表实数)。
本发明实施例中,在跟踪目标对象的过程中,若目标对象在所拍摄的画面中的像素面积小于预设尺寸的像素面积,则调节摄像设备的焦距变长;否则,调节摄像设备的焦距变短。
在一些实施例中,所述控制所述云台的姿态包括:控制所述云台的俯仰角、偏航角和横滚角中的至少一个,从而控制目标对象在所拍摄的画面中的位置。本实施例中,所述云台为三轴云台,通过控制三轴云台的俯仰轴、偏航轴和横滚轴中的至少一个即可改变云台的姿态。
在一些实施例中,所述云台和所述无人机在航向轴上相互固定;所述控制云台的姿态,包括:控制云台的俯仰角和/或横滚角;控制所述无人机的航向角,以控制所述云台的偏航角。本实施例中,所述云台自身包括俯仰角和横滚角两个自由度,云台的另一个自由度偏航角由无人机的航向角替代,从而通过控制无人机的航向角来实现对云台偏航角的控制。
在一些实施例中,为了实现所拍摄画面构图的美观性,云台的俯仰角和/或偏
航角的确定过程包括:根据背景标识的指定位置待显示在所拍摄的画面中的位置,确定所述云台的俯仰角和偏航角中的至少一个。本实施例通过设定背景标识的指定位置待显示在所述拍摄的画面的位置,从而满足多样化的构图需求,增强所拍摄画面的丰富性与美观性。其中,所述背景标识可包括地面、天空、海面、建筑物和其它背景标识中的至少一种。参见图4,以地面作为背景标识为例,用户可以设定地平线近似垂直于所拍摄的画面物理坐标系Y轴且位于Y轴的上1/3处,则无人机可根据地平线待显示在所拍摄的画面中的位置来计算云台的俯仰角,进而控制云台的俯仰角,从而使得地平线显示在所拍摄的画面中Y轴的上1/3处,以获得更好地构图。
在某些可行的实现方式中,所述根据背景标识的指定位置待显示在所拍摄的画面中的位置,确定所述云台的俯仰角和偏航角中的至少一个,包括:获取所拍摄的画面在第一方向上的第一总像素距离以及所述背景标识的指定位置待显示在所拍摄的画面的位置在第一方向上至画面边缘的像素距离,其中所述第一方向与云台的俯仰方向或者偏航方向对应;根据所述第一总像素距离、所述像素距离以及摄像设备的垂直视场角大小或水平视场角大小,确定所述云台的俯仰角和/或偏航角。在一具体的实现方式中,所述第一方向为所拍摄的画面的Y轴方向的上边缘的像素距离为row,所述第一总像素距离(即画面高度)为row_size,相机的垂直视场角为VFOV,在一定的简化条件下,云台的俯仰角pitch的计算公式为:
pitch=(row_size/row_size-0.5)*VFOV。
在一些实施例中,无需设定背景标识在画面中的位置,例如,所需要的构图是无地平线的俯拍,则云台的俯仰角和/或偏航角的确定过程包括:获取预设的拍摄位置的高度角和/或水平角;确定所述目标对象相在所拍摄的画面第一方向的中心线(即所拍摄的画面中,物理坐标系的X轴或Y轴)的偏移角,其中所述第一方向与所述云台的俯仰方向或偏航方向对应;根据所述偏移角和所述高度角和/或水平角,确定所述云台的俯仰角和/或偏航角。本实施例中,拍摄位置的高度角和/或水平角是由用户直接设定的,假设拍摄位置对应的拍摄位置信息(x,y,z),该拍摄位置指向目标对象确定一个方向角,则高度角定义为arctan(z/x),水平角度定义为arctan(y/x)。用户设定高度角即设定x和z的比值,用户设定水平角即设定x和y的比值。在某些实施例中,所述确定所述目标对象相对于所拍摄的画面第一方向的中心线的偏移角,包括:获取所拍摄的画面在第一方向上的第一总像素距离以及摄像设备的垂直视场角/水平视场角;确定所述目标对象距离所拍摄的画面第一方向的中心线的第一偏移像素距离;根
据所述第一总像素距离、垂直视场角/水平视场角以及所述第一偏移像素距离,确定所述目标对象相对于所拍摄的画面第一方向的中心线的偏移角。需要说明的是,在计算云台的俯仰角时,是根据垂直视场角来确定所述目标对象相对于所拍摄的画面第一方向的中心线的偏移角的;而在计算云台的偏航角时,是根据水平视场角来确定所述目标对象相对于所拍摄的画面第一方向的中心线的偏移角的。
在一具体的实现方式中,参见图5,用户设定好目标对象待显示在所拍摄的画面中的位置后,可确定出目标对象的中心距离X轴的第一偏移像素距离为d_rows,用户设定的拍摄位置的高度角为theta,画面的第一总像素距离(即画面的高度)为row_size,摄像设备的垂直视场角为VFOV,则云台的俯仰角pitch的计算公式为:
pitch=-theta-d_rows/row_size*VFOV;
或者,pitch=-arctan(z/x)-d_row/row_size*VFOV。
在一些实施例中,所述根据所述位置信息和所述飞行模式,控制所述无人机的飞行轨迹,包括:确定所述目标对象与摄像设备之间的距离;根据所述位置信息、所述飞行模式以及目标对象与摄像设备之间的距离,控制所述无人机的飞行轨迹,从而使得目标对象待显示在画面中的高度为特定高度,以满足用户的构图需求。在某些可行的实现方式中,所述确定所述目标对象与摄像设备之间的距离,包括:获取目标对象的实际高度、所拍摄的画面在第一方向上的第一总像素距离;获取目标对象的实际高度待显示在所拍摄的画面第一方向上对应的像素距离,其中所述第一方向与所述云台的俯仰方向对应;根据所述目标对象的实际高度、第一总像素距离以及目标对象的实际高度在所拍摄的画面第一方向上对应的像素距离,确定所述目标对象与摄像设备之间的距离。在一具体地实现方式中,又参见图5,目标对象的实际高度为h,无人机与目标对象之间的距离定义为d(d=sqrt(x*x+y*y+z*z)),用户设定的拍摄位置的高度角为theta,画面的高度为row_row,拍摄设备的垂直视场角为VFOV。若构图中,用户希望目标对象待显示在所拍摄的画面中的高度(即在Y轴方向上的高度)为tar_rows,则距离d满足:
cos(theta)*h/(2*d)=tan(tar_rows/(2*row_siz)*VFOV)。
在一些实施例中,若构图中,目标对象无需显示在所拍摄的画面的水平(即X轴)中央,则为了完成所需构图,所述确定所述目标对象与摄像设备之间的距离,之后还包括:获取预设的拍摄位置的高度角、摄像设备的水平视场角、所拍摄的画面在
第二方向上的第二总像素距离,其中所述第二方向与所述云台的偏航方向对应;确定所述目标对象距离所拍摄的画面中第二方向的中心线的第二像素偏移距离;根据所述第二像素偏移距离、所述高度角、所述水平视场角、所述第二总像素距离以及所述目标对象与摄像设备之间的距离,确定所述云台在俯仰方向上的移动距离;根据所述云台在俯仰方向上的移动距离,控制所述云台的姿态。在一具体地实现方式中,又参见图5,目标对象距离X轴的第二像素偏移距离为d_col,画面在X轴方向的第二总像素距离为col_size,相机的水平视场角为HFOV,无人机与目标之间的距离为d,用户设定的拍摄位置的高度角为theta,则云台在俯仰方向上的移动距离y为:
y=sin(d_col/col_size*HFOV)*d*cos(theta)。
在一些实施例中,若构图中,目标对象无需显示在所拍摄的画面的水平(即X轴)中央,则为了完成所需构图,所述控制云台的姿态,包括:获取摄像设备的水平视场角、所拍摄的画面的在第二方向上的第二总像素距离,其中所述第二方向与所述云台的偏航方向对应;确定所述目标对象距离所拍摄的画面中第二方向的中心线的第二像素偏移距离;根据所述第二总像素距离、水平视场角和所述第二像素偏移距离,确定所述云台的偏航角;根据所述偏航角,控制所述云台的姿态。在一具体地实现方式中,又参见图5,目标对象距离X轴的第二像素偏移距离为d_col,画面在X轴方向的第二总像素距离为col_size,相机的水平视场角为HFOV,则云台的偏航角为d_col/col_size*HFOV。
上述实施例的构图是通过目标对象或者背景标识待显示在所拍摄的画面中的位置来作为构图的依据,在其他一些实施例中,也可以通过CNN(Convolutional Neural Network,卷积神经网络)分类算法识别出天空、建筑物、海面等背景标识,从而更好地构图。
以上均通过目标对象的坐标系XOY作为参考,来推断出目标对象和无人机的位置关系。而无人机飞行时,还可以通过其他手段来推测出目标对象和无人机的位置关系。在一些例子中,操作无人机的用户即可目标对象,无人机从用户手掌中起飞,忽略无人机和用户之间的位置差别,假定无人机和目标对象之间的位置是一直线。在一些例子中,操作无人机的用户即可目标对象,无人机从用户的手掌中扫脸起飞,无人机的摄像设备安装在机身的正前方。扫脸时,用户需要伸出双臂,使摄像设备正对用户的面部,则通过假设手臂的通常长度,通过人脸的大小,可以推测出飞机和用户的位置关系。
本发明实施例中,通过设置飞行模式,使得无人机能够按照设置的飞行模式和目标对象的位置信息自主飞行,从而无人机可实现较为复杂的飞行轨迹,特别是规律性较强的飞行轨迹;并通过图像识别获得目标对象相对于无人机的方位信息,从而控制云台的姿态,使得目标对象处于所拍摄的画面中;无需操作者手动控制即可实现对无人机和云台的控制,拍摄出的画面更加流畅、构图更加丰富和精确。
在某些实施例中,当基于图像识别的方式无法识别出所拍摄的画面中的目标对象时,会导致无法确定出目标对象相对无人机的方位信息,从而导致无人机不能根据方位信息来控制云台的姿态,以使得目标对象位于所拍摄的画面中。而为了在目标对象丢失后能够继续对目标对象进行跟踪拍摄,在一实施例中,所述方法还包括:当判断出不能在所述画面中识别出所述目标对象时,则将所述根据所述方位信息来控制所述云台的姿态的步骤替换成根据所述位置信息来控制所述云台的姿态的步骤。
在一些实施例中,所述根据所述位置信息和所述飞行模式,控制所述无人机的飞行轨迹,之后还包括:控制所述无人机运动至复位位置。本实施例中,无人机根据所述飞行模式中的飞行策略完成飞行后,会自动运动至复位位置,从而使得无人机一直处于相同的起飞位置。其中,所述复位位置可为无人机通过GPS定位获得的某一定位坐标位置。需要说明的是,无人机在运动至复位位置的过程中,若接收到外部设备(例如控制无人机工作的遥控设备)发送的打杆操作信号,则会立马终止当前运动至复位位置的操作。
在一些实施例中,所述方法还包括:若接收到外部设备发送的打杆操作信号,则根据所述打杆操作信号来控制无人机的飞行和所述云台的姿态中的至少一种。本实施例中,打杆操作信号即用户通过操作控制无人机的遥控设备来产生。可选地,所述打杆操作信号可以包括控制无人机垂直地面上升或下降的信号、控制无人机远离或靠近目标对象的信号、控制无人机的飞行速度、控制云台偏航角的信号、控制无人机机身旋转的信号和控制其他无人机参数、云台参数中的至少一种。
参见图7,遥控无人机的遥控设备包括两副摇杆,每副摇杆包括四个自由度的调节方向。其中一副摇杆包括上升/下降和左旋转/右旋转的操作,另一副摇杆包括前/后和左/右的操作。其中,上升/下降对应无人机的高度上升/下降操作,左旋转/右旋转对应云台的yaw,左/右对应云台的roll,前/后对应云台的pitch。
无人机处于环绕模式下,控制左旋转/右旋转,分别对应目标对象在所拍摄的画面中的左右构图,即目标对象在所拍摄的画面中的左右位置;控制前/后,分别对应
无人机相对目标对象的环绕半径的扩大和缩小;控制左/右,分别对应无人机绕目标对象环绕的飞行速度的加快和减慢;控制上升/下降,分别对应无人机绕目标对象环绕时无人机高度(垂直地面方向)的上升和下降。
无人机处于斜线模式下,控制左旋转/右旋转,分别对应目标对象在所拍摄的画面中的左右构图,即目标对象在所拍摄的画面中的左右位置;控制前/后,分别对应无人机飞行速度的加快和减慢;控制左/右和上升/下降均为无效操作。
无人机处于冲天模式下,控制左旋转/右旋转,对应无人机机身的旋转,用于控制机身的旋转,从而实现对拍摄设备的镜头的旋转,以获得一个一目标对象为中心的旋转镜头,所拍摄的画面美观性更强;控制前/后和左/右均为无效操作;控制上升/下降,分别对应无人机上升速度的加快和减慢。
无人机处于螺旋模式下,控制左旋转/右旋转,分别对应目标对象在所拍摄的画面中的左右构图,即目标对象在所拍摄的画面中的左右位置;控制前/后,分别对应螺旋半径的扩大和缩小;控制左/右,分别对应螺旋飞行的横向(即平行地面的方向)飞行速度的加快和减慢;控制上升/下降,分别对应无人机螺旋上升速度的加快和减慢,或者对应无人机螺旋下降速度的加快和减慢。
实施例二
本发明实施例提供了一种拍摄控制方法,所述方法可应用于安装有APP的智能终端2。本实施例中,所述智能终端可与无人机通信连接。
参见图8,所述拍摄方法可以包括以下步骤:
步骤S801:接收用户指令;
其中,用户指令可直接由用户在智能终端2输入。在一具体实现方式中,智能终端2包括一供用户输入用户指令的APP(应用软件)。可选地,所述APP可用于显示无人机回传的画面。
在某些实施例中,所述用户指令包括:确定待识别的目标对象。本实施例中,确定待识别的目标对象之后,所述方法还包括:识别当前显示的画面中所述待跟踪的目标对象的特征信息,所述特征信息为所述目标对象待显示在所拍摄的画面中的预设位置或者预设尺寸。在一些例子中,智能终端的用户界面会实时显示摄像设备当前时刻所拍摄的画面,用户直接在当前时刻画面上点击待识别的目标对象,智能终端即可基于图像识别技术对用户选中的目标对象进行识别,获得所述待识别的目标对象的特
征信息,所述目标对象的特征信息可以为目标对象的预设位置,也可以为目标对象的预设尺寸,还可以为灰度、纹理等信息,从而方便后续对目标对象的跟踪。在一些例子中,用户选择待识别的目标对象的方式包括:用户直接点击智能终端的用户界面当前时刻的画面中某一物体,则该某一物体即为待识别的目标对象。在一些例子中,用户选择待识别的目标对象的方式包括:用户采用尺寸框的形式将智能终端的用户界面当前时刻的画面中某一物体包围住,则该包围住的某一物体即为待识别的目标对象。优选地,所述尺寸框刚好能够包围所述待识别的目标对象,或者,所述尺寸框为能够包围所述待识别的目标对象的最小规则图形框(例如方框或者圆形框)。在某些实施例中,所述特征信息可包括所述目标对象在所拍摄的画面中的预设位置或者预设尺寸,从而指示无人机侧1控制云台姿态使得目标对象在所拍摄的画面中始终处于所述预设位置,并且目标对象在所拍摄的画面中始终为所述预设尺寸的大小,以获得更好的构图效果。
其中,所述目标对象在所拍摄的画面中的预设位置是指用户选中待识别的目标对象时,所述目标对象的中心位置(也可以为目标对象的其他位置)在当前时刻(即用户选择待识别的目标对象的时刻)画面中的预设位置;所述目标对象在所拍摄的画面中的预设尺寸是指所述目标对象在当前时刻所述拍摄的画面中的像素高度和像素宽度的乘积大小。在某些实施例中,为使得所拍摄的画面的构图更美观且提高所拍摄画面的内容的丰富性,所述用户指令还包括:背景标识的指定位置待显示值所拍摄画面中的位置。本实施例通过设定背景标识的指定位置待显示在所述拍摄的画面的位置,从而满足多样化的构图需求,增强所拍摄画面的丰富性与美观性。具体地,所述背景标识可包括地面、天空、海面、建筑物和其他背景标识中的至少一种。
在某些实施例中,所述用户指令还包括:拍摄位置的高度角或水平角,以进一步确定云台的俯仰角或偏航角,从而更好地构图,使得目标对象处于所拍摄的画面中的预设位置处。
在某些实施例中,所述用户指令还包括:无人机的飞行路程和飞行速度中的至少一种,从而指示无人机按照所述飞行路程和/或所述飞行速度来自动完成每一种飞行模式下的飞行。其中,每一种飞行模式对应的飞行路程和飞行速度可根据实际需要设定,从而满足用户多样化的需求。
步骤S802:根据所述用户指令生成开始指令,所述开始指令包含无人机的飞行模式,所述开始指令用于触发所述无人机依据所述飞行模式自主飞行;
在某些实施例中,所述飞行模式为默认飞行模式。其中,所述默认飞行模式可为预设的一种飞行模式或者预设的多种飞行模式的组合。具体地,智能终端2在接收到用户指令(例如用户按下某一操作按钮或者输入某一指令信息)后,选择所述默认飞行模式并根据所述默认飞行模式来生成所述开始指令。
在某些实施例中,所述用户指令包括模式选择指令,所述模式选择指令包含用于指示无人机飞行的飞行模式。本实施例中,用户可根据需要选择无人机的飞行模式。具体地,智能终端2预设设定有多种飞行模式供用户选择,用户可根据需要选择智能终端2所提供的多种可选择的飞行模式中的一种或多种,从而指示无人机实现不同飞行模式的飞行,以获得不同视角的拍摄画面。
所述飞行模式可包括斜线模式、环绕模式、螺旋模式、冲天模式、彗星环绕模式和其他飞行模式(例如直线模式)中的至少一种,每一种飞行模式包括对应的飞行策略,所述飞行策略用于指示所述无人机的飞行。其中,每一种飞行模式对应的飞行策略可参见上述实施例一中的描述。
步骤S803:发送所述开始指令至无人机。
步骤S804:接收并存储所述无人机在所述飞行模式下回传的回传视频流。
在各飞行模式中,无人机会将当前摄像设备拍摄的视频数据(也即原始数据流)进行实时存储,并对该原始数据流进行实时压缩,生成回传视频流发送给智能终端1,以便智能终端1对该无人机当前拍摄的图像实时显示。
在步骤S804中,智能终端2在接收到所述回传视频流后会进行缓存,从而获得无人机在所述飞行模式下的完整的回传视频流。
本发明实施例中,通过用户在智能终端2上设置飞行模式,使得无人机能够按照设置的飞行模式和目标对象的位置信息自主飞行,从而无人机可实现较为复杂的飞行轨迹,特别是规律性较强的飞行轨迹;并使得无人机通过图像识别获得目标对象相对于无人机的方位信息,从而控制云台的姿态,使得目标对象处于所拍摄的画面中;无需操作者手动控制遥控设备即可实现对无人机和云台的控制,拍摄出的画面更加流畅、构图更加丰富和精确。
另外,在无人机领域,无人机飞行过程中传输的回传视频流一般是供用户直接观看的,由于无人机飞行过程中传输至地面设备(例如智能手机、平板电脑等智能终端)的视频流一般较大,用户难以直接在朋友圈等社交网络分享无人机传输的回传视
频。目前,大都需要用户进行手动剪切无人机传输的回传视频流,从而获得便于分享的小视频,而用户手动剪切获得小视频的方式可能不够专业,获得小视频特效较差。为解决上述问题,参见图9,在步骤S804之后还可包括以下步骤:
步骤S901:对所述回传视频流进行处理,生成第一指定时长的视频画面,其中所述第一指定时长小于所述回传视频流的时长。
本实施例无需用户手动剪切,通过对回传视频流进行处理,即可将较大的回传视频流转换成易于分享的小视频(第一预设时长的视频画面),用户能够快捷地在朋友圈等社交媒体中分享。所述第一预设时长可根据需要设定,例如10秒,从而获得便于分享的小视频。
另外,还需要说明的是,本发明实施例中的小视频是指时长小于特定时长(可根据需要设定)的视频。当然,在其他一些例子中,小视频也可至容量小于特定容量(可根据需要设定)的视频。
步骤S901是在判断出所述无人机满足指定条件后执行的。
在一些例子中,所述指定条件包括:所述无人机完成所述飞行模式的飞行。本实施例中,无人机完成所述飞行模式的飞行的同时,智能终端2会接收到完整的无人机在飞行模式下的回传视频流,从而可方便用户根据回传视频流的全部信息来选择处理的方向。
在一实施例中,智能终端2根据无人机返回的回传视频流判断所述无人机是否完成所述飞行模式的飞行。可选地,无人机在其位于所述飞行模式下飞行时所拍摄到的画面中添加所述飞行模式对应的飞行状态信息,并将带有飞行状态信息的原始数据流进行实时压缩等处理后传输至智能终端2,即智能终端2获得的回传视频流也会带有飞行状态信息。智能终端2根据所述回传视频流中的飞行状态信息,即可判断出所述无人机是否完成所述飞行模式的飞行。具体地,若智能终端2判断出所述回传视频流中的飞行状态信息由所述飞行模式对应的飞行状态信息变化成另一飞行模式的飞行状态信息或者所述回传视频流从存在所述飞行模式的飞行状态信息变化成无飞行状态信息的回传视频流,即表明所述无人机完成了所述飞行模式的飞行。
在一实施例中,无人机在完成所述飞行模式的飞行后,智能终端2会接收到无人机发送的该飞行模式结束的信息,从而判断出所述无人机完成所述飞行模式的飞行。
在一些例子中,所述指定条件包括:接收到无人机按照所述飞行模式飞行时传
输的回传视频流。本实施例中,智能终端2在接收到无人机按照所述飞行模式飞行时传输的回传视频流后立即执行步骤S901,无需等待无人机执行完所述飞行模式,从而节省小视频生成的时间,所述智能终端2在无人机结束所述飞行模式的飞行的同时即可生成小视频。
为减小回传视频流的大小,生成易于分享的小视频,在一些例子中,步骤S901包括:对所述回传视频流进行抽帧处理,生成第一预设时长的视频画面。
具体地,在一实施例中,所述对所述回传视频流进行抽帧处理,生成第一预设时长的视频画面,包括:根据无人机的飞行模式、飞行速度和飞行方向中的至少一种对所述视频流进行抽帧处理,生成第一预设时长的视频画面。通过将待生成的小视频与无人机的飞行模式、飞行速度和飞行方向中的至少一种相关联,从而使得待生成的小视频与无人机拍摄获得的画面的贴合度更高,并使得待生成的小视频的画面更加丰富、构图与无人机的飞行参数更匹配。
在另一实施例中,所述对所述回传视频流进行抽帧处理,生成第一预设时长的视频画面,包括:根据所述回传视频流的时长以及帧数,对所述回传视频流进行抽帧处理,生成第一预设时长的视频画面。本实施例在减小回传视频流的大小,生成易于分享的小视频同时,能够根据所述回传视频流帧数,获得与所述回传视频流贴合度更高的小视频,以呈现较为完整的无人机的拍摄画面。
可选地,为较为完整地呈现出无人机所拍摄的画面,所述根据所述回传视频流的时长以及帧数,对所述回传视频流进行抽帧处理,生成第一预设时长的视频画面,包括:将所述回传视频流拆分成多段,获得多段回传视频流;对所述多段回传视频流中的部分回传视频流进行抽帧处理,获得相应段回传视频流的抽帧图像;根据所述多段回传视频流中的另一部分回传视频流以及所获得的相应段回传视频流的抽帧图像,生成第一预设时长的视频画面。
在一优选地实施例中,为保留无人机拍摄画面的开始部分和结束部分,以确保所生成的小视频的完整性,所述将所述回传视频流拆分成多段,包括:按照拍摄时间的先后顺序将所述回传视频流拆分成至少三段。所述对所述多段回传视频流中的部分回传视频流进行抽帧处理,获得相应段回传视频流的抽帧图像,包括:对所述至少三段回传视频流中拍摄时间位于中间时间段的回传视频流进行抽帧处理,获得该段回传视频流对应的抽帧图像。
另外,为获得较为流畅的视频画面,所述对多段回传视频流中的部分回传视频流进行抽帧处理,获得相应段回传视频流的抽帧图像,包括:按照预设的抽帧速率对相应段回传视频流进行抽帧处理,获得所述相应段回传视频流对应的抽帧图像。本实施例中,对相应段回传视频流是进行匀速抽帧的,从而避免抽帧不均匀而导致视频画面的不连续。可选地,多段回传视频流的抽帧速率相同,进一步保证所生成的视频画面的连续性,从而保证生成的视频画面较为流畅。
在一些例子中,步骤S901还可包括:对所述回传视频流进一步进行压缩处理,从而减小回传视频流的大小,获得易于分享的视频画面。
在某些实施例中,所述方法还包括:发送所述视频画面至远程终端服务器,从而实现小视频的分享。其中,所述远程终端服务器可为第三方网站例如优酷、土豆等视频网站或者朋友圈等社交媒体网络。在一些例子中,所述发送所述视频画面至远程终端服务器是在步骤S901完成之后立即执行的,从而实现小视频的快速分享。在一些例子中,所述发送所述视频画面至远程终端服务器之前,还包括:接收用户输入的分享指令,其中,所述分享指令包括对应的远程终端服务器;根据所述分享指令,发送所述视频画面至远程终端服务器,从而根据用户的实际需求来灵活进行小视频的分享。
在无人机飞行的过程中,若无人机与智能终端2之间的传输链路的信号较差,则无人机通过图传的方式发送至智能终端2的回传视频流的质量也会较差,相应地,生成的小视频的质量也较差。
针对图传的回传视频流的质量差的问题,参见图10,本发明实施例中,所述视频画面生成方法还可包括以下步骤:
步骤S1001:获取所述无人机拍摄的原始数据流;
在各飞行模式中,无人机会将当前摄像设备拍摄的视频数据(也即原始数据流)进行实时存储,并对该原始数据流进行实时压缩,生成回传视频流通过图传方式发送给智能终端1,以便智能终端1对该无人机当前拍摄的图像实时显示。为提高生成的小视频的质量,智能终端1还可以获取到无人机所存储的原始数据流,采用该原始数据流来生成小视频。
本实施例中,无人机拍摄的原始数据流是存储在无人机或者摄像设备的存储单元中的。本实施例中,智能终端2可直接读取存储单元中存储的无人机拍摄的原始数据流。需要说明的是,步骤S1001与步骤S901中,数据传输的方式的区别在于:步骤
S901中,无人机在飞行的过程中通过无线通信的方式将拍摄的视频流发送至智能终端2,由于无人机与智能设备之间的通信距离较远,从而可能导致无人机与智能终端2之间的通信质量较差;而步骤S1001中,智能终端2可通过有线通信方式读取存储单元中的原始数据流或者在保证无线通信质量较好的情况下,智能终端2直接读取存储单元中的原始数据流,从而保证智能终端2能够获得画面质量较好的原始数据流。可选地,所述存储单元为SD卡或者硬盘或者磁盘等能够存储数据的器件。
在一实施例中,存储单元所存储的视频数据中,原始数据流还带有相应的视频标签。智能终端1根据该视频标签从存储单元中查找到相应的原始数据流。具体地,步骤S1001是在所述无人机满足指定条件后执行的。所述指定条件包括:所述无人机完成所述飞行模式的飞行。具体地,在无人机结束所述飞行模式的飞行后,智能终端2直接读取存储单元存储的无人机所拍摄的原始数据流,从而获得画面质量较好的原始数据流进行处理生成画面质量较好的小视频。
步骤S1002:根据所述原始数据流,确定所述无人机在所述飞行模式下所拍摄的原始视频流;
在一些例子中,步骤S1002包括:根据在所述飞行模式对应的视频流标签,确定所述原始数据流中所述无人机在所述飞行模式下所拍摄的原始视频流,通过视频标签,能够较为准确且快速地从大量的视频流中获得无人机在所述飞行模式下所拍摄的原始视频流,从而更加快速地生成所述飞行模式下的小视频。
步骤S1003:对所述原始视频流进行处理,生成第二预设时长的新视频画面,其中所述第二预设时长小于所述原始视频流的时长。
在一些例子中,步骤S1001、步骤S1002和步骤S1003是在判断出根据回传视频流获得的视频画面的分辨率小于预设分辨率后执行的,从而获得质量较高的新视频画面。
在某些实施例中,所述方法还包括:发送所述新视频画面至远程终端服务器,从而实现小视频的分享。其中,所述远程终端服务器可为第三方网站例如优酷、土豆等视频网站或者朋友圈等社交媒体网络。在一些例子中,所述发送所述视频画面至远程终端服务器是在步骤S1003完成之后立即执行的,从而实现小视频的快速分享。在一些例子中,所述送所述新视频画面至远程终端服务器之前,还包括:接收用户输入的分享指令,其中,所述分享指令包括对应的远程终端服务器;根据所述分享指令,
发送所述视频画面至远程终端服务器,从而根据用户的实际需求来灵活进行小视频的分享。
在一些例子中,智能终端2同时执行步骤S1001、步骤S1002和步骤S1003以及步骤S804和步骤S901,从而获得两个视频画面供用户进行选择,增加选择的丰富性。
在某些实施例中,所述方法还包括:根据步骤S901生成的视频画面和步骤S1003生成的新视频画面,发送两者中的至少一个至远程终端服务器。在一些例子中,可将步骤S901生成的视频画面和步骤S1003生成的新视频画面中分辨率较大的发送至远程终端服务器。在一些例子中,所述根据步骤S901生成的视频画面和步骤S1003生成的新视频画面,发送两者中的至少一个至远程终端服务器之前,还包括:接收用户输入的分享指令,其中,所述分享指令包括对应的远程终端服务器以及待分享的视频标识,所述待分享的视频标识为所述步骤S901生成的视频画面和所述骤S1003生成的新视频画面中的至少一个所对应的表述;根据所述分享指令,发送步骤S901生成的视频画面和步骤S1003生成的新视频画面中的一个至远程终端服务器,从而根据用户的实际需求来灵活进行小视频的分享。
本发明实施例中,所述第二预设时长可根据需要设定,可选地所述第二预设时长与所述第一预设时长相等。
另外,在步骤S1003中对所述原始视频流进行处理所采用的策略与步骤S802中对回传视频流进行处理所采用的策略类似,具体可参见骤S901中对回传视频流进行处理所采用的策略,这里不再赘述。
其未展开的部分请参考以上实施例一中拍摄控制方法相同或类似的部分,此处不再赘述。
实施例三
对应于实施例一的拍摄控制方法,本发明实施例提供了一种拍摄控制装置,所述装置可应用于无人机侧1。
参见图11,所述拍摄控制装置可包括第一处理器11,其中,所述第一处理器11用于执行上述实施例一所述的拍摄控制方法的步骤。
本实施例中,所述第一处理器11用于与智能终端2通信连接,从而可通过第一处理器11接收来自智能终端2的开始指令并可将无人机所拍摄的画面和无人机的其
他数据信息等发送至智能终端2。
本实施例中,所述第一处理器11可选择为一个专用的控制设备中的控制器,也可选择为无人机的飞行控制器,也可选择为一个云台控制器。
其未展开的部分请参考以上实施例一中拍摄控制方法相同或类似的部分,此处不再赘述。
实施例四
对应于实施例二的拍摄控制方法,本发明实施例提供了一种拍摄控制装置,所述装置可应用于安装有APP的智能终端2。
参见图11,所述拍摄控制装置可包括第二处理器21,其中,所述第二处理器21用于执行上述实施例二所述的拍摄控制方法的步骤。
本实施例中,所述第二处理器21用于与无人机侧1的控制设备通信连接,其中,所述无人机侧1的控制设备可由一个专用的控制设备来实现,也可由无人机的飞行控制器来实现,也可由一个云台控制器来实现,从而可通过第二处理器21发送开始指令至无人机侧1以指示无人机的航拍,并可通过第二处理器21接收来自无人机所拍摄的画面或者无人机的其他数据信息等。
其未展开的部分请参考以上实施例二中拍摄控制方法相同或类似的部分,此处不再赘述。
实施例五
对应于实施例一的拍摄控制方法,本发明实施例提供了一种拍摄控制装置,所述装置可应用于无人机侧1。
参见图12,所述装置可包括:
第一接收模块101,用于接收开始指令,所述开始指令包含无人机的飞行模式;
第一控制模块102,用于控制所述无人机依据所述飞行模式自主飞行;
位置计算模块103,用于在所述飞行模式中,获取目标对象的位置信息,并根据所述摄像设备拍摄到的画面中所识别到的目标对象,获得所述目标对象相对所述无人机的方位信息;
第二控制模块104,用于根据所述位置信息和所述飞行模式,控制所述无人机
的飞行轨迹;
第三控制模块105,用于根据所述方位信息,控制所述云台的姿态,使得所述目标对象处于所述摄像设备所拍摄的画面中。
可选地,参见图13,所述装置还包括拍摄控制模块106,用于控制所述摄像设备在所述飞行模式中录制视频,并将视频数据发送至智能终端。
可选地,参见图13,所述装置还包括第一判断模块107,当所述第一判断模块107判断出所述位置计算模块103不能在所述画面中识别出所述目标对象时,所述第三控制模块105将所述根据所述方位信息来控制所述云台的姿态的步骤替换成根据所述位置信息来控制所述云台的姿态的步骤。
可选地,所述控制所述云台的姿态,包括:控制所述云台的俯仰角、偏航角和横滚角中的至少一个。
可选地,所述位置计算模块103还用于,确定云台的俯仰角和/或偏航角。
在某些实施例中,所云台的俯仰角和/或偏航角的确定过程包括:根据背景标识的指定位置待显示在所拍摄的画面中的位置,确定所述云台的俯仰角和偏航角中的至少一个。
可选地,所述根据背景标识的指定位置待显示在所拍摄的画面中的位置,确定所述云台的俯仰角和偏航角中的至少一个,包括:
获取所拍摄的画面在第一方向上的第一总像素距离以及所述背景标识的指定位置待显示在所拍摄的画面的位置在第一方向上至画面边缘的像素距离,其中所述第一方向与云台的俯仰方向或者偏航方向对应;
根据所述第一总像素距离、所述像素距离以及摄像设备的垂直视场角大小或水平视场角大小,确定所述云台的俯仰角和/或偏航角。
可选地,所述背景标识包括以下中的至少一种:地面、天空、海面和建筑物。
在某些实施例中,云台的俯仰角和/或偏航角的确定过程包括:
获取预设的拍摄位置的高度角和/或水平角;
确定所述目标对象相在所拍摄的画面第一方向的中心线的偏移角,其中所述第一方向与所述云台的俯仰方向或偏航方向对应;
根据所述偏移角和所述高度角和/或水平角,确定所述云台的俯仰角和/或偏航角。
可选地,确定所述目标对象相对于所拍摄的画面第一方向的中心线的偏移角,包括:
获取所拍摄的画面在第一方向上的第一总像素距离以及摄像设备的垂直视场角;
确定所述目标对象距离所拍摄的画面第一方向的中心线的第一偏移像素距离;
根据所述第一总像素距离、垂直视场角以及所述第一偏移像素距离,确定所述目标对象相对于所拍摄的画面第一方向的中心线的偏移角。
可选地,所述根据所述位置信息和所述飞行模式,控制所述无人机的飞行轨迹,包括:
确定所述目标对象与摄像设备之间的距离;
根据所述位置信息、所述飞行模式以及目标对象与摄像设备之间的距离,控制所述无人机的飞行轨迹。
可选地,所述确定所述目标对象与摄像设备之间的距离,包括:
获取目标对象的实际高度、所拍摄的画面在第一方向上的第一总像素距离;
获取目标对象的实际高度待显示在所拍摄的画面第一方向上对应的像素距离,其中所述第一方向与所述云台的俯仰方向对应;
根据所述目标对象的实际高度、第一总像素距离以及目标对象的实际高度在所拍摄的画面第一方向上对应的像素距离,确定所述目标对象与摄像设备之间的距离。
可选地,所述位置计算模块103还用于,在所述确定所述目标对象与摄像设备之间的距离之后,获取预设的拍摄位置的高度角、摄像设备的水平视场角、所拍摄的画面在第二方向上的第二总像素距离,其中所述第二方向与所述云台的偏航方向对应;并确定所述目标对象距离所拍摄的画面中第二方向的中心线的第二像素偏移距离;并根据所述第二像素偏移距离、所述高度角、所述水平视场角、所述第二总像素距离以及所述目标对象与摄像设备之间的距离,确定所述云台在俯仰方向上的移动距离;所述第三控制模块105根据所述云台在俯仰方向上的移动距离,控制所述云台的姿态。
可选地,所述位置计算模块103还用于,获取摄像设备的水平视场角、所拍摄
的画面的在第二方向上的第二总像素距离,其中所述第二方向与所述云台的偏航方向对应;并确定所述目标对象距离所拍摄的画面中第二方向的中心线的第二像素偏移距离;并根据所述第二总像素距离、水平视场角和所述第二像素偏移距离,确定所述云台的偏航角;所述第三控制模块105根据所述偏航角,控制所述云台的姿态。
可选地,所述云台和所述无人机在航向轴上相互固定;所述第三控制模块105还用于,控制云台的俯仰角和/或横滚角;控制所述无人机的航向角,以控制所述云台的偏航角。
可选地,所述飞行模式包括以下中的至少一种:斜线模式、环绕模式、螺旋模式、冲天模式和彗星环绕模式,每一种飞行模式包括对应的飞行策略,所述飞行策略用于指示所述无人机的飞行。
可选地,所述斜线模式对应的飞行策略包括:由所述第二控制模块104根据所述位置信息,控制所述无人机先沿着水平面飞行再沿着与水平面呈一定夹角的平面飞行。
可选地,所述第二控制模块104控制所述无人机先沿着水平面飞行再沿着与水平面呈一定夹角的直线飞行,包括:控制所述无人机沿着水平面飞行;当确定出所述目标对象的最低点与无人机中心的连线以及目标对象的最高点分别与无人机中心的连线之间的夹角小于摄像设备的视场角的预设倍数时,则根据所述位置信息,控制所述无人机沿着与水平面呈一定夹角的平面飞行,其中所述预设倍数<1。
可选地,所述第二控制模块104控制所述无人机沿着与水平面呈一定夹角的平面飞行,包括:控制所述无人机沿着目标对象与所述无人机的连线方向远离所述目标对象飞行。
可选地,所述斜线模式对应的飞行策略包括:由所述第二控制模块104根据所述位置信息,控制所述无人机远离所述目标对象以S形曲线飞行。
可选地,所述环绕模式对应的飞行策略包括:由所述第二控制模块104根据所述位置信息,控制所述无人机按照指定距离环绕目标对象飞行。
其中,所述指定距离为默认距离,或者所述指定距离为用户输入的距离信息,或者所述指定距离为当前时刻所述无人机与目标对象之间的距离。
可选地,所述螺旋模式对应的飞行策略包括:由所述第二控制模块104根据所述位置信息,控制所述无人机以裴波那契螺旋线、等比螺旋线、等角螺旋线或者阿基
米德螺旋线为轨迹环绕目标对象飞行。
可选地,所述螺旋模式对应的飞行策略还包括:所述第二控制模块104在根据所述位置信息,控制所述无人机以裴波那契螺旋线、等比螺旋线、等角螺旋线或者阿基米德螺旋线为轨迹环绕目标对象飞行的同时,还控制无人机按照预设速率垂直地面上升或下降。
可选地,所述冲天模式对应的飞行策略包括:由所述第二控制模块104根据所述位置信息,控制所述无人机按照预设角度倾斜飞行至相对所述目标对象的第一指定位置后,控制所述无人机垂直地面上升。
可选地,所述彗星环绕模式对应的飞行策略包括:由所述第二控制模块104根据所述位置信息,控制所述无人机靠近目标对象飞行至第二指定位置,从所述第二指定位置围绕目标对象飞行之后,远离目标对象飞行。
可选地,每一种飞行模式还包括对应的飞行路程和飞行速度中的至少一种。
可选地,所述位置计算模块103获取目标对象的位置信息,包括:
获取包括至少两组拍摄信息的信息集合,所述拍摄信息包括:拍摄到目标对象时的拍摄位置信息和拍摄角度信息;
基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置信息,其中,选取的各组拍摄信息中的拍摄位置信息所对应位置不相同。
可选地,所述基于从所述信息集合选取的至少两组拍摄信息,确定所述目标对象的位置信息,包括:基于至少三组拍摄信息,确定出至少两个所述目标对象的位置初始估计信息;根据各个位置初始估计信息确定出所述目标对象的位置信息。
可选地,所述拍摄位置信息为所述无人机的定位信息。
可选地,所述位置计算模块103获取目标对象的位置信息,包括:获取智能终端2的定位信息,所述智能终端为与所述无人机进行通信的终端,所述位置信息为所述定位信息。
可选地,所述位置计算模块103根据所述摄像设备拍摄到的画面,获得所述目标对象相对所述无人机的方位信息,包括:获取待跟踪的目标对象的特征信息;根据所述特征信息,基于图像识别技术在拍摄到的画面中识别目标对象,获得所述目标对象的相对所述无人机的方位信息。
可选地,参见图13,所述装置还包括复位模块108,在所述第二控制模块104根据所述位置信息和所述飞行模式,控制所述无人机的飞行轨迹之后,用于控制所述无人机运动至复位位置。
可选地,参见图13,所述装置还包括第四控制模块109,在所述第一判断模块107判断出所述第一接收模块101接收到外部设备发送的打杆操作信号,用于根据所述打杆操作信号来控制无人机的飞行和所述云台的姿态中的至少一种。
可选地,所述打杆操作信号包括以下至少一种:控制无人机垂直地面上升或下降的信号、控制无人机远离或靠近目标对象的信号、控制无人机的飞行速度、控制云台偏航角的信号、控制无人机机身旋转的信号。
其未展开的部分请参考以上实施例一中拍摄控制方法相同或类似的部分,此处不再赘述。
实施例六
对应于实施例二的拍摄控制方法,本发明实施例提供了一种拍摄控制装置,所述装置可应用于安装有APP的智能终端2。
参见图14,所述装置可包括:
第二接收模块201,用于接收用户指令;
指令生成模块202,根据所述用户指令生成开始指令,所述开始指令包含无人机的飞行模式,所述开始指令用于触发所述无人机依据所述飞行模式自主飞行;
发送模块203,发送所述开始指令至无人机;其中,所述开始指令用于触发所述无人机依据所述飞行模式自主飞行。
所述发送模块203发送所述开始指令至无人机之后,所述第二接收模块201接收并存储所述无人机在所述飞行模式下回传的回传视频流。
可选地,所述用户指令包括:确定待跟踪的目标对象。
可选地,所述方法还包括:识别当前显示画面中所述待跟踪的目标对象的特征信息,所述特征信息为所述目标对象待显示在所拍摄的画面中的预设位置或者预设尺寸。
可选地,所述用户指令还包括:背景标识的指定位置待显示值所拍摄画面中的
位置。其中,所述背景标识包括以下中的至少一种:地面、天空、海面和建筑物。
可选地,所述用户指令还包括:拍摄位置的高度角或水平角。
可选地,所述用户指令还包括:无人机的飞行路程和飞行速度中的至少一种。
可选地,所述飞行模式为默认飞行模式;或者,所述用户指令包括模式选择指令,所述模式选择指令包含用于指示无人机飞行的飞行模式。
可选地,所述飞行模式包括以下中的至少一种:斜线模式、环绕模式、螺旋模式、冲天模式和彗星环绕模式,每一种飞行模式包括对应的飞行策略,所述飞行策略用于指示所述无人机的飞行。
可选地,参见图15,所述装置还包括处理模块204,用于对所述回传视频流进行处理,生成第一指定时长的视频画面,其中所述第一指定时长小于所述回传视频流的时长。
可选地,参见图15,所述装置还包括第二判断模块205,所述处理模块204对所述回传视频流进行处理,生成视频画面的步骤是在第二判断模块205判断出所述无人机满足指定条件后执行的。
可选地,所述指定条件包括:第二判断模块205判断出所述无人机完成所述飞行模式的飞行。
可选地,所述处理模块204对所述回传视频流进行处理,生成第一预设时长视频画面,包括:对所述回传视频流进行抽帧处理,生成第一预设时长的视频画面。
可选地,所述处理模块204对所述回传视频流进行抽帧处理,生成第一预设时长的视频画面,包括:根据无人机的飞行模式、飞行速度和飞行方向中的至少一种对所述视频流进行抽帧处理,生成第一预设时长的视频画面。
可选地,所述处理模块204对所述回传视频流进行抽帧处理,生成第一预设时长的视频画面,包括:根据所述回传视频流的时长以及帧数,对所述回传视频流进行抽帧处理,生成第一预设时长的视频画面。
可选地,所述处理模块204根据所述回传视频流的总时长以及帧数,对所述回传视频流进行抽帧处理,生成第一预设时长的视频画面,包括:将所述回传视频流拆分成多段,获得多段回传视频流;对所述多段回传视频流中的部分回传视频流进行抽帧处理,获得相应段回传视频流的抽帧图像;根据所述多段回传视频流中的另一部分
回传视频流以及所获得的相应段回传视频流的抽帧图像,生成第一预设时长的视频画面。
可选地,所述处理模块204将所述回传视频流拆分成多段,包括:按照拍摄时间的先后顺序将所述回传视频流拆分成至少三段;所述处理模块204对所述多段回传视频流中的部分回传视频流进行抽帧处理,获得相应段回传视频流的抽帧图像,包括:对所述至少三段回传视频流中拍摄时间位于中间时间段的回传视频流进行抽帧处理,获得该段回传视频流对应的抽帧图像。
可选地,所述处理模块204对多段回传视频流中的部分回传视频流进行抽帧处理,获得相应段回传视频流的抽帧图像,包括:按照预设的抽帧速率对相应段回传视频流进行抽帧处理,获得所述相应段回传视频流对应的抽帧图像。
可选地,参见图15,所述装置还包括读取模块206和确定模块207,所述读取模块206用于获取所述无人机拍摄的原始数据流;所述确定模块207用于根据所述原始数据流,确定所述无人机在所述飞行模式下所拍摄的原始视频流;所述处理模块204对所述原始视频流进行处理,生成第二预设时长的新视频画面,其中所述第二预设时长小于所述回传视频流的时长。
可选地,所述确定模块207根据所述原始数据流,确定所述无人机在所述飞行模式下所拍摄的原始视频流,包括:根据所述飞行模式对应的视频流标签,确定所述原始数据流中所述无人机在所述飞行模式下所拍摄的原始视频流。
可选地,所述读取模块206获取所述无人机拍摄的原始数据流的步骤是在所述第二判断模块205判断出根据回传视频流获得的视频画面的分辨率小于预设分辨率后执行的。
可选地,又参见图15,所述装置还包括分享模块208,用于发送所述视频画面和所述新视频画面中的至少一个至远程终端服务器。
可选地,所述分享模块208发送所述视频画面和所述新视频画面中的至少一个至远程终端服务器之前,所述第二接收模块201接收用户输入的分享指令,其中,所述分享指令包括对应的远程终端服务器和待分享的视频标识,所述待分享的视频标识为所述视频画面和所述新视频画面中的至少一个所对应的标识;所述分享模块208根据所述分享指令,发送所述视频画面和所述新视频画面中的至少一个至远程终端服务器。
其未展开的部分请参考以上实施例二中拍摄控制方法相同或类似的部分,此处不再赘述。
实施例七
本发明的实施例提供了一种计算机存储介质,该计算机存储介质中存储有程序指令,该计算机存储介质中存储有程序指令,所述程序执行上述实施例一或实施例二的拍摄控制方法。
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative embodiment", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, illustrative references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example, may be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wirings, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the present invention may be implemented by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following technologies known in the art, or a combination thereof: a discrete logic circuit having logic gate circuits for implementing logical functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
A person of ordinary skill in the art can understand that all or some of the steps carried by the above method embodiments may be completed by instructing relevant hardware through a program; the program may be stored in a computer-readable storage medium, and when executed, the program includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; a person of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.
Claims (112)
- An imaging control method, applied to an unmanned aerial vehicle (UAV), the UAV carrying a gimbal and the gimbal carrying an imaging device, wherein the method comprises: receiving a start instruction, the start instruction containing a flight mode of the UAV; controlling the UAV to fly autonomously according to the flight mode; in the flight mode, obtaining position information of a target object, and obtaining orientation information of the target object relative to the UAV according to the target object identified in the image captured by the imaging device; controlling the flight trajectory of the UAV according to the position information and the flight mode; and controlling the attitude of the gimbal according to the orientation information so that the target object remains within the image captured by the imaging device.
- The method according to claim 1, wherein the method further comprises: controlling the imaging device to record video in the flight mode, and sending the video data to a terminal.
- The method according to claim 1, wherein the method further comprises: when it is determined that the target object cannot be identified in the image, replacing the step of controlling the attitude of the gimbal according to the orientation information with a step of controlling the attitude of the gimbal according to the position information.
- The method according to claim 1, wherein controlling the attitude of the gimbal comprises: controlling at least one of the pitch angle, yaw angle, and roll angle of the gimbal.
- The method according to claim 4, wherein the process of determining the pitch angle and/or yaw angle of the gimbal comprises: determining at least one of the pitch angle and the yaw angle of the gimbal according to the position at which a specified location of a background identifier is to be displayed in the captured image.
- The method according to claim 4, wherein determining at least one of the pitch angle and the yaw angle of the gimbal according to the position at which the specified location of the background identifier is to be displayed in the captured image comprises: obtaining a first total pixel distance of the captured image in a first direction, and a pixel distance from the position at which the specified location of the background identifier is to be displayed in the captured image to the image edge in the first direction, where the first direction corresponds to the pitch direction or the yaw direction of the gimbal; and determining the pitch angle and/or yaw angle of the gimbal according to the first total pixel distance, the pixel distance, and the vertical field-of-view angle or horizontal field-of-view angle of the imaging device.
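To make the pixel-to-angle relationship in claim 6 above concrete, here is a minimal Python sketch under a simple pinhole-camera assumption; the function name and the atan-based mapping are illustrative, not the claimed method itself:

```python
import math

def gimbal_angle_for_background(total_px: float, px_to_edge: float,
                                fov_deg: float) -> float:
    """Angle (deg) to rotate the gimbal so that a background identifier
    (e.g. the horizon) lands at the requested pixel position.

    total_px   -- total pixel distance of the frame in the first direction
    px_to_edge -- desired pixel distance from the identifier to the frame edge
    fov_deg    -- vertical or horizontal field of view of the imaging device
    """
    # Fraction of the frame the target position sits away from the center.
    offset = (px_to_edge / total_px) - 0.5
    # Pinhole model: pixel offsets map to angles through tan(fov / 2).
    half_fov = math.radians(fov_deg) / 2.0
    return math.degrees(math.atan(2.0 * offset * math.tan(half_fov)))
```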
- The method according to claim 4, wherein the process of determining the pitch angle and/or yaw angle of the gimbal comprises: obtaining a preset elevation angle and/or horizontal angle of the shooting position; determining an offset angle of the target object relative to the centerline of the captured image in a first direction, where the first direction corresponds to the pitch direction or the yaw direction of the gimbal; and determining the pitch angle and/or yaw angle of the gimbal according to the offset angle and the elevation angle and/or horizontal angle.
- The method according to claim 7, wherein determining the offset angle of the target object relative to the centerline of the captured image in the first direction comprises: obtaining a first total pixel distance of the captured image in the first direction and the vertical field-of-view angle of the imaging device; determining a first offset pixel distance of the target object from the centerline of the captured image in the first direction; and determining the offset angle of the target object relative to the centerline of the captured image in the first direction according to the first total pixel distance, the vertical field-of-view angle, and the first offset pixel distance.
- The method according to claim 1, wherein controlling the flight trajectory of the UAV according to the position information and the flight mode comprises: determining the distance between the target object and the imaging device; and controlling the flight trajectory of the UAV according to the position information, the flight mode, and the distance between the target object and the imaging device.
- The method according to claim 9, wherein determining the distance between the target object and the imaging device comprises: obtaining the actual height of the target object and a first total pixel distance of the captured image in a first direction; obtaining the pixel distance that the actual height of the target object is to occupy in the first direction of the captured image, where the first direction corresponds to the pitch direction of the gimbal; and determining the distance between the target object and the imaging device according to the actual height of the target object, the first total pixel distance, and the pixel distance that the actual height of the target object occupies in the first direction of the captured image.
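A minimal sketch of the similar-triangles computation recited in claim 10 above, assuming a pinhole camera model; the names and derivation are illustrative:

```python
import math

def target_distance(actual_height_m: float, total_px: float,
                    target_px: float, vfov_deg: float) -> float:
    """Estimate camera-to-target distance from the target's known height.

    actual_height_m -- real-world height of the target object
    total_px        -- total pixel distance of the frame in the pitch direction
    target_px       -- pixels the target's height occupies in that direction
    vfov_deg        -- vertical field of view of the imaging device
    """
    # Focal length in pixels, derived from the vertical field of view.
    focal_px = (total_px / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    # Similar triangles: target_px / focal_px = actual height / distance.
    return actual_height_m * focal_px / target_px
```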
- The method according to claim 9, wherein after determining the distance between the target object and the imaging device, the method further comprises: obtaining a preset elevation angle of the shooting position, the horizontal field-of-view angle of the imaging device, and a second total pixel distance of the captured image in a second direction, where the second direction corresponds to the yaw direction of the gimbal; determining a second pixel offset distance of the target object from the centerline of the captured image in the second direction; determining the movement distance of the gimbal in the pitch direction according to the second pixel offset distance, the elevation angle, the horizontal field-of-view angle, the second total pixel distance, and the distance between the target object and the imaging device; and controlling the attitude of the gimbal according to the movement distance of the gimbal in the pitch direction.
- The method according to claim 1, wherein controlling the attitude of the gimbal comprises: obtaining the horizontal field-of-view angle of the imaging device and a second total pixel distance of the captured image in a second direction, where the second direction corresponds to the yaw direction of the gimbal; determining a second pixel offset distance of the target object from the centerline of the captured image in the second direction; determining the yaw angle of the gimbal according to the second total pixel distance, the horizontal field-of-view angle, and the second pixel offset distance; and controlling the attitude of the gimbal according to the yaw angle.
- The method according to claim 1, wherein the gimbal and the UAV are fixed to each other on the heading (yaw) axis, and controlling the attitude of the gimbal comprises: controlling the pitch angle and/or roll angle of the gimbal; and controlling the heading angle of the UAV so as to control the yaw angle of the gimbal.
- The method according to claim 1, wherein the flight mode includes at least one of the following: a diagonal mode, an orbit mode, a spiral mode, a skyward mode, and a comet orbit mode, each flight mode including a corresponding flight strategy used to direct the flight of the UAV.
- The method according to claim 14, wherein the flight strategy corresponding to the diagonal mode comprises: controlling, according to the position information, the UAV to fly first along a horizontal plane and then along a plane at an angle to the horizontal plane.
- The method according to claim 15, wherein controlling the UAV to fly first along the horizontal plane and then along the plane at an angle to the horizontal plane comprises: controlling the UAV to fly along the horizontal plane; and when it is determined that the angle between the line connecting the lowest point of the target object to the center of the UAV and the line connecting the highest point of the target object to the center of the UAV is smaller than a preset multiple of the field-of-view angle of the imaging device, controlling the UAV, according to the position information, to fly along the plane at an angle to the horizontal plane, where the preset multiple is smaller than 1.
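An illustrative check of the trigger condition in claim 16 above, computing the angle the target subtends at the UAV; the multiple of 0.8 is an assumed value (the claim only requires that it be smaller than 1):

```python
import math

def should_start_climb(drone_xyz, target_top_xyz, target_bottom_xyz,
                       fov_deg: float, multiple: float = 0.8) -> bool:
    """Return True once the target subtends less than multiple x FOV.

    All positions are (x, y, z) tuples in a common frame.
    """
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    to_top = unit(tuple(t - d for t, d in zip(target_top_xyz, drone_xyz)))
    to_bot = unit(tuple(b - d for b, d in zip(target_bottom_xyz, drone_xyz)))
    # Angle between the two lines of sight, i.e. the angle the target subtends.
    cos_angle = max(-1.0, min(1.0, sum(a * b for a, b in zip(to_top, to_bot))))
    subtended = math.degrees(math.acos(cos_angle))
    return subtended < multiple * fov_deg
```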
- The method according to claim 16, wherein controlling the UAV to fly along the plane at an angle to the horizontal plane comprises: controlling the UAV to fly away from the target object along the direction of the line connecting the target object and the UAV.
- The method according to claim 14, wherein the flight strategy corresponding to the diagonal mode comprises: controlling, according to the position information, the UAV to fly away from the target object along an S-shaped curve.
- The method according to claim 14, wherein the flight strategy corresponding to the orbit mode comprises: controlling, according to the position information, the UAV to fly around the target object at a specified distance.
- The method according to claim 19, wherein the specified distance is a default distance, or distance information input by the user, or the distance between the UAV and the target object at the current moment.
- The method according to claim 14, wherein the flight strategy corresponding to the spiral mode comprises: controlling, according to the position information, the UAV to fly around the target object along a Fibonacci spiral, a geometric spiral, an equiangular spiral, or an Archimedean spiral as its trajectory.
- The method according to claim 21, wherein the flight strategy corresponding to the spiral mode further comprises: while controlling, according to the position information, the UAV to fly around the target object along the Fibonacci spiral, geometric spiral, equiangular spiral, or Archimedean spiral, also controlling the UAV to ascend or descend vertically with respect to the ground at a preset rate.
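For claims 21 and 22 above, a sketch that generates waypoints on an Archimedean spiral with an optional vertical climb; the parameterization is an assumption, since the claims equally allow Fibonacci, geometric, and equiangular spirals:

```python
import math

def archimedean_waypoints(center_xy, r0: float, growth: float,
                          climb_rate: float = 0.0, z0: float = 0.0,
                          turns: float = 2.0, step_deg: float = 10.0) -> list:
    """Generate (x, y, z) waypoints on r = r0 + growth * theta around a target.

    climb_rate (meters per radian) > 0 adds the vertical ascent described
    for the spiral mode; all parameter names are illustrative.
    """
    cx, cy = center_xy
    waypoints = []
    n_steps = int(turns * 360.0 / step_deg)
    for i in range(n_steps + 1):
        theta = math.radians(i * step_deg)
        r = r0 + growth * theta
        waypoints.append((cx + r * math.cos(theta),
                          cy + r * math.sin(theta),
                          z0 + climb_rate * theta))
    return waypoints
```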
- The method according to claim 14, wherein the flight strategy corresponding to the skyward mode comprises: controlling, according to the position information, the UAV to fly at a preset tilt angle to a first specified position relative to the target object, and then controlling the UAV to ascend vertically with respect to the ground.
- The method according to claim 14, wherein the flight strategy corresponding to the comet orbit mode comprises: controlling, according to the position information, the UAV to fly toward the target object to a second specified position, fly around the target object from the second specified position, and then fly away from the target object.
- The method according to claim 14, wherein each flight mode further includes at least one of a corresponding flight distance and flight speed.
- The method according to claim 1, wherein obtaining the position information of the target object comprises: obtaining an information set including at least two groups of shooting information, the shooting information including shooting position information and shooting angle information at the time the target object was captured; and determining the position information of the target object based on at least two groups of shooting information selected from the information set, where the positions corresponding to the shooting position information in the selected groups differ from one another.
- The method according to claim 26, wherein determining the position information of the target object based on the at least two groups of shooting information selected from the information set comprises: determining at least two pieces of initial position estimate information of the target object based on at least three groups of shooting information; and determining the position information of the target object according to the pieces of initial position estimate information.
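Claims 26 and 27 above amount to triangulating the target from distinct shooting positions and angles. The following 2D sketch intersects two bearing rays; a fuller implementation would fuse more than two such estimates, as claim 27 requires, and the names are illustrative:

```python
import math

def triangulate_target(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two bearing rays to estimate a target position (2D sketch).

    p1, p2       -- (x, y) shooting positions (must differ, per the claim)
    bearing*_deg -- shooting angles toward the target at each position
    Returns the (x, y) intersection, or None if the rays are parallel.
    """
    (x1, y1), (x2, y2) = p1, p2
    d1 = (math.cos(math.radians(bearing1_deg)), math.sin(math.radians(bearing1_deg)))
    d2 = (math.cos(math.radians(bearing2_deg)), math.sin(math.radians(bearing2_deg)))
    # Solve p1 + t * d1 = p2 + s * d2 for t by Cramer's rule.
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        return None  # lines of sight are parallel; pick other shots instead
    t = ((x2 - x1) * (-d2[1]) - (y2 - y1) * (-d2[0])) / det
    return (x1 + t * d1[0], y1 + t * d1[1])
```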
- The method according to claim 26, wherein the shooting position information is the positioning information of the UAV.
- The method according to claim 1, wherein obtaining the position information of the target object comprises: obtaining positioning information of a smart terminal, the smart terminal being a terminal in communication with the UAV, and the position information being the positioning information.
- The method according to claim 1, wherein obtaining the orientation information of the target object relative to the UAV according to the image captured by the imaging device comprises: obtaining feature information of the target object to be tracked; and identifying the target object in the captured image based on image recognition technology according to the feature information, to obtain the orientation information of the target object relative to the UAV.
- The method according to claim 1, wherein after controlling the flight trajectory of the UAV according to the position information and the flight mode, the method further comprises: controlling the UAV to move to a reset position.
- The method according to claim 1, wherein the method further comprises: upon receiving a stick operation signal sent by an external device, controlling at least one of the flight of the UAV and the attitude of the gimbal according to the stick operation signal.
- The method according to claim 32, wherein the stick operation signal includes at least one of the following: a signal for controlling the UAV to ascend or descend vertically with respect to the ground, a signal for controlling the UAV to move away from or toward the target object, a signal for controlling the flight speed of the UAV, a signal for controlling the yaw angle of the gimbal, and a signal for controlling the rotation of the UAV body.
- An imaging control device, applied to a UAV, the UAV carrying a gimbal and the gimbal carrying an imaging device, wherein the device comprises a first processor configured to: receive a start instruction, the start instruction containing a flight mode of the UAV; control the UAV to fly autonomously according to the flight mode; in the flight mode, obtain position information of a target object, and obtain orientation information of the target object relative to the UAV according to the target object identified in the image captured by the imaging device; control the flight trajectory of the UAV according to the position information and the flight mode; and control the attitude of the gimbal according to the orientation information so that the target object remains within the image captured by the imaging device.
- The device according to claim 34, wherein the first processor is further configured to: control the imaging device to record video in the flight mode, and send the video data to a terminal.
- The device according to claim 34, wherein the first processor is further configured to: when it is determined that the target object cannot be identified in the image, replace the step of controlling the attitude of the gimbal according to the orientation information with a step of controlling the attitude of the gimbal according to the position information.
- The device according to claim 34, wherein controlling the attitude of the gimbal comprises: controlling at least one of the pitch angle, yaw angle, and roll angle of the gimbal.
- The device according to claim 37, wherein the process of determining the pitch angle and/or yaw angle of the gimbal comprises: determining at least one of the pitch angle and the yaw angle of the gimbal according to the position at which a specified location of a background identifier is to be displayed in the captured image.
- The device according to claim 37, wherein determining at least one of the pitch angle and the yaw angle of the gimbal according to the position at which the specified location of the background identifier is to be displayed in the captured image comprises: obtaining a first total pixel distance of the captured image in a first direction, and a pixel distance from the position at which the specified location of the background identifier is to be displayed in the captured image to the image edge in the first direction, where the first direction corresponds to the pitch direction or the yaw direction of the gimbal; and determining the pitch angle and/or yaw angle of the gimbal according to the first total pixel distance, the pixel distance, and the vertical field-of-view angle or horizontal field-of-view angle of the imaging device.
- The device according to claim 37, wherein the process of determining the pitch angle and/or yaw angle of the gimbal comprises: obtaining a preset elevation angle and/or horizontal angle of the shooting position; determining an offset angle of the target object relative to the centerline of the captured image in a first direction, where the first direction corresponds to the pitch direction or the yaw direction of the gimbal; and determining the pitch angle and/or yaw angle of the gimbal according to the offset angle and the elevation angle and/or horizontal angle.
- The device according to claim 40, wherein determining the offset angle of the target object relative to the centerline of the captured image in the first direction comprises: obtaining a first total pixel distance of the captured image in the first direction and the vertical field-of-view angle of the imaging device; determining a first offset pixel distance of the target object from the centerline of the captured image in the first direction; and determining the offset angle of the target object relative to the centerline of the captured image in the first direction according to the first total pixel distance, the vertical field-of-view angle, and the first offset pixel distance.
- The device according to claim 34, wherein controlling the flight trajectory of the UAV according to the position information and the flight mode comprises: determining the distance between the target object and the imaging device; and controlling the flight trajectory of the UAV according to the position information, the flight mode, and the distance between the target object and the imaging device.
- The device according to claim 42, wherein determining the distance between the target object and the imaging device comprises: obtaining the actual height of the target object and a first total pixel distance of the captured image in a first direction; obtaining the pixel distance that the actual height of the target object is to occupy in the first direction of the captured image, where the first direction corresponds to the pitch direction of the gimbal; and determining the distance between the target object and the imaging device according to the actual height of the target object, the first total pixel distance, and the pixel distance that the actual height of the target object occupies in the first direction of the captured image.
- The device according to claim 42, wherein after determining the distance between the target object and the imaging device, the following is further performed: obtaining a preset elevation angle of the shooting position, the horizontal field-of-view angle of the imaging device, and a second total pixel distance of the captured image in a second direction, where the second direction corresponds to the yaw direction of the gimbal; determining a second pixel offset distance of the target object from the centerline of the captured image in the second direction; determining the movement distance of the gimbal in the pitch direction according to the second pixel offset distance, the elevation angle, the horizontal field-of-view angle, the second total pixel distance, and the distance between the target object and the imaging device; and controlling the attitude of the gimbal according to the movement distance of the gimbal in the pitch direction.
- The device according to claim 34, wherein controlling the attitude of the gimbal comprises: obtaining the horizontal field-of-view angle of the imaging device and a second total pixel distance of the captured image in a second direction, where the second direction corresponds to the yaw direction of the gimbal; determining a second pixel offset distance of the target object from the centerline of the captured image in the second direction; determining the yaw angle of the gimbal according to the second total pixel distance, the horizontal field-of-view angle, and the second pixel offset distance; and controlling the attitude of the gimbal according to the yaw angle.
- The device according to claim 34, wherein the gimbal and the UAV are fixed to each other on the heading (yaw) axis, and controlling the attitude of the gimbal comprises: controlling the pitch angle and/or roll angle of the gimbal; and controlling the heading angle of the UAV so as to control the yaw angle of the gimbal.
- The device according to claim 34, wherein the flight mode includes at least one of the following: a diagonal mode, an orbit mode, a spiral mode, a skyward mode, and a comet orbit mode, each flight mode including a corresponding flight strategy used to direct the flight of the UAV.
- The device according to claim 47, wherein the flight strategy corresponding to the diagonal mode comprises: controlling, according to the position information, the UAV to fly first along a horizontal plane and then along a plane at an angle to the horizontal plane.
- The device according to claim 48, wherein controlling the UAV to fly first along the horizontal plane and then along the plane at an angle to the horizontal plane comprises: controlling the UAV to fly along the horizontal plane; and when it is determined that the angle between the line connecting the lowest point of the target object to the center of the UAV and the line connecting the highest point of the target object to the center of the UAV is smaller than a preset multiple of the field-of-view angle of the imaging device, controlling the UAV, according to the position information, to fly along the plane at an angle to the horizontal plane, where the preset multiple is smaller than 1.
- The device according to claim 49, wherein controlling the UAV to fly along the plane at an angle to the horizontal plane comprises: controlling the UAV to fly away from the target object along the direction of the line connecting the target object and the UAV.
- The device according to claim 47, wherein the flight strategy corresponding to the diagonal mode comprises: controlling, according to the position information, the UAV to fly away from the target object along an S-shaped curve.
- The device according to claim 47, wherein the flight strategy corresponding to the orbit mode comprises: controlling, according to the position information, the UAV to fly around the target object at a specified distance.
- The device according to claim 52, wherein the specified distance is a default distance, or distance information input by the user, or the distance between the UAV and the target object at the current moment.
- The device according to claim 47, wherein the flight strategy corresponding to the spiral mode comprises: controlling, according to the position information, the UAV to fly around the target object along a Fibonacci spiral, a geometric spiral, an equiangular spiral, or an Archimedean spiral as its trajectory.
- The device according to claim 54, wherein the flight strategy corresponding to the spiral mode further comprises: while controlling, according to the position information, the UAV to fly around the target object along the Fibonacci spiral, geometric spiral, equiangular spiral, or Archimedean spiral, also controlling the UAV to ascend or descend vertically with respect to the ground at a preset rate.
- The device according to claim 47, wherein the flight strategy corresponding to the skyward mode comprises: controlling, according to the position information, the UAV to fly at a preset tilt angle to a first specified position relative to the target object, and then controlling the UAV to ascend vertically with respect to the ground.
- The device according to claim 47, wherein the flight strategy corresponding to the comet orbit mode comprises: controlling, according to the position information, the UAV to fly toward the target object to a second specified position, fly around the target object from the second specified position, and then fly away from the target object.
- The device according to claim 47, wherein each flight mode further includes at least one of a corresponding flight distance and flight speed.
- The device according to claim 34, wherein obtaining the position information of the target object comprises: obtaining an information set including at least two groups of shooting information, the shooting information including shooting position information and shooting angle information at the time the target object was captured; and determining the position information of the target object based on at least two groups of shooting information selected from the information set, where the positions corresponding to the shooting position information in the selected groups differ from one another.
- The device according to claim 59, wherein determining the position information of the target object based on the at least two groups of shooting information selected from the information set comprises: determining at least two pieces of initial position estimate information of the target object based on at least three groups of shooting information; and determining the position information of the target object according to the pieces of initial position estimate information.
- The device according to claim 59, wherein the shooting position information is the positioning information of the UAV.
- The device according to claim 34, wherein obtaining the position information of the target object comprises: obtaining positioning information of a smart terminal, the smart terminal being a terminal in communication with the UAV, and the position information being the positioning information.
- The device according to claim 34, wherein obtaining the orientation information of the target object relative to the UAV according to the image captured by the imaging device comprises: obtaining feature information of the target object to be tracked; and identifying the target object in the captured image based on image recognition technology according to the feature information, to obtain the orientation information of the target object relative to the UAV.
- The device according to claim 34, wherein after controlling the flight trajectory of the UAV according to the position information and the flight mode, the following is further performed: controlling the UAV to move to a reset position.
- The device according to claim 34, wherein the first processor is further configured to: upon receiving a stick operation signal sent by an external device, control at least one of the flight of the UAV and the attitude of the gimbal according to the stick operation signal.
- The device according to claim 65, wherein the stick operation signal includes at least one of the following: a signal for controlling the UAV to ascend or descend vertically with respect to the ground, a signal for controlling the UAV to move away from or toward the target object, a signal for controlling the flight speed of the UAV, a signal for controlling the yaw angle of the gimbal, and a signal for controlling the rotation of the UAV body.
- An imaging control method, wherein the method comprises: receiving a user instruction; generating a start instruction according to the user instruction, the start instruction containing a flight mode of a UAV and being used to trigger the UAV to fly autonomously according to the flight mode; sending the start instruction to the UAV; and receiving and storing the returned video stream transmitted back by the UAV in the flight mode.
- The method according to claim 67, wherein the user instruction includes: determining a target object to be tracked.
- The method according to claim 68, wherein the method further comprises: identifying feature information of the target object to be tracked in the currently displayed image, the feature information being a preset position or a preset size at which the target object is to be displayed in the captured image.
- The method according to claim 67, wherein the user instruction further includes: the position at which a specified location of a background identifier is to be displayed in the captured image.
- The method according to claim 70, wherein the background identifier includes at least one of the following: the ground, the sky, the sea surface, and a building.
- The method according to claim 67, wherein the user instruction further includes: an elevation angle or a horizontal angle of the shooting position.
- The method according to claim 67, wherein the user instruction further includes: at least one of the flight distance and the flight speed of the UAV.
- The method according to claim 67, wherein the flight mode is a default flight mode; or the user instruction includes a mode selection instruction containing the flight mode used to direct the flight of the UAV.
- The method according to claim 74, wherein the flight mode includes at least one of the following: a diagonal mode, an orbit mode, a spiral mode, a skyward mode, and a comet orbit mode, each flight mode including a corresponding flight strategy used to direct the flight of the UAV.
- The method according to claim 67, wherein after receiving and storing the returned video stream transmitted back by the UAV in the flight mode, the method further comprises: processing the returned video stream to generate a video clip of a first specified duration, where the first specified duration is shorter than the duration of the returned video stream.
- The method according to claim 76, wherein the step of processing the returned video stream to generate the video clip of the first specified duration is executed after it is determined that the UAV satisfies a specified condition.
- The method according to claim 77, wherein the specified condition includes: the UAV having completed the flight of the flight mode.
- The method according to claim 76, wherein processing the returned video stream to generate a video clip of a first preset duration comprises: performing frame extraction on the returned video stream to generate the video clip of the first preset duration.
- The method according to claim 79, wherein performing frame extraction on the returned video stream to generate the video clip of the first preset duration comprises: performing frame extraction on the video stream according to at least one of the flight mode, flight speed, and flight direction of the UAV, to generate the video clip of the first preset duration.
- The method according to claim 79, wherein performing frame extraction on the returned video stream to generate the video clip of the first preset duration comprises: performing frame extraction on the returned video stream according to the duration and the number of frames of the returned video stream, to generate the video clip of the first preset duration.
- The method according to claim 81, wherein performing frame extraction on the returned video stream according to the total duration and the number of frames of the returned video stream to generate the video clip of the first preset duration comprises: splitting the returned video stream into multiple segments to obtain multiple returned video stream segments; performing frame extraction on some of the multiple segments to obtain extracted frames of the corresponding segments; and generating the video clip of the first preset duration according to the remaining segments of the multiple segments and the extracted frames obtained from the corresponding segments.
- The method according to claim 82, wherein splitting the returned video stream into multiple segments comprises: splitting the returned video stream into at least three segments in chronological order of shooting time; and performing frame extraction on some of the multiple segments to obtain extracted frames of the corresponding segments comprises: performing frame extraction on the segment whose shooting time lies in the middle period among the at least three segments, to obtain the extracted frames corresponding to that segment.
- The method according to claim 82, wherein performing frame extraction on some of the multiple segments to obtain extracted frames of the corresponding segments comprises: performing frame extraction on the corresponding segments at a preset frame extraction rate, to obtain the extracted frames corresponding to those segments.
- The method according to any one of claims 67 to 84, wherein the method further comprises: obtaining the raw data stream captured by the UAV; determining, according to the raw data stream, the raw video stream captured by the UAV in the flight mode; and processing the raw video stream to generate a new video clip of a second preset duration, where the second preset duration is shorter than the duration of the returned video stream.
- The method according to claim 85, wherein determining, according to the raw data stream, the raw video stream captured by the UAV in the flight mode comprises: determining, according to a video stream tag corresponding to the flight mode, the raw video stream captured by the UAV in the flight mode within the raw data stream.
- The method according to claim 86, wherein the step of obtaining the raw data stream captured by the UAV is executed after it is determined that the resolution of the video clip obtained from the returned video stream is lower than a preset resolution.
- The method according to claim 86, wherein the method further comprises: sending at least one of the video clip and the new video clip to a remote terminal server.
- The method according to claim 88, wherein before sending at least one of the video clip and the new video clip to the remote terminal server, the method further comprises: receiving a sharing instruction input by the user, where the sharing instruction includes the corresponding remote terminal server and an identifier of the video to be shared, the identifier being that of at least one of the video clip and the new video clip; and sending, according to the sharing instruction, at least one of the video clip and the new video clip to the remote terminal server.
- An imaging control device, wherein the device comprises a second processor configured to: receive a user instruction; generate a start instruction according to the user instruction, where the start instruction contains a flight mode of a UAV and is used to trigger the UAV to fly autonomously according to the flight mode; send the start instruction to the UAV; and receive and store the returned video stream transmitted back by the UAV in the flight mode.
- The device according to claim 90, wherein the user instruction includes: determining a target object to be tracked.
- The device according to claim 91, wherein the second processor is further configured to: identify feature information of the target object to be tracked in the currently displayed image, the feature information being a preset position or a preset size at which the target object is to be displayed in the captured image.
- The device according to claim 90, wherein the user instruction further includes: the position at which a specified location of a background identifier is to be displayed in the captured image.
- The device according to claim 93, wherein the background identifier includes at least one of the following: the ground, the sky, the sea surface, and a building.
- The device according to claim 90, wherein the user instruction further includes: an elevation angle or a horizontal angle of the shooting position.
- The device according to claim 90, wherein the user instruction further includes: at least one of the flight distance and the flight speed of the UAV.
- The device according to claim 90, wherein the flight mode is a default flight mode; or the user instruction includes a mode selection instruction containing the flight mode used to direct the flight of the UAV.
- The device according to claim 97, wherein the flight mode includes at least one of the following: a diagonal mode, an orbit mode, a spiral mode, a skyward mode, and a comet orbit mode, each flight mode including a corresponding flight strategy used to direct the flight of the UAV.
- The device according to claim 90, wherein after the receiving and storing of the returned video stream transmitted back by the UAV in the flight mode, the second processor is further configured to: process the returned video stream to generate a video clip of a first specified duration, where the first specified duration is shorter than the duration of the returned video stream.
- The device according to claim 99, wherein the step of processing the returned video stream to generate the video clip of the first specified duration is executed after it is determined that the UAV satisfies a specified condition.
- The device according to claim 100, wherein the specified condition includes: the UAV having completed the flight of the flight mode.
- The device according to claim 90, wherein processing the returned video stream to generate a video clip of a first preset duration comprises: performing frame extraction on the returned video stream to generate the video clip of the first preset duration.
- The device according to claim 102, wherein performing frame extraction on the returned video stream to generate the video clip of the first preset duration comprises: performing frame extraction on the video stream according to at least one of the flight mode, flight speed, and flight direction of the UAV, to generate the video clip of the first preset duration.
- The device according to claim 102, wherein performing frame extraction on the returned video stream to generate the video clip of the first preset duration comprises: performing frame extraction on the returned video stream according to the duration and the number of frames of the returned video stream, to generate the video clip of the first preset duration.
- The device according to claim 104, wherein performing frame extraction on the returned video stream according to the total duration and the number of frames of the returned video stream to generate the video clip of the first preset duration comprises: splitting the returned video stream into multiple segments to obtain multiple returned video stream segments; performing frame extraction on some of the multiple segments to obtain extracted frames of the corresponding segments; and generating the video clip of the first preset duration according to the remaining segments of the multiple segments and the extracted frames obtained from the corresponding segments.
- The device according to claim 105, wherein splitting the returned video stream into multiple segments comprises: splitting the returned video stream into at least three segments in chronological order of shooting time; and performing frame extraction on some of the multiple segments to obtain extracted frames of the corresponding segments comprises: performing frame extraction on the segment whose shooting time lies in the middle period among the at least three segments, to obtain the extracted frames corresponding to that segment.
- The device according to claim 105, wherein performing frame extraction on some of the multiple segments to obtain extracted frames of the corresponding segments comprises: performing frame extraction on the corresponding segments at a preset frame extraction rate, to obtain the extracted frames corresponding to those segments.
- The device according to any one of claims 90 to 107, wherein the second processor is further configured to: obtain the raw data stream captured by the UAV; determine, according to the raw data stream, the raw video stream captured by the UAV in the flight mode; and process the raw video stream to generate a new video clip of a second preset duration, where the second preset duration is shorter than the duration of the returned video stream.
- The device according to claim 108, wherein determining, according to the raw data stream, the raw video stream captured by the UAV in the flight mode comprises: determining, according to a video stream tag corresponding to the flight mode, the raw video stream captured by the UAV in the flight mode within the raw data stream.
- The device according to claim 109, wherein the step of obtaining the raw data stream captured by the UAV is executed after it is determined that the resolution of the video clip obtained from the returned video stream is lower than a preset resolution.
- The device according to claim 108, wherein the second processor is further configured to: send at least one of the video clip and the new video clip to a remote terminal server.
- The device according to claim 111, wherein before sending at least one of the video clip and the new video clip to the remote terminal server, the second processor is further configured to: receive a sharing instruction input by the user, where the sharing instruction includes the corresponding remote terminal server and an identifier of the video to be shared, the identifier being that of at least one of the video clip and the new video clip; and send, according to the sharing instruction, at least one of the video clip and the new video clip to the remote terminal server.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/085791 WO2018214078A1 (zh) | 2017-05-24 | 2017-05-24 | Imaging control method and device |
CN202110425831.3A CN113038023A (zh) | 2017-05-24 | 2017-05-24 | Imaging control method and device |
CN202110426766.6A CN113163119A (zh) | 2017-05-24 | 2017-05-24 | Imaging control method and device |
CN202110425837.0A CN113163118A (zh) | 2017-05-24 | 2017-05-24 | Imaging control method and device |
CN201780004593.0A CN108476288B (zh) | 2017-05-24 | 2017-05-24 | Imaging control method and device |
US16/690,668 US11120261B2 (en) | 2017-05-24 | 2019-11-21 | Imaging control method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/085791 WO2018214078A1 (zh) | 2017-05-24 | 2017-05-24 | Imaging control method and device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/690,668 Continuation US11120261B2 (en) | 2017-05-24 | 2019-11-21 | Imaging control method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018214078A1 (zh) | 2018-11-29 |
Family
ID=63266484
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/085791 WO2018214078A1 (zh) | 2017-05-24 | 2017-05-24 | 拍摄控制方法及装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11120261B2 (zh) |
CN (4) | CN113163118A (zh) |
WO (1) | WO2018214078A1 (zh) |
Also Published As
Publication number | Publication date |
---|---|
US20200104598A1 (en) | 2020-04-02 |
CN113163118A (zh) | 2021-07-23 |
CN108476288A (zh) | 2018-08-31 |
CN113038023A (zh) | 2021-06-25 |
CN113163119A (zh) | 2021-07-23 |
CN108476288B (zh) | 2021-05-07 |
US11120261B2 (en) | 2021-09-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17910590; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 17910590; Country of ref document: EP; Kind code of ref document: A1 |