
CN109656319B - Method and equipment for presenting ground action auxiliary information - Google Patents


Info

Publication number
CN109656319B
CN109656319B (application CN201811397300.2A)
Authority
CN
China
Prior art keywords
information
unmanned aerial
aerial vehicle
ground
target
Prior art date
Legal status
Active
Application number
CN201811397300.2A
Other languages
Chinese (zh)
Other versions
CN109656319A (en
Inventor
杜威
许家文
杜虎
Current Assignee
Liangfengtai Shanghai Information Technology Co ltd
Original Assignee
Liangfengtai Shanghai Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Liangfengtai Shanghai Information Technology Co ltd filed Critical Liangfengtai Shanghai Information Technology Co ltd
Priority to CN201811397300.2A
Publication of CN109656319A
Application granted
Publication of CN109656319B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G06F 1/163: Wearable computers, e.g. on a belt
    • G06T 7/187: Segmentation; edge detection involving region growing, region merging or connected component labelling
    • H04B 7/18506: Communications with or from aircraft, i.e. aeronautical mobile service
    • H04N 7/185: Closed-circuit television [CCTV] systems receiving images from a single remote source, from a mobile camera, e.g. for remote control
    • G06T 2207/30204: Marker (indexing scheme for image analysis or image enhancement; subject or context of image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The purpose of this application is to provide a method and equipment for presenting ground action auxiliary information. An unmanned aerial vehicle (drone) control device sends drone auxiliary information to a corresponding user device; the user device receives the drone auxiliary information sent by the drone control device and presents ground action auxiliary information corresponding to it, where the ground action auxiliary information is used to assist ground actions. The application can improve the ground action efficiency of a team.

Description

Method and equipment for presenting ground action auxiliary information
Technical Field
The present application relates to the field of computers, and more particularly to a technique for presenting ground action auxiliary information.
Background
With the development of technology, unmanned aerial vehicles (drones) have come into wide use. Generally, a set of drone equipment includes the drone itself (the airframe) and a drone control device for controlling it. Because of their flexibility, drones are often used to assist ground actions: the user operating the drone control device (the drone "flyer") provides action guidance to ground personnel according to the scene picture (the "aerial picture") taken by the drone, for example by describing the surrounding environment or suggesting an action route. The drone flyer and the ground personnel stay in contact by radio or similar means.
Although the drone enriches the information available to ground personnel, that information is still quite limited. On the one hand, the information provided by the drone flyer over the radio (e.g., a walkie-talkie) is not the original on-site information, so the information ground personnel obtain may be distorted in the course of communication; on the other hand, relying only on the flyer's verbal description, ground personnel may misjudge or miss site conditions. Both reduce the team's action efficiency. In addition, when the ground personnel are police officers, they must hold the police equipment used to communicate with the drone flyer in their hands, which hinders carrying out the task.
Disclosure of Invention
It is an object of the present application to provide a method and equipment for presenting ground action auxiliary information.
According to an aspect of the present application, there is provided a method for presenting ground action auxiliary information at a user device, the method comprising:
receiving unmanned aerial vehicle auxiliary information sent by a corresponding unmanned aerial vehicle control device; and
presenting ground action auxiliary information corresponding to the unmanned aerial vehicle auxiliary information;
wherein the ground action auxiliary information is used to assist ground actions.
According to another aspect of the present application, there is provided a method for presenting ground action auxiliary information at an unmanned aerial vehicle control device, the method comprising:
sending unmanned aerial vehicle auxiliary information to a corresponding user device, so that the user device can present corresponding ground action auxiliary information.
According to one aspect of the present application, there is provided a user device for presenting ground action auxiliary information, the user device comprising:
a first module, configured to receive unmanned aerial vehicle auxiliary information sent by a corresponding unmanned aerial vehicle control device;
a second module, configured to present ground action auxiliary information corresponding to the unmanned aerial vehicle auxiliary information;
wherein the ground action auxiliary information is used to assist ground actions.
According to another aspect of the application, there is provided an unmanned aerial vehicle control device for presenting ground action auxiliary information, the unmanned aerial vehicle control device comprising:
a second module, configured to send the unmanned aerial vehicle auxiliary information to the corresponding user device, so that the user device can present the corresponding ground action auxiliary information.
According to one aspect of the present application, there is provided a method for presenting ground action auxiliary information, the method comprising:
the unmanned aerial vehicle control device sends unmanned aerial vehicle auxiliary information to a corresponding user device;
the user device receives the unmanned aerial vehicle auxiliary information sent by the unmanned aerial vehicle control device and presents ground action auxiliary information corresponding to it;
wherein the ground action auxiliary information is used to assist ground actions.
According to one aspect of the present application, there is provided an apparatus for presenting ground action auxiliary information, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of the method described above.
According to another aspect of the present application, there is provided a computer-readable medium comprising instructions that, when executed, cause a system to perform the operations of the method described above.
In this application, the drone control device sends drone auxiliary information to the user device of the ground personnel, so that the user device can present corresponding ground action auxiliary information to assist the ground action, thereby strengthening the interaction between the drone flyer and the ground personnel. Compared with the prior art, the information obtained by the ground personnel is more intuitive and diversified, and the ground personnel can obtain the original on-site information, so the possibility of misjudgment is greatly reduced and the team's action efficiency is greatly improved. In addition, when the ground personnel are police officers, the user device can be configured as a head-mounted display device such as smart glasses, so that the ground personnel do not need to hold police equipment for communicating with the drone flyer, which makes it easier to carry out the task.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a system topology in which a drone, a drone control device, and a user device cooperate to assist ground actions according to one embodiment of the present application;
FIG. 2 is a flow diagram of a method for presenting ground action auxiliary information at a user device according to one embodiment of the present application;
FIG. 3 is a flow diagram of a method for presenting ground action auxiliary information at a user device according to another embodiment of the present application;
FIG. 4 is a flow diagram of a method for presenting ground action auxiliary information at a drone control device according to one embodiment of the present application;
FIG. 5 is a flow diagram of a method for presenting ground action auxiliary information at a drone control device according to another embodiment of the present application;
FIG. 6 is a flow diagram of a method for presenting ground action auxiliary information at a drone control device according to yet another embodiment of the present application;
FIG. 7 is a functional block diagram of a user device for presenting ground action auxiliary information according to one embodiment of the present application;
FIG. 8 is a functional block diagram of a user device for presenting ground action auxiliary information according to another embodiment of the present application;
FIG. 9 is a functional block diagram of a drone control device for presenting ground action auxiliary information according to one embodiment of the present application;
FIG. 10 is a functional block diagram of a drone control device for presenting ground action auxiliary information according to another embodiment of the present application;
FIG. 11 is a functional block diagram of a drone control device for presenting ground action auxiliary information according to yet another embodiment of the present application;
FIG. 12 illustrates an exemplary system of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The devices referred to in this application include, but are not limited to, user devices, network devices, and devices formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smartphone, a tablet computer, or smart glasses, and the mobile electronic product may run any operating system, such as Android or iOS. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which one virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device with the network device, the touch terminal, or the network device with the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
The user device referred to in this application includes, but is not limited to, a computing device such as a smartphone, a tablet, smart glasses, or a helmet. In some embodiments, the user equipment further comprises a camera device for collecting image information, the camera device generally comprises a photosensitive element for converting optical signals into electrical signals, and may further comprise a light ray refracting/reflecting component (such as a lens or a lens assembly) for adjusting the propagation path of incident light rays as required. To facilitate operation by a user, in some embodiments, the user device further includes a display device for presenting and/or for setting up augmented reality content to the user, where in some embodiments, the augmented reality content is presented superimposed on a target device, and the target device is presented by the user device (e.g., transmissive glasses or other user device having a display screen); in some embodiments, the display device is a touch screen, which can be used not only for outputting a graphic image, but also as an input device of a user device for receiving an operation instruction of a user (e.g., an operation instruction for interacting with the augmented reality content). Of course, those skilled in the art should understand that the input device of the user equipment is not limited to the touch screen, and other existing input technologies can be applied to the present application, and are included in the scope of the present application and are included by reference. For example, in some embodiments, the input technique for receiving the user's operation instruction is implemented based on voice control, gesture control, and/or eye tracking.
Referring to the system topology shown in fig. 1, the drone control device communicates with the drone to transmit the data used by the flyer to control the drone's flight direction, attitude, and so on, and the drone sends data back to the drone control device (e.g., including but not limited to one or more items of sensing information such as the drone's own status and scene image information). Meanwhile, the drone control device communicates with the user device of the ground personnel, so that the drone control device sends drone auxiliary information (for example, scene pictures taken by the drone, or other information determined according to the flyer's operations) to the user device, and the user device presents the ground action auxiliary information corresponding to it, so as to assist the actions of the ground personnel. The drone can carry multiple sensors, which sense data such as the drone's own position and attitude or collect relevant information about the external environment. For example, the drone collects information such as its own angular rate, attitude, position, acceleration, altitude, and airspeed based on a GPS sensor, a Real-Time Kinematic (RTK) module, a barometer, a gyroscope, an electronic compass, and the like, and takes scene pictures based on an image sensor; this data can be transmitted to the drone control device. In some cases, a gimbal can be mounted on the drone to carry the camera, so as to isolate the adverse effects on shooting of external disturbances such as changes in the drone's attitude, body vibration, and wind-resistance torque, and keep the optical axis of the onboard camera stable.
Based on the system shown in fig. 1, the present application provides a method for presenting ground action auxiliary information, the method comprising the following steps:
the unmanned aerial vehicle control device sends unmanned aerial vehicle auxiliary information to a corresponding user device; and
the user device receives the unmanned aerial vehicle auxiliary information sent by the unmanned aerial vehicle control device and presents ground action auxiliary information corresponding to it;
wherein the ground action auxiliary information is used to assist ground actions.
The present application is described in detail below from two aspects, namely, user equipment and drone control equipment, respectively.
According to one aspect of the present application, a method for presenting ground action auxiliary information at the user device side is provided. Referring to fig. 2, the method includes step S110 and step S120.
In step S110, the user device receives the drone auxiliary information sent by the corresponding drone control device. In some embodiments, the drone auxiliary information includes, but is not limited to, one or more of the following:
1) target-related information, including but not limited to the location of the destination of the ground action, surrounding landmark information, and the names or characteristics of relevant ground targets (including but not limited to target objects and target persons);
2) image information captured by the drone (hereinafter, drone image information), including but not limited to still image information and moving image information (e.g., video);
3) annotation information, i.e., annotations added by the user of the drone control device or by other users, where the annotated object includes but is not limited to a point in the scene (e.g., determined from the latitude and longitude of the destination of an actual or simulated ground action) or a region (e.g., determined from the latitude and longitude of its vertices), and the annotation content includes but is not limited to color points, lines, models, images, text, etc.;
4) target position information, including but not limited to the latitude and longitude of the target, which ground personnel can use to determine the target's position relative to themselves (a minimal data-structure sketch covering these fields is given after this list).
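For illustration only, the following is a minimal sketch of how the four kinds of drone auxiliary information listed above might be carried in a single message; the class and field names are assumptions introduced here, not terms used by the application.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Annotation:
    """One annotation element added by the flyer (or another participant)."""
    kind: str                              # e.g. "point", "region", "arrow", "text"
    content: str                           # label text or a reference to an image/model resource
    positions: List[Tuple[float, float]]   # latitude/longitude of the point or region vertices

@dataclass
class DroneAssistanceInfo:
    """Illustrative container for the four kinds of drone auxiliary information."""
    target_related: Optional[dict] = None                  # destination, landmarks, target features
    image_frame: Optional[bytes] = None                     # a still frame or encoded video chunk
    annotations: List[Annotation] = field(default_factory=list)
    target_position: Optional[Tuple[float, float]] = None   # target latitude/longitude
```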
In some embodiments, the ground personnel need to identify the target (for example, when the ground personnel are police officers), and the target-related information may further include the target's (e.g., a criminal suspect's) facial features, height, sex, age, and the like.
Where the drone auxiliary information includes surrounding landmark information, that information may be determined from the drone's sensing information. For example, the drone carries various sensors, including a GPS sensor, a barometer, a gyroscope, an electronic compass, and the like, which collect the drone's latitude and longitude, attitude, speed, angular rate, acceleration, altitude, airspeed, and so on. The drone control device in communication with the drone obtains the drone's latitude and longitude and sends a request to a Geographic Information System (GIS), and the GIS returns the surrounding building landmark information for the received coordinates. The drone control device then obtains the required sensing data and, according to the drone's altitude, heading, latitude and longitude, and other data, superimposes the surrounding building landmarks onto the image currently captured by the drone, so that both the police officers and the flyer can see the surrounding geographic information.
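As a rough illustration of the overlay step just described, the sketch below computes where a landmark returned by the GIS would sit horizontally in the aerial picture, given the drone's position and heading. It assumes the camera faces along the drone's heading, a known horizontal field of view, and a spherical-earth bearing calculation; the GIS query itself and the vertical placement are omitted.

```python
import math

def bearing_deg(lat1, lng1, lat2, lng2):
    """Initial bearing from point 1 to point 2, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlng = math.radians(lng2 - lng1)
    y = math.sin(dlng) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlng)
    return math.degrees(math.atan2(y, x)) % 360.0

def landmark_pixel_x(drone_lat, drone_lng, drone_heading_deg,
                     landmark_lat, landmark_lng,
                     image_width_px, horizontal_fov_deg=84.0):
    """Horizontal pixel column at which to draw the landmark label, or None if the
    landmark is outside the camera's horizontal field of view."""
    rel = (bearing_deg(drone_lat, drone_lng, landmark_lat, landmark_lng)
           - drone_heading_deg + 180.0) % 360.0 - 180.0   # relative bearing in [-180, 180)
    if abs(rel) > horizontal_fov_deg / 2:
        return None
    return int((rel / horizontal_fov_deg + 0.5) * image_width_px)
```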
In step S120, the user device presents the ground action auxiliary information corresponding to the drone auxiliary information. For example, the user device presents the target-related information, the drone image information, or the annotation information directly. Alternatively, the user device generates the corresponding ground action auxiliary information from the drone auxiliary information. For example, the drone auxiliary information includes target position information; accordingly, in step S120, the user device determines the ground action auxiliary information from the target position information and its own position information (for instance, by invoking a map application interface to generate live-action navigation information, such as superimposing virtual arrows onto the real scene through transmissive glasses, or providing a separate scene map with navigation information), and presents this ground action auxiliary information to guide the ground personnel.
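A minimal sketch of the geometric part of such live-action navigation follows: from the target position information and the user device's own position, it derives the distance and bearing that a map application or an AR arrow overlay would consume. The haversine formula and the spherical-earth radius are standard assumptions, not something specified by the application.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def distance_and_bearing(user_lat, user_lng, target_lat, target_lng):
    """Great-circle distance (metres) and initial bearing (degrees clockwise from
    north) from the ground user's position to the target position."""
    phi1, phi2 = math.radians(user_lat), math.radians(target_lat)
    dphi = math.radians(target_lat - user_lat)
    dlng = math.radians(target_lng - user_lng)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlng / 2) ** 2
    distance = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    y = math.sin(dlng) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlng)
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    return distance, bearing

def arrow_turn_angle(device_heading_deg, bearing_deg):
    """Signed angle the on-screen arrow should point at, relative to where the
    user device is currently facing (negative = turn left)."""
    return (bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0
```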
In some embodiments, the ground action auxiliary information includes drone image information captured by the drone, and the method further includes the following steps: determining corresponding drone image annotation information according to a drone image annotation operation (such as a click, touch, or slide performed by the user on a touch screen); and sending the drone image annotation information to the drone control device for the drone user to view. In other words, the ground personnel annotate the drone image information on their own user device, the annotation information is sent back to the drone control device for reference by the drone control device's user, and the drone control device receives and presents the drone image annotation information about the drone image information sent by the user device.
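One possible shape for the drone image annotation information produced by such a touch operation is sketched below; the field names and the JSON transport are assumptions made for illustration.

```python
import json
import time

def make_drone_image_annotation(touch_x_px, touch_y_px, frame_width_px, frame_height_px,
                                frame_timestamp_s, note=""):
    """Package a touch on the displayed drone image as annotation info.
    Coordinates are normalised so the drone control device can re-project them
    onto its own rendering of the same frame regardless of screen size."""
    return {
        "type": "drone_image_annotation",
        "frame_timestamp": frame_timestamp_s,           # which video frame was touched
        "position": [touch_x_px / frame_width_px,       # normalised [0, 1] coordinates
                     touch_y_px / frame_height_px],
        "note": note,
        "created_at": time.time(),
    }

# Example payload sent back to the drone control device.
payload = json.dumps(make_drone_image_annotation(512, 300, 1920, 1080, 73.2, "suspect vehicle"))
```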
The user device may be a head-mounted smart device such as smart glasses or a smart helmet, or a mobile phone, a tablet computer, or a navigation device (e.g., a computing device that is handheld or fixed on a vehicle). In some cases, the user device can capture first-person-view video of the ground personnel and present the above ground action auxiliary information, interaction information with other users, command and dispatch information sent by a command platform responsible for directing the ground action, and so on. In some embodiments, the ground action auxiliary information is presented in a fixed area (e.g., a rectangular area, or the entire displayable area) on the user device's display; in other embodiments, it is presented in an augmented reality manner, for example through the projection device of transmissive glasses, so that virtual information is overlaid on the relevant area of the real world to give a combined virtual-and-real experience. It will be understood by those skilled in the art that the user devices described above are merely exemplary, and other existing or future user devices that may be suitable for use in the present application are also included within the scope of the present application and incorporated herein by reference.
In some embodiments, in step S110, the user device receives the auxiliary information of the drone sent by the corresponding drone control device via a corresponding network device (for example, including but not limited to a cloud server), so as to implement multi-end information sharing, for example, in a case where there are multiple drone flyers or multiple ground personnel, other drone flyers or ground personnel may also obtain the auxiliary information of the drone through the network device. In some embodiments, the drone assistance information includes drone image information (e.g., video information) and is streamed by the drone control device to the network device for each participant to view or recall the corresponding image material in real-time.
In some embodiments, referring to fig. 3, the method further includes step S130. In step S130, the user device acquires ground image information of the scene where it is located and sends the ground image information to the drone control device. For example, the user device captures images of the ground personnel's position in real time with a camera fixed on the user device as the ground image information, and sends them to the drone control device. As above, in some embodiments the user device sends the ground image information to the drone control device via the corresponding network device, e.g., the ground image information is streamed to the network device so that each participant can view or review the corresponding footage in real time.
In some embodiments, the method further comprises step S140 (not shown) and step S150 (not shown).
Specifically, after acquiring the above-described ground image information, the user equipment performs a target recognition operation on the ground image information in step S140. For example, the object recognition operation is used to recognize a specific object (fixed or unfixed, such as a building or a vehicle) or a person. In one embodiment, the target recognition operation is implemented based on a deep learning algorithm, first preparing a training set (e.g., an image of a pedestrian wearing clothes of different colors) and corresponding labels (e.g., a location of the pedestrian in the image); then training a deep learning model, and continuously iterating parameters of the model according to a training set until the model converges; and finally, inputting the image shot by the user equipment into the trained deep learning model to obtain the position of the pedestrian with the specific clothes color in the picture, thereby finishing the target recognition operation.
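The inference half of that target recognition operation might look like the sketch below. Here `detector` stands in for whatever trained model is actually deployed (its loading is out of scope), and is assumed to return candidate boxes with confidence scores; nothing here is a specific library API.

```python
import numpy as np

def recognize_target(frame: np.ndarray, detector, score_threshold: float = 0.5):
    """Run the trained detector on one frame and return the best-scoring box.

    `detector` is assumed to be a callable returning (boxes, scores), where each
    box is an (x1, y1, x2, y2) pixel rectangle for a pedestrian wearing the
    clothing colour the model was trained on. Returns None if nothing is found.
    """
    boxes, scores = detector(frame)
    candidates = [(box, score) for box, score in zip(boxes, scores) if score >= score_threshold]
    if not candidates:
        return None
    best_box, _ = max(candidates, key=lambda bs: bs[1])
    return best_box  # position of the recognised target in the picture
```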
In step S150, the user equipment presents the ground image information and presents corresponding target tracking information based on the operation result of the target recognition operation. For example, after obtaining the position of the pedestrian in the screen through the target recognition operation, the user equipment superimposes and presents corresponding target tracking information at the position to distinguish the pedestrian from other objects or pedestrians in the screen, for example, the target tracking information is a highlighted contour line around the target, or a square frame, a color bar, a color point, an arrow, and the like. In some embodiments, when the ground image information changes over time (e.g., the user device takes a video rather than a still image), the object tracking information may follow the identified object to keep the identified object in a distinguished state from other objects or pedestrians in the picture, such as performing the object recognition operation on the video taken by the user device frame by frame or performing the object recognition operation on a plurality of key frames in the video.
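Building on the previous sketch, the following shows one way the target tracking information could follow the recognised target across video frames, running the detector only on sampled keyframes as described above and reusing the last result in between; OpenCV is used purely for drawing and display.

```python
import cv2  # OpenCV

def overlay_tracking(video_path, detector, keyframe_interval=5):
    cap = cv2.VideoCapture(video_path)
    last_box = None
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_index % keyframe_interval == 0:           # recognition on keyframes only
            last_box = recognize_target(frame, detector)    # from the sketch above
        if last_box is not None:
            x1, y1, x2, y2 = map(int, last_box)
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)  # tracking marker
        cv2.imshow("ground image with target tracking info", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
        frame_index += 1
    cap.release()
    cv2.destroyAllWindows()
```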
Besides the target recognition operation performed locally by the user device, the target tracking information can also be obtained from the drone auxiliary information sent by the drone control device. In some embodiments, the drone auxiliary information includes target-assisted tracking information, and the ground action auxiliary information includes target tracking information corresponding to it. For example, the drone control device performs a target recognition operation on an image captured by the drone to recognize a specific object or person in the image, thereby obtaining the target-assisted tracking information (e.g., the position of the specific object or person in the image captured by the drone). The process by which the drone control device performs the target recognition operation is the same as or substantially the same as the process by which the user device performs it, so it is not repeated and is incorporated herein by reference.
In some embodiments, the drone auxiliary information includes annotation information (e.g., added by the drone flyer through the drone control device, or added by other action participants, such as a command platform, capable of communicating with the user device). The annotation information includes annotation elements (including but not limited to boxes, color bars, color points, arrows, pictures/videos, animations, three-dimensional models, etc.) and their presentation position information (used to determine where the annotation elements appear in the picture). Accordingly, in step S120, the user device presents, based on the annotation information, the ground action auxiliary information corresponding to the drone auxiliary information (e.g., the ground action auxiliary information includes the annotation information). For example, when the annotation information corresponds to a point (e.g., it contains the latitude and longitude of the destination of an actual or simulated ground action), the user device superimposes a color point at that position in the real scene, in a map picture, or in the image picture transmitted by the drone; when the annotation information corresponds to a region (e.g., it contains the latitude and longitude of each vertex of the region), the user device superimposes a color block corresponding to the region. Specifically, in some embodiments the annotation information can include, but is not limited to, the following: route planning information, for example a planned route for front-line police officers in an arrest action, added according to the position of the currently tracked suspect; tactical deployment information, for example areas marked in the drone's transmitted picture (an encirclement area during an emergency, the area a duty task belongs to during a duty mission, a designated area during tactical deployment, and so on), so that front-line police officers can intuitively see the specific area positions during a capture action; and actual-combat drill information (e.g., annotations of ground conditions that aid tactics during drills), for example the position of a simulated suspect highlighted with a red circle. Ground personnel approaching the destination can view the drone's monitoring picture on the way, and with these annotations they can understand and execute the task better, so action efficiency also improves.
In some embodiments, the drone auxiliary information further includes drone image information, and the drone image information is a video. Ignoring network delay, after the drone control device sends the video and the annotation information (including the annotation elements and their presentation position information) to the user device, the user device presents the video and presents the annotation elements according to their presentation position information, so that the flyer's annotations are displayed in real time on the user device and the user can react to them quickly, improving the cooperation efficiency between the user and the drone flyer. In some cases, for example (but not limited to) when network delay cannot be ignored or when the video and annotations need to be reviewed later, the annotation information further includes time-axis position information for each annotation element. The time-axis position information is used to determine the exact video frame to which the annotation element corresponds (e.g., by locating the relevant frame on the time axis) and to superimpose the annotation element on that frame, so as to avoid misplaced annotations caused by superimposing an element on a non-corresponding frame.
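A small sketch of the time-axis alignment just described: given the timestamps of the received video frames and the time-axis position carried with an annotation element, it picks the frame the element actually belongs to. Field and variable names are illustrative.

```python
import bisect

def frame_for_annotation(frame_timestamps, annotation_timeline_pos):
    """Return the index of the video frame whose timestamp is closest to the
    annotation's time-axis position. `frame_timestamps` must be sorted ascending."""
    i = bisect.bisect_left(frame_timestamps, annotation_timeline_pos)
    if i == 0:
        return 0
    if i == len(frame_timestamps):
        return len(frame_timestamps) - 1
    before, after = frame_timestamps[i - 1], frame_timestamps[i]
    return i if (after - annotation_timeline_pos) < (annotation_timeline_pos - before) else i - 1

# The annotation element is then superimposed on the frame at that index, at its
# presentation position, rather than on whichever frame happens to be on screen.
```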
To further improve team cooperation efficiency, the same user device can communicate with multiple drone control devices simultaneously or separately; for example, the drones corresponding to the multiple drone control devices each cover a different part of the same action area, and by interacting with the multiple drone control devices and drones simultaneously or separately, team resources can be used rationally. In some embodiments, in step S110 the user device receives drone auxiliary information sent by at least one of the multiple corresponding drone control devices; in step S120, the user device presents the ground action auxiliary information corresponding to the drone auxiliary information sent by that at least one drone control device. For example, the user device may present in turn the drone auxiliary information sent by the multiple drone control devices, or present multiple items of drone auxiliary information at the same time (e.g., several different items superimposed simultaneously), or present the one or more items the user needs according to the user's selection; for each item of drone auxiliary information, the manner of presentation is the same as or substantially the same as that described above, so it is not repeated and is incorporated herein by reference.
According to another aspect of the present application, a method for presenting ground action auxiliary information at the drone control device side is provided. Referring to fig. 4, the method includes step S210. In step S210, the drone control device sends the drone auxiliary information to the corresponding user device, so that the user device presents the corresponding ground action auxiliary information. In some embodiments, the drone auxiliary information includes, but is not limited to, one or more of the following:
1) target-related information, including but not limited to the location of the destination of the ground action, surrounding landmark information, and the names or characteristics of relevant ground targets (including but not limited to target objects and target persons); in some embodiments, the drone control device performs a target recognition operation on an image captured by the drone to recognize a specific object or person as the target, and then reads the target-related information from a local or accessible database, thereby improving task execution efficiency;
2) drone image information, including but not limited to still image information and moving image information such as video;
3) annotation information, i.e., annotations added by the user of the drone control device or by other users, where the annotated object includes but is not limited to a point in the scene (e.g., determined from the latitude and longitude of the destination of an actual or simulated ground action) or a region (e.g., determined from the latitude and longitude of its vertices), and the annotation content includes but is not limited to color points, lines, models, images, text, etc.;
4) target position information, including but not limited to the latitude and longitude of the target, which ground personnel can use to determine the target's position relative to themselves.
In some embodiments, the drone control device sends the drone auxiliary information to the corresponding user device via a corresponding network device (e.g., including but not limited to a cloud server), so as to share information among multiple ends; for example, where there are multiple drone flyers or multiple ground personnel, the other flyers or ground personnel can also obtain the drone auxiliary information through the network device. In some embodiments, the drone auxiliary information includes drone image information (e.g., video information), which is streamed by the drone control device to the network device so that each participant can view or review the corresponding footage in real time.
In addition to sending the drone auxiliary information to the user device, the drone control device may also receive ground image information sent by the user device for reference; accordingly, in some embodiments the method further includes step S220, as shown in fig. 5. In step S220, the drone control device receives and presents the ground image information sent by the user device. Furthermore, based on that ground image information, the drone control device may also perform a target recognition operation on it and, based on the result, send corresponding target-assisted tracking information (e.g., the position of the recognized target in the picture) to the user device, so that the user device presents the corresponding target tracking information (e.g., a highlighted contour line around the target, or a box, color bar, color point, arrow, etc.) to the ground personnel. By receiving the ground image information sent by the user device, the drone flyer obtains the user's first-person view and grasps the site conditions comprehensively, and can also assist the ground personnel by performing the target recognition operation on that ground image information, further improving cooperation efficiency. For example, the target recognition operation is used to recognize a specific object (fixed or movable, such as a building or a vehicle) or a person. In one embodiment, it is implemented with a deep learning algorithm: first a training set (e.g., images of pedestrians wearing clothes of different colors) and corresponding labels (e.g., the pedestrian's location in each image) are prepared; then a deep learning model is trained, its parameters iterated on the training set until the model converges; finally, the image captured by the user device is fed into the trained model to obtain the position in the picture of the pedestrian wearing the specific clothing color, completing the target recognition operation.
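For illustration, the target-assisted tracking information sent back to the user device could be as simple as a normalized box, so the user device can draw its tracking marker at any display resolution; the exact format below is an assumption, not part of the application.

```python
def make_target_assisted_tracking_info(box, frame_width, frame_height, label="target"):
    """Normalise the recognised target's box so the user device can draw the
    corresponding target tracking marker regardless of its own screen size."""
    x1, y1, x2, y2 = box
    return {
        "type": "target_assisted_tracking",
        "label": label,
        "box": [x1 / frame_width, y1 / frame_height,
                x2 / frame_width, y2 / frame_height],   # normalised [0, 1] coordinates
    }
```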
Wherein in some embodiments, the above method further comprises the steps of: and determining corresponding ground image labeling information according to the ground image labeling operation of the user on the ground image information. Wherein the auxiliary information of the unmanned aerial vehicle comprises the ground image annotation information. For example, after receiving a ground image shot and sent by a user device corresponding to a ground person, a user of the drone control device adds corresponding annotation information to the image, and sends the image and the corresponding annotation information back to the user device.
In some cases, the drone auxiliary information described above may be obtained from drone image information captured by the drone. In some embodiments, referring to fig. 6, the method includes step S250. In step S250, the drone control device obtains drone image information (including but not limited to still images, video, and the like) captured by the corresponding drone, and then, in step S210, sends drone auxiliary information to the corresponding user device based on that drone image information, so that the user device presents the corresponding ground action auxiliary information. For example, the ground action auxiliary information includes the drone image information itself; or the drone control device performs a target recognition operation on the drone image information to recognize a specific target and determines the corresponding target-assisted tracking information; or the number of specified targets in the picture is identified and sent to the user device as part of the ground action auxiliary information.
On this basis, in some embodiments, the method further includes step S260 (not shown). In step S260, the drone control device determines image annotation information regarding the drone image information based on user operations (e.g., including but not limited to clicking, frame-and-select, dragging operations, or text input operations) of the drone user, wherein the image annotation information includes annotation elements (including but not limited to boxes, color bars, color points, arrows, pictures/videos, animations, three-dimensional models, etc.) and rendering position information thereof (for determining positions of the aforementioned annotation elements in the screen). Correspondingly, in step S210, the drone controlling device sends the drone auxiliary information to the corresponding user device based on the drone image information and the image annotation information, so that the user device presents the corresponding ground action auxiliary information.
In some embodiments, the drone auxiliary information further includes drone image information, and the drone image information is a video. For example, in step S260 the drone control device determines, based on a user operation of the drone user, image annotation information related to the drone image information; the image annotation information includes the annotation elements and their presentation position information, and may also include the time-axis position information corresponding to each annotation element. Ignoring network delay, after the drone control device sends the video and the annotation information (including the annotation elements and their presentation position information) to the user device, the user device presents the video and presents the annotation elements according to their presentation position information, so that the flyer's annotations are displayed in real time on the user device, the user can react to them quickly, and the cooperation efficiency between the user and the drone flyer improves. In some cases, for example (but not limited to) when network delay cannot be ignored or when the video and annotations need to be reviewed later, the annotation information further includes the time-axis position information corresponding to each annotation element; it is used to determine the exact video frame to which the annotation element corresponds (e.g., by locating the relevant frame on the time axis) and to superimpose the annotation element on that frame, so as to avoid misplaced annotations caused by superimposing an element on a non-corresponding frame.
In some embodiments, the method further includes step S270 (not shown). In step S270, the drone control device determines the target position information of a designated target based on the relative orientation between the corresponding drone and the designated target and the drone's spatial position information. In some embodiments the designated target is selected by the drone flyer on the drone control device, for example by clicking or box-selecting on its display screen. For example, in one embodiment, the drone control device determines the designated target from the user's selection operation, then controls the drone to measure the straight-line distance between the designated target and the drone (e.g., with an onboard laser rangefinder), combines it with the drone's own altitude information (e.g., from a barometer) to obtain the horizontal distance between the drone and the designated target, and finally determines the latitude and longitude of the designated target from the drone's own latitude and longitude (e.g., from a GPS sensor) and the azimuth of the target relative to the drone, using this latitude and longitude as the target position information. In another embodiment, the drone control device determines the angle between the drone-to-target line and the plumb line from the drone's pitch angle (e.g., from a gyroscope), calculates the horizontal distance between the drone and the designated target from that angle and the drone's altitude (e.g., from a barometer), and then determines the target's latitude and longitude from the drone's own latitude and longitude and the azimuth of the target relative to the drone, using it as the target position information. Of course, those skilled in the art will appreciate that the above methods of obtaining the target position information are merely examples, and other existing or future methods, where applicable to the present application, are also included in its scope and incorporated herein by reference.
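The two geometric variants just described can be written out directly; the sketch below assumes a spherical earth and that the drone's altitude is measured above the plane the target stands on, which is an approximation rather than anything mandated by the application.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def horizontal_distance_from_slant(slant_range_m, drone_altitude_m):
    """Variant 1: laser-rangefinder slant distance plus altitude (Pythagoras)."""
    return math.sqrt(max(slant_range_m ** 2 - drone_altitude_m ** 2, 0.0))

def horizontal_distance_from_plumb_angle(angle_from_plumb_deg, drone_altitude_m):
    """Variant 2: angle between the drone-target line and the plumb line, plus altitude."""
    return drone_altitude_m * math.tan(math.radians(angle_from_plumb_deg))

def target_lat_lng(drone_lat, drone_lng, azimuth_deg, horizontal_distance_m):
    """Destination point along the azimuth from the drone's position: the
    designated target's latitude/longitude used as target position information."""
    d = horizontal_distance_m / EARTH_RADIUS_M
    theta = math.radians(azimuth_deg)
    phi1, lam1 = math.radians(drone_lat), math.radians(drone_lng)
    phi2 = math.asin(math.sin(phi1) * math.cos(d) + math.cos(phi1) * math.sin(d) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(d) * math.cos(phi1),
                             math.cos(d) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)
```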
As mentioned above, to further improve team cooperation efficiency, the same user device may communicate with multiple drone control devices simultaneously or separately. Likewise, the same drone control device may communicate with multiple user devices simultaneously or separately; for example, the ground personnel corresponding to the multiple user devices are located at different positions in the same action area, and team resources are used rationally by having them interact with the drone control device simultaneously or separately. In some embodiments, in step S210, the drone control device sends the drone auxiliary information to the corresponding at least one user device so that the at least one user device presents the corresponding ground action auxiliary information. For example, the drone control device may send the drone auxiliary information to multiple user devices in turn, or send multiple items of drone auxiliary information simultaneously (e.g., different items to different user devices), or send the one or more items a given ground person needs according to the drone user's selection; for each item of drone auxiliary information, the manner of generating or sending it is the same as or substantially the same as described above, so it is not repeated and is incorporated herein by reference.
According to an aspect of the present application, there is also provided a user device for presenting ground action auxiliary information. Referring to fig. 7, the user device 100 includes a first module 110 and a second module 120.
The first module 110 receives the auxiliary information of the drone sent by the corresponding drone controlling device. In some embodiments, the drone assistance information includes, but is not limited to, one or more of:
1) target-related information, including but not limited to the location of the destination of the ground action, surrounding landmark information, and the names or characteristics of relevant ground targets (including but not limited to target objects and target persons);
2) image information captured by the drone (hereinafter, drone image information), including but not limited to still image information and moving image information (e.g., video);
3) annotation information, i.e., annotations added by the user of the drone control device or by other users, where the annotated object includes but is not limited to a point in the scene (e.g., determined from the latitude and longitude of the destination of an actual or simulated ground action) or a region (e.g., determined from the latitude and longitude of its vertices), and the annotation content includes but is not limited to color points, lines, models, images, text, etc.;
4) target position information, including but not limited to the latitude and longitude of the target, which ground personnel can use to determine the target's position relative to themselves.
In some embodiments, the ground personnel need to identify the target (for example, when the ground personnel are police officers), and the target-related information may further include the target's (e.g., a criminal suspect's) facial features, height, sex, age, and the like.
Where the drone auxiliary information includes surrounding landmark information, that information may be determined from the drone's sensing information. For example, the drone carries various sensors, including a GPS sensor, a barometer, a gyroscope, an electronic compass, and the like, which collect the drone's latitude and longitude, attitude, speed, angular rate, acceleration, altitude, airspeed, and so on. The drone control device in communication with the drone obtains the drone's latitude and longitude and sends a request to a Geographic Information System (GIS), and the GIS returns the surrounding building landmark information for the received coordinates. The drone control device then obtains the required sensing data and, according to the drone's altitude, heading, latitude and longitude, and other data, superimposes the surrounding building landmarks onto the image currently captured by the drone, so that both the police officers and the commanders at the rear can see the surrounding geographic information.
The first-second module 120 presents the ground action assistance information corresponding to the drone assistance information. For example, the user equipment presents the target-related data information, the drone image information, or the annotation information. Alternatively, the user equipment generates the corresponding ground action assistance information based on the drone assistance information; for example, the drone assistance information includes target position information. Accordingly, the first-second module 120 determines the ground action assistance information corresponding to the drone assistance information based on the target position information (for example, based on the target position information and the position information of the user equipment itself, a map application interface is called to generate live-action navigation information, such as a virtual arrow superimposed on the real scene through transmissive glasses for navigation, or an independent scene map with navigation information), and presents the ground action assistance information to guide the actions of ground personnel.
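For illustration, a minimal sketch of how the user equipment might derive navigation cues from the target position information and its own GPS fix. The great-circle formulas are standard; the function names and the idea of rotating a virtual arrow by the heading difference are assumptions of this sketch rather than a required implementation.

```python
import math

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Initial bearing (deg) and great-circle distance (m) from the ground
    user (lat1, lon1) to the target (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    # haversine distance
    a = (math.sin((phi2 - phi1) / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
    dist = 2 * 6_371_000 * math.asin(math.sqrt(a))
    return bearing, dist

def arrow_rotation(target_bearing_deg, device_heading_deg):
    """Angle by which smart glasses would rotate a virtual guide arrow."""
    return (target_bearing_deg - device_heading_deg + 180) % 360 - 180
```

In an augmented reality presentation, `arrow_rotation` would be re-evaluated whenever the glasses' compass heading changes, so the arrow keeps pointing toward the target as the wearer turns.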
In some embodiments, the ground action assistance information includes drone image information captured by the drone, and the user equipment further includes a first-sixth module (not shown) configured to: determine corresponding drone image annotation information according to a drone image annotation operation of the user (such as clicking, touching, or sliding on a touch screen); and send the drone image annotation information to the drone control device for the drone operator to view. For example, the drone image annotation information is added by ground personnel, through their corresponding user equipment, on the basis of the above drone image information and is sent back to the drone control device for the reference of its user; for example, the drone control device receives and presents the drone image annotation information, about the drone image information, sent by the user equipment.
The user equipment may be a head-mounted smart device such as smart glasses or a smart helmet, or a mobile phone, a tablet computer, or a navigation device (e.g., a computing device that is handheld or fixed on a vehicle). In some cases, the user equipment may be used to capture a first-person-view video of the ground personnel and to present the above ground action assistance information, interaction information with other users, command and dispatch information sent by a command platform responsible for directing the ground action, and the like. In some embodiments, the ground action assistance information is presented in a fixed area (e.g., a rectangular area, or the entire displayable area) on a display device of the user equipment; in other embodiments, the ground action assistance information is presented in an augmented reality manner, for example through the projection device of transmissive glasses, so as to overlay virtual information on the relevant area of the real world and realize a combined virtual-real experience. It will be understood by those skilled in the art that the user equipment described above is merely exemplary, and other existing or future user equipment that may be applicable to the present application is also included within the scope of the present application and is incorporated herein by reference.
In some embodiments, the first-first module 110 receives the drone assistance information sent by the corresponding drone control device via a corresponding network device (e.g., including but not limited to a cloud server), so as to implement multi-end information sharing; for example, where there are multiple drone operators or multiple ground personnel, the other drone operators or ground personnel may also obtain the drone assistance information through the network device. In some embodiments, the drone assistance information includes drone image information (e.g., video information) and is streamed by the drone control device to the network device so that each participant can view or replay the corresponding image material in real time.
In some embodiments, referring to fig. 8, the user equipment 100 further comprises a first-third module 130. The first-third module 130 obtains ground image information of the scene where the user equipment is located and sends the ground image information to the drone control device. For example, the user equipment captures, in real time, images of the position where the ground personnel are located based on a camera fixedly arranged on the user equipment, takes them as the ground image information, and sends them to the drone control device. Similar to the above, in some embodiments, the user equipment sends the ground image information to the drone control device via the corresponding network device; for example, the ground image information is streamed to the network device so that each participant can view or replay the corresponding image material in real time.
In some embodiments, the user equipment 100 further comprises a first-fourth module 140 (not shown) and a first-fifth module 150 (not shown).
Specifically, after the above ground image information is acquired, the first-fourth module 140 performs a target recognition operation on the ground image information. For example, the target recognition operation is used to recognize a specific object (fixed or movable, such as a building or a vehicle) or a person. In one embodiment, the target recognition operation is implemented based on a deep learning algorithm: first a training set (e.g., images of pedestrians wearing clothes of different colors) and corresponding labels (e.g., the locations of the pedestrians in the images) are prepared; then a deep learning model is trained, with the model parameters iterated over the training set until the model converges; finally, an image captured by the user equipment is input into the trained deep learning model to obtain the position, in the picture, of a pedestrian wearing clothes of a specific color, thereby completing the target recognition operation.
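As an illustrative sketch of the inference step only: instead of the custom clothing-color model described above, the snippet below runs a generic pretrained person detector from torchvision and keeps high-confidence person boxes. Fine-tuning such a model on the clothing-color training set would follow the usual supervised training loop and is not shown; the threshold value and function names are assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# A generic off-the-shelf detector stands in for the trained model described above;
# "person" is class 1 in the COCO label map used by torchvision detection models.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_people(image_path, score_threshold=0.7):
    """Return [x1, y1, x2, y2] boxes of detected persons in the image."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]
    boxes = []
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if label.item() == 1 and score.item() >= score_threshold:
            boxes.append([int(v) for v in box.tolist()])
    return boxes
```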
The first-fifth module 150 presents the ground image information and presents corresponding target tracking information based on the result of the target recognition operation. For example, after the position of the pedestrian in the picture is obtained through the target recognition operation, the user equipment superimposes corresponding target tracking information at that position to distinguish the pedestrian from other objects or pedestrians in the picture; for example, the target tracking information is a highlighted contour line around the target, or a box, a color bar, a color point, an arrow, and the like. In some embodiments, when the ground image information changes over time (e.g., the user equipment captures a video rather than a still image), the target tracking information may follow the recognized target so as to keep it distinguished from other objects or pedestrians in the picture, for example by performing the target recognition operation on the video captured by the user equipment frame by frame, or on a number of key frames in the video.
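One simple way to keep the tracking mark attached to the same pedestrian across video frames is to associate each frame's detections with the previous target box by intersection-over-union. The sketch below (using OpenCV only for drawing) is an assumption-level illustration, not the specific tracking method required by this application.

```python
import cv2

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter + 1e-6)

def track_and_draw(frame, detections, last_box):
    """Keep highlighting the detection that best overlaps last frame's target."""
    best = max(detections, key=lambda d: iou(d, last_box), default=None)
    if best is not None and iou(best, last_box) > 0.3:
        cv2.rectangle(frame, (best[0], best[1]), (best[2], best[3]),
                      (0, 0, 255), 2)          # red box as the target tracking mark
        return best
    return last_box   # detection lost this frame; keep the previous estimate
```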
Besides the target recognition operation performed locally by the user equipment, the target tracking information may also be obtained based on the drone assistance information sent by the drone control device. In some embodiments, the drone assistance information includes target-assisted tracking information, and the ground action assistance information includes target tracking information corresponding to the target-assisted tracking information. For example, the drone control device performs a target recognition operation on an image captured by the drone to recognize a specific object or person in the image, thereby obtaining the above target-assisted tracking information (e.g., the position of the specific object or person in the image captured by the drone). The process by which the drone control device performs the target recognition operation is the same as or substantially the same as the process by which the user equipment performs it, which is not repeated here and is included herein by reference.
In some embodiments, the drone assistance information includes annotation information (e.g., added by the drone operator through the drone control device, or added by other action participants, such as a command platform, capable of communicating with the user equipment), and the annotation information includes annotation elements (including but not limited to boxes, color bars, color points, arrows, pictures/videos, animations, three-dimensional models, and the like) and their presentation position information (used to determine where the annotation elements are located in the picture). Accordingly, the first-second module 120 presents, based on the annotation information, the ground action assistance information corresponding to the drone assistance information (e.g., the ground action assistance information includes the annotation information). For example, when the annotation information corresponds to a point (for example, the annotation information includes the latitude and longitude of the destination of an actual or simulated ground action), the user equipment superimposes a color point at the position of that point in the real scene, in a map picture, or in the image transmitted by the drone; when the annotation information corresponds to a region (for example, the annotation information includes the latitude and longitude of each vertex of the region), the user equipment superimposes a color block corresponding to the region. Specifically, in some embodiments, the annotation information may include, but is not limited to, the following: route planning information, for example a planned route for front-line police officers in an arrest action, added according to the position of the currently tracked suspect; strategic deployment information, for example areas marked in the drone-transmitted picture (an encirclement area during an emergency, the area to which a duty task belongs, a designated area in a tactical deployment, and the like), so that front-line police officers can intuitively see the specific area positions during a capture action; and drill information (e.g., annotating ground conditions during an exercise to aid tactics), for example marking where a simulated suspect is located and highlighting it with a red circle. Nearby ground personnel can view the drone monitoring picture while travelling to the destination; through this annotation information, ground personnel can better understand and execute the task, and action efficiency is also improved.
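A possible, purely illustrative data structure for such annotation information is sketched below; the field names are assumptions chosen to mirror the elements discussed above (annotation element, geographic anchor, presentation position, and an optional time-axis position used in the next paragraph).

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class AnnotationElement:
    kind: str                      # "point", "box", "arrow", "text", "model", ...
    payload: Optional[str] = None  # e.g. label text or an image/model URI

@dataclass
class Annotation:
    element: AnnotationElement
    anchor_latlon: List[Tuple[float, float]]      # one vertex for a point, several for a region
    screen_pos: Optional[Tuple[int, int]] = None  # presentation position in the video frame
    timeline_ms: Optional[int] = None             # position on the video time axis (optional)
    author: str = "drone_operator"
```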
In some embodiments, the drone assistance information further includes drone image information, and the drone image information is a video. Ignoring network delay, after the drone control device sends the video and the annotation information (including the annotation elements and their presentation position information) to the user equipment, the user equipment presents the video and presents the annotation elements based on their presentation position information, so that the annotation content of the drone operator is presented in real time on the user equipment side; the user can then react quickly based on the annotation content, and the cooperation efficiency between the user and the drone operator is improved. In some cases, for example but not limited to when the network delay cannot be ignored, or when the video and the annotation information need to be replayed for review, the annotation information further includes time-axis position information corresponding to the annotation elements; the time-axis position information is used to determine the video frame to which an annotation element exactly corresponds (e.g., by determining the position of the relevant video frame on the time axis) and to superimpose the annotation element on that video frame, so as to avoid annotation misplacement caused by superimposing the annotation element on a non-corresponding video frame.
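For example, if an annotation carries its time-axis position in milliseconds, the receiver can recover the exact frame it belongs to from the video frame rate. The helper below is a minimal sketch under that assumption and reuses the hypothetical `timeline_ms` field from the earlier data-structure sketch.

```python
def frame_index_for_annotation(annotation_ms, video_fps, first_frame_ms=0):
    """Pick the video frame an annotation belongs to from its time-axis position."""
    return round((annotation_ms - first_frame_ms) * video_fps / 1000.0)

def annotations_for_frame(annotations, current_frame_idx, video_fps):
    """Receiver side: only draw annotations whose frame index matches the frame
    currently being rendered, so late-arriving marks never land on the wrong frame."""
    return [a for a in annotations
            if a.timeline_ms is not None
            and frame_index_for_annotation(a.timeline_ms, video_fps) == current_frame_idx]
```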
To further improve team cooperation efficiency, the same user equipment may communicate with multiple drone control devices, simultaneously or separately; for example, the drones corresponding to the multiple drone control devices cover different parts of the same action area, and team resources are used efficiently by interacting with the multiple drone control devices and drones together or one at a time. In some embodiments, the first-first module 110 receives drone assistance information sent by at least one of the corresponding multiple drone control devices; the first-second module 120 presents the ground action assistance information corresponding to the drone assistance information sent by the at least one of the multiple drone control devices. For example, the user equipment may present, in turn, the drone assistance information sent by the multiple drone control devices, or present multiple pieces of drone assistance information at the same time (for example, several different pieces of information are superimposed and presented simultaneously), or present the one or more pieces of drone assistance information required by the user according to a selection operation of the user. The manner of presenting each piece of drone assistance information is the same as or substantially the same as described above, which is not repeated here and is included herein by reference.
According to another aspect of the application, a drone control device for presenting ground action assistance information is provided. Referring to fig. 9, the drone control device 200 comprises a second-first module 210. The second-first module 210 sends the drone assistance information to the corresponding user equipment for the user equipment to present the corresponding ground action assistance information. In some embodiments, the drone assistance information includes, but is not limited to, one or more of the following:
1) target-related information, including but not limited to location information of the destination of the ground action, surrounding landmark information, names or features of related ground targets (including but not limited to target objects and target persons), and the like; in some embodiments, the drone control device performs a target recognition operation on an image captured by the drone to recognize a specific object or person as the target, and the drone control device then reads the target-related data information from a local or otherwise accessible database, thereby improving task execution efficiency;
2) drone image information including, but not limited to, still image information and moving image information, such as video;
3) annotation information, including but not limited to annotation information added by the drone operator or by other users, where the annotated object includes but is not limited to a point in the scene (e.g., a point determined from the latitude and longitude of the destination of an actual or simulated ground action) or a region (e.g., a region determined from the latitude and longitude of its vertices), and the annotation form includes but is not limited to color points, lines, models, images, text, and the like;
4) target position information, including but not limited to the latitude and longitude of the target, which may be used to determine the position of the target relative to ground personnel for their reference.
In some embodiments, the drone control device sends the drone assistance information to the corresponding user equipment via a corresponding network device (e.g., including but not limited to a cloud server), so as to implement multi-end information sharing; for example, where there are multiple drone operators or multiple ground personnel, the other drone operators or ground personnel may also obtain the drone assistance information through the network device. In some embodiments, the drone assistance information includes drone image information (e.g., video information) and is streamed by the drone control device to the network device so that each participant can view or replay the corresponding image material in real time.
In addition to sending the drone assistance information to the user equipment, the drone control device may also receive, for reference, ground image information sent by the user equipment. Accordingly, in some embodiments, the drone control device further comprises a second-second module 220, as shown in fig. 10. The second-second module 220 receives and presents the ground image information sent by the user equipment. Furthermore, based on the ground image information sent by the user equipment, the drone control device may also perform a target recognition operation on the ground image information and send corresponding target-assisted tracking information (e.g., the position of the recognized target in the picture) to the user equipment based on the result of the target recognition operation, so that the user equipment can present corresponding target tracking information (e.g., a highlighted contour line around the target, or a box, a color bar, a color point, an arrow, etc.) to the ground personnel based on the target-assisted tracking information. By receiving the ground image information sent by the user equipment, the drone operator can obtain the user's first-person-view picture and grasp the site conditions comprehensively, and can also assist the ground personnel by performing the target recognition operation on the ground image information, thereby further improving cooperation efficiency. For example, the target recognition operation is used to recognize a specific object (fixed or movable, such as a building or a vehicle) or a person. In one embodiment, the target recognition operation is implemented based on a deep learning algorithm: first a training set (e.g., images of pedestrians wearing clothes of different colors) and corresponding labels (e.g., the locations of the pedestrians in the images) are prepared; then a deep learning model is trained, with the model parameters iterated over the training set until the model converges; finally, the image captured by the user equipment is input into the trained deep learning model to obtain the position, in the picture, of a pedestrian wearing clothes of a specific color, thereby completing the target recognition operation.
In some embodiments, the drone control device further includes a second-eighth module (not shown) configured to determine corresponding ground image annotation information according to a ground image annotation operation performed by the user on the ground image information, wherein the drone assistance information comprises the ground image annotation information. For example, after receiving a ground image captured and sent by the user equipment corresponding to a ground person, the user of the drone control device adds corresponding annotation information to the image, and the image and the corresponding annotation information are sent back to the user equipment.
In some cases, the drone assistance information described above may be obtained based on drone image information captured by the drone. In some embodiments, referring to fig. 11, the drone control device 200 includes a second-fifth module 250. The second-fifth module 250 obtains corresponding drone image information (including but not limited to still images, videos, etc.) captured by the drone, and the second-first module 210 then sends the drone assistance information to the corresponding user equipment based on the drone image information, for the user equipment to present the corresponding ground action assistance information. For example, the ground action assistance information includes the above drone image information; or the drone control device performs a target recognition operation on the drone image information to recognize a specific target and determines corresponding target-assisted tracking information; alternatively, the number of specific targets in the picture is identified and sent to the user equipment as part of the ground action assistance information.
On this basis, in some embodiments, the drone control device 200 also includes a second-sixth module 260 (not shown). The second-sixth module 260 determines image annotation information related to the drone image information based on user operations of the drone user (for example, including but not limited to clicking, box-selecting, dragging, or text input operations), wherein the image annotation information includes annotation elements (including but not limited to boxes, color bars, color points, arrows, pictures/videos, animations, three-dimensional models, and the like) and their presentation position information (used to determine the positions of the annotation elements in the picture). Correspondingly, the second-first module 210 sends the drone assistance information to the corresponding user equipment based on the drone image information and the image annotation information, for the user equipment to present the corresponding ground action assistance information.
In some embodiments, the drone assistance information further includes drone image information, and the drone image information is a video. For example, the second-sixth module 260 determines, based on a user operation of the drone user, image annotation information related to the drone image information, where the image annotation information includes annotation elements and their presentation position information, and may also include time-axis position information corresponding to the annotation elements. For example, ignoring network delay, after the drone control device sends the video and the annotation information (including the annotation elements and their presentation position information) to the user equipment, the user equipment presents the video and presents the annotation elements based on their presentation position information, so that the annotation content of the drone operator is presented in real time on the user equipment side; the user can then react quickly based on the annotation content, and the cooperation efficiency between the user and the drone operator is improved. In some cases, for example but not limited to when the network delay cannot be ignored, or when the video and the annotation information need to be replayed for review, the annotation information further includes the time-axis position information corresponding to the annotation elements; the time-axis position information is used to determine the video frame to which an annotation element exactly corresponds (e.g., by determining the position of the relevant video frame on the time axis) and to superimpose the annotation element on that video frame, so as to avoid annotation misplacement caused by superimposing the annotation element on a non-corresponding video frame.
In some embodiments, the drone control device 200 described above further includes a second-seventh module 270 (not shown). The second-seventh module 270 determines target position information of a designated target based on relative bearing information between the corresponding drone and the designated target, and on spatial position information of the drone. In some embodiments the designated target is determined by the drone operator on the drone control device, for example by clicking or box-selecting on a display screen of the drone control device. For example, in one embodiment, the drone control device determines the corresponding designated target based on a selection operation of the user; the drone control device then controls the drone to measure the straight-line distance between the designated target and the drone (for example, obtained with an onboard laser rangefinder), obtains the horizontal distance between the drone and the designated target in combination with the altitude information of the drone itself (for example, obtained from a barometer), and finally determines the latitude and longitude of the designated target according to the latitude and longitude of the drone itself (for example, obtained from a GPS sensor) and the azimuth of the target relative to the drone, using that latitude and longitude as the target position information of the target. For another example, in another embodiment, the drone control device determines the angle between the line connecting the drone and the designated target and the plumb line based on the pitch angle of the drone (for example, obtained from a gyroscope), calculates the horizontal distance between the drone and the designated target from this angle and the altitude of the drone (for example, obtained from a barometer), and finally determines the latitude and longitude of the designated target according to the latitude and longitude of the drone itself (for example, obtained from a GPS sensor) and the azimuth of the target relative to the drone, using that latitude and longitude as the target position information of the target. Of course, those skilled in the art will appreciate that the above methods of obtaining the target position information are merely examples, and other existing or future obtaining methods, if applicable to the present application, are also included within the scope of the present application and are incorporated herein by reference.
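The two computations described above can be summarised in a short sketch. The flat-earth conversion from north/east offsets to latitude/longitude, and the assumption that the target lies at ground level, are simplifications for illustration; the actual device may use more precise geodetic formulas.

```python
import math

EARTH_RADIUS_M = 6_371_000

def locate_target(drone_lat, drone_lon, drone_alt_m, slant_range_m, azimuth_deg):
    """First variant above: laser slant range + barometric altitude + azimuth."""
    horizontal = math.sqrt(max(slant_range_m ** 2 - drone_alt_m ** 2, 0.0))
    d_north = horizontal * math.cos(math.radians(azimuth_deg))
    d_east = horizontal * math.sin(math.radians(azimuth_deg))
    target_lat = drone_lat + math.degrees(d_north / EARTH_RADIUS_M)
    target_lon = drone_lon + math.degrees(
        d_east / (EARTH_RADIUS_M * math.cos(math.radians(drone_lat))))
    return target_lat, target_lon

def locate_target_by_pitch(drone_lat, drone_lon, drone_alt_m,
                           angle_from_plumb_deg, azimuth_deg):
    """Second variant above: horizontal distance from the angle between the
    drone-target line and the plumb line, combined with the drone's altitude."""
    horizontal = drone_alt_m * math.tan(math.radians(angle_from_plumb_deg))
    slant = math.hypot(horizontal, drone_alt_m)
    return locate_target(drone_lat, drone_lon, drone_alt_m, slant, azimuth_deg)
```

The resulting latitude and longitude would then be sent to the user equipment as the target position information described earlier, where it can drive the live-action navigation presentation.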
As mentioned above, to further improve team cooperation efficiency, the same user equipment may communicate with multiple drone control devices, simultaneously or separately. Similarly, the same drone control device may also communicate with multiple user devices, simultaneously or separately; for example, the ground personnel corresponding to the multiple user devices are located at different positions within the same action area, and team resources are used efficiently by having them interact with the drone control device together or one at a time. In some embodiments, the second-first module 210 sends the drone assistance information to the corresponding at least one user device for the at least one user device to present the corresponding ground action assistance information. For example, the drone control device may send the drone assistance information to multiple user devices in turn, or send multiple pieces of drone assistance information simultaneously (for example, different pieces of information are sent to different user devices), or send the one or more pieces of drone assistance information required by a particular ground person according to a selection operation of the drone operator. For each piece of drone assistance information, the manner of generating or sending it is the same as or substantially the same as the manner described above, which is not repeated here and is included herein by reference.
It should be noted that, in the context of the present application, the annotation information sent by the user equipment or the drone control device may be presented in multiple ways. For example, the annotation information may be presented by the receiver in an overlay manner (e.g., based on the presentation position information of the annotation element, or based on the presentation position information and the time-axis position information of the annotation element, the annotation element is superimposed on the corresponding video frame); alternatively, the sender of the annotation information may superimpose the relevant elements onto the captured video, store the result as a new video or video stream, and send the newly generated video or video stream to the receiver for presentation.
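As an illustration of the sender-side ("burn-in") variant, the following sketch uses OpenCV to draw annotation elements onto their corresponding frames and write out a new clip; the annotation dictionary layout and the mp4v codec choice are assumptions of the sketch.

```python
import cv2

def burn_in_annotations(src_path, dst_path, annotations, fps=None):
    """Draw annotation elements onto their video frames and save a new clip."""
    cap = cv2.VideoCapture(src_path)
    fps = fps or cap.get(cv2.CAP_PROP_FPS) or 25
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for a in annotations:   # assumed layout: {"frame": int, "pos": (x, y), "text": str}
            if a["frame"] == frame_idx:
                cv2.circle(frame, a["pos"], 8, (0, 0, 255), -1)
                cv2.putText(frame, a["text"], (a["pos"][0] + 12, a["pos"][1]),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        out.write(frame)
        frame_idx += 1
    cap.release()
    out.release()
```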
The present application also provides a computer-readable storage medium having stored thereon computer code which, when executed, performs the method of any one of the preceding embodiments.
The present application also provides a computer program product which, when executed by a computer device, performs the method of any one of the preceding embodiments.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the preceding embodiments.
FIG. 12 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
As shown in fig. 12, in some embodiments, the system 300 can function as any one of the user devices or drone controlling devices in each of the described embodiments. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (28)

1. A method at a user equipment for presenting ground mobility assistance information, wherein the method comprises:
receiving unmanned aerial vehicle auxiliary information sent by corresponding unmanned aerial vehicle control equipment; the unmanned aerial vehicle control equipment is used by an unmanned aerial vehicle operator to control an unmanned aerial vehicle and is in communication with the user equipment, wherein the unmanned aerial vehicle auxiliary information comprises at least one item of target related information, marking information, target position information and target auxiliary tracking information, the target related information comprises at least one item of position information of a destination of ground action, peripheral landmark information and related ground target information, the marking information is added by the unmanned aerial vehicle operator or other users, and the target position information is used for determining the position of a target relative to ground personnel;
presenting ground action auxiliary information corresponding to the unmanned aerial vehicle auxiliary information;
wherein the ground movement assistance information is used to assist ground movement of ground personnel.
2. The method of claim 1, wherein the ground action assistance information comprises at least any one of:
target related data information;
unmanned aerial vehicle image information;
labeling information;
and live-action navigation information.
3. The method of claim 1, wherein the ground action assistance information comprises drone image information;
the method further comprises the following steps:
determining corresponding unmanned aerial vehicle image annotation information according to unmanned aerial vehicle image annotation operation of a user;
and sending the unmanned aerial vehicle image annotation information to the unmanned aerial vehicle control equipment.
4. The method of claim 1, wherein the receiving drone assistance information transmitted by a corresponding drone controlling device comprises:
and receiving the auxiliary information of the unmanned aerial vehicle sent by the corresponding unmanned aerial vehicle control equipment through the corresponding network equipment.
5. The method of claim 1, wherein the method further comprises:
and acquiring ground image information of a scene where the user equipment is located, and sending the ground image information to the unmanned aerial vehicle control equipment.
6. The method of claim 5, wherein the obtaining ground image information of a scene in which the user equipment is located and sending the ground image information to the drone controlling device comprises:
and acquiring ground image information of a scene where the user equipment is located, and sending the ground image information to the unmanned aerial vehicle control equipment through corresponding network equipment.
7. The method of claim 5, wherein the method further comprises:
performing a target recognition operation on the ground image information;
and presenting the ground image information, and presenting corresponding target tracking information based on an operation result of the target identification operation.
8. The method of claim 1, wherein the drone assistance information includes annotation information including annotation elements and their presentation location information;
the presenting of the ground action auxiliary information corresponding to the unmanned aerial vehicle auxiliary information includes:
and presenting the ground action auxiliary information corresponding to the auxiliary information of the unmanned aerial vehicle based on the labeling information.
9. The method of claim 8, wherein the annotation information further comprises timeline position information corresponding to the annotation element.
10. The method of claim 1, wherein the drone assistance information includes target location information, and the presenting ground action assistance information to which the drone assistance information corresponds includes:
and determining ground action auxiliary information corresponding to the unmanned aerial vehicle auxiliary information based on the target position information, and presenting the ground action auxiliary information.
11. The method of claim 1, wherein the drone assistance information includes target assistance tracking information, the ground action assistance information including target tracking information corresponding to the target assistance tracking information.
12. The method of claim 1, wherein the receiving drone assistance information transmitted by a corresponding drone controlling device comprises:
receiving unmanned aerial vehicle auxiliary information sent by at least one of the corresponding plurality of unmanned aerial vehicle control devices;
the presenting of the ground action auxiliary information corresponding to the unmanned aerial vehicle auxiliary information includes:
and presenting ground action auxiliary information corresponding to the unmanned aerial vehicle auxiliary information sent by at least one unmanned aerial vehicle control device in the plurality of unmanned aerial vehicle control devices.
13. A method at a drone controlling device end for presenting ground action-aiding information, wherein the method comprises:
sending unmanned aerial vehicle auxiliary information to corresponding user equipment so that the user equipment can present corresponding ground action auxiliary information; the unmanned aerial vehicle control equipment is used by an unmanned aerial vehicle operator to control an unmanned aerial vehicle and is in communication with the user equipment, and the ground action auxiliary information is used for assisting ground actions of ground personnel, wherein the unmanned aerial vehicle auxiliary information comprises at least one item of target related information, marking information, target position information and target auxiliary tracking information, the target related information comprises at least one item of position information of a destination of the ground actions, peripheral landmark information and related ground target information, the marking information is added by the unmanned aerial vehicle operator or other users, and the target position information is used for determining the direction of a target relative to the ground personnel.
14. The method of claim 13, wherein the sending drone assistance information to a corresponding user device for the user device to present corresponding ground action assistance information comprises:
and sending the auxiliary information of the unmanned aerial vehicle to corresponding user equipment through corresponding network equipment so that the user equipment can present corresponding ground action auxiliary information.
15. The method of claim 13, wherein the method further comprises:
and receiving and presenting the ground image information sent by the user equipment.
16. The method of claim 15, wherein the method further comprises:
performing a target recognition operation on the ground image information;
and sending corresponding target auxiliary tracking information to the user equipment based on the operation result of the target identification operation.
17. The method of claim 15, wherein the method further comprises:
determining corresponding ground image labeling information according to ground image labeling operation of a user on the ground image information;
wherein the auxiliary information of the unmanned aerial vehicle comprises the ground image annotation information.
18. The method of claim 13, wherein the method further comprises:
acquiring corresponding unmanned aerial vehicle image information shot by an unmanned aerial vehicle;
the sending unmanned aerial vehicle auxiliary information to corresponding user equipment for user equipment presents corresponding ground action auxiliary information, including:
and sending unmanned aerial vehicle auxiliary information to corresponding user equipment based on the unmanned aerial vehicle image information so that the user equipment can present corresponding ground action auxiliary information.
19. The method of claim 18, wherein the method further comprises:
determining image annotation information related to the unmanned aerial vehicle image information based on user operation of an unmanned aerial vehicle user, wherein the image annotation information comprises annotation elements and presentation position information thereof;
based on unmanned aerial vehicle image information sends unmanned aerial vehicle auxiliary information to corresponding user equipment for user equipment presents corresponding ground action auxiliary information, and includes:
and sending unmanned aerial vehicle auxiliary information to corresponding user equipment based on the unmanned aerial vehicle image information and the image annotation information so that the user equipment can present corresponding ground action auxiliary information.
20. The method of claim 19, wherein the determining, based on user operations of an unmanned aerial vehicle user, image annotation information related to the unmanned aerial vehicle image information, the image annotation information comprising annotation elements and presentation position information thereof, comprises:
determining image annotation information about the unmanned aerial vehicle image information based on user operation of an unmanned aerial vehicle user, wherein the image annotation information comprises annotation elements and presentation position information thereof, and also comprises time axis position information corresponding to the annotation elements.
21. The method of claim 18, wherein the method further comprises:
and receiving and presenting unmanned aerial vehicle image annotation information about the unmanned aerial vehicle image information sent by the user equipment.
22. The method of claim 13, wherein the method further comprises:
and determining the target position information of the designated target based on relative azimuth information between the corresponding unmanned aerial vehicle and the designated target and on spatial position information of the unmanned aerial vehicle.
23. The method of claim 13, wherein the sending drone assistance information to a corresponding user device for the user device to present corresponding ground action assistance information comprises:
sending the unmanned aerial vehicle assistance information to the corresponding at least one user equipment for the at least one user equipment to present the corresponding ground action assistance information.
24. A user device for presenting terrestrial mobility assistance information, wherein the user device comprises:
a first-first module, used for receiving unmanned aerial vehicle auxiliary information sent by corresponding unmanned aerial vehicle control equipment; the unmanned aerial vehicle control equipment is used by an unmanned aerial vehicle operator to control an unmanned aerial vehicle and is in communication with the user equipment, wherein the unmanned aerial vehicle auxiliary information comprises at least one item of target related information, marking information, target position information and target auxiliary tracking information, the target related information comprises at least one item of position information of a destination of ground action, peripheral landmark information and related ground target information, the marking information is added by the unmanned aerial vehicle operator or other users, and the target position information is used for determining the position of a target relative to ground personnel;
a first-second module, used for presenting ground action auxiliary information corresponding to the unmanned aerial vehicle auxiliary information;
wherein the ground movement assistance information is used to assist ground movement of ground personnel.
25. A drone control device for presenting ground mobility assistance information, wherein the drone control device comprises:
a second-first module, used for sending the unmanned aerial vehicle auxiliary information to corresponding user equipment so that the user equipment can present corresponding ground action auxiliary information; the unmanned aerial vehicle control equipment is used by an unmanned aerial vehicle operator to control an unmanned aerial vehicle and is in communication with the user equipment, and the ground action auxiliary information is used for assisting ground actions of ground personnel, wherein the unmanned aerial vehicle auxiliary information comprises at least one item of target related information, marking information, target position information and target auxiliary tracking information, the target related information comprises at least one item of position information of a destination of the ground actions, peripheral landmark information and related ground target information, the marking information is added by the unmanned aerial vehicle operator or other users, and the target position information is used for determining the direction of a target relative to the ground personnel.
26. A method for presenting terrestrial mobility assistance information, wherein the method comprises:
the unmanned aerial vehicle control equipment sends unmanned aerial vehicle auxiliary information to corresponding user equipment, wherein the unmanned aerial vehicle auxiliary information comprises at least one item of target related information, marking information, target position information and target auxiliary tracking information, the target related information comprises at least one item of position information of a destination of a ground action, peripheral landmark information and related ground target information, the marking information is added by an unmanned aerial vehicle operator or other users, and the target position information is used for determining the position of a target relative to ground personnel;
the user equipment receives the unmanned aerial vehicle auxiliary information sent by the unmanned aerial vehicle control equipment and presents ground action auxiliary information corresponding to the unmanned aerial vehicle auxiliary information; the unmanned aerial vehicle control equipment is used by the unmanned aerial vehicle operator to control an unmanned aerial vehicle, and the unmanned aerial vehicle control equipment is in communication with the user equipment;
wherein the ground movement assistance information is used to assist ground movement of ground personnel.
27. An apparatus for presenting ground mobility assistance information, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform operations according to the method of any one of claims 1 to 23.
28. A computer-readable medium comprising instructions that, when executed, cause a system to perform operations of any of the methods of claims 1-23.
Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140112588A (en) * 2013-03-11 2014-09-24 한국항공우주산업 주식회사 Method of terminal guidance of airplane and apparatuse for using the same
CN107054654A (en) * 2017-05-09 2017-08-18 广东容祺智能科技有限公司 A kind of unmanned plane target tracking system and method
CN107968932A (en) * 2017-10-31 2018-04-27 易瓦特科技股份公司 The method, system and device being identified based on earth station to destination object

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100004798A1 (en) * 2005-01-25 2010-01-07 William Kress Bodin Navigating a UAV to a next waypoint
CN107209854A (en) * 2015-09-15 2017-09-26 深圳市大疆创新科技有限公司 For the support system and method that smoothly target is followed
CN105741213A (en) * 2016-01-13 2016-07-06 天津中科智能识别产业技术研究院有限公司 Disaster relief force scheduling deployment command and control system based on GIS
US10922542B2 (en) * 2016-03-01 2021-02-16 SZ DJI Technology Co., Ltd. System and method for identifying target objects
US9884530B2 (en) * 2016-07-05 2018-02-06 SkyRunner, LLC Dual engine air and land multimodal vehicle
CN107416207A (en) * 2017-06-13 2017-12-01 深圳市易成自动驾驶技术有限公司 Unmanned plane rescue mode, unmanned plane and computer-readable recording medium
CN108510689A (en) * 2018-04-23 2018-09-07 成都鹏派科技有限公司 A kind of Forest Fire Alarm reaction system

Also Published As

Publication number Publication date
CN109656319A (en) 2019-04-19

Similar Documents

Publication Publication Date Title
CN109656319B (en) Method and equipment for presenting ground action auxiliary information
CN109561282B (en) Method and equipment for presenting ground action auxiliary information
CN109596118B (en) A method and device for obtaining spatial position information of a target object
US10339387B2 (en) Automated multiple target detection and tracking system
EP2974509B1 (en) Personal information communicator
US9875579B2 (en) Techniques for enhanced accurate pose estimation
CN109459029B (en) Method and equipment for determining navigation route information of target object
KR101583286B1 (en) Method, system and recording medium for providing augmented reality service and file distribution system
US10532814B2 (en) Augmented reality travel route planning
CN113378605B (en) Multi-source information fusion method and device, electronic equipment and storage medium
US20200106818A1 (en) Drone real-time interactive communications system
US20150310667A1 (en) Systems and methods for context based information delivery using augmented reality
US10976163B2 (en) Robust vision-inertial pedestrian tracking with heading auto-alignment
CN111540059A (en) Enhanced video system providing enhanced environmental perception
KR101600456B1 (en) Method, system and recording medium for providing augmented reality service and file distribution system
CN115439635B (en) Method and equipment for presenting marking information of target object
CN110248157B (en) Method and equipment for scheduling on duty
CN109656259A (en) It is a kind of for determining the method and apparatus of the image location information of target object
US20230366699A1 (en) Sensor-based map correction
CN109618131B (en) Method and equipment for presenting decision auxiliary information
CN115760964B (en) Method and equipment for acquiring screen position information of target object
CN115460539B (en) Method, equipment, medium and program product for acquiring electronic fence
CN108629842B (en) Unmanned equipment motion information providing and motion control method and equipment
WO2025083335A1 (en) Visualizing area covered by drone camera
US20160012290A1 (en) Photo-Optic Comparative Geolocation System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 201210 7th Floor, No. 1, Lane 5005, Shenjiang Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Patentee before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.