
CN118379453B - Unmanned aerial vehicle aerial image and webGIS three-dimensional scene linkage interaction method and system

Info

Publication number: CN118379453B
Application number: CN202410843465.7A
Authority: CN (China)
Prior art keywords: unmanned aerial vehicle; coordinates; dimensional; image
Legal status: Active (application granted)
Other languages: Chinese (zh)
Other versions: CN118379453A
Inventors: 刘毅琪, 赵明, 谢俭, 黄细华, 罗耀晖, 丁文强, 唐科, 黄芃怡
Current Assignee: Hunan Zhongtu Tong Drone Technology Co ltd
Original Assignee: Hunan Zhongtu Tong Drone Technology Co ltd
Application filed by Hunan Zhongtu Tong Drone Technology Co ltd
Priority to CN202410843465.7A
Publication of CN118379453A (application publication)
Publication of CN118379453B (grant publication)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones

Abstract

The invention provides a linkage interaction method and system for unmanned aerial vehicle aerial images and a webGIS three-dimensional scene, and relates to the technical field of data processing. The method comprises the following steps: according to the calculation result, comparing the marked target pixel coordinates in the image against the pixel width and height of the image frame to obtain the coordinates of the marked point within the view cone; calculating the corresponding pixel coordinates of the marker in the unmanned aerial vehicle aerial image according to the coordinates of the marked point within the view cone; converting the pixel coordinates into the geographic coordinates of the target object according to the pixel coordinates and the terrain data; and constructing a three-dimensional virtual scene of the target area and synchronously displaying the geographic coordinates and pixel coordinates of the target object in the three-dimensional scene, so as to realize linkage interaction between the aerial image and the webGIS three-dimensional scene. The invention achieves accurate calculation of the geographic coordinates of the target object, and can accurately display the target object from the aerial image in the three-dimensional scene in real time.

Description

Unmanned aerial vehicle aerial image and webGIS three-dimensional scene linkage interaction method and system
Technical Field
The invention relates to the technical field of data processing, in particular to a linkage interaction method and system for an unmanned aerial vehicle aerial image and webGIS three-dimensional scenes.
Background
Traditional methods adopt simple coordinate conversion or mapping techniques when combining unmanned aerial vehicle aerial images with a webGIS three-dimensional scene. Although these methods can achieve a basic correspondence between the image and the three-dimensional scene, they do not fully account for the particular characteristics of unmanned aerial vehicle aerial images or the complexity of terrain data, so large errors arise during data processing.
Specifically, in the conventional method, when the position of the target object in the three-dimensional scene is calculated, the influence of various factors such as topographic relief, lens distortion, shooting angle and the like is ignored, so that the final position information is inaccurate.
In addition, the traditional method has certain limitation in the aspect of linkage interaction of the aerial image and the three-dimensional scene. Due to inaccuracy in data processing, it may be difficult for a user to accurately locate objects in an aerial image while browsing in a three-dimensional scene.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a linkage interaction method and system for unmanned aerial vehicle aerial images and webGIS three-dimensional scenes that accurately calculate the geographic coordinates of the target object and can accurately display the target object from the aerial image in the three-dimensional scene in real time.
In order to solve the technical problems, the technical scheme of the invention is as follows:
in a first aspect, a method for linkage interaction between an aerial image of an unmanned aerial vehicle and webGIS three-dimensional scenes, the method comprising:
acquiring an aerial image of the unmanned aerial vehicle and a geographic position of the unmanned aerial vehicle;
According to the geographic position of the unmanned aerial vehicle, calculating the direction vector of the view cone and the included angle of the shooting range to obtain a calculation result;
according to the calculation result, comparing the marked target pixel coordinates in the image with the length and width of the picture pixels of the image to obtain coordinates of the marked points in the view cone;
calculating corresponding pixel coordinates of the marker in the unmanned aerial vehicle aerial image according to the coordinates of the marker points in the viewing cone;
converting the pixel coordinates into geographic coordinates of the target object according to the pixel coordinates and the topographic data;
And constructing a three-dimensional virtual scene of the target area, and synchronously displaying the geographic coordinates and the pixel coordinates of the target object in the three-dimensional scene to realize linkage interaction between the aerial image and the webGIS three-dimensional scene.
Further, acquire unmanned aerial vehicle aerial photography image and unmanned aerial vehicle geographical position, include:
setting the size of particle swarms, wherein each particle represents an unmanned aerial vehicle flight and shooting scheme;
Initializing a position and a speed for each particle, wherein the position represents a flight path and a shooting angle of the unmanned aerial vehicle;
Determining a fitness function for evaluating the quality of each flight and shooting scheme;
In each iteration, evaluating the quality of each particle according to the fitness function, updating the individual best position and the global best position of each particle, and updating the position and velocity of each particle;
Stopping iteration when the termination condition is reached, and outputting a final flight and shooting scheme;
according to the final flight and shooting scheme, the unmanned aerial vehicle is controlled to fly and shoot, and during the flight the attitude and shooting parameters of the unmanned aerial vehicle are adjusted in real time so as to acquire the aerial images and geographic position information of the unmanned aerial vehicle.
Further, the fitness function is a weighted combination of the comprehensive shooting quality, the flight efficiency, the shooting efficiency, the total cost, the risk factor and the data-processing complexity:

$F = \lambda_1 Q + \lambda_2 E_{fly} + \lambda_3 E_{shoot} - \lambda_4 C - \lambda_5 R - \lambda_6 D$

wherein $Q$ represents the comprehensive shooting quality score, $Q = \omega_1 S + \omega_2 A + \omega_3 P$, where $S$ represents the definition (clarity) of the image, $A$ represents the color accuracy, $P$ represents the composition score of the image, and $\omega_1$, $\omega_2$ and $\omega_3$ are weight coefficients;

$E_{fly}$ represents the flight efficiency, determined by the flight time, the energy consumption and a flight stability factor; $E_{shoot}$ represents the shooting efficiency; $C$ represents the total cost; $R$ represents the risk factor, $R = \mu_1 W + \mu_2 S_a$, where $\mu_1$ and $\mu_2$ are the weight coefficients of the risk factor, $W$ is the weather condition and $S_a$ is the safety of the flight area; $D$ represents the data-processing complexity, defined as the ratio of the data volume to the processing speed; and $\lambda_1$ to $\lambda_6$ are weight coefficients.
Further, when updating the position and velocity of each particle, the update formula of the velocity is:

$v_i(t+1) = w\,v_i(t) + c_1 r_1\big(p_i - x_i(t)\big) + c_2 r_2\big(g - x_i(t)\big)$

wherein $v_i(t)$ denotes the velocity of particle $i$ at time $t$, $w$ denotes the inertia weight, $c_1$ and $c_2$ denote the learning factors, $r_1$ and $r_2$ denote random numbers in the range $[0, 1]$, $p_i$ denotes the individual historical optimal position of particle $i$, $g$ denotes the historical optimal position of the whole particle swarm, $x_i(t)$ denotes the position of particle $i$ at time $t$, and $v_i(t+1)$ denotes the velocity of particle $i$ at time $t+1$.

The update formula of the position is:

$x_i(t+1) = x_i(t) + v_i(t+1)$

wherein $x_i(t+1)$ denotes the position of particle $i$ at time $t+1$.
Further, according to the geographic position of the unmanned aerial vehicle, a direction vector of the view cone and an included angle of a shooting range are calculated to obtain a calculation result, including:
acquiring a geographic position A of the unmanned aerial vehicle and a geographic position B of an object to be displayed;
According to the geographic position A of the unmanned aerial vehicle and the geographic position B of the object to be displayed, calculating a three-dimensional vector AB of a two-point connecting line, and calculating the included angle between the three-dimensional vector AB and each coordinate axis in a geographic coordinate system to obtain the included angle of the three-dimensional vector AB;
According to the orientation attitude of the unmanned aerial vehicle during shooting and the hardware parameters of the camera, calculating a view cone direction vector AC and a corresponding shooting range included angle in the current shooting state;
And determining an intersection point of the three-dimensional vector AB and a shooting view cone plane through geometric calculation according to the included angle of the three-dimensional vector AB and the shooting range included angle of the view cone, wherein the intersection point represents a specific position of an object to be displayed in the shooting range of the unmanned aerial vehicle, and the specific position is a final calculation result.
Further, according to the calculation result, comparing the coordinates of the marked target pixel in the image with the length and width of the picture pixel of the image to obtain the coordinates of the marked point in the view cone, including:
Calculating the distance ratio of the intersection point to each side of the view cone according to the calculation result to obtain the pixel coordinate of the intersection point;
The pixel coordinates of the intersection point are mapped into the scene of the view cone to obtain coordinates of the marker point within the view cone.
Further, according to the coordinates of the marking point in the viewing cone, calculating the corresponding pixel coordinates of the marking object in the unmanned aerial vehicle aerial image, including:
acquiring unmanned aerial vehicle aerial images and unmanned aerial vehicle information;
Establishing a mapping model from three-dimensional coordinates to two-dimensional image coordinates according to unmanned aerial vehicle information, and converting the three-dimensional coordinates of the marking points in the viewing cone into coordinates under an unmanned aerial vehicle camera coordinate system to realize coordinate conversion;
And calculating the specific pixel coordinates of the mark point on the aerial image according to the two-dimensional coordinates of the coordinate conversion and the resolution ratio of the image.
Further, converting the pixel coordinates into the geographic coordinates of the target object according to the pixel coordinates and the topographic data, including:
acquiring relevant terrain data according to the aerial image of the unmanned aerial vehicle and the specific geographic position of the unmanned aerial vehicle when the unmanned aerial vehicle shoots;
According to the known pixel coordinates and the position and the orientation of the unmanned aerial vehicle, determining the three-dimensional coordinates of the marking points corresponding to the pixel coordinates in the view cone of the unmanned aerial vehicle so as to convert the two-dimensional pixel coordinates into three-dimensional space coordinates;
Using the position A of the unmanned aerial vehicle as a starting point, and using the three-dimensional coordinates in the view cone of the unmanned aerial vehicle as a direction to construct a three-dimensional vector AB1;
And intersecting the three-dimensional vector AB1 with the acquired topographic data to obtain an intersection point of the vector and the topographic data, wherein the intersection point is the geographic coordinate of the target object.
Further, intersecting the three-dimensional vector AB1 with the terrain data to obtain the geographic coordinates of the target object, including:
According to images shot in different directions and positions, calculating the coordinates of the marking points in the viewing cone through the marked target pixel coordinates in the images and the length and width of the picture pixels of the images, and constructing a three-dimensional vector AB2 by the position A and the geographic position B of the unmanned aerial vehicle;
And intersecting the three-dimensional vector AB1 with the three-dimensional vector AB2 to calculate an intersection point so as to obtain the geographic coordinates of the target object.
Further, a three-dimensional virtual scene of the target area is constructed, and geographic coordinates and pixel coordinates of the target object are synchronously displayed in the three-dimensional scene to realize linkage interaction between the aerial image and the webGIS three-dimensional scene, comprising:
Constructing a webGIS three-dimensional scene, planning the flight route in the webGIS and marking the target object to obtain the three-dimensional virtual scene terrain;
and superposing the target object of the aerial image on the three-dimensional virtual scene terrain, and synchronously displaying the geographic coordinates and the pixel coordinates of the target object in the three-dimensional scene so as to realize linkage interaction of the aerial image and the webGIS three-dimensional scene.
In a second aspect, an information processing system for a linkage interaction method between an aerial image of an unmanned aerial vehicle and webGIS three-dimensional scenes, includes:
The acquisition module is used for acquiring the aerial image of the unmanned aerial vehicle and the geographic position of the unmanned aerial vehicle; according to the geographic position of the unmanned aerial vehicle, calculating the direction vector of the view cone and the included angle of the shooting range to obtain a calculation result; according to the calculation result, comparing the marked target pixel coordinates in the image with the length and width of the picture pixels of the image to obtain coordinates of the marked points in the view cone;
The processing module is used for calculating the corresponding pixel coordinates of the marker in the unmanned aerial vehicle aerial image according to the coordinates of the marker points in the viewing cone; converting the pixel coordinates into geographic coordinates of the target object according to the pixel coordinates and the topographic data; and constructing a three-dimensional virtual scene of the target area, and synchronously displaying the geographic coordinates and the pixel coordinates of the target object in the three-dimensional scene to realize linkage interaction between the aerial image and the webGIS three-dimensional scene.
The scheme of the invention at least comprises the following beneficial effects:
By acquiring the geographic position, the viewing cone direction, the shooting range included angle and the terrain data of the unmanned aerial vehicle, the conversion precision from the two-dimensional pixel coordinates to the three-dimensional geographic coordinates is improved, and the influence of terrain fluctuation on the positioning of the target object can be reflected more accurately.
Incorporating the terrain data into the calculation process effectively reduces errors and improves the accuracy of data processing. The invention also realizes tight linkage between the aerial image and the webGIS three-dimensional scene by synchronously displaying the geographic coordinates and the pixel coordinates of the target object in the three-dimensional virtual scene. This linkage not only enhances the user's perception and understanding of spatial information, but also makes data processing under complex terrain and changeable shooting conditions more reliable and accurate.
Drawings
Fig. 1 is a schematic flow chart of a linkage interaction method of an aerial image and webGIS three-dimensional scenes of an unmanned aerial vehicle according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of an unmanned aerial vehicle aerial image and webGIS three-dimensional scene linkage interaction system according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, an embodiment of the present invention provides a linkage interaction method for an aerial image and webGIS three-dimensional scene of an unmanned aerial vehicle, which includes the following steps:
step 11, acquiring an aerial image of the unmanned aerial vehicle and a geographic position of the unmanned aerial vehicle;
Step 12, calculating the direction vector of the view cone and the included angle of the shooting range according to the geographic position of the unmanned aerial vehicle so as to obtain a calculation result;
Step 13, comparing the marked target pixel coordinates in the image with the length and width of the picture pixels of the image according to the calculation result to obtain the coordinates of the marked points in the view cone;
step 14, calculating corresponding pixel coordinates of the marker in the unmanned aerial vehicle aerial image according to the coordinates of the marker points in the viewing cone;
Step 15, converting the pixel coordinates into geographic coordinates of the target object according to the pixel coordinates and the topographic data;
And 16, constructing a three-dimensional virtual scene of the target area, and synchronously displaying the geographic coordinates and the pixel coordinates of the target object in the three-dimensional scene to realize linkage interaction of the aerial image and the webGIS three-dimensional scene.
According to the embodiment of the invention, by acquiring the geographic position of the unmanned aerial vehicle, the view cone direction, the shooting range included angle and the terrain data, the conversion precision from two-dimensional pixel coordinates to three-dimensional geographic coordinates is improved, and the influence of terrain relief on the positioning of the target object can be reflected more accurately. Incorporating the terrain data into the calculation process effectively reduces errors and improves the accuracy of data processing. The invention also realizes tight linkage between the aerial image and the webGIS three-dimensional scene by synchronously displaying the geographic coordinates and the pixel coordinates of the target object in the three-dimensional virtual scene. This linkage not only enhances the user's perception and understanding of spatial information, but also makes data processing under complex terrain and changeable shooting conditions more reliable and accurate.
In a preferred embodiment of the present invention, acquiring aerial images of an unmanned aerial vehicle and geographic locations of the unmanned aerial vehicle includes:
setting the size of particle swarms, wherein each particle represents an unmanned aerial vehicle flight and shooting scheme;
Initializing a position and a speed for each particle, wherein the position represents a flight path and a shooting angle of the unmanned aerial vehicle;
Determining a fitness function for evaluating the quality of each flight and shooting scheme;
In each iteration, evaluating the quality of each particle according to the fitness function, updating the individual best position and the global best position of each particle, and updating the position and velocity of each particle;
Stopping iteration when the termination condition is reached, and outputting a final flight and shooting scheme;
according to the final flight and shooting scheme, the unmanned aerial vehicle is controlled to fly and shoot, and during the flight the attitude and shooting parameters of the unmanned aerial vehicle are adjusted in real time so as to acquire the aerial images and geographic position information of the unmanned aerial vehicle.
In the embodiment of the invention, the application of the particle swarm optimization algorithm enables the unmanned aerial vehicle to find the optimal flight path and shooting angle. By setting the size of the particle swarm and initializing the position and velocity for each particle, the algorithm can search widely in the solution space, thereby finding the flight and shooting scheme with the highest fitness. The quality of the aerial image is improved, and the comprehensiveness and the accuracy of the image data are ensured. Secondly, the embodiment carries out quality evaluation on each flight and shooting scheme through the fitness function, so that the unmanned aerial vehicle can adaptively adjust the flight path and the shooting angle in a complex flight environment to obtain the optimal aerial shooting effect. The self-adaption not only improves the definition and the identification degree of the aerial image, but also ensures the accuracy and the reliability of data acquisition. In addition, through the iterative optimization process, the algorithm can gradually approach the global optimal solution, so that the energy consumption and the flight cost of the unmanned aerial vehicle are reduced while the high-efficiency completion of the aerial photographing task is ensured. Finally, the functions of adjusting the attitude and shooting parameters of the unmanned aerial vehicle in real time are mentioned in the embodiment, so that the flexibility and the practicability of the system are further enhanced. The unmanned aerial vehicle can dynamically adjust according to actual conditions in the flight process, so that stability and consistency of aerial images are ensured.
In a preferred embodiment of the present invention, the fitness function is a weighted combination of the comprehensive shooting quality, the flight efficiency, the shooting efficiency, the total cost, the risk factor and the data-processing complexity:

$F = \lambda_1 Q + \lambda_2 E_{fly} + \lambda_3 E_{shoot} - \lambda_4 C - \lambda_5 R - \lambda_6 D$

wherein $Q$ represents the comprehensive shooting quality score, $Q = \omega_1 S + \omega_2 A + \omega_3 P$, where $S$ represents the definition (clarity) of the image, $A$ represents the color accuracy, $P$ represents the composition score of the image, and $\omega_1$, $\omega_2$ and $\omega_3$ are weight coefficients;

$E_{fly}$ represents the flight efficiency, determined by the flight time, the energy consumption and a flight stability factor; $E_{shoot}$ represents the shooting efficiency; $C$ represents the total cost; $R$ represents the risk factor, $R = \mu_1 W + \mu_2 S_a$, where $\mu_1$ and $\mu_2$ are the weight coefficients of the risk factor, $W$ is the weather condition and $S_a$ is the safety of the flight area; $D$ represents the data-processing complexity, defined as the ratio of the data volume to the processing speed; and $\lambda_1$ to $\lambda_6$ are weight coefficients.
In the embodiment of the invention, introducing the comprehensive shooting quality score $Q$ allows the fitness function to comprehensively evaluate the definition, color accuracy and composition of the aerial image. This comprehensive evaluation ensures that the unmanned aerial vehicle prioritizes image quality when selecting a flight and shooting scheme, thereby acquiring aerial images with higher viewing and information value. By incorporating the flight efficiency and the shooting efficiency, the function optimizes the flight path and shooting plan of the unmanned aerial vehicle, which improves operating efficiency, reduces energy consumption, and achieves environmentally friendly and economical aerial operations. Furthermore, by incorporating the total cost and the risk factor, the function effectively controls cost expenditure while guaranteeing the safety of the aerial photographing task; this balancing of risk and cost is particularly important for commercial aerial projects and helps to maximize project profit. Finally, the function also takes into account the data-processing complexity, defined as the ratio of the data volume to the processing speed, so that the unmanned aerial vehicle pays more attention to the processability of the data it collects.
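Purely to illustrate how such a fitness evaluation might be organized in code, the following Python sketch combines the terms above; the weights, the specific form of the flight-efficiency term and all field names are illustrative assumptions rather than values taken from the patent:

```python
def fitness(scheme, w=(0.4, 0.2, 0.15, 0.1, 0.1, 0.05)):
    """Illustrative fitness of one flight-and-shooting scheme (one particle)."""
    # Comprehensive shooting quality Q: weighted sum of clarity, color accuracy, composition.
    q = 0.5 * scheme["clarity"] + 0.3 * scheme["color_accuracy"] + 0.2 * scheme["composition"]
    # Flight efficiency: assumed here to reward stability and penalize time and energy use.
    e_fly = scheme["stability"] / (scheme["flight_time"] * scheme["energy"])
    # Risk factor R: weighted combination of weather condition and flight-area safety.
    r = 0.6 * scheme["weather_risk"] + 0.4 * scheme["area_risk"]
    # Data-processing complexity D: ratio of data volume to processing speed.
    d = scheme["data_volume"] / scheme["processing_speed"]
    l1, l2, l3, l4, l5, l6 = w
    return (l1 * q + l2 * e_fly + l3 * scheme["shoot_efficiency"]
            - l4 * scheme["total_cost"] - l5 * r - l6 * d)
```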
In a preferred embodiment of the present invention, when updating the position and velocity of each particle, the update formula of the velocity is:

$v_i(t+1) = w\,v_i(t) + c_1 r_1\big(p_i - x_i(t)\big) + c_2 r_2\big(g - x_i(t)\big)$

wherein $v_i(t)$ denotes the velocity of particle $i$ at time $t$, $w$ denotes the inertia weight, $c_1$ and $c_2$ denote the learning factors, $r_1$ and $r_2$ denote random numbers in the range $[0, 1]$, $p_i$ denotes the individual historical optimal position of particle $i$, $g$ denotes the historical optimal position of the whole particle swarm, $x_i(t)$ denotes the position of particle $i$ at time $t$, and $v_i(t+1)$ denotes the velocity of particle $i$ at time $t+1$.

The update formula of the position is:

$x_i(t+1) = x_i(t) + v_i(t+1)$

wherein $x_i(t+1)$ denotes the position of particle $i$ at time $t+1$.
In the embodiment of the invention, the current speed, the individual historical optimal position and the global optimal position of the particles are comprehensively considered by the speed updating formula, so that the particles can keep certain exploratory property in the search space and gradually converge to the optimal solution. The updating mechanism effectively balances the global searching capability and the local searching capability of the algorithm, and improves the efficiency of searching the optimal flight and shooting scheme. The formula increases the flexibility of particle velocity and location updating by introducing inertial weights, learning factors, and random numbers. The inertia weight enables the particles to keep a certain motion inertia, and the particles jump out of a local optimal solution; the learning factors adjust the degree of learning of the particles to the individual historic optimal and global optimal positions; the random number increases the randomness of the search and helps to avoid the algorithm falling into premature convergence. Finally, the position of the particles is adjusted according to the updated speed, so that the particles can continuously move in the search space, and a better flight and shooting scheme is found.
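A compact particle-swarm loop implementing the velocity and position updates above might look like the following sketch; the swarm size, parameter bounds, inertia weight and learning factors are placeholder values, and `fitness_fn` stands for any fitness evaluation such as the one sketched earlier:

```python
import numpy as np

def pso(fitness_fn, dim, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch: each particle encodes a flight-path / shooting-angle scheme."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))    # positions (flight/shooting parameters)
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()                                   # individual historical best positions
    pbest_val = np.array([fitness_fn(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()           # global historical best position
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = x + v                                                    # position update
        val = np.array([fitness_fn(p) for p in x])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest
```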
In a preferred embodiment of the present invention, the step 12 may include:
step 121, obtaining a geographic position A of the unmanned aerial vehicle and a geographic position B of an object to be displayed, calculating a three-dimensional vector AB of a two-point connecting line according to the geographic position A of the unmanned aerial vehicle and the geographic position B of the object to be displayed, and calculating included angles of the three-dimensional vector AB and all coordinate axes in a geographic coordinate system to obtain included angles of the three-dimensional vector AB;
Step 122, calculating the view cone direction vector AC and the corresponding shooting range included angle in the current shooting state according to the orientation attitude of the unmanned aerial vehicle when shooting and the hardware parameters of the camera;
And step 123, determining an intersection point of the three-dimensional vector AB and a plane of the shooting view cone through geometric calculation according to the included angle of the three-dimensional vector AB and the included angle of the shooting range of the view cone, wherein the intersection point represents a specific position of the object to be displayed in the shooting range of the unmanned aerial vehicle, and the specific position is a final calculation result.
According to the embodiment of the invention, the space direction of the target relative to the unmanned aerial vehicle can be accurately known by calculating the included angle between the three-dimensional vector of the connecting line of the unmanned aerial vehicle position A and the target object position B and each coordinate axis in the geographic coordinate system; the shooting range of the unmanned aerial vehicle camera can be defined by calculating the shooting range included angle of the view cone direction vector AC, so that the target object is ensured to be in the visible range; knowing the specific shooting range of the camera is beneficial to adjusting the position and the orientation of the unmanned aerial vehicle so as to capture the image of the target object more accurately; the specific position of the target object in the view cone can be judged more accurately by calculating the included angle between the vector AB (the vector from the unmanned aerial vehicle to the target object) and the vector AC (the vector of the direction of the view cone of the unmanned aerial vehicle); calculating the intersection point of the vector AB and the plane of the shooting view cone, and accurately judging the position of the target object in the shooting picture of the unmanned aerial vehicle; according to the calculation of the included angles and the intersection points, the position, the orientation or the parameters of the camera of the unmanned aerial vehicle can be adjusted so as to obtain the optimal shooting effect.
In another preferred embodiment, the geographical position A (longitude, latitude and altitude) of the drone is obtained by a positioning system (such as GPS) of the drone, while the geographical position B (also including longitude, latitude and altitude) of the object to be displayed is obtained.
Converting the geographic position A and the geographic position B into points in a three-dimensional geographic coordinate system, specifically: converting the longitude and latitude coordinates into three-dimensional coordinates in a geocentric earth fixed coordinate system (ECEF) or a local tangential plane coordinate system (such as an ENU coordinate system); according to the converted three-dimensional coordinates, calculating a vector AB, namely a three-dimensional vector pointing from the point A to the point B; in the geographic coordinate system, the angles of the three-dimensional vector AB with the X-axis, Y-axis and Z-axis are calculated, and these angles can be solved by the dot product and cross product of the vectors.
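As a sketch of the coordinate conversion and angle computation described here (using the standard WGS-84 ellipsoid constants; function names are illustrative):

```python
import math

A_WGS84 = 6378137.0               # WGS-84 semi-major axis (m)
E2_WGS84 = 6.69437999014e-3       # WGS-84 first eccentricity squared

def geodetic_to_ecef(lon_deg, lat_deg, h):
    """Convert longitude/latitude/height to ECEF coordinates."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    n = A_WGS84 / math.sqrt(1.0 - E2_WGS84 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2_WGS84) + h) * math.sin(lat)
    return (x, y, z)

def axis_angles(a, b):
    """Angles (degrees) between vector AB and the X, Y, Z axes, via dot products."""
    ab = [b[i] - a[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in ab))
    return [math.degrees(math.acos(c / norm)) for c in ab]
```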
Acquiring a current orientation attitude of the unmanned aerial vehicle through attitude sensors (such as a gyroscope and an accelerometer) of the unmanned aerial vehicle, wherein the current orientation attitude is commonly expressed as Euler angles (pitch angle, yaw angle and roll angle); according to the orientation gesture of the unmanned aerial vehicle, the direction of the main optical axis of the camera, namely the direction of the central line of the view cone, is determined, the direction can be expressed as a three-dimensional vector AC, the starting point of the vector AC is the position A of the unmanned aerial vehicle, and the direction is consistent with the orientation of the camera.
The shooting range included angle of the unmanned aerial vehicle is determined according to hardware parameters (such as a field angle FOV) of the camera, and the included angle defines a spatial range which can be shot by the camera, and generally, the horizontal field angle and the vertical field angle of the camera are known, and can be used for determining boundaries of the shooting range.
Using the included angle of the three-dimensional vector AB calculated in step 121 and the view cone photographing range included angle obtained in step 122, four boundary plane equations of the view cone can be constructed according to the view cone direction vector AC and the position a of the unmanned aerial vehicle in combination with the photographing range included angles (horizontal view angle and vertical view angle), the planes including: a left boundary plane, a right boundary plane, an upper boundary plane, and a lower boundary plane.
Carrying out a dot-product operation between the three-dimensional vector AB and each boundary plane equation, and judging from the sign of the dot-product result whether the vector AB points inside the view cone, outside it, or intersects the plane. If the dot-product results of the vector AB with all four boundary planes indicate that the vector AB is inside the view cone or intersects it, continue to the next step; otherwise, it can be determined that the object to be displayed is not in the field of view of the unmanned aerial vehicle, and the process ends. For each boundary plane, solve the intersection of the three-dimensional vector AB with that plane, which typically involves solving a system of linear equations, and determine whether the intersection point lies within the space bounded by the four boundary planes of the view cone. Among all the intersections with the view cone boundary planes, the one closest to the unmanned aerial vehicle (point A) is selected as the final intersection point, which is the specific position of the object to be displayed within the shooting range of the unmanned aerial vehicle.
The three-dimensional coordinate of the intersection point is the specific position of the object to be displayed in the shooting range of the unmanned aerial vehicle, and if the intersection point does not exist or is not in the shooting range, the fact that the object to be displayed is not in the visual field of the unmanned aerial vehicle is indicated.
Wherein constructing four boundary plane equations for the view cone comprises:
Assuming that the position of the drone is point A with coordinates $(A_x, A_y, A_z)$, the orientation of the camera can be represented by a unit vector $D$ with coordinates $(D_x, D_y, D_z)$, and the horizontal field angle ($FOV_H$) and vertical field angle ($FOV_V$) of the camera are given in degrees and need to be converted into radians in the calculation. From this information, the four boundary plane equations of the view cone can be constructed. Each plane passes through point A and can be expressed in the form $ax + by + cz + d = 0$, where $(a, b, c)$ is the normal vector of the plane and $d$ is the distance term determined by requiring the plane to pass through point A.

Left boundary plane equation:

First calculate the offset vector $D_L$ of the left boundary. It is obtained by rotating the camera orientation vector $D$ by half of the horizontal field angle about the "up" vector $U$ (the "up" vector can typically be derived from the cross product of the "right" vector and $D$). After obtaining the offset vector $D_L$, the normal vector $N_L$ of the left boundary plane can be obtained through the cross product $N_L = D_L \times U$.

The left boundary plane equation is:

$N_{Lx}(x - A_x) + N_{Ly}(y - A_y) + N_{Lz}(z - A_z) = 0$

wherein $N_{Lx}$, $N_{Ly}$ and $N_{Lz}$ are the coordinate components of the normal vector $N_L$.

The calculation of the right boundary is similar to the left boundary except that the rotation direction is reversed. The right boundary plane equation is:

$N_{Rx}(x - A_x) + N_{Ry}(y - A_y) + N_{Rz}(z - A_z) = 0$

where $N_R$ is the normal vector of the right boundary plane and $N_{Rx}$, $N_{Ry}$ and $N_{Rz}$ are its coordinate components.

The offset vector $D_U$ of the upper boundary is obtained by rotating the camera orientation vector $D$ upward by half of the vertical field angle; the rotation axis is typically the "right" vector, which is the cross product of the "up" vector and $D$. The upper boundary plane equation is:

$N_{Ux}(x - A_x) + N_{Uy}(y - A_y) + N_{Uz}(z - A_z) = 0$

wherein $N_U$ is the normal vector of the upper boundary plane and $N_{Ux}$, $N_{Uy}$ and $N_{Uz}$ are its coordinate components.

The calculation of the lower boundary is similar to the upper boundary, except that the rotation direction is opposite. The lower boundary plane equation is:

$N_{Dx}(x - A_x) + N_{Dy}(y - A_y) + N_{Dz}(z - A_z) = 0$

wherein $N_D$ is the normal vector of the lower boundary plane and $N_{Dx}$, $N_{Dy}$ and $N_{Dz}$ are its coordinate components.
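The boundary-plane construction and the dot-product containment test described above can be sketched as follows; the rotation is done with Rodrigues' formula, and the exact choice of the "up" and "right" vectors is an assumed convention:

```python
import numpy as np

def rotate(vec, axis, angle_rad):
    """Rodrigues' rotation of vec about a (unit) axis by angle_rad."""
    vec = np.asarray(vec, dtype=float)
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return (vec * np.cos(angle_rad)
            + np.cross(axis, vec) * np.sin(angle_rad)
            + axis * np.dot(axis, vec) * (1.0 - np.cos(angle_rad)))

def view_cone_planes(D, up, fov_h_deg, fov_v_deg):
    """Inward-pointing normals of the four boundary planes (all planes pass through the
    drone position A); D is the camera orientation vector, up an approximate 'up' vector."""
    D = np.asarray(D, dtype=float)
    D = D / np.linalg.norm(D)
    up = np.asarray(up, dtype=float)
    right = np.cross(up, D)                       # 'right' vector convention from the text
    h = np.radians(fov_h_deg) / 2.0
    v = np.radians(fov_v_deg) / 2.0
    d_left, d_right = rotate(D, up, +h), rotate(D, up, -h)
    d_up, d_down = rotate(D, right, -v), rotate(D, right, +v)
    normals = [np.cross(d_left, up),              # left boundary
               np.cross(up, d_right),             # right boundary
               np.cross(right, d_up),             # upper boundary
               np.cross(d_down, right)]           # lower boundary
    return [n / np.linalg.norm(n) for n in normals]

def inside_view_cone(A, P, normals):
    """Dot-product sign test: P lies inside the view cone if it is on the inner side
    of all four boundary planes through A."""
    AP = np.asarray(P, dtype=float) - np.asarray(A, dtype=float)
    return all(np.dot(n, AP) >= 0.0 for n in normals)
```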
In a preferred embodiment of the present invention, the step 13 may include:
Step 131, calculating the distance ratio of the intersection point to each side of the view cone according to the calculation result to obtain the pixel coordinate of the intersection point;
Step 132, mapping the pixel coordinates of the intersection point into the scene of the view cone to obtain the coordinates of the marker point within the view cone.
In the embodiment of the invention, the position of the target object in the unmanned aerial vehicle image can be more accurately positioned by calculating the distance ratio of the intersection point to each side of the view cone and converting the distance ratio into the pixel coordinate; the pixel coordinates of the intersection points are mapped into the view cone scene, so that the position relation of the target object in the three-dimensional space can be displayed more intuitively, and the visual effect is improved; the blind search of the unmanned aerial vehicle during task execution can be reduced by accurate positioning, so that energy sources and time are saved, and the working efficiency is improved; the method is not only suitable for unmanned aerial vehicle aerial photography, but also can be applied to the fields of virtual reality, augmented reality and the like, and achieves more accurate space positioning.
In another preferred embodiment, step 131 proceeds as follows. First, determine the projection of the view cone onto the imaging plane: the imaging plane of the unmanned aerial vehicle camera can be regarded as a two-dimensional plane, and the projection of the view cone onto this plane forms a rectangular area corresponding to the picture captured by the camera. Calculating the projection point of the intersection point on the imaging plane specifically comprises: in computer graphics, perspective projection is typically implemented by a perspective projection matrix, which maps points in three-dimensional space onto a two-dimensional plane; the construction of the perspective projection matrix generally depends on internal parameters of the camera, such as the focal length, the size of the image sensor and the pixel size. Multiplying the intersection point coordinates in three-dimensional space (in homogeneous form) by the perspective projection matrix yields the projected two-dimensional homogeneous coordinates, which represent the position of the intersection point on the imaging plane. Converting the two-dimensional homogeneous coordinates into Euclidean coordinates (i.e., dividing by the last component of the homogeneous coordinates) gives the standard two-dimensional coordinates of the projection point on the imaging plane; the origin of these standard coordinates is usually at the upper-left corner of the image, the x-axis is positive to the right, the y-axis is positive downward, and the coordinate range is usually within [0, 1], indicating the relative position of the projection point in the image. The standard two-dimensional coordinates are then mapped to pixel coordinates according to the resolution of the image (i.e., the width and height of the image); this typically involves multiplying the standard coordinates by the width and height of the image and applying an offset to ensure that the origin of the coordinate system is aligned with the upper-left corner of the image. Through these steps, the pixel coordinates of the projection point in the image are obtained, representing the specific position of the intersection point in the picture captured by the camera.
To obtain accurate pixel coordinates, the ratio of the distances between the intersection point and each side of the view cone is calculated, and the ratio reflects the relative position of the projection point in the picture. For example, the ratio of the distance of the proxels to the left side of the picture to the width of the picture and the ratio of the distance to the bottom of the picture to the height of the picture can be calculated. These two ratios correspond to the X and Y values of the pixel coordinates, respectively.
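A minimal pinhole-projection sketch of the mapping from a camera-frame point to pixel coordinates described above (lens distortion is ignored and the intrinsic parameters are placeholders):

```python
def project_to_pixel(point_cam, fx, fy, cx, cy, width, height):
    """Pinhole projection of a camera-frame point (X, Y, Z) to pixel coordinates (u, v)."""
    X, Y, Z = point_cam
    if Z <= 0:
        return None                      # point is behind the camera
    u = fx * X / Z + cx                  # perspective divide + principal-point offset
    v = fy * Y / Z + cy
    if 0 <= u < width and 0 <= v < height:
        return (u, v)                    # inside the captured frame
    return None
```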
Collecting the internal parameters of the camera, which comprise the focal lengths $f_x$ and $f_y$ (the focal length components in the $x$ and $y$ directions respectively), the image center $(c_x, c_y)$ (i.e., the principal point coordinates in the image coordinate system), and possibly the lens distortion coefficients; these parameters are obtained through a camera calibration process and describe the geometric characteristics of camera imaging.

Inverse perspective projection is the process of recovering three-dimensional space coordinates from two-dimensional image coordinates; in the ideal, distortion-free case, this process can be described by the pinhole camera model. According to the pinhole camera model, a three-dimensional space point $(X, Y, Z)$ projects onto the image plane to form a two-dimensional point $(u, v)$, and the relationship can be described by a projection matrix. The conversion from pixel coordinates to image coordinates specifically includes:

First, the pixel coordinates are converted into image coordinates $(u, v)$, specifically: the image center $(c_x, c_y)$ is subtracted from the pixel coordinates, and scaling is carried out according to the focal length to obtain normalized image coordinates. Then the coordinates $(X, Y, Z)$ of the original three-dimensional space point are calculated using the back-projection formula of the pinhole camera model, combining the known camera internal parameters, the image coordinates $(u, v)$ and the estimated depth information $Z$. Through this conversion, the three-dimensional coordinates of the intersection point (namely, the marked point) in the view cone scene can be obtained; this coordinate represents the exact position of the marked point in the field of view of the drone. The back-projection formula of the pinhole camera model recovers the three-dimensional space coordinates from the two-dimensional image coordinates and the depth information, and is as follows:

$X = \dfrac{(u - c_x)\,Z}{f_x}, \qquad Y = \dfrac{(v - c_y)\,Z}{f_y}$

wherein $(X, Y)$ are the horizontal and vertical coordinates of the three-dimensional space point in the camera coordinate system, $Z$ is the known depth information, $f_x$ is the component of the camera focal length in the x-axis direction, $f_y$ is the component of the camera focal length in the y-axis direction, $c_x$ and $c_y$ are the coordinates of the image center, and $u$ and $v$ are the coordinates of the projection point on the image.

Substituting the image coordinates $(u, v)$, the focal length components $f_x$ and $f_y$, the image center coordinates $(c_x, c_y)$ and the depth information $Z$ into the back-projection formula, the values of $X$ and $Y$ are calculated respectively; since the depth information $Z$ is known, the coordinates of the three-dimensional space point are $(X, Y, Z)$.
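Conversely, the back-projection formula above can be written directly as a small helper (again assuming an undistorted pinhole model and a known depth Z):

```python
def backproject_pixel(u, v, depth_z, fx, fy, cx, cy):
    """Recover the camera-frame 3D point from pixel coordinates and known depth Z."""
    x = (u - cx) * depth_z / fx
    y = (v - cy) * depth_z / fy
    return (x, y, depth_z)
```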
In a preferred embodiment of the present invention, the step 14 may include:
step 141, acquiring relevant topographic data according to the aerial image of the unmanned aerial vehicle and the specific geographic position of the unmanned aerial vehicle;
step 142, determining the three-dimensional coordinates of the marker points corresponding to the pixel coordinates in the view cone of the unmanned aerial vehicle according to the known pixel coordinates and the positions and orientations of the unmanned aerial vehicle, so as to convert the two-dimensional pixel coordinates into three-dimensional space coordinates;
Step 143, using the position a of the unmanned aerial vehicle as a starting point, and using the three-dimensional coordinates in the view cone of the unmanned aerial vehicle as a direction to construct a three-dimensional vector AB1;
Step 144, intersecting the three-dimensional vector AB1 with the obtained topographic data to obtain an intersection point of the vector and the topographic data, wherein the intersection point is the geographic coordinate of the target object.
In the embodiment of the invention, the position of the target on the aerial image can be more accurately determined by establishing the mapping model from the three-dimensional coordinates to the two-dimensional image coordinates; the pixel coordinates of the mark points on the aerial image are directly obtained, so that the data processing flow can be greatly simplified, the workload of manual positioning and measurement is reduced, and the working efficiency is improved; the method is not only suitable for the field of unmanned aerial vehicle aerial photography, but also can be applied to a plurality of fields such as a Geographic Information System (GIS), remote sensing monitoring, city planning and the like, and can be used for realizing rapid positioning and extraction of space data.
In another preferred embodiment, step 141, obtaining image data obtained by aerial photography of the unmanned aerial vehicle, wherein the images contain abundant surface information; recording a specific geographic position of the unmanned aerial vehicle when shooting images through a GPS system or other positioning technologies of the unmanned aerial vehicle; according to the geographic position photographed by the unmanned aerial vehicle, the terrain data of the corresponding area are extracted from the existing terrain database, and the data comprise elevation information, terrain fluctuation and the like. If no off-the-shelf terrain database is available, the unmanned aerial vehicle aerial image can also be used to generate terrain data through photogrammetry techniques. In step 142, from the previous step, the pixel coordinates of the mark point in the image have been obtained, and the precise position and orientation information of the unmanned aerial vehicle during shooting is obtained through the navigation system and the gesture sensor of the unmanned aerial vehicle, and the inverse process of perspective projection is used in combination with the position and orientation of the unmanned aerial vehicle and the internal parameters (such as focal length, image sensor size, etc.) of the camera, so as to convert the two-dimensional pixel coordinates into three-dimensional coordinates in the view cone of the unmanned aerial vehicle.
Step 143, using the position of the unmanned aerial vehicle as the starting point a, determining the direction pointing to the marking point from the unmanned aerial vehicle position using the three-dimensional coordinates of the marking point in the viewing cone of the unmanned aerial vehicle calculated in step 142, and constructing a three-dimensional vector AB1 along the determined direction with the unmanned aerial vehicle position a as the starting point, wherein the vector represents the spatial direction from the unmanned aerial vehicle to the marking point.
Step 144, performing an intersecting operation on the three-dimensional vector AB1 and the topographic data acquired in step 141 specifically includes:
Acquiring relevant terrain data in the form of Digital Elevation Model (DEM) or point cloud data, including elevation information of the terrain, from step 141; if the terrain data is in DEM format, it is ensured that it can be conveniently queried and accessed, such as by retrieving the corresponding elevation values by coordinates.
Depending on the type and accuracy of the terrain data, linear interpolation or bilinear interpolation may be employed to estimate the intersection of vector AB1 with the terrain for regular grid DEM data.
Starting from the position A of the unmanned plane, gradually advancing along the direction of the three-dimensional vector AB1 with a certain step length (determined according to the resolution and the precision of the topographic data); at each step, the elevation value of the current position is queried using the terrain data, the elevation of the vector AB1 at the current position is compared with the elevation in the terrain data, and if the current elevation of the vector AB1 is found to be lower than or equal to the terrain elevation, the intersection point is possibly found.
Once a possible intersection is found, further interpolation and optimization is performed using surrounding topographical data points to improve the accuracy of the intersection, and if the intersection is located between two topographical data points, a linear interpolation method can be used to estimate the elevation of the intersection.
Through intersection operation, one or more intersection points are found, the intersection points represent positions possibly intersecting with the terrain along the direction of the vector AB1 from the unmanned aerial vehicle position, the point most likely representing the position of the target object is selected from the intersection points according to actual conditions (such as fluctuation, shielding and the like of the terrain), the point is the three-dimensional space position of the target object, and finally, the three-dimensional coordinates of the selected intersection points are converted into a geographic coordinate system (such as longitude and latitude) so as to be marked on a map or perform other geographic information processing.
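The stepping-and-interpolation intersection of the vector AB1 with the DEM can be sketched as follows; `dem_elevation(x, y)` is a hypothetical callback returning the interpolated terrain height, coordinates are assumed to be in a local frame whose third component is elevation, and the step length and maximum range are illustrative:

```python
import numpy as np

def intersect_ray_with_dem(origin, direction, dem_elevation, step=1.0, max_range=5000.0):
    """March along the ray from the drone position A in the direction of AB1 until it
    drops below the terrain, then linearly interpolate between the last two samples."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    prev_pt, prev_diff = None, None
    t = 0.0
    while t <= max_range:
        p = o + t * d
        diff = p[2] - dem_elevation(p[0], p[1])       # ray height above the terrain surface
        if diff <= 0.0:
            if prev_pt is None:
                return p                              # start point already at/below terrain
            alpha = prev_diff / (prev_diff - diff)    # linear interpolation factor
            return prev_pt + alpha * (p - prev_pt)
        prev_pt, prev_diff = p, diff
        t += step
    return None                                       # no intersection within max_range
```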
In a preferred embodiment of the present invention, the step 15 may include:
step 151, obtaining topographic data according to the unmanned aerial vehicle aerial image and the geographic position of the unmanned aerial vehicle;
step 152, calculating the coordinates of the marking points in the viewing cone according to the pixel coordinates, and constructing a three-dimensional vector AB1 by using the unmanned plane position A and the geographic position B;
in step 153, the three-dimensional vector AB1 is intersected with the terrain data to obtain the geographic coordinates of the target object.
In the embodiment of the invention, the accurate three-dimensional vector can be constructed by combining the unmanned aerial vehicle aerial image with the topographic data acquired by the unmanned aerial vehicle geographic position and the coordinates of the mark points in the view cone calculated by the pixel coordinates, the geographic coordinates of the target can be accurately positioned by intersecting the constructed three-dimensional vector with the topographic data, the acquired geographic coordinate data of the target can be combined with other Geographic Information System (GIS) data, richer and more accurate information can be provided for the fields of urban planning, resource management, environmental monitoring and the like, and the accurate geographic coordinate data can provide real-time and accurate information support for a decision maker, so that faster and more intelligent decision making can be facilitated.
In another preferred embodiment, step 151, the method comprises obtaining an aerial image of the unmanned aerial vehicle with a resolution of 1 m/pixel, covering an area of 10 square kilometers; by GPS positioning, the precise geographic position (longitude 116.4 degrees, latitude 39.9 degrees and height 100 meters) of the unmanned aerial vehicle when shooting is determined; and generating a high-precision Digital Elevation Model (DEM) of the region by using terrain extraction software in combination with the aerial image and the unmanned aerial vehicle position information. Step 152, importing the image data obtained by unmanned aerial vehicle aerial photography into image processing software to ensure that the definition and resolution of the image are high enough so as to accurately identify the target object; performing necessary preprocessing on the aerial image, such as contrast enhancement, noise reduction and the like, so as to improve the identifiability of the target object; automatically identifying a target object in the aerial image by using a SIFT algorithm, and determining a boundary frame or a center point of the target object so as to acquire pixel coordinates (u, v) of the target object on the image; acquiring accurate position and orientation information of the unmanned aerial vehicle when performing aerial photography tasks, which is generally obtained through GPS and IMU (inertial measurement unit) data of the unmanned aerial vehicle; and obtaining internal and external parameters of the camera, including focal length, principal point coordinates (namely image center coordinates), distortion coefficients and the like. These parameters can be obtained through a camera calibration process; using a back projection formula of a pinhole camera model, which can recover coordinates (X, Y, Z) in three-dimensional space from two-dimensional image coordinates (u, v), internal and external parameters of the camera, and estimated or known depth information Z; substituting parameters such as pixel coordinates (u, v), camera focal length, principal point coordinates and the like into a back projection formula, and obtaining three-dimensional coordinates (X, Y, Z) of the target object in the view cone of the unmanned aerial vehicle through calculation.
Using the position a of the drone as a starting point, the three-dimensional coordinates of the calculated object within the view cone are set as point B, thereby constructing a three-dimensional vector AB1 pointing from a to B. This vector represents the spatial direction from the drone to the target.
In a preferred embodiment of the present invention, the step 153 may include:
step 1531, calculating the coordinates of the mark points in the viewing cone from images photographed at different directions and positions, according to the marked target pixel coordinates in the images and the pixel width and height of the images, and constructing a three-dimensional vector AB2 from the position A of the unmanned aerial vehicle and the geographic position B;
In step 1532, the intersection point is calculated by intersecting the three-dimensional vector AB1 and the three-dimensional vector AB2 to obtain the geographic coordinates of the target object.
In the embodiment of the invention, by combining images shot from a plurality of directions and positions and utilizing the triangulation principle, the geographic coordinates of the target object can be calculated more accurately, and the intersection point of two or more vectors provides more accurate position information. Constructing three-dimensional vectors from images shot at different positions and angles reflects the position of the target object in three-dimensional space more comprehensively. Determining the target position by the intersection of the vectors is applicable not only to positioning of a static target object but also to tracking and positioning of a dynamic target.
In another preferred embodiment, step 1531: the unmanned aerial vehicle photographs the same target object at two different positions and angles, obtaining two images. The unmanned aerial vehicle shoots at position A (longitude 116.4°, latitude 39.9°, height 100 meters), and the pixel coordinates of the target object in that image are (500, 600); the unmanned aerial vehicle then moves to position A' (longitude 116.41°, latitude 39.92°, height 120 meters), adjusts the shooting angle and shoots again, and the pixel coordinates of the target object in the new image are (400, 700). According to the pixel coordinates of the target object in the two images, the pixel width and height of the images, and the position and attitude information of the unmanned aerial vehicle, the three-dimensional coordinates B1 and B2 of the target object in the two view cones are calculated respectively. Taking the positions A and A' of the unmanned aerial vehicle as starting points and the geographic positions B1 and B2 as end points, two three-dimensional vectors AB1 and AB2 are constructed. The intersection of the two vectors AB1 and AB2 is calculated using a three-dimensional geometry algorithm; the intersection point is the real position of the target object in three-dimensional space, namely its real geographic coordinates, and the geographic coordinates of the target object are obtained by calculation as (longitude 116.415°, latitude 39.915°, height 108 meters).
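In practice the two viewing rays rarely intersect exactly because of measurement noise, so the "intersection" can be computed, for example, as the midpoint of the shortest segment between the two rays. The following TypeScript sketch illustrates one such three-dimensional geometry routine under that assumption; the closest-point formulation and the local coordinates in the example are assumptions for illustration, not a quote of the embodiment.

```typescript
// Triangulation sketch: estimate the target position as the midpoint of the
// shortest segment between two viewing rays (from A toward B1, from A' toward B2).

type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const scale = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Ray i: origin p_i, direction d_i (not necessarily unit length).
function rayMidpoint(p1: Vec3, d1: Vec3, p2: Vec3, d2: Vec3): Vec3 {
  const w0 = sub(p1, p2);
  const a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
  const d = dot(d1, w0), e = dot(d2, w0);
  const denom = a * c - b * b;          // ~0 means the rays are nearly parallel
  const s = (b * e - c * d) / denom;    // parameter of the closest point on ray 1
  const t = (a * e - b * d) / denom;    // parameter of the closest point on ray 2
  const onRay1 = add(p1, scale(d1, s));
  const onRay2 = add(p2, scale(d2, t));
  return scale(add(onRay1, onRay2), 0.5);
}

// Example with assumed local ENU coordinates in metres for A, A', B1 and B2.
const A: Vec3 = [0, 0, 100], B1: Vec3 = [120, 80, 0];
const Aprime: Vec3 = [850, 2200, 120], B2: Vec3 = [140, 95, 0];
const target = rayMidpoint(A, sub(B1, A), Aprime, sub(B2, Aprime));
console.log(target);
```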
In a preferred embodiment of the present invention, the step 16 may include:
step 161, constructing a webGIS three-dimensional scene, planning a route in the webGIS and marking the target object to obtain the three-dimensional virtual scene terrain;
and step 162, superposing the target object of the aerial image on the three-dimensional virtual scene terrain, and synchronously displaying the geographic coordinates and the pixel coordinates of the target object in the three-dimensional scene to realize linkage interaction between the aerial image and the webGIS three-dimensional scene.
In the embodiment of the invention, by constructing the webGIS three-dimensional scene, a user can understand the terrain and the route planning more intuitively. Superposing the target object of the aerial image on the three-dimensional virtual scene realizes linkage interaction between the image and the three-dimensional scene and greatly enhances the user's sense of immersion and operation experience. Performing route planning and target object marking in the three-dimensional scene simulates the actual flight route and target point positions more accurately, improving planning efficiency and accuracy; meanwhile, the synchronous display of the geographic coordinates and the pixel coordinates helps the user quickly locate the target object and understand its spatial position relationship.
In another preferred embodiment, step 161: a suitable webGIS platform is selected, such as Cesium, Three.js in combination with GeoServer, or another platform supporting three-dimensional geographic information, ensuring that the selected platform supports construction of three-dimensional scenes, loading of terrain data, and planning of routes and target objects. Basic terrain and image data are loaded using the selected webGIS platform to construct an initial three-dimensional earth or local three-dimensional scene; according to the actual requirements, necessary settings and optimizations are applied to the scene, such as adjusting illumination, adding shadows and setting the camera view angle. The course of the unmanned aerial vehicle is planned in the three-dimensional scene, which may be accomplished by drawing a path in the scene or importing a predefined course file, and target objects, which may be actual geographic features, buildings, facilities and the like, are marked at key locations or areas of interest along the course. Corresponding attribute information, such as a name, a description and coordinates, is set for each target object, and a complete three-dimensional virtual scene terrain is generated from the loaded terrain data and the planned course and target object information, ensuring that the precision and resolution of the terrain data match the actual application requirements.
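By way of illustration, assuming the Cesium platform mentioned above is selected, a scene with a planned route and one marked target object could be set up roughly as in the following sketch. The container id, waypoints, target coordinates and property names are placeholder assumptions, not data from the embodiment.

```typescript
// Minimal Cesium sketch for step 161: create a viewer, draw a planned route as a
// polyline and mark one target object as a labelled point with attached attributes.
import * as Cesium from "cesium";

const viewer = new Cesium.Viewer("cesiumContainer");

// Planned route: a list of (lon, lat, height) waypoints drawn as a polyline.
viewer.entities.add({
  name: "planned-route",
  polyline: {
    positions: Cesium.Cartesian3.fromDegreesArrayHeights([
      116.40, 39.90, 100,
      116.41, 39.92, 120,
      116.42, 39.93, 120,
    ]),
    width: 3,
    material: Cesium.Color.CYAN,
  },
});

// Target object marker; pixelU/pixelV are illustrative attribute names storing the
// pixel coordinates of the object in the original aerial image.
viewer.entities.add({
  name: "target-1",
  position: Cesium.Cartesian3.fromDegrees(116.415, 39.915, 108),
  point: { pixelSize: 10, color: Cesium.Color.RED },
  label: {
    text: "target-1",
    font: "14px sans-serif",
    verticalOrigin: Cesium.VerticalOrigin.BOTTOM,
    pixelOffset: new Cesium.Cartesian2(0, -12),
  },
  properties: { pixelU: 500, pixelV: 600 },
});

viewer.zoomTo(viewer.entities); // frame the route and the marker
```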
Step 162: the unmanned aerial vehicle aerial image corresponding to the target object in the three-dimensional scene is obtained, and necessary processing, such as clipping and adjusting color and contrast, is performed on the aerial image to ensure its quality and its degree of fusion with the three-dimensional scene. Using the functions of the webGIS platform, the processed aerial image is superposed on the corresponding target object or terrain, and the transparency, size and position of the image are adjusted so that the image blends seamlessly into the three-dimensional scene. Interactive functions are added for each superposed aerial image in the three-dimensional scene, such as clicking to display detailed information and hovering the mouse to display coordinates; when the user clicks or hovers on an aerial image, the geographic coordinates (longitude and latitude) of the target object corresponding to the image and its pixel coordinates in the original aerial image are displayed synchronously. The user can roam freely in the three-dimensional scene while the correspondence between the aerial images and the target objects remains unchanged, and interactive operations such as zooming, rotating and translating allow the user to view the target objects and aerial images from different angles and distances.
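Continuing the Cesium assumption, the click interaction described above could be sketched as follows. The pixelU/pixelV property names are illustrative, and the snippet presumes the viewer and marker entities from the previous sketch.

```typescript
import * as Cesium from "cesium";

// Assumes `viewer` is the Cesium.Viewer from the previous sketch and that each
// marker entity carries its aerial-image pixel coordinates in `properties`.
declare const viewer: Cesium.Viewer;

const handler = new Cesium.ScreenSpaceEventHandler(viewer.scene.canvas);

handler.setInputAction((movement: { position: Cesium.Cartesian2 }) => {
  const picked = viewer.scene.pick(movement.position);
  if (!Cesium.defined(picked) || !picked.id) {
    return; // nothing selectable was clicked
  }
  const entity = picked.id as Cesium.Entity;
  const now = viewer.clock.currentTime;
  const position = entity.position?.getValue(now);
  if (!position) {
    return;
  }
  // Convert the Cartesian position back to geographic coordinates.
  const carto = Cesium.Cartographic.fromCartesian(position);
  const lon = Cesium.Math.toDegrees(carto.longitude);
  const lat = Cesium.Math.toDegrees(carto.latitude);
  const props: any = entity.properties?.getValue(now);
  console.log(
    `geo (${lon.toFixed(6)}, ${lat.toFixed(6)}, ${carto.height.toFixed(1)} m), ` +
    `pixel (${props?.pixelU}, ${props?.pixelV})`
  );
}, Cesium.ScreenSpaceEventType.LEFT_CLICK);
```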
As shown in fig. 2, the embodiment of the present invention further provides an unmanned aerial vehicle aerial image and webGIS three-dimensional scene linkage interaction system 20, including:
An acquisition module 21, configured to acquire an aerial image of the unmanned aerial vehicle and a geographic location of the unmanned aerial vehicle; according to the geographic position of the unmanned aerial vehicle, calculating the direction vector of the view cone and the included angle of the shooting range to obtain a calculation result; according to the calculation result, comparing the marked target pixel coordinates in the image with the length and width of the picture pixels of the image to obtain coordinates of the marked points in the view cone;
The processing module 22 is configured to calculate, according to coordinates of the marker point in the view cone, corresponding pixel coordinates of the marker in the aerial image of the unmanned aerial vehicle; converting the pixel coordinates into geographic coordinates of the target object according to the pixel coordinates and the topographic data; and constructing a three-dimensional virtual scene of the target area, and synchronously displaying the geographic coordinates and the pixel coordinates of the target object in the three-dimensional scene to realize linkage interaction between the aerial image and the webGIS three-dimensional scene.
It should be noted that the system corresponds to the above method, and all implementations in the above method embodiment are applicable to this embodiment, so the same technical effects can be achieved.
Embodiments of the present invention also provide a computing device comprising: a processor, a memory storing a computer program which, when executed by the processor, performs the method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Embodiments of the present invention also provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform a method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that in the apparatus and method of the present invention, it is apparent that the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present invention. Also, the steps of performing the series of processes described above may naturally be performed in chronological order in the order of description, but are not necessarily performed in chronological order, and some steps may be performed in parallel or independently of each other. It will be appreciated by those of ordinary skill in the art that all or any of the steps or components of the methods and apparatus of the present invention may be implemented in hardware, firmware, software, or any combination thereof in any computing device (including processors, storage media, etc.) or network of computing devices, as would be apparent to one of ordinary skill in the art upon reading the present specification.
The object of the invention can thus also be achieved by running a program or a set of programs on any computing device. The computing device may be a well-known general purpose device. The object of the invention can thus also be achieved merely by providing a program product containing program code for implementing the method or apparatus. That is, such a program product also constitutes the present invention, and a storage medium storing such a program product also constitutes the present invention. It is apparent that the storage medium may be any known storage medium or any storage medium developed in the future.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (6)

1. The method for linkage interaction between an aerial image of an unmanned aerial vehicle and webGIS three-dimensional scenes is characterized by comprising the following steps:
acquiring an aerial image of the unmanned aerial vehicle and a geographic position of the unmanned aerial vehicle;
According to the geographic position of the unmanned aerial vehicle, calculating the direction vector of the view cone and the included angle of the shooting range to obtain a calculation result;
according to the calculation result, comparing the marked target pixel coordinates in the image with the length and width of the picture pixels of the image to obtain coordinates of the marked points in the view cone;
calculating corresponding pixel coordinates of the marker in the unmanned aerial vehicle aerial image according to the coordinates of the marker points in the viewing cone;
converting the pixel coordinates into geographic coordinates of the target object according to the pixel coordinates and the topographic data;
Constructing a three-dimensional virtual scene of a target area, and synchronously displaying geographic coordinates and pixel coordinates of a target object in the three-dimensional scene to realize linkage interaction between an aerial image and the webGIS three-dimensional scene;
wherein calculating the direction vector of the view cone and the included angle of the shooting range according to the geographic position of the unmanned aerial vehicle to obtain the calculation result comprises:
acquiring a geographic position A of the unmanned aerial vehicle and a geographic position B of an object to be displayed;
According to the geographic position A of the unmanned aerial vehicle and the geographic position B of the object to be displayed, calculating a three-dimensional vector AB of a two-point connecting line, and calculating the included angle between the three-dimensional vector AB and each coordinate axis in a geographic coordinate system to obtain the included angle of the three-dimensional vector AB;
According to the orientation gesture of the unmanned aerial vehicle during shooting and the hardware parameters of the camera, calculating a view cone direction vector AC and a corresponding shooting range included angle in the current shooting state;
Determining an intersection point of the three-dimensional vector AB and a plane of the shooting view cone through geometric calculation according to the included angle of the three-dimensional vector AB and the included angle of the shooting range of the view cone, wherein the intersection point represents a specific position of an object to be displayed in the shooting range of the unmanned aerial vehicle, and the specific position is a final calculation result; according to the calculation result, comparing the marked target pixel coordinate in the image with the length and width of the picture pixels of the image to obtain the coordinate of the marked point in the view cone, wherein the method comprises the following steps:
Calculating the distance ratio of the intersection point to each side of the view cone according to the calculation result to obtain the pixel coordinate of the intersection point;
mapping the pixel coordinates of the intersection points into the scene of the view cone to obtain the coordinates of the mark points in the view cone; according to the coordinates of the marking points in the viewing cone, calculating the corresponding pixel coordinates of the marker in the unmanned aerial vehicle aerial image, including:
acquiring unmanned aerial vehicle aerial images and unmanned aerial vehicle information;
Establishing a mapping model from three-dimensional coordinates to two-dimensional image coordinates according to unmanned aerial vehicle information, and converting the three-dimensional coordinates of the marking points in the viewing cone into coordinates under an unmanned aerial vehicle camera coordinate system to realize coordinate conversion;
Calculating the specific pixel coordinates of the mark points on the aerial image according to the two-dimensional coordinates of the coordinate conversion and the resolution ratio of the image; according to the pixel coordinates and the terrain data, converting the pixel coordinates into target geographic coordinates, including:
acquiring relevant terrain data according to the aerial image of the unmanned aerial vehicle and the specific geographic position of the unmanned aerial vehicle when the unmanned aerial vehicle shoots;
According to the known pixel coordinates and the position and the orientation of the unmanned aerial vehicle, determining the three-dimensional coordinates of the marking points corresponding to the pixel coordinates in the view cone of the unmanned aerial vehicle so as to convert the two-dimensional pixel coordinates into three-dimensional space coordinates;
Using the position A of the unmanned aerial vehicle as a starting point, and using the three-dimensional coordinates in the view cone of the unmanned aerial vehicle as a direction to construct a three-dimensional vector AB1;
And intersecting the three-dimensional vector AB1 with the acquired topographic data to obtain an intersection point of the vector and the topographic data, wherein the intersection point is the geographic coordinate of the target object.
2. The method for coordinated interaction between an aerial image of an unmanned aerial vehicle and a webGIS three-dimensional scene according to claim 1, wherein obtaining the aerial image of the unmanned aerial vehicle and the geographic location of the unmanned aerial vehicle comprises:
setting the size of particle swarms, wherein each particle represents an unmanned aerial vehicle flight and shooting scheme;
Initializing a position and a speed for each particle, wherein the position represents a flight path and a shooting angle of the unmanned aerial vehicle;
Determining a fitness function for evaluating the quality of each flight and shooting scheme;
In each iteration, evaluating the quality of each particle according to the fitness function, updating the individual historical optimal position of each particle and the global optimal position of the particle swarm, and updating the position and the speed of each particle;
Stopping iteration when the termination condition is reached, and outputting a final flight and shooting scheme;
controlling the unmanned aerial vehicle to fly and shoot according to the final flight and shooting scheme, and adjusting the attitude and shooting parameters of the unmanned aerial vehicle in real time during the flight, so as to acquire the aerial image and the geographic position information of the unmanned aerial vehicle.
3. The unmanned aerial vehicle aerial image and webGIS three-dimensional scene linkage interaction method according to claim 2, wherein the fitness function combines, through weight coefficients, the following quantities:
a comprehensive shooting quality score, determined by the definition, the color accuracy and the composition score of the image together with their respective weight coefficients;
a flight efficiency term, determined by the flight time T, the energy consumption and a flight stability factor;
a shooting efficiency term;
a total cost term;
a risk factor, determined, through corresponding weight coefficients, by the weather condition and the safety of the flight area; and
a data processing complexity term, defined as the ratio of the amount of data to the processing speed.
4. The method for coordinated interaction between an aerial image of an unmanned aerial vehicle and a webGIS three-dimensional scene according to claim 3, wherein when updating the position and the speed of each particle, the update formula of the speed is:
v_i(t+1) = w·v_i(t) + c1·r1·(p_i − x_i(t)) + c2·r2·(g − x_i(t))
wherein v_i(t) represents the velocity of the particle i at time t, w represents the inertia weight, c1 and c2 represent the learning factors, r1 and r2 represent random numbers in the range [0, 1], p_i represents the individual historical optimal position of the particle i, g represents the historical optimal position of the whole particle swarm, x_i(t) represents the position of the particle i at time t, and v_i(t+1) represents the velocity of the particle i at time t+1;
the update formula of the position is:
x_i(t+1) = x_i(t) + v_i(t+1)
wherein x_i(t+1) represents the position of the particle i at time t+1.
5. The method for coordinated interaction between an aerial image of an unmanned aerial vehicle and webGIS three-dimensional scenes according to claim 4, wherein intersecting the three-dimensional vector AB1 with terrain data to obtain geographic coordinates of a target comprises:
According to images shot in different directions and positions, calculating the coordinates of the marking points in the viewing cone through the marked target pixel coordinates in the images and the length and width of the picture pixels of the images, and constructing a three-dimensional vector AB2 by the position A and the geographic position B of the unmanned aerial vehicle;
And intersecting the three-dimensional vector AB1 with the three-dimensional vector AB2 to calculate an intersection point so as to obtain the geographic coordinates of the target object.
6. An information processing system for a linkage interaction method of an aerial image of an unmanned aerial vehicle and a webGIS three-dimensional scene, which is characterized by being applied to the method as claimed in any one of claims 1 to 5, and comprising:
The acquisition module is used for acquiring the aerial image of the unmanned aerial vehicle and the geographic position of the unmanned aerial vehicle; according to the geographic position of the unmanned aerial vehicle, calculating the direction vector of the view cone and the included angle of the shooting range to obtain a calculation result; according to the calculation result, comparing the marked target pixel coordinates in the image with the length and width of the picture pixels of the image to obtain coordinates of the marked points in the view cone;
The processing module is used for calculating the corresponding pixel coordinates of the marker in the unmanned aerial vehicle aerial image according to the coordinates of the marker points in the viewing cone; converting the pixel coordinates into geographic coordinates of the target object according to the pixel coordinates and the topographic data; and constructing a three-dimensional virtual scene of the target area, and synchronously displaying the geographic coordinates and the pixel coordinates of the target object in the three-dimensional scene to realize linkage interaction between the aerial image and the webGIS three-dimensional scene.
CN202410843465.7A 2024-06-27 2024-06-27 Unmanned aerial vehicle aerial image and webGIS three-dimensional scene linkage interaction method and system Active CN118379453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410843465.7A CN118379453B (en) 2024-06-27 2024-06-27 Unmanned aerial vehicle aerial image and webGIS three-dimensional scene linkage interaction method and system

Publications (2)

Publication Number Publication Date
CN118379453A CN118379453A (en) 2024-07-23
CN118379453B true CN118379453B (en) 2024-09-03




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant