
CN107018522B - A positioning method of UAV ground base station based on multi-information fusion - Google Patents

A positioning method of UAV ground base station based on multi-information fusion Download PDF

Info

Publication number
CN107018522B
CN107018522B CN201710109786.4A CN201710109786A
Authority
CN
China
Prior art keywords
base station
positioning
ultrasonic
ground base
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710109786.4A
Other languages
Chinese (zh)
Other versions
CN107018522A (en)
Inventor
杨令晨
丁永生
张悦
蒋章
金晓涛
姚思雅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Donghua University
Original Assignee
Donghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CN201710109786.4A priority Critical patent/CN107018522B/en
Publication of CN107018522A publication Critical patent/CN107018522A/en
Application granted granted Critical
Publication of CN107018522B publication Critical patent/CN107018522B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/006Locating users or terminals or network equipment for network management purposes, e.g. mobility management with additional information processing, e.g. for direction or speed determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention relates to a positioning method for a UAV ground base station based on multi-information fusion. Realizing the positioning requires a special design of the ground base station and the UAV and requires the two to cooperate with each other. The method specifically comprises the following steps: a GPS-based positioning step achieves the approximate positioning of the ground base station area; an ultrasonic positioning step based on a high-precision ranging method brings the UAV to hover directly above the center of the ground base station; a landing step based on image processing and the action of gravity achieves a high-precision landing of the UAV; and a photoelectric-switch-based positioning detection step is used to detect whether the UAV has stopped accurately at the designated position. The invention enables the UAV to automatically identify the landing point of the ground base station and land precisely.

Figure 201710109786

The invention relates to a positioning method for a ground base station of an unmanned aerial vehicle (UAV) based on multi-information fusion. Realizing the positioning requires a special design of the ground base station and the UAV and requires the two to cooperate with each other. The method specifically includes the following steps: a GPS-based positioning step realizes the approximate positioning of the ground base station area; an ultrasonic positioning step based on a high-precision ranging method makes the UAV hover directly above the center of the ground base station; a landing step based on image processing and the action of gravity realizes a high-precision landing of the UAV; and a photoelectric-switch-based positioning detection step is used to detect whether the UAV has stopped accurately at the designated position. The present invention enables the UAV to automatically identify the landing point of the ground base station and perform an accurate landing.


Description

Positioning method of unmanned aerial vehicle ground base station based on multi-information fusion
Technical Field
The invention relates to the technical field of unmanned aerial vehicle (UAV) positioning, in particular to a positioning method of a UAV ground base station based on multi-information fusion.
Background
With the rapid development of science and technology, aircraft have been studied in depth and are applied on more and more occasions. Compared with other aircraft, the UAV has the advantages of a simple and compact mechanical structure, more flexible movement, lower requirements on the take-off and landing environment, good operating performance, and the ability to take off, hover and land within a small area. Due to these characteristics, it is widely applied in fields such as aerial photography, monitoring, investigation, search and rescue, and agricultural pest control.
Nowadays, scholars at home and abroad have published a large number of relevant articles and research results, new application fields continuously emerge, and the application value of the UAV has risen markedly. For these applications, drones need to work autonomously for a long time in a certain area. Because of its limited flight time, after carrying out a mission in the air for a certain period, the UAV must return to the ground base station for charging and for information exchange with the relevant institutions. Therefore, accurate and efficient ground base station positioning systems are becoming increasingly important.
Disclosure of Invention
The technical problem to be solved by the invention is how to enable the UAV to automatically locate the landing point of the ground base station and perform an accurate landing.
In order to solve the technical problem, the technical scheme of the invention provides a positioning method for a UAV ground base station based on multi-information fusion, wherein the UAV comprises N universal wheels for landing, with N ≥ 3. The method is characterized in that ultrasonic receivers are arranged at the 4 corners of the ground base station and are connected to an ultrasonic ranging system; the intersection point of the two diagonals of the ground base station is the base station center; the base station center carries 1 small image-recognition icon, surrounded by N large image-recognition icons; 1 positioning and calibration device is arranged in the area of each large image-recognition icon; and the orientations of the N positioning and calibration devices match the orientations of the N universal wheels of the UAV respectively;
the UAV is also provided with an electronic compass, a spherical omnidirectional ultrasonic transducer for transmitting ultrasonic waves omnidirectionally, and a gimbal camera, the gimbal camera facing vertically toward the ground;
the positioning method comprises the following steps:
In the first step, the UAV calculates its current position using the onboard navigation and positioning module and sends wireless signals and data to the ground base station; after receiving the signals, the ground base station allows the UAV to land and sends the azimuth information of the base station position to the UAV; the UAV control module then controls the UAV to fly to the approximate position of the base station according to the azimuth information;
In the second step, the UAV transmits to the ground base station, through the spherical omnidirectional transducer and separated by a fixed time interval, two ultrasonic signal bursts with frequencies f1 and f2 respectively. The ultrasonic ranging system uses the dual-frequency ranging method to determine the ultrasonic propagation time TOF at each of the 4 ultrasonic receivers, randomly selects the TOFs of 3 ultrasonic receivers as a group, and calculates the UAV coordinates with the base station center as the coordinate origin; according to the calculated coordinates, the UAV is controlled to fly to the base station center and hover at a set height. In this step:
The specific procedure for determining the ultrasonic propagation time TOF of any one ultrasonic receiver by the dual-frequency ranging method is as follows: the ultrasonic ranging system determines the propagation time from the relative time difference between correlated zero crossings of the signals of frequencies f1 and f2 received by the current ultrasonic receiver, where the group of correlated zero crossings with the highest reliability between the f1 and f2 signals is identified by a deep learning algorithm;
In the third step, after the UAV hovers above the base station center, it begins to descend slowly. During the descent, the orientation of the UAV's universal wheels is adjusted by means of the electronic compass so that the body faces the set direction when it lands on the base station. Meanwhile, the gimbal camera captures the small and large image-recognition icons to determine the base station center; control parameters are derived in real time from the relative position of the base station center in the image obtained by the gimbal camera, and the UAV is controlled so that the base station center moves toward the center of the camera image. Finally, the UAV lands on the ground base station.
Preferably, the small image recognition object icon is a small circle icon; the image recognition object large icon is a large ring icon.
Preferably, the upper half of the positioning and calibration device is an arc-shaped concave surface sinking toward the center, which matches the UAV's universal wheel and realizes accurate positioning under the action of gravity; the lower half of the positioning and calibration device is cylindrical, matched to the size of the universal wheel, and is used for clamping the universal wheel.
Preferably, the surface wall of the lower half of the positioning and calibration device is provided with a through-beam photoelectric switch for positioning detection after positioning is finished; in the third step, after the UAV lands on the ground base station, the positioning is successful if the photoelectric switches are completely closed, and otherwise the positioning fails.
Preferably, in the second step, the method for determining the ultrasonic wave propagation time TOF of any one of the ultrasonic receivers comprises the following steps:
Step 2.1: the ultrasonic ranging system processes the signals of frequencies f1 and f2 received by the current ultrasonic receiver and extracts the approximate zero-crossing times of the two ultrasonic signal bursts, obtaining two corresponding groups of time data P1 = {t11, t12, t13, ..., t1m, ...} and P2 = {t21, t22, t23, ..., t2n, ...}, where t1m is the m-th extracted approximate zero-crossing time of the f1 signal, whose correspondence to the m-th true zero crossing of the f1 signal is unknown, and t2n is the n-th extracted approximate zero-crossing time of the f2 signal, whose correspondence to the n-th true zero crossing of the f2 signal is likewise unknown. The error calibration time data of P1 and P2 are then extracted, giving Q1 = {t'11, t'12, t'13, ..., t'1m, ...} and Q2 = {t'21, t'22, t'23, ..., t'2n, ...}, where t'1m and t'2n are the error calibration times of t1m and t2n respectively;
Step 2.2: the time data in P1 and P2 are paired two by two to form a number of candidate time data groups, and, combining Q1 and Q2, the group with the highest reliability is selected by a deep learning algorithm. If this most reliable group corresponds to the γ-th group among all correlated approximate zero-crossing groups, the corresponding times of the correlated approximate zero crossings are denoted t_{1mγ} and t_{2nγ};
Step 2.3: the ultrasonic propagation time TOF is calculated as

TOF = t*_{1mγ} − γ/f1, with t*_{1mγ} = (t_{1mγ} + t'_{1mγ} − 0.5/f1)/2, t*_{2nγ} = (t_{2nγ} + t'_{2nγ} − 0.5/f2)/2 and Δt_γ = t*_{2nγ} − t*_{1mγ} − Δt = γ·(f1 − f2)/(f1·f2),

where t_{1mγ} and t_{2nγ} are respectively the times of the γ-th group of correlated approximate zero crossings in the f1 and f2 signals, t'_{1mγ} and t'_{2nγ} are their corresponding error calibration times, Δt is the time interval between the two transmissions, Δt_γ is the relative time difference between the γ-th group of correlated approximate zero crossings, and γ is the group number of the correlated zero crossings.
Preferably, the deep learning algorithm in step 2.2 adopts a BP neural network as the deep learning network model.
Preferably, in step 2.1, a Schmitt shaping circuit extracts the approximate zero-crossing times of the two ultrasonic signal bursts, and an inverting-comparator shaping circuit extracts the error calibration time data of P1 and P2.
Preferably, the method for obtaining the training samples of the deep learning network model is as follows:
Two groups of time data P1 = {t11, t12, t13, ..., t1m, ...} and P2 = {t21, t22, t23, ..., t2n, ...} are obtained experimentally using the same method as step 2.1 and are combined pairwise to obtain P = {t11 t21; t11 t22; t11 t23; ...; t1m t2n; ...}; then P, Q1 and Q2 are used to form a matrix X whose columns are the vectors [t_{12-mn}; t_{1-m}; t_{2-n}], in which t_{12-mn} = 1/(1 + e^(−α(k − β))), k is the zero-crossing group number implied by the pair (t1m, t2n), α is an empirical parameter, β is the number of cycles corresponding to the maximum amplitude of the received signal, t_{1-m} = |t'1m − t1m − 0.5/f1|/2 and t_{2-n} = |t'2n − t2n − 0.5/f2|/2. Each column of X is an input feature vector: its first row is the feature value of the data pair, and its second and third rows are respectively the error influence of the Schmitt shaping on the zero-crossing times of the two acoustic signals. Finally, the 1-dimensional output corresponding to each feature vector is set by human experience, with a value between 0 and 1.
Preferably, the deep learning network model is implemented as follows: firstly, the input data are normalized; secondly, the network is constructed and initialized, with 3 neurons in the input layer, 10 in the hidden layer and 1 in the output layer, and the weights w, thresholds b, learning rate, minimum training-target error and maximum allowed number of training steps are initialized; then the deep learning network is trained by feeding a large number of input-output training samples into it and updating its weights and thresholds until the error is within an acceptable range or the training-step limit is exceeded, finally yielding a deep learning network model with updated weights and thresholds; and finally a network test is performed, comparing the test results with the expected values so as to evaluate the training result of the network model and correct the network parameters.
Due to the adoption of the technical scheme, compared with the prior art, the invention has the following advantages and positive effects:
(1) The traditional ultrasonic ranging methods are the single-threshold detection method and the phase method. In the former, the time at which the received signal jumps over the threshold varies with the signal amplitude, causing measurement uncertainty; in the latter, the measuring range is small, and when the distance exceeds the ultrasonic wavelength, range ambiguity occurs. The ultrasonic ranging method disclosed by the invention is based on two frequencies, which skilfully avoids these defects and achieves accurate ranging. In addition, the ranging algorithm is combined with a BP neural network, so that the correlated zero crossing can be determined from a fuzzy angle, simplifying and generalizing the problem. The BP neural network also fuses the measurement time error, improving the robustness of the ranging. The ultrasonic positioning method based on this high-precision ranging achieves a positioning effect with high precision and high robustness.
(2) The invention designs 4 positioning and calibration devices at the center of the ground base station and designs the 4 wheels of the UAV as universal wheels. The UAV can therefore complete its positioning simply through the action of gravity, avoiding complex image processing, which increases the processing speed of the processor and improves the landing accuracy.
(3) The invention integrates a plurality of systems to position the ground base station of the unmanned aerial vehicle, and can further increase the rapidity, the accuracy and the robustness of positioning.
Drawings
Fig. 1 is a schematic diagram of a ground base station according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a planar structure of a ground base station according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a dual frequency ranging method in an embodiment of the present invention;
FIG. 4 is a diagram illustrating error analysis and processing by Schmitt shaping in an embodiment of the present invention;
FIG. 5 is a comparison graph of the predicted output of the BP neural network in an embodiment of the present invention;
FIG. 6 is a schematic view of a positioning calibration device according to an embodiment of the present invention;
Fig. 7 and 8 are diagrams showing the clamping of the universal wheel in the embodiment of the present invention.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Furthermore, it should be understood that various changes and modifications can be made by those skilled in the art after reading the teachings of the present invention, and such equivalents also fall within the scope of the appended claims.
The embodiment of the invention relates to a fusion positioning method for a UAV ground base station. The positioning mainly comprises 4 steps: a GPS-based positioning step, an ultrasonic positioning step based on a high-precision ranging method, a landing step based on image processing and gravity, and a positioning detection step based on photoelectric switches. To realize these positioning steps, the ground base station and the UAV must be specially designed and must cooperate with each other.
As shown in fig. 1, the ground base station according to the present invention is a cuboid with a length, width and height of 1 m, 1 m and 0.2 m respectively, with a dedicated control system inside. Ultrasonic receivers 1 are arranged in the 4 corners of the ground base station and together form an ultrasonic receiver array. In the middle of the ground base station are 4 large ring icons 4 and one small ring icon 3, and 4 positioning and calibration devices are placed in the areas of the large ring icons 4. The ultrasonic receiver 1 is an annular receiver that can receive ultrasonic waves omnidirectionally. The 4 large ring icons 4 and the small ring icon 3 are the objects of image recognition, used to determine the center of the ground base station. The orientations of the 4 positioning and calibration devices are southeast, northeast, southwest and northwest respectively, to match the positions of the 4 universal wheels of the UAV, as shown in fig. 2. The upper half 2 of the positioning and calibration device is an arc-shaped concave surface sinking toward the center, which matches the UAV's universal wheel and realizes accurate positioning by the action of gravity; the lower half 5 is cylindrical, matched to the size of the universal wheel, and is used for clamping the universal wheel. The surface wall of the lower half 5 is provided with a through-beam photoelectric switch for positioning detection after positioning is finished, as shown in figs. 7 and 8. The specific shape of the positioning and calibration device is shown in fig. 7.
Besides the basic control sensors, the UAV is also provided with an electronic compass sensor, a spherical omnidirectional ultrasonic transducer and a gimbal camera. The electronic compass sensor is located inside the body; the gimbal camera faces vertically toward the ground and moves to directly below the body when in use; the spherical ultrasonic transducer is located at the center of the bottom of the body and can transmit ultrasonic waves in all directions.
Positioning step based on GPS
The main purpose of the GPS positioning step is to fly the UAV back from the mission area to the approximate area of the ground base station, roughly within a 2 m square around it. When the UAV receives the signal to return to the ground base station, the onboard INS/GPS integrated navigation and positioning module calculates the current position of the UAV and sends wireless signals and data to the ground base station. After receiving the signal, the ground base station allows the UAV to land and sends the base station's position to the UAV. The UAV control module controls the UAV to fly to the approximate position of the base station according to this azimuth information.
Ultrasonic positioning step based on high-precision distance measurement method
The ultrasonic positioning step is the core positioning step; its main purpose is to make the UAV hover at the midpoint above the ground base station, facilitating the subsequent accurate positioning operations. After rough positioning via GPS, the UAV slows down and periodically transmits ultrasonic waves downward through the spherical omnidirectional transducer at the center of the body. The ultrasonic receiver arrays at the 4 corners of the ground base station receive the ultrasonic waves sent by the UAV in sequence, yielding the corresponding propagation times TOF (time of flight). Taking the intersection point of the two diagonals of the ground base station as the coordinate origin, the processing system takes the TOFs of three randomly selected receivers as a group, calculates the UAV coordinates, and performs redundant calculations with the other combinations to reduce errors.
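To make the coordinate calculation concrete, the sketch below trilaterates the UAV position from the TOFs of three receivers. It is a minimal illustration, not the patent's implementation: the choice of three corner receivers, the origin at the base-station center, the nominal speed of sound of 343 m/s and the function name `locate` are all assumptions.

```python
import math

# Speed of sound in air at roughly 20 °C (an assumed nominal value).
C_SOUND = 343.0  # m/s

def locate(tof_a, tof_b, tof_c):
    """Trilaterate the UAV position (x, y, z) from the TOFs measured at
    three corner receivers of the 1 m x 1 m base station:
    A = (-0.5, -0.5, 0), B = (0.5, -0.5, 0), C = (-0.5, 0.5, 0),
    with the origin at the base-station center."""
    da, db, dc = (C_SOUND * t for t in (tof_a, tof_b, tof_c))
    # Subtracting the sphere equations of A and B (resp. A and C)
    # cancels the quadratic terms and yields x (resp. y) directly.
    x = (da**2 - db**2) / 2.0
    y = (da**2 - dc**2) / 2.0
    # Height above the base station from the remaining sphere equation.
    z = math.sqrt(max(da**2 - (x + 0.5)**2 - (y + 0.5)**2, 0.0))
    return x, y, z
```

In the same spirit, the redundant combinations mentioned above could be handled by running `locate` on the other receiver triples and averaging the results.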
Because the positioning precision is directly determined by the distance measurement precision, the invention specially designs a distance measurement algorithm and an ultrasonic distance measurement system.
The high-precision ultrasonic ranging algorithm is a dual-frequency ranging method based on deep learning, which can measure the TOF accurately. The principle of dual-frequency ranging is as follows: as shown in FIG. 3, the ranging system uses the relative time difference between correlated zero crossings of the received signals of the two frequencies,

Δt_i = (t2^(i) − t1^(i)) − Δt = i·Δf/(f1·f2),

to determine the propagation time of the ultrasonic wave,

TOF = t1^(i) − i/f1,

where f1 is the frequency of the first transmitted signal, f2 is the frequency of the second transmitted signal, t1^(i) and t2^(i) are respectively the times of the i-th group of correlated zero crossings in the f1 and f2 signals, Δf is the frequency difference between the two, Δt is the time interval between the two transmissions, and i is the group number of the correlated zero crossings. The zero-crossing time data are extracted by the Schmitt shaping method, which avoids noise interference but yields only approximate zero crossings, introducing a measurement error. This error can be calibrated with the error calibration time data extracted by the inverting-comparator shaping circuit, as shown in fig. 4. The Schmitt shaping circuit and the inverting-comparator shaping circuit have equal thresholds, and the time data are the times corresponding to the rising edges of the waveforms output by the shaping circuits. The correlation of the approximate zero crossings is judged using the reliability output by the deep learning algorithm: the algorithm determines the group of data with the highest reliability, from which the accurate ultrasonic propagation time is obtained. Because the reliability output by the deep learning algorithm fuses the detection error of the Schmitt shaping method, the accuracy, reliability and robustness of the ranging are enhanced.
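The dual-frequency relation can be exercised numerically as in the following sketch. The closed-form recovery of the group number i and the zero-crossing model (the i-th crossing lying i periods after the burst's arrival) are assumptions made for illustration, not the patent's exact formulation.

```python
def recover_tof(t1, t2, f1, f2, dt):
    """Recover the ultrasonic propagation time from one pair of
    correlated zero-crossing times: t1 from the f1 burst and t2 from
    the f2 burst, with dt the interval between the two transmissions.
    Assumes the i-th zero crossing of each burst lies i periods after
    the burst's arrival (a modelling assumption of this sketch)."""
    rel = (t2 - t1) - dt                  # relative time difference Δt_i
    i = round(rel * f1 * f2 / (f1 - f2))  # group number of the pair
    return t1 - i / f1
```

Because i must be an integer, rounding makes the recovery tolerant to small timing errors, which is the practical appeal of the dual-frequency scheme.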
The double-frequency ranging method based on deep learning comprises the following specific steps:
Step 1: the UAV transmits two ultrasonic signal bursts of different frequencies toward the ground through the spherical omnidirectional transducer, separated by a fixed time interval, and notifies the ultrasonic positioning system of the ground base station to start timing. Both frequencies are within the transducer bandwidth, e.g. 50 kHz and 51.3 kHz;
Step 2: the ultrasonic ranging system of the ground base station receives the two acoustic signal bursts of step 1 in sequence and extracts their approximate zero-crossing times through a Schmitt shaping circuit, obtaining two corresponding groups of time data P1 = {t11, t12, t13, ..., t1m, ...} and P2 = {t21, t22, t23, ..., t2n, ...}; the error calibration time data Q1 = {t'11, t'12, t'13, ..., t'1m, ...} and Q2 = {t'21, t'22, t'23, ..., t'2n, ...} of the two signals are extracted by an inverting-comparator shaping circuit. In the subscripts of the time data, the first index labels the acoustic signal and the second labels the data sequence;
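The two shaping stages and the calibration can be emulated in software as a rough sketch of the idea rather than of the patent's circuits; the threshold of 0.2, the sampling rate and the clean-sine signal model used in the test are assumptions. Averaging each Schmitt time with its paired calibration time and subtracting half a period cancels the threshold-induced delay.

```python
import math

def schmitt_times(samples, dt, v_hi, v_lo):
    # Rising edges of a software Schmitt trigger: after the first
    # crossing (armed initially), the input must drop below v_lo
    # before the next crossing above v_hi is reported.
    times, armed = [], True
    for k, v in enumerate(samples):
        if v < v_lo:
            armed = True
        elif armed and v > v_hi:
            times.append(k * dt)
            armed = False
    return times

def inverting_times(samples, dt, v_th):
    # Rising edges of an inverting comparator with the same threshold:
    # reported when the input falls back below v_th.
    times = []
    for k in range(1, len(samples)):
        if samples[k - 1] >= v_th and samples[k] < v_th:
            times.append(k * dt)
    return times

def calibrated_zero_crossings(samples, dt, f, v_th=0.2):
    # Average each Schmitt time with its paired calibration time and
    # subtract half a period; the threshold delay cancels out.
    p = schmitt_times(samples, dt, v_th, -v_th)
    q = inverting_times(samples, dt, v_th)
    return [(t + tq - 0.5 / f) / 2 for t, tq in zip(p, q)]
```

For a clean sinusoid the calibrated times land on the true upward zero crossings to within one sample period, which mirrors the role of the error calibration data Q1 and Q2 above.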
Step 3: the approximate zero crossing corresponding to the j-th datum in P1 of step 2 does not necessarily correspond, on the waveform, to the j-th datum in P2, i.e. t1j and t2j are not necessarily correlated; in addition, the zero crossings obtained by the Schmitt shaping method are only approximate zero crossings. The ranging system therefore uses the time data P1, P2, Q1 and Q2 and obtains the group of data with the highest reliability, (t_{1mγ}, t_{2nγ}), through the deep learning algorithm;
Step 4: the propagation time of the ultrasonic wave is calculated from the data (t_{1mγ}, t_{2nγ}) of step 3 by the formula

TOF = t*_{1mγ} − γ/f1, with t*_{1mγ} = (t_{1mγ} + t'_{1mγ} − 0.5/f1)/2, t*_{2nγ} = (t_{2nγ} + t'_{2nγ} − 0.5/f2)/2 and Δt_γ = t*_{2nγ} − t*_{1mγ} − Δt,

where t_{1mγ} and t_{2nγ} are respectively the times of the γ-th group of correlated approximate zero crossings in the f1 and f2 signals, t'_{1mγ} and t'_{2nγ} are their corresponding error calibration times, Δt is the time interval between the two transmissions, Δt_γ is the relative time difference between the γ-th group of correlated approximate zero crossings, and γ is the group number of the correlated zero crossings.
The deep learning algorithm mainly adopts a BP neural network as the deep learning network model, with training samples drawn from experimental data.
The training sample acquisition method is as follows: a group of P1 and P2 obtained experimentally in advance are combined pairwise, i.e. {t11, t12, t13, ..., t1m, ...} and {t21, t22, t23, ..., t2n, ...} are combined into P = {t11 t21; t11 t22; t11 t23; ...; t1m t2n; ...}; then P, Q1 and Q2 are used to form the matrix X whose columns are the vectors [t_{12-mn}; t_{1-m}; t_{2-n}], in which t_{12-mn} = 1/(1 + e^(−α(k − β))), k is the zero-crossing group number implied by the pair (t1m, t2n), α is an empirical parameter, β is the number of cycles (usually a constant value) corresponding to the maximum amplitude of the received signal, t_{1-m} = |t'1m − t1m − 0.5/f1|/2 and t_{2-n} = |t'2n − t2n − 0.5/f2|/2. Thus each column of X is an input feature vector: its first row is the feature value of the data pair, and its second and third rows are respectively the error influence of the Schmitt shaping on the zero-crossing times of the two acoustic signals (the larger the value, the larger the error). Finally, the 1-dimensional output (reliability) corresponding to each feature vector is set by human experience, with a value between 0 and 1 (a larger value indicates a higher probability that the pair is used to calculate the TOF).
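The construction of the feature matrix X can be sketched as follows. The closed-form estimate of the group number k (taken here from the dual-frequency relation) and all numeric parameters in the test are assumptions of this illustration.

```python
import math

def feature_matrix(P1, P2, Q1, Q2, f1, f2, dt, alpha, beta):
    """Build one input feature vector [t12_mn, t1_m, t2_n] per pair
    (t1m, t2n), as described above.  The group-number estimate k is
    derived from the dual-frequency relation (an assumption of this
    sketch, not the patent's exact definition)."""
    cols = []
    for t1, tq1 in zip(P1, Q1):
        for t2, tq2 in zip(P2, Q2):
            k = ((t2 - t1) - dt) * f1 * f2 / (f1 - f2)
            t12 = 1.0 / (1.0 + math.exp(-alpha * (k - beta)))  # pair feature
            e1 = abs(tq1 - t1 - 0.5 / f1) / 2                  # Schmitt error, signal 1
            e2 = abs(tq2 - t2 - 0.5 / f2) / 2                  # Schmitt error, signal 2
            cols.append([t12, e1, e2])
    return cols
```

Each returned column is one candidate zero-crossing pairing; the network then scores every column with a reliability in [0, 1].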
The BP neural network model is realized as follows: firstly, because the numerical values of input samples are different in size and have large difference, input data normalization processing is required; secondly, constructing a network and initializing the network, setting the number of neurons of an input layer to be 3, a hidden layer to be 10 and an output layer to be 1, and initializing a weight w, a threshold b, a learning rate, a minimum error of a training target and a maximum allowable training step number; then, training a deep learning network, putting a large number of input and output training samples into the network, training weights and thresholds in the network, and finally obtaining a deep learning network model with each new weight and threshold when the error is within an acceptable range or exceeds the limit of training times; and finally, performing network test, comparing the test result with an expected value so as to evaluate the training result of the network model and correct the network parameters. Therefore, a deep learning detection model of the relevant points can be obtained.
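A minimal pure-Python 3-10-1 BP network in the spirit of the description above might look as follows. The initialization scheme, learning rate and the toy training data in the usage are assumptions; the sketch merely illustrates the forward pass and the gradient updates.

```python
import math, random

random.seed(0)  # deterministic initialization for the sketch

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

class BPNet:
    """Minimal 3-10-1 BP network matching the layer sizes in the text.
    Plain per-sample gradient descent; hyperparameters are assumptions."""
    def __init__(self, n_in=3, n_hid=10, lr=0.5):
        self.lr = lr
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.y = sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)
        return self.y

    def train(self, x, target):
        y = self.forward(x)
        dy = (y - target) * y * (1 - y)            # output-layer delta
        for j, h in enumerate(self.h):
            dh = dy * self.w2[j] * h * (1 - h)     # hidden-layer delta (old w2)
            self.w2[j] -= self.lr * dy * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= self.lr * dh * xi
            self.b1[j] -= self.lr * dh
        self.b2 -= self.lr * dy
        return 0.5 * (y - target) ** 2             # squared-error loss
```

After repeated training on labelled feature vectors, `forward` returns the reliability score used to rank the candidate zero-crossing pairs.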
The deep learning algorithm comprises the following steps:
Step 1: set the parameters of the neural network, including the number of hidden layers, the number of neurons in each layer, the weights w and the thresholds b, and construct the basic neural network model;
Step 2: take the first five entries of each of the two groups of time data and extract feature vectors as the input to be detected, following the training-sample input method described above;
Step 3: compute the output matrix corresponding to the feature vectors with the neural network model;
Step 4: traverse the output matrix of step 3, find the maximum value, and record its position in the array;
Step 5: from step 4, obtain the group of time data with the highest credibility
Figure BDA0001233529440000091
and thereby the propagation time.
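Steps 3-5 can be condensed into a few lines. In this sketch, `scores` stands in for the network outputs arranged per candidate pair; the name and layout are illustrative.

```python
def most_credible_pair(P1, P2, scores):
    """scores[i][j] is the network's confidence for the pair
    (P1[i], P2[j]); scan the output matrix for its maximum and
    return the corresponding pair of zero-crossing times."""
    best_score, best_i, best_j = max(
        (scores[i][j], i, j) for i in range(len(P1)) for j in range(len(P2)))
    return P1[best_i], P2[best_j]
```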
The core of the ultrasonic ranging algorithm is the deep learning algorithm. In correlation-point detection, the deep learning network model can describe a complex nonlinear relation and fuses the detection error of the Schmitt shaping method, which improves the accuracy and robustness of correlation-point detection. As shown in FIG. 5, the predicted output of the BP neural network model is very close to the expected output.
The ultrasonic ranging system is composed of a signal processing module and a data processing module, as shown in FIG. 6.
The signal processing module consists of a pre-stage processing circuit, a filter circuit, a program-controlled amplifying circuit, a Schmitt shaping circuit and an inverting-comparator shaping circuit. The signal captured by the ultrasonic receiver is weak, usually only a few millivolts to a few tens of millivolts, and is mixed with ambient interference. The received signal therefore needs processing before it is sent to the Schmitt shaping circuit and the inverting-comparator circuit. First, the acoustic signal is amplitude-limited and given a first-stage amplification by the pre-stage processing circuit. The signal then passes to a filter circuit with a high quality factor and a deep notch, which filters out most of the interference. Finally, the filtered signal is sent to the program-controlled amplifying circuit. After this preprocessing, the Schmitt shaping circuit and the inverting-comparator circuit receive the output of the program-controlled amplifying circuit simultaneously. In this module, the program-controlled amplifying circuit can adjust its gain to the situation at hand, improving the robustness of the ranging; it is placed after the filter circuit so that interference is removed before further amplification can degrade the ranging performance; and the Schmitt shaping circuit and the inverting-comparator shaping circuit must use equal thresholds.
The data processing module consists of a TDC-GP21, an MCU, an external sound-velocity calibration module and a main control computer. The TDC-GP21 extracts, with high precision, the rising-edge times of the output signals of the Schmitt shaping circuit and the inverting-comparator circuit and transmits them to the MCU. The external sound-velocity calibration module calibrates the speed of sound in real time, using the time taken by an ultrasonic pulse to travel a known distance, and transmits this time data to the MCU. The MCU preprocesses the data from the TDC-GP21 and the calibration module and passes it to the main control computer, which performs the deeper processing and transmits the results to the UAV.
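The sound-velocity calibration logic is simple enough to sketch. The function name and arguments are illustrative, but the idea follows the text: time a pulse over a known distance to obtain the current speed of sound, then convert measured times of flight into distances.

```python
def calibrated_distance(tof, t_ref, d_ref):
    """Convert a measured time of flight into a distance using the
    externally calibrated speed of sound: the calibration module times
    an ultrasonic pulse over the known distance d_ref (taking t_ref)."""
    c = d_ref / t_ref   # current speed of sound, m/s
    return c * tof
```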
Based on the above ranging algorithm and ranging system, the specific steps of ultrasonic positioning are as follows:
Step 1: the UAV sends two ultrasonic signals of different frequencies toward the ground through the spherical omnidirectional transducer at the same time interval, and notifies the ultrasonic positioning system of the ground base station to start timing.
Step 2: after a period of time, the ultrasonic receiver array receives the acoustic signals of the two frequencies in sequence.
Step 3: the acoustic signals of step 2 are passed to the signal processing modules of the ultrasonic receiver array, yielding two corresponding square-wave signals per receiver.
Step 4: the two square-wave signals of step 3 are passed to the ultrasonic data processing module, which obtains the distance between the UAV and the ultrasonic receiver array by the deep-learning-based dual-frequency ranging method, and from it the UAV's position coordinates, distance to the landing point, height and attitude angle, which are transmitted to the UAV.
Step 5: the UAV autopilot system collects the parameters provided by the sensors according to the input control commands, generates control commands according to the configured control method and logic, and carries them out through the actuators.
Step 6: steps 1-5 are repeated until the positioning error is within the allowable range.
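The patent does not spell out how the three selected receiver distances become coordinates; a standard three-sphere intersection would serve, assuming the three receivers lie in the base-station plane z = 0 with the base-station center as the coordinate origin:

```python
import math

def locate(p1, p2, p3, d1, d2, d3):
    """Compute the UAV position (x, y, z), z >= 0, from three receiver
    positions p_i = (x_i, y_i) on the base-station plane and the three
    measured distances d_i (sketch; a standard trilateration, not the
    patent's own formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the sphere equations pairwise eliminates z and
    # leaves a 2x2 linear system in x and y.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    z2 = d1**2 - (x - x1)**2 - (y - y1)**2
    return x, y, math.sqrt(max(z2, 0.0))  # take the above-ground root
```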
Landing step based on image recognition and gravity
The main purpose of accurate landing based on image recognition and gravity is to make the 4 universal wheels of the UAV land precisely at the bottoms of the 4 designated arc concave surfaces. After the UAV is positioned centrally above the ground base station, it begins to descend slowly. Image-recognition-based positioning fine-tunes the UAV's aerial position and enhances the positioning accuracy during the descent. Gravity-based positioning lets the 4 universal wheels slide down along the corresponding 4 smooth arc concave surfaces and finally come to rest exactly at their bottoms, as shown in fig. 8. The electronic compass adjusts the orientation of the 4 universal wheels so that the body faces due north when the UAV lands, placing the 4 universal wheels at the southeast, northeast, southwest and northwest positions respectively. This prevents a wheel's landing point from falling on the junction surface between the 4 arc concave surfaces, where it could not slide down.
The image identification specifically comprises image acquisition, perspective transformation, dynamic-threshold binarization, circle detection and circle-center positioning. The image is acquired by a pan-tilt camera directly below the body; the pan-tilt adjusts the camera's attitude in real time and keeps it pointing vertically downward, stabilizing the image. The perspective transformation corrects distorted images. The dynamic-threshold binarization extracts the outlines of the 4 large ring icons and the central small ring icon and separates background from foreground, simplifying the subsequent detection. The circle detection identifies the circular icon outlines in the image. The circle-center positioning comprises coarse positioning and fine positioning.
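The coarse circle-center positioning used when the UAV is far from the base station (the intersection of the cross connecting lines of the 4 large-ring centers) reduces to a line-intersection computation. This sketch assumes the detected centers are ordered so that (c1, c3) and (c2, c4) are the diagonal pairs:

```python
def cross_intersection(c1, c2, c3, c4):
    """Return the intersection of the diagonals c1-c3 and c2-c4,
    taken as the base-station center (coarse positioning sketch)."""
    (x1, y1), (x3, y3) = c1, c3
    (x2, y2), (x4, y4) = c2, c4
    dx13, dy13 = x3 - x1, y3 - y1
    dx24, dy24 = x4 - x2, y4 - y2
    # Solve c1 + s*(c3 - c1) = c2 + t*(c4 - c2) for the parameter s.
    det = dx13 * dy24 - dy13 * dx24
    s = ((x2 - x1) * dy24 - (y2 - y1) * dx24) / det
    return x1 + s * dx13, y1 + s * dy13
```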
The specific steps of accurate landing based on image recognition and gravity are as follows:
Step 1: the UAV performs azimuth calibration through the electronic compass;
Step 2: the UAV camera acquires an image of the ground base station;
Step 3: when the UAV is far from the ground base station, the processor performs image recognition on the image of step 2 to obtain the positions of the 4 large circle centers, the intersection of the cross connecting lines of the 4 centers being the center position of the ground base station; when the UAV is close to the ground base station, the processor recognizes the image of step 2 to obtain the center of the small middle circle, which is the center position of the ground base station;
Step 4: from the relative position of the ground-base-station center in the image of step 3, the processor derives the UAV's control parameters so that the base-station center moves toward the exact center of the image.
Step 5: steps 1-4 are repeated until the UAV lands on the ground base station, after which step 6 is executed;
Step 6: under gravity, the UAV slides down automatically along the 4 smooth arc concave surfaces and stops at the 4 fixed points, i.e. the 4 universal wheels are seated at the bottoms of the 4 smooth arc concave surfaces.
Detection step based on photoelectric switch positioning
The main purpose of the positioning detection step is to check whether the UAV has stopped accurately at the specified position. The detection sensors are the photoelectric switches located in the lower half of each positioning calibration device, as shown in figs. 7 and 8. Positioning success is judged by detecting whether all 4 photoelectric switches are closed. If they are all closed, all universal wheels are seated and the positioning has succeeded; if not, the positioning has failed, the UAV receives a positioning-failure signal, takes off, and repositions.
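The success criterion reduces to checking that all four switches report closed:

```python
def positioning_succeeded(switch_closed):
    """Positioning succeeds only if every one of the 4 photoelectric
    switches reports closed (each universal wheel fully seated)."""
    return len(switch_closed) == 4 and all(switch_closed)
```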

Claims (7)

1. A positioning method of a UAV ground base station based on multi-information fusion, the UAV comprising N universal wheels for landing, N ≥ 3, characterized in that ultrasonic receivers are arranged at the 4 corners of the ground base station and connected to an ultrasonic ranging system; the intersection of the two diagonals of the ground base station is the base station center; at the base station center there is one small image-recognition-object icon surrounded by N large image-recognition-object icons; within the region of each large icon there is one positioning calibration device, and the orientations of the N positioning calibration devices respectively match the orientations of the N universal wheels of the UAV; the UAV further carries an electronic compass, a spherical omnidirectional ultrasonic transducer for emitting ultrasonic waves in all directions, and a pan-tilt camera pointing vertically toward the ground;
the positioning method comprises the following steps:
in a first step, the UAV computes its current position with the onboard navigation and positioning module and sends a wireless signal and data to the ground base station; after receiving the signal, the ground base station permits the UAV to land and sends the azimuth information of the base station position to the UAV, and the UAV control module controls the UAV to fly to the approximate position of the base station according to the azimuth information;
in a second step, the UAV sends, through the spherical omnidirectional transducer and at the same time interval, two ultrasonic signals of frequencies f1 and f2 to the ground base station; the ultrasonic ranging system determines the ultrasonic propagation time TOF of each of the 4 ultrasonic receivers by the dual-frequency ranging method, randomly selects the TOFs of 3 receivers as a group, and computes the UAV's coordinates with the base station center as the coordinate origin; according to the computed coordinates, the UAV is controlled to fly to the base station center and hover at a set height; in this step:
the specific steps of determining the ultrasonic propagation time TOF of any one ultrasonic receiver by the dual-frequency ranging method are:
Step 2.1: after receiving, at the current ultrasonic receiver, the ultrasonic signals of frequencies f1 and f2, the ultrasonic ranging system extracts the approximate zero-crossing times of the two signals, obtaining two groups of time data P1 = {t11, t12, t13, ..., t1m, ...} and P2 = {t21, t22, t23, ..., t2n, ...}, where t1m is the m-th extracted approximate zero-crossing time of the signal of frequency f1, whose relation to the time
Figure FDA0002349761060000011
of the corresponding m-th zero crossing of that signal is unknown, and t2n is the n-th extracted approximate zero-crossing time of the signal of frequency f2, whose relation to the time
Figure FDA0002349761060000021
of the corresponding n-th zero crossing of that signal is unknown; next, the error calibration time data of P1 and P2 are extracted, giving Q1 = {t′11, t′12, t′13, ..., t′1m, ...} and Q2 = {t′21, t′22, t′23, ..., t′2n, ...}, where t′1m and t′2n are the error calibration times of t1m and t2n respectively;
Step 2.2: the time data in P1 and P2 are paired to form multiple time-data groups; combining Q1 and Q2, the time-data group with the highest credibility is obtained through the deep learning algorithm; supposing that the most credible time-data group corresponds to the γ-th group among all correlated zero-crossing groups, the times of the correlated approximate zero crossings are denoted
Figure FDA0002349761060000022
and
Figure FDA0002349761060000023
Step 2.3: compute the ultrasonic propagation time TOF:
Figure FDA0002349761060000024
where
Figure FDA0002349761060000025
and
Figure FDA0002349761060000026
are the times corresponding to the acoustic signals of frequencies f1 and f2 at the γ-th group of correlated approximate zero crossings,
Figure FDA0002349761060000027
is the error calibration time corresponding to
Figure FDA0002349761060000028
,
Figure FDA0002349761060000029
is the error time corresponding to
Figure FDA00023497610600000210
, Δt is the time interval between the two transmissions,
Figure FDA00023497610600000211
is the relative time difference between the γ-th group of correlated approximate zero crossings, and γ is the group number of the correlated zero crossings;
the deep learning algorithm adopts a BP neural network as the deep learning network model;
in a third step, after the UAV hovers above the base station center, it begins to descend slowly; during the descent, the orientation of the UAV's universal wheels is adjusted through the electronic compass so that the body faces the set azimuth when the UAV lands; meanwhile, the pan-tilt camera captures the small and large image-recognition-object icons to obtain the base station center, the UAV's control parameters are derived from the relative position of the base station center in the image obtained by the pan-tilt camera in real time, and the UAV is controlled so that the base station center moves toward the exact center of that image; finally the UAV lands on the ground base station, and its N universal wheels fall into the N positioning calibration devices under gravity.
2. The positioning method of a UAV ground base station based on multi-information fusion according to claim 1, characterized in that the small image-recognition-object icon is a small ring icon and the large image-recognition-object icon is a large ring icon.
3. The positioning method of a UAV ground base station based on multi-information fusion according to claim 1, characterized in that the upper half of the positioning calibration device is an arc concave surface sunken toward its center, matched to the UAV's universal wheel, and achieves accurate positioning by gravity; the lower half of the positioning calibration device is cylindrical, matched to the size of the universal wheel, and serves to hold the universal wheel in place.
4. The positioning method of a UAV ground base station based on multi-information fusion according to claim 3, characterized in that opposed photoelectric switches are mounted on the wall of the lower half of the positioning calibration device for positioning detection after positioning ends; in the third step, after the UAV lands on the ground base station, the positioning succeeds if all the photoelectric switches are closed, and otherwise the positioning fails.
5. The positioning method of a UAV ground base station based on multi-information fusion according to claim 1, characterized in that in step 2.1, the approximate zero-crossing times of the two ultrasonic signals are extracted by a Schmitt shaping circuit, and the error calibration time data of P1 and P2 are extracted by an inverting-comparator shaping circuit.
6. The positioning method of a UAV ground base station based on multi-information fusion according to claim 5, characterized in that the training samples of the deep learning network model are acquired as follows: two groups of time data P1 = {t11, t12, t13, ..., t1m, ...} and P2 = {t21, t22, t23, ..., t2n, ...} are obtained by experiment with the same method as step 2.1; the two groups are combined pairwise to obtain P = {t11t21; t11t22; t11t23; ...; t1mt2n; ...}; then P, Q1 and Q2 form the matrix
Figure FDA0002349761060000031
in which t12-mn = 1/(1+e−α(k−β)),
Figure FDA0002349761060000032
α is an empirical parameter, β is the number of cycles corresponding to the maximum amplitude of the received signal, t1-m = |t′1m − t1m − 0.5/f1|/2 and t2-n = |t′2n − t2n − 0.5/f2|/2; each column of X is thus one input feature vector, whose first row is the feature value of the data and whose second and third rows are the error contributions of Schmitt shaping to the zero-crossing times of the two acoustic signals; finally, the 1-dimensional output corresponding to each feature vector is set from human experience to a value between 0 and 1.
7. The positioning method of a UAV ground base station based on multi-information fusion according to claim 6, characterized in that the deep learning network model is realized as follows: first, the input data are normalized; second, the network is constructed and initialized, with 3 input-layer neurons, 10 hidden-layer neurons and 1 output-layer neuron, and the weights w, thresholds b, learning rate, minimum training-target error and maximum allowed number of training steps are initialized; then the deep learning network is trained by feeding a large number of input-output training samples into the network to train its weights and thresholds, and when the error is within the acceptable range or the training-step limit is exceeded, a deep learning network model with the new weights and thresholds is obtained; finally, a network test is performed and the test results are compared with the expected values in order to evaluate the training result of the network model and correct the network parameters.
CN201710109786.4A 2017-02-27 2017-02-27 A positioning method of UAV ground base station based on multi-information fusion Expired - Fee Related CN107018522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710109786.4A CN107018522B (en) 2017-02-27 2017-02-27 A positioning method of UAV ground base station based on multi-information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710109786.4A CN107018522B (en) 2017-02-27 2017-02-27 A positioning method of UAV ground base station based on multi-information fusion

Publications (2)

Publication Number Publication Date
CN107018522A CN107018522A (en) 2017-08-04
CN107018522B true CN107018522B (en) 2020-05-26

Family

ID=59440597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710109786.4A Expired - Fee Related CN107018522B (en) 2017-02-27 2017-02-27 A positioning method of UAV ground base station based on multi-information fusion

Country Status (1)

Country Link
CN (1) CN107018522B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10852427B2 (en) * 2017-06-30 2020-12-01 Gopro, Inc. Ultrasonic ranging state management for unmanned aerial vehicles
CN107902049A (en) * 2017-10-30 2018-04-13 上海大学 The autonomous fuel loading system of unmanned boat based on image and laser sensor
CN109125004A (en) * 2018-09-26 2019-01-04 张子脉 A kind of supersonic array obstacle avoidance apparatus, method and its intelligent blind crutch
CN110018239B (en) * 2019-04-04 2022-07-08 珠海一微半导体股份有限公司 Carpet detection method
CN110155350B (en) * 2019-04-23 2022-07-26 西北大学 Control method of unmanned aerial vehicle landing device
CN110287271B (en) * 2019-06-14 2021-03-16 南京拾柴信息科技有限公司 Method for establishing association matrix of wireless base station and regional geographic ground object
CN110758136A (en) * 2019-09-23 2020-02-07 广西诚新慧创科技有限公司 Charging parking apron and unmanned aerial vehicle charging system
CN112068160B (en) * 2020-04-30 2024-03-29 东华大学 Unmanned aerial vehicle signal interference method based on navigation positioning system
CN112558626A (en) * 2020-11-11 2021-03-26 安徽翼讯飞行安全技术有限公司 Air control system for small civil unmanned aerial vehicle
CN112649570A (en) * 2020-12-11 2021-04-13 河海大学 Tail gas detection device and method based on infrared thermal imaging double vision and ultrasonic positioning
CN112835021B (en) * 2020-12-31 2023-11-14 杭州海康威视数字技术股份有限公司 Positioning method, device, system and computer readable storage medium
CN112731442B (en) * 2021-01-12 2023-10-27 桂林航天工业学院 An adjustable surveying instrument for UAV surveying and mapping
CN116700354B (en) * 2023-08-01 2023-10-17 众芯汉创(江苏)科技有限公司 Spatial position checking and judging method based on visible light data

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102883430A (en) * 2012-09-12 2013-01-16 南京邮电大学 Range-based wireless sensing network node positioning method
CN105182994A (en) * 2015-08-10 2015-12-23 普宙飞行器科技(深圳)有限公司 Unmanned-aerial-vehicle fixed-point landing method
CN106168808A (en) * 2016-08-25 2016-11-30 南京邮电大学 A kind of rotor wing unmanned aerial vehicle automatic cruising method based on degree of depth study and system thereof
CN106184786A (en) * 2016-08-31 2016-12-07 马彦亭 A kind of automatic landing system of unmanned plane and method
CN106200677A (en) * 2016-08-31 2016-12-07 中南大学 A kind of express delivery delivery system based on unmanned plane and method
CN106444824A (en) * 2016-05-23 2017-02-22 重庆零度智控智能科技有限公司 UAV (unmanned aerial vehicle), and UAV landing control method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9321531B1 (en) * 2014-07-08 2016-04-26 Google Inc. Bystander interaction during delivery from aerial vehicle

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN102883430A (en) * 2012-09-12 2013-01-16 南京邮电大学 Range-based wireless sensing network node positioning method
CN105182994A (en) * 2015-08-10 2015-12-23 普宙飞行器科技(深圳)有限公司 Unmanned-aerial-vehicle fixed-point landing method
CN106444824A (en) * 2016-05-23 2017-02-22 重庆零度智控智能科技有限公司 UAV (unmanned aerial vehicle), and UAV landing control method and device
CN106168808A (en) * 2016-08-25 2016-11-30 南京邮电大学 A kind of rotor wing unmanned aerial vehicle automatic cruising method based on degree of depth study and system thereof
CN106184786A (en) * 2016-08-31 2016-12-07 马彦亭 A kind of automatic landing system of unmanned plane and method
CN106200677A (en) * 2016-08-31 2016-12-07 中南大学 A kind of express delivery delivery system based on unmanned plane and method

Non-Patent Citations (1)

Title
Research on target positioning technology based on ground base stations; Yao Jinjie; China Doctoral Dissertations Full-text Database; 2011-10-31; full text *

Also Published As

Publication number Publication date
CN107018522A (en) 2017-08-04

Similar Documents

Publication Publication Date Title
CN107018522B (en) A positioning method of UAV ground base station based on multi-information fusion
CN111860589B (en) A multi-sensor multi-target cooperative detection information fusion method and system
Patruno et al. A vision-based approach for unmanned aerial vehicle landing
CN104807460B (en) Unmanned plane indoor orientation method and system
Gao et al. Cooperative localization and navigation: theory, research, and practice
CN106527481A (en) Unmanned aerial vehicle flight control method, device and unmanned aerial vehicle
Yang et al. Panoramic UAV surveillance and recycling system based on structure-free camera array
Yuan et al. Mmaud: A comprehensive multi-modal anti-uav dataset for modern miniature drone threats
CN108896957A (en) The positioning system and method in a kind of unmanned plane control signal source
CN108363041B (en) Unmanned aerial vehicle sound source positioning method based on K-means clustering iteration
WO2021007855A1 (en) Base station, photo-control-point positioning method, electronic device and computer readable medium
Kwak et al. Emerging ICT UAV applications and services: Design of surveillance UAVs
Basiri et al. Localization of emergency acoustic sources by micro aerial vehicles
CN107087441B (en) A kind of information processing method and its device
Springer et al. Autonomous drone landing with fiducial markers and a gimbal-mounted camera for active tracking
CN112394744A (en) Integrated unmanned aerial vehicle system
Ching et al. Ultra-wideband localization and deep-learning-based plant monitoring using micro air vehicles
JP2017053687A (en) Flying object position calculation system, flying object position calculation method, and flying object position calculation program
CN205644279U (en) Tracking system based on radio side direction technique
Cao et al. Research on application of computer vision assist technology in high-precision UAV navigation and positioning
Marut et al. Visual-based landing system of a multirotor UAV in GNSS denied environment
Teague et al. Time series classification of radio signal strength for qualitative estimate of UAV motion
CN113960581A (en) Unmanned aerial vehicle target detection system applied to transformer substation and combined with radar
CN114322943A (en) A method and device for measuring target distance based on front-view image of unmanned aerial vehicle
Zeng et al. Dual-channel LIDAR searching, positioning, tracking and landing system for rotorcraft from ships at sea

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200526