Disclosure of Invention
The technical problem to be solved by the invention is to enable an unmanned aerial vehicle to automatically locate the landing point of a ground base station and land accurately.
In order to solve the above technical problem, the technical scheme of the invention provides a positioning method for an unmanned aerial vehicle ground base station based on multi-information fusion, wherein the unmanned aerial vehicle comprises N universal wheels for landing, N being greater than or equal to 3. The positioning method is characterized in that ultrasonic receivers are arranged at the 4 corners of the ground base station and are connected with an ultrasonic ranging system; the intersection of the two diagonals of the ground base station is the base station center; 1 small image-recognition icon is arranged at the base station center, and N large image-recognition icons surround the small image-recognition icon; 1 positioning and calibrating device is arranged in the area of each large image-recognition icon, and the orientations of the N positioning and calibrating devices respectively match the orientations of the N universal wheels of the unmanned aerial vehicle;
the unmanned aerial vehicle is also provided with an electronic compass, a spherical omnidirectional ultrasonic transducer for transmitting ultrasonic waves omnidirectionally, and a pan-tilt camera, the pan-tilt camera facing vertically toward the ground;
the positioning method comprises the following steps:
in the first step, the unmanned aerial vehicle calculates its current position using the airborne navigation and positioning module and sends a wireless signal and data to the ground base station; after receiving the signal, the ground base station permits the unmanned aerial vehicle to land and sends the azimuth information of the base station position to the unmanned aerial vehicle; the unmanned aerial vehicle control module controls the unmanned aerial vehicle to fly to the approximate position of the base station according to the azimuth information;
in the second step, the unmanned aerial vehicle transmits to the ground base station, through the spherical omnidirectional transducer and separated by a set time interval, two ultrasonic signal bursts with frequencies f1 and f2 respectively. The ultrasonic ranging system determines the ultrasonic propagation time TOF at each of the 4 ultrasonic receivers by the dual-frequency ranging method, selects the TOFs of any 3 ultrasonic receivers as a group, and calculates the coordinates of the unmanned aerial vehicle with the base station center as the origin of coordinates. According to the calculated coordinates, the unmanned aerial vehicle is controlled to fly to the base station center and hover at a set height. In this step:
the specific procedure for determining the ultrasonic propagation time TOF at any one ultrasonic receiver by the dual-frequency ranging method is as follows: the ultrasonic ranging system determines the propagation time from the relative time difference between the correlated zero crossings of the two ultrasonic signals of frequencies f1 and f2 received by the current ultrasonic receiver, wherein the group of correlated zero crossings with the highest reliability between the two signals is selected by a deep learning algorithm;
in the third step, after hovering above the base station center, the unmanned aerial vehicle begins a slow descent. During the descent, the orientation of the universal wheels is adjusted by means of the electronic compass so that the body of the unmanned aerial vehicle faces the set direction when it lands on the base station. Meanwhile, the pan-tilt camera acquires the small and large image-recognition icons so as to locate the base station center; control parameters are derived in real time from the relative position of the base station center within the image acquired by the pan-tilt camera, and the unmanned aerial vehicle is controlled so that the base station center moves toward the center of the acquired image. Finally, the unmanned aerial vehicle lands on the ground base station.
Preferably, the small image-recognition icon is a small circular icon, and the large image-recognition icons are large ring icons.
Preferably, the upper half of the positioning and calibrating device is an arc-shaped concave surface sinking toward the center; it mates with a universal wheel of the unmanned aerial vehicle and achieves accurate positioning under the action of gravity. The lower half of the positioning and calibrating device is a cylinder matched to the size of the universal wheel and is used to retain the universal wheel.
Preferably, the wall of the lower half of the positioning and calibrating device is provided with a through-beam photoelectric switch for position detection after positioning is finished; in the third step, after the unmanned aerial vehicle lands on the ground base station, positioning is successful if all the photoelectric switches are closed, and otherwise positioning has failed.
Preferably, in the second step, the method for determining the ultrasonic wave propagation time TOF of any one of the ultrasonic receivers includes the following steps:
Step 2.1: the ultrasonic ranging system processes the two ultrasonic signals of frequencies f1 and f2 received by the current ultrasonic receiver and extracts their approximate zero-crossing times, obtaining two corresponding groups of time data P1 and P2, with P1 = {t11, t12, t13, ..., t1m, ...} and P2 = {t21, t22, t23, ..., t2n, ...}, where t1m is the approximate zero-crossing time of the mth zero crossing extracted from the ultrasonic signal of frequency f1, whose correspondence to the mth true zero crossing of that signal is unknown, and t2n is the approximate zero-crossing time of the nth zero crossing extracted from the ultrasonic signal of frequency f2, whose correspondence to the nth true zero crossing of that signal is likewise unknown. The error-calibration time data of P1 and P2 are then extracted, obtaining Q1 = {t′11, t′12, t′13, ..., t′1m, ...} and Q2 = {t′21, t′22, t′23, ..., t′2n, ...}, where t′1m and t′2n are the error-calibration times of t1m and t2n respectively;
Step 2.2: the time data in P1 and P2 are paired pairwise to form a number of time data groups; combining Q1 and Q2, the group of time data with the highest reliability among these groups is obtained by the deep learning algorithm. If the group with the highest reliability corresponds to the γth group among all the correlated approximate zero-crossing groups, the corresponding correlated approximate zero-crossing times are denoted t1γ and t2γ;
Step 2.3: the ultrasonic propagation time TOF is calculated as follows: the relative time difference of the γth group is δγ = t2γ − t1γ − Δt, the group number satisfies γ = δγ·f1·f2/(f1 − f2), and

TOF = ( (t1γ + t′1γ − 0.5/f1)/2 − γ/f1 + (t2γ + t′2γ − 0.5/f2)/2 − Δt − γ/f2 ) / 2,

where t1γ and t2γ are respectively the times of the γth group of correlated approximate zero crossings of the acoustic signals of frequencies f1 and f2, t′1γ is the error-calibration time corresponding to t1γ, t′2γ is the error-calibration time corresponding to t2γ, Δt is the time interval between the two transmissions, δγ is the relative time difference between the correlated approximate zero crossings of the γth group, and γ is the group number of the correlated zero crossings.

Preferably, the deep learning algorithm in step 2.2 adopts a BP neural network as the deep learning network model.
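The γ-and-TOF relationship in steps 2.2 and 2.3 can be sketched numerically. The sketch below assumes, for illustration only, that the γth zero crossing of each burst occurs γ cycles after the burst arrives, and it omits the Schmitt error calibration; the frequencies, interval and simulated TOF are made-up values.

```python
# Sketch of dual-frequency TOF recovery. The signal model (the gamma-th
# zero crossing occurs gamma cycles after arrival) is an assumption for
# illustration, not the patented implementation.
f1, f2 = 50_000.0, 51_300.0   # burst frequencies [Hz]
dt = 0.002                    # interval between the two transmissions [s]
tof_true = 0.0045             # simulated propagation time [s]
gamma_true = 7                # cycle index of the correlated zero crossings

# Receiver-side zero-crossing times of the gamma-th correlated pair.
t1 = tof_true + gamma_true / f1
t2 = dt + tof_true + gamma_true / f2

# Relative time difference between the correlated zero crossings.
delta = t2 - t1 - dt
# Recover the cycle index, then the propagation time.
gamma = round(delta * f1 * f2 / (f1 - f2))
tof = t1 - gamma / f1
print(gamma, tof)
```

Because the two frequencies differ slightly, the drift δγ grows by a fixed amount per cycle, which is what removes the cycle ambiguity of a single-frequency phase method.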
Preferably, in step 2.1, the approximate zero-crossing times of the two ultrasonic signals are extracted by a Schmitt shaping circuit, and the error-calibration time data of P1 and P2 are extracted by an inverting-comparator shaping circuit.
Preferably, the method for obtaining the training sample of the deep learning network model comprises the following steps:
Two groups of time data P1 and P2 are obtained experimentally by the same method as in step 2.1, with P1 = {t11, t12, t13, ..., t1m, ...} and P2 = {t21, t22, t23, ..., t2n, ...}; the two groups of time data P1 and P2 are combined pairwise to obtain P = {t11 t21; t11 t22; t11 t23; ...; t1m t2n; ...}. Then P, Q1 and Q2 are used to form a matrix X whose columns are the vectors (t12-mn, t1-m, t2-n), where t12-mn = 1/(1 + e^(−α(k−β))), α is an empirical parameter, β is the number of cycles corresponding to the maximum amplitude of the received signal, t1-m = |t′1m − t1m − 0.5/f1|/2 and t2-n = |t′2n − t2n − 0.5/f2|/2. Each column of X is an input feature vector: its first row is the feature value of the data pair, and its second and third rows are the error influences of Schmitt shaping on the zero-crossing times of the two acoustic signals respectively. Finally, the 1-dimensional output corresponding to each feature vector is set by human experience, with a value between 0 and 1.
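A minimal sketch of assembling the feature matrix X described above. The time data, α, β and the pairing index k (here simply numbering the candidate pairs in order) are illustrative assumptions.

```python
import numpy as np

# Illustrative construction of the feature matrix X: one column per
# candidate zero-crossing pair, rows = (sigmoid feature, Schmitt error
# estimates for each signal). All numeric values are made up.
f1, f2 = 50_000.0, 51_300.0
alpha, beta = 1.5, 3.0

P1 = [1.00e-4, 1.20e-4, 1.40e-4]          # Schmitt zero-crossing times, signal 1
Q1 = [1.11e-4, 1.31e-4, 1.51e-4]          # inverting-comparator times, signal 1
P2 = [2.10e-3, 2.12e-3, 2.14e-3]          # Schmitt zero-crossing times, signal 2
Q2 = [2.1101e-3, 2.1301e-3, 2.1501e-3]    # inverting-comparator times, signal 2

cols = []
k = 0
for t1m, tp1m in zip(P1, Q1):
    for t2n, tp2n in zip(P2, Q2):
        k += 1
        feat = 1.0 / (1.0 + np.exp(-alpha * (k - beta)))   # t12-mn
        e1 = abs(tp1m - t1m - 0.5 / f1) / 2.0              # t1-m
        e2 = abs(tp2n - t2n - 0.5 / f2) / 2.0              # t2-n
        cols.append([feat, e1, e2])

X = np.array(cols).T   # each column is one input feature vector
print(X.shape)
```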
Preferably, the deep learning network model is implemented as follows: first, the input data are normalized; second, the network is constructed and initialized, with 3 neurons in the input layer, 10 in the hidden layer and 1 in the output layer, and the weights w, thresholds b, learning rate, minimum training-target error and maximum allowable number of training steps are initialized; then the deep learning network is trained by feeding a large number of input-output training samples into the network and training the weights and thresholds, and when the error falls within the acceptable range or the training-step limit is exceeded, a deep learning network model with updated weights and thresholds is obtained; finally, a network test is performed, and the test results are compared with the expected values so as to evaluate the training of the network model and correct the network parameters.
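The 3-10-1 network described above can be sketched in a few lines of NumPy; the synthetic training data, learning rate and iteration count below are assumptions, not the patented configuration.

```python
import numpy as np

# Minimal 3-10-1 BP (backpropagation) network matching the layer sizes
# above. The synthetic samples and hyperparameters are illustrative.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic samples: 3 input features per column, 1 target in [0, 1].
X = rng.uniform(0.0, 1.0, (3, 200))
y = sigmoid(X.sum(axis=0, keepdims=True) - 1.5)   # stand-in target

W1, b1 = rng.normal(0, 0.5, (10, 3)), np.zeros((10, 1))
W2, b2 = rng.normal(0, 0.5, (1, 10)), np.zeros((1, 1))
lr = 0.5

for _ in range(2000):
    h = sigmoid(W1 @ X + b1)          # hidden layer
    out = sigmoid(W2 @ h + b2)        # output layer
    err = out - y
    # Backpropagate the mean-squared error.
    d_out = err * out * (1 - out)
    d_h = (W2.T @ d_out) * h * (1 - h)
    W2 -= lr * d_out @ h.T / X.shape[1]
    b2 -= lr * d_out.mean(axis=1, keepdims=True)
    W1 -= lr * d_h @ X.T / X.shape[1]
    b1 -= lr * d_h.mean(axis=1, keepdims=True)

out = sigmoid(W2 @ sigmoid(W1 @ X + b1) + b2)
mse = float(((out - y) ** 2).mean())
print(mse)
```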
Due to the adoption of the technical scheme, compared with the prior art, the invention has the following advantages and positive effects:
(1) The traditional ultrasonic ranging methods are the single-threshold detection method and the phase method. In the former, the time at which the received signal crosses the threshold varies with the signal amplitude, causing measurement uncertainty; in the latter, the measuring range is small, and range ambiguity occurs once the distance exceeds the ultrasonic wavelength. The ranging method disclosed by the invention is based on two frequencies, avoids both defects and achieves accurate ranging. In addition, the ranging algorithm is combined with a BP neural network, so the correlated zero crossing can be determined from a fuzzy perspective, which simplifies and generalizes the problem; the BP neural network also fuses the measurement time error, improving the robustness of ranging. The ultrasonic positioning method based on this high-precision ranging achieves positioning with high precision and high robustness.
(2) Four positioning and calibrating devices are provided at the center of the ground base station, and the 4 wheels of the unmanned aerial vehicle are designed as universal wheels. The unmanned aerial vehicle can therefore achieve precise positioning simply through the action of gravity, avoiding complex image processing, which increases the processing speed of the processor and improves landing accuracy.
(3) The invention integrates a plurality of systems to position the ground base station of the unmanned aerial vehicle, and can further increase the rapidity, the accuracy and the robustness of positioning.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Furthermore, it should be understood that various changes and modifications can be made by those skilled in the art after reading the teachings of the present invention, and such equivalents also fall within the scope of the appended claims.
The embodiment of the invention relates to a fusion positioning method for an unmanned aerial vehicle ground base station. The positioning mainly comprises 4 steps: a GPS-based positioning step, an ultrasonic positioning step based on a high-precision ranging method, a landing step based on image processing and gravity, and a positioning detection step based on photoelectric switches. To realize these steps, the ground base station and the unmanned aerial vehicle must be specially designed and must cooperate with each other.
As shown in fig. 1, the ground base station according to the present invention is a cuboid with a length, width and height of 1 m, 1 m and 0.2 m respectively, with a dedicated control system inside. Ultrasonic receivers 1 are arranged at the 4 corners of the ground base station and together form an ultrasonic receiver array. At the center of the ground base station there are 4 large ring icons 4 and one small ring icon 3, and in the area of each large ring icon 4 there is a positioning and calibrating device. The ultrasonic receivers 1 are annular receivers that can receive ultrasonic waves omnidirectionally. The 4 large ring icons 4 and the small ring icon 3 are the objects of image recognition and are used to determine the center of the ground base station. The orientations of the 4 positioning and calibrating devices are southeast, northeast, southwest and northwest respectively, so as to match the positions of the 4 universal wheels of the unmanned aerial vehicle, as shown in fig. 2. The upper half 2 of the positioning and calibrating device is an arc-shaped concave surface sinking toward the center; it mates with a universal wheel of the unmanned aerial vehicle and achieves accurate positioning by the action of gravity. The lower half 5 of the positioning and calibrating device is a cylinder matched to the size of the universal wheel and is used to retain the universal wheel. The wall of the lower half 5 is provided with through-beam photoelectric switches for position detection after landing, as shown in figs. 7 and 8. The specific shape of the positioning and calibrating device is shown in fig. 7.
In addition to the basic control sensors, the unmanned aerial vehicle is equipped with an electronic compass sensor, a spherical omnidirectional ultrasonic transducer and a pan-tilt camera. The electronic compass sensor is located inside the body; the pan-tilt camera faces vertically toward the ground and moves to a position directly below the body when in use; the spherical ultrasonic transducer is located at the center of the bottom of the body and can transmit ultrasonic waves in all directions.
Positioning step based on GPS
The main purpose of the GPS positioning step is to bring the aircraft back from the mission area to the approximate area where the ground base station is located, roughly within a 2 m square around the ground base station. When the unmanned aerial vehicle receives the return-to-base signal, the airborne INS/GPS integrated navigation and positioning module calculates the current position of the unmanned aerial vehicle and sends a wireless signal and data to the ground base station. After receiving the signal, the ground base station permits the unmanned aerial vehicle to land and sends the base station position to the unmanned aerial vehicle. The unmanned aerial vehicle control module controls the unmanned aerial vehicle to fly to the approximate position of the base station according to the azimuth information.
Ultrasonic positioning step based on high-precision distance measurement method
The ultrasonic positioning step is the core positioning step; its main purpose is to make the unmanned aerial vehicle hover above the center of the ground base station, facilitating the subsequent precise positioning operation. After rough positioning by GPS, the unmanned aerial vehicle slows down and periodically transmits ultrasonic waves downward through the spherical omnidirectional transducer at the center of the body. The ultrasonic receiver arrays at the 4 corners of the ground base station receive the ultrasonic waves in sequence, yielding the corresponding propagation times TOF (time of flight). Taking the intersection of the two diagonals of the ground base station as the origin of coordinates, the processing system takes the TOFs of three randomly selected receivers as a group, calculates the coordinates of the unmanned aerial vehicle, and performs redundant calculations with the other combinations to reduce errors.
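The coordinate calculation from three receiver TOFs can be sketched as follows, assuming the 1 m × 1 m corner layout of the embodiment and an illustrative sound speed; the linearized sphere-intersection solver is one possible implementation, not necessarily the patented one.

```python
import numpy as np

# Sketch: recover UAV coordinates from the TOFs of three corner
# receivers, base-station centre as origin. Receiver layout follows the
# embodiment (1 m x 1 m); sound speed and UAV position are made up.
c = 343.0                                 # speed of sound [m/s]
rx = np.array([[0.5, 0.5, 0.0],
               [-0.5, 0.5, 0.0],
               [-0.5, -0.5, 0.0]])        # three of the four corner receivers

uav = np.array([0.1, -0.2, 3.0])          # simulated UAV position
tof = np.linalg.norm(rx - uav, axis=1) / c
d = c * tof                               # measured distances

# Subtracting pairs of sphere equations gives a linear system in (x, y);
# z then follows from one sphere (z > 0 above the station).
A = 2 * (rx[1:] - rx[0])[:, :2]
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(rx[1:, :2] ** 2, axis=1) - np.sum(rx[0, :2] ** 2))
xy = np.linalg.solve(A, b)
z = np.sqrt(d[0] ** 2 - np.sum((xy - rx[0, :2]) ** 2))
print(xy, z)
```

Repeating the solve with the other receiver triples and averaging is the redundant calculation mentioned above.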
Because the positioning precision is directly determined by the distance measurement precision, the invention specially designs a distance measurement algorithm and an ultrasonic distance measurement system.
The high-precision ultrasonic ranging algorithm is a dual-frequency ranging method based on deep learning, which can measure the TOF accurately. The principle of dual-frequency ranging is as follows: as shown in fig. 3, the ranging system uses the relative time difference δi = t2i − t1i − Δt between the correlated zero crossings of the received signals of the two frequencies to determine the group number i = δi·f1·f2/(f1 − f2) and hence the propagation time of the ultrasonic wave TOF = t1i − i/f1, where f1 is the frequency of the first transmitted signal, f2 is the frequency of the second transmitted signal, t1i and t2i are respectively the times of the ith group of correlated zero crossings of the signals of frequencies f1 and f2, Δf = f2 − f1 is the frequency difference between the two, Δt is the time interval between the two transmissions, and i is the group number of the correlated zero crossings. The zero-crossing time data are extracted by the Schmitt shaping method, which avoids noise interference but makes the extracted zero crossings approximate zero crossings, introducing a measurement error. This error can be calibrated with the error-calibration time data extracted by the inverting-comparator shaping circuit, as shown in fig. 4. The Schmitt shaping circuit and the inverting-comparator shaping circuit have equal thresholds, and the time data are the times corresponding to the rising edges of the waveforms output by the shaping circuits. The correlation of the approximate zero crossings is judged using the reliability output by the deep learning algorithm, which determines the group of data with the highest reliability and thus yields an accurate ultrasonic propagation time. The reliability output by the deep learning algorithm fuses the detection error of the Schmitt shaping method, which enhances the accuracy, reliability and robustness of ranging.
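The Schmitt-style rising-edge extraction described above can be sketched in software; the sampling rate, hysteresis thresholds and synthetic burst below are assumptions.

```python
import numpy as np

# Software Schmitt trigger: extract rising-edge times from a sampled
# burst. Sample rate, thresholds and burst parameters are illustrative.
fs = 2_000_000.0                      # sample rate [Hz]
f1 = 50_000.0
t = np.arange(0, 400e-6, 1 / fs)
sig = np.sin(2 * np.pi * f1 * (t - 100e-6)) * (t >= 100e-6)  # burst arrives at 100 us

hi, lo = 0.2, -0.2                    # hysteresis thresholds (equal magnitudes)
state, edges = 0, []
for i, v in enumerate(sig):
    if state == 0 and v > hi:
        state = 1
        edges.append(t[i])            # rising edge past the upper threshold
    elif state == 1 and v < lo:
        state = 0                     # re-arm on the negative half-cycle

edges = np.array(edges)
periods = np.diff(edges)              # should be close to 1/f1 = 20 us
print(len(edges), periods.mean())
```

The hysteresis is what rejects noise, and the threshold offset is why each extracted edge is only an approximate zero crossing, the error the inverting comparator is used to calibrate.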
The double-frequency ranging method based on deep learning comprises the following specific steps:
Step 1: the unmanned aerial vehicle transmits two ultrasonic signal bursts of different frequencies to the ground through the spherical omnidirectional transducer, separated by a set time interval, and notifies the ultrasonic positioning system of the ground base station to start timing. Both frequencies lie within the transducer bandwidth, for example 50 kHz and 51.3 kHz;
Step 2: the ultrasonic ranging system of the ground base station receives the two acoustic signals of step 1 in sequence and extracts their approximate zero-crossing times through the Schmitt shaping circuit, obtaining two corresponding groups of time data P1 = {t11, t12, t13, ..., t1m, ...} and P2 = {t21, t22, t23, ..., t2n, ...}; the error-calibration time data of the two signals, Q1 = {t′11, t′12, t′13, ..., t′1m, ...} and Q2 = {t′21, t′22, t′23, ..., t′2n, ...}, are extracted by the inverting-comparator shaping circuit. In the subscripts of the time data, the first index labels the acoustic signal and the second labels the data sequence;
Step 3: the approximate zero crossings corresponding to the jth data in P1 and P2 of step 2 do not necessarily correspond in the waveform, i.e. t1j and t2j are not necessarily correlated; moreover, the zero crossings obtained by the Schmitt shaping method are only approximate zero crossings. The ranging system therefore uses the time data P1, P2, Q1 and Q2 and obtains, through the deep learning algorithm, the group of data with the highest reliability, denoted t1γ, t2γ, t′1γ and t′2γ.

Step 4: the propagation time of the ultrasonic wave is calculated from the data of step 3: the relative time difference of the γth group is δγ = t2γ − t1γ − Δt, the group number satisfies γ = δγ·f1·f2/(f1 − f2), and

TOF = ( (t1γ + t′1γ − 0.5/f1)/2 − γ/f1 + (t2γ + t′2γ − 0.5/f2)/2 − Δt − γ/f2 ) / 2,

where t1γ and t2γ are respectively the times of the γth group of correlated approximate zero crossings of the acoustic signals of frequencies f1 and f2, t′1γ is the error-calibration time corresponding to t1γ, t′2γ is the error-calibration time corresponding to t2γ, Δt is the time interval between the two transmissions, δγ is the relative time difference between the correlated approximate zero crossings of the γth group, and γ is the group number of the correlated zero crossings.
The deep learning algorithm mainly adopts a BP neural network as a deep learning network model, wherein training samples come from experimental data.
The training samples are acquired as follows: a group of P1 and P2 obtained by experiment in advance are combined pairwise, i.e. {t11, t12, t13, ..., t1m, ...} and {t21, t22, t23, ..., t2n, ...} are combined into P = {t11 t21; t11 t22; t11 t23; ...; t1m t2n; ...}. Then P, Q1 and Q2 form a matrix X whose columns are the vectors (t12-mn, t1-m, t2-n), where t12-mn = 1/(1 + e^(−α(k−β))), α is an empirical parameter, β is the number of cycles (typically a constant value) corresponding to the maximum amplitude of the received signal, t1-m = |t′1m − t1m − 0.5/f1|/2 and t2-n = |t′2n − t2n − 0.5/f2|/2. Each column of X is thus an input feature vector: its first row is the feature value of the data pair, and its second and third rows are the error influences of Schmitt shaping on the zero-crossing times of the two acoustic signals respectively (the larger the value, the larger the error). Finally, the 1-dimensional output (reliability) corresponding to each feature vector is set by human experience, with a value between 0 and 1 (a larger value indicates a higher probability of being used to calculate the TOF).
The BP neural network model is realized as follows: first, because the input samples differ greatly in magnitude, the input data are normalized; second, the network is constructed and initialized, with 3 neurons in the input layer, 10 in the hidden layer and 1 in the output layer, and the weights w, thresholds b, learning rate, minimum training-target error and maximum allowable number of training steps are initialized; then the deep learning network is trained by feeding a large number of input-output training samples into the network and training the weights and thresholds, and when the error falls within the acceptable range or the training-step limit is exceeded, a deep learning network model with updated weights and thresholds is obtained; finally, a network test is performed, and the test results are compared with the expected values so as to evaluate the training of the network model and correct the network parameters. A deep learning model for correlated-point detection is thus obtained.
The deep learning algorithm comprises the following steps:
Step 1: set the parameters of the neural network, including the number of hidden layers, the number of neurons in each layer, the weights w and the thresholds b, and construct the basic neural network model;
Step 2: select the first five groups of the two groups of time data and extract feature vectors as the input data for detection, following the training-sample input method described above;
Step 3: calculate the output matrix corresponding to the feature vectors using the neural network model;
Step 4: traverse the output matrix of step 3, find the maximum value and record its position in the array;
Step 5: obtain the group of time data with the highest reliability from the position found in step 4, thereby obtaining the propagation time.
the core of the ultrasonic ranging algorithm is a deep learning algorithm. When the relevant point detection is carried out, the deep learning network model can be used for describing a complex nonlinear relation, and the detection error of the Schmitt shaping method is fused, so that the accuracy and the robustness of the relevant point detection are improved. As shown in FIG. 5, the predicted output using the BP neural network model is very close to the expected output.
The ultrasonic ranging system is composed of a signal processing module and a data processing module as shown in fig. 6.
The signal processing module consists of a pre-stage processing circuit, a filter circuit, a programmable-gain amplifier circuit, a Schmitt shaping circuit and an inverting-comparator shaping circuit. The signal captured by the ultrasonic receiver is weak, usually only a few millivolts to a few tens of millivolts, and contains ambient interference. The received signal therefore needs preprocessing before being sent to the Schmitt shaping circuit and the inverting-comparator circuit. First, the acoustic signal is amplitude-limited and pre-amplified by the pre-stage processing circuit. The signal is then passed to a filter circuit with a high quality factor and a deep notch, which largely filters out the interference. Finally, the filtered signal is sent to the programmable-gain amplifier circuit. After this preprocessing, the Schmitt shaping circuit and the inverting-comparator circuit receive the output of the programmable-gain amplifier simultaneously. In the signal processing module, the programmable-gain amplifier can adjust its gain to the specific conditions, improving the ranging robustness of the system; it is placed after the filter circuit so that interference is filtered out first and is not further amplified to the detriment of ranging performance; and the Schmitt shaping circuit and the inverting-comparator shaping circuit must have equal thresholds.
The data processing module consists of a TDC-GP21, an MCU, an external sound velocity calibration module and a main control computer. The TDC-GP21 is used for extracting the rising edge time of the output signals of the Schmitt shaping circuit and the inverse comparator circuit with high precision and transmitting the rising edge time to the MCU. The external sound velocity calibration module is used for calibrating the sound velocity in real time, calibrating by using the time for the ultrasonic wave to propagate the known distance, and transmitting the time data to the MCU. The MCU preprocesses data from the TDC-GP21 and the external sound velocity calibration module and transmits the data to the main control computer. And the main control computer performs deep processing on the data from the MCU and transmits the processed data to the unmanned aerial vehicle.
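The external sound-velocity calibration described above amounts to a simple ratio; the distances and times below are illustrative assumptions.

```python
# Sketch of the external sound-velocity calibration: the speed of sound
# is estimated from the flight time over a known fixed distance, then
# used to convert a measured TOF to range. Values are illustrative.
d_ref = 0.500                  # known calibration distance [m]
t_ref = 0.00145                # measured flight time over d_ref [s]
c = d_ref / t_ref              # calibrated speed of sound [m/s]

tof = 0.0090                   # propagation time from the TDC path [s]
distance = c * tof
print(round(c, 1), round(distance, 4))
```

Calibrating c in real time compensates for temperature and humidity, which otherwise shift the speed of sound and bias every range measurement.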
Based on the distance measurement algorithm and the distance measurement system, the specific steps of ultrasonic positioning are as follows:
step 1: the unmanned aerial vehicle sends two sections of ultrasonic signals with different frequencies to the ground through the spherical omnidirectional transducer at the same time interval and informs the ground base station of an ultrasonic positioning system to start timing.
Step 2: after a period of time, the ultrasonic receiver array will receive the acoustic signals of the two frequencies in sequence.
Step 3: the acoustic signals of step 2 are transmitted to the respective signal processing modules of the ultrasonic receiver array, and two corresponding square-wave signals are obtained for each receiver.
Step 4: the two square-wave signals of step 3 are transmitted to the ultrasonic data processing module. The module obtains the distances between the unmanned aerial vehicle and the ultrasonic receiver array by the dual-frequency ranging method based on deep learning, thereby obtaining the position coordinates of the unmanned aerial vehicle and its distance, height and attitude angle relative to the landing point, and transmits them to the unmanned aerial vehicle.
Step 5: according to the input control instructions, the automatic pilot system of the unmanned aerial vehicle collects the parameters provided by the sensors, generates control instructions according to the set control method and logic, and realizes the corresponding control through the actuators.
Step 6: steps 1-5 are repeated until the positioning error is within the allowable range.
Landing step based on image recognition and gravity
The main purpose of the precise landing based on image recognition and gravity is to make the 4 universal wheels of the unmanned aerial vehicle land accurately at the bottoms of the 4 specified arc-shaped concave surfaces. After the unmanned aerial vehicle is positioned at the center above the ground base station, it begins to descend slowly. Positioning based on image recognition fine-tunes the aerial position of the unmanned aerial vehicle and enhances the positioning accuracy during the descent. Positioning based on gravity lets the 4 universal wheels slide down the corresponding 4 smooth arc-shaped concave surfaces and finally come to rest exactly at their bottoms, as shown in fig. 8. The landing orientation of the 4 universal wheels is adjusted by the electronic compass so that the body faces due north when the unmanned aerial vehicle lands, the positions of the 4 universal wheels therefore being southeast, northeast, southwest and northwest respectively. This prevents the universal wheels from landing on the junctions between the 4 arc-shaped concave surfaces, where they could not slide down.
The image recognition specifically comprises image acquisition, perspective transformation, dynamic-threshold binarization, circle detection and circle-center positioning. The image is acquired by the pan-tilt camera directly below the body; the pan-tilt adjusts the camera attitude in real time and keeps it pointing vertically downward, stabilizing the image. The perspective transformation corrects distorted images; the dynamic-threshold binarization extracts the contours of the 4 large ring icons and the central small ring icon and separates background from foreground, facilitating the subsequent detection; the circle detection identifies the circular icon contours in the image; and the circle-center positioning comprises coarse positioning and fine positioning.
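The dynamic-threshold binarization and coarse circle-center positioning can be sketched as follows, using a synthetic ring image and a mean-based threshold as assumptions in place of real camera data.

```python
import numpy as np

# Coarse circle-centre positioning sketch: mean-based dynamic-threshold
# binarization of a synthetic ring image, then centroid of the
# foreground. Image content and threshold offset are illustrative.
h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
cy, cx = 70.0, 90.0                      # true ring centre [pixels]
r = np.hypot(yy - cy, xx - cx)
img = np.where((r > 20) & (r < 26), 200.0, 30.0)   # bright ring on dark bg

thresh = img.mean() + 10.0               # simple dynamic threshold
mask = img > thresh                      # binarized foreground

ys, xs = np.nonzero(mask)
centre = (ys.mean(), xs.mean())          # coarse centre estimate
print(centre)
```

Because a ring is symmetric about its centre, the foreground centroid gives a usable coarse centre even before any circle fitting.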
The specific steps of accurate landing based on image recognition and gravitational interaction are as follows:
Step 1: the unmanned aerial vehicle performs azimuth calibration using the electronic compass;
Step 2: the camera of the unmanned aerial vehicle acquires an image of the ground base station;
Step 3: when the unmanned aerial vehicle is far from the ground base station, the processor performs image recognition on the image of step 2 to obtain the positions of the centers of the 4 large circles, the intersection of the cross-connecting lines of the 4 centers being the center position of the ground base station; when the unmanned aerial vehicle is close to the ground base station, the processor recognizes the image of step 2 to obtain the position of the center of the central small circle, which is the center position of the ground base station;
Step 4: the processor derives the control parameters of the unmanned aerial vehicle from the relative position of the ground base station center in the image of step 3, so that the ground base station center moves toward the exact center of the image;
Step 5: steps 1-4 are repeated until the unmanned aerial vehicle lands on the ground base station, and step 6 is then executed;
Step 6: through gravity, the unmanned aerial vehicle slides down automatically along the 4 smooth arc-shaped concave surfaces and stops at the 4 set points, i.e. the 4 universal wheels are retained at the bottoms of the 4 smooth arc-shaped concave surfaces.
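The mapping in step 4 from the image-relative position of the base station center to control parameters can be sketched as a simple proportional controller; the gain, image size and deadband are illustrative assumptions.

```python
# Proportional-control sketch for step 4: map the pixel offset of the
# base-station centre from the image centre to a horizontal velocity
# command. Gain, image size and deadband are made-up values.
def velocity_command(centre_px, img_size=(480, 640), gain=0.002, deadband=5):
    cy, cx = centre_px
    off_y = cy - img_size[0] / 2.0
    off_x = cx - img_size[1] / 2.0
    if abs(off_x) < deadband and abs(off_y) < deadband:
        return (0.0, 0.0)                  # centred: just keep descending
    # Move so the base-station centre drifts toward the image centre.
    return (-gain * off_y, -gain * off_x)

print(velocity_command((300, 400)))
```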
Positioning detection step based on photoelectric switches
The main purpose of the positioning detection step is to detect whether the unmanned aerial vehicle has stopped accurately at the specified position. The detection sensors are the photoelectric switches located in the lower half of the positioning and calibrating devices, as shown in figs. 7 and 8. Successful positioning is detected by checking that all 4 photoelectric switches are closed. If they are all closed, the universal wheels are fully retained and positioning has succeeded; if not, positioning has failed, the unmanned aerial vehicle receives a positioning-failure signal, takes off and repositions.
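The detection logic above reduces to a check that all 4 photoelectric switches report closed; the boolean switch-state representation below is an assumption.

```python
# Positioning-detection sketch: success only when all 4 through-beam
# photoelectric switches are closed. Switch states are modelled as
# booleans (True = closed), an assumed representation.
def positioning_successful(switch_states):
    states = list(switch_states)
    return len(states) == 4 and all(states)

print(positioning_successful([True, True, True, True]))
print(positioning_successful([True, True, False, True]))
```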