Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
The unmanned autonomous laser movement measurement method and device according to the embodiments of the application are described below with reference to the accompanying drawings. Aiming at the problems in the background art, the application provides an unmanned autonomous laser movement measurement method, wherein the method comprises the steps of: obtaining environmental information of at least two target robots and determining viewpoint sets of the at least two target robots; evaluating the environmental information of each of the at least two target robots based on a preset factor graph and the AutoMerge framework to obtain an evaluation result; merging the environmental information of the at least two target robots according to the evaluation result and preset merging conditions to generate merged map information; calculating the minimum viewpoint set of each of the at least two target robots according to a preset coverage standard and a dynamic viewpoint rewarding mechanism; constructing a global roadmap based on the merged map information and the minimum viewpoint set; and calculating, by utilizing the global roadmap, the communication cost of each preset communication condition under a plurality of preset communication conditions, so as to control each target robot to execute the corresponding traveling operation according to the communication cost. The application integrates the advantages of the laser radars on multiple vehicles, overcomes the limitations of each single robot and of traditional methods through efficient map merging and communication navigation algorithms, and provides high-precision, high-reliability map construction and navigation solutions in various complex environments.
Therefore, the application solves the problems that existing unmanned autonomous laser detection methods fall far short of the requirements of high autonomy, adaptability and efficiency in highly dynamic and changeable environments, and are difficult to apply to fields such as automatic exploration and monitoring.
Specifically, fig. 1 is a flowchart of an unmanned autonomous laser movement measurement method according to an embodiment of the present application.
As shown in fig. 1, the unmanned autonomous laser movement measurement method includes the steps of:
in step S101, environmental information of at least two target robots is acquired, and a set of viewpoints of the at least two target robots is determined.
In the embodiment of the application, the environmental information of the robots (namely map segments), such as laser radar data, point cloud data and system IMU orientation information, can first be obtained. In the actual execution process, laser radar sensors mounted on multiple vehicles can acquire the point cloud state, motion state information and distance information in various environments; the sensors of all vehicles work cooperatively while maintaining time synchronization and spatial alignment, yielding data from a plurality of indoor and outdoor scenes.
Secondly, embodiments of the present application may construct a view-set model, as shown in fig. 2, which is specifically defined as follows:
In the viewpoint set model, W is defined as the workspace to be explored and W_trav ⊆ W as its traversable subspace. A viewpoint v ∈ SE(3) describes the pose v = [p_v, q_v] of the vehicle-mounted sensor, wherein p_v ∈ W_trav and q_v ∈ SO(3) respectively represent the position and the direction. S_v denotes the surface perceived by the sensor at v, and the union of the surfaces perceived at the viewpoints along a path L is S_L = ∪_{v∈L} S_v. Let S_cov ⊆ S_L represent the subset of perceived surfaces that have been covered so far in the exploration process; the perceived but not yet covered surfaces are then represented as S_L \ S_cov.
Embodiments of the present application then also define a workspace and traversable subspaces to describe the pose and perceived surface of the sensor through the set of viewpoints and determine the coverage path.
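The viewpoint-set definitions above can be sketched in code. This is a minimal illustration, not part of the application: the `Viewpoint` fields and the string surface identifiers are hypothetical stand-ins for the pose [p_v, q_v] and the perceived surfaces S_v.

```python
from dataclasses import dataclass

# Hypothetical sketch of the viewpoint-set model: a viewpoint is a pose
# (position p_v in W_trav, orientation q_v in SO(3)); surfaces are plain
# string identifiers here for illustration only.
@dataclass(frozen=True)
class Viewpoint:
    p_v: tuple  # position in the traversable subspace W_trav
    q_v: tuple  # orientation (e.g. a quaternion) in SO(3)

def perceived_along_path(path, perceived_at):
    """Union of the surfaces perceived at every viewpoint along a path L."""
    surfaces = set()
    for v in path:
        surfaces |= perceived_at[v]
    return surfaces

def uncovered(perceived, covered):
    """Perceived-but-not-yet-covered surfaces: S_L minus S_cov."""
    return perceived - covered
```

A path's coverage is then just the union of its viewpoints' perceived surfaces minus whatever has already been covered.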
In step S102, based on the preset factor graph and the AutoMerge framework, the environmental information of each of the at least two target robots is evaluated to obtain an evaluation result, and the environmental information of the at least two target robots is merged according to the evaluation result and a preset merging condition, so as to generate merged map information.
Further, in the embodiment of the present application, a factor graph is maintained to represent the interconnections between robots, as shown in fig. 3. The quality score of the current feature association is evaluated using the AutoMerge framework, the overlap length is verified and expanded by planning paths that validate the data association, and sub-maps are merged once a specific threshold is met, while the relative positions of the merged robots are determined; in this way, the sub-maps generated by each robot are dynamically adjusted and merged to obtain the final merged map information.
Optionally, in one embodiment of the present application, evaluating the environmental information of each of the at least two target robots based on the preset factor graph and the AutoMerge framework to obtain an evaluation result, and merging the environmental information of the at least two target robots according to the evaluation result and a preset merging condition to generate merged map information, includes: constructing a factor graph according to the environmental information of each target robot and the interconnections between the target robots; acquiring the overlap length between the environmental information of the target robots and extracting feature descriptors according to the overlap length; performing a matching operation on the environmental information of each target robot based on the AutoMerge framework, the feature descriptors and the factor graph, and generating a matching sequence pair corresponding to each target robot; performing validity verification on each matching sequence in the matching sequence pair to obtain a verification result of each matching sequence; judging whether the verification result meets a preset validity requirement, and removing the corresponding matching sequence if it does not; if the verification result meets the preset validity requirement, calculating an interconnection score of the matching sequence pair; if the interconnection score is greater than a preset merging threshold, merging the corresponding environmental information; otherwise, actively expanding the overlap and recalculating the interconnection score until the merging threshold is met or the matching sequence pair is discarded, so as to generate the merged map information.
It should be noted that, the specific process of evaluating and merging data collected by different robots through the factor graph and AutoMerge framework and merging map segments when specific conditions are satisfied in the embodiment of the present application is as follows:
In step 1, in a specific implementation process, a factor graph G = {V, E} may be maintained by MUI-TARE to represent the connections between robots: a node v ∈ V represents the map segment obtained by a robot, and an edge in E = {ω_i,j} represents the connection between segments, where ω_i,j denotes the interconnection between segments v_i and v_j and is calculated as shown in formula 1. In AutoMerge, for place recognition, the interconnection between two segments is determined by the overlap length (i.e., the number of matching frames) and the difference between their place-recognition descriptors: the smaller the descriptor difference, the higher the interconnection score, and the longer the overlap length, the more stable the interconnection.
Wherein F_i, F_j represent the stitched feature descriptors extracted from the overlapping parts, L_i,j represents the overlap length between v_i and v_j, c_w and σ are hyperparameters, c_w adjusting the dependence of ω_i,j on the overlap length, and ε is a constant avoiding division by zero. Only segments v_i, v_j whose interconnection score ω_i,j is greater than the set threshold are merged, and the relative positions of the merged robots can be determined through a transformation matrix;
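A hedged sketch of such a score follows. The exact functional form of formula 1 is not reproduced here, so the combination below is an assumption; it only mirrors the stated properties: the score grows and saturates with the overlap length L_i,j (tuned by c_w), shrinks with the descriptor distance between F_i and F_j (scaled by σ), and ε guards against division by zero.

```python
import math

# Assumed score shape, NOT the application's formula 1: a saturating
# overlap term times a Gaussian descriptor-similarity term.
def interconnection_score(F_i, F_j, L_ij, c_w=0.1, sigma=1.0, eps=1e-6):
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(F_i, F_j)))
    overlap_term = 1.0 - math.exp(-c_w * L_ij)            # longer overlap -> more stable
    similarity = math.exp(-d ** 2 / (2 * sigma ** 2 + eps))  # closer descriptors -> higher score
    return overlap_term * similarity

def should_merge(score, threshold):
    """Only segments whose score exceeds the set threshold are merged."""
    return score > threshold
```

With identical descriptors and a long overlap the score approaches 1, while distant descriptors or short overlaps suppress it, matching the behavior described for ω_i,j.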
Step 2, a stable interconnection requires a longer overlapping distance, which is difficult to achieve when the robots explore independently; a shorter matching frame sequence is therefore needed to detect overlap, while data-association errors and large transformation-matrix errors must still be avoided. The embodiment of the application therefore adopts an adaptive merging strategy with an active method to verify data associations and expand the overlap length by planning a path: once a new interconnection is detected during exploration, the robot closer to the overlap area performs adaptive merging, verifying and strengthening the interconnection by actively exploring the potential overlap area, and the merging process continues until the interconnection is proven to be a data-association error or the sub-maps are successfully merged;
Step 3, MUI-TARE periodically calls AutoMerge to match the point clouds from the robots and returns matching sequence pairs; the embodiment of the application traverses all matching sequence pairs, verifies the interconnections, calculates the transformation matrix and estimates the merging distance A_dist, calculated as shown in formula 2;
Where ω_i,j represents the current interconnection, ω_thresh represents the interconnection threshold that triggers map merging, F_i and F_j are the expected feature descriptors when the overlap length reaches the threshold, and A_dist is the estimated distance the agent needs to travel to establish a stable interconnection;
Step 4, a complete exploration path is planned by combining a local planner and a global planner: the local planner first samples the viewpoints covering the surfaces of the subspace and plans the shortest path by solving a traveling salesman problem, while the global planner computes a set of global coarse paths for the N robots in the merged sub-map, so that they travel through the "exploring" subspaces while attempting to minimize the longest travel distance among all paths, defined as follows:
Then, based on the A_dist calculated in step 3, the embodiment of the application uses an active merging planner to plan a path so that the merging agent visits A_dist frames of viewpoints of the overlapping agent, navigating with a greedy strategy to the nearest uncovered frame viewpoint, and resolves erroneous data associations that may occur during active merging through a fault detection mechanism.
Wherein ω_i,j(t) is the current interconnection value, ω_i,j(t_0) is the interconnection value when the agent starts active merging, and C_t is a hyperparameter controlling how the verification gain G(i, j, t) changes over time;
In the active merging process, if error data association is encountered, the overlapping area is not increased by active merging, so that the verification gain is reduced along with the increase of the active merging duration. When the verification gain is reduced below a certain threshold, the server informs the robot to exit the active merging and marks the internal connection as invalid, otherwise, the agent continues the active merging until the sub-map merging requirement is met, and once the reliable inter-sub-map internal connection is established, the sub-maps are merged into a single sub-map, namely the final merged map information, so that reliable data basis and theoretical support are provided for the construction of the subsequent global roadmap.
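The exit logic described above can be sketched as follows. This is an illustrative sketch, not the application's implementation: the linear time discount in the gain and all thresholds are assumptions, but the three outcomes (merge, abort on low gain, continue) follow the description.

```python
# Assumed gain shape: growth of the interconnection since active merging
# began, discounted as the merge duration t increases (C_t = c_t here).
def verification_gain(omega_now, omega_start, t, c_t=0.01):
    return (omega_now - omega_start) / (1.0 + c_t * t)

def active_merge_step(omega_now, omega_start, t, gain_thresh, omega_merge):
    if omega_now >= omega_merge:
        return "merge"     # stable interconnection established: merge sub-maps
    if verification_gain(omega_now, omega_start, t) < gain_thresh:
        return "abort"     # gain collapsed: likely a data-association error
    return "continue"      # keep actively exploring the overlap area
```

The server would call such a check each cycle and notify the robot to exit active merging when "abort" is returned.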
In step S103, a minimum viewpoint set of each of the at least two target robots is calculated according to a preset coverage standard and a dynamic viewpoint rewarding mechanism, a global route map is constructed based on the combined map information and the minimum viewpoint set, and communication costs of each preset communication condition under a plurality of preset communication conditions are calculated by using the global route map, so that each target robot is controlled to execute a corresponding traveling operation according to the communication costs.
Furthermore, the embodiment of the application uses the coverage criteria of the sensor and a dynamic viewpoint rewarding mechanism to calculate the minimum viewpoint set covering all perceived-but-uncovered surfaces, so as to optimize the level of detail of the map. The embodiment of the application then builds a global roadmap through the known track-segmented space to ensure that paths are collision-free and connected, optimizes path selection with an efficient algorithm, synchronizes the states of the multiple robots, and solves the path planning problem under limited communication. The embodiment of the application then compares the costs under different communication assumptions to determine whether to execute a pursuit strategy to optimize information sharing; if the pursuit is unsuccessful, a preset convergence strategy is adopted to ensure information communication efficiency.
In addition, the embodiment of the application optimizes information sharing and improves communication efficiency through a multi-machine formation pursuit strategy, finally reduces algorithm running time and improves environment exploration efficiency.
Optionally, in one embodiment of the present application, calculating the minimum view point set of each of the at least two target robots according to a preset coverage standard and a dynamic view point rewarding mechanism includes determining a coverage surface point standard and a local range space of the target sensor, performing a view point rewarding comparison operation based on the coverage surface point standard and the local range space to obtain a comparison result, selecting a spatial initial view point according to the comparison result, and calculating an uncovered surface of the spatial initial view point to select a subsequent view point through the uncovered surface, dynamically adjusting the subsequent view point rewarding, and repeatedly performing the view point rewarding comparison operation according to the dynamically adjusted subsequent view point rewarding to generate the minimum view point set.
In a specific implementation process, an embodiment of the present application constructs a local map planning path based on viewpoint sampling, as shown in fig. 4: it specifies the criteria for the sensor coverage surface and the configuration space within a local range, and dynamically adjusts viewpoint rewards according to the submodularity of viewpoint sampling to solve a set coverage problem, optimizing the selection of additional viewpoints to maximize the total coverage area. The specific process is as follows:
Step 1, defining the standard of the sensor coverage surface point as follows:
|p_s − p_v| ≤ D,  n_s · (p_v − p_s) / |p_v − p_s| ≥ T
Wherein p_s is the center point of a surface patch in the exploration space, n_s is its normal pointing to the free-space side, p_v is the position of viewpoint v, and D and T are two constants, D being set shorter than the physical sensor range during implementation. These criteria limit the relative distance and direction of the surface patch with respect to the viewpoint, which ensures that the surface is well perceived. Furthermore, embodiments of the application set a local planning range H, let W_trav ∩ H determine the traversable subspace that takes collisions and connectivity into account, and let the corresponding configuration space take rotation and translation into account;
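The coverage test in step 1 can be sketched directly. The distance condition is from the text; the dot-product form of the direction condition is an assumption consistent with the stated roles of n_s and T.

```python
import math

# Sketch of the surface-coverage criterion: a patch at p_s with unit normal
# n_s counts as covered from a viewpoint at p_v when it is within range D
# and faces the sensor (direction test via the dot product, an assumption).
def covers(p_s, n_s, p_v, D=5.0, T=0.5):
    dx = [a - b for a, b in zip(p_v, p_s)]
    dist = math.sqrt(sum(c * c for c in dx))
    if dist > D or dist == 0.0:
        return False
    cos_angle = sum(n * c for n, c in zip(n_s, dx)) / dist
    return cos_angle >= T
```

A patch whose normal points toward the viewpoint within range passes; a patch facing away or out of range fails.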
Step 2, as more viewpoints are selected, the reward for selecting an additional viewpoint decreases, mainly because nearby viewpoints have overlapping views and the same surface can be perceived from multiple viewpoints. The reward of a viewpoint therefore depends on the previously selected viewpoints: for the i-th selected viewpoint v_i, only the surfaces perceived at v_i that remain uncovered by the previously selected viewpoints are counted, and the reward is adjusted as follows:
Wherein the embodiment of the application solves a set coverage problem to find the minimum set of viewpoints covering the perceived-but-uncovered surfaces within the local planning range, with the adjusted term representing the uncovered surface perceived from v_i. This renders a submodular model of the uncovered area: the embodiment of the application selects an initial set of viewpoints and dynamically adjusts the reward of each v_i based on the viewpoint's contribution to the uncovered area, so as to optimize the selection of additional viewpoints and maximize the total coverage area.
Thus, embodiments of the present application calculate a minimum set of viewpoints to cover all the imperceptible surfaces based on the coverage criteria of the sensor and the dynamic viewpoint rewarding mechanism, thereby optimizing the level of detail of the map.
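The greedy selection implied by the diminishing-reward set-cover formulation can be sketched as follows; a greedy loop is the standard approach for such submodular coverage objectives, though the application's exact solver is not specified.

```python
# Greedy sketch of minimum-viewpoint-set selection with dynamic rewards:
# each candidate's reward is the number of still-uncovered surfaces it adds,
# so rewards shrink automatically as overlapping viewpoints are chosen.
def minimum_viewpoint_set(candidates, uncovered_surfaces):
    # candidates: {viewpoint: set of surfaces it perceives}
    remaining = set(uncovered_surfaces)
    selected = []
    while remaining:
        best = max(candidates, key=lambda v: len(candidates[v] & remaining))
        gain = candidates[best] & remaining
        if not gain:
            break                  # leftover surfaces not coverable by any candidate
        selected.append(best)
        remaining -= gain          # dynamic reward adjustment
    return selected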
Optionally, in one embodiment of the application, the global roadmap is constructed based on the combined map information and the minimum view point set, and the method comprises the steps of constructing a space subdivision model, dividing a target space into a plurality of sub-target spaces according to the space subdivision model, marking the sub-target spaces, acquiring candidate view points based on the marked sub-target spaces, the minimum view point set, the preset resolution and the preset reuse strategy, and constructing the global roadmap according to the candidate view points and the historical motion trail of each target robot.
It should be noted that the embodiment of the present application may construct a global map planning path based on space subdivision, as shown in fig. 5: firstly, a space subdivision model is defined to construct a global roadmap along the past track, viewpoints of collision-free and connected unknown areas are obtained through viewpoint reuse, and global path selection is optimized by a probabilistically complete algorithm; in addition, the same initial state is adopted under the multi-robot communication limitation, and the VRP is solved by sharing new information through multi-hop communication, where the specific process is as follows:
Step 1, defining a space subdivision model: the space outside H is divided into equally sized cuboid subspaces, and each subspace stores the covered and uncovered surfaces formed during exploration; data are stored only in, and remain in, their subspace, while the data inside H are updated as exploration proceeds. Each subspace carries one of the states "unexplored", "exploring" and "explored": a subspace is unexplored if it contains no covered or uncovered surfaces, exploring if it contains any uncovered surfaces, and explored if it contains only covered surfaces. Only the exploring subspaces are considered in the global planning; H_E denotes an exploring subspace, and the set of exploring subspaces is denoted accordingly;
Step 2, building a global roadmap: nodes represent physically traversable positions in the environment, and a pair of nodes is connected by an edge if a traversable path exists between them. To avoid redundant collision checking and path-planning computation, the viewpoints obtained by solving the viewpoint sampling problem are reused: since the viewpoint candidates are uniformly generated within the traversable space and already account for collisions and connectivity (in other words, each viewpoint candidate is collision-free and is connected to any other viewpoint candidate to which a traversable path exists), a subset of the viewpoint candidates is reused as nodes in the global roadmap. To ensure sparsity of the roadmap, the viewpoint candidates are randomly sampled at a fixed resolution, maintaining a sufficiently connected roadmap without incurring high computational cost; embodiments of the application may use other data structures or route-planning algorithms instead of the global roadmap, as long as they are probabilistically complete in computing the shortest path between two locations;
Thus, it can be appreciated that, in order to find a global path from the current viewpoint v_current to the centroid of each exploring subspace, the embodiment of the application constructs a global roadmap in the traversable space along the past track of the vehicle, connecting traversable nodes by edges. Viewpoint reuse avoids redundant collision checking and path-planning computation while yielding viewpoints of collision-free and connected unknown areas; fixed-resolution random sampling of the viewpoint candidates ensures roadmap connectivity at acceptable computational cost, and the A* algorithm or other probabilistically complete algorithms are used to optimize the selection of the global path and reduce redundant collision-checking and path-planning computation;
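Steps 2 and 3 of the roadmap construction can be sketched minimally. This is an illustrative sketch: Dijkstra stands in for A* (both are admissible for shortest paths on such a graph), and the fixed-rate subsampling of viewpoint candidates is the sparsification idea in simplified form.

```python
import heapq
import random

# Fixed-rate random subsampling of viewpoint candidates for roadmap sparsity.
def sparsify(candidates, keep_ratio=0.5, seed=0):
    rng = random.Random(seed)
    kept = [v for v in candidates if rng.random() < keep_ratio]
    return kept or candidates[:1]   # never return an empty roadmap

# Shortest path over collision-free edges; Dijkstra used here in place of A*.
def shortest_path(edges, start, goal):
    # edges: {node: [(neighbor, cost), ...]}
    dist, queue = {start: 0.0}, [(0.0, start, [start])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == goal:
            return path, d
        for nbr, w in edges.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr, path + [nbr]))
    return None, float("inf")
```

On the sparsified node set, the same routine answers every "current viewpoint to subspace centroid" query without repeating collision checks.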
Step 3, defining a multi-robot exploration model under limited communication: all robots are initialized in the same coordinate system and the starting positions of the other robots are known in advance; robots can communicate when physically within a certain distance range, and can additionally communicate through multiple hops, with information relayed by intermediate robots. All robots share their current position, the states of the explored subspaces and their traversability, and only share new information each time to avoid redundant transmission; the optimal solution of the VRP is calculated as follows:
Wherein the optimization is over the set of global paths of the N robots; the lowest-cost solution, i.e., the optimal solution, is shared by all robots and used as an initial guess for further optimization.
Optionally, in one embodiment of the present application, calculating the communication cost of each preset communication condition under a plurality of preset communication conditions by using the global roadmap, so as to control each target robot to execute a corresponding traveling operation according to the communication cost, includes: determining environmental awareness information of each target robot and constructing an information synchronization model according to the environmental awareness information; when target robot i and target robot j exceed a preset communication range, performing a no-communication evaluation, a half-communication evaluation and a hypothetical-communication evaluation on target robot i according to the information synchronization model, and respectively calculating the global paths and communication costs corresponding to the three evaluations, wherein i and j are positive integers; calculating the pursuit cost of target robot i according to the global paths and the communication costs, and constructing a pursuit cost comparison model based on the pursuit cost; when M target robots are located within the communication range among N robots in total, selecting communication targets from the remaining N−M target robots by using a preset random sampling strategy, wherein M and N are positive integers and M < N; and controlling each target robot to execute the corresponding traveling operation by combining the pursuit strategy and the pursuit cost of each target robot.
In the actual implementation process, as shown in fig. 6, the embodiment of the present application may set up a multi-robot information synchronization model, calculate the global path planning cost under no communication and under hypothetical communication, and, if the cost under hypothetical communication is lower, further calculate the additional cost incurred by the pursuit; the pursuit decision is taken only if this additional cost still keeps the total below the no-communication cost. If the pursuit fails, a preset convergence strategy ensures effective information exchange. The specific process is as follows:
Step 1, an information synchronization model: the robot's knowledge of the environment is defined as the set of global subspaces that have been explored or are being explored, and each robot tracks both its own knowledge and its knowledge of the other robots. The knowledge of robot i about robot j's view of the environment is denoted G_i^j, where i and j are integers between 1 and N, and G_i^i denotes the knowledge of robot i itself;
Before exploration, G_i^j is initialized with a priori knowledge of robot j, e.g., a single global subspace containing the start position of robot j. When two robots i and j can communicate with each other, they synchronize their knowledge, the updated knowledge being denoted by the superscript t, as expressed by the following equation:
At the same time, their knowledge about the other robots is updated, expressed as:
Where k is an integer between 1 and N with k ≠ i, j, and the right-hand side represents the combination of the two robots' information about robot k; when updating the state of a global subspace, the "explored" state takes priority over the "exploring" state;
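The synchronization rules above can be sketched with nested dictionaries. This is an illustrative data structure, not the application's: `knowledge[i][j]` plays the role of G_i^j, and the state-priority rule ("explored" beats "exploring") is taken directly from the description.

```python
PRIORITY = {"unexplored": 0, "exploring": 1, "explored": 2}

def combine(a, b):
    """Union two knowledge dicts {subspace: state}; 'explored' wins over 'exploring'."""
    merged = dict(a)
    for sub, state in b.items():
        cur = merged.get(sub)
        if cur is None or PRIORITY[state] > PRIORITY[cur]:
            merged[sub] = state
    return merged

def synchronize(knowledge, i, j, n):
    """knowledge[i][j] holds robot i's view of robot j's knowledge (G_i^j)."""
    shared = combine(knowledge[i][i], knowledge[j][j])
    for a, b in ((i, i), (i, j), (j, i), (j, j)):
        knowledge[a][b] = dict(shared)
    # both robots also reconcile what they believe about every third robot k
    for k in range(n):
        if k in (i, j):
            continue
        merged_k = combine(knowledge[i][k], knowledge[j][k])
        knowledge[i][k] = dict(merged_k)
        knowledge[j][k] = dict(merged_k)
```

After a meeting, both robots hold identical knowledge of each other and reconciled beliefs about every third robot.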
Step 2, a communication cost comparison model: in the case that robot i and robot j exceed the communication range, robot i must decide whether to pursue robot j for communication by evaluating three options (no communication, half communication and hypothetical communication). A set is defined to represent the global subspaces that are currently known only to robot i; without communication, robot i plans a global path whose cost c is expressed as:
Wherein the no-communication path does not visit the subspaces known only to robot i, while under hypothetical communication robot j knows those subspaces, such that:
Robot i may then plan the global path under hypothetical communication, whose cost c⁺ is expressed as:
In the case of half communication, the subspaces known only to robot i are spatially partitioned and half of the regions or sampling points are randomly selected, so that robot i can still effectively plan a global path on the basis of incomplete data; the half-communication cost is expressed as:
The embodiment of the application first compares the half-communication cost with c: if it is less than c, communicating with robot j can be considered beneficial to robot i; otherwise c⁺ is compared with c, and if c⁺ is less than c, it is beneficial for robot i to communicate with robot j, which can then share and visit the subspaces known only to robot i. However, this requires robot i to deviate from its current exploration path, thereby incurring additional cost; the embodiment of the application therefore further examines the cost robot i incurs when pursuing robot j and evaluates whether that cost is justified by the potential benefit;
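The three-way comparison and the final pursuit gate can be sketched as follows; the function names and argument order are illustrative, but the decision order (half-communication cost first, then the hypothetical cost c⁺, then the pursuit detour) follows the description above.

```python
# Sketch of the communication cost comparison: communication with robot j is
# a candidate when either the half-communication or the hypothetical
# full-communication cost beats the no-communication cost c_no.
def worth_communicating(c_no, c_half, c_plus):
    return c_half < c_no or c_plus < c_no

def pursue(c_no, c_half, c_plus, c_pursuit_total):
    # interrupt exploration only when communication helps AND the total
    # pursuit cost (detour included) still undercuts the no-communication cost
    return worth_communicating(c_no, c_half, c_plus) and c_pursuit_total < c_no
```

A pursuit whose detour erases the communication benefit is rejected even when communication in the abstract would be useful.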
Step 3, a pursuit cost comparison model estimates the cost for robot i to pursue robot j in order to share the relevant information. Assuming robot j follows its global path, robot i estimates the time at which robot j reaches each subspace and plans a path that maximizes the likelihood of encountering robot j while reducing travel time. The embodiment of the application approximates this by solving a TSP constrained by time windows, which computes a global path for robot i that departs from the current location, visits the predicted meeting locations, and eventually returns to its original global path. The path taken by robot i to pursue robot j is the pursuit path; in view of this pursuit, the total cost is given by the following formula:
Assuming a successful information exchange between the two robots, if the total pursuit cost is less than the current cost c, robot i should interrupt its exploration and follow the pursuit path. After information synchronization is completed, robot i calculates the difference ΔI between the information obtained by the pursuit and the currently known information, comprising the increment ΔG_exp of the explored area, the decrement ΔG_unexp of the unexplored area, and the refinement ΔG_refine of the information about the explored area;
Robot i then decides whether to proceed to a new exploration area based on the newly obtained information ΔI and the current resource status R; the decision is guided by the ratio ΔI/R of information gain to resource consumption, so as to maximize information collection efficiency when resources allow;
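The ΔI/R decision can be sketched in a few lines. The additive combination of the three ΔG components and the ratio threshold are assumptions; only the gain-to-resource ratio itself is from the description.

```python
# Sketch of the information-gain decision: the newly obtained information is
# the sum of the three listed components, and exploration of a new area
# proceeds only when the gain-to-resource ratio clears a threshold.
def continue_to_new_area(dg_exp, dg_unexp, dg_refine, resources, ratio_thresh=1.0):
    delta_i = dg_exp + dg_unexp + dg_refine   # assumed additive combination
    return resources > 0 and (delta_i / resources) >= ratio_thresh
```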
After completing the exploration of the known area, a robot should catch up with other robots to obtain more information about the environment or to hand over information about previously explored space, deciding whether and where to pursue a given robot as described above, with the main objective of minimizing the overall exploration time. In this case, however, the focus of the information exchange shifts to finding new exploration areas or avoiding repeated visits to already explored space, rather than sharing the workload of exploring a large space as is typically the case;
Step 4, a pursuit correction model: when M of the N robots in total are within communication range, a communication target is iteratively selected from the remaining N−M robots by random sampling;
Specifically, there are 2^(N−M) possible combinations of target robots to communicate with. The selection of target robots is distributed across planning cycles, each planning cycle selecting only a small number of combinations, without replacement, from the total of 2^(N−M). Targeting a small number of robots (one or two) is usually sufficient to improve on the non-communication strategy, so the selection is biased to prioritize small targets: combinations with few targets are more likely to be selected than combinations with many targets;
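The biased, without-replacement sampling of target combinations can be sketched as follows. The weight 2^(−|subset|) is an assumed bias that merely realizes the stated preference for small combinations; the enumeration covers all non-empty subsets of the N−M out-of-range robots.

```python
import itertools
import random

# Sketch of biased target sampling: every non-empty subset of the out-of-range
# robots is a candidate combination; each planning cycle draws a few without
# replacement, weighting small subsets more heavily (weight 2^-|subset| is an
# assumption, not the application's exact bias).
def sample_target_combinations(targets, per_cycle=2, seed=0):
    rng = random.Random(seed)
    subsets = [frozenset(c)
               for r in range(1, len(targets) + 1)
               for c in itertools.combinations(targets, r)]
    weights = [2.0 ** -len(s) for s in subsets]
    chosen = []
    while subsets and len(chosen) < per_cycle:
        idx = rng.choices(range(len(subsets)), weights=weights)[0]
        chosen.append(subsets.pop(idx))  # without replacement within a cycle
        weights.pop(idx)
    return chosen
```

Repeated over planning cycles, this spreads the 2^(N−M) combinations out while favoring one- and two-robot targets.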
It will be appreciated that in the worst case the pursuit strategy degenerates to a convergence-based strategy; however, owing to the symmetry of robot reasoning this is rare: a target robot that has found enough new information will also attempt pursuit and will tend to meet the pursuing robot halfway, and otherwise the target robot will not stray far from its original exploration path, so the pursuing robot can easily find it.
Therefore, the embodiment of the application uses adaptive fusion and advanced collaborative strategies to construct a dual-resolution global-local map of the three-dimensional space, improving map accuracy and exploration time efficiency, reducing computation time, optimizing information sharing through the multi-machine formation pursuit strategy, and improving the overall efficiency of the system. For example, in environments including but not limited to large-scale spaces, undulating terrain, complex topological structures, cluttered obstacles and unstructured environments, the adaptive merging and verification technique in the MUI-TARE structure reduces exploration time by 7%-56% and accelerates planning by 13%-52% compared with existing methods, effectively fusing and enhancing the laser radar data on multiple vehicles and reducing errors in the overlapping parts to ensure efficient merging of sub-maps. In addition, compared with the existing NBVP, GBP and MBP methods, the running time is reduced by 50% and always kept below 1 second, and the exploration time efficiency is improved by 80%, enabling faster and more comprehensive exploration in unknown environments. Compared with traditional methods based on a following strategy, the multi-robot pursuit technique improves the average exploration efficiency by 39% over thousands of simulation runs. The application thus effectively integrates the advantages of the laser radars on multiple vehicles, overcomes the limitations of each single robot and of traditional methods through efficient map merging and communication navigation algorithms, and provides high-precision, high-reliability map construction and navigation solutions in various complex environments.
In addition, the application constructs a corresponding unmanned autonomous laser movement measuring system according to the unmanned autonomous laser movement measuring method, and the unmanned autonomous laser movement measuring system mainly comprises a sensor module, a data transmission module, a motion control module, a data processing module, a data processor and other components.
The sensor module is used for acquiring laser radar data of map building navigation and comprises a multi-machine multi-laser radar device;
the data transmission module is used for connecting each sensor with the data processing module;
the motion control module is used for realizing a planned path and comprises a four-wheel-drive ROS robot device;
The data processing module is used for processing the data acquired by the sensor module and carrying out unmanned autonomous laser movement measurement;
The data processor comprises a sub-map self-adaptive fusion module, a dual-resolution map construction module, a chase strategy cost analysis module and the like.
Specifically, the sub-map adaptive fusion module is configured to merge only segments whose internal-connection scores are greater than a set threshold, since a stable connection requires a longer overlapping distance; the robot verifies the data association and extends the overlap length by planning a path. During exploration, MUI-TARE periodically calls AutoMerge to match point clouds from the robots and return matched sequence pairs; it traverses all matched sequence pairs, verifies the internal connections, computes the transformation matrices and estimates the merging distance a_dist; if internal-connection verification fails, a fault-detection mechanism is started and a rollback strategy is adopted until the sub-map merging condition is met. The merging distance a_dist is calculated as follows:
where ω_i,j represents the current internal connection, ω_thresh is the internal-connection threshold that directs map merging, F_i and F_j are the expected feature descriptors when the overlap length reaches the threshold, and a_dist is the estimated distance the agent needs to travel to establish a stable internal connection. To evaluate how the connection increase relates to the active-merge duration, a verification gain is defined:
where the first term is the current internal-connection value, the second term is the internal-connection value before the agent begins active merging, and C_t is a hyper-parameter that controls the change of G(i, j, t) over time.
If erroneous data association is encountered in the sub-map adaptive fusion module, the overlapping area does not grow through active merging and the verification gain decreases as the active-merge duration increases. When the verification gain falls below a set threshold, the server notifies the robot to exit active merging and the internal connection is marked as invalid; otherwise, the agent continues active merging until the sub-map merging requirement is met. Once reliable inter-sub-map connections are established, the sub-maps are merged into a single sub-map;
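The merge/continue/abort decision described above can be sketched as follows. The exact form of the verification gain is not reproduced in the text, so the exponential-decay form, the function names and the thresholds below are illustrative assumptions rather than the patented formulas:

```python
import math

def verification_gain(omega_now, omega_start, c_t, t):
    """Hypothetical verification gain G(i, j, t): relative growth of the
    internal-connection value, discounted as the active-merge duration t
    grows; c_t plays the role of the decay hyper-parameter in the text."""
    growth = (omega_now - omega_start) / max(omega_start, 1e-9)
    return growth * math.exp(-c_t * t)

def adaptive_merge_step(omega, omega_thresh, omega_start, c_t, t, gain_thresh):
    """Decide the next action for one pair of sub-maps:
    - "merge"    when the internal-connection score clears the threshold,
    - "abort"    when the verification gain has decayed below gain_thresh
                 (connection marked invalid, rollback strategy),
    - "continue" otherwise (agent keeps travelling to grow the overlap)."""
    if omega > omega_thresh:
        return "merge"
    if verification_gain(omega, omega_start, c_t, t) < gain_thresh:
        return "abort"
    return "continue"
```

With this sketch, a pair whose connection never grows during active merging sees its gain decay to zero and is eventually aborted, which matches the rollback behaviour described above.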
a dual-resolution map construction module, configured to define local viewpoint sampling criteria that ensure the local surface is well perceived; a viewpoint is defined and selected through the distance between a surface point p_s and the viewpoint p_v and through the angular relationship with the surface normal n_s. The conditions under which a viewpoint v covers a surface patch are as follows:
|p_s − p_v| ≤ D,  ∠(n_s, p_v − p_s) ≤ T
where D and T are two constants that limit, respectively, the distance and the viewing angle of the surface patch relative to the viewpoint. In the corresponding configuration space, a set of viewpoints is sought that covers the surface to be covered. The viewpoint sampling problem exhibits submodularity: the reward of a viewpoint depends on the previously selected viewpoints and therefore requires dynamic adjustment, as follows:
Wherein, Representing an uncovered surface perceived from the viewpoint v i Is the total uncovered surface. In addition, in a multi-robot system, the path planning is optimized through the vehicle route problem, so that the effective coordination and information sharing among robots are realized, and especially in a communication limited environment, the collaborative exploration capacity of the system is enhanced through multi-hop communication and dynamic information synchronization;
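The coverage condition and the dynamically adjusted viewpoint reward lend themselves to a greedy set-cover sketch. The helper names, the point/normal surface representation and the candidate-viewpoint list below are assumptions for illustration; only the distance bound D and angle bound T come from the text:

```python
import math

def covered(p_s, n_s, p_v, D, T):
    """Coverage condition from the text: surface point p_s with unit
    normal n_s is covered by viewpoint p_v if the distance is within D
    and the viewing direction is within angle T of the normal."""
    d = [a - b for a, b in zip(p_v, p_s)]
    dist = math.sqrt(sum(x * x for x in d))
    if dist > D or dist == 0:
        return False
    cos_ang = sum(a * b for a, b in zip(d, n_s)) / dist
    return math.acos(max(-1.0, min(1.0, cos_ang))) <= T

def greedy_min_viewpoints(surface, candidates, D, T):
    """Greedy minimum-viewpoint selection: at each step pick the candidate
    whose reward (number of still-uncovered surface points it covers) is
    largest, then recompute rewards -- the dynamic adjustment required by
    the submodularity noted in the text.
    surface: list of (point, unit_normal); candidates: list of points."""
    uncovered = set(range(len(surface)))
    chosen = []
    while uncovered:
        best, best_gain = None, 0
        for i, v in enumerate(candidates):
            gain = sum(1 for s in uncovered
                       if covered(surface[s][0], surface[s][1], v, D, T))
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # remaining surface is not coverable
            break
        v = candidates[best]
        chosen.append(v)
        uncovered -= {s for s in uncovered
                      if covered(surface[s][0], surface[s][1], v, D, T)}
    return chosen
```

Because coverage is submodular, this greedy recomputation gives the standard (1 − 1/e)-style approximation to the minimum viewpoint set rather than an exact optimum.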
a chase strategy cost analysis module, in which a chase-decision cost-benefit analysis model is constructed to decide whether a robot should chase another robot for information exchange; this requires evaluating the potential benefit and cost of the chase. The costs before and after communication are compared using the following formulas:
Assuming that communication occurs, the cost is calculated as:
In the case of half communication, the cost is calculated as:
where the first term is the cost c of the global paths planned for the N robots without communication, the second is the cost of the robots' global path planning under hypothetical communication in which each robot acquires the knowledge of the other robots, and the third is the cost of global path planning when each robot acquires half of the other robots' knowledge. If the cost under hypothetical communication is less than c, the benefit of pursuit and communication exceeds its cost; otherwise the half-communication cost is compared with c;
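A minimal sketch of this cost comparison follows. The exact formulas are not reproduced in the text, so the additive treatment of the pursuit cost and all names below are illustrative assumptions:

```python
def should_pursue(cost_no_comm, cost_full_comm, cost_half_comm, pursuit_cost):
    """Chase-decision cost-benefit comparison (illustrative form):
    pursue if the planned-path cost after full (hypothetical)
    communication, plus the cost of the chase itself, undercuts the
    no-communication cost c; otherwise fall back to the
    half-communication estimate; otherwise keep exploring alone."""
    if cost_full_comm + pursuit_cost < cost_no_comm:
        return "pursue"
    if cost_half_comm + pursuit_cost < cost_no_comm:
        return "pursue"
    return "explore"
```

For example, with a no-communication path cost of 100, a full-communication cost of 60 and a pursuit cost of 10, the saving (40) exceeds the pursuit cost and the robot pursues.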
Further, in the information synchronization model, inter-robot information synchronization is performed by:
where the merge operation on the information ensures that the global subspace knowledge of the participating robots is up to date and complete after each communication.
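The synchronization merge can be illustrated as follows, assuming each robot's subspace knowledge is a timestamped record keyed by subspace id — a representation the text does not specify, chosen here only to make the "most recent observation wins, both sides leave identical" property concrete:

```python
def sync_knowledge(k_i, k_j):
    """Hypothetical information-synchronization step: merge two robots'
    subspace knowledge, keeping the most recently observed record per
    subspace, so that both robots leave the exchange with the same,
    complete and up-to-date global view.
    k_i, k_j: dict mapping subspace id -> (timestamp, data)."""
    merged = dict(k_i)
    for sub, (t, data) in k_j.items():
        if sub not in merged or merged[sub][0] < t:
            merged[sub] = (t, data)
    # both participants receive the merged knowledge
    return merged, dict(merged)
```

After the call, both returned dictionaries are equal, which is exactly the post-communication invariant the model requires.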
Thereafter, once the pursuit calculation model determines that pursuit is beneficial, the embodiment of the application calculates the path required for robot i to track robot j, approximating it as a travelling salesman problem (TSP) with time-window constraints:
where the term denotes the expected path along which robot i pursues robot j.
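The time-window TSP approximation can be sketched with a greedy nearest-neighbour heuristic over predicted rendezvous positions of robot j. The waypoint representation and the wait-if-early rule below are assumptions made for illustration, not the patented planner:

```python
def pursuit_path(start, waypoints, speed):
    """Greedy nearest-neighbour approximation of a time-window TSP:
    visit predicted positions of the pursued robot in order of earliest
    feasible arrival, skipping any waypoint whose window has already
    closed by the time the pursuer could reach it.
    waypoints: list of (position, window_open, window_close)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    t, pos, path = 0.0, start, []
    remaining = list(waypoints)
    while remaining:
        feasible = []
        for w in remaining:
            arrive = max(t + dist(pos, w[0]) / speed, w[1])  # wait if early
            if arrive <= w[2]:
                feasible.append((arrive, w))
        if not feasible:
            break
        arrive, w = min(feasible)  # earliest feasible arrival first
        path.append(w[0])
        t, pos = arrive, w[0]
        remaining.remove(w)
    return path
```

A true time-window TSP solver would search over orderings; the greedy pass above merely shows how the window constraints prune unreachable rendezvous points.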
In addition, as shown in fig. 7, the MID-360 multi-line lidar in the embodiment of the application is designed for accurate environmental scanning, with an effective working range of up to 70 m at 80% reflectivity and a minimum blind zone of 0.1 m. The 360° horizontal field of view and −7° to +52° vertical field of view ensure full spatial coverage. The lidar outputs a point cloud at a high rate of 200,000 points per second with a typical frame rate of 10 Hz, is connected via 100BASE-TX Ethernet, and synchronizes data through IEEE 1588-2008 (PTPv2) or GPS. Advanced functions include interference immunity and an extremely low false-alarm rate in strong sunlight. The integrated ICM40609 IMU helps improve positioning accuracy, which is essential in dynamic environments.
The HEPBURN P27J data transmission module operates in the 1.40-1.46 GHz frequency range and supports Mesh networking of up to 32 nodes. The radio-frequency bandwidth is 4/8/10/14/20 MHz, with code-stream transmission of up to 90 Mbps and multi-hop transmission of up to 9 hops; single-hop latency is below 7 ms and network-access time below 1 s. The device runs stably at an air-to-ground line-of-sight distance of 30 km and a ground line-of-sight distance of 1-2 km, weighs less than 175 g, provides various interfaces, and is suitable for long-distance, high-bandwidth data transmission requirements.
The four-wheel-drive motion control module provides a transmission ratio of 1:27 and a top speed of 1.82 m/s, and can carry a maximum load of 12 kg with a dead weight of 7.5 kg. Its dimensions are 381 mm × 466 mm × 152 mm and its turning radius is 0 m. A 22.2 V, 5000 mAh battery supports up to 6.5 hours of run time with a 2-hour charge time. Solid rubber tires with a 125 mm tread, equipped with S20F 20 kgf load sensors, are driven by MD36N 35 W high-torque motors. The module supports wireless data transmission within a 500 m range, integrates CAN, serial-port and app remote-control functions, is ROS-compatible, and provides an OLED display, light and sound prompts, safety protection measures, and real-time monitoring and remote-operation capability, making it suitable for efficient operation in variable environments.
The data processing module is positioned at the center of the unmanned aerial vehicle equipment and consists of an embedded high-performance CPU and other related processors, including an Intel® Core™ i7-7567U CPU with 2 cores and 4 threads, a base frequency of 3.5 GHz and turbo boost up to 4.0 GHz, with integrated Intel Iris Plus Graphics 650 operating between 300 and 1100 MHz, suitable for high-performance environments focused on energy efficiency and able to provide stable performance for general computing and graphics tasks (CPU benchmark tests), together with a 32 GB LPDDR5 memory module. The module runs the Ubuntu operating system, is provided with the unmanned autonomous laser movement measurement method, and can process the data received from the data acquisition module in real time.
Therefore, the unmanned autonomous laser movement measurement system can likewise adopt adaptive fusion and advanced cooperative strategies to construct a dual-resolution global-local map of three-dimensional space, improving map accuracy and exploration time efficiency. The system optimizes information sharing through a multi-robot formation pursuit strategy, evaluates and merges data collected by different robots through factor graphs and the AutoMerge framework, and optimizes global path selection through a dynamic viewpoint rewarding mechanism and a space subdivision technique. It can therefore be applied to fields such as collaborative robot operation, post-disaster rescue and environmental monitoring, has remarkable advantages in improving the accuracy, adaptability and efficiency of an autonomous mobile measurement system, and is particularly suitable for automated exploration and monitoring tasks in highly dynamic and changeable environments.
According to the unmanned autonomous laser movement measurement method, environment information of at least two target robots is obtained, viewpoint sets of the at least two target robots are determined, the environment information of each of the at least two target robots is estimated based on a preset factor graph and AutoMerge frames, an estimation result is obtained, the environment information of the at least two target robots is combined according to preset combining conditions according to the estimation result to generate combining map information, the minimum viewpoint set of each of the at least two target robots is calculated according to preset coverage standards and a dynamic viewpoint rewarding mechanism, a global route map is built based on the combining map information and the minimum viewpoint sets, communication cost of each preset communication condition under various preset communication conditions is calculated by using the global route map, and corresponding advancing operation of each target robot is controlled according to the communication cost. The application integrates the advantages of the laser radars on multiple vehicles, overcomes the limitations of each single robot and the traditional method through efficient map merging and communication navigation algorithm, and simultaneously provides map construction and navigation solutions with high precision and high reliability in various complex environments.
Next, an unmanned autonomous laser movement measuring apparatus according to an embodiment of the present application will be described with reference to the accompanying drawings.
Fig. 8 is a block schematic diagram of an unmanned autonomous laser movement measuring device in accordance with an embodiment of the present application.
As shown in fig. 8, the unmanned autonomous laser movement measuring apparatus 10 includes an acquisition module 100, an evaluation module 200, and a control module 300.
The acquiring module 100 is configured to acquire environmental information of at least two target robots, and determine viewpoint sets of the at least two target robots.
And the evaluation module 200 is configured to evaluate the environmental information of each of the at least two target robots based on a preset factor graph and AutoMerge frames, obtain an evaluation result, and combine the environmental information of the at least two target robots according to a preset combining condition according to the evaluation result, so as to generate combined map information.
The control module 300 is configured to calculate, based on the viewpoint set, a minimum viewpoint set of each of the at least two target robots according to a preset coverage standard and a dynamic viewpoint rewarding mechanism, construct a global route map based on the combined map information and the minimum viewpoint set, and calculate a communication cost of each preset communication condition under a plurality of preset communication conditions by using the global route map, so as to control each target robot to execute a corresponding traveling operation according to the communication cost.
Optionally, in one embodiment of the present application, the evaluation module 200 includes a first building unit, an extracting unit, a matching unit, a first judging unit, a second judging unit, a first merging unit, a second merging unit, and a traversing unit.
The first construction unit is used for constructing the factor graph according to the environment information of each target robot and the interconnection mode between each target robot.
And the extraction unit is used for acquiring the overlapping length between the environment information of each target robot and extracting the feature descriptors according to the overlapping length.
And the matching unit is used for executing matching operation on the environmental information of each target robot based on the AutoMerge framework, the feature descriptor and the factor graph, generating a matching sequence pair corresponding to each target robot, and carrying out validity verification on each matching sequence in the matching sequence pair to obtain a verification result of each matching sequence.
And the first judging unit is used for judging whether the verification result meets the preset validity requirement or not, and removing the corresponding matching sequence if the verification result does not meet the preset validity requirement.
And the second judging unit is used for calculating the interconnection score of the matching sequence pair and judging whether the interconnection score is larger than a preset score threshold value or not if the verification result meets the preset validity requirement.
And the first merging unit is used for merging the environment information of each target robot to obtain merged sub-map information if the internal connection score is larger than the preset score threshold value.
And the second merging unit is used for calculating the merging distance of the matching sequence pair if the internal connection score is not greater than the preset score threshold value, and executing active merging operation on the matching sequence pair according to the merging distance so as to obtain an active merging result.
And the traversing unit is used for traversing the environment information of each robot based on the active merging result or the merging sub-map information until the preset exploration requirement is met so as to generate the merging map information.
Alternatively, in one embodiment of the application, the control module 300 includes a determination unit, an execution unit, a first calculation unit, and an adjustment unit.
Wherein the determining unit is used for determining the coverage surface point standard and the local range space of the target sensor.
And the execution unit is used for executing viewpoint rewarding comparison operation based on the coverage surface point standard and the local range space so as to obtain a comparison result.
And the first calculating unit is used for selecting a space initial viewpoint according to the comparison result and calculating an uncovered surface of the space initial viewpoint so as to select a subsequent viewpoint through the uncovered surface.
And the adjusting unit is used for dynamically adjusting the subsequent viewpoint rewards and repeatedly executing viewpoint rewards comparison operation according to the dynamically adjusted subsequent viewpoint rewards so as to generate the minimum viewpoint set.
Optionally, in one embodiment of the application, the control module 300 further comprises a modeling unit and a second building unit.
The modeling unit is used for constructing a space subdivision model, dividing a target space into a plurality of sub-target spaces according to the space subdivision model, and marking the plurality of sub-target spaces.
The second construction unit is used for acquiring candidate viewpoints based on the marked multiple sub-target spaces, the minimum viewpoint set, the preset resolution and the preset reuse strategy, and constructing the global roadmap according to the candidate viewpoints and the historical motion trail of each target robot.
Optionally, in one embodiment of the present application, the control module 300 further includes a third building unit, a second calculating unit, a third calculating unit, a selecting unit, and a tracking unit.
The third construction unit is used for determining the environment knowledge information of each target robot and constructing an information synchronization model according to the environment knowledge information.
And the second calculation unit is used for respectively carrying out non-communication evaluation, semi-communication evaluation and assumption communication evaluation on the target robot i according to the information synchronization model when the target robot i and the target robot j exceed a preset communication range, and respectively calculating global paths and communication costs corresponding to the non-communication evaluation, the semi-communication evaluation and the assumption communication evaluation, wherein i and j belong to positive integers.
And a third calculation unit, configured to calculate a tracking cost of the target robot i for tracking the target robot j according to the global path and the communication cost, and construct a specified chase cost comparison model based on the tracking cost and the communication cost.
And the selecting unit is used for selecting a communication target from the N-M target robots by adopting a preset random sampling strategy when the M target robots are positioned in the communication range of the N robots, wherein M, N belongs to a positive integer, and M is smaller than N.
The tracking unit is used for determining the tracking route of the M target robots for tracking the communication targets, acquiring the tracking results of the M target robots, and controlling each target robot to execute corresponding travelling operation based on the tracking route and the tracking results and combining a preset pursuit decision and a communication strategy.
It should be noted that the foregoing explanation of the embodiment of the unmanned autonomous laser movement measuring method is also applicable to the unmanned autonomous laser movement measuring device of this embodiment, and will not be repeated here.
The unmanned autonomous laser mobile measurement device provided by the embodiment of the application comprises an acquisition module, an evaluation module and a control module, wherein the acquisition module is used for acquiring environment information of at least two target robots and determining viewpoint sets of the at least two target robots; the evaluation module is used for evaluating the environment information of each target robot in the at least two target robots based on a preset factor graph and the AutoMerge framework to obtain an evaluation result, and merging the environment information of the at least two target robots according to preset merging conditions according to the evaluation result to generate merged map information; and the control module is used for calculating the minimum viewpoint set of each target robot in the at least two target robots according to preset coverage standards and a dynamic viewpoint rewarding mechanism, constructing a global route map based on the merged map information and the minimum viewpoint sets, and calculating the communication cost of each preset communication condition under various preset communication conditions by using the global route map so as to control each target robot to execute a corresponding travelling operation according to the communication cost. The application integrates the advantages of the lidars on multiple vehicles, overcomes the limitations of each single robot and of traditional methods through efficient map merging and communication navigation algorithms, and simultaneously provides map construction and navigation solutions with high precision and high reliability in various complex environments.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
memory 901, processor 902, and a computer program stored on memory 901 and executable on processor 902.
The processor 902 implements the unmanned autonomous laser movement measurement method provided in the above-described embodiments when executing the program.
Further, the electronic device further includes:
A communication interface 903 for communication between the memory 901 and the processor 902.
Memory 901 for storing a computer program executable on processor 902.
The memory 901 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk storage.
If the memory 901, the processor 902, and the communication interface 903 are implemented independently, they may be connected to one another through a bus and communicate with one another. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one thick line is shown in fig. 9, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 901, the processor 902, and the communication interface 903 are integrated on a chip, the memory 901, the processor 902, and the communication interface 903 may communicate with each other through internal interfaces.
The processor 902 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the application.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, implements the unmanned autonomous laser movement measuring method as above.
The embodiment of the application also provides a computer program product comprising a computer program which, when executed, implements the unmanned autonomous laser movement measurement method described above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware as in another embodiment, it may be implemented using any one or combination of techniques known in the art, discrete logic circuits with logic gates for implementing logic functions on data signals, application specific integrated circuits with appropriate combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.