
CN119064951B - Unmanned autonomous laser movement measurement method and device - Google Patents


Info

Publication number
CN119064951B
CN119064951B
Authority
CN
China
Prior art keywords
target
preset
viewpoint
communication
robots
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411033480.1A
Other languages
Chinese (zh)
Other versions
CN119064951A (en)
Inventor
杨必胜
陈驰
韩辉煌
胡逸晨
郭亚男
金昂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202411033480.1A priority Critical patent/CN119064951B/en
Publication of CN119064951A publication Critical patent/CN119064951A/en
Application granted granted Critical
Publication of CN119064951B publication Critical patent/CN119064951B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present application relates to an unmanned autonomous laser mobile measurement method and device. The method includes: obtaining environmental information of at least two target robots and determining their viewpoint sets; evaluating the environmental information of each target robot and, based on the evaluation results, merging the environmental information of the at least two target robots to generate merged map information; calculating the minimum viewpoint set of each of the at least two target robots; constructing a global roadmap based on the merged map information and the minimum viewpoint sets; and using the global roadmap to calculate the communication cost of each of multiple preset communication conditions, so as to control each target robot to perform the corresponding travel operation according to the communication cost. This addresses the problems that existing unmanned autonomous laser detection methods fall far short of the required autonomy, adaptability, and efficiency in highly dynamic, changeable environments and are therefore difficult to apply to automated exploration and monitoring.

Description

Unmanned autonomous laser movement measurement method and device
Technical Field
The application relates to the technical field of multi-unmanned system collaborative autonomous exploration and 3D perception positioning mapping, in particular to an unmanned autonomous laser movement measurement method and device.
Background
With the rapid development of automation and intelligent technologies, positioning, mapping, and autonomous exploration by multi-unmanned systems in unknown environments have become critical in fields such as robot collaborative operation, post-disaster rescue, and environmental monitoring. Traditional mapping and navigation techniques have poor adaptability and low reliability, and face obvious limitations in complex environments.
Point cloud-based methods rely on geometric feature matching for point cloud registration, and their performance is highly dependent on the robustness of 3D geometric features. Although the SLAM community has developed various methods to improve the accuracy and robustness of sub-map merging, including semantic objects, bag-of-words vectors, full-image descriptors, and sequence-based place recognition, in practical applications problems such as data association errors and transformation matrix errors still need to be overcome.
Traditional environment exploration methods mostly adopt greedy algorithms that focus only on short-term optimal solutions and neglect long-term efficiency; this efficiency problem is especially obvious in complex, communication-limited scenarios involving multi-robot systems. In addition, existing multi-robot collaborative exploration strategies often prioritize communication over the exploration task itself, potentially harming overall task efficiency; they frequently rely on heuristics, do not fully account for the actual impact of information exchange on exploration efficiency, and ignore adaptability and reliability in complex or harsh environments.
Faced with the limitations of multi-robot sub-map merging, environment exploration, and collaborative exploration strategies in traditional three-dimensional environments, existing unmanned autonomous laser detection methods fall far short of the requirements of high autonomy, adaptability, and efficiency in highly dynamic, changeable environments, are difficult to apply to automated exploration and monitoring, and urgently need improvement.
Disclosure of Invention
The application provides an unmanned autonomous laser movement measurement method and device to solve the problems that existing unmanned autonomous laser detection methods fall far short of the requirements of high autonomy, adaptability, and efficiency in highly dynamic, changeable environments and are difficult to apply to fields such as automated exploration and monitoring.
An embodiment of the first aspect of the application provides an unmanned autonomous laser movement measurement method, comprising: obtaining environmental information of at least two target robots and determining viewpoint sets of the at least two target robots; evaluating the environmental information of each of the at least two target robots based on a preset factor graph and the AutoMerge framework to obtain an evaluation result, and merging the environmental information of the at least two target robots according to a preset merging condition and the evaluation result to generate merged map information; calculating a minimum viewpoint set of each of the at least two target robots from the viewpoint sets according to a preset coverage criterion and a dynamic viewpoint reward mechanism; constructing a global roadmap based on the merged map information and the minimum viewpoint sets; and using the global roadmap to calculate the communication cost of each of a plurality of preset communication conditions, so as to control each target robot to perform the corresponding travel operation according to the communication cost.
Optionally, in an embodiment of the present application, evaluating the environmental information of each of the at least two target robots based on the preset factor graph and the AutoMerge framework to obtain an evaluation result, and merging the environmental information according to the preset merging condition to generate merged map information, includes: constructing the factor graph according to the environmental information of each target robot and the interconnection manner between the target robots; acquiring the overlap length between the environmental information of the target robots and extracting feature descriptors according to the overlap length; performing a matching operation on the environmental information of each target robot based on the AutoMerge framework, the feature descriptors, and the factor graph to generate a matching sequence pair corresponding to each target robot, and performing validity verification on each matching sequence in the matching sequence pair to obtain a verification result for each matching sequence; judging whether the verification result meets a preset validity requirement, and removing the corresponding matching sequence if it does not; if the verification result meets the preset validity requirement, calculating an interconnection score for the matching sequence pair; if the interconnection score exceeds a preset threshold, performing an active merging operation on the corresponding sub-maps to obtain an active merging result, and otherwise generating merged sub-map information; and traversing the environmental information of each robot based on the active merging result or the merged sub-map information until a preset exploration requirement is met, so as to generate the merged map information.
Optionally, in one embodiment of the present application, calculating the minimum viewpoint set of each of the at least two target robots from the viewpoint sets according to the preset coverage criterion and the dynamic viewpoint reward mechanism includes: determining a covered-surface-point criterion and the local range space of the target sensor; performing a viewpoint reward comparison operation based on the covered-surface-point criterion and the local range space to obtain a comparison result; selecting an initial spatial viewpoint according to the comparison result, and computing the uncovered surface of the initial spatial viewpoint to select subsequent viewpoints through the uncovered surface; and dynamically adjusting the subsequent viewpoint rewards and repeatedly performing the viewpoint reward comparison operation according to the dynamically adjusted rewards to generate the minimum viewpoint set.
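The dynamic viewpoint-reward selection reads like a greedy set-cover loop. The sketch below is one plausible reading of it, with the reward taken to be the number of still-uncovered surface points a candidate viewpoint would cover; the data layout and all names are illustrative, not taken from the patent.

```python
def minimum_viewpoint_set(candidates, coverage, surface):
    """Greedily pick viewpoints until the coverage criterion is met.
    `coverage` maps each candidate viewpoint to the set of surface
    points it perceives; the reward of a viewpoint is re-evaluated
    every round against the remaining uncovered surface (the
    "dynamic adjustment" step)."""
    uncovered = set(surface)
    selected = []
    while uncovered:
        # viewpoint with the largest reward under the current state
        best = max(candidates, key=lambda v: len(coverage[v] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:
            break  # no candidate can cover the remaining surface
        selected.append(best)
        uncovered -= gain
    return selected
```

Because rewards are recomputed each round, a viewpoint that looked valuable initially can lose out once its surfaces are covered by earlier picks, which is the behavior the dynamic reward mechanism is after.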
Optionally, in one embodiment of the present application, constructing the global roadmap based on the merged map information and the minimum viewpoint set includes: constructing a space subdivision model; dividing the target space into a plurality of sub-target spaces according to the space subdivision model and marking the plurality of sub-target spaces; obtaining candidate viewpoints based on the marked sub-target spaces, the minimum viewpoint set, a preset resolution, and a preset reuse policy; and constructing the global roadmap from the candidate viewpoints and the historical motion trajectory of each target robot.
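The space-subdivision step can be pictured as partitioning an axis-aligned target space into labeled cells. The uniform 2-D grid below is only a stand-in for whatever subdivision model the patent actually uses; the function name and bounds format are invented for the example.

```python
import itertools

def subdivide(bounds, cell_size):
    """Split the target space `bounds` = ((x0, y0), (x1, y1)) into
    uniform sub-target spaces and mark each with an integer id, as a
    simplified analogue of the space subdivision model."""
    (x0, y0), (x1, y1) = bounds
    nx = int((x1 - x0) // cell_size)
    ny = int((y1 - y0) // cell_size)
    cells = {}
    for idx, (i, j) in enumerate(itertools.product(range(nx), range(ny))):
        lo = (x0 + i * cell_size, y0 + j * cell_size)
        hi = (x0 + (i + 1) * cell_size, y0 + (j + 1) * cell_size)
        cells[idx] = (lo, hi)  # a marked sub-target space
    return cells
```

Candidate viewpoints would then be sampled per cell at the preset resolution, with the reuse policy deciding when an already-visited cell's viewpoints are kept rather than resampled.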
Optionally, in an embodiment of the present application, using the global roadmap to calculate the communication cost of each of the plurality of preset communication conditions so as to control the target robots to perform the corresponding travel operations according to the communication cost includes: determining the environmental awareness information of each target robot and constructing an information synchronization model from the environmental awareness information; when target robot i and target robot j are beyond a preset communication range, performing no-communication evaluation, semi-communication evaluation, and hypothetical-communication evaluation for target robot i according to the information synchronization model, and calculating the global paths and communication costs corresponding to each evaluation, where i and j are positive integers; calculating the tracking cost for target robot i to track target robot j from the global paths and communication costs, and constructing a pursuit cost comparison model based on the tracking cost; when target robot i and target robot j are within the communication range, selecting, by a random strategy, a pursuit target from the N−M target robots outside the communication range according to the information synchronization model, where N is the total number of target robots and M is the number within the communication range; and comparing, through the pursuit cost comparison model, the cost of the pursuit strategy with that of the no-communication strategy, and controlling each target robot to perform the corresponding travel operation according to the comparison result.
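The final decision step amounts to a cost comparison. The sketch below assumes, as an illustration rather than a claim about the patent, that each preset communication condition carries a global-path cost plus a communication cost and the cheapest condition determines the travel operation; the condition names and cost values are placeholders.

```python
def choose_travel_operation(path_cost, comm_cost):
    """Total cost per condition = global path cost + communication
    cost; the robot executes the travel operation of the cheapest
    preset communication condition. Both inputs are dicts keyed by
    condition name (e.g. no-communication, semi-communication,
    hypothetical communication, pursuit); in the method these values
    would come from the global roadmap and the information
    synchronization model."""
    totals = {c: path_cost[c] + comm_cost[c] for c in path_cost}
    best = min(totals, key=totals.get)
    return best, totals
```

A pursuit strategy with a short path can still lose here if its communication cost is high, which matches the text's point that communication should not automatically be prioritized over the exploration task.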
An embodiment of the second aspect of the application provides an unmanned autonomous laser movement measurement device, comprising an acquisition module, an evaluation module, and a control module. The acquisition module is configured to obtain environmental information of at least two target robots and determine viewpoint sets of the at least two target robots. The evaluation module is configured to evaluate the environmental information of each of the at least two target robots based on a preset factor graph and the AutoMerge framework to obtain an evaluation result, and to merge the environmental information of the at least two target robots according to a preset merging condition and the evaluation result to generate merged map information. The control module is configured to calculate the minimum viewpoint set of each of the at least two target robots from the viewpoint sets according to a preset coverage criterion and a dynamic viewpoint reward mechanism, construct a global roadmap based on the merged map information and the minimum viewpoint sets, and use the global roadmap to calculate the communication cost of each of a plurality of preset communication conditions, so as to control each target robot to perform the corresponding travel operation according to the communication cost.
Optionally, in one embodiment of the present application, the evaluation module includes: a first construction unit configured to construct the factor graph according to the environmental information of each target robot and the interconnection manner between the target robots; an extraction unit configured to acquire the overlap length between the environmental information of the target robots and extract feature descriptors according to the overlap length; a matching unit configured to perform a matching operation on the environmental information of each target robot based on the AutoMerge framework, the feature descriptors, and the factor graph, generate a matching sequence pair corresponding to each target robot, and perform validity verification on each matching sequence in the matching sequence pair to obtain a verification result for each matching sequence; a first judgment unit configured to judge whether the verification result meets a preset validity requirement and to remove the corresponding matching sequence if it does not; and a second judgment unit configured to calculate an interconnection score for the matching sequence pair if the verification result meets the preset validity requirement, to perform an active merging operation on the corresponding sub-maps to obtain an active merging result if the interconnection score exceeds a preset threshold, and otherwise to generate merged sub-map information, and to traverse the environmental information of each robot based on the active merging result or the merged sub-map information until a preset exploration requirement is met, so as to generate the merged map information.
Optionally, in one embodiment of the present application, the control module includes: a determining unit configured to determine a covered-surface-point criterion and the local range space of the target sensor; an executing unit configured to perform a viewpoint reward comparison operation based on the covered-surface-point criterion and the local range space to obtain a comparison result; a first calculating unit configured to select an initial spatial viewpoint according to the comparison result and compute the uncovered surface of the initial spatial viewpoint to select subsequent viewpoints through the uncovered surface; and an adjusting unit configured to dynamically adjust the subsequent viewpoint rewards and repeatedly perform the viewpoint reward comparison operation according to the dynamically adjusted rewards to generate the minimum viewpoint set.
Optionally, in one embodiment of the present application, the control module further includes: a modeling unit configured to construct a space subdivision model, divide the target space into a plurality of sub-target spaces according to the space subdivision model, and mark the plurality of sub-target spaces; and a second construction unit configured to obtain candidate viewpoints based on the marked sub-target spaces, the minimum viewpoint set, a preset resolution, and a preset reuse policy, and to construct the global roadmap from the candidate viewpoints and the historical motion trajectory of each target robot.
Optionally, in one embodiment of the present application, the control module further includes: a third construction unit configured to determine the environmental awareness information of each target robot and construct an information synchronization model from the environmental awareness information; a second calculation unit configured to perform no-communication evaluation, semi-communication evaluation, and hypothetical-communication evaluation for target robot i according to the information synchronization model when target robot i and target robot j are beyond a preset communication range, and to calculate the global paths and communication costs corresponding to each evaluation, where i and j are positive integers; a third calculation unit configured to calculate the tracking cost for target robot i to track target robot j from the global paths and communication costs, and to construct a pursuit cost comparison model based on the tracking cost; a selection unit configured to select, by a random strategy, a pursuit target from the N−M target robots outside the communication range according to the information synchronization model when target robot i and target robot j are within the communication range, where N is the total number of target robots and M is the number within the communication range; and a decision-making unit configured to compare, through the pursuit cost comparison model, the cost of the pursuit strategy with that of the no-communication strategy, and to control each target robot to perform the corresponding travel operation according to the comparison result.
An embodiment of a third aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the unmanned autonomous laser movement measurement method according to the above embodiment.
A fourth aspect of the present application provides a computer readable storage medium storing a computer program which when executed by a processor implements the unmanned autonomous laser movement measurement method as above.
A fifth aspect of the present application provides a computer program product comprising a computer program to be executed for implementing the unmanned autonomous laser movement measurement method described above.
Thus, embodiments of the present application have the following beneficial effects:
With the method and device of the application, environmental information of at least two target robots can be obtained and their viewpoint sets determined; the environmental information of each target robot is evaluated based on a preset factor graph and the AutoMerge framework, and the environmental information of the at least two target robots is merged according to preset merging conditions and the evaluation results to generate merged map information; the minimum viewpoint set of each target robot is calculated according to a preset coverage criterion and a dynamic viewpoint reward mechanism; a global roadmap is constructed based on the merged map information and the minimum viewpoint sets; and the global roadmap is used to calculate the communication cost of each of various preset communication conditions, so as to control each target robot to perform the corresponding travel operation according to the communication cost. The application integrates the advantages of lidars on multiple vehicles and, through efficient map merging and communication-aware navigation algorithms, overcomes the limitations of single robots and traditional methods, while providing high-precision, high-reliability map construction and navigation solutions in various complex environments. This solves the problems that existing unmanned autonomous laser detection methods fall far short of the requirements of high autonomy, adaptability, and efficiency in highly dynamic, changeable environments and are difficult to apply to fields such as automated exploration and monitoring.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of an unmanned autonomous laser movement measurement method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the execution logic of an unmanned autonomous laser movement measurement method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an interconnection computation and adaptive merging process according to an embodiment of the present application;
fig. 4 is a schematic view of a view sampling local planning flow according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a space subdivision global programming flow provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a chase strategy information synchronization cost analysis flow according to one embodiment of the present application;
FIG. 7 is a schematic diagram of the execution logic of an unmanned autonomous laser movement measurement system according to an embodiment of the present application;
FIG. 8 is an exemplary diagram of an unmanned autonomous laser movement measuring device in accordance with an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals: 10, unmanned autonomous laser mobile measurement device; 100, acquisition module; 200, evaluation module; 300, control module; 901, memory; 902, processor; 903, communication interface.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
The unmanned autonomous laser movement measurement method and device according to embodiments of the application are described below with reference to the accompanying drawings. To address the problems noted in the background, the application provides an unmanned autonomous laser movement measurement method that includes: obtaining environmental information of at least two target robots and determining their viewpoint sets; evaluating the environmental information of each target robot based on a preset factor graph and the AutoMerge framework to obtain an evaluation result, and merging the environmental information of the at least two target robots according to preset merging conditions and the evaluation result to generate merged map information; calculating the minimum viewpoint set of each target robot according to a preset coverage criterion and a dynamic viewpoint reward mechanism; constructing a global roadmap based on the merged map information and the minimum viewpoint sets; and using the global roadmap to calculate the communication cost of each of a plurality of preset communication conditions, so as to control each target robot to perform the corresponding travel operation according to the communication cost. The application integrates the advantages of lidars on multiple vehicles and, through efficient map merging and communication-aware navigation algorithms, overcomes the limitations of single robots and traditional methods, while providing high-precision, high-reliability map construction and navigation solutions in various complex environments.
This solves the problems that existing unmanned autonomous laser detection methods fall far short of the requirements of high autonomy, adaptability, and efficiency in highly dynamic, changeable environments and are difficult to apply to fields such as automated exploration and monitoring.
Specifically, fig. 1 is a flowchart of an unmanned autonomous laser movement measurement method according to an embodiment of the present application.
As shown in fig. 1, the unmanned autonomous laser movement measurement method includes the steps of:
in step S101, environmental information of at least two target robots is acquired, and a set of viewpoints of the at least two target robots is determined.
In the embodiment of the application, the environmental information of each robot (i.e., map segments), such as lidar data, point cloud data, and system IMU orientation information, can first be obtained. In actual execution, point cloud states, motion state information, and distance information can be collected in various environments using lidar sensors mounted on multiple vehicles; the sensors on all vehicles work cooperatively to maintain time synchronization and spatial alignment, yielding data for multiple indoor and outdoor scenes.
Secondly, embodiments of the present application may construct a view-set model, as shown in fig. 2, which is specifically defined as follows:
In the viewpoint-set model, W is defined as the workspace to be explored and W_trav ⊆ W as its traversable subspace. A viewpoint v ∈ SE(3) describes the pose of the vehicle-mounted sensor, v = [p_v, q_v], where p_v ∈ W_trav and q_v ∈ SO(3) represent the position and the orientation, respectively. S_v denotes the surface perceived by the sensor at v, and the union of the surfaces perceived at the viewpoints along a path L is S_L = ∪_{v ∈ L} S_v. Let S_cov represent the subset of perceived surfaces that have been covered so far in the exploration process; the perceived but not yet covered surfaces are represented as S_unc = S_L \ S_cov.
Embodiments of the present application then also define a workspace and traversable subspaces to describe the pose and perceived surface of the sensor through the set of viewpoints and determine the coverage path.
In step S102, the environmental information of each of the at least two target robots is evaluated based on a preset factor graph and the AutoMerge framework to obtain an evaluation result, and the environmental information of the at least two target robots is combined according to the evaluation result and a preset combining condition, so as to generate combined map information.
Further, in the embodiment of the present application, a factor graph is maintained to represent the interconnections between robots, as shown in fig. 3. The quality score of the current feature association is evaluated using the AutoMerge framework, the overlap length of an association is expanded by planning a path to verify the data, and sub-maps are combined when a specific threshold is met, at which point the relative positions of the combined robots are also determined. The sub-maps generated by each robot are thus dynamically adjusted and combined to obtain the final combined map information.
Optionally, in one embodiment of the present application, evaluating the environmental information of each of the at least two target robots based on the preset factor graph and the AutoMerge framework to obtain an evaluation result, and merging the environmental information of the at least two target robots according to the evaluation result and preset merging conditions to generate merged map information, includes: constructing a factor graph according to the environmental information of each target robot and the interconnection between the target robots; acquiring the overlap length between the environmental information of the target robots and extracting feature descriptors according to the overlap length; performing a matching operation on the environmental information of each target robot based on the AutoMerge framework, the feature descriptors and the factor graph, and generating the matching sequence pairs corresponding to each target robot; performing validity verification on each matching sequence in the matching sequence pairs to obtain a verification result for each matching sequence; judging whether the verification result meets a preset validity requirement, and removing the corresponding matching sequence if it does not; if the verification result meets the preset validity requirement, calculating the interconnection score of the matching sequence pair; if the interconnection score is greater than a preset threshold, merging the corresponding environmental information; otherwise, actively expanding the overlap of the matching sequence pair and recalculating the interconnection score until either the merging threshold is reached or the association is judged invalid, thereby generating the merged map information.
It should be noted that, the specific process of evaluating and merging data collected by different robots through the factor graph and AutoMerge framework and merging map segments when specific conditions are satisfied in the embodiment of the present application is as follows:
In step 1, in a specific implementation process, a factor graph G = {V, E} may be maintained by MUI-TARE to represent the connections between robots: a node in V represents the map segment obtained by a robot, and an edge in E = {ω_{i,j}} represents the connection between these segments, where ω_{i,j} denotes the interconnection between segments v_i and v_j; its calculation is shown in Formula 1. In AutoMerge, for place recognition, the interconnection between two segments is determined by the overlap length (i.e., the number of matching frames) and the difference between the place recognition descriptors: the smaller the descriptor difference, the higher the score, and the longer the overlap length, the more stable the interconnection.
Wherein F_i, F_j represent the stitching of the feature descriptors extracted from the overlapping parts, L_{i,j} represents the overlap length between v_i and v_j, c_w and σ are hyper-parameters, with c_w adjusting the dependence of ω_{i,j} on the overlap length, and ε is a constant avoiding division by zero. Only segments v_i, v_j whose interconnection score ω_{i,j} is greater than a set threshold are merged, and the relative positions of the merged robots can then be determined through a transformation matrix;
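To make the role of the hyper-parameters concrete, the following Python sketch evaluates an interconnection score of the kind described above. Since Formula 1 itself is not reproduced in the text, the functional form (a Gaussian descriptor-similarity term multiplied by a saturating overlap-length weight) is an assumption for illustration only; the names interconnection_score, c_w, sigma and eps follow the symbols above.

```python
import math

def interconnection_score(F_i, F_j, L_ij, c_w=1.0, sigma=1.0, eps=1e-6):
    # Assumed functional form (the patent's Formula 1 is not reproduced in
    # the text): similarity decays with the distance between the stitched
    # descriptors F_i and F_j, while the weight saturates with the overlap
    # length L_ij; c_w tunes the dependence on overlap, eps avoids a
    # division by zero.
    desc_dist = math.dist(F_i, F_j)
    similarity = math.exp(-desc_dist ** 2 / (sigma ** 2 + eps))
    overlap_weight = 1.0 - math.exp(-L_ij / c_w)
    return similarity * overlap_weight
```

With this shape, identical descriptors and a long overlap give a score near 1, matching the qualitative behavior stated in the text (longer overlap, more stable interconnection).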
Step 2: a stable interconnection requires a longer overlapping distance, which is difficult to achieve while the robots explore independently; a shorter matching-frame sequence is therefore needed to detect overlap, while data-association errors and large errors in the transformation matrix must still be avoided. The embodiment of the application therefore adopts an adaptive merging strategy: with an active method, a path is planned to verify the data association and expand the overlap length. Once a new interconnection is detected during exploration, the robot closer to the overlap area performs adaptive merging; the robot verifies and strengthens the interconnection by actively exploring the potential overlap area, and the merging process continues until either the interconnection is proven to be a data-association error or the sub-maps are successfully merged;
Step 3: MUI-TARE periodically calls AutoMerge to match the point clouds from the robots and returns matching sequence pairs. The embodiment of the application traverses all matching sequence pairs, verifies the internal connections, calculates the transformation matrix, and estimates the merging distance A_dist, calculated as shown in Formula 2;
Where ω_{i,j} represents the current internal connection, ω_thresh represents the internal-connection threshold that triggers map merging, F_i and F_j are the expected feature descriptors when the overlap length reaches the threshold, and A_dist is the estimated distance the agent needs to travel to establish a stable internal connection;
Step 4: a complete exploration path is planned by combining a local planner and a global planner. The local planner first samples the viewpoints covering the surface of the subspace and plans the shortest path by solving a traveling salesman problem (TSP); the global planner calculates a set of coarse global paths for the N robots in the sub-map, so that they travel through the "exploring" subspaces while attempting to minimize the longest travel distance among all paths.
Then, based on the A_dist calculated in step 3, the embodiment of the application uses an active-merging planner to plan a path so that the merging agent visits the A_dist frame viewpoints of the overlapping agent, navigating with a greedy strategy to the nearest uncovered frame viewpoint; the erroneous data associations that may occur during active merging are handled through a fault detection mechanism.
Wherein ω_{i,j}(t) is the current internal connection value, ω_{i,j}(t_0) is the internal connection value before the agent starts active merging, and C_t is the hyper-parameter that controls the change of the verification gain G(i, j, t) over time;
During active merging, if an erroneous data association is encountered, active merging will not increase the overlap area, so the verification gain decreases as the active-merging duration grows. When the verification gain falls below a certain threshold, the server notifies the robot to exit active merging and marks the internal connection as invalid; otherwise, the agent continues active merging until the sub-map merging requirement is met. Once a reliable inter-sub-map internal connection is established, the sub-maps are merged into a single sub-map, namely the final merged map information, providing a reliable data basis and theoretical support for the construction of the subsequent global roadmap.
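The adaptive-merging control flow of Steps 2 and 3 can be sketched as follows in Python. The function name, the simple gain proxy (omega - omega_start) / t, and the thresholds are illustrative assumptions rather than the patent's actual formulas; the sketch only shows the decision logic: merge when the interconnection score passes the threshold, abandon when the verification gain collapses.

```python
def adaptive_merge(score_fn, omega_thresh, gain_thresh, max_steps=100):
    """Hedged sketch of the adaptive sub-map merging loop.

    score_fn(step) -> interconnection score omega after `step` active-merging
    steps; names and thresholds are illustrative, not the patent's API."""
    omega_start = score_fn(0)
    for t in range(1, max_steps + 1):
        omega = score_fn(t)
        if omega >= omega_thresh:
            return "merge"                      # stable interconnection: merge sub-maps
        gain = (omega - omega_start) / t        # assumed verification-gain proxy
        if gain < gain_thresh:
            return "invalid"                    # mark as data-association error and exit
    return "timeout"
```

A growing score eventually triggers a merge; a flat score (no new overlap, as with an erroneous association) drives the gain below the threshold and the merge is abandoned.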
In step S103, a minimum viewpoint set of each of the at least two target robots is calculated according to a preset coverage standard and a dynamic viewpoint rewarding mechanism, a global route map is constructed based on the combined map information and the minimum viewpoint set, and communication costs of each preset communication condition under a plurality of preset communication conditions are calculated by using the global route map, so that each target robot is controlled to execute a corresponding traveling operation according to the communication costs.
Furthermore, the embodiment of the application calculates, using the coverage standard of the sensor and a dynamic viewpoint rewarding mechanism, a minimum viewpoint set covering all unperceived surfaces so as to optimize the level of detail of the map. The embodiment of the application then builds a global roadmap from the known trajectory-segmented space to ensure that paths are collision-free and connected, optimizes path selection with an efficient algorithm, and synchronizes the states of multiple robots to solve the path-planning problem under limited communication. The embodiment of the application then compares the costs under different communication assumptions to determine whether to execute a pursuit strategy that optimizes information sharing; if the pursuit is unsuccessful, a preset convergence strategy is adopted to ensure information-communication efficiency.
In addition, the embodiment of the application optimizes information sharing and improves communication efficiency through a multi-machine formation pursuit strategy, finally reduces algorithm running time and improves environment exploration efficiency.
Optionally, in one embodiment of the present application, calculating the minimum viewpoint set of each of the at least two target robots according to a preset coverage standard and a dynamic viewpoint rewarding mechanism includes: determining the coverage surface-point standard and the local range space of the target sensor; performing a viewpoint reward comparison operation based on the coverage surface-point standard and the local range space to obtain a comparison result; selecting a spatial initial viewpoint according to the comparison result, and calculating the uncovered surface of the spatial initial viewpoint so as to select subsequent viewpoints through the uncovered surface; and dynamically adjusting the subsequent viewpoint rewards and repeating the viewpoint reward comparison operation according to the dynamically adjusted rewards to generate the minimum viewpoint set.
In a specific implementation process, an embodiment of the present application constructs a local map-planning path based on viewpoint sampling, as shown in fig. 4. It specifies the standard for the sensor coverage surface and the configuration space within a local range, and dynamically adjusts viewpoint rewards according to the submodularity of viewpoint sampling to solve a set-cover problem, thereby optimizing the selection of additional viewpoints to maximize the total coverage area. The specific process is as follows:
Step 1, defining the standard of the sensor coverage surface point as follows:
|p_s - p_v| ≤ D
Wherein p_s is the center point of a surface patch in the exploration space, n_s is its normal pointing toward the free-space side, and v is the viewpoint; D and T are two constants that limit the relative distance and viewing direction of the surface patch with respect to the viewpoint, with D set shorter than the physical sensor range during implementation, which ensures that the surface is well perceived. Furthermore, the embodiment of the application sets a local planning range H; the traversable subspace within H is determined taking collisions and connectivity into account, together with the corresponding configuration space that accounts for rotation and translation;
Step 2: as more viewpoints are selected, the reward for selecting additional viewpoints decreases, mainly because nearby viewpoints have overlapping views and the same surface can be perceived from multiple viewpoints. The reward of a viewpoint therefore depends on the previously selected viewpoints: for the i-th selected viewpoint v_i, the reward over the still-uncovered surface perceived at v_i is adjusted as follows:
Wherein the embodiment of the application formulates the task as a set-cover problem: find a minimum set of viewpoints covering the perceived but uncovered surfaces, where each viewpoint contributes the uncovered surface perceivable from it, which exhibits a submodular structure. The embodiment of the application selects an initial viewpoint set and then dynamically adjusts the reward of v_i based on the contribution of each viewpoint to the uncovered area, so as to optimize the selection of additional viewpoints and maximize the total coverage area.
Thus, embodiments of the present application calculate a minimum set of viewpoints to cover all unperceived surfaces based on the coverage criteria of the sensor and the dynamic viewpoint rewarding mechanism, thereby optimizing the level of detail of the map.
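The set-cover formulation above admits the standard greedy heuristic, which also illustrates the dynamic reward: each viewpoint's reward is recomputed as the surface it would add beyond what is already covered. This Python sketch is an illustrative stand-in for the embodiment's solver; the covers mapping and the data layout are assumptions.

```python
def select_min_viewpoints(viewpoints, covers, max_views=50):
    """Greedy set-cover sketch for the minimum viewpoint set.

    viewpoints: iterable of viewpoint ids
    covers: dict mapping viewpoint id -> set of surface-patch ids it perceives
    """
    uncovered = set().union(*covers.values())
    selected = []
    while uncovered and len(selected) < max_views:
        # dynamic reward: surfaces a viewpoint adds beyond what is covered
        best = max(viewpoints, key=lambda v: len(covers[v] & uncovered))
        gain = covers[best] & uncovered
        if not gain:
            break
        selected.append(best)
        uncovered -= gain
    return selected
```

The greedy rule mirrors the diminishing-reward behavior described in Step 2: once nearby surfaces are covered, overlapping viewpoints contribute little and are no longer selected.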
Optionally, in one embodiment of the application, constructing the global roadmap based on the combined map information and the minimum viewpoint set comprises: constructing a space-subdivision model and dividing the target space into a plurality of sub-target spaces according to the space-subdivision model; marking the sub-target spaces; acquiring candidate viewpoints based on the marked sub-target spaces, the minimum viewpoint set, a preset resolution and a preset reuse strategy; and constructing the global roadmap according to the candidate viewpoints and the historical motion trajectory of each target robot.
It should be noted that the embodiment of the present application may construct a global map-planning path based on space subdivision, as shown in fig. 5. First, a space-subdivision model is defined so as to construct a global roadmap along the past trajectory; viewpoints of collision-free and connectable unknown areas are obtained through viewpoint reuse, and global path selection is optimized by a probabilistically complete algorithm. In addition, under the limitation of multi-machine communication, the same initial state is adopted and new information is shared by multi-hop communication to solve the vehicle routing problem (VRP). The specific process is as follows:
Step 1: a space-subdivision model is defined, namely the space outside the local range H is divided into equal-sized cuboid subspaces. Each subspace stores the covered and uncovered surfaces formed in the exploration process; these data are stored only in, and remain in, their subspace, while the data in H are updated as the exploration proceeds. Each subspace holds one of the states "unexplored", "exploring" and "explored": it is "unexplored" if it contains no covered or uncovered surfaces, "exploring" if it contains any uncovered surfaces, and "explored" if it contains only covered surfaces. Global planning considers the "exploring" subspaces, with H_E denoting an exploring subspace and the set of exploring subspaces considered as a whole;
Step 2: the global roadmap is built. Nodes represent physically traversable positions in the environment, and a pair of nodes is connected by an edge if a traversable path exists between them. To avoid redundant collision checking and path-planning computation, the viewpoints obtained by solving the viewpoint-sampling problem are reused: since viewpoint candidates are uniformly generated within the traversable space and already take collision and connectivity into consideration (in other words, each viewpoint candidate is collision-free and connected to any other viewpoint candidate reachable by a traversable path), a subset of the viewpoint candidates is reused as nodes in the global roadmap, avoiding redundant collision and connectivity checks. To ensure the sparsity of the roadmap, viewpoint candidates are randomly sampled at a fixed resolution so as to maintain a sufficiently connected roadmap without incurring high computational costs; embodiments of the application may use other data structures or route-planning algorithms instead of the global roadmap, as long as they are probabilistically complete in calculating the shortest path between two locations;
Thus, it can be appreciated that, in order to find a global path through the current viewpoint v_current and the centroid of each exploring subspace, the embodiment of the application constructs a global roadmap in the traversable space along the past trajectory of the vehicle, with traversable nodes connected by edges. Redundant computation for collision checking and path planning is avoided through viewpoint reuse, yielding viewpoints of collision-free and connectable unknown areas; finally, fixed-resolution random sampling is adopted for the viewpoint candidates to ensure roadmap connectivity at an acceptable computational cost, and the A* algorithm or another probabilistically complete algorithm is used to optimize the selection of the global path and reduce redundant collision-checking and path-planning computation;
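Since the embodiment permits A* or any probabilistically complete shortest-path method over the roadmap, a plain Dijkstra search (equivalent to A* with a zero heuristic) over an adjacency-list roadmap can serve as a minimal sketch; the roadmap structure used here is an assumed representation of the reused viewpoint candidates and their traversable edges.

```python
import heapq

def shortest_path(roadmap, start, goal):
    """Dijkstra over the global roadmap (sketch; A* with a heuristic would
    also fit). roadmap: dict node -> list of (neighbor, edge_cost) built
    from reused viewpoint candidates."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            path = [u]
            while u in prev:        # walk predecessors back to start
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue                # stale heap entry
        for v, w in roadmap.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []
```

Because only reused, already collision-checked viewpoint candidates appear as nodes, no per-query collision check is needed inside the search, matching the reuse argument above.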
Step 3: a multi-robot exploration model under limited communication is defined. All robots are initialized in the same coordinate system, the starting positions of the other robots are known in advance, and robots can communicate when physically within a certain distance of each other; in addition, robots can communicate through multiple hops, with information relayed by intermediate robots. All robots share their current position, the states of the explored subspaces and their traversability, sharing only new information each time to avoid redundant transmission. The optimal solution of the VRP is then calculated as follows:
Wherein the set of global paths of the N robots with the lowest cost is the optimal solution; it is shared by all robots and used as an initial guess for further optimization.
Optionally, in one embodiment of the present application, calculating the communication cost of each preset communication condition among a plurality of preset communication conditions by using the global roadmap, so as to control each target robot to execute a corresponding travelling operation according to the communication cost, includes: determining the environmental-awareness information of each target robot and constructing an information synchronization model according to the environmental-awareness information; when target robot i and target robot j exceed a preset communication range, performing a no-communication evaluation, a half-communication evaluation and a hypothetical-communication evaluation for target robot i according to the information synchronization model, and respectively calculating the global paths and communication costs corresponding to the no-communication, half-communication and hypothetical-communication evaluations, where i and j are positive integers; calculating the pursuit cost of target robot i according to the global paths and the communication costs, and constructing a pursuit cost comparison model based on the pursuit cost; and, when M target robots are located within the communication range among the N robots in total, selecting communication targets from the remaining N-M target robots by using a preset random sampling strategy, where M and N are positive integers and M < N, so as to determine the travelling operation of each target robot by combining the pursuit cost comparison model, and to control each target robot accordingly.
In the actual implementation process, as shown in fig. 6, the embodiment of the present application may set up a multi-robot information synchronization model and calculate the global path-planning cost under no communication and under hypothetical communication. If the cost under hypothetical communication is lower, the additional cost generated by a pursuit is further calculated, and if this additional cost still keeps the total below the global cost under no communication, a pursuit decision is taken; when the pursuit fails, effective information exchange is ensured based on the convergence and communication strategy. The specific process is as follows:
Step 1: the information synchronization model defines a robot's knowledge of the environment as the set of explored and exploring global subspaces; each robot tracks both its own knowledge and its knowledge of the other robots. The knowledge of robot i about what robot j knows of the environment is tracked with a dedicated symbol, where i and j are integers between 1 and N, and the case i = j denotes the knowledge of robot i itself;
Prior to exploration, robot i initializes its prior knowledge of robot j, e.g., as a single global subspace containing the start position of robot j. When two robots i and j can communicate with each other, they synchronize their knowledge, with the updated knowledge denoted by the superscript t, represented by the following equation:
At the same time, the knowledge understanding of other robots is updated, expressed as:
Where k is an integer between 1 and N with k ≠ i, j; the update combines the information of the two robots, and when updating the state of a global subspace, the "explored" state takes priority over the "exploring" state;
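A minimal sketch of the pairwise knowledge update follows, assuming each robot's knowledge is stored as a mapping from subspace id to one of the three states, with "explored" taking priority over "exploring" as stated above; the dict-based representation is an assumption for illustration.

```python
def sync_knowledge(k_i, k_j):
    """Merge two robots' knowledge of the global subspaces (sketch).

    k_i, k_j: dicts mapping subspace id -> state in
    {'unexplored', 'exploring', 'explored'}; the more advanced state wins."""
    rank = {'unexplored': 0, 'exploring': 1, 'explored': 2}
    merged = dict(k_i)
    for sub, state in k_j.items():
        if rank[state] > rank.get(merged.get(sub, 'unexplored'), 0):
            merged[sub] = state
    return merged
```

Both robots would adopt the merged result, and the same rule can be applied to their stored knowledge of third robots k, as in the equation above.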
Step 2: the communication cost comparison model. When robot i and robot j exceed the communication range, robot i must decide whether to pursue robot j for communication by evaluating three options (no communication, half communication and hypothetical communication), distinguishing the global subspaces that are currently known only to robot i from those known only to robot j. Without communication, robot i plans a global path whose cost is expressed as:
Wherein the no-communication path does not visit the subspaces known only to robot j; under hypothetical communication, by contrast, the knowledge of robot j is assumed to be available, such that:
Robot i may then plan a global path whose cost is expressed as:
In the case of half communication, the relevant space is subdivided and half of the regions or sampled points are randomly selected, ensuring that robot i can still plan an effective global path on the basis of incomplete data; the half-communication cost is expressed as:
The embodiment of the application first compares the half-communication cost with the no-communication cost c; if it is less than c, communicating with robot j may be considered beneficial to robot i. Otherwise, the hypothetical-communication cost c+ is compared with c, and if c+ is less than c, it is beneficial for robot i to communicate with robot j, which can then share the subspaces it has visited. The pursuit, however, requires robot i to deviate from its current exploration path, thereby creating additional cost; the embodiment of the application therefore examines the cost robot i incurs when chasing robot j and evaluates whether this cost is justified by the potential benefit;
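The cost comparison of Steps 2 and 3 can be condensed into the following hedged Python sketch. The way the pursuit detour is combined with the communication costs (min(c_half, c_hypo) + pursuit_extra) is an illustrative assumption; the patent's actual total-cost formula is given by its own equations.

```python
def pursuit_decision(c_no_comm, c_half, c_hypo, pursuit_extra):
    """Decide whether robot i should pursue robot j (sketch).

    Pursue only if some communication option beats the no-communication
    cost AND the total cost including the detour stays below it."""
    beneficial = c_half < c_no_comm or c_hypo < c_no_comm
    if not beneficial:
        return False
    total = min(c_half, c_hypo) + pursuit_extra  # assumed total-cost form
    return total < c_no_comm
```

This mirrors the two-stage check in the text: first whether communication looks beneficial at all, then whether the deviation cost of the chase is justified by that benefit.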
Step 3: using the pursuit cost comparison model, the cost for robot i to track robot j and share the related information is estimated. Assuming robot j follows its global path, by estimating the time at which robot j reaches each subspace, robot i can plan a path that maximizes the likelihood of encountering robot j while reducing travel time. Embodiments of the application approximate this by solving a TSP constrained by time windows, which computes a global path along which robot i departs from its current location, visits the candidate meeting subspaces, and eventually returns. Denoting the path taken by robot i to track robot j accordingly, the total cost in view of this tracking is given by the following formula:
Assuming a successful information exchange between the two robots, if this total cost is less than the current cost c, robot i should interrupt its exploration and follow the tracking path. After the information synchronization is completed, robot i calculates the difference ΔI between the information obtained by the pursuit and the currently known information, comprising the increment ΔG_exp of the explored area, the decrement ΔG_unexp of the unexplored area, and the refinement ΔG_refine of the information of the explored area;
Whether to proceed to a new exploration area is then decided based on the newly obtained information ΔI and the current resource status R; the decision is guided by the ratio ΔI/R of information gain to resource consumption, so as to maximize information-collection efficiency if resources allow;
After completing the exploration of the known area, a robot should catch up with other robots to obtain more information about the environment or to hand over information about the previously explored space, determining whether and where to catch up with a given robot as described above; the main objective is to minimize the overall exploration time. In this case, however, the focus of the information exchange shifts to finding new exploration areas or avoiding repeated visits to already-explored space, instead of sharing the workload of exploring a large space as is typically the case;
Step 4: the pursuit correction model. When a total of M robots are within the communication range among the N robots, a communication target is iteratively selected from the remaining N-M robots by using a random sampling method;
Specifically, there are 2^(N-M) possible combinations of target robots to communicate with. The selection of target robots is distributed over planning cycles, with each planning cycle selecting, without replacement, only a small number of combinations from the total of 2^(N-M). A small number of robots (one or two) are targeted to improve the non-communication strategy, so the selection is biased to prioritize small target sets: combinations with few targets are more likely to be selected than combinations with many targets;
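The biased, without-replacement sampling of target combinations in Step 4 can be sketched as follows; the weight bias ** -len(subset), which makes small target sets more likely than large ones, is an assumed concrete choice for the bias described above.

```python
import itertools
import random

def sample_target_combinations(candidates, n_samples, bias=2.0, rng=None):
    """Sample target-robot combinations without replacement, biased toward
    small subsets (sketch; weights are an illustrative assumption)."""
    rng = rng or random.Random(0)
    subsets = [frozenset(c) for r in range(1, len(candidates) + 1)
               for c in itertools.combinations(candidates, r)]
    weights = [bias ** -len(s) for s in subsets]
    chosen = []
    while subsets and len(chosen) < n_samples:
        s = rng.choices(subsets, weights=weights)[0]
        idx = subsets.index(s)            # remove so sampling is without replacement
        subsets.pop(idx)
        weights.pop(idx)
        chosen.append(s)
    return chosen
```

Spreading calls to this sampler across planning cycles reproduces the distributed, small-subset-first selection described in the text.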
It will be appreciated that in the worst case the pursuit strategy degenerates to a convergence-based strategy, but due to the symmetry of robot reasoning this is rare: target robots that have found enough new information will also attempt pursuit and tend to meet the pursuing robots halfway; otherwise, the target robots will not stray far from their original exploration paths, and the pursuing robots can easily find them.
Therefore, the embodiment of the application uses adaptive fusion and advanced collaborative strategies to construct a dual-resolution global-local map of the three-dimensional space, improving map accuracy and exploration-time efficiency, reducing computational running time, and optimizing information sharing through the multi-machine formation pursuit strategy to improve the overall efficiency of the system. For example, in environments including but not limited to large-scale spaces, undulating terrain, complex topological structures, cluttered obstacles and unstructured environments, the adaptive-merging determination and verification technique in the MUI-TARE structure reduces the exploration time by 7%-56% and accelerates planning by 13%-52% compared with existing methods, and can effectively fuse and enhance the lidar data on multiple vehicles while reducing the errors of the overlapping parts to ensure efficient merging of the sub-maps. In addition, compared with the existing NBVP, GBP and MBP methods, the running time is reduced by 50% and always kept below 1 second, and the exploration-time efficiency is improved by 80%, allowing faster and more comprehensive exploration in unknown environments; compared with traditional strategy-following methods, the multi-robot pursuit technique improves the average exploration efficiency by 39% over thousands of simulation runs. The application thus effectively integrates the advantages of the lidar sensors on multiple vehicles and, through efficient map merging and communication navigation algorithms, overcomes the limitations of each single robot and of traditional methods, providing high-precision, high-reliability map construction and navigation in various complex environments.
In addition, the application constructs a corresponding unmanned autonomous laser movement measuring system according to the unmanned autonomous laser movement measuring method, and the unmanned autonomous laser movement measuring system mainly comprises a sensor module, a data transmission module, a motion control module, a data processing module, a data processor and other components.
The sensor module is used for acquiring laser radar data of map building navigation and comprises a multi-machine multi-laser radar device;
the data transmission module is used for connecting each sensor with the data processing module;
the motion control module is used for realizing a planned path and comprises a four-wheel-drive ROS robot device;
The data processing module is used for processing the data acquired by the sensor module and carrying out unmanned autonomous laser movement measurement;
The data processor comprises a sub-map self-adaptive fusion module, a dual-resolution map construction module, a chase strategy cost analysis module and the like.
Specifically, the sub-map adaptive fusion module is configured to combine only segments whose connection scores are greater than a set threshold. A stable connection requires a longer overlapping distance, so the robot verifies the data association and expands the overlap length by planning a path; during exploration, MUI-TARE periodically calls AutoMerge to match the point clouds from the robots and returns matching sequence pairs, traverses all matching sequence pairs, verifies the internal connections, calculates the transformation matrix and estimates the merging distance A_dist. If the internal-connection verification fails, a fault detection mechanism is started and a rollback strategy is adopted until the sub-map merging condition is met. The merging distance A_dist is calculated as follows:
Where ω_{i,j} represents the current internal connection, ω_thresh is the internal-connection threshold that triggers map merging, F_i and F_j are the expected feature descriptors when the overlap length reaches the threshold, and A_dist is the estimated distance that the agent needs to travel to establish a stable internal connection. To evaluate the relationship between the connection increase and the active-merging duration, the verification gain is required:
Wherein ω_{i,j}(t) is the current internal connection value, ω_{i,j}(t_0) is the internal connection value before the agent starts active merging, and C_t is the hyper-parameter that controls the change of G(i, j, t) over time.
In the sub-map adaptive fusion module, when an erroneous data association is encountered, active merging does not increase the overlapping area, so the verification gain decreases as the active-merging duration grows. When the verification gain drops below a set threshold, the server notifies the robot to exit active merging and the internal connection is marked invalid; otherwise the agent continues active merging until the sub-map merging requirement is met. Once a reliable inter-sub-map connection is established, the sub-maps are merged into a single map;
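The exit/continue logic above can be sketched in a few lines; the function names, argument order, and threshold values below are illustrative assumptions, not the patent's implementation:

```python
def verification_gain(w_cur, w_pre, c_t, t):
    # Hedged form of G(i, j, t): gain grows with the connection increase since
    # active merging began and decays as the duration t grows (scaled by c_t).
    return (w_cur - w_pre) / (1.0 + c_t * t)

def active_merge_step(w_cur, w_pre, w_thresh, c_t, t, gain_thresh):
    """Decide whether to merge the sub-maps, abort (rollback), or continue."""
    if w_cur >= w_thresh:
        return "merge"     # stable internal connection established
    if verification_gain(w_cur, w_pre, c_t, t) < gain_thresh:
        return "abort"     # mark the internal connection invalid, exit active merge
    return "continue"      # keep extending the overlap
```

The three return values correspond to the three outcomes the text describes: merging once the threshold is reached, rolling back when the gain stalls, and otherwise continuing the active merge.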
a dual-resolution map construction module, configured to define a local viewpoint sampling criterion that ensures the local surface is well perceived: a viewpoint is defined and selected by the distance between a surface point p_s in space and the viewpoint p_v, and by the angular relationship with the normal n_s. The conditions under which the viewpoint v covers a surface patch are as follows:
|p_s - p_v| ≤ D and ∠(n_s, p_v - p_s) ≤ T
where D and T are two constants that limit, respectively, the distance and the viewing angle of the surface patch relative to the viewpoint. In the corresponding configuration space, a minimum set of viewpoints is sought that covers the surface to be explored. The viewpoint sampling problem exhibits sub-modularity: the reward of a viewpoint depends on the previously selected viewpoints and therefore requires dynamic adjustment, as follows:
where the first term denotes the uncovered surface perceived from viewpoint v_i and the second term is the total uncovered surface. In addition, in a multi-robot system, path planning is optimized as a vehicle routing problem to achieve effective coordination and information sharing among robots; especially in communication-limited environments, multi-hop communication and dynamic information synchronization enhance the collaborative exploration capability of the system;
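The sub-modular viewpoint selection described above amounts to a greedy set-cover loop in which each viewpoint's reward is re-computed after every pick. A minimal sketch, assuming a unit-length normal n_s and an angle-based form of the second coverage condition (the exact inequality is not reproduced in the text):

```python
import math

def covers(p_v, p_s, n_s, D, T):
    # Distance condition from the text: |p_s - p_v| <= D.
    # Angle condition (assumed form): the angle between the patch normal n_s
    # and the patch-to-viewpoint direction is at most T radians.
    d = tuple(a - b for a, b in zip(p_v, p_s))
    dist = math.dist(p_v, p_s)
    if dist > D:
        return False
    if dist == 0.0:
        return True
    cos_a = sum(x * y for x, y in zip(n_s, d)) / dist  # n_s assumed unit-length
    return math.acos(max(-1.0, min(1.0, cos_a))) <= T

def greedy_viewpoints(viewpoints, surfaces, D, T):
    """Greedy selection exploiting sub-modularity: each viewpoint's reward is
    the number of still-uncovered patches it covers, re-evaluated after every
    pick (the dynamic adjustment of viewpoint rewards in the text)."""
    uncovered = set(range(len(surfaces)))
    chosen = []
    while uncovered:
        best, best_gain = None, 0
        for i, p_v in enumerate(viewpoints):
            gain = sum(1 for s in uncovered
                       if covers(p_v, surfaces[s][0], surfaces[s][1], D, T))
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # remaining patches are unreachable from any viewpoint
            break
        chosen.append(best)
        uncovered -= {s for s in uncovered
                      if covers(viewpoints[best], surfaces[s][0], surfaces[s][1], D, T)}
    return chosen
```

Each surface is given as a (center point, normal) pair; the loop stops when every patch is covered or no candidate viewpoint adds coverage.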
a chase strategy cost analysis module, in which a chase-decision cost-benefit model is constructed to decide whether a robot should chase another robot for information exchange. This requires evaluating the potential benefit and cost of the chase; the costs before and after communication are compared using the following formulas:
Assuming that communication takes place, the cost is calculated as:
In the case of half communication, the cost is calculated as:
where the first term is the cost of the global paths planned for the N robots, the second is the cost of the global path planned for a robot that has acquired the other robots' knowledge under hypothetical communication, and the third is the cost when the robot has acquired half of the other robots' knowledge. If the hypothetical-communication cost is less than c, the benefit of pursuit and communication exceeds its cost; otherwise the half-communication cost is compared with c;
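The staged comparison just described can be sketched as follows; the function and variable names (c_no_comm, c_assume, c_half) are illustrative assumptions, and path cost is taken here as the summed Euclidean leg length of each robot's global path:

```python
import math

def global_path_cost(paths):
    # Cost of a set of per-robot global paths: sum of Euclidean leg lengths.
    return sum(math.dist(a, b) for path in paths for a, b in zip(path, path[1:]))

def should_pursue(c_no_comm, c_assume, c_half):
    """Staged comparison from the text: first compare the hypothetical
    full-communication cost with the no-communication cost c; only if that
    comparison fails is the half-communication cost compared with c."""
    if c_assume < c_no_comm:
        return "pursue"   # pursuit and communication benefits exceed the cost
    if c_half < c_no_comm:
        return "pursue"   # half communication alone already pays off
    return "stay"
```

In practice the three costs would come from the three global path plans the text describes (no communication, half communication, hypothetical communication).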
Further, in the information synchronization model, robot information synchronization is performed as follows:
where the operator denotes the information-merging operation, which ensures that the global subspace knowledge of the participating robots is up to date and complete after each communication.
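A minimal sketch of the merge operator, assuming each robot's subspace knowledge is represented as a set; the union form is an assumption consistent with the up-to-date-and-complete property after each communication:

```python
def sync(knowledge_i, knowledge_j):
    """Merge two robots' explored-subspace knowledge: after communication,
    both participants hold the union of what either knew before, so their
    global knowledge is identical and complete."""
    merged = knowledge_i | knowledge_j
    return merged, merged  # both robots leave the exchange with the same view
```

With richer knowledge (poses, sub-maps), the union would be replaced by the corresponding map-merging operation, but the symmetry of the result is the key property.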
After determining through the chase calculation model that pursuit is beneficial, the embodiment of the application calculates the path required for robot i to track robot j, approximating it as a TSP with time-window constraints:
where the term denotes the expected path along which robot i tracks robot j.
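The time-window constraint can be illustrated with a simple feasibility check: robot i can only intercept robot j at a planned waypoint it is able to reach before robot j passes it. The trajectory representation below is a hypothetical sketch, not the patent's TSP formulation:

```python
import math

def chase_waypoint(p_i, speed_i, traj_j):
    """Pick the earliest waypoint (t, p) on robot j's planned trajectory that
    robot i, starting at p_i with speed speed_i, can reach by time t; arriving
    later would miss that waypoint's time window."""
    for t, p in sorted(traj_j):
        if math.dist(p_i, p) <= speed_i * t:
            return t, p
    return None  # no waypoint is reachable within its window
```

A full time-window TSP would optimize the order of several such rendezvous points; this sketch only shows the window-feasibility test that constrains it.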
In addition, as shown in fig. 7, the MID-360 multi-line laser radar in the embodiment of the application is designed for accurate environmental scanning, with an effective working range of up to 70 m at 80% reflectivity and a minimum blind zone of 0.1 m. The 360° horizontal field of view and the −7° to +52° vertical field of view ensure full spatial coverage. The lidar outputs a point cloud at a high rate of 200,000 points per second with a typical frame rate of 10 Hz, is connected via 100BASE-TX Ethernet, and synchronizes data through IEEE 1588-2008 (PTPv2) or GPS. Advanced functions include interference immunity and an extremely low false-alarm rate in strong sunlight. The integrated ICM40609 IMU improves positioning accuracy, which is critical in dynamic environments.
The HEPBURN P27J data transmission module operates in the 1.40 GHz to 1.46 GHz frequency range and supports Mesh networking of up to 32 nodes. With an RF bandwidth of 4/8/10/14/20 MHz, it supports data streams of up to 90 Mbps and multi-hop relaying over up to 9 hops; single-hop latency is below 7 ms and network-access time is below 1 s. The device runs stably at an air-to-ground line-of-sight range of 30 km and a ground line-of-sight range of 1-2 km, weighs less than 175 g, provides a variety of interfaces, and is suitable for long-distance, high-bandwidth data transmission requirements.
The four-wheel-drive motion control module provides a gear ratio of 1:27 and a top speed of 1.82 m/s, carries a maximum payload of 12 kg, and has a self-weight of 7.5 kg. Its dimensions are 381 mm x 466 mm x 152 mm, with a turning radius of 0 m. A 22.2 V, 5000 mAh battery supports up to 6.5 hours of run time with a 2-hour charge time. Solid rubber tires with a 125 mm tread, equipped with an S20F 20 kgf load sensor, are driven by MD36N 35 W high-torque motors. The module supports wireless data transmission within 500 m, integrates CAN, serial-port and app remote control, is ROS-compatible, provides an OLED display and light and sound prompts, offers safety protection measures, and supports real-time monitoring and remote operation, making it suitable for efficient operation in changing environments.
The data processing module is located at the center of the unmanned device and consists of an embedded high-performance CPU and related processors. It adopts an Intel Core i7-7567U CPU with 2 cores and 4 threads, a base frequency of 3.5 GHz and turbo boost up to 4.0 GHz, with integrated Intel Iris Plus Graphics 650 operating between 300 and 1100 MHz, suitable for high-performance environments focused on energy efficiency and able to provide stable performance (CPU benchmark tested) for general computing and graphics tasks, together with a 32 GB LPDDR5 memory module. The module runs the Ubuntu operating system and is equipped with the unmanned autonomous laser movement measurement method, which processes in real time the data received from the data acquisition module.
Therefore, the unmanned autonomous laser movement measurement system can also adopt adaptive fusion and advanced cooperative strategies to construct a dual-resolution global-local map of three-dimensional space, improving map accuracy and exploration time efficiency. The system optimizes information sharing through a multi-robot formation pursuit strategy, evaluates and merges data collected by different robots through factor graphs and the AutoMerge framework, and optimizes global path selection through a dynamic viewpoint rewarding mechanism and a space subdivision technique. It can therefore be applied to robot collaborative operation, post-disaster rescue, environment monitoring and similar fields, offers significant advantages in improving the accuracy, adaptability and efficiency of autonomous mobile measurement systems, and is particularly suitable for automatic exploration and monitoring tasks in highly dynamic, changeable environments.
According to the unmanned autonomous laser movement measurement method, environmental information of at least two target robots is acquired and their viewpoint sets are determined. The environmental information of each of the at least two target robots is evaluated based on a preset factor graph and the AutoMerge framework to obtain an evaluation result, and according to that result the environmental information of the robots is merged under preset merging conditions to generate merged map information. The minimum viewpoint set of each robot is calculated according to a preset coverage standard and a dynamic viewpoint rewarding mechanism, a global route map is built from the merged map information and the minimum viewpoint sets, and the communication cost of each of several preset communication conditions is calculated using the global route map, so that each target robot is controlled to perform the corresponding traveling operation according to the communication cost. The application integrates the advantages of the laser radars on multiple vehicles, overcomes the limitations of each single robot and of traditional methods through efficient map merging and communication navigation algorithms, and provides high-precision, high-reliability map construction and navigation solutions in various complex environments.
Next, an unmanned autonomous laser movement measuring apparatus according to an embodiment of the present application will be described with reference to the accompanying drawings.
Fig. 8 is a block schematic diagram of an unmanned autonomous laser movement measuring device in accordance with an embodiment of the present application.
As shown in fig. 8, the unmanned autonomous laser movement measuring apparatus 10 includes an acquisition module 100, an evaluation module 200, and a control module 300.
The acquiring module 100 is configured to acquire environmental information of at least two target robots, and determine viewpoint sets of the at least two target robots.
And the evaluation module 200 is configured to evaluate the environmental information of each of the at least two target robots based on a preset factor graph and the AutoMerge framework to obtain an evaluation result, and to combine the environmental information of the at least two target robots according to a preset combining condition based on the evaluation result, so as to generate combined map information.
The control module 300 is configured to calculate, based on the viewpoint set, a minimum viewpoint set of each of the at least two target robots according to a preset coverage standard and a dynamic viewpoint rewarding mechanism, construct a global route map based on the combined map information and the minimum viewpoint set, and calculate a communication cost of each preset communication condition under a plurality of preset communication conditions by using the global route map, so as to control each target robot to execute a corresponding traveling operation according to the communication cost.
Optionally, in one embodiment of the present application, the evaluation module 200 includes a first building unit, an extracting unit, a matching unit, a first judging unit, a second judging unit, a first merging unit, a second merging unit, and a traversing unit.
The first construction unit is used for constructing the factor graph according to the environment information of each target robot and the interconnection mode between each target robot.
And the extraction unit is used for acquiring the overlapping length between the environment information of each target robot and extracting the feature descriptors according to the overlapping length.
And the matching unit is used for executing matching operation on the environmental information of each target robot based on the AutoMerge framework, the feature descriptor and the factor graph, generating a matching sequence pair corresponding to each target robot, and carrying out validity verification on each matching sequence in the matching sequence pair to obtain a verification result of each matching sequence.
And the first judging unit is used for judging whether the verification result meets the preset validity requirement or not, and removing the corresponding matching sequence if the verification result does not meet the preset validity requirement.
And the second judging unit is used for calculating, if the verification result meets the preset validity requirement, the internal connection score of the matching sequence pair, and judging whether the internal connection score is greater than a preset score threshold value.
And the first merging unit is used for merging the environment information of each target robot to obtain merged sub-map information if the internal connection score is larger than the preset score threshold value.
And the second merging unit is used for calculating the merging distance of the matching sequence pair if the internal connection score is not greater than the preset score threshold value, and executing active merging operation on the matching sequence pair according to the merging distance so as to obtain an active merging result.
And the traversing unit is used for traversing the environment information of each robot based on the active merging result or the merging sub-map information until the preset exploration requirement is met so as to generate the merging map information.
Alternatively, in one embodiment of the application, the control module 300 includes a determination unit, an execution unit, a first calculation unit, and an adjustment unit.
Wherein the determining unit is used for determining the coverage surface point standard and the local range space of the target sensor.
And the execution unit is used for executing viewpoint rewarding comparison operation based on the coverage surface point standard and the local range space so as to obtain a comparison result.
And the first calculating unit is used for selecting a space initial viewpoint according to the comparison result and calculating an uncovered surface of the space initial viewpoint so as to select a subsequent viewpoint through the uncovered surface.
And the adjusting unit is used for dynamically adjusting the subsequent viewpoint rewards and repeatedly executing viewpoint rewards comparison operation according to the dynamically adjusted subsequent viewpoint rewards so as to generate the minimum viewpoint set.
Optionally, in one embodiment of the application, the control module 300 further comprises a modeling unit and a second building unit.
The modeling unit is used for constructing a space subdivision model, dividing a target space into a plurality of sub-target spaces according to the space subdivision model, and marking the plurality of sub-target spaces.
The second construction unit is used for acquiring candidate viewpoints based on the marked multiple sub-target spaces, the minimum viewpoint set, the preset resolution and the preset reuse strategy, and constructing the global roadmap according to the candidate viewpoints and the historical motion trail of each target robot.
Optionally, in one embodiment of the present application, the control module 300 further includes a third building unit, a second calculating unit, a third calculating unit, a selecting unit, and a tracking unit.
The third construction unit is used for determining the environment knowledge information of each target robot and constructing an information synchronization model according to the environment knowledge information.
And the second calculation unit is used for respectively carrying out non-communication evaluation, semi-communication evaluation and assumption communication evaluation on the target robot i according to the information synchronization model when the target robot i and the target robot j exceed a preset communication range, and respectively calculating global paths and communication costs corresponding to the non-communication evaluation, the semi-communication evaluation and the assumption communication evaluation, wherein i and j belong to positive integers.
And a third calculation unit, configured to calculate a tracking cost of the target robot i for tracking the target robot j according to the global path and the communication cost, and construct a specified chase cost comparison model based on the tracking cost and the communication cost.
And the selecting unit is used for selecting a communication target from the N-M target robots by adopting a preset random sampling strategy when the M target robots are positioned in the communication range of the N robots, wherein M, N belongs to a positive integer, and M is smaller than N.
The tracking unit is used for determining the tracking route of the M target robots for tracking the communication targets, acquiring the tracking results of the M target robots, and controlling each target robot to execute corresponding travelling operation based on the tracking route and the tracking results and combining a preset pursuit decision and a communication strategy.
It should be noted that the foregoing explanation of the embodiment of the unmanned autonomous laser movement measuring method is also applicable to the unmanned autonomous laser movement measuring device of this embodiment, and will not be repeated here.
The unmanned autonomous laser mobile measurement device provided by the embodiment of the application comprises an acquisition module, an evaluation module and a control module. The acquisition module is used for acquiring environmental information of at least two target robots and determining the viewpoint sets of the at least two target robots. The evaluation module is used for evaluating the environmental information of each target robot based on a preset factor graph and the AutoMerge framework to obtain an evaluation result, and for merging the environmental information of the robots according to preset merging conditions based on that result to generate merged map information. The control module is used for calculating the minimum viewpoint set of each target robot according to a preset coverage standard and a dynamic viewpoint rewarding mechanism, constructing a global route map based on the merged map information and the minimum viewpoint sets, and calculating the communication cost of each of several preset communication conditions using the global route map, so as to control each target robot to execute the corresponding travelling operation according to the communication cost. The application integrates the advantages of the laser radars on multiple vehicles, overcomes the limitations of each single robot and of traditional methods through efficient map merging and communication navigation algorithms, and provides high-precision, high-reliability map construction and navigation solutions in various complex environments.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
memory 901, processor 902, and a computer program stored on memory 901 and executable on processor 902.
The processor 902 implements the unmanned autonomous laser movement measurement method provided in the above-described embodiment when executing a program.
Further, the electronic device further includes:
A communication interface 903 for communication between the memory 901 and the processor 902.
Memory 901 for storing a computer program executable on processor 902.
Memory 901 may comprise high-speed RAM memory, and may also include non-volatile memory, such as at least one magnetic disk memory.
If the memory 901, the processor 902, and the communication interface 903 are implemented independently, the communication interface 903, the memory 901, and the processor 902 may be connected to each other through a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, only one thick line is shown in fig. 9, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 901, the processor 902, and the communication interface 903 are integrated on a chip, the memory 901, the processor 902, and the communication interface 903 may communicate with each other through internal interfaces.
The processor 902 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the application.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, implements the unmanned autonomous laser movement measuring method as above.
The embodiment of the application also provides a computer program product, which comprises a computer program, wherein the computer program is used for realizing the unmanned autonomous laser movement measuring method when being executed.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method description in the flow charts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Moreover, the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example, an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber device, and a portable Compact Disc Read-Only Memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware as in another embodiment, it may be implemented using any one or combination of techniques known in the art, discrete logic circuits with logic gates for implementing logic functions on data signals, application specific integrated circuits with appropriate combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (7)

1. An unmanned autonomous laser mobile measurement method, characterized in that it comprises the following steps:
acquiring environmental information of at least two target robots, and determining a viewpoint set of the at least two target robots;
evaluating, based on a preset factor graph and the AutoMerge framework, the environmental information of each of the at least two target robots to obtain an evaluation result, and merging the environmental information of the at least two target robots according to a preset merging condition based on the evaluation result, to generate merged map information;
calculating, based on the viewpoint set, a minimum viewpoint set of each of the at least two target robots according to a preset coverage standard and a dynamic viewpoint reward mechanism; constructing a global roadmap based on the merged map information and the minimum viewpoint set; and calculating, using the global roadmap, a communication cost of each of multiple preset communication conditions, so as to control each target robot to perform a corresponding travel operation according to the communication cost;
wherein evaluating, based on the preset factor graph and the AutoMerge framework, the environmental information of each of the at least two target robots to obtain the evaluation result, and merging the environmental information of the at least two target robots according to the preset merging condition based on the evaluation result, to generate the merged map information, comprises:
constructing the factor graph according to the environmental information of each target robot and the internal connection mode between the target robots;
obtaining the overlap length between the environmental information of the target robots, and extracting feature descriptors according to the overlap length;
performing, based on the AutoMerge framework, the feature descriptors and the factor graph, a matching operation on the environmental information of each target robot to generate a matching sequence pair corresponding to each target robot, and validating each matching sequence in the matching sequence pair to obtain a verification result for each matching sequence;
determining whether the verification result meets a preset validity requirement, and if not, removing the corresponding matching sequence;
if the verification result meets the preset validity requirement, calculating the internal connection score of the matching sequence pair, and determining whether the internal connection score is greater than a preset score threshold;
if the internal connection score is greater than the preset score threshold, merging the environmental information of the target robots to obtain merged sub-map information;
if the internal connection score is not greater than the preset score threshold, calculating the merging distance of the matching sequence pair, and performing an active merging operation on the matching sequence pair according to the merging distance to obtain an active merging result;
traversing, based on the active merging result or the merged sub-map information, the environmental information of each target robot until a preset exploration requirement is met, to generate the merged map information;
wherein calculating, based on the viewpoint set, the minimum viewpoint set of each of the at least two target robots according to the preset coverage standard and the dynamic viewpoint reward mechanism comprises:
determining a coverage surface point standard and a local range space of a target sensor;
performing a viewpoint reward comparison operation based on the coverage surface point standard and the local range space, to obtain a comparison result;
selecting a spatial initial viewpoint according to the comparison result, and calculating the uncovered surface of the spatial initial viewpoint, so as to select subsequent viewpoints through the uncovered surface;
dynamically adjusting subsequent viewpoint rewards, and repeating the viewpoint reward comparison operation according to the dynamically adjusted subsequent viewpoint rewards, to generate the minimum viewpoint set;
wherein the mathematical expression of the coverage surface point standard is:
|p_s - p_v| ≤ D
where p_s is a surface patch in the exploration space; p_v denotes the position corresponding to the viewpoint of the on-board sensor; n_s is the normal pointing to the free-space side; the viewpoint v is the center point on the surface patch; and D and T are two constants.
2. The method according to claim 1, characterized in that constructing the global roadmap based on the merged map information and the minimum viewpoint set comprises:
constructing a space subdivision model, dividing the target space into multiple sub-target spaces according to the space subdivision model, and marking the multiple sub-target spaces;
obtaining candidate viewpoints based on the marked sub-target spaces, the minimum viewpoint set, a preset resolution and a preset reuse strategy, and constructing the global roadmap according to the candidate viewpoints and the historical motion trajectory of each target robot.
3. The method according to claim 2, characterized in that calculating, using the global roadmap, the communication cost of each of multiple preset communication conditions, so as to control each target robot to perform the corresponding travel operation according to the communication cost, comprises:
determining the environmental knowledge information of each target robot, and building an information synchronization model according to the environmental knowledge information;
when a target robot i and a target robot j are beyond a preset communication range, performing, according to the information synchronization model, a no-communication evaluation, a half-communication evaluation and a hypothetical-communication evaluation on the target robot i, and calculating the global paths and communication costs corresponding to the no-communication evaluation, the half-communication evaluation and the hypothetical-communication evaluation respectively, where i and j are positive integers;
calculating, according to the global path and the communication cost, the tracking cost of the target robot i tracking the target robot j, and constructing a prescribed chase-cost comparison model based on the tracking cost and the communication cost;
when M target robots are within the communication range of N robots, selecting a communication target from the N-M target robots using a preset random sampling strategy, where M and N are positive integers and M < N;
determining the tracking routes of the M target robots tracking the communication target, and obtaining the tracking results of the M target robots, so as to control each target robot to perform the corresponding travel operation based on the tracking routes and the tracking results, in combination with a preset chase decision and communication strategy.
4. An unmanned autonomous laser mobile measurement apparatus, characterized in that it comprises:
An unmanned autonomous laser mobile measurement device, characterized in that it includes: 获取模块,用于获取至少两个目标机器人的环境信息,并确定所述至少两个目标机器人的视点集合;An acquisition module, used to acquire environmental information of at least two target robots and determine a viewpoint set of the at least two target robots; 评估模块,用于基于预设的因子图和AutoMerge框架,评估所述至少两个目标机器人中每个目标机器人的环境信息,得到评估结果,并根据所述评估结果按照预设合并条件合并所述至少两个目标机器人的环境信息,以生成合并地图信息;An evaluation module, configured to evaluate the environmental information of each of the at least two target robots based on a preset factor graph and an AutoMerge framework to obtain an evaluation result, and merge the environmental information of the at least two target robots according to a preset merging condition based on the evaluation result to generate merged map information; 控制模块,用于基于所述视点集合,按照预设的覆盖标准和动态视点奖励机制计算所述至少两个目标机器人中每个目标机器人的最小视点集合,基于所述合并地图信息和所述最小视点集合,构建全局路线图,并利用所述全局路线图计算多种预设通信条件下每种预设通信条件的通信成本,以根据所述通信成本控制所述每个目标机器人执行相应的行进操作;A control module, configured to calculate a minimum viewpoint set of each of the at least two target robots based on the viewpoint set according to a preset coverage standard and a dynamic viewpoint reward mechanism, construct a global roadmap based on the merged map information and the minimum viewpoint set, and calculate a communication cost of each preset communication condition under multiple preset communication conditions using the global roadmap, so as to control each target robot to perform a corresponding travel operation according to the communication cost; 其中,所述评估模块包括:Wherein, the evaluation module comprises: 第一构建单元,用于根据所述每个目标机器人的环境信息和所述每个目标机器人之间的内连接方式构建所述因子图;A first construction unit, configured to construct the factor graph according to the environmental information of each target robot and the internal connection mode between each target robot; 提取单元,用于获取所述每个目标机器人环境信息之间的重叠长度,并根据所述重叠长度提取特征描述符;An extraction unit, used for obtaining the overlap length between the environment information of each target robot, and extracting a 
feature descriptor according to the overlap length; 匹配单元,用于基于所述AutoMerge框架、所述特征描述符和所述因子图,对所述每个目标机器人的环境信息执行匹配操作,生成所述每个目标机器人对应的匹配序列对,并对所述匹配序列对中的每个匹配序列进行有效性验证,以得到每个匹配序列的验证结果;A matching unit, configured to perform a matching operation on the environmental information of each target robot based on the AutoMerge framework, the feature descriptor and the factor graph, generate a matching sequence pair corresponding to each target robot, and perform validity verification on each matching sequence in the matching sequence pair to obtain a verification result for each matching sequence; 第一判断单元,用于判断所述验证结果是否满足预设有效性要求,如果所述验证结果不满足所述预设有效性要求,则移除对应的匹配序列;A first judging unit, configured to judge whether the verification result meets a preset validity requirement, and if the verification result does not meet the preset validity requirement, remove the corresponding matching sequence; 第二判断单元,用于如果所述验证结果满足所述预设有效性要求,则计算所述匹配序列对的内连接评分,并判断所述内连接评分是否大于预设评分阈值;A second judgment unit, configured to calculate an inner connection score of the matching sequence pair if the verification result meets the preset validity requirement, and judge whether the inner connection score is greater than a preset score threshold; 第一合并单元,用于如果所述内连接评分大于所述预设评分阈值,则合并所述每个目标机器人的环境信息,以得到合并子地图信息;A first merging unit, configured to merge the environment information of each target robot to obtain merged sub-map information if the inner connection score is greater than the preset score threshold; 第二合并单元,用于如果所述内连接评分不大于所述预设评分阈值,则计算所述匹配序列对的合并距离,并根据所述合并距离对所述匹配序列对执行主动合并操作,以得到主动合并结果;a second merging unit, configured to calculate a merging distance of the matching sequence pair if the inner connection score is not greater than the preset score threshold, and perform an active merging operation on the matching sequence pair according to the merging distance to obtain an active merging result; 遍历单元,用于基于所述主动合并结果或所述合并子地图信息,遍历所述每个目标机器人的环境信息,直至达到预设探索要求为止,以生成所述合并地图信息;A traversal unit, configured to traverse the environment information of 
each target robot based on the active merging result or the merged sub-map information until a preset exploration requirement is met, so as to generate the merged map information; 所述控制模块包括:The control module comprises: 确定单元,用于确定目标传感器的覆盖表面点标准和局部范围空间;a determination unit for determining a coverage surface point standard and a local range space of a target sensor; 执行单元,用于基于所述覆盖表面点标准和所述局部范围空间,执行视点奖励比较操作,以得到比较结果;an execution unit, configured to execute a viewpoint reward comparison operation based on the coverage surface point standard and the local range space to obtain a comparison result; 第一计算单元,用于根据所述比较结果选取空间初始视点,并计算所述空间初始视点的未覆盖表面,以通过所述未覆盖表面选取后续视点;A first calculation unit, configured to select an initial viewpoint of the space according to the comparison result, and calculate an uncovered surface of the initial viewpoint of the space, so as to select a subsequent viewpoint through the uncovered surface; 调整单元,用于动态调整后续视点奖励,并根据动态调整后的后续视点奖励重复执行视点奖励比较操作,以生成所述最小视点集合;an adjusting unit, configured to dynamically adjust subsequent viewpoint rewards, and repeatedly perform the viewpoint reward comparison operation according to the dynamically adjusted subsequent viewpoint rewards, so as to generate the minimum viewpoint set; 其中,所述覆盖表面点标准的数学表达式为:The mathematical expression of the coverage surface point standard is: |ps-pv|≤D| ps - pv |≤D 其中,ps为探索空间中的表面贴片;pv表示车载传感器视点对应的位置;ns为指向自由空间侧的法线;视点v是表面贴片上的中心点,D和T为两个常数。Among them, ps is the surface patch in the exploration space; pv represents the position corresponding to the viewpoint of the on-board sensor; ns is the normal pointing to the free space side; the viewpoint v is the center point on the surface patch, and D and T are two constants. 5.一种电子设备,其特征在于,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述程序,以实现如权利要求1-3任一项所述的无人自主激光移动测量方法。5. 
An electronic device, characterized in that it comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the unmanned autonomous laser mobile measurement method as described in any one of claims 1 to 3. 6.一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行,以用于实现如权利要求1-3任一项所述的无人自主激光移动测量方法。6. A computer-readable storage medium having a computer program stored thereon, wherein the program is executed by a processor to implement the unmanned autonomous laser mobile measurement method as described in any one of claims 1 to 3. 7.一种计算机程序产品,包括计算机程序,其特征在于,所述计算机程序被执行,以用于实现如权利要求1-3任一项所述的无人自主激光移动测量方法。7. A computer program product, comprising a computer program, characterized in that the computer program is executed to implement the unmanned autonomous laser mobile measurement method according to any one of claims 1 to 3.
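Claim 1's map-merge decision reads as a filter-then-branch loop over candidate matching sequence pairs: invalid sequences are dropped, high-scoring pairs are merged directly, and low-scoring pairs fall back to the distance-driven active merge. The sketch below is purely illustrative; the helper callables (`validate`, `inner_connection_score`, `merge_submaps`, `active_merge`) and both thresholds are assumptions, not values disclosed in the patent.

```python
SCORE_THRESHOLD = 0.8   # assumed stand-in for the preset score threshold
VALIDITY_MIN = 0.5      # assumed stand-in for the preset validity requirement

def process_matching_pairs(pairs, validate, inner_connection_score,
                           merge_submaps, active_merge):
    """Validate each matching sequence pair, then merge directly when the
    inner-connection score clears the threshold, otherwise fall back to
    the active merge (all helpers are caller-supplied placeholders)."""
    results = []
    for pair in pairs:
        # Invalid matching sequences are removed (claim 1, first branch).
        if validate(pair) < VALIDITY_MIN:
            continue
        if inner_connection_score(pair) > SCORE_THRESHOLD:
            results.append(("merged", merge_submaps(pair)))
        else:
            results.append(("active", active_merge(pair)))
    return results
```

The loop would then be repeated over each robot's environment information until the preset exploration requirement is met.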
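The minimum-viewpoint-set step (coverage criterion plus dynamic viewpoint reward) resembles greedy set cover: each round picks the candidate viewpoint covering the most still-uncovered surface patches, and rewards are re-evaluated after every pick. Only the distance half of the criterion, |p_s − p_v| ≤ D, appears in the claim text, so this hedged sketch checks distance alone; the value of D and the reward function are illustrative assumptions.

```python
import math

D = 5.0  # assumed value for the coverage distance constant D

def covers(viewpoint, patch, d_max=D):
    # Coverage surface point criterion as given: |p_s - p_v| <= D.
    return math.dist(patch, viewpoint) <= d_max

def minimum_viewpoint_set(candidates, patches):
    """Greedy selection with a dynamically adjusted reward: a viewpoint's
    reward is the number of patches it covers that no already-selected
    viewpoint covers, so rewards shrink as coverage grows."""
    uncovered = set(patches)
    selected = []
    while uncovered:
        best = max(candidates,
                   key=lambda v: sum(covers(v, p) for p in uncovered))
        gained = {p for p in uncovered if covers(best, p)}
        if not gained:          # remaining patches unreachable from any candidate
            break
        selected.append(best)
        uncovered -= gained     # dynamic reward update for the next round
    return selected
```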
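Claim 2's space subdivision model can be pictured as a labeled grid at the preset resolution, with one candidate viewpoint kept per labeled sub-target space. The grid labeling, the 2-D restriction, and the "prefer reusable viewpoints" rule standing in for the preset reuse strategy are all assumptions made for illustration.

```python
def subdivide(bounds, resolution):
    """Split a 2-D target space into labeled sub-target spaces.
    bounds = (xmin, ymin, xmax, ymax); returns {label: cell bounds}."""
    xmin, ymin, xmax, ymax = bounds
    cells, label = {}, 0
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            cells[label] = (x, y, min(x + resolution, xmax),
                            min(y + resolution, ymax))
            label += 1
            x += resolution
        y += resolution
    return cells

def candidate_viewpoints(cells, viewpoint_set, reuse):
    """Keep one candidate viewpoint per labeled cell, preferring members of
    `reuse` (a stand-in for the preset reuse strategy)."""
    out = {}
    for label, (x0, y0, x1, y1) in cells.items():
        inside = [v for v in viewpoint_set
                  if x0 <= v[0] < x1 and y0 <= v[1] < y1]
        reusable = [v for v in inside if v in reuse]
        if inside:
            out[label] = (reusable or inside)[0]
    return out
```

The global roadmap would then connect these per-cell candidates with each robot's historical motion trajectory.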
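Claim 3's out-of-range decision can be sketched as a cost comparison: compute a total cost under each of the three evaluations (no communication, semi-communication, hypothetical communication) and pursue the other robot only when catching up is cheaper than the cheapest stay-the-course option. The additive cost model and the dictionary keys are assumptions; the patent does not specify how the prescribed pursuit-cost comparison model combines its terms.

```python
def choose_action(path_cost, comm_cost, pursuit_cost):
    """path_cost and comm_cost map each evaluation condition ('none',
    'semi', 'hypothetical') to a cost; returns 'pursue' or the cheapest
    communication condition to continue under."""
    # Total cost of continuing alone under each assumed condition.
    totals = {k: path_cost[k] + comm_cost[k] for k in path_cost}
    best = min(totals, key=totals.get)
    # The prescribed pursuit-cost comparison, reduced to a single inequality.
    return "pursue" if pursuit_cost < totals[best] else best
```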
CN202411033480.1A 2024-07-30 2024-07-30 Unmanned autonomous laser movement measurement method and device Active CN119064951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411033480.1A CN119064951B (en) 2024-07-30 2024-07-30 Unmanned autonomous laser movement measurement method and device


Publications (2)

Publication Number Publication Date
CN119064951A CN119064951A (en) 2024-12-03
CN119064951B true CN119064951B (en) 2025-05-16

Family

ID=93637920


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109945871A (en) * 2019-03-15 2019-06-28 中山大学 A method for simultaneous positioning and map construction of multiple unmanned platforms under the condition of limited communication bandwidth and distance
CN115200588A (en) * 2022-09-14 2022-10-18 煤炭科学研究总院有限公司 SLAM autonomous navigation method and device for mobile robot

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN107567036B (en) * 2017-09-30 2023-07-04 山东大学 SLAM system and method based on wireless self-organizing local area network of robot search and rescue environment
CN115512054B (en) * 2022-09-06 2024-07-12 武汉大学 Method for constructing three-dimensional space-time continuous point cloud map
CN116295337A (en) * 2023-02-28 2023-06-23 武汉大学 Machine location map generation and machine ubiquitous location method for autonomous driving
CN118111443A (en) * 2024-03-08 2024-05-31 江淮前沿技术协同创新中心 Decentralized distributed collaborative positioning method for drone swarms



Similar Documents

Publication Publication Date Title
Mahdoui et al. Communicating multi-uav system for cooperative slam-based exploration
CN106406338B (en) Autonomous navigation device and method of omnidirectional mobile robot based on laser range finder
CN116576857A (en) A multi-obstacle predictive navigation and obstacle avoidance method based on single-line lidar
WO2024016457A1 (en) Heterogeneous multi-agent networking cooperative scheduling planning method based on autonomous obstacle bypassing
CN111006667B (en) Automatic driving track generation system under high-speed scene
CN110825112B (en) Oil field dynamic invasion target tracking system and method based on multiple unmanned aerial vehicles
CN119289972A (en) A collaborative autonomous perception and navigation method in a denied environment based on reinforcement learning
CN117389305A (en) A method, system, equipment and medium for UAV inspection path planning
CN119356362B (en) A real-time obstacle avoidance method and device for unmanned aerial vehicle in dynamic environment based on vision-assisted B-spline trajectory optimization
TW201819225A (en) Vehicle control system and vehicle control method
Gao et al. Localization of mobile robot based on multi-sensor fusion
Gokhale et al. FEEL: Fast, energy-efficient localization for autonomous indoor vehicles
CN120482088A (en) Small-sized carrier for carrying people and logistics and unmanned intelligent driving system thereof
Zhao et al. A review of multi-robot collaborative simultaneous localization and mapping
CN118484001A (en) Collision detection method in unstructured environment with dense obstacles based on ray tracing
Chen et al. An Intelligent Navigation Control Approach for Autonomous Unmanned Vehicles via Deep Learning-Enhanced Visual SLAM Framework
CN117707177A (en) Dynamic multi-vehicle cooperative path planning decision system and method based on space algorithm
CN119064951B (en) Unmanned autonomous laser movement measurement method and device
CN115344049B (en) Automatic path planning and vehicle control method and device for passenger boarding vehicle
CN120298587A (en) Multi-machine distributed collaborative mapping method and system based on laser radar and IMU
Li et al. U2AD: A UAV-Assisted Autonomous Driving Framework for Enhancing Vehicle Risk Perception and Decision-Making Capabilities
Yuksek et al. Federated meta learning for visual navigation in GPS-denied urban airspace
CN115145312A (en) Cluster cooperative control method based on double-leader mode under only azimuth measurement
CN119618199B (en) Active SLAM system, method and equipment based on laser inertial navigation fusion
CN118506600B (en) Vehicle control method, device, storage medium and electronic terminal for parking lot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant