Detailed Description
The invention is described in detail below with reference to the drawings.
According to an embodiment of the invention, a cloud server is provided.
Fig. 1 is a block diagram of a cloud server according to an embodiment of the present invention. As shown in fig. 1, the cloud server comprises: a state group management module 10, configured to manage the state information of the robots in each group according to the robot group identifier bound by a user or a scene; a map group management module 11, configured to group service maps according to the scene identifier bound by robot position information and to manage the service maps in each group in a group management mode, wherein each map group corresponds to at least one scene and each scene corresponds to a plurality of service maps; a map splicing module 12, connected with the map group management module 11 and configured to match and splice the plurality of service maps in the same scene to obtain a first spliced map of that scene, and/or to match and splice the service maps in the same map group to obtain a second spliced map of that map group; and a first interactive communication module 13, respectively connected with the state group management module 10, the map group management module 11 and the map splicing module 12, and configured to synchronize the first map and/or the second map and the state information of the robots to at least one robot, so as to realize multi-machine collaborative operation of the at least one robot.
With the cloud server shown in fig. 1, the cloud server manages the state information of the robots in each group according to the robot group identifiers bound by users or scenes, groups the service maps according to the scene identifiers bound by robot position information, manages the service maps in each group in a group management mode, and matches and splices the plurality of service maps in the same scene and/or in the same map group; scene mapping is thus carried out iteratively as the multiple robots loop through their runs, and full scene coverage of the map is gradually achieved. Because the multiple independent robots operate on the spliced map, the robots coordinate well, work efficiently, and adapt readily to different scenes.
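To make the two kinds of grouping concrete, the following minimal Python sketch shows one possible bookkeeping layout: state groups keyed by the robot group identifier and map groups keyed by the scene identifier. All names and fields here are illustrative assumptions, not taken from the embodiment.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RobotState:
    robot_id: str      # machine code of the reporting robot (illustrative field names)
    pose: tuple        # (x, y, theta) on the current service map
    battery: float     # remaining battery level
    service: str       # current service status

# state groups: robot group identifier -> {machine code -> latest RobotState}
state_groups = defaultdict(dict)
# map groups: scene identifier -> {map identification -> service map}
map_groups = defaultdict(dict)
```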
Preferably, the first interactive communication module 13 is further configured to receive the state information reported by a robot in real time together with the robot group identifier corresponding to that state information, and to receive a service map established or updated by the robot together with the scene identifier corresponding to that service map; the state group management module 10 is further configured to group and manage the state information according to the robot group identifier corresponding to the state information reported in real time; and the map group management module 11 is further configured to group and manage the service map according to the scene identifier corresponding to the established or updated service map.
In a scene where a plurality of robots cooperate, the cloud server receives the state information reported in real time by each robot (for example, service information, pose information, speed information, sensor working state, robot information related to battery power, and the like) together with the robot group identifier corresponding to that state information, and groups and manages the state information according to that identifier. The cloud server also receives each service map established by a robot (whether established for the first time or updated during subsequent operation) together with the scene identifier corresponding to that service map, and groups and manages the service maps according to that identifier. The state information and service maps maintained by the cloud server can therefore either be acquired in advance or be uploaded and updated dynamically in real time from the edge robots during actual service, giving good real-time performance and strong scene adaptability.
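As a rough sketch of this routing step, assuming the group stores introduced above and a hypothetical map identification field carried with each upload, the server-side handlers might look like this:

```python
def handle_state_report(state_groups: dict, group_id: str, robot_id: str, state: dict) -> None:
    """Group and manage real-time state information by its robot group identifier."""
    state_groups.setdefault(group_id, {})[robot_id] = state

def handle_map_upload(map_groups: dict, scene_id: str, map_id: str, service_map: dict) -> None:
    """Group and manage an established or updated service map by its scene identifier;
    an upload carrying an existing map identification overwrites the older copy."""
    map_groups.setdefault(scene_id, {})[map_id] = service_map
```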
In a preferred implementation, the cloud server and the plurality of independent robots form the framework of the multi-machine management system: each single robot serves as an independent edge node, the cloud server serves as the central node, and the cloud server and the plurality of robots are connected through a star network topology.
Each robot may be provided with independent identification information (e.g., a code, denoted as the machine code). The scene identifier may be set based on the position information of the robots in the service scene; for example, each robot may be assigned an initial start position in the service scene, and that position is encoded based on man-made or natural features and recorded as a scene code.
The map created by a robot may be bound to a scene identifier (e.g., the scene code); that is, a single scene code may correspond to multiple maps, and the maps are not bound to any particular robot. The maps are mounted under the corresponding group according to the scene code and managed in each group in a group management mode; such a group is denoted as a map group.
All robots belonging to a single user or to a single scene may form a robot group, which is given its own identifier (e.g., a code, denoted as the robot group code). All robots of a single user or scene are mounted under the corresponding robot group and managed in units of groups; the running states of the robots managed on this group basis are denoted as a state group.
A single robot group contains at least one robot, and the maximum number of robots in a group is determined by server performance. The robot groups and the cloud server form the multi-machine management system through a star network topology; the robots in each robot group are managed in a unified way, and the robot groups are isolated from one another. One user may correspond to multiple robot groups. A group may contain robots of different sizes and types, provided that these robots adopt uniform data standards and formats.
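A minimal sketch of these group constraints follows; the class name, the descriptor fields, and the capacity figure are illustrative assumptions rather than values from the embodiment.

```python
class RobotGroup:
    """One robot group: at least one robot, capacity bounded by server performance."""

    def __init__(self, group_id: str, max_robots: int = 50):   # 50 is an arbitrary example bound
        self.group_id = group_id
        self.max_robots = max_robots
        self.robots = {}                                        # machine code -> robot descriptor

    def register(self, robot_id: str, descriptor: dict) -> None:
        if len(self.robots) >= self.max_robots:
            raise RuntimeError("group is at the capacity allowed by server performance")
        # robots of different sizes and types are allowed, but must share a data format
        required = {"size", "type", "data_format"}
        if not required.issubset(descriptor):
            raise ValueError(f"descriptor must contain {required}")
        self.robots[robot_id] = descriptor
```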
The service map includes, but is not limited to, a scene map, a robot motion trajectory bound into the scene map on the basis of real-time coordinates, and the service operation information corresponding to each of at least one target point on the motion trajectory.
In a preferred implementation, one or more edge robots in a robot group start from their initial positions and are started based on the scene code. Each robot independently builds a map of the environment and performs service operations, binds data such as its motion trajectory and service operations to its scene map on the basis of real-time coordinates to obtain a service map (comprising the scene map, the motion trajectory, the service operation information, and so on), and binds the service map to the scene code.
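The bundle just described can be pictured with the following illustrative data structure; all field names (map_id, grid, updated_at, and so on) are assumptions made for this sketch and reused by the later sketches in this description.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TargetPoint:
    coord: Tuple[float, float]      # real-time coordinates on the scene map
    operation: str                  # service operation bound to this point

@dataclass
class ServiceMap:
    scene_code: str                 # scene identifier the map is bound to
    map_id: str                     # identification of this map (illustrative field)
    grid: list                      # occupancy grid of the scene map
    trajectory: List[Tuple[float, float]] = field(default_factory=list)
    target_points: List[TargetPoint] = field(default_factory=list)
    updated_at: float = 0.0         # timestamp used later when reconciling versions
```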
During operation based on the service map, the service map may, depending on service requirements, be configured either to be continuously updated as the scene changes or to remain unchanged. When updating according to scene changes is selected, the robot must, after its work is completed, upload the updated map to the server to replace the original, un-updated service map.
Preferably, as shown in fig. 2, the map splicing module 12 may further include: a matching sub-module 120, configured to perform pairwise matching of the plurality of service maps in the same scene and determine the local relative relation of each pair until a first global relative relation of the plurality of service maps in the same scene is acquired, and/or to perform pairwise matching of the plurality of service maps in the same map group and determine the local relative relation of each pair until a second global relative relation of the plurality of service maps in the same map group is acquired; and a splicing sub-module 122, configured to splice the plurality of service maps in the same scene according to the first global relative relation to obtain the first map, and/or to splice the service maps in the same map group according to the second global relative relation to obtain the second map, wherein, in the process of splicing the plurality of service maps, the splicing sub-module splices the robot motion trajectories bound in the scene maps on the basis of real-time coordinates and integrates the service operation information corresponding to each of at least one target point on the motion trajectories.
In a preferred implementation, the cloud server matches the service maps under the same scene code and/or the service maps in the same map group, and splices the successfully matched service maps according to the matching result. In the course of splicing, the motion trajectories bound on the plurality of service maps are spliced, and the service operation information corresponding to each of at least one target point bound on those trajectories is integrated, so that a spliced map under the same scene code and/or a spliced map of the same map group is formed. When a robot in the multi-machine management system starts up, it can request the spliced service map from the cloud server, so that a single map covers the whole service scene. The plurality of independent robots in the same service scene are positioned on the spliced map, so each robot can accurately obtain its pose on the current map and, by combining the interaction information exchanged with the cloud server, also obtain the real-time poses of the other robots; the multiple robots thus achieve multi-machine collaborative work based on pose coordination.
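Under strong simplifying assumptions (pure 2-D translations, no rotation, and the illustrative ServiceMap structure sketched above), the splicing flow can be pictured as chaining pairwise matches into a global relative relation and then merging trajectories and target points. The match_pair callable stands for the Fourier-domain matching routine described in the next paragraph.

```python
import numpy as np

def splice_scene(maps, match_pair):
    """Chain pairwise matches into a global relative relation, then splice.

    maps       : ServiceMap objects (see the earlier sketch) built under one scene code
    match_pair : callable returning the (dx, dy) offset of maps[i+1] relative to maps[i]
    """
    # local relative relations between consecutive pairs of service maps
    offsets = [(0.0, 0.0)]
    for a, b in zip(maps, maps[1:]):
        offsets.append(match_pair(a.grid, b.grid))

    # accumulate into the global relative relation: offset of every map vs. the first one
    global_offsets = np.cumsum(np.asarray(offsets, dtype=float), axis=0)

    # splice trajectories and integrate target-point operations in the frame of the first map
    trajectory, points = [], []
    for m, (dx, dy) in zip(maps, global_offsets):
        trajectory += [(x + dx, y + dy) for x, y in m.trajectory]
        points += [((p.coord[0] + dx, p.coord[1] + dy), p.operation) for p in m.target_points]
    return global_offsets, trajectory, points
```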
In a specific implementation, a robot perceives external information through its sensors during operation. Multiple kinds of sensors may be used: infrared sensors, ultrasonic sensors, collision sensors, and the like detect obstacles at short range, while a laser radar, a monocular or binocular camera, structured light, or a TOF sensor detects obstacles at longer range. Because the detection ranges of the laser radar and the camera are comparatively large, the robot usually relies mainly on the data of these two sensors when building a map, although data from the other sensors can also be used to build the scene map, depending on the robot's functional platform. The data such as the robot's motion trajectory and service operations are bound to its scene map on the basis of real-time coordinates to obtain the service map (comprising the scene map, the motion trajectory, the service operations, and so on). Pairwise matching is then performed on the service maps independently built by the robots. For example, a first service map and a second service map are extracted, and a first two-dimensional matrix corresponding to the first service map and a second two-dimensional matrix corresponding to the second service map are obtained. Fourier transforms are applied to the first and second two-dimensional matrices to generate a first amplitude matrix corresponding to the first service map and a second amplitude matrix corresponding to the second service map. The first and second amplitude matrices are then processed with a phase correlation method to generate a pulse function representing the translation and rotation between the first and second service maps, and the relative transformation between the two maps is obtained from the coordinates of the peak of the pulse function. Proceeding in this way, the global relative relation of the plurality of service maps in the same scene or the same map group is obtained.
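The Fourier-domain matching step can be illustrated with the standard phase-correlation computation, sketched below for the translation component only; recovering the rotation as well would additionally require phase-correlating a log-polar resampling of the two amplitude spectra, which is omitted here. The function and variable names are illustrative.

```python
import numpy as np

def phase_correlation(map_a: np.ndarray, map_b: np.ndarray):
    """Estimate the translation between two equally sized occupancy-grid matrices.

    The inverse FFT of the normalized cross-power spectrum approximates an impulse
    (the "pulse function" of the description) whose peak location gives the relative
    shift between the two maps, up to the chosen sign convention.
    """
    fa = np.fft.fft2(map_a)
    fb = np.fft.fft2(map_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12          # normalize, avoiding division by zero
    impulse = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(impulse), impulse.shape)
    # wrap shifts larger than half the map size to negative offsets
    if dy > map_a.shape[0] // 2:
        dy -= map_a.shape[0]
    if dx > map_a.shape[1] // 2:
        dx -= map_a.shape[1]
    return dx, dy
```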
Preferably, as shown in fig. 2, the cloud server may further include: a service management module 14, connected to the first interactive communication module 13, for managing and distributing the services corresponding to service demands (e.g., deciding which robot or robots handle a current service order) and for determining the processing policy of a robot under special conditions (e.g., when there is no service order or a robot is low on power); a planning module 15, connected to the first interactive communication module 13 and the map splicing module 12, for executing the routing policy so that a robot reaches the corresponding target point along a planned optimal path, for managing the sequence of at least one target point on the robot's motion trajectory (e.g., adding, deleting, or modifying the attribute settings of target points), and for managing the service operation information corresponding to each of the at least one target point (e.g., binding a target point to its corresponding service operation information); and a scheduling management module 16, connected to the first interactive communication module 13 and the map splicing module 12, for formulating and implementing a scheduling scheme (e.g., task allocation among robots and collision avoidance between robots), monitoring the state of each of the at least one target point (e.g., whether a target point is occupied by another object or robot), and authorizing, from among the at least one target point, the target points accessible to a robot.
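One way of picturing the occupancy-monitoring and authorization duties of the scheduling management module is the sketch below; the class name, method names, and reservation rule are all illustrative assumptions.

```python
class SchedulingManager:
    """Monitors target point occupancy and authorizes accessible points to robots."""

    def __init__(self, target_points):
        # target point id -> occupant ("" means free); occupancy may come from
        # robot state reports or from obstacle detection at the point
        self.occupancy = {tp: "" for tp in target_points}

    def update_occupancy(self, target_point: str, occupant: str) -> None:
        self.occupancy[target_point] = occupant

    def authorize(self, robot_id: str, requested: list) -> list:
        """Return the subset of requested target points the robot may visit."""
        granted = []
        for tp in requested:
            if self.occupancy.get(tp) == "":
                self.occupancy[tp] = robot_id     # reserve the point to avoid collisions
                granted.append(tp)
        return granted
```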
Preferably, as shown in fig. 2, the cloud server may further include: a requirement docking module 17, connected to the service management module 14, for interfacing with the service demand side to receive service requirements; and a man-machine interaction module 18, connected to the service management module 14, for receiving user instructions through a man-machine interaction interface (UI) and presenting the service information related to those instructions.
According to an embodiment of the invention, a robot is also provided.
Fig. 3 is a block diagram of a robot according to an embodiment of the present invention. As shown in fig. 3, the robot includes: a map management module 30, configured to bind, when the robot builds a map of the environment and performs service operations starting from the initial point position, the robot's motion trajectory and the service operation information corresponding to each of at least one target point on that trajectory into the established scene map to obtain the service map; a second interactive communication module 31, configured to report to the cloud server the state information and the robot group identifier corresponding to that state information, as well as the service map established by the robot and the scene identifier corresponding to that service map, and to receive from the cloud server a first map obtained by matching and splicing a plurality of service maps in the same scene and/or a second map obtained by matching and splicing the service maps in the same map group, wherein the state information includes the pose information and service information of the robot; and a collaborative operation module 32, configured to perform positioning based on the first map and/or the second map and, based on the interaction information with the cloud server, obtain the pose information and service information of the other robots involved in the current service, so as to realize multi-machine collaborative operation with those robots.
With the robot shown in fig. 3, when the robot starts from the initial point position and performs environment mapping and service operation, it binds its motion trajectory and the service operation information corresponding to each of at least one target point on that trajectory into the established scene map to obtain the service map; the cloud server splices maps based on the maps established by the robots, and scene mapping proceeds iteratively as the multiple robots loop through their runs, so that full scene coverage of the map is gradually achieved. Because the multiple independent robots operate on the spliced map, the robots coordinate well, work efficiently, and adapt readily to different scenes.
In a preferred implementation, when a robot works (for example, for the first time), one or more edge robots in the robot group start from their initial positions based on the scene identifier (e.g., the scene code), independently build a scene map and perform service operations, bind data such as the motion trajectory and service operations to the robot's scene map on the basis of real-time coordinates to obtain a service map (comprising the scene map, the motion trajectory, the service operation information, and so on), and bind the service map to the scene code. After a robot finishes its work, it uploads the established service map and the corresponding scene code to the cloud server; the cloud server acquires the service maps and corresponding scene codes uploaded by the robots in the robot group, groups the service maps based on the scene code, and manages the service maps and corresponding scene codes in units of map groups.
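A rough edge-side counterpart of this workflow is sketched below, re-using the illustrative ServiceMap and TargetPoint structures from the earlier sketch; the robot driver interface (build_scene_map, execute_service, robot_id) and the upload callable are hypothetical placeholders for the robot's own mapping stack and communication module.

```python
import time

def run_job(robot, scene_code: str, upload):
    """One mapping-and-service run of an edge robot, bound to a scene code."""
    service_map = ServiceMap(scene_code=scene_code,
                             map_id=f"{robot.robot_id}-{scene_code}",   # illustrative naming
                             grid=robot.build_scene_map(),              # independent mapping
                             updated_at=time.time())
    for pose, operation in robot.execute_service():      # yields ((x, y), operation or None)
        service_map.trajectory.append(pose)              # bind trajectory by real-time coordinates
        if operation is not None:
            service_map.target_points.append(TargetPoint(coord=pose, operation=operation))
    upload(scene_code, service_map)                      # upload map and scene code after the work
    return service_map
```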
Preferably, the second interactive communication module 31 is further configured to send a request carrying the current scene identifier to the cloud server when the robot starts up and/or operates, and to receive the service maps corresponding to the current scene identifier fed back from the cloud server; the map management module 30 is further configured to continue using the currently locally stored service maps when they are consistent with the service maps fed back from the cloud server.
Preferably, the map management module 30 is further configured to trigger one of the following operations when the locally stored service maps are inconsistent with the service maps fed back by the cloud server:
when one or more of the service maps fed back by the cloud server are not stored locally, storing those service maps locally;
when one or more of the service maps fed back by the cloud server have the same identification information as locally stored service maps but differ in content after matching, comparing the time information of the service maps with the same identification, keeping the service map with the latest time, and deleting the other service maps with that identification;
when one or more locally stored service maps are not stored on the cloud server, sending those service maps to the cloud server.
In a preferred implementation, when the robot starts up and/or works based on the scene code, it sends the server a request carrying the current scene identifier; the cloud server receives the request and sends the service maps corresponding to that scene identifier to the robot. The robot then compares its locally stored service maps with the service maps under that scene code on the cloud server. If they are consistent, the robot does nothing and continues to use the local service maps; if they are inconsistent, the robot proceeds in one of the following three ways (a consolidated sketch follows the list):
First, if the cloud server stores certain service maps that the edge robot does not, the robot downloads and stores those service maps from the server.
Second, if the cloud server stores service maps with the same names as service maps stored by the edge robot but with different content, the latest service map is kept, whether it resides at the server or at the robot, according to its time information, and the earlier service map or maps are deleted.
Third, if some service maps stored by the edge robot are not stored on the cloud server, the robot uploads those service maps to the cloud server.
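The three reconciliation cases can be captured in one short sketch that keys both sides by map identification and compares the illustrative updated_at timestamp from the earlier ServiceMap sketch; the download and upload callables and all names are assumptions for illustration.

```python
def reconcile_maps(local: dict, remote: dict, download, upload) -> dict:
    """Reconcile locally stored service maps with the cloud copies for one scene code.

    local, remote : map identification -> ServiceMap
    download      : callable fetching a map from the cloud server by identification
    upload        : callable pushing a map to the cloud server
    """
    for map_id, remote_map in remote.items():
        if map_id not in local:
            local[map_id] = download(map_id)                       # case 1: only on the server
        elif remote_map.updated_at != local[map_id].updated_at:
            # case 2: same identification, different content; keep the latest version
            if remote_map.updated_at > local[map_id].updated_at:
                local[map_id] = download(map_id)                   # older local copy is replaced
            else:
                upload(local[map_id])                              # older server copy is replaced
    for map_id, local_map in local.items():
        if map_id not in remote:
            upload(local_map)                                      # case 3: only on the robot
    return local
```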
Preferably, the collaborative operation module 32 is further configured to convert a service requirement into service operation information, convert the processing result of the service operation information into instructions for specific modules and transmit those instructions to the corresponding modules of the robot in real time, and, when the robot reaches one or more target points, control the robot to complete the service operations corresponding to those target points.
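A minimal sketch of this dispatch step, assuming the TargetPoint structure from the earlier sketch and a hypothetical robot interface (navigate_to, current_pose, dispatch):

```python
def execute_at_target_points(robot, target_points, tolerance: float = 0.2):
    """Dispatch the bound service operation when the robot reaches each target point."""
    for tp in target_points:                             # TargetPoint objects, see earlier sketch
        robot.navigate_to(tp.coord)                      # follow the planned path to the point
        x, y = robot.current_pose()[:2]
        if abs(x - tp.coord[0]) <= tolerance and abs(y - tp.coord[1]) <= tolerance:
            # convert the operation information into an instruction for a specific on-board module
            instruction = {"operation": tp.operation, "point": tp.coord}
            robot.dispatch(instruction)
```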
According to the embodiment of the invention, a multi-machine management system is also provided.
Fig. 4 is a schematic diagram of a multi-machine management system according to an embodiment of the present invention. As shown in fig. 4, the multi-machine management system includes a cloud server 40 and a plurality of robots respectively connected to the cloud server (e.g., robot 1-1, robot 1-2, robot 2-1, and robot 2-n shown in fig. 4), wherein the robots are divided into robot groups and each robot group includes at least one robot (e.g., robot group 1 shown in fig. 4 includes robot 1-1, robot 1-2, and robot 1-n, while robot group 2 includes robot 2-1, robot 2-2, and robot 2-n). Each robot group corresponds to a user or a service scene, and the robot groups are managed in isolation from one another (e.g., robot group 1 and robot group 2 in fig. 4).
In a preferred implementation, the cloud server acts as the central node and is connected to the plurality of robots, so that the cloud server and the robots form a star network topology. Each edge robot serves as an independent edge node, and the at least one robot belonging to one user or one service scene (e.g., a delivery scene, a disinfection scene, or a cleaning scene) is mounted under the corresponding robot group according to the robot group identifier bound by that user or scene (such as a robot group in fig. 4). One user or one scene may also correspond to multiple robot groups. The robots in each robot group are managed in a unified way, and the robot groups are isolated from one another. A group may contain robots of different sizes and types, provided that these robots adopt uniform data standards and formats.
It should be noted that preferred embodiments of the combination of the modules in the cloud server and the robot of the multi-machine management system of this embodiment can be found in the description of fig. 1 to fig. 3 and are not repeated here.
According to the embodiment of the invention, a multi-machine management method based on the cloud server is also provided.
Fig. 5 is a flowchart of a multi-machine management method based on a cloud server according to an embodiment of the present invention.
As shown in fig. 5, the multi-machine management method based on the cloud server includes:
Step S501, matching and splicing a plurality of service maps in the same scene to obtain a first spliced map of that scene, and/or matching and splicing the service maps in the same map group to obtain a second spliced map of that map group;
Step S502, synchronizing the first map and/or the second map and the state information of the robots to at least one robot, so as to realize multi-machine collaborative operation of the at least one robot.
With the multi-machine management method shown in fig. 5, the plurality of service maps under the same scene and/or the same map group are matched and spliced, and scene mapping proceeds iteratively as the multiple robots loop through their runs, so that full scene coverage of the map is gradually achieved. Because the multiple independent robots operate on the spliced map, the robots coordinate well, work efficiently, and adapt readily to different scenes.
Preferably, the service map comprises a scene map, a robot motion trajectory bound into the scene map on the basis of real-time coordinates, and the service operation information corresponding to each of at least one target point on the motion trajectory;
In step S501, matching and splicing the plurality of service maps in the same scene to obtain the first spliced map of that scene, and/or matching and splicing the service maps in the same map group to obtain the second spliced map of that map group, may further include the following steps:
performing pairwise matching of the plurality of service maps in the same scene and determining the local relative relation of each pair until a first global relative relation of the plurality of service maps in the same scene is acquired, and/or performing pairwise matching of the plurality of service maps in the same map group and determining the local relative relation of each pair until a second global relative relation of the plurality of service maps in the same map group is acquired;
splicing the plurality of service maps in the same scene according to the first global relative relation to obtain the first map, and/or splicing the service maps in the same map group according to the second global relative relation to obtain the second map, wherein, in the process of splicing the plurality of service maps, the robot motion trajectories bound in the scene maps on the basis of real-time coordinates are spliced, and the service operation information corresponding to each of at least one target point on the motion trajectories is integrated.
According to an embodiment of the invention, a robot-based multi-machine management method is also provided.
Fig. 6 is a flowchart of a robot-based multi-machine management method according to an embodiment of the present invention. As shown in fig. 6, the robot-based multi-machine management method includes:
Step S601, when environment mapping and service operation are carried out from the initial point position, binding the robot motion trajectory and the service operation information corresponding to each of at least one target point on the trajectory into the established scene map to obtain the service map;
Step S602, reporting to the cloud server the state information and the robot group identifier corresponding to the state information, as well as the service map established by the robot and the scene identifier corresponding to the service map;
Step S603, receiving from the cloud server a first map obtained by matching and splicing a plurality of service maps in the same scene, and/or a second map obtained by matching and splicing the service maps in the same map group, and the state information of the robots, wherein the state information comprises the pose information and service information of the robot;
Step S604, performing positioning based on the first map and/or the second map, obtaining, based on the interaction information with the cloud server, the pose information and service information of the robots other than the current robot among all the robots involved in the current service, and realizing multi-machine collaborative operation with those robots.
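A compact sketch of step S604 under hypothetical interfaces (localize, report_state, peer_states, next_target, defer_target, navigate_to, all names invented for this example): the robot localizes on the spliced map, pulls the poses of its peers through the cloud interaction, and applies a simple illustrative coordination rule.

```python
def cooperative_step(robot, stitched_map, cloud, group_id: str):
    """One illustrative control cycle of multi-machine collaboration on the spliced map."""
    my_pose = robot.localize(stitched_map)               # positioning on the first/second map
    cloud.report_state(group_id, robot.robot_id, my_pose)
    # interaction information: poses and service info of the other robots in this service
    peers = cloud.peer_states(group_id, exclude=robot.robot_id)

    target = robot.next_target()
    if target is None:
        return

    def dist2(pose):
        # squared planar distance from a pose (x, y, ...) to the target point
        return (pose[0] - target[0]) ** 2 + (pose[1] - target[1]) ** 2

    # illustrative coordination rule: yield the target point if some peer is already closer
    if any(dist2(p["pose"]) < dist2(my_pose) for p in peers):
        robot.defer_target(target)
    else:
        robot.navigate_to(target)
```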
With the method shown in fig. 6, when the robot starts from the initial point position and performs environment mapping and service operation, it binds its motion trajectory and the service operation information corresponding to each of at least one target point on that trajectory into the established scene map to obtain the service map; the cloud server splices maps based on the maps established by the robots, and scene mapping proceeds iteratively as the multiple robots loop through their runs, so that full scene coverage of the map is gradually achieved. Because the multiple independent robots operate on the spliced map, the robots coordinate well, work efficiently, and adapt readily to different scenes.
In summary, with the multi-machine management system based on a cloud server and edge robots provided by the embodiments of the invention, a star topology and group management are adopted. When a robot performs environment mapping and service operation, it binds its motion trajectory and the service operation information corresponding to each of at least one target point on that trajectory into the established scene map to obtain the service map; the cloud server splices maps based on the service maps established by the robots and carries out scene mapping and map updating iteratively as the multiple robots loop through their runs, so that full scene coverage of the map is gradually achieved. The robots in a group are positioned on the spliced map and, through interaction with the cloud, know each other's poses in real time, which realizes cloud-based multi-robot collaborative management and multi-robot collaborative operation and improves both the intelligence level of the robots and the efficiency of multi-machine collaborative work.
The above disclosure presents only a few specific embodiments of the present invention, but the present invention is not limited thereto; any changes that can readily be conceived by those skilled in the art shall fall within the protection scope of the present invention.