HK40011873A - Systems and methods for robotic mapping - Google Patents
- Publication number
- HK40011873A (application HK62020001498.8A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- robot
- nodes
- map
- scan
- location
- Prior art date
Description
Priority
This application claims priority to commonly owned and co-pending U.S. patent application No. 15/340,807, filed November 1, 2016, having the same title, the contents of which are incorporated herein by reference in their entirety.
Copyright rights
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the patent and trademark office patent file or records, but otherwise reserves all copyright rights whatsoever.
Technical Field
The present application relates generally to robotics, and more specifically to systems and methods for robotic mapping.
Background
In some cases, a robot maps its environment. These maps enable the robot to navigate autonomously. As the robot maps its environment, it relies on its senses (e.g., using sensors) to detect features of the environment and remembers those features for later reference. However, mapping can be a slow and difficult process due to, among other issues, environmental noise, sensor noise and inaccuracy, ambiguities in the environment (e.g., substantially similar features), and sensor drift.
Inaccuracies in mapping may cause the robot to become lost during navigation or to become completely unable to navigate in a certain environment. In some cases, mapping problems may cause the robot to collide with objects or people in the environment, or fail to accomplish the objective the robot is tasked with performing. Accordingly, there is a need in the art for systems and methods that can correct for mapping inaccuracies and generate a map that represents the environment and/or route traveled by the robot.
Another challenge is that it is generally desirable that the robot be affordable, lightweight, and as small as possible. Therefore, it is desirable that the systems and methods be computationally efficient and capable of running on low cost (e.g., potentially less accurate) hardware, including sensors and processors.
Disclosure of Invention
The present disclosure satisfies the above needs by providing, among other things, apparatus and methods for mapping in autonomous navigation. The example implementations described herein have innovative features, no single one of which is indispensable or solely responsible for their desirable attributes. Without limiting the scope of the claims, some of the advantageous features will now be summarized.
In a first aspect, a method for a robot to generate a map is disclosed. In one exemplary embodiment, a method comprises: the robot travels in the environment; forming a graph comprising a plurality of nodes, wherein each node corresponds to a scan obtained by a sensor of the robot at a location in the environment; constraining the graph to begin and end at substantially similar locations; performing scan matching on an extended scan group determined at least in part from a group of a plurality of nodes; associating a range of possible locations with each of the plurality of nodes based at least in part on the scan matching; determining a confidence level associated with each possible location range; optimizing the graph based at least in part on the confidence to find possible locations of the plurality of nodes; and rendering the map according to the optimized graph.
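A minimal data-structure sketch of the graph the method above builds (all names here are hypothetical, not taken from the original disclosure): each node pairs a scan with the location at which it was obtained, and a high-confidence loop-closure constraint ties the first and last nodes to substantially the same location.

```python
class Node:
    """One node of the graph: a scan and the location where it was taken."""
    def __init__(self, location, scan):
        self.location = location   # (x, y, heading) estimate at scan time
        self.scan = scan           # range readings from the sensor


class Graph:
    def __init__(self):
        self.nodes = []
        self.constraints = []      # (i, j, relative transform, confidence)

    def add_node(self, location, scan):
        """Record a scan taken at a location; return the node's index."""
        self.nodes.append(Node(location, scan))
        return len(self.nodes) - 1

    def close_loop(self, confidence=100.0):
        """Constrain the graph to begin and end at the same location."""
        last = len(self.nodes) - 1
        self.constraints.append((0, last, (0.0, 0.0, 0.0), confidence))
```

The large confidence weight on the loop-closure constraint reflects that the robot directly observes (e.g., via a home locator) that it has returned to its starting location.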
In one variation, generating the map includes ray tracing using the scans obtained by the sensor, the scans being associated with the plurality of nodes. In another variation, optimizing the graph includes: generating a cost function based at least in part on (1) the relative locations of each of the plurality of nodes and (2) the confidence associated with each of the plurality of nodes, and solving the cost function for a minimum.
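A hypothetical one-dimensional sketch of the cost-function optimization described above (simplified to scalar positions; the disclosure itself concerns 2D poses): each measured constraint contributes the squared disagreement between the relative position of two nodes and the measurement, weighted by confidence, and the minimum is found by gradient descent.

```python
def optimize_graph(edges, n_nodes, n_iters=5000, lr=0.05):
    """edges: (i, j, measured displacement from i to j, confidence weight).

    Minimizes cost = sum(w * (x[j] - x[i] - d)**2) by gradient descent,
    with node 0 anchored at the origin.
    """
    x = [0.0] * n_nodes
    for _ in range(n_iters):
        grad = [0.0] * n_nodes
        for i, j, d, w in edges:
            e = x[j] - x[i] - d        # residual of this constraint
            grad[j] += 2.0 * w * e
            grad[i] -= 2.0 * w * e
        grad[0] = 0.0                  # keep the first node fixed
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x
```

For example, if an odometry chain of three unit steps disagrees with a higher-confidence scan-matching measurement of 3.3 between the first and last nodes, minimizing the cost spreads the error across all nodes in proportion to confidence.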
In another variation, constraining the graph to begin and end at substantially similar locations additionally includes constraining the graph to begin and end within sight of a home locator. In another variation, traveling in the environment includes navigating under user control.
In another variation, the associated confidence additionally includes using a probability distribution that is at least partially indicative of a probability that the given node is at the given location.
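The probability distribution described above might, for instance, be modeled as a Gaussian over the node's position, where a smaller standard deviation expresses higher confidence that the node is at its estimated location (a hypothetical illustration, not the disclosure's specific model):

```python
import math

def location_probability(x, mean, sigma):
    """Probability density that a node lies at position x, given a
    Gaussian with the node's estimated location `mean` and a spread
    `sigma` that shrinks as confidence grows."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
```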
In another variation, determining the extended scan group at least in part from the group of the plurality of nodes additionally includes selecting a root node as a reference location, finding possible overlaps in measurements of unique features between scans obtained at other nodes, and grouping those other nodes into the extended scan group based at least in part on the overlap of unique features in the scans at the respective other nodes.
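The grouping step above can be sketched as follows (a hypothetical simplification in which each scan is reduced to a set of feature identifiers; the actual disclosure operates on raw scans):

```python
def build_extended_scan_group(root_id, scans, min_overlap=3):
    """scans: dict mapping node id -> set of unique feature ids seen in
    that node's scan. Nodes whose scans share at least `min_overlap`
    features with the root node's scan join the extended scan group."""
    root_features = scans[root_id]
    group = [root_id]                          # root node is the reference
    for node_id, features in scans.items():
        if node_id == root_id:
            continue
        if len(features & root_features) >= min_overlap:
            group.append(node_id)              # sufficient feature overlap
    return group
```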
In a second aspect, a robot is disclosed. In one exemplary embodiment, the robot comprises: a sensor configured to obtain scans of an environment at a plurality of nodes, wherein each node is associated with a location; and a mapping and positioning unit configured to form a graph of the nodes based at least in part on the obtained scans, determine an extended scan group based at least in part on a group of the plurality of nodes, and perform scan matching on the extended scan group.
In one variation, the mapping and positioning unit is further configured to constrain the graph to begin and end at substantially similar locations. In another variation, the mapping and positioning unit is further configured to determine a confidence level associated with the location of each node.
In another variation, the mapping and positioning unit is further configured to generate a cost function based at least in part on the confidence and location of each node, wherein the mapping and positioning unit determines the location of each node in the graph by solving the cost function for a minimum.
In another variation, the processor is additionally configured to render the map from the graph, the map being rendered based at least in part on ray tracing of the scans.
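Ray tracing a scan into map pixels can be sketched as follows (a minimal, hypothetical illustration using Bresenham's line algorithm; cell values -1/0/1 for unknown/free/occupied are an assumption, not the disclosure's format):

```python
def ray_trace(grid, x0, y0, x1, y1):
    """Mark cells from the robot at (x0, y0) to the scan endpoint at
    (x1, y1): cells along the ray become free, the endpoint occupied."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        grid[y][x] = 0            # free space along the ray
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    grid[y1][x1] = 1              # obstacle at the measured endpoint
```

Repeating this for every ray of every scan, from each node's optimized location, fills in the rendered map.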
In another variation, the robot additionally includes an actuator configured to move the robot between the positions. In another variation, the robot additionally includes a user interface configured for a user to control the robot.
In another variant, the robot additionally comprises a navigation unit configured to autonomously navigate the robot.
In a third aspect, a non-transitory computer-readable storage device is disclosed. In one embodiment, a non-transitory computer readable storage device has stored thereon a plurality of instructions executable by a processing device to operate a robot. The instructions are configured to, when executed by a processing device, cause a sensor to generate a scan of an environment at a plurality of nodes, wherein each node of the plurality of nodes is associated with a location; forming a graph of a plurality of nodes based on the generated scan; determining an extended scan group based at least in part on a scan associated with a group of a plurality of nodes; and performing scan matching on the extended scan group.
In one variation, the instructions additionally cause the processing device to constrain the graph to begin and end at substantially similar locations.
In another variation, each extended scan group includes three or more scans.
In another variation, the instructions additionally cause the processing device to determine a cost function based at least in part on the confidence of the location of each node.
In another variation, the instructions additionally cause the processing device to generate a map based at least in part on minimizing the cost function.
These and other objects, features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the disclosure. As used in the specification and in the claims, the singular form of "a", "an", and "the" include plural referents unless the context clearly dictates otherwise.
Drawings
The disclosed aspects will hereinafter be described in conjunction with the appended drawings, provided to illustrate and not to limit the disclosed aspects, wherein like designations denote like elements.
Fig. 1 illustrates various side views of an exemplary body form of a robot in accordance with the principles of the present disclosure.
Fig. 2 is a diagram of a top view of a robot as an operator demonstrates a path, according to some embodiments of the present disclosure.
Fig. 3 illustrates a functional block diagram of an example robot in some embodiments.
Fig. 4A is an overhead view of a map and route that a robot generates as it travels in an environment, according to some embodiments of the present disclosure.
Fig. 4B is a map that does not accurately reflect the surroundings and travel route of the robot, according to some embodiments of the present disclosure.
Fig. 5 is a process flow diagram of an example method for forming a map, according to some embodiments of the present disclosure.
Fig. 6A is a top view of a graph including discretized measurements taken by a robot along a route, according to some embodiments of the present disclosure.
Fig. 6B is a top view of a diagram including a continuation of the route illustrated in fig. 6A, according to some embodiments of the present disclosure.
Fig. 6C illustrates a constraint of the graph of fig. 6B, wherein the start and end positions are at the same and/or substantially the same position, according to some embodiments of the present disclosure.
Fig. 7A is a process flow diagram of an example method for forming an extended scan group, in accordance with some embodiments of the present disclosure.
Fig. 7B is a conceptual diagram of a scan matching transformation, according to some embodiments of the present disclosure.
Fig. 8 illustrates example results of scan matching of the graphically extended scan group illustrated in fig. 6C, in accordance with some implementations of the present disclosure.
Fig. 9 illustrates springs to/from a node in the graph illustrated in fig. 6A, according to some embodiments of the present disclosure.
Fig. 10 is a diagram of pixels of a map constructed by scanning, according to some embodiments of the present disclosure.
Fig. 11 is a process flow diagram of an example method for a robot to generate a map, according to some embodiments of the present disclosure.
Fig. 12 is a process flow diagram of an example method for operating a robot, according to some embodiments of the present disclosure.
All figures disclosed herein are © Brain Corporation. All rights reserved.
Detailed Description
Various aspects of the novel systems, devices, and methods disclosed herein are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of the present disclosure is intended to cover any aspect of the novel systems, apparatus, and methods disclosed herein, whether implemented independently of or combined with any other aspect of the present disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the present disclosure is intended to cover such an apparatus or method as practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the present disclosure set forth herein. It should be understood that any aspect disclosed herein may be implemented by one or more elements of a claim.
Although specific aspects are described herein, many variations and permutations of these aspects are within the scope of the present disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the present disclosure is not intended to be limited to the specific benefits, uses, and/or objectives. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.
The present disclosure provides improved systems and methods for robotic mapping. As used herein, a robot may include a mechanical or virtual entity configured to automatically perform a complex series of actions. In some cases, the robot may be a machine guided by a computer program or electronic circuitry. In some cases, the robot may include electromechanical components configured for navigation, where the robot may move from one location to another. Such navigation robots may include autonomous cars, floor cleaners, rovers, drones, carts, and the like.
As referred to herein, floor cleaners can include floor cleaners that are manually controlled (e.g., driven or remote controlled) and/or autonomous (e.g., using little to no user control). For example, a floor cleaner can include a floor scrubber that a janitor, custodian, or other person operates and/or a robotic floor scrubber that autonomously navigates and/or cleans an environment. Similarly, floor cleaners can also include vacuums, steamers, buffers, mops, polishers, sweepers, burnishers, etc.
One of ordinary skill in the art will appreciate that, as mentioned herein, a robot may have several different appearances/forms. Fig. 1 illustrates various side views of an exemplary body form of a robot in accordance with the principles of the present disclosure. These are non-limiting examples intended to further illustrate the various body forms, but are not intended to limit the robots described herein to any particular body form. For example, the body form 100 illustrates an example in which the robot is a stand-up shop vacuum cleaner. Body form 102 illustrates an example in which the robot is a humanoid robot having an appearance substantially similar to a human body. Body form 104 illustrates an example in which the robot is a drone with a propeller. Body form 106 illustrates an example in which the robot has a vehicle shape with wheels and a passenger compartment. Body form 108 illustrates an example in which the robot is a rover.
The body form 110 may be an example where the robot is a motorized floor scrubber. The body form 112 may be a motorized floor scrubber having a seat, pedals, and a steering wheel, where a user can drive the body form 112 like a vehicle as it cleans, but the body form 112 may also operate autonomously. Other body forms are also contemplated, including industrial machines that can be automated, such as forklifts, tugs, boats, airplanes, etc.
A detailed description of various implementations and variations of the disclosed systems and methods is now provided. Although many of the examples discussed herein are in the context of a robotic floor cleaner, it should be understood that the systems and methods contained herein may be used with other robots. One of ordinary skill in the art, in view of this disclosure, will readily envision a myriad of other exemplary embodiments or uses of the techniques described herein.
Advantageously, the systems and methods of the present disclosure at least: (i) provide accurate route and/or environment mapping by the robot; (ii) provide computational efficiency that can reduce consumption of processing power, energy, and/or other resources while the robot navigates; and (iii) allow lower-cost hardware to be used in constructing the robot. Other advantages may be readily discerned by one of ordinary skill in the art given the contents of the present disclosure.
For example, many current robots that can navigate autonomously are programmed to travel along a route and/or path to a goal. To navigate these routes and/or paths, such robots may form a map, which may at times be referred to as a global solution in that the map identifies one or more portions of the environment beyond what the robot can directly observe with its sensors at a point in time. These robots generate the map, and their relationship to the environment along the route, using localized detection in a small area around the robot (e.g., on the order of a few meters), where the robot can determine obstacles and/or other features detected by its sensors. When navigating autonomously, the robot may then utilize both the global solution and the locally detected objects to avoid collisions and/or reach its goal.
For example, a robot (e.g., a floor cleaner, autonomous vehicle, or other robot) may be programmed by demonstration. In the example of a floor cleaner, an operator may control the robot along a desired route through a desired environment. The robot may thereby generate a map, recording its location (e.g., its place on the global map and/or its pose relative to features of the environment) while the operator controls it. The robot may generate the map using odometry and its sensors (e.g., scans by a light detection and ranging ("LIDAR") sensor and/or any other sensor described in this disclosure). Subsequently, the robot may navigate the route autonomously with little to no operator control.
The challenge for the robot in this illustration is to construct an accurate map that it can use to autonomously navigate the route after the demonstration. The demonstration process may include complex sets of movements and actions (e.g., turning, stopping, parking, turning flashing lights and other signals on and off, raising and lowering brushes, turning water flow off and on, turning the vacuum off and on, etc.) associated with particular poses and/or trajectories, as well as recognition of objects. During the demonstration, the robot's mapping may not be entirely accurate (e.g., subject to drift, noise, and/or error), and the robot may need to determine how the map should be rendered in order to accurately reflect the true state of the environment.
Fig. 2 is a diagram of a top view of the robot 202 while the operator demonstrates a path, according to some embodiments of the present disclosure. In this manner, fig. 2 may illustrate the path that robot 202 actually travels in environment 200 as it maps. The robot 202 may be any robot described in this disclosure. For example, the robot 202 may be a robotic floor cleaner, such as a robotic floor scrubber, vacuum, steamer, buffer, mop, polisher, sweeper, burnisher, and the like. The environment 200 may be a space in which the robot 202 navigates. In the example of a robotic floor cleaner, environment 200 may be a space having a floor to be cleaned. For example, the environment 200 may be a store, warehouse, office building, residence, storage facility, or the like. One or more of the objects 208, 210, 212, 218 may be shelves, displays, objects, items, people, animals, or any other entity or thing that may be located on the floor or otherwise hinder the robot 202 from navigating in the environment 200. Thus, the route 206 may be a cleaning path traveled by the robot 202. Route 206 may follow a path that travels between objects 208, 210, 212, and 218, as illustrated by example route 206. For example, if the objects 208, 210, 212, 218 are shelves in a store, the robot 202 may proceed along an aisle of the store and clean the floor of the aisle. However, other routes are also contemplated, such as, but not limited to, a route that traverses back and forth along an open floor area and/or any cleaning path that a user may use to clean a floor (e.g., in the case of a user manually operating a floor cleaner). Thus, the route 206 is intended only as an illustrative example and may be presented in a different manner than illustrated.
Also, as illustrated, one example of the environment 200 is shown, but it should be understood that the environment 200 may take any number of forms and arrangements (e.g., rooms and/or buildings of any size, configuration, and/or layout) and is not limited by example illustrations of the present disclosure.
In route 206, robot 202 may begin at an initial location 222, which may be the starting point/location of robot 202. In some cases, a home locator 220 may be positioned near, substantially near, and/or at the initial location 222. For example, the home locator 220 may be positioned adjacent to the initial location 222, or within range of at least one sensor of the robot 202, such that a sensor of the robot 202 can detect the home locator 220 from the initial location 222. The operator may then demonstrate the route 206 to the robot 202 until the robot 202 reaches an end location, where the operator may stop operating the robot 202. The end location may be designated by a user and/or determined by the robot 202. In some cases, the end location may be the location in the route 206 after which the robot 202 has cleaned the desired floor area. As previously described, the route 206 may be a closed loop or an open loop. As an illustrative example, the end location may be a storage location for the robot 202, such as a temporary parking spot, a storage room/closet, and the like. In some cases, the end location may be a point at which the user finishes training and/or programming the robot 202 for a task. In some cases, the end location may be the same as and/or substantially similar to the initial location 222. For example, the robot 202 may detect that it has returned to a position that is the same as and/or substantially similar to the initial location 222 by detecting the home locator 220.
In the context of a floor cleaner (e.g., floor scrubber, vacuum cleaner, etc.), the robot 202 may or may not clean at every point along the route 206. For example, if the robot 202 is a robotic floor scrubber, the cleaning system (e.g., water flow, cleaning brush, etc.) of the robot 202 may only operate in some portions of the route 206 and not in other portions of the route 206. For example, the robot 202 may associate some actions (e.g., turn water on/off, spray water, turn vacuum on/off, move vacuum hose position, swing arm, raise/lower lift, move sensor, turn on/off sensor, etc.) with a particular position and/or trajectory along the demonstration route (e.g., when moving in a particular direction or moving in a particular sequence along the route 206). In the context of floor cleaners, such an association may be desirable when only some areas of the floor are to be cleaned while other areas are not and/or in some tracks. In these cases, robot 202 may turn the cleaning system on in areas where the user demonstrates to robot 202 for cleaning and turn the cleaning system off in areas where cleaning is not to be used.
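A hypothetical sketch of the association described above (coordinates and action names invented for illustration): actions recorded at positions along the demonstrated route are replayed when the robot comes near those positions during autonomous operation.

```python
# Actions recorded during demonstration, keyed by (x, y) in meters.
route_actions = [
    ((0.0, 0.0), "brush_down"),
    ((5.0, 0.0), "water_on"),
    ((5.0, 8.0), "water_off"),
    ((0.0, 8.0), "brush_up"),
]

def actions_near(position, recorded, radius=0.5):
    """Return actions recorded within `radius` meters of `position`."""
    px, py = position
    return [action for (x, y), action in recorded
            if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2]
```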
Fig. 3 illustrates a functional block diagram of an example robot 202 in some implementations. As illustrated, the robot 202 includes a controller 254, a memory 252, a power source 256, and an operating unit 250, each of which may be operatively and/or communicatively connected to each other and to each other's components and/or subcomponents. The controller 254 controls various operations performed by the robot 202. Although a particular embodiment is illustrated in fig. 3, given this disclosure, it will be readily apparent to those skilled in the art that the architecture may vary in certain embodiments.
Controller 254 may include one or more processors (e.g., microprocessors) and other peripherals. As used herein, the terms processor, microprocessor, and digital processor may include any type of digital processing device, such as, but not limited to, digital signal processors ("DSPs"), reduced instruction set computers ("RISC"), complex instruction set computer ("CISC") processors, microprocessors, gate arrays (e.g., field programmable gate arrays ("FPGAs")), programmable logic devices ("PLDs"), reconfigurable compute fabrics ("RCFs"), array processors, secure microprocessors, and application specific integrated circuits ("ASICs"). Such digital processors may be housed on a single monolithic integrated circuit die or distributed across multiple components.
The controller 254 may be operatively and/or communicatively connected to the memory 252. The memory 252 may include any type of integrated circuit or other storage suitable for storing digital data, including but not limited to read-only memory ("ROM"), random-access memory ("RAM"), non-volatile random-access memory ("NVRAM"), programmable read-only memory ("PROM"), electrically erasable programmable read-only memory ("EEPROM"), dynamic random-access memory ("DRAM"), mobile DRAM, synchronous DRAM ("SDRAM"), double-data-rate SDRAM ("DDR/2 SDRAM"), extended data-out RAM ("EDO"), fast page mode ("FPM") RAM, reduced latency DRAM ("RLDRAM"), static RAM ("SRAM"), flash memory (e.g., NAND/NOR), memristor memory, pseudo static RAM ("PSRAM"), and so forth. Memory 252 may provide instructions and data to controller 254. For example, the memory 252 may be a non-transitory computer-readable storage medium having stored thereon a plurality of instructions executable by a processing device (e.g., the controller 254) to operate the robot 202. In some cases, the instructions may be configured to, when executed by a processing device, cause the processing device to perform various methods, features, and/or functionalities described in the present disclosure. Accordingly, the controller 254 may perform logical and arithmetic operations based on program instructions stored within the memory 252.
The operation unit 250 may be connected to the controller 254 or any other controller to perform various operations described in the present disclosure. In some embodiments, one or more modules or no modules may be included in the operation unit 250. Throughout this disclosure, reference may be made to various controllers and/or processors. In some embodiments, a single controller (e.g., controller 254) may be used as the various controllers and/or processors described. In other embodiments, different controllers and/or processors may be used, such as controllers and/or processors specifically for and/or as part of one or more of the operating units 250. The controller 254 may send and/or receive signals, such as power signals, control signals, sensor signals, interrogation signals, status signals, data signals, electrical signals, and/or any other desired signals, including discrete and analog signals, to the operating unit 250. The controller 254 may coordinate and/or manage the operations unit 250, and/or set timing (e.g., synchronous or asynchronous), turn on/off, control power budgets, receive/transmit network instructions and/or updates, update firmware, transmit interrogation signals, receive and/or transmit status, and/or perform any operations for operating features of the robot 202.
The operation unit 250 may include various units that perform the functions of the robot 202. For example, the units of the operation unit 250 may include a mapping and positioning unit 262, a sensor unit 264, an actuator unit 268, a communication unit 266, a navigation unit 276, and a user interface unit 272. The operation unit 250 may also include other units that provide various functionalities of the robot 202. In some cases, the units of the operation unit 250 may be instantiated as software or hardware and/or both. For example, in some cases, the elements of the operation unit 250 may include computer-implemented instructions executed by a controller. In some cases, the units of the operation unit 250 may include hard-coded logic. In some cases, the units of the operation unit 250 may include both computer-implemented instructions executed by the controller and hard-coded logic. When the operation unit 250 is at least partially implemented as software, the operation unit 250 may comprise a unit/module of code configured to provide one or more functionalities.
In some embodiments, the sensor unit 264 may include systems that can detect characteristics within and/or around the robot 202. The sensor unit 264 may include sensors that are internal or external to the robot 202, and/or have components that are partially internal and/or partially external. The sensor unit 264 may include exteroceptive sensors such as sonars, LIDAR, radars, lasers, video cameras, infrared cameras, 3D sensors, 3D cameras, and/or any other sensor known in the art. The sensor unit 264 may also include proprioceptive sensors such as accelerometers, inertial measurement units, odometers, gyroscopes, speedometers, and the like. In some implementations, the sensor unit 264 may collect raw measurements (e.g., currents, voltages, resistances, gate logic, etc.) and/or transformed measurements (e.g., distances, angles, detected points in obstacles, etc.).
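The distinction between raw and transformed measurements can be illustrated with a hypothetical example: converting a raw LIDAR range/bearing reading into a point in the robot's frame.

```python
import math

def range_bearing_to_point(r, bearing_rad):
    """Transform a raw range `r` (meters) and bearing (radians) into an
    (x, y) point relative to the sensor."""
    return (r * math.cos(bearing_rad), r * math.sin(bearing_rad))
```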
In some embodiments, the mapping and positioning unit 262 may include systems and methods that may computationally construct and update maps of the environment and/or route (e.g., maps 300 and 400 as will be described with reference to fig. 4A and 4B, and/or any other maps formed by the robot 202) as the robot 202 navigates within the environment. As an illustrative example, the mapping and positioning unit 262 may map the environment 200 and position the robot 202 (e.g., find an orientation and/or pose) in the map 300 at one or more points in time. At the same time, the mapping and positioning unit 262 may record a presentation route (e.g., mapping route 306) in the map 300.
The mapping and positioning unit 262 may also receive sensor data from the sensor unit 264 to position the robot 202 in a map. In some implementations, the mapping and positioning unit 262 may include a positioning system and method that allows the robot 202 to position itself in the coordinates of a map. As will be additionally described in this disclosure, the mapping and positioning unit 262 may also process measurements obtained by the robot 202, for example, by generating graphics and/or maps.
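Positioning the robot in the coordinates of a map can be sketched as a conversion between world coordinates and map pixels (the origin and resolution values here are illustrative assumptions, not from the disclosure):

```python
def world_to_map(x_m, y_m, origin=(-10.0, -10.0), resolution=0.05):
    """Return the (col, row) of the map pixel containing the world point
    (x_m, y_m) in meters, given the map's lower-left origin and the size
    of one pixel in meters."""
    col = int((x_m - origin[0]) / resolution)
    row = int((y_m - origin[1]) / resolution)
    return col, row
```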
In some implementations, the communication unit 266 may include one or more receivers, transmitters, and/or transceivers. The communication unit 266 may be configured to send/receive transmission protocols, e.g., Wi-Fi, inductive wireless data transmission, radio frequencies, radio transmission, radio frequency identification ("RFID"), near field communication ("NFC"), infrared, network interfaces, cellular technologies such as 3G (3GPP/3GPP2), high speed downlink packet access ("HSDPA"), high speed uplink packet access ("HSUPA"), time division multiple access ("TDMA"), code division multiple access ("CDMA") (e.g., IS-95A, wideband code division multiple access ("WCDMA"), etc.), frequency hopping spread spectrum ("FHSS"), direct sequence spread spectrum ("DSSS"), global system for mobile communications ("GSM"), personal area network ("PAN") (e.g., PAN/802.15), worldwide interoperability for microwave access ("WiMAX"), 802.20, long term evolution ("LTE") (e.g., LTE/LTE-A), time division LTE ("TD-LTE"), narrowband/frequency division multiple access ("FDMA"), orthogonal frequency division multiplexing ("OFDM"), analog cellular, cellular digital packet data ("CDPD"), satellite systems, millimeter wave or microwave systems, acoustic, infrared (e.g., infrared data association ("IrDA")), and/or any other form of wireless data transmission.
As used herein, a network interface may include any signal, data, or software interface with a component, network, or process, including, but not limited to, FireWire (e.g., FW400, FW800, FWS800T, FWS1600, FWS3200, etc.), universal serial bus ("USB") (e.g., USB 1.X, USB 2.0, USB 3.0, USB Type-C, etc.), Ethernet (e.g., 10/100, 10/100/1000 (gigabit Ethernet), 10-Gig-E, etc.), multimedia over coax alliance ("MoCA") technology, Coaxsys (e.g., TVnet™), radio frequency tuners (e.g., in-band or OOB, cable modems, etc.), Wi-Fi (802.11), WiMAX (e.g., WiMAX (802.16)), PAN (e.g., PAN/802.15), cellular (e.g., 3G, LTE/LTE-A/TD-LTE, GSM, etc.), the IrDA family, and the like. As used herein, Wi-Fi may include one or more of: IEEE-Std. 802.11, variants of IEEE-Std. 802.11, standards related to IEEE-Std. 802.11 (e.g., 802.11a/b/g/n/ac/ad/af/ah/ai/aj/aq/ax/ay), and/or other wireless standards.
The communication unit 266 may also be configured to send/receive transmission protocols over wired connections, such as any cable having signal lines and ground. For example, such cables may include Ethernet cables, coaxial cables, universal serial bus ("USB") cables, FireWire, and/or any connection known in the art. Such protocols may be used by the communication unit 266 to communicate with external systems, such as computers, smartphones, tablets, data capture systems, mobile telecommunications networks, clouds, servers, and the like. The communication unit 266 may be configured to send and receive signals comprising numbers, letters, alphanumeric characters, and/or symbols. In some cases, signals may be encrypted using algorithms with, e.g., 128-bit or 256-bit keys and/or algorithms complying with standards such as the advanced encryption standard ("AES"), RSA, the data encryption standard ("DES"), triple DES, and so forth. The communication unit 266 may be configured to send and receive statuses, commands, and other data/information. For example, the communication unit 266 may communicate with a user controller to allow a user to control the robot 202. The communication unit 266 may communicate with a server/network to allow the robot 202 to send data, statuses, commands, and other communications to the server. The server may also be communicatively connected to computers and/or devices that may be used to remotely monitor and/or control the robot 202. The communication unit 266 may also receive updates (e.g., firmware or data updates), data, statuses, commands, and other communications from the server for the robot 202 and/or its operating units 250.
In some embodiments, one or more of the operating units 250 may be instantiated remotely from the robot 202. For example, the mapping and positioning unit 262 may be positioned in the cloud and/or connected to the robot 202 via the communication unit 266. The connection may be direct and/or through a server and/or network. Accordingly, functional embodiments of the present disclosure should also be understood to encompass remote interaction, wherein data may be communicated using communication unit 266 and one or more portions of the process may be accomplished remotely.
In some implementations, the actuator unit 268 can include actuators such as electric motors, gas motors, driven magnet systems, solenoid/ratchet systems, piezoelectric systems (e.g., peristaltic motors), magnetostrictive elements, gesticulation, and/or any manner of driving an actuator known in the art. For example, such actuators may actuate wheels or other displacement-enabling drives (e.g., mechanical legs, jet engines, propellers, hydraulics, etc.). For example, the actuator unit 268 may allow the robot 202 to move and/or navigate in the environment 200 and/or any other environment. In some cases, the actuator unit 268 may include actuators configured for actions and/or action-specific tasks, such as moving a brush for floor cleaning, moving (e.g., up, down, left, right, forward, backward) a squeegee, turning water on/off, spraying water, turning a vacuum on/off, moving a vacuum hose position, waving an arm, raising/lowering a lift, turning a camera and/or any sensors of the sensor unit 264, and/or any movements required by the robot 202 to perform an action.
In some embodiments, the user interface unit 272 may be configured to enable a user (e.g., the user 604 or any other user) to interact with the robot 202. For example, the user interface unit 272 can include touch panels, buttons, keypads/keyboards, ports (e.g., USB, digital video interface ("DVI"), DisplayPort, eSATA, FireWire, PS/2, serial, video graphics array ("VGA"), small computer system interface ("SCSI"), audio port, high-definition multimedia interface ("HDMI"), personal computer memory card international association ("PCMCIA") ports, memory card ports (e.g., SD and miniSD), and/or ports for computer-readable media), a mouse, a rollerball, a console, a vibrator, an audio transducer, and/or any interface for a user to input and/or receive data and/or commands, whether wirelessly or through a wired connection (including, but not limited to, any of the wireless or wired connections described in this disclosure, for example, with reference to the communication unit 266). The user interface unit 272 may include a display, such as, but not limited to, a liquid crystal display ("LCD"), a light-emitting diode ("LED") display, an LED LCD display, an in-plane switching ("IPS") display, a cathode ray tube, a plasma display, a high-definition ("HD") panel, a 4K display, a retina display, an organic LED display, a touch screen, a surface, a canvas, and/or any display, television, monitor, panel, screen, and/or device known in the art for visual presentation. In some embodiments, the user interface unit 272 may be located on the body of the robot 202. In some embodiments, the user interface unit 272 may be located remotely from the body of the robot 202 but communicatively connected to the robot 202 (e.g., via the communication unit 266) directly or indirectly (e.g., via a network or the cloud).
In some implementations, the navigation unit 276 may include components and/or software configured to provide directional instructions for navigating the robot 202. The navigation unit 276 may process maps and positioning information generated by the mapping and positioning unit 262, sensor data from the sensor unit 264, and/or data from other operating units 250. For example, the navigation unit 276 may receive the map 300 from the mapping and positioning unit 262. The navigation unit 276 may also receive positioning information from the mapping and positioning unit 262, which may indicate, at least in part, the location of the robot 202 within the map 300, including the route 306. The navigation unit 276 may also receive sensor data from the sensor unit 264, which may be indicative, at least in part, of objects around the robot 202. Using one or more of the map, the location, and the sensor data, the navigation unit 276 may instruct the robot 202 where to navigate (e.g., travel forward, left, right, backward, and/or in any other direction).
The navigation unit 276 may also implement actions and/or action-specific tasks, such as moving a brush for floor cleaning, moving (e.g., up, down, left, right, forward, backward) a squeegee, turning water on/off, spraying water, turning a vacuum on/off, moving a vacuum hose position, waving an arm, raising/lowering a lift, turning a camera and/or any sensors of the sensor unit 264, and/or any other actions taken by the robot 202. In some cases, such actions and/or action-specific tasks may be indicated in the map and carried out by the actuator unit 268.
In some embodiments, the power source 256 may comprise one or more batteries, including, but not limited to, lithium batteries, lithium-ion batteries, nickel-cadmium batteries, nickel-metal hydride batteries, zinc-carbon batteries, silver-oxide batteries, zinc-air batteries, mercury-oxide batteries, alkaline batteries, or any other type of battery known in the art. Certain batteries may be rechargeable, for example wirelessly (e.g., via a resonant circuit and/or a resonant tank circuit) and/or by plugging into an external power source. The power source 256 may also be any supplier of energy, including wall sockets and electronic devices that convert solar, wind, water, nuclear, hydrogen, gasoline, natural gas, fossil fuel, mechanical energy, steam, and/or any power source into electricity.
In some embodiments, the operating system 270 may be configured to manage the memory 252, the controller 254, the power supply 256, the modules in the operating unit 250, and/or any software, hardware, and/or features of the robot 202. For example and without limitation, the operating system 270 may include device drivers that manage the hardware resources of the robot 202.
As previously mentioned, any of the aforementioned components of the robot 202 may be instantiated in software and/or hardware. For example, a unit/module may be a piece of hardware and/or a piece of code running on a computer.
Fig. 4A is an overhead view of a map 300 and route 306 that the robot 202 generates as it travels in the environment 200, according to some embodiments of the present disclosure. In some embodiments, the generation of the map 300 may be performed by the mapping and positioning unit 262. The map 300 may comprise pixels, where each pixel corresponds to a mapped area of the environment 200. The number of pixels in the map 300 may be determined based on the resolution of the map 300. The layout of the map 300 may be substantially similar to the layout of the environment 200, with each pixel in the map 300 approximating a location in the environment 200. The robot 202 may appear on the map 300 as the robot indicator 302. Each pixel may represent an area, which may be uniform or non-uniform. For example, a pixel may represent one unit of area, an n×m or n×n unit of area, or a non-uniform unit of area.
Mapping may be performed by applying data obtained at least in part by the sensor unit 264 to a two-dimensional ("2D"), three-dimensional ("3D"), and/or four-dimensional ("4D") map that represents, at least in part, the environment 200. For example, the map 300 may include depictions that represent, at least in part, obstacles and/or objects detected by the robot 202. The map 300 may also record a demonstrated route, such as a route learned by the robot 202 under user control. For example, the mapped route 306 may include coordinates (e.g., x and y in a 2D map; x, y, and z in a 3D map) based at least in part on the position, orientation, and/or pose (e.g., including one or more of location, displacement, and orientation) of the robot 202 relative to a reference such as the initialized location 222. For example, a pose may have (x, y, θ) coordinates. As used herein, the term position has its ordinary and customary meaning. For example, in some cases, a position may include the location of an object, the robot 202, etc., in terms of displacement, coordinates, and the like. In some cases, a position may also include the orientation of an object, the robot 202, and the like. Thus, in some cases, the terms position and pose may be used interchangeably to include one or more of location, displacement, and orientation. The map 300 formed through the demonstration process may record the environment sensed by the robot 202 during one or more demonstrations/trainings, covering a portion and/or substantially the whole of the environment. For this reason, some may refer to the map 300 as a global map. In some cases, the map 300 may be static in that the map 300 may not be substantially updated after the demonstration/training. In some embodiments, the map 300 and the mapped route 306 may also be generated separately (e.g., by a user using a computer) and uploaded onto the robot 202.
In some embodiments, a pixel (and/or a voxel in 3D) of the map 300 may have one or more states, where a pixel state is at least partially indicative of a characteristic of the position/location in the environment 200 represented by that pixel. For example, pixels of the map 300 may be binary, with a first pixel state (e.g., pixel value) indicating, at least in part, an unobstructed (e.g., walkable) location and a second pixel state indicating, at least in part, a blocked (e.g., non-walkable) location. For example, a pixel value of zero (0) may indicate, at least in part, an unobstructed location, while a pixel value of one (1) may indicate, at least in part, a blocked location.
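The binary pixel states described above can be illustrated with a minimal sketch; the class and method names below are illustrative assumptions, not taken from this disclosure.

```python
# Minimal sketch of a binary occupancy grid: 0 = unobstructed (walkable),
# 1 = blocked (non-walkable). Names are illustrative assumptions.
FREE, BLOCKED = 0, 1

class OccupancyGrid:
    def __init__(self, width, height):
        # All cells start in the unobstructed state (pixel value 0).
        self.cells = [[FREE] * width for _ in range(height)]

    def mark_blocked(self, x, y):
        # Set the second pixel state (pixel value 1) at a cell.
        self.cells[y][x] = BLOCKED

    def is_walkable(self, x, y):
        return self.cells[y][x] == FREE

grid = OccupancyGrid(4, 3)
grid.mark_blocked(2, 1)
```

Richer, non-binary state sets can be supported by simply widening the set of allowed cell values.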
In some implementations, instead of or in addition to the aforementioned binary states, the pixels of the map 300 may have other pixel states, such as one or more of the following: a pixel state that is at least partially indicative of an unknown location (e.g., a position/location with no information); a pixel state that indicates, at least in part, a location to which the robot should not go; a pixel state at least partially indicative of being part of a walkable route; a pixel state that indicates, at least in part, an area to which the robot 202 has traveled; a pixel state that indicates, at least in part, an area to which the robot 202 has not traveled; a pixel state that indicates, at least in part, an object; a pixel state at least partially indicative of standing water; and/or any other categorization of the position/location on the map 300. In some embodiments, the pixel state and/or data associated with a pixel may be at least partially indicative of actions and/or action-specific tasks, such as moving a brush for floor cleaning, moving (e.g., up, down, left, right, forward, backward) a squeegee, turning water on/off, spraying water, turning a vacuum on/off, moving a vacuum hose position, waving an arm, raising/lowering a lift, turning a camera and/or any sensor of the sensor unit 264, and/or any other action performed by the robot 202.
The pixels of the map 300 may also store more than one value or pixel state. For example, each pixel of the map 300 may store a plurality of values, such as values stored in a vector and/or matrix. These values may include values that indicate, at least in part, the position/pose (e.g., including location and/or orientation) of the robot 202 when measurements were taken at points (e.g., pixels) along the route 306. These values may also include whether the robot 202 should or should not clean a position/location, and/or other actions/action-specific tasks the robot 202 should take.
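A hedged sketch of one such multi-value cell follows; every field name here is an assumption made for the illustration, not a layout from this disclosure.

```python
# Illustrative multi-value map cell storing a vector of values rather
# than a single binary state. All field names are assumptions.
cell = {
    "state": 0,                        # 0 = walkable, 1 = blocked
    "robot_pose": (12.0, 3.5, 1.57),   # (x, y, theta) recorded at this pixel
    "clean_here": True,                # example action-specific task flag
}
```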
The robot 202 may travel along a route (e.g., depicted as route 206 in fig. 2), which may be reflected in the map 300 as the route 306. At each location along the route, the robot 202 may determine its position and/or orientation, in some cases relative to an initialized location (e.g., initialized location 222), a home locator (e.g., home locator 220), and/or another reference point. These mapping and positioning functions may be performed by the mapping and positioning unit 262. The initialized location may be represented on the map 300 as the mapped position 322. In some cases, the mapped position 322 may be determined relative to the home locator (which may also be mapped and/or otherwise represented on the map 300 in some embodiments).
For example, the robot 202 may use odometry to measure or estimate its distance from the initialized location 222 (or another reference point), using proprioceptive sensors of the sensor unit 264 (e.g., wheel encoders (e.g., rotary encoders), visual odometry, inertial measurement units ("IMUs") (including accelerometers, magnetometers, and angular rate sensors), etc.) to track the movement of the robot 202 since the initialized location 222. As an illustrative example, one or more of the proprioceptive sensors may be wheel encoders that measure and/or estimate distance based on the rotation of the wheels of the robot 202. As another illustrative example, visual odometry may be used to measure or estimate the distance traveled and/or the orientation of the robot 202 from successive images taken by a camera. The visual odometry may construct an optical flow field (e.g., using the Lucas-Kanade method or other methods) and/or estimate camera motion, such as by using a Kalman filter or projection. As another non-limiting example, an IMU may be used to measure and/or estimate the position and/or orientation of the robot 202.
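The wheel-encoder dead reckoning described above can be sketched with a simplified differential-drive model; the function and parameter names are assumptions for the illustration, not the disclosure's actual implementation.

```python
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """Dead-reckon a new (x, y, theta) pose from wheel-encoder distances.

    A simplified differential-drive model: the heading change comes from
    the difference in wheel travel, the displacement from their average.
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    # Advance along the mid-arc heading.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

# Straight-line motion: both wheels advance 1.0 m, heading unchanged.
pose = integrate_odometry((0.0, 0.0, 0.0), 1.0, 1.0, 0.5)
```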
As the robot indicator 302 (e.g., as seen in fig. 6A and elsewhere) progresses along the map 300 in a manner substantially similar to the robot 202 walking in the environment 200, the robot 202 may record a route 306 in the map 300. Advantageously, in some embodiments, map 300 and route 306 may be formed together, with robot 202 mapping environment 200 and recording route 306 at a substantially similar time. Thus, in some embodiments, the map 300 and the route 306 may be paired together, with each recorded route being stored only with a particular map.
Each location that is part of the route 206 may correspond to a pixel on the route 306 in the map 300, where the pixel state of the route 306 indicates that the pixel is part of a walkable route. As the robot 202 travels, it may also use one or more of the sensor units 264 to measure its position and/or orientation relative to objects. These measurements may be taken at discrete times and/or locations, represented as nodes. The robot 202 may also use one or more of the sensor units 264 to take scans to detect its environment. In this manner, the robot 202 may detect and/or measure its position and/or orientation relative to objects, such as shelves or walls.
Where the robot 202 detects objects, the robot 202 may use the sensor unit 264 to detect and/or measure the position and/or orientation of those objects relative to the robot 202 in multiple directions. At the same time, the robot 202 may use the sensor unit 264 to estimate its own position and/or orientation. As the robot 202 moves through the environment, different objects may come within range of the sensors of the sensor unit 264. For example, a sensor may be positioned on the front side of the robot 202 and may have a predetermined range, so that the robot 202 may detect objects within that predetermined range in front of it. Similarly, other sensors may each have their own ranges and detect objects within those ranges. The sensors may be positioned on the front, back, left side, right side, bottom, top, and/or any combination of the foregoing.
In some cases, the sensor unit 264 may not sense certain areas. For example, an object may obstruct the robot 202 from sensing an area, or the area may fall in a blind spot (e.g., a place not covered by the sensors' measurement ranges).
The environment mapped in the map 300 covers a larger area than the portion illustrated as environment 200 in fig. 2. The route 306 includes traversals of aisles and other movements, including traversals of the paths between aisles, shown horizontally. As shown in the map 300, the robot 202 may clean a floor by traveling over each aisle multiple times as it traverses the aisles.
In some cases, fig. 4A may be an overhead view of a completed map 300 according to some embodiments of the present disclosure. The robot 202 may begin and end at the mapped position 322. The completed map 300 of fig. 4A illustrates a complete, accurate map. However, it is not straightforward to form a map substantially similar to ground truth merely by plotting the data collected by the sensor unit 264 onto a map. The sensors of the sensor unit 264 have noise, measurement offsets, and the like. Consequently, the sensors of the sensor unit 264 may not always unambiguously identify features in the environment, nor always unambiguously identify the position and/or pose of the robot 202.
Because the identifications of the sensor unit 264 may be ambiguous, the robot 202 may probabilistically determine its position and/or the positions of features of the environment 200. As the robot 202 moves through the environment 200 while mapping, it can record sensor data from the sensor unit 264 as well as internal robot commands (e.g., move forward, left, right, backward, rotate, etc.). The sensor data from the sensor unit 264 may also be compared to itself or to other data, such as by scan matching, to determine relative positions. Thus, given at least the sensor data, scan matching, and internal robot commands, the robot 202 (e.g., using the mapping and positioning unit 262) may construct a posterior probability distribution function of the map. Other factors may also be considered.
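The probabilistic idea above can be illustrated with a one-dimensional Bayes update over a few candidate positions; the numbers are made up for the example, and this is not the disclosure's actual estimator.

```python
# Hedged 1-D illustration: combine a prior over candidate positions
# (e.g., from internal robot commands) with a sensor likelihood to get
# a posterior, per Bayes' rule. Values are invented for the sketch.
def bayes_update(prior, likelihood):
    posterior = [p * l for p, l in zip(prior, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]  # normalize to sum to 1

# Three candidate positions; the sensor strongly favors the middle one.
posterior = bayes_update([0.2, 0.6, 0.2], [0.1, 0.8, 0.1])
```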
Because the measurements are ambiguous, the robot 202 may, absent correction, generate a map that does not reflect its surroundings and the route traveled. Fig. 4B is a map 400 that does not accurately reflect the surroundings and traveled route of the robot 202, according to some embodiments of the present disclosure. The map 400 is distorted due to noise, sensor offsets, and other factors; as a result, the route 406 appears smeared throughout the map, and the mapped environment around the route 406 appears to stretch and overlap itself, with features that do not reflect ground truth. The map 400 may, however, be corrected into the map 300: the map 400 is actually a map of the same environment and route as the map 300, but the map 400 is distorted whereas the map 300 more accurately reflects reality. It can be observed that the environment actually comprises a series of aisles (e.g., shown as columns).
Fig. 5 is a process flow diagram of an example method 550 for forming a map, according to some embodiments of the present disclosure. For example, block 552 includes forming a graph containing nodes. Block 554 includes constraining the graph to begin and end at substantially the same location. Block 556 includes performing scan matching on the extended scan group determined from the scan at the node. Block 558 includes determining a confidence level for the location of each node. Block 560 includes forming and solving a cost function to determine the location of each node. Block 562 includes presenting the map from the scan of located nodes. This process flow diagram is provided for illustration purposes only. In some cases, one or more blocks may be omitted in this process. Moreover, various other blocks may be added and/or used in the alternative as described in this disclosure.
In some cases, the forming of the graph containing nodes of block 552 of fig. 5 may be performed by the mapping and positioning unit 262 and/or the controller 254 as the robot 202 travels in the environment. As another example, fig. 6A is a top view of a graph 600 containing discrete measurements taken by the robot 202 (illustrated in fig. 2) along route 622, according to some embodiments of the present disclosure. Route 622 is shown for illustration purposes. In some cases, the route 622 may represent a route and/or trajectory connecting the discrete nodes 602A-602D determined by the robot 202, the robot 202 being illustrated in the graph space of graph 600 as the robot indicator 302. Thus, in the figures herein, routes are drawn for ease of reference; however, the systems and methods described herein may be applied before and/or after a route is determined.
As the robot 202 travels along the route represented by route 622 illustrated in graph 600, it may obtain measurements at nodes 602A-602D (also illustrated in the graph space of graph 600), for example, by using the sensor unit 264. One skilled in the art will appreciate that there may be many nodes, and the number of nodes may be determined based at least in part on: the resolution of the sensors of the sensor unit 264, the speed of the robot 202 (e.g., represented by the robot indicator 302), the noise and/or error of the sensors of the sensor unit 264, predetermined tuning of the robot 202 (e.g., a set node distance or an established time between nodes), etc. Fig. 6A illustrates only one example configuration of example route 622. For example, the distance between nodes may be set as a predefined parameter in standard units (e.g., millimeters, centimeters, meters, inches, feet, etc.) or in relative units (e.g., number of clicks, pixels, lengths, etc.). The distance may be determined based on a desired resolution, which may depend on considerations of desired accuracy, memory size, noise, and the like. For example, fewer nodes may occupy less storage space but provide less information than more nodes. One or more systems and methods for collecting information from nodes 602A-602D and for mapping may be performed by the mapping and positioning unit 262 and/or the controller 254.
Each of the nodes 602A-602D may represent a location at which the robot 202 took a measurement (e.g., a scan). Each of the nodes 602A-602D may be associated with a pose (e.g., (x, y, θ) coordinates of the robot 202). The pose of the robot 202 may be relative to a location and/or orientation (e.g., the initialized location 222 and/or the home locator 220) and/or to features of the environment 200. The pose may be determined from data obtained by laser scans, LIDAR, odometry, and/or any other sensor of the sensor unit 264. Further, in some cases, nodes may be positioned at substantially similar points, such as when the robot 202 passes the same location multiple times.
Each edge between nodes 602A-602D may have an associated probability function and/or probability distribution, illustrated as ranges 604A-604D for nodes 602A-602D, respectively. In some cases, the ranges 604A-604D may be calculated using a posterior probability distribution. In some cases, the ranges 604A-604D may be derived from a calculated mean position plus a covariance. In some cases, the probability distribution may be calculated using a Bayesian framework, where the uncertainty of each node may be stored in one or more data structures, such as matrices (e.g., covariance matrices). One of ordinary skill in the art will appreciate that the ranges 604A-604D may also be calculated using other statistical methods, such as cross-covariance, variance, chi-squared distributions, and/or any other statistical method used, for example, in simultaneous localization and mapping ("SLAM") and/or other techniques known in the art for determining pose. In some cases, the ranges 604A-604D may represent, at least in part, areas around each of the nodes 602A-602D within which the actual position of the robot 202 (e.g., illustrated as the robot indicator 302) relative to the graph 600 may lie. For example, node 602B may not actually be positioned as illustrated; rather, node 602B could be located virtually anywhere within range 604B, with a different pose (including location and/or orientation) anywhere within range 604B. In some cases, each position within any of the ranges 604A-604D may have an associated probability, with some positions being more likely than others. In some cases, due to compounding noise, sensor offsets, and other factors, successive ones of the ranges 604A-604D become larger moving away from the illustrated position of the robot indicator 302, representing greater uncertainty in the positioning of the nodes 602A-602D.
However, some factors such as identification of unique features, scan matching, calibration, external identification, etc., may actually reduce the size of one or more of the ranges 604A-604D as compared to a previous range in the ranges 604A-604D.
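The accumulation of uncertainty along the route can be illustrated with a one-dimensional simplification in which each edge between nodes contributes independent noise, so variances add (an assumption made for the sketch; as noted above, factors such as scan matching can shrink these ranges in practice).

```python
# 1-D sketch of growing positional uncertainty: each odometry edge adds
# its own noise variance, so nodes farther along the route are less
# certain. A simplification, not the disclosure's actual model.
def accumulate_variance(edge_variances):
    total, out = 0.0, []
    for v in edge_variances:
        total += v          # independent noise adds in variance
        out.append(total)
    return out

# Four edges with equal noise: uncertainty grows monotonically.
ranges = accumulate_variance([0.01, 0.01, 0.01, 0.01])
```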
Fig. 6B is a top view of a graph 650 that includes a continuation of the route 622 of fig. 6A, according to some embodiments of the present disclosure. As illustrated, the robot 202 may start at node 602A, as represented on the graph 650. Node 602A may be located relative to indicator 620, which may be the mapped/illustrated position of the home locator. As described with reference to fig. 6A, the robot 202 may map the nodes 602A-602D. However, one challenge that may arise is that errors may accumulate as the robot progresses along the route 622. For example, the ranges may continue to expand with distance from node 622A (whose position may be determined relative to a home locator such as indicator 620). For example, range 628 may be larger than range 604D, and the range 624 associated with node 622B may be larger still. The larger the ranges, the more ambiguous the locations of those nodes may be. Accordingly, optimizing the graph 650 using conventional SLAM and/or any other algorithm may be computationally expensive and/or inaccurate; the computation, time, and processing costs of attempting to find the optimization may be high.
In some embodiments, the constraining of the graph to begin and end at substantially the same location of block 554 of fig. 5 may be performed by the mapping and positioning unit 262 and/or the controller 254. Fig. 6C illustrates a constrained version of the graph 650 of fig. 6B, shown as graph 690, wherein the starting and ending locations are at the same and/or substantially the same position, according to some embodiments of the present disclosure. Graph 690 illustrates the nodes of graph 650, where nodes 622A and 622B are now constrained to be in substantially the same position. In some embodiments, the nodes 622A and 622B may be in the same and/or substantially the same location in order to constrain the graph 690 and the nodes therein to a graph that begins and ends at the same and/or substantially similar locations. In some cases, that the graph ends at the same and/or substantially the same location may be determined by detecting a home locator, as illustrated by indicator 620. In these cases, if the robot 202 detects the same home locator (e.g., represented by indicator 620 on graph 690) at the beginning and/or end of the route 622, such detection may indicate, at least in part, that the nodes 622A and 622B are in the same and/or substantially the same location.
In some embodiments, substantially similar locations may be predetermined, for example, based on one or more thresholds related to x-distance, y-distance, z-distance, angle, and the like. For example, there may be distance and/or angle thresholds, such as those based on standard units (e.g., inches, meters, degrees, and/or other standard units) or relative units (e.g., number of clicks, pixels, length, etc.). The threshold may be determined based at least in part on: resolution of sensors of sensor unit 264, noise of sensors of sensor unit 264, sensor offset of sensors of sensor unit 264, acceptable error in mapping/plotting, empirical evidence of performance, field of view of reference features (e.g., home locators and/or other objects), and/or other factors.
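A threshold test for "substantially the same location" of the kind described above might be sketched as follows; the tolerance values, function name, and parameters are illustrative assumptions, not values from this disclosure.

```python
import math

# Sketch: two poses (x, y, theta) are "substantially the same" when
# their distance and wrapped heading difference fall within thresholds.
def substantially_same(pose_a, pose_b, dist_tol=0.05,
                       angle_tol=math.radians(5)):
    dx, dy = pose_b[0] - pose_a[0], pose_b[1] - pose_a[1]
    # Wrap the heading difference into (-pi, pi] before comparing.
    d_angle = abs((pose_b[2] - pose_a[2] + math.pi) % (2 * math.pi) - math.pi)
    return math.hypot(dx, dy) <= dist_tol and d_angle <= angle_tol

same = substantially_same((0.0, 0.0, 0.0), (0.03, 0.0, math.radians(2)))
```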
Advantageously, this constraint may reduce ambiguity. Working from the known (and/or assumed) positions of nodes 622A and 622B, which are the starting and ending positions respectively, error can propagate in both directions. Thus, for example, range 626 is greatly reduced compared to the similar node of fig. 6B, and the largest ranges now occur elsewhere, such as range 628. Notably, this reduction in the ambiguity of node locations may further reduce the computation, time, and processing costs of optimizing graph 690, as compared to graph 650. Ranges 804A, 804B, and 804C will be discussed in further detail with reference to fig. 8.
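The effect of anchoring both ends of the route can be illustrated with a one-dimensional simplification: with the start and end constrained, each node's uncertainty is bounded by the nearer anchor, so the largest range falls mid-route rather than at the end (an assumption made for the sketch, not the disclosure's actual optimizer).

```python
# 1-D sketch of bidirectional error propagation under a start-equals-end
# constraint: variance accumulates forward from the start anchor and
# backward from the end anchor; each node keeps the smaller of the two.
def constrained_variances(edge_variances):
    n = len(edge_variances)
    forward = [sum(edge_variances[:i]) for i in range(n + 1)]
    backward = [sum(edge_variances[i:]) for i in range(n + 1)]
    return [min(f, b) for f, b in zip(forward, backward)]

# With the loop closed, uncertainty peaks mid-route, not at the end.
v = constrained_variances([0.01] * 4)
```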
In some cases, the confidence of locations may be determined by other methods, such as scan matching. For example, the robot 202 may use scan matching, which may include registering multiple scans (e.g., laser scans) taken at multiple sets of nodes to determine their relative positions. In some cases, scan matching may be used to determine a rigid-body transformation (e.g., comprising translation and/or rotation) that aligns a scan with a graph/map, a scan with another scan, and/or a graph/map with another graph/map.
For example, a robot sensing its environment from two nodes, x0 and x1, may obtain LIDAR scans z0 and z1. These LIDAR scans may capture views of the environment. For example, if some part of the environment is visible from both x0 and x1, then scan matching can be used to find a rigid transformation that projects the points of z1 so that they align with z0. In some forms, scan matching has been used in mapping algorithms such as SLAM.
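The rigid-body transformation that scan matching searches for can be sketched as a 2D rotation plus translation applied to scan points; this illustrates the transform only, not the matching search itself, and the names are assumptions.

```python
import math

def rigid_transform(points, dx, dy, dtheta):
    """Apply a 2-D rigid-body transform (rotation then translation) to
    scan points, as scan matching would to align z1 with z0."""
    c, s = math.cos(dtheta), math.sin(dtheta)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

# A 90-degree rotation plus a unit x-translation applied to one point.
aligned = rigid_transform([(1.0, 0.0)], 1.0, 0.0, math.pi / 2)
```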
In some cases, performing scan matching on extended scan groups determined from the scans at nodes, per block 556, may be performed by mapping and positioning unit 262 and/or controller 254. Fig. 7A is a process flow diagram of an example method 700 for forming an extended scan group in accordance with some embodiments of the present disclosure. Block 720 includes selecting a root node. The root node may include a node from which other nodes are viewed (e.g., their positions, distances, poses, angles, etc., and/or certainties are expressed relative to the root node). Block 722 includes finding possible overlaps in the measurements of nodes. Overlap between the measurements of nodes may occur where scans share common characteristics and/or were taken in close proximity. For example, overlapping scans may include scans of nodes that captured the same feature. For example, as the robot 202 travels, it may use sensor unit 264 to measure its environment. In some cases, such measurements may be made using a scanning LIDAR. Thus, the scanning LIDAR may measure objects within its range as the robot 202 travels. Based on the proximity of the nodes at which the robot 202 obtained the scans, individual scans may capture the same object, but at different points in time and/or space (e.g., a first scan may capture the object as the robot 202 approaches it, a second scan as the robot 202 draws near it, a third scan as the robot 202 passes it, a fourth scan as the robot 202 moves farther away, etc.). In this way, these overlapping measurements may capture different aspects of the same feature in the environment. Furthermore, these measurements may also be substantially close in time and/or space. For example, proximity may include nodes that are physically close to each other or within a predetermined number of nodes of each other in sequence.
Proximity may also include a time within a predetermined threshold time, such as a threshold time determined based at least in part on sensor resolution, capture time, speed of the robot 202, and so forth.
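One possible (hypothetical) realization of blocks 720 through 724 is sketched below, reducing "overlap" to shared feature identifiers within a node-index window. The window size, group size, and feature labels are assumptions for illustration, not part of the disclosure:

```python
# Group node scans that overlap, where overlap is simplified to sharing at
# least one observed feature while lying within a small index window of the
# root node (approximating the proximity notion described above).
def form_extended_scan_groups(node_features, group_size=3, window=4):
    """node_features: list of sets of feature IDs seen at each node, in
    travel order. Returns lists of node indices forming extended groups."""
    groups, used = [], set()
    for root in range(len(node_features)):          # block 720: pick a root
        if root in used or not node_features[root]:
            continue
        group = [root]
        for j in range(root + 1, min(root + window, len(node_features))):
            if j in used:
                continue
            if node_features[root] & node_features[j]:  # block 722: overlap
                group.append(j)
            if len(group) == group_size:
                break
        if len(group) == group_size:                # block 724: form group
            groups.append(group)
            used.update(group)
    return groups

# Nodes 0-2 all see the "pillar" feature; node 3 sees nothing distinctive.
scans = [{"pillar"}, {"pillar"}, {"pillar", "door"}, set(),
         {"door"}, {"door"}]
print(form_extended_scan_groups(scans))  # [[0, 1, 2]]
```

Note that nodes 4 and 5 share a feature but cannot reach the group size of three, so no group is formed for them, consistent with the text's point that not every node ends up in a group.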
Block 724 includes forming groups of nodes based at least in part on the overlap to form extended scan groups. Each group may include a predetermined number of nodes, such as 2, 3, 4, 5, or more nodes. The number of nodes may be selected based at least in part on the desired accuracy of the scan matching, the data size of each scan, computational resources, and the like. In many cases, three nodes may be used to balance the additional information provided by having more scans against the complexity of performing scan matching.
In some cases, not every node will be in a group. Forming groups may additionally include grouping only nodes that contain and/or are near unique features. Unique features may include features that are distinctive and/or identifiable. For example, an entire hallway with nearly identical walls may not contain unique features. By contrast, unique features may include objects, obstacles, fixtures, and/or any aspect that enables a location to be distinguished from at least some other locations. Advantageously, if only groups with unique features are used, the robot 202 may further reduce consumption of system resources by focusing on scan matches that provide useful and/or clearer information. Advantageously, relying on unique features can also reduce false positives.
In some cases, the same node may be used in multiple extended scan groups; that is, different extended scan groups may include the same node. However, in some cases, each node may be limited to being in only one extended scan group. Thus, in various implementations, determining whether a node belongs to a particular group may depend on one or more factors. The relative weight given to each of these factors may be adjusted for different implementations based at least in part on the environment, the unique features (and/or lack thereof), empirical performance metrics, and the like. The first factor may include the number of measurements of a unique feature captured in the scan of a node. For example, if more measurements of a unique feature are captured in the scan of a node, that scan will be more likely to be included in an extended scan group for that feature than scans of other nodes having fewer measurements of the feature. The second factor may include proximity to other nodes. Nodes that are close to each other will more likely be included together in an extended scan group. Proximity may be based at least in part on distance and/or angle thresholds determined based at least in part on sensor resolution, node density (e.g., how many nodes are in a space, where proximity may be defined more tightly when there are more nodes in a given space than when there are fewer), empirical results, and so forth. The third factor may include the presence of other unique features within a scan. For example, a scan at a node may contain measurements of multiple unique features. To optimize matching, it may be advantageous to have more extended scan groups.
Thus, when more scans are needed to complete extended scan groups, it may be preferable to assign scans capturing multiple unique features to sets of scans that would otherwise have too few scans to form an extended scan group. In this case, the robot 202 may be calibrated so that more extended scan groups are likely to be generated. On the other hand, in other cases, having larger extended scan groups may be beneficial to speed up computation. A fourth factor may include the size of the extended scan group. As previously described, each extended scan group may have a predetermined size. Thus, determining whether a particular scan of a node is grouped with the scans of other nodes may depend on the size of the extended scan group.
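The four factors above might be combined into a single membership score, for example as a weighted sum. The weights, normalizations, and per-factor scoring formulas below are hypothetical; per the disclosure, an implementation would tune them empirically:

```python
# Hypothetical weighted score for whether a node's scan should join a
# candidate extended scan group. Factors mirror the four described above:
# 1) unique-feature hits, 2) proximity, 3) competing unique features,
# 4) remaining room in the group.
def group_membership_score(n_unique_feature_hits, distance_to_group,
                           n_other_unique_features, current_group_size,
                           target_group_size=3,
                           weights=(0.4, 0.3, 0.1, 0.2)):
    w1, w2, w3, w4 = weights
    s1 = min(n_unique_feature_hits / 10.0, 1.0)        # more hits, higher score
    s2 = 1.0 / (1.0 + distance_to_group)               # closer, higher score
    s3 = 1.0 / (1.0 + n_other_unique_features)         # fewer rivals, higher
    s4 = 1.0 - current_group_size / target_group_size  # emptier group, higher
    return w1 * s1 + w2 * s2 + w3 * s3 + w4 * s4

# A nearby scan with many hits on the group's feature outscores a distant
# scan with few hits and competing features.
near = group_membership_score(8, 0.5, 0, 1)
far = group_membership_score(1, 5.0, 2, 1)
print(near > far)  # True
```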
In some cases, scans may be matched into extended scan groups using recursive and/or iterative grouping algorithms. In some cases, each candidate extended scan group may be evaluated against one or more performance criteria, which may be based on one or more of the factors described above. In some cases, algorithms such as minimum cut, other graph-theoretic partitioning methods, and/or other grouping algorithms may be implemented to divide the nodes of the graph, and their associated scans, into extended scan groups.
Once the groups are identified, the location and/or orientation of the scans may be determined using a scan matcher (e.g., as part of mapping and positioning unit 262). Fig. 7B is a conceptual diagram of a scan matching transformation, according to some embodiments of the present disclosure. In some cases, multiple nodes may be supplied to the scan matcher, forming an extended scan group. The extended scan group may be compared to other extended scan groups to determine relative translation (and thus position) and/or rotation. Advantageously, extended scan groups may provide more context (e.g., information, data, etc.) and/or improve one or more of the following: the speed of graph/map optimization, the accuracy of the graph/map relative to ground truth, and the computational and other resource costs of optimization. If a large scan group has been matched to a location, the robot 202 may have relatively high confidence in the locations of the nodes involved in the scan match. In some cases, such as when an extended scan group matches a previously recorded scan, the confidence may be at least that of the previous scan. In some cases, there may be greater confidence if the extended scan group and the previous scan were recorded by the robot 202 as having substantially similar positions and/or orientations. Further, in some cases, known information may be included, where the locations of features in the graph are known a priori. Thus, such information may further increase confidence.
The scan match may identify a location based at least in part on an overlap of the extended scan group with another extended scan group. The overlap may be determined by algorithms known in the art, such as correlation, iterative comparisons, intersection regions, correlations (e.g., point-to-point, point-to-feature, feature-to-feature, etc.), regression algorithms, nearest point algorithms (e.g., iterative closest point ("ICP")), nearest neighbor search algorithms, and so forth.
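Of the algorithms listed, the iterative closest point method can be sketched briefly for 2D point sets. This minimal version (brute-force nearest neighbors, no outlier rejection, fixed iteration count) is illustrative only; production implementations typically use a k-d tree and robust correspondence filtering:

```python
import numpy as np

# Minimal iterative closest point ("ICP") sketch: alternately re-estimate
# correspondences by nearest neighbor, then solve the closed-form rigid
# update (SVD/Kabsch) for that correspondence set.
def icp_2d(source, target, iterations=20):
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        # Brute-force nearest-neighbor correspondences.
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        # Closed-form rigid update aligning src to its matched points.
        cs, cm = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (matched - cm))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cm - R @ cs
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

target = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
source = target + np.array([0.2, -0.1])   # small translation offset
R, t = icp_2d(source, target)
print(np.allclose(t, [-0.2, 0.1], atol=1e-6))  # True
```

With this small offset the nearest-neighbor step already pairs each point with its true counterpart, so ICP converges in one update; larger initial errors are where the iteration earns its keep (and where it can fall into local minima).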
For example, frame 702A illustrates nodes 708A, 710A, and 712A, as well as features 704A and 706A detected from the scans at nodes 708A, 710A, and 712A. Transformation function 714 illustrates a rigid-body transformation that may perform scan matching of frame 702A. Transformation function 714 may apply translational and/or rotational movement to the scans/nodes in frame 702A. Thus, frame 702B illustrates the translated and rotated versions of nodes 708A, 710A, and 712A as nodes 708B, 710B, and 712B, and the translated and rotated versions of detected features 704A and 706A as detected features 704B and 706B. In some embodiments, transformation function 714 may not be a rigid-body transformation, in which case the relative positions of nodes 708A, 710A, and 712A and/or one or more of the detected features 704A and 706A may change relative to each other.
In some cases, determining the confidence level for the location of each node according to block 558 of fig. 5 may be performed by mapping and locating unit 262 and/or controller 254. Fig. 8 illustrates example results of scan matching of the graphically extended scan group illustrated in fig. 6C, in accordance with some implementations of the present disclosure. The extended scanning group 800 includes nodes 802A, 802B, and 802C. Because the scan matcher identifies the locations of the nodes 802A, 802B, and 802C, their position determinations have increased confidence. Thus, ranges 804A, 804B, and 804C may be narrowed, which reflects greater confidence. Thus, the robot 202 may determine a confidence based at least in part on scan matches, propagation of errors (e.g., as reflected by the ranges 602A, 602B, 602C, 602D, 626, and/or 628, and/or other ranges calculated by the robot 202), and/or other factors.
After determining the confidence level, forming and solving the cost function to determine the location of each node according to block 560 of fig. 5 may be performed by mapping and positioning unit 262 and/or controller 254. In some cases, one or more nodes of a graph may have conceptual springs (e.g., connections) that characterize the spatial relationships between the nodes and the confidence of the relationships (e.g., in covariance and/or other statistical classification forms). The confidence may be based at least in part on the ranges illustrated in fig. 6A-6C and 8.
Fig. 9 illustrates springs to/from node 602A in graph 600 of fig. 6A, according to some embodiments of the present disclosure. As illustrated, node 602A is connected to a plurality of nodes, including nodes 602B, 602C, and 602D, by springs 950AB, 950AC, and 950AD. Node 602A may also be connected to any and/or all of the other nodes in graph 600. However, in some cases, not all nodes in the graph may be connected. Optionally, some nodes may be disconnected in order to prevent and/or reduce the generation of unrealistic maps. For example, in some embodiments, a node may be selectively disconnected based at least in part on an error exceeding a predetermined threshold uncertainty, falling below a predetermined threshold uncertainty, and/or any other predetermined constraint and/or limit.
Each of springs 950AB, 950AC, and 950AD may include one or more of the following: (1) relative position between node 602A and the corresponding node at the other end (e.g., node 602D for spring 950AD, node 602C for spring 950AC, and node 602B for spring 950 AB), and/or (2) confidence in the relative positioning.
For example, a relative position may be described in a number of ways. In two dimensions (e.g., as obtained by 2D scans), relative positions may include horizontal, vertical, and/or angular components, e.g., expressed as Δx, Δy, and Δθ. Other ways of representing relative positions are also contemplated, such as those using different coordinate systems and/or dimensions. For example, relative positions may be represented by Δx, Δy, and Δz in three dimensions, by Δθ for orientation, and/or by Δr, Δθ, and Δφ in polar or spherical coordinates, etc. One of ordinary skill in the art will appreciate that there are many ways to describe relative positions and/or orientations in two dimensions, three dimensions, four dimensions, and/or any number of dimensions. The confidence may be expressed as a covariance matrix. For example, the covariance matrix may include the covariances between the elements of the positions of the nodes (e.g., between nodes 602A and 602D for spring 950AD, between nodes 602A and 602C for spring 950AC, and between nodes 602A and 602B for spring 950AB). In some embodiments, confidence may also be represented by other statistical measures known in the art, such as variance, sample variance, correlation, and the like. The confidence may depend in part on ranges, such as the ranges calculated with respect to figs. 6A-6C and 8.
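A spring carrying a relative pose and a covariance-based confidence might be recorded as in the following sketch. The class name and fields are illustrative, and the sketch simplifies by treating (Δx, Δy, Δθ) in a shared frame rather than composing full SE(2) poses:

```python
import numpy as np

# A "spring" (constraint) between two nodes: a measured relative pose
# (dx, dy, dtheta) plus a 3x3 covariance expressing confidence. The inverse
# covariance (information matrix) plays the role of spring stiffness:
# tighter covariance means a stiffer spring in the cost function.
class Spring:
    def __init__(self, node_a, node_b, dx, dy, dtheta, covariance):
        self.node_a, self.node_b = node_a, node_b
        self.rel = np.array([dx, dy, dtheta])
        self.cov = np.asarray(covariance)
        self.information = np.linalg.inv(self.cov)

    def residual(self, pose_a, pose_b):
        # Simplification: relative pose taken as a plain difference.
        return (np.asarray(pose_b) - np.asarray(pose_a)) - self.rel

    def cost(self, pose_a, pose_b):
        # Mahalanobis-style quadratic cost (spring potential energy).
        r = self.residual(pose_a, pose_b)
        return float(r @ self.information @ r)

s = Spring("602A", "602D", 1.0, 0.0, 0.0, np.diag([0.01, 0.01, 0.005]))
print(s.cost([0, 0, 0], [1.0, 0.0, 0.0]))      # 0.0: poses match measurement
print(s.cost([0, 0, 0], [1.1, 0.0, 0.0]) > 0)  # True: stretched spring
```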
In some cases, the springs may define a cost function, wherein optimizing the graph includes minimizing the potential energy of the system defined by the springs. In this way, finding the best graph may also be described as minimizing a cost function and/or finding the substantially most probable position of each node (e.g., based on a probability distribution). Solving the cost function may utilize least squares regression methods, non-linear least squares optimization, perturbation theory, weighted least squares methods, Kalman filters, and/or any other method known in the art for solving cost functions. Generally analogous to physical springs, an equilibrium configuration is one in which the net force on each node is substantially equal to zero and/or at a local minimum. The springs may be represented as weights and/or orientations.
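The spring-energy analogy can be made concrete with a toy one-dimensional example: three nodes, two odometry-like springs, and one slightly inconsistent loop-closure spring, minimized by plain gradient descent until the net force on each node is near zero. Real pose-graph solvers use the sparse nonlinear least-squares methods named above; all values here are illustrative:

```python
import numpy as np

# Each entry: (node i, node j, measured x_j - x_i, weight = inverse variance).
# The third spring disagrees slightly with the first two, so the optimizer
# must distribute the error according to the weights.
measurements = [
    (0, 1, 1.0, 100.0),
    (1, 2, 1.0, 100.0),
    (0, 2, 2.5, 25.0),      # slightly inconsistent loop-closure edge
]

x = np.zeros(3)             # initial node positions
for _ in range(2000):
    grad = np.zeros(3)
    for i, j, z, w in measurements:
        r = (x[j] - x[i]) - z        # spring stretch
        grad[j] += 2 * w * r         # force pulls the nodes toward the
        grad[i] -= 2 * w * r         # measured separation
    grad[0] = 0.0                    # anchor node 0 (removes gauge freedom)
    x -= 0.001 * grad

print(np.round(x, 3))  # approx [0, 1.083, 2.167]
```

The weak loop-closure spring drags the solution slightly off the pure odometry answer of [0, 1, 2], in proportion to the relative weights, which is exactly the balancing behavior the spring metaphor describes.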
Once the position of the nodes in the graph (e.g., graph 600, 650, and/or 690, and/or any graph generated by robot 202) is determined, robot 202 may construct a map (e.g., map 300, 500A, and/or 500B, and/or any map generated by robot 202), for example, by processing the graph with mapping and positioning unit 262.
Rendering the map from the scans of the located nodes, per block 562 of fig. 5, may be performed by mapping and positioning unit 262 and/or controller 254. For example, fig. 10 is a diagram of the pixels of a map being constructed from scans, according to some embodiments of the present disclosure. For example, matrix 1000 includes cells corresponding to locations on a map. In some cases, each cell may correspond to one pixel. Nodes 1004A and 1004B in cells of matrix 1000 indicate, at least in part, the relative positions of nodes 1004A and 1004B. The locations may be based on standard units relative to a reference point and/or coordinate system (e.g., centimeters, meters, and/or any other standard measurement) and/or relative units (e.g., pixels, number of clicks, and/or any other relative measurement). Nodes 1004A and 1004B are provided for purposes of illustration. The locations, number, and/or spacing of nodes may depend on the scans the robot 202 obtains as it travels through the environment.
Each scan may provide information indicative, at least in part, of the relative positions of objects around the robot 202 at a node. In some cases, a scan (e.g., from a scanning LIDAR) may provide measurements of distances to objects. For example, in some cases, ray tracing may be used, in which the positions of objects along rays extending from the robot 202 across space are used to identify the positions and/or orientations of objects. In matrix 1000, dashed lines such as line 1006 indicate, at least in part, such rays. When an object is detected along a ray, a cell such as cell 1002 may be marked. For purposes of visual illustration, marker 1008 is an "X" that indicates, at least in part, that an object was detected at the location corresponding to cell 1002.
In contrast, if a ray passes through a location without an object, the corresponding cell may be marked with an "O," as illustrated with marker 1010. Cells corresponding to locations through which no ray passes may be left unmarked. Other designations are contemplated; one of ordinary skill in the art will appreciate that cells may be labeled in any desired manner. In addition, more information may be associated with each cell, for example by associating each cell with another data structure and/or by using a data structure with more dimensions. For example, one or more cells may also be associated with additional information, such as locations along the route of the robot 202, the performance of actions and/or action-specific tasks, characteristics of the environment (e.g., water, carpet, reflective surfaces, and/or any other description), prohibited areas, and so forth. Such additional information may be derived from the scans (e.g., from the sensors of sensor unit 264) and/or derived separately and inserted into the map. Such additional information may be added at this stage and/or later, after map construction.
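The ray marking described above can be sketched with Bresenham's line algorithm supplying the traversed cells. The grid representation, robot pose, and hit location below are illustrative assumptions:

```python
# Cells a sensor ray passes through are marked "O" (observed empty); the
# cell where the object is detected is marked "X". Bresenham's integer
# line algorithm enumerates the traversed cells.
def bresenham(x0, y0, x1, y1):
    cells, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

grid = {}
robot, hit = (0, 0), (4, 2)            # node location and detected object
ray = bresenham(*robot, *hit)
for cell in ray[:-1]:
    grid.setdefault(cell, "O")         # free space along the ray
grid[ray[-1]] = "X"                    # object detected at the endpoint

print(grid[(4, 2)], grid[(0, 0)])  # X O
```

Cells never touched by any ray stay absent from `grid`, matching the "unmarked" state in the text.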
Thus, through the multiple ray traces observable from the scans, the positions of the objects seen by the robot 202 at the various nodes can be mapped. The accumulated scans of the nodes may together form the map. In some cases, there may be overlap between the scans at different nodes. In some cases, such overlapping scans may agree on the positions of objects. However, in some cases, overlapping scans may disagree and provide conflicting information. In some cases, these conflicts may be caused by measurement errors and/or moving objects. Accordingly, probabilities may be assigned to cells. In some cases, the probability may be calculated as the number of scans in which an object was detected (or, in some cases, in which no object was detected) divided by the total number of scans. Other statistics may also be used, such as weighted averages, statistical tests, variances, and/or any other statistical method that reflects the uncertainty of the conflicting information. In some cases, the presence and/or absence of an object may be determined in the map based on a predetermined statistical threshold. For example, if the predetermined statistical threshold is met (exceeded or not exceeded, depending on how it is set), the location of the object is filled into the generated map (e.g., map 300, 500A, or 500B, and/or any other map generated by robot 202). For example, where the predetermined statistical threshold is a percentage, if the percentage of scans that identify an object at a location exceeds a predetermined percentage (e.g., 60, 70, 80, 90, and/or any other percentage based on the desired certainty), the location will be identified as corresponding to an object in the map.
As another example, in the event that no object is identified as being present, if a percentage of scans in which no object is identified in a location exceeds a predetermined percentage (e.g., 60, 70, 80, 90, and/or any other percentage), then the location will be identified as not corresponding to an object in the map (e.g., empty space). Similar statistical analysis may be used to identify any of the other/additional information described in this disclosure. One of ordinary skill in the art will appreciate that the predetermined statistical threshold may be defined in several ways, whether positive (e.g., the presence of an object and/or characteristic) or negative (e.g., the absence of an object and/or characteristic). Advantageously, moving objects that may only be present in some scans but not others may be filtered out using data statistics.
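The percentage-threshold rule can be sketched as follows; the 80% threshold and the state labels are illustrative choices, not values from the disclosure:

```python
# A cell is declared occupied only if the fraction of scans detecting an
# object there meets the threshold; declared free only if the fraction of
# scans seeing it empty meets the threshold. Anything in between stays
# uncertain, which filters out moving objects seen in only a few scans.
def cell_state(hits, total_scans, threshold=0.8):
    if total_scans == 0:
        return "unknown"       # never observed by any ray
    ratio = hits / total_scans
    if ratio >= threshold:
        return "occupied"
    if (1 - ratio) >= threshold:
        return "free"
    return "uncertain"

print(cell_state(9, 10))   # occupied  (e.g., a static wall)
print(cell_state(1, 10))   # free      (e.g., a person who walked by once)
print(cell_state(5, 10))   # uncertain (conflicting observations)
```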
In some embodiments, processing may also be performed on the map and/or graph. For example, in some cases, a pixel's state may indicate, at least in part, whether the robot 202 can travel through an area. In some cases, certain locations may not have been adequately observed during demonstration, objects in the environment may have moved, and/or there may be uncertainty in the measurements. Using user interface unit 272, a user may edit the map and/or graph to add additional information. For example, the user may edit the map to identify areas the robot 202 may traverse and/or areas the robot 202 may not traverse.
The robot 202 may learn from user input. For example, the robot 202 may store in memory 252 a library including one or more of: (i) original maps and/or graphs; (ii) maps and/or graphs incorporating user input; and (iii) other maps and/or graphs. In some cases, the library may contain approximately 1, 5, 10, 100, 1,000, 10,000, 100,000, 1,000,000, 10,000,000, or any number of maps and/or graphs. In some embodiments, the library may be stored on a network (e.g., cloud, server, etc.) and may not be kept within memory 252. The library may be used to train the robot 202 (e.g., based on a machine learning algorithm) to determine one or more associations between original maps/graphs and maps/graphs with user input. In this way, the robot 202 may learn the changes that user input produced in the library and make substantially similar changes when it encounters substantially similar situations.
In some embodiments, the robot 202 may also change existing graphics and/or maps during subsequent walks in the environment. For example, the robot 202 may collect additional data and/or scans in the new graphic and/or map and use the additional data and/or scans to supplement the existing graphic and/or map.
For example, the robot 202 may generate a first graph and/or map beginning at a first home locator and traveling along a first route during a first time period. During a second, subsequent time period, the robot 202 may travel along a route substantially similar to the first route, beginning at the first home locator, and generate a second graph and/or map, this time collecting data (e.g., scans) that was not collected during the first time period. Advantageously, because the robot 202 began at the first home locator in both the first and second time periods, the robot 202 may use its location relative to that home locator to tie the scans of the first graph and/or map and the second graph and/or map together. In this way, the robot 202 may add coverage measured in the second graph and/or map to the first graph and/or map, and/or vice versa.
In some embodiments, when different home locators are used as starting points for the first graphic and/or map and the second graphic and/or map, the robot 202 may transform them into a common coordinate system in order to more easily combine the scans of each.
Fig. 11 is a process flow diagram of an example method 1100 for a robot to generate a map, according to some embodiments of the present disclosure. Block 1102 includes the robot traveling in the environment. Block 1104 includes forming a graph including a plurality of nodes, where each node corresponds to a scan obtained by a sensor of the robot at a location in the environment. Block 1106 includes constraining the graph to begin and end at substantially similar locations. Block 1108 includes performing scan matching on the extended scan group determined at least in part from the group of the plurality of nodes. Block 1110 includes associating a range of possible locations with each of the plurality of nodes based at least in part on the scan matching. Block 1112 includes determining a confidence level associated with each possible location range. Block 1114 includes optimizing the graph to find possible locations of the plurality of nodes based at least in part on the confidence level. Block 1116 includes generating a map from the optimized graph.
Fig. 12 is a process flow diagram of an example method 1200 for operating a robot, according to some embodiments of the present disclosure. Block 1202 includes causing a sensor to generate a scan of an environment at a plurality of nodes, wherein each node of the plurality of nodes is associated with a location. Block 1204 includes forming a graph of a plurality of nodes based on the generated scan. Block 1206 includes determining an extended scan group based at least in part on a scan associated with a group of a plurality of nodes. Block 1208 includes performing scan matching on the extended scan group.
As used herein, a computer and/or computing device may include, but is not limited to, a personal computer ("PC") and a minicomputer, whether desktop, laptop, or otherwise, a mainframe computer, workstation, server, personal digital assistant ("PDA"), handheld computer, embedded computer, programmable logic device, personal communicator, tablet computer, mobile device, portable navigation device, J2ME equipped device, cellular telephone, smart phone, personal integrated communication or entertainment device, and/or any other device capable of executing a set of instructions and processing an input data signal.
As used herein, a computer program and/or software can include any sequence of human- or machine-recognizable steps that perform a function. Such computer programs and/or software can be rendered in any programming language or environment, including, for example, C/C++, C#, Fortran, COBOL, MATLAB™, PASCAL, Python, assembly language, markup languages (e.g., HTML, SGML, XML, and VoXML), and the like, as well as object-oriented environments, such as the Common Object Request Broker Architecture ("CORBA"), JAVA™ (including J2ME, Java Beans, etc.), binary runtime environments (e.g., BREW), and the like.
As used herein, a connection, link, transport channel, delay line, and/or wireless may include a causal link between any two or more entities (whether physical or logical/virtual) that enables the exchange of information between the entities.
It will be appreciated that while certain aspects of the disclosure have been described in terms of a particular sequence of steps of a method, these descriptions are merely illustrative of the broader methods of the disclosure, and the sequence may be modified as required by the particular application. In some cases, certain steps may become unnecessary or optional. In addition, certain steps or functionality may be added to the disclosed embodiments, or the order of execution of two or more steps may be changed. All such variations are considered to be encompassed within the disclosure disclosed and claimed herein.
While the above detailed description has shown, described, and pointed out novel features of the disclosure as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the disclosure. The foregoing description is of the best mode presently contemplated for carrying out the present disclosure. This description is in no way meant to be limiting, but should be taken as illustrative of the general principles of the disclosure. The scope of the present disclosure should be determined with reference to the claims.
While the disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The present disclosure is not limited to the disclosed embodiments. Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed disclosure, from a study of the drawings, the disclosure, and the appended claims.
It should be noted that the use of particular terminology when describing certain features or aspects of the disclosure should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the disclosure with which that terminology is associated. Terms and phrases used in this application, and variations thereof, particularly in the appended claims, should be construed as open ended as opposed to limiting unless otherwise expressly stated. As examples of the foregoing, the term "including" should be read to mean "including, without limitation," "including but not limited to," or the like; the term "comprising" as used herein is synonymous with "including," "containing," or "characterized by," and is inclusive or open-ended and does not exclude additional, unrecited elements or method steps; the term "having" should be interpreted as "having at least"; the term "for example" should be interpreted as "for example, and without limitation"; the term "includes" should be interpreted as "includes but is not limited to"; the term "example" is used to provide illustrative instances of the item in discussion, not an exhaustive or limiting list thereof, and should be interpreted as "example, but without limitation"; adjectives such as "well-known," "conventional," "standard," and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass well-known, conventional, or standard technologies that may be available or known now or at any time in the future; and the use of terms such as "preferably," "preferred," "desired," or "desirable," and words of similar meaning should not be understood as implying that certain features are critical, essential, or even important to the structure or function of the disclosure, but instead as merely intended to highlight
alternative or additional features that may or may not be utilized in a particular embodiment. Likewise, a group of items linked with the conjunction "and" should not be read as requiring that each and every one of those items be present in the grouping, but rather should be read as "and/or" unless expressly stated otherwise. Similarly, a group of items linked with the conjunction "or" should not be read as requiring mutual exclusivity among that group, but rather should be read as "and/or" unless expressly stated otherwise. The terms "about" or "approximately" and the like are synonymous and are used to indicate that the value modified by the term has an understood range associated therewith, where the range may be ± 20%, ± 15%, ± 10%, ± 5%, or ± 1%. The term "substantially" is used to indicate that a result (e.g., a measured value) is close to a target value, where close may mean, for example, that the result is within 80% of the value, within 90% of the value, within 95% of the value, or within 99% of the value. Also, as used herein, "defining" or "determining" may include "predefined" or "predetermined" and/or otherwise determined values, conditions, thresholds, measurements, and the like.
Claims (20)
1. A robot, comprising:
a sensor configured to obtain a scan of an environment at a plurality of nodes, wherein each node is associated with a location; and
a mapping and positioning unit configured to:
forming a graph of the nodes based at least in part on the obtained scans;
determining an extended scan group based at least in part on the group of the plurality of nodes; and
performing scan matching on the extended scan group.
2. The robot of claim 1, wherein the mapping and positioning unit is further configured to constrain the graph to begin and end at substantially similar locations.
3. The robot of claim 1, wherein the mapping and positioning unit is further configured to determine a confidence associated with the location of each node.
4. The robot of claim 3, wherein the mapping and positioning unit is further configured to generate a cost function based at least in part on the confidence and location of each node, wherein the mapping and positioning unit determines the location of each node in the graph by solving for a minimum of the cost function.
5. The robot of claim 1, further comprising a processor configured to render a map from the graph, the map rendered based at least in part on ray tracing of the scans.
6. The robot of claim 1, further comprising an actuator configured to move the robot between positions.
7. The robot of claim 1, further comprising a user interface configured for user control of the robot.
8. The robot of claim 1, further comprising a navigation unit configured to autonomously navigate the robot.
9. A non-transitory computer readable storage device having a plurality of instructions stored thereon, the instructions being executable by a processing device to operate a robot, the instructions being configured to, when executed by the processing device, cause the processing device to:
causing a sensor to generate a scan of an environment at a plurality of nodes, wherein each node of the plurality of nodes is associated with a location;
forming a graph of the plurality of nodes based on the generated scans;
determining an extended scan group based at least in part on a scan associated with a group of the plurality of nodes; and
performing scan matching on the extended scan group.
10. The non-transitory computer-readable storage device of claim 9, further comprising one or more instructions that, when executed by the processing device, further cause the processing device to constrain the graph to begin and end at substantially similar locations.
11. The non-transitory computer-readable storage device of claim 9, wherein each extended scan group comprises three or more scans.
12. The non-transitory computer-readable storage device of claim 9, further comprising one or more instructions that, when executed by the processing device, further cause the processing device to determine a cost function based at least in part on a confidence associated with the location of each node.
13. The non-transitory computer-readable storage device of claim 12, further comprising one or more instructions that, when executed by the processing device, further cause the processing device to generate a map based at least in part on minimizing the cost function.
14. A method for a robot to generate a map, comprising:
the robot traveling in an environment;
forming a graph comprising a plurality of nodes, wherein each node corresponds to a scan obtained by a sensor of the robot at a location in the environment;
constraining the graph to begin and end at substantially similar locations;
performing scan matching on an extended scan group determined at least in part from a group of the plurality of nodes;
associating a range of possible locations with each of the plurality of nodes based at least in part on the scan matching;
determining a confidence level associated with each possible location range;
optimizing the graph to find possible locations of the plurality of nodes based at least in part on the associated confidence levels; and
generating the map from the optimized graph.
15. The method of claim 14, wherein the generating of the map comprises ray tracing with the scans obtained by the sensors, the scans being associated with the plurality of nodes.
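The ray tracing in claim 15 can be illustrated with an occupancy grid. This sketch is an assumption-laden example, not the claimed rendering: each scan endpoint is traced from the node's pose with a Bresenham-style line stepper, marking traversed cells free and the hit cell occupied; the grid size and symbols are invented.

```python
# Illustrative occupancy-grid rendering by ray tracing scan endpoints from
# node poses (a common way to turn an optimized graph plus scans into a
# map; the grid size and simple line stepper here are assumptions).

def trace_ray(x0, y0, x1, y1):
    """Integer cells from (x0, y0) to (x1, y1), Bresenham-style."""
    cells, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err, x, y = dx + dy, x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err, x = err + dy, x + sx
        if e2 <= dx:
            err, y = err + dx, y + sy

def render(poses_and_scans, size=8):
    """Mark traversed cells free ('.') and scan endpoints occupied ('#')."""
    grid = [["?"] * size for _ in range(size)]
    for (px, py), endpoints in poses_and_scans:
        for (ex, ey) in endpoints:
            ray = trace_ray(px, py, ex, ey)
            for (cx, cy) in ray[:-1]:
                grid[cy][cx] = "."      # free space along the beam
            grid[ey][ex] = "#"          # obstacle at the hit point
    return grid

# One node at (1, 1) whose scan hit obstacles at (4, 1) and (1, 4).
grid = render([((1, 1), [(4, 1), (1, 4)])])
print("\n".join("".join(row) for row in grid))
```

Cells never crossed by a beam stay unknown ('?'), which is why the map reflects only what the sensor actually observed along each traced ray.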
16. The method of claim 14, wherein optimizing the graph comprises:
generating a cost function based at least in part on: (1) a relative position of each of the plurality of nodes, and (2) the confidence level associated with each of the plurality of nodes; and
solving for a minimum value of the cost function.
17. The method of claim 14, wherein constraining the graph to begin and end at the substantially similar location further comprises constraining the graph to begin and end within sight of a home locator.
18. The method of claim 14, wherein the robot traveling in the environment further comprises traveling under user control.
19. The method of claim 14, wherein the associated confidence level further comprises use of a probability distribution indicating, at least in part, the probability that a given node is at a given location.
20. The method of claim 14, wherein determining the extended scan group based at least in part on the group of the plurality of nodes further comprises:
selecting a root node as a reference position;
finding possible overlaps in measurements of unique features in scans obtained at other nodes; and
grouping the other nodes into an extended scan group based at least in part on an overlap of unique features of the scans at those respective other nodes.
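The three steps of claim 20 can be sketched as follows. This is a hypothetical illustration, not the claimed procedure: feature identifiers, the greedy single pass, and the overlap threshold `min_overlap` are all invented for the example.

```python
# Hypothetical sketch of forming an extended scan group: pick a root node
# as the reference, then admit other nodes whose scans share enough unique
# features with the group so far. Feature names and the threshold are
# invented; the greedy pass is an assumption, not the claimed method.

def build_group(scans_by_node, root, min_overlap=2):
    """scans_by_node: node id -> set of unique feature ids seen in its scan."""
    group = [root]                            # step 1: root node as reference
    group_features = set(scans_by_node[root])
    for node, features in scans_by_node.items():
        if node == root:
            continue
        # Step 2: look for overlap of unique features with the group's scans.
        if len(group_features & features) >= min_overlap:
            group.append(node)                # step 3: group overlapping nodes
            group_features |= features
    return group

scans = {
    0: {"corner_a", "door", "pillar"},
    1: {"door", "pillar", "shelf"},      # overlaps the root on two features
    2: {"window", "plant"},              # no overlap: left out of the group
    3: {"shelf", "pillar", "corner_b"},  # overlaps once node 1 has joined
}
print(build_group(scans, root=0))  # → [0, 1, 3]
```

Note that node 3 only qualifies because node 1 joined first and contributed "shelf" to the group's feature set; the group grows transitively through shared features rather than requiring every node to overlap the root directly.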
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/340,807 | 2016-11-01 | 2016-11-01 | Systems and methods for robotic mapping |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK40011873A true HK40011873A (en) | 2020-07-17 |
| HK40011873B HK40011873B (en) | 2023-07-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR102455845B1 (en) | Robot mapping system and method | |
| US12393196B2 (en) | Systems and methods for training a robot to autonomously travel a route | |
| US9329598B2 (en) | Simultaneous localization and mapping for a mobile robot | |
| Li et al. | Localization and navigation for indoor mobile robot based on ROS | |
| US11340630B2 (en) | Systems and methods for robust robotic mapping | |
| US12085942B1 (en) | Preventing regressions in navigation determinations using logged trajectories | |
| Hu et al. | A small and lightweight autonomous laser mapping system without GPS | |
| Nabbe et al. | Opportunistic use of vision to push back the path-planning horizon | |
| HK40011873A (en) | Systems and methods for robotic mapping | |
| Liu et al. | Processed rgb-d slam using open-source software | |
| HK40011873B (en) | Systems and methods for robotic mapping | |
| Yuan et al. | A review of LiDAR simultaneous localization and mapping techniques for multi-robot | |
| Fu | Design and development of an FPGA-based hardware accelerator for corner feature extraction and genetic algorithm-based SLAM system | |
| Liu et al. | Processed RGB-D slam based on hog-man algorithm | |
| Neto | AI-Enhanced Indoor Localization System for Autonomous Mobile Robots | |
| Lyu | Hybrid Aerial and Ground-based Mobile Robot for Retail Inventory | |
| CN116627066A (en) | An intelligent service robot for complex environments | |
| HK40005488B (en) | Systems and methods for training a robot to autonomously travel a route | |
| HK40005488A (en) | Systems and methods for training a robot to autonomously travel a route | |
| Duzhen et al. | VSLAM and Navigation System of Unmanned Ground Vehicle Based on RGB-D camera | |
| Vidal-Calleja et al. | Autonomous single camera exploration |