
WO2018107916A1 - Robot security inspection method based on environment map, and robot thereof - Google Patents

Robot security inspection method based on environment map, and robot thereof

Info

Publication number
WO2018107916A1
WO2018107916A1 (application PCT/CN2017/108725)
Authority
WO
WIPO (PCT)
Prior art keywords
current
robot
map
dimensional plane
monitoring area
Prior art date
Application number
PCT/CN2017/108725
Other languages
English (en)
French (fr)
Inventor
张帆
Original Assignee
南京阿凡达机器人科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京阿凡达机器人科技有限公司 filed Critical 南京阿凡达机器人科技有限公司
Priority to US15/870,857 (published as US20180165931A1)
Publication of WO2018107916A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0248 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
    • G05D1/12 Target-seeking control

Definitions

  • The invention relates to the field of security monitoring technology, and in particular to a robot security inspection method based on an environment map, and a robot thereof.
  • The problem to be solved by the present invention is to provide a robot security inspection method based on an environment map, and a robot thereof; the method and the robot used can realize active, uninterrupted monitoring and can actively track unsafe factors.
  • An environment-map-based robot security inspection method includes the following steps: S3, while the robot patrols the monitoring area along a monitoring route, acquiring current depth data whenever a preset shooting time interval is reached; S4, locating the current position of the robot in a two-dimensional plane map of the monitoring area according to the current depth data, current odometer information and the two-dimensional plane map, and determining whether an abnormal factor exists at the current position; S5, when the abnormal factor exists, performing a corresponding operation according to the abnormal factor; S6, when no abnormal factor exists, continuing to patrol the monitoring area along the monitoring route.
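To make the S3-S6 control flow concrete, here is a minimal Python sketch of the patrol loop. Every robot method and helper in it (`move_toward`, `locate`, `detect_abnormal_factor`, `handle_abnormal_factor`, and the interval value) is a hypothetical placeholder, not an API from the patent:

```python
import time

SHOOT_INTERVAL = 2.0  # preset shooting time interval in seconds (assumed value)

def patrol(robot, route, plane_map):
    """S3-S6: patrol the monitoring area along the route, checking for
    abnormal factors whenever the preset shooting interval elapses."""
    last_shot = time.monotonic()
    for waypoint in route:
        robot.move_toward(waypoint)                      # follow the monitoring route
        if time.monotonic() - last_shot >= SHOOT_INTERVAL:
            last_shot = time.monotonic()
            depth = robot.depth_camera.capture()         # S3: current depth data
            odom = robot.read_odometry()                 # current odometer information
            pose = locate(depth, odom, plane_map)        # S4: position in the 2D map
            abnormal = detect_abnormal_factor(depth, pose, plane_map)
            if abnormal is not None:
                handle_abnormal_factor(robot, abnormal)  # S5: corresponding operation
            # S6: if nothing abnormal, simply continue along the route
```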
  • Before step S3, the method further includes: S1, upon receiving a map establishment instruction, the robot traverses the monitoring area and establishes a two-dimensional plane map of the monitoring area according to the depth data of each obstacle acquired during the traversal and the odometer information corresponding to that depth data; S2, planning the monitoring route according to the patrol starting point, the patrol end point and the two-dimensional plane map.
  • Step S1 specifically includes: S11, upon receiving the map establishment instruction, the robot traverses the monitoring area and acquires the depth data of each obstacle in the monitoring area during the traversal; S12, projecting the depth data within a preset height range onto a preset horizontal plane to obtain corresponding two-dimensional lidar data; S13, establishing the two-dimensional plane map of the monitoring area according to the lidar data and the odometer information corresponding to the lidar data.
  • Step S4 (locating the current position of the robot in the two-dimensional plane map according to the current depth data, the current odometer information and the two-dimensional plane map of the monitoring area, and determining whether an abnormal factor exists at the current position) may include: S41, determining whether there is an obstacle not marked in the two-dimensional plane map; when such an unmarked obstacle exists, the abnormal factor is considered to exist, and when none exists, the abnormal factor is considered absent.
  • In that case, step S5 includes: S510, when the unmarked obstacle exists, marking it in the two-dimensional plane map according to the current depth data and current odometer information, and updating the two-dimensional plane map; S511, updating the monitoring route according to the current position and the updated two-dimensional plane map, and patrolling the monitoring area along the updated monitoring route.
  • Step S4 may alternatively include: S42, determining whether human skeleton data is recognized; when the human skeleton data is recognized, the abnormal factor is considered to exist, and when it is not recognized, the abnormal factor is considered absent. Step S5 then includes: S520, when human skeleton data is recognized, moving toward the living body corresponding to the human skeleton data; S521, acquiring the current facial features of the living body; S522, when the current facial features are successfully acquired, matching them against the preset facial features in a preset biometric facial feature database; S523, when the matching succeeds, considering the abnormal factor absent; S524, when the matching fails, performing a tracking operation on the living body and issuing warning information.
  • After step S521, the method further includes: S525, when the current facial features of the living body are not successfully acquired, requesting password information from the living body; S526, matching the acquired password information against the preset password information in a preset password database; when the matching succeeds, the abnormal factor is considered absent; when the matching fails, a tracking operation is performed on the living body and warning information is issued.
  • The method further includes: S7, while the robot patrols the monitoring area along the monitoring route, acquiring the current smoke concentration value whenever a preset detection time interval is reached; S8, determining whether the current smoke concentration value exceeds a preset smoke concentration threshold; S9, issuing warning information when the current smoke concentration value exceeds the preset smoke concentration threshold; S10, continuing to patrol the monitoring area along the monitoring route when it does not.
  • The present invention also provides a robot, comprising: a data acquisition module configured to acquire current depth data whenever a preset shooting time interval is reached while the monitoring area is patrolled along a monitoring route; a determining module configured to locate the current position of the robot in the two-dimensional plane map according to the current depth data, current odometer information and the two-dimensional plane map of the monitoring area, and to determine whether an abnormal factor exists at the current position; and an execution module configured to perform a corresponding operation according to the abnormal factor when it exists, and to continue patrolling the monitoring area along the monitoring route when it does not.
  • The execution module includes: a map establishing submodule configured, upon receiving a map establishment instruction, to have the robot traverse the monitoring area and establish a two-dimensional plane map of the monitoring area according to the depth data of each obstacle acquired during the traversal and the odometer information corresponding to that depth data;
  • and a route planning submodule configured to plan the monitoring route according to the patrol starting point, the patrol end point and the two-dimensional plane map.
  • The data acquisition module is further configured, upon receiving the map establishment instruction, to acquire the depth data of each obstacle in the monitoring area while the robot traverses it. The map establishing submodule is specifically configured to project the depth data within a preset height range onto a preset horizontal plane to obtain corresponding two-dimensional lidar data, and then to establish the two-dimensional plane map of the monitoring area according to the lidar data and the odometer information corresponding to the lidar data.
  • The determining module is specifically configured to determine whether there is an obstacle not marked in the two-dimensional plane map; when such an unmarked obstacle exists, the abnormal factor is considered to exist, and when none exists, the abnormal factor is considered absent. The execution module is correspondingly configured, when the unmarked obstacle exists, to mark it in the two-dimensional plane map according to the current depth data and current odometer information and update the two-dimensional plane map, and then to update the monitoring route according to the current position and the updated two-dimensional plane map and patrol the monitoring area along the updated route.
  • Alternatively, the determining module is configured to determine whether human skeleton data is recognized; when the human skeleton data is recognized, the abnormal factor is considered to exist, and when it is not recognized, the abnormal factor is considered absent.
  • The execution module is correspondingly configured, when human skeleton data is recognized, to move the robot toward the living body corresponding to that data; to acquire the current facial features of the living body; and, when the current facial features are successfully acquired, to match them against the preset facial features in the preset biometric facial feature database; when the matching succeeds, the abnormal factor is considered absent; when the matching fails, a tracking operation is performed on the living body and warning information is issued.
  • The execution module is further configured, when the current facial features of the living body are not successfully acquired, to request password information from the living body and to match the acquired password information against the preset password information in the preset password database; when the matching succeeds, the abnormal factor is considered absent; when the matching fails, a tracking operation is performed on the living body and warning information is issued.
  • The robot further includes a smoke detecting module configured to acquire the current smoke concentration value whenever a preset detection time interval is reached while the robot patrols the monitoring area along the monitoring route; the determining module is further configured to determine whether the current smoke concentration value exceeds a preset smoke concentration threshold; the execution module is further configured to issue warning information when the current smoke concentration value exceeds the preset smoke concentration threshold, and to continue patrolling the monitoring area along the monitoring route when it does not.
  • The environment-map-based robot security inspection method and the robot can patrol the whole area according to the environment map, avoiding monitoring dead angles; actively discover unsafe factors and confirm them against a security policy; actively track unsafe factors; and work normally at night without auxiliary lighting.
  • The method and the robot of the invention are highly proactive and defend actively against unsafe factors, greatly improving the effectiveness, timeliness and stability of security patrols.
  • FIG. 1 is a flowchart of an environment-map-based robot security inspection method according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of robot displacement in the security inspection method of FIG. 1;
  • FIG. 3 is a schematic diagram of human body recognition according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of face recognition and voice identity verification according to the present invention;
  • FIG. 5 is a structural diagram of a robot used in the present invention;
  • FIG. 6 is a flowchart of one embodiment of the environment-map-based robot security inspection method of the present invention;
  • FIG. 7 is a partial flowchart of an embodiment of the environment-map-based robot security inspection method of the present invention;
  • FIG. 8 is a flowchart of another embodiment of the environment-map-based robot security inspection method of the present invention;
  • FIG. 9 is a partial flowchart of an embodiment of the environment-map-based robot security inspection method of the present invention;
  • FIG. 10 is a partial flowchart of an embodiment of the environment-map-based robot security inspection method of the present invention;
  • FIG. 11 is a schematic structural diagram of one embodiment of the robot of the present invention;
  • FIG. 12 is a schematic structural diagram of another embodiment of the robot of the present invention.
  • An environment-map-based robot security inspection method includes the following steps:
  • S3, while the robot patrols the monitoring area along the monitoring route, acquiring current depth data whenever the preset shooting time interval is reached.
  • Specifically, when the robot starts a patrol it already has a two-dimensional plane map of the monitoring area it patrols (which may be uploaded by the user, or drawn by the robot itself on instruction) and a monitoring route (which may be set by the user, or planned by the robot from the patrol starting point, the patrol end point and the two-dimensional plane map).
  • During the patrol, the depth camera mounted on the robot captures, at a certain shooting frequency (equivalently, the preset shooting time interval), the depth map visible from its current position, i.e. the current depth data. The depth map is the set of three-dimensional spatial coordinates of the obstacles (spatial objects) in the captured part of the monitoring area relative to the depth camera.
  • The current depth data is converted into corresponding two-dimensional lidar data (which shows the contours of the obstacles); this is compared against the current odometer information and the two-dimensional plane map, thereby locating the robot's current position in the two-dimensional plane map.
  • The robot matches the current depth data acquired by the depth camera against the previously established two-dimensional plane map, thereby locating its current position in the monitoring area, and moves and patrols along the planned monitoring route.
  • The current position of the robot is located by adaptive Monte Carlo localization (AMCL): a particle filter tracks the pose of the robot in the two-dimensional plane map of the monitoring area, using the lidar data corresponding to the current depth data and the current odometer information.
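A minimal self-contained sketch of one Monte Carlo localization cycle as described above. The Gaussian motion-noise levels are illustrative assumptions, and the caller supplies `scan_likelihood`, a function scoring how well a 2D scan fits the map at a candidate pose (neither detail is specified by the patent):

```python
import random

def amcl_step(particles, odom_delta, scan, grid_map, scan_likelihood):
    """One predict/update/resample cycle of Monte Carlo localization.
    particles: list of (x, y, theta, weight) pose hypotheses."""
    dx, dy, dth = odom_delta
    moved = []
    for x, y, th, _ in particles:
        # Predict: apply the odometry increment with added Gaussian noise.
        nx = x + dx + random.gauss(0, 0.02)
        ny = y + dy + random.gauss(0, 0.02)
        nth = th + dth + random.gauss(0, 0.01)
        # Update: weight the hypothesis by how well the lidar scan fits here.
        w = scan_likelihood(scan, (nx, ny, nth), grid_map)
        moved.append((nx, ny, nth, w))
    # Resample in proportion to weight so good hypotheses survive.
    total = sum(w for *_, w in moved)
    if total <= 0:
        weights = None  # degenerate case: fall back to uniform resampling
    else:
        weights = [w / total for *_, w in moved]
    return random.choices(moved, weights=weights, k=len(moved))
```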
  • Odometer information refers to the angles turned, numbers of rotations and so on executed by the robot's motors and other motion mechanisms; any robot that can move records its odometer information internally.
  • Positioning normally relies on the obstacle contours obtained by converting the current depth data; the current odometer information is used as an aid to guarantee a more accurate fix on the two-dimensional plane map.
  • For example, the current depth data may show the contour of a chair while the two-dimensional plane map contains three chairs; the current odometer information is then needed to determine which of the chairs it is, and hence the robot's current position on the two-dimensional plane map.
  • The current odometer information might record that the motors drove 10 meters to the left and then 5 meters to the right; the current position on the two-dimensional plane map is then fixed from the chair contour parsed out of the current depth data.
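As a hedged illustration of how such odometer information can be accumulated into a pose, here is a standard differential-drive dead-reckoning sketch; the wheel radius, encoder resolution and wheel base are assumed values, not parameters from the patent:

```python
import math

WHEEL_RADIUS = 0.05   # wheel radius in meters (assumed)
TICKS_PER_REV = 1024  # encoder ticks per wheel revolution (assumed)
WHEEL_BASE = 0.30     # distance between the two drive wheels in meters (assumed)

def integrate_odometry(pose, left_ticks, right_ticks):
    """Dead-reckon a differential-drive pose (x, y, theta) from encoder ticks."""
    x, y, th = pose
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV   # left wheel travel
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV  # right wheel travel
    d = (dl + dr) / 2             # forward distance of the robot center
    dth = (dr - dl) / WHEEL_BASE  # change in heading
    return (x + d * math.cos(th + dth / 2),
            y + d * math.sin(th + dth / 2),
            th + dth)
```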
  • Besides locating the current position from the current depth data, the current odometer information and the two-dimensional plane map, the robot can also use the current depth data to judge whether an abnormal factor exists at the current position, for example whether a person (i.e. a living body) is detected, or whether a new obstacle not marked on the two-dimensional plane map has been found during the patrol. Different operations are then performed according to the different abnormal factors; if everything is normal, the robot continues the patrol along the monitoring route.
  • In this embodiment the robot patrols the monitoring area along the monitoring route by itself, which reduces labor and removes the need to install cameras in the monitoring area; and if an abnormal factor is found during the patrol, action can be taken in time.
  • In another embodiment, in addition to the above and as shown in FIG. 8, the following steps precede step S3:
  • S1, upon receiving a map establishment instruction, the robot traverses the monitoring area and establishes a two-dimensional plane map of the monitoring area according to the depth data of each obstacle acquired during the traversal and the odometer information corresponding to that depth data;
  • S2, planning the monitoring route according to the patrol starting point, the patrol end point and the two-dimensional plane map.
  • Specifically, when the robot is to patrol a given monitoring area it must first obtain a two-dimensional plane map of that area, from which it plans the monitoring route and locates its current position during the patrol.
  • In this embodiment the two-dimensional plane map is built by the robot itself: before the formal patrol, an operator controls the robot to walk once through the monitoring area to be patrolled. While walking through the monitoring area, the robot acquires the depth data of each object through the depth camera mounted in its head, and builds the two-dimensional plane map of the whole area from that depth data and the corresponding odometer information recorded as it is acquired; the map is built as the robot walks, and is complete once the monitoring area has been covered.
  • Alternatively, an internal random-walk program can drive the robot through the monitoring area to build the two-dimensional plane map.
  • The operator can then input the patrol starting point and the patrol end point, and the robot plans the monitoring route from the two-dimensional plane map, which is more intelligent and labor-saving.
  • Dijkstra's optimal path algorithm is used to compute the minimum-cost path from the patrol starting point to the patrol end point on the two-dimensional plane map, and this path serves as the robot's monitoring route.
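A self-contained sketch of minimum-cost route planning with Dijkstra's algorithm on a 2D occupancy grid; the grid encoding (0 = free, 1 = obstacle) and 4-connected unit-cost moves are illustrative assumptions:

```python
import heapq

def dijkstra(grid, start, goal):
    """Minimum-cost path on a 2D occupancy grid (0 = free, 1 = obstacle).
    start and goal are (row, col) cells; returns the path as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nbr in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0  # uniform step cost between adjacent free cells
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    prev[nbr] = cell
                    heapq.heappush(heap, (nd, nbr))
    if goal not in dist:
        return []  # goal unreachable from start
    # Walk the predecessor chain back from goal to start.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```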
  • In another embodiment, as shown in FIG. 7, step S1 specifically proceeds as follows:
  • S11, upon receiving the map establishment instruction, the robot traverses the monitoring area and acquires the depth data of each obstacle in the monitoring area during the traversal;
  • S12, projecting the depth data within the preset height range onto the preset horizontal plane to obtain corresponding two-dimensional lidar data;
  • S13, establishing the two-dimensional plane map of the monitoring area according to the lidar data and the odometer information corresponding to the lidar data (i.e. to the underlying depth data).
  • In one specific implementation, the two-dimensional plane map of the unknown environment (i.e. a two-dimensional grid map) is built with the Gmapping algorithm from the SLAM (simultaneous localization and mapping) family, and the specific process is as follows: while traversing the monitoring area, the depth camera acquires a depth map (i.e. depth data, or depth-distance data); by projecting the depth data within the preset height range onto the horizontal plane of the depth camera, the three-dimensional depth data is converted into two-dimensional lidar data; a particle filter method then constructs the two-dimensional grid map of the unknown environment (i.e. the two-dimensional plane map of the monitoring area) from the lidar data and the corresponding odometer information.
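A minimal sketch of the height-band projection step, turning a depth image into a 2D pseudo-lidar scan. The camera intrinsics, camera height and height band are illustrative assumptions, not values from the patent:

```python
import math

FX, FY = 525.0, 525.0    # depth camera focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5    # principal point in pixels (assumed)
H_MIN, H_MAX = 0.1, 1.5  # preset height band above the floor, meters (assumed)

def depth_to_scan(depth, cam_height=0.6, n_beams=360):
    """Project depth pixels whose 3D point lies inside the preset height band
    onto the horizontal plane, keeping the nearest range per bearing; the
    result resembles a 2D lidar scan of obstacle contours."""
    scan = [float("inf")] * n_beams
    for v, row in enumerate(depth):          # depth[v][u]: range in meters
        for u, z in enumerate(row):
            if z <= 0:
                continue                     # invalid depth pixel
            x = (u - CX) * z / FX            # meters to the right of the camera
            y = (v - CY) * z / FY            # meters below the camera center
            height = cam_height - y          # height of the point above the floor
            if H_MIN <= height <= H_MAX:
                bearing = math.atan2(x, z)   # angle within the horizontal plane
                rng = math.hypot(x, z)       # planar distance to the obstacle
                i = int((bearing + math.pi) / (2 * math.pi) * n_beams) % n_beams
                scan[i] = min(scan[i], rng)  # keep the nearest return per beam
    return scan
```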
  • The security patrol performed by the robot mainly involves the following four points:
  • Step S4 (locating the current position of the robot in the two-dimensional plane map according to the current depth data, current odometer information and the two-dimensional plane map of the monitoring area, and determining whether an abnormal factor exists at the current position) includes: S41, determining whether there is an obstacle not marked in the two-dimensional plane map; the abnormal factor is considered to exist exactly when such an obstacle is found.
  • Step S5 then includes: S510, when the unmarked obstacle exists, marking it in the two-dimensional plane map according to the current depth data and current odometer information and updating the map;
  • S511, updating the monitoring route according to the current position and the updated two-dimensional plane map, and patrolling the monitoring area along the updated monitoring route.
  • While the map is unchanged, the robot patrols along the monitoring route from its current position. After updating the two-dimensional plane map, the robot re-plans the monitoring route from its current position, the patrol end point of the original monitoring route and the updated two-dimensional plane map, and continues the patrol from the current position along the updated route.
  • The robot can thus update its monitoring route whenever the two-dimensional plane map changes, which makes the patrol more flexible and yields better inspection results.
  • In another embodiment, as shown in FIG. 10, step S4 (locating the current position of the robot in the two-dimensional plane map according to the current depth data, the current odometer information and the two-dimensional plane map of the monitoring area, and determining whether an abnormal factor exists at the current position) includes the following step:
  • S42, determining whether human skeleton data is recognized; when the human skeleton data is recognized, the abnormal factor is considered to exist, and when it is not recognized, the abnormal factor is considered absent.
  • Step S5 then includes the following steps:
  • S520, when human skeleton data is recognized, moving toward the living body corresponding to the human skeleton data; S521, acquiring the current facial features of the living body; S522, when the current facial features are successfully acquired, matching them against the preset facial features in the preset biometric facial feature database; S523, when the matching succeeds, considering the abnormal factor absent;
  • S524, when the matching fails, performing a tracking operation on the living body and issuing warning information.
  • Specifically, the robot judges whether human skeleton data is recognized in order to decide whether a living body is present, and hence whether an abnormal factor exists at the current position. It may first check for unmarked obstacles and then for human skeleton data, or the other way round, or run the two checks in parallel threads; the order of the two is not limited.
  • The human skeleton data is recognized as follows: if the current depth map obtained by the depth camera (i.e. the current depth data) contains line segments of the form shown in FIG. 3, human skeleton data is considered to be recognized.
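As a hedged sketch of this recognition check: depth-camera SDKs commonly expose a skeleton-tracking call, so the snippet below takes one as a parameter (`track_skeletons` is a stand-in assumption, not an API named by the patent) and reports recognition when a tracked skeleton exposes a minimal joint set:

```python
REQUIRED_JOINTS = {"head", "torso", "left_shoulder", "right_shoulder"}  # assumed set

def human_skeleton_recognized(depth_frame, track_skeletons) -> bool:
    """S42: report an abnormal factor when at least one skeleton tracked in
    the depth frame exposes the minimal set of joints.
    track_skeletons: SDK-provided callable returning a list of skeletons,
    each a dict mapping joint names to 3D positions (an assumption here)."""
    for skeleton in track_skeletons(depth_frame):
        if REQUIRED_JOINTS.issubset(skeleton.keys()):
            return True
    return False
```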
  • When the robot detects this abnormal factor, it performs the operations associated with recognizing human skeleton data.
  • The facial recognition method includes the following steps:
  • once the robot detects human skeleton data, it approaches the person, and the person's current facial features are captured by the robot's RGB camera;
  • the preset biometric facial feature database stores several preset facial features belonging to the different people who may legitimately appear in the monitoring area; if the current facial features cannot be matched successfully against any of the preset facial features, the person is a stranger and a risk factor, and a tracking operation must be performed and warning information issued.
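A hedged sketch of the database match. It assumes facial features are fixed-length embedding vectors compared by Euclidean distance against a tunable threshold; the representation and the threshold value are illustrative choices, not details given by the patent:

```python
import math

MATCH_THRESHOLD = 0.6  # maximum embedding distance counted as a match (assumed)

def match_face(current_feature, preset_features):
    """S522: True when the current facial feature matches any preset feature
    in the biometric database, meaning no abnormal factor (S523)."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return any(dist(current_feature, preset) <= MATCH_THRESHOLD
               for preset in preset_features)

# S523/S524: True  -> continue the patrol along the monitoring route;
#            False -> track the person and issue warning information.
```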
  • Issuing the warning information means acting according to a predefined security policy or a guardian's operation; for example, the predefined security policy may be to sound a buzzer alarm, and the guardian operation may be to send an alarm message to the guardian.
  • After confirming that no abnormal factor exists, the robot can continue the patrol along the monitoring route. If the robot deviated from the monitoring route while acquiring the current facial features, it can return to the point of departure and continue the patrol along the monitoring route, or locate its current position and re-plan the monitoring route from that position and the patrol end point.
  • After step S521, the following steps are further included:
  • S525, when the current facial features of the living body are not successfully acquired, requesting password information from the living body;
  • S526, matching the acquired password information against the preset password information in the preset password database;
  • S524, when the matching fails, performing a tracking operation on the living body and issuing warning information.
  • That is, when the RGB camera cannot correctly capture the face of the person to be verified, the person's identity can instead be verified by a spoken password.
  • When the robot finds that it has not successfully acquired the current facial features of the living body, it requests password information from it, for example by saying "please state the password"; on hearing this, the person states the password, and after the robot receives the spoken password through its microphone array, it matches it against the preset password information in the preset password database.
  • The preset password database may store several pieces of preset password information; the matching succeeds if the password stated by the living body matches any one of them, and is considered unsuccessful when it matches none.
  • The preset password information may be a sentence, a song title and so on, and is set by the guardian (i.e. the user of the robot).
  • When the matching succeeds, the current living body is considered not to be a risk factor and no abnormal factor exists, and the patrol continues along the monitoring route; when the matching fails, the current living body is treated as a risk factor, a tracking operation is performed on it, and warning information is issued.
  • Issuing the warning information here likewise means acting according to a predefined security policy or a guardian's operation; for example, the predefined security policy may be to sound a buzzer alarm, and the guardian operation may be to send an alarm message to the guardian.
  • S7, while the robot patrols the monitoring area along the monitoring route, acquiring the current smoke concentration value whenever the preset detection time interval is reached;
  • S8, determining whether the current smoke concentration value exceeds the preset smoke concentration threshold.
  • That is, the smoke sensor also monitors the smoke concentration in the monitoring area; the robot judges each current smoke concentration value it obtains, and if the value exceeds the preset smoke concentration threshold this is treated as a risk factor and warning information is issued. Issuing the warning information means acting according to a predefined security policy or a guardian's operation; for example, the predefined security policy may be to sound a buzzer alarm, and the guardian operation may be to send an alarm message to the guardian.
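A minimal sketch of the S7-S10 smoke check; the sampling interval, threshold value and the sensor/warning interfaces are illustrative assumptions:

```python
import time

DETECT_INTERVAL = 5.0    # preset detection time interval in seconds (assumed)
SMOKE_THRESHOLD = 300.0  # preset smoke concentration threshold, sensor units (assumed)

def smoke_check_loop(read_smoke, issue_warning):
    """S7-S10: sample the smoke sensor at the preset interval and issue a
    warning whenever the reading exceeds the preset threshold.
    read_smoke: callable returning the current smoke concentration value.
    issue_warning: callable applying the predefined security policy."""
    while True:
        value = read_smoke()                 # S7: current smoke concentration
        if value > SMOKE_THRESHOLD:          # S8: compare with the threshold
            issue_warning(f"smoke concentration {value} exceeds threshold")  # S9
        # S10: otherwise the patrol simply continues; wait for the next sample
        time.sleep(DETECT_INTERVAL)
```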
  • In this embodiment the robot performs the patrol, which reduces manpower and makes the monitoring more flexible.
  • The depth camera has night-vision capability, and the feedback, warning and active tracking of abnormal factors and of the smoke concentration give a better monitoring effect.
  • As shown in FIG. 5, the robot used in the security inspection method of the above technical solution includes:
  • a depth camera 1, mainly used to acquire the depth data of indoor objects relative to the robot and the skeletal structure of living bodies, from which the two-dimensional grid map of the monitoring area is built and the robot is localized; since the depth camera performs depth detection with infrared structured light, it still works normally in the dark at night;
  • an RGB color camera 2 for acquiring color images of the monitoring area for facial recognition and scene viewing;
  • a smoke sensor 3 for sensing smoke in the monitoring area;
  • a microphone array 4 for picking up external sounds and judging the general direction of a sound source;
  • a speaker 5 for playing sounds such as inquiries and alarms.
  • In another embodiment, as shown in FIG. 11, a robot includes: a data acquisition module 10 configured to acquire current depth data whenever the preset shooting time interval is reached while the monitoring area is patrolled along the monitoring route;
  • a determining module 20 configured to locate the current position of the robot in the two-dimensional plane map according to the current depth data, current odometer information and the two-dimensional plane map of the monitoring area, and to determine whether an abnormal factor exists at the current position;
  • an execution module 30 configured to perform a corresponding operation according to the abnormal factor when it exists, and to continue patrolling the monitoring area along the monitoring route when it does not.
  • the data acquisition module is a depth camera of the robot.
  • The depth camera mounted on the robot captures, at a certain shooting frequency (equivalently, the preset shooting time interval), the depth map visible from its current position, i.e. the current depth data.
  • The depth map is the set of three-dimensional spatial coordinates of the obstacles (spatial objects) in the captured part of the monitoring area relative to the depth camera.
  • The current depth data is converted into corresponding two-dimensional lidar data (which shows the contours of the obstacles); this is compared against the current odometer information and the two-dimensional plane map, thereby locating the robot's current position in the two-dimensional plane map.
  • The robot matches the current depth data acquired by the depth camera against the previously established two-dimensional plane map, thereby locating its current position in the monitoring area, and moves and patrols along the planned monitoring route.
  • The current position of the robot is located by adaptive Monte Carlo localization (AMCL): a particle filter tracks the pose of the robot in the two-dimensional plane map of the monitoring area, using the lidar data corresponding to the current depth data and the current odometer information.
  • Odometer information refers to the angles turned, numbers of rotations and so on executed by the robot's motors and other motion mechanisms; any robot that can move records its odometer information internally. Positioning normally relies on the obstacle contours obtained by converting the current depth data; the current odometer information is used as an aid to guarantee a more accurate fix on the two-dimensional plane map.
  • For example, the current depth data may show the contour of a chair while the two-dimensional plane map contains three chairs; the current odometer information is then needed to determine which of the chairs it is, and hence the robot's current position on the two-dimensional plane map. The current odometer information might record that the motors drove 15 meters to the left and then 7 meters to the right; the current position on the two-dimensional plane map is then fixed from the chair contour parsed out of the current depth data.
  • Besides locating the current position from the current depth data, the current odometer information and the two-dimensional plane map, the robot can also use the current depth data to judge whether an abnormal factor exists at the current position, for example whether a person (i.e. a living body) is detected, or whether a new obstacle not marked on the two-dimensional plane map has been found during the patrol. Different operations are then performed according to the different abnormal factors; if everything is normal, the robot continues the patrol along the monitoring route.
  • In this embodiment the robot patrols the monitoring area along the monitoring route by itself, which reduces labor and removes the need to install cameras in the monitoring area; and if an abnormal factor is found during the patrol, action can be taken in time.
  • The execution module 30 includes:
  • a map establishing submodule 31 configured, upon receiving the map establishment instruction, to have the robot traverse the monitoring area and establish the two-dimensional plane map of the monitoring area according to the depth data of each obstacle acquired during the traversal and the odometer information corresponding to that depth data;
  • a route planning submodule 32 configured to plan the monitoring route according to the patrol starting point, the patrol end point and the two-dimensional plane map.
  • Specifically, when the robot is to patrol a given monitoring area it must first obtain a two-dimensional plane map of that area, from which it plans the monitoring route and locates its current position during the patrol.
  • In this embodiment the two-dimensional plane map is built by the robot itself: before the formal patrol, an operator controls the robot to walk once through the monitoring area to be patrolled. While walking through the monitoring area, the robot acquires the depth data of each object through the depth camera mounted in its head, and builds the two-dimensional plane map of the whole area from that depth data and the corresponding odometer information recorded as it is acquired; the map is built as the robot walks, and is complete once the monitoring area has been covered.
  • Alternatively, an internal random-walk program can drive the robot through the monitoring area to build the two-dimensional plane map.
  • The operator can then input the patrol starting point and the patrol end point, so that the robot plans the monitoring route from the two-dimensional plane map, which is more intelligent and labor-saving.
  • Dijkstra's optimal path algorithm is used to compute the minimum-cost path from the patrol starting point to the patrol end point on the two-dimensional plane map, and this path serves as the robot's monitoring route.
  • The data acquisition module 10 is further configured, upon receiving the map establishment instruction, to acquire the depth data of each obstacle in the monitoring area while the robot traverses it.
  • The map establishing submodule 31 is specifically configured to project the depth data within the preset height range onto the preset horizontal plane to obtain corresponding two-dimensional lidar data, and then to establish the two-dimensional plane map of the monitoring area according to the lidar data and the odometer information corresponding to the lidar data (i.e. to the underlying depth data).
  • In one specific implementation, the two-dimensional plane map of the unknown environment (i.e. a two-dimensional grid map) is built with the Gmapping algorithm from the SLAM (simultaneous localization and mapping) family, and the specific process is as follows: while traversing the monitoring area, the depth camera acquires a depth map (i.e. depth data, or depth-distance data); by projecting the depth data within the preset height range onto the horizontal plane of the depth camera, the three-dimensional depth data is converted into two-dimensional lidar data; a particle filter method then constructs the two-dimensional grid map of the unknown environment (i.e. the two-dimensional plane map of the monitoring area) from the lidar data and the corresponding odometer information.
  • the determining module 20 is configured to locate the location according to the current depth data, current odometer information, and a two-dimensional plane map of the monitoring area, except for the same as the above. Determining whether the current position of the robot is in the two-dimensional plane map, and determining whether the current location has an abnormal factor includes: the determining module 20, configured to determine whether there is an unmarked obstacle in the two-dimensional plane map, When there is an unmarked obstacle in the two-dimensional plane map, the abnormal factor is considered to exist, and when there is no unmarked obstacle in the two-dimensional plane map, the abnormal factor is considered to be absent;
  • the execution module 30 is configured to perform a corresponding operation according to the abnormal factor when the abnormal factor is present, including: the executing module 30, configured to: when the unlabeled obstacle exists, according to the current Depth data and current odometer information, the unmarked obstacle is marked in the two-dimensional plane map, and the two-dimensional plane map is updated; and according to the current location and the updated two-dimensional plane map And updating the monitoring route, and performing inspection on the monitoring area according to the updated monitoring route.
  • the robot will combine the current position and conduct inspections according to the monitoring route.
  • the robot After updating the 2D plane map, the robot will re-plan the monitoring route according to the current location, the inspection endpoint of the original monitoring route, and the updated 2D plane map, and continue the inspection according to the current location and the updated monitoring route.
  • the robot can update its monitoring route according to the changed two-dimensional plane map, which is more flexible during the inspection process and has better inspection results.
  • the determining module 20 is configured to locate the location according to the current depth data, current odometer information, and a two-dimensional plane map of the monitoring area, except for the same as the above. Determining whether the current position of the robot is in the two-dimensional plane map, and determining whether the current location has an abnormal factor includes: the determining module 20, configured to determine whether When the human skeleton data is recognized, the abnormal factor is considered to exist, and when the human skeleton data is not recognized, the abnormal factor is considered to be absent;
  • the execution module 30 is configured to: when the abnormal factor is present, perform a corresponding operation according to the abnormal factor: the executing module is configured to: when the human bone data is recognized, correspond to the human bone data The direction of movement of the organism;
  • the robot determines whether the human bone data is recognized, thereby determining whether the living body exists during the inspection to determine the current position. Is there an abnormal factor? You can first determine whether there are unmarked obstacles, and then determine whether the human bone data is recognized. You can also first determine whether the human bone data is recognized, and then determine whether there are unmarked obstacles; or two threads can judge in parallel, two The order of the persons is not limited.
  • the current facial features of the organism are acquired by the robot's RGB color camera.
  • the preset biometric facial feature database stores a plurality of preset facial features, and the plurality of preset facial features belong to different people, and the chance of appearing in the monitoring area if the current facial features cannot be combined with any of the preset facial features If the match is successful, it means that the person is a stranger, there is a risk factor, and it is necessary to perform tracking operation and issue a warning message.
  • the warning message is issued to perform corresponding operations according to a pre-defined security policy or a guardian's operation.
  • a pre-defined security policy is to perform a buzzer (or speaker) alarm, and the guardian operates to send an alarm message to the guardian.
  • After confirming that no abnormal factor exists, the robot can continue the patrol along the monitoring route. If the robot deviated from the monitoring route while acquiring the current facial features, it can return to the point of departure and continue the patrol along the monitoring route, or locate its current position and re-plan the monitoring route from that position and the patrol end point.
  • The execution module 30 is further configured, when the current facial features of the living body are not successfully acquired, to request password information from the living body;
  • the living body is questioned through the speaker, and the password information is acquired through the microphone array.
  • That is, when the RGB camera cannot correctly capture the face of the person to be verified, the person's identity can instead be verified by a spoken password.
  • When the robot finds that it has not successfully acquired the current facial features of the living body, it requests password information from it, for example by saying "please state the password"; on hearing this, the person states the password, and after the robot receives the spoken password through its microphone array, it matches it against the preset password information in the preset password database.
  • The preset password database may store several pieces of preset password information; the matching succeeds if the password stated by the living body matches any one of them, and is considered unsuccessful when it matches none.
  • The preset password information may be a sentence, a song title and so on, and is set by the guardian (i.e. the user of the robot).
  • When the matching succeeds, the current living body is considered not to be a risk factor and no abnormal factor exists, and the patrol continues along the monitoring route; when the matching fails, the current living body is treated as a risk factor, a tracking operation is performed on it, and warning information is issued.
  • Issuing the warning information here likewise means acting according to a predefined security policy or a guardian's operation; for example, the predefined security policy may be to sound a buzzer alarm, and the guardian operation may be to send an alarm message to the guardian.
  • In another embodiment, as shown in FIG. 12, the robot further includes:
  • a smoke detecting module 40 configured to acquire the current smoke concentration value whenever the preset detection time interval is reached while the robot patrols the monitoring area along the monitoring route;
  • the determining module 20 is further configured to determine whether the current smoke concentration value exceeds the preset smoke concentration threshold;
  • the execution module 30 is further configured to issue warning information when the current smoke concentration value exceeds the preset smoke concentration threshold, and to continue patrolling the monitoring area along the monitoring route when it does not.
  • In this embodiment the smoke detecting module is the robot's smoke sensor.
  • The smoke sensor samples the current smoke concentration value at a preset frequency; that is, whenever the preset detection time interval is reached during the patrol of the monitoring area, the smoke detecting module acquires the current smoke concentration value.
  • The smoke sensor thus also monitors the smoke concentration in the monitoring area; the robot judges each current smoke concentration value it obtains, and if the value exceeds the preset smoke concentration threshold this is treated as a risk factor and warning information is issued. Issuing the warning information means acting according to a predefined security policy or a guardian's operation; for example, the predefined security policy may be to sound a buzzer alarm, and the guardian operation may be to send an alarm message to the guardian.
  • Judging the current smoke concentration value, recognizing human skeleton data and detecting unmarked obstacles can be performed in parallel or in a fixed order. For example, once the current position is located, first judge whether the current smoke concentration value exceeds the preset smoke concentration threshold; if it does, issue warning information and wait for the guardian to arrive. If not, judge whether human skeleton data is recognized; if it is, acquire the current facial features or password information and match them; if the matching fails, perform the tracking operation and issue warning information. If the matching succeeds, further judge whether an unmarked obstacle is detected; if none is detected, continue the patrol along the monitoring route and repeat the above steps; if one is detected, update the two-dimensional plane map, then patrol along the updated monitoring route and repeat the above steps.
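A compact sketch of one such fixed ordering of the three checks in a patrol cycle. It reuses the hypothetical helpers from the earlier sketches, and every robot method named here (`read_smoke`, `issue_warning`, `capture_face`, `verify_spoken_password`, `track_intruder`, `mark_obstacle`, `replan_route`, `unmarked_obstacle`) is an assumed placeholder, not an interface from the patent:

```python
def check_cycle(robot, depth, pose, plane_map):
    """One decision pass per patrol cycle: smoke first, then person,
    then unmarked obstacle; returns an action tag for the patrol loop."""
    if robot.read_smoke() > SMOKE_THRESHOLD:
        robot.issue_warning("smoke threshold exceeded")
        return "wait_for_guardian"
    if human_skeleton_recognized(depth, robot.track_skeletons):
        feature = robot.capture_face()        # None when the face is not acquired
        verified = (match_face(feature, robot.face_db) if feature is not None
                    else robot.verify_spoken_password())
        if not verified:
            robot.track_intruder()
            robot.issue_warning("unrecognized person")
            return "tracking"
    if unmarked_obstacle(depth, pose, plane_map):
        plane_map.mark_obstacle(depth, pose)  # update the 2D plane map
        robot.replan_route(pose, plane_map)   # patrol along the updated route
    return "continue_patrol"
```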
  • In this embodiment the robot performs the patrol, which reduces manpower and makes the monitoring more flexible.
  • The depth camera has night-vision capability, and the feedback, warning and active tracking of abnormal factors and of the smoke concentration give a good monitoring effect.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)
  • Alarm Systems (AREA)

Abstract

A robot security inspection method based on an environment map, and a robot thereof, capable of patrolling the whole area according to the environment map to avoid monitoring dead angles; actively discovering unsafe factors and confirming them against a security policy; actively tracking unsafe factors; and working normally at night without auxiliary lighting. The security inspection method and the robot are highly proactive and defend actively against unsafe factors, greatly improving the effectiveness, timeliness and stability of security inspection. The security inspection method includes: establishing a two-dimensional plane map of the whole monitoring area; planning a monitoring route; locating the current position of the robot in the monitoring area; and moving and patrolling along the planned monitoring route.

Description

Robot security inspection method based on environment map, and robot thereof
This application claims priority to Chinese patent application No. 201611154363.6, filed on December 14, 2016 and entitled "Robot security inspection method based on environment map, and robot thereof", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of security monitoring technology, and in particular to a robot security inspection method based on an environment map, and a robot thereof.
Background
Most current security monitoring uses passive video surveillance: cameras are installed at fixed monitoring points, the images of the monitoring points are displayed centrally for a given area, and unsafe factors are screened manually from the images captured by each camera. This approach has many drawbacks: 1. staff must continuously screen the feeds of multiple cameras, and visual fatigue easily causes unsafe factors to be missed; 2. camera viewing angles are relatively fixed and large-range movement is difficult, so monitoring a large scene requires many cameras and easily leaves monitoring dead angles; 3. when an unsafe factor is discovered, video surveillance does not lend itself to active tracking; 4. most surveillance cameras are RGB cameras without night vision, so night-time monitoring capability drops sharply and auxiliary lighting devices are often needed, causing many adverse effects.
Summary
The problem to be solved by the present invention is to provide a robot security inspection method based on an environment map, and a robot thereof; the method and the robot used can realize active, uninterrupted monitoring and can actively track unsafe factors.
To achieve the above object, the environment-map-based robot security inspection method of the present invention includes the following steps: S3, while the robot patrols the monitoring area along a monitoring route, acquiring current depth data whenever a preset shooting time interval is reached; S4, locating the current position of the robot in the two-dimensional plane map according to the current depth data, current odometer information and the two-dimensional plane map of the monitoring area, and determining whether an abnormal factor exists at the current position; S5, when the abnormal factor exists, performing a corresponding operation according to the abnormal factor; S6, when no abnormal factor exists, continuing to patrol the monitoring area along the monitoring route.
Further, before step S3 the method includes the following steps: S1, upon receiving a map establishment instruction, the robot traverses the monitoring area and establishes a two-dimensional plane map of the monitoring area according to the depth data of each obstacle in the monitoring area acquired during the traversal and the odometer information corresponding to that depth data; S2, planning the monitoring route according to the patrol starting point, the patrol end point and the two-dimensional plane map.
Further, step S1 specifically proceeds as follows: S11, upon receiving the map establishment instruction, the robot traverses the monitoring area and acquires the depth data of each obstacle in the monitoring area during the traversal; S12, projecting the depth data within a preset height range onto a preset horizontal plane to obtain corresponding two-dimensional lidar data; S13, establishing the two-dimensional plane map of the monitoring area according to the lidar data and the odometer information corresponding to the lidar data.
Further, step S4 (locating the current position of the robot in the two-dimensional plane map according to the current depth data, the current odometer information and the two-dimensional plane map of the monitoring area, and determining whether an abnormal factor exists at the current position) includes the following steps: S41, determining whether there is an obstacle not marked in the two-dimensional plane map; when such an unmarked obstacle exists, the abnormal factor is considered to exist, and when none exists, the abnormal factor is considered absent. Step S5 then includes the following steps: S510, when the unmarked obstacle exists, marking it in the two-dimensional plane map according to the current depth data and current odometer information, and updating the two-dimensional plane map; S511, updating the monitoring route according to the current position and the updated two-dimensional plane map, and patrolling the monitoring area along the updated monitoring route.
Further, step S4 may instead include the following steps: S42, determining whether human skeleton data is recognized; when the human skeleton data is recognized, the abnormal factor is considered to exist, and when it is not recognized, the abnormal factor is considered absent. Step S5 then includes the following steps: S520, when human skeleton data is recognized, moving toward the living body corresponding to the human skeleton data; S521, acquiring the current facial features of the living body; S522, when the current facial features of the living body are successfully acquired, matching the current facial features against the preset facial features in a preset biometric facial feature database; S523, when the matching succeeds, considering the abnormal factor absent; S524, when the matching fails, performing a tracking operation on the living body and issuing warning information.
Further, after step S521 the method includes the following steps: S525, when the current facial features of the living body are not successfully acquired, requesting password information from the living body; S526, matching the acquired password information against the preset password information in a preset password database; S523, when the matching succeeds, considering the abnormal factor absent; S524, when the matching fails, performing a tracking operation on the living body and issuing warning information.
Further, the method includes the following steps: S7, while the robot patrols the monitoring area along the monitoring route, acquiring the current smoke concentration value whenever a preset detection time interval is reached; S8, determining whether the current smoke concentration value exceeds a preset smoke concentration threshold; S9, issuing warning information when the current smoke concentration value exceeds the preset smoke concentration threshold; S10, continuing to patrol the monitoring area along the monitoring route when the current smoke concentration value does not exceed the preset smoke concentration threshold.
本发明还提供一种机器人,包括:数据获取模块,用于在根据监控路线对监控区域进行巡检的过程中、当达到预设拍摄时间间隔时,获取当前深度数据;判断模块,用于根据所述当前深度数据、当前里程计信息和所述监控区域的二维平面地图,定位所述机器人在所述二维平面地图中的当前位置,并判断所述当前位置是否存在异常因素;执行模块,用于当存在所述异常因素时,根据所述异常因素执行相应的操作;当不存在所述异常因素时,继续根据所述监控路线对监控区域进行巡检。
Further, the execution module comprises: a map-building submodule, configured to, upon receiving a map-building instruction, build the two-dimensional plane map of the monitored area while the robot traverses the monitored area, according to the depth data of each obstacle in the monitored area acquired during the traversal and the odometry information corresponding to the depth data; and a route-planning submodule, configured to plan the monitoring route according to a patrol start point, a patrol end point and the two-dimensional plane map.
Further, the data acquisition module is further configured to acquire the depth data of each obstacle in the monitored area while the robot traverses the monitored area upon receiving a map-building instruction. The map-building submodule being configured to build the two-dimensional plane map of the monitored area is specifically: the map-building submodule is configured to project the depth data within a preset height range onto a preset horizontal plane to obtain corresponding two-dimensional laser-radar data, and then to build the two-dimensional plane map of the monitored area according to the laser-radar data and the odometry information corresponding to the laser-radar data.
Further, the judgment module being configured to locate the robot's current position in the two-dimensional plane map and to judge whether an abnormal factor exists at the current position comprises: the judgment module is configured to judge whether there is an obstacle not marked in the two-dimensional plane map; when an unmarked obstacle exists, the abnormal factor is deemed to exist, and when no unmarked obstacle exists, the abnormal factor is deemed not to exist. The execution module being configured to perform a corresponding operation according to the abnormal factor comprises: the execution module is configured to, when the unmarked obstacle exists, mark the unmarked obstacle in the two-dimensional plane map according to the current depth data and current odometry information and update the two-dimensional plane map; and then to update the monitoring route according to the current position and the updated two-dimensional plane map and patrol the monitored area according to the updated monitoring route.
Further, the judgment module being configured to locate the robot's current position in the two-dimensional plane map and to judge whether an abnormal factor exists at the current position comprises: the judgment module is configured to judge whether human skeleton data is recognized; when the human skeleton data is recognized, the abnormal factor is deemed to exist, and when it is not recognized, the abnormal factor is deemed not to exist. The execution module being configured to perform a corresponding operation according to the abnormal factor comprises: the execution module is configured to, when human skeleton data is recognized, move toward the living body corresponding to the human skeleton data; to acquire the current facial features of the living body; and, when the current facial features are successfully acquired, to match them against the preset facial features in a preset facial-feature database; when the matching succeeds, the abnormal factor is deemed not to exist, and when the matching fails, a tracking operation is performed on the living body and a warning message is issued.
Further, the execution module being configured to perform a corresponding operation according to the abnormal factor further comprises: the execution module is configured to, when the current facial features of the living body are not successfully acquired, request password information from the living body; and to match the acquired password information against the preset password information in a preset password database; when the matching succeeds, the abnormal factor is deemed not to exist, and when the matching fails, a tracking operation is performed on the living body and a warning message is issued.
Further, the robot further comprises: a smoke detection module, configured to acquire a current smoke concentration value each time a preset detection interval is reached while the robot patrols the monitored area according to the monitoring route. The judgment module is further configured to judge whether the current smoke concentration value exceeds a preset smoke concentration threshold. The execution module is further configured to issue a warning message when the current smoke concentration value exceeds the preset smoke concentration threshold, and to continue patrolling the monitored area according to the monitoring route when it does not.
The robot security inspection method based on an environment map and the robot thereof of the present invention can perform traversal patrols according to an environment map, avoiding monitoring blind spots; actively discover unsafe factors and perform security-policy confirmation; actively track unsafe factors; and work normally at night without auxiliary lighting. The method and robot of the present invention are highly proactive and provide active defense against unsafe factors, greatly improving the effectiveness, timeliness and stability of security inspection.
Brief Description of the Drawings
Fig. 1 is a flowchart of a robot security inspection method based on an environment map in one embodiment of the present invention;
Fig. 2 is a schematic diagram of the robot displacement in the security inspection method of Fig. 1;
Fig. 3 is a schematic diagram of human-body recognition in one embodiment of the present invention;
Fig. 4 is a schematic diagram of face recognition and voice identity verification of the present invention;
Fig. 5 is a structural diagram of the robot used in the present invention;
Fig. 6 is a flowchart of one embodiment of the robot security inspection method based on an environment map of the present invention;
Fig. 7 is a partial flowchart of one embodiment of the robot security inspection method based on an environment map of the present invention;
Fig. 8 is a flowchart of another embodiment of the robot security inspection method based on an environment map of the present invention;
Fig. 9 is a partial flowchart of one embodiment of the robot security inspection method based on an environment map of the present invention;
Fig. 10 is a partial flowchart of one embodiment of the robot security inspection method based on an environment map of the present invention;
Fig. 11 is a structural schematic diagram of one embodiment of the robot of the present invention;
Fig. 12 is a structural schematic diagram of another embodiment of the robot of the present invention.
Detailed Description
The robot security inspection method based on an environment map and the robot thereof proposed by the present invention are described in detail below with reference to the drawings.
In one embodiment of the present invention, as shown in Fig. 6 and Fig. 1, a robot security inspection method based on an environment map comprises the following steps:
S3, while the robot patrols the monitored area according to a monitoring route, acquiring current depth data each time a preset capture interval is reached;
S4, locating the robot's current position in a two-dimensional plane map of the monitored area according to the current depth data, current odometry information and the two-dimensional plane map, and judging whether an abnormal factor exists at the current position;
S5, when the abnormal factor exists, performing a corresponding operation according to the abnormal factor;
S6, when the abnormal factor does not exist, continuing to patrol the monitored area according to the monitoring route.
Specifically, when the robot starts a patrol, it has the two-dimensional plane map of the monitored area to be patrolled (which may be uploaded by the user, or drawn by the robot itself according to an instruction, etc.) and the monitoring route (which may be set by the user, or planned by the robot itself according to a patrol start point, a patrol end point and the two-dimensional plane map, etc.).
While the robot patrols the monitored area according to the monitoring route, the depth camera mounted on it captures, at a certain capture frequency (which can also be understood as the preset capture interval), the depth image (i.e., the current depth data) visible from its current position. A depth image is the three-dimensional spatial coordinate data of the obstacles (or spatial objects) in the monitored area relative to the depth camera. The current depth data is converted into corresponding two-dimensional laser-radar data (laser-radar data can show the contours of obstacles), which is compared against the two-dimensional plane map together with the current odometry information, thereby locating the robot's current position in the two-dimensional plane map.
The robot matches the current depth data acquired by its depth camera against the previously built two-dimensional plane map, thereby locating its current position in the monitored area; the robot then moves and patrols along the planned monitoring route.
The robot's current position is located using adaptive Monte Carlo localization (AMCL): a particle filter tracks the robot's pose in the two-dimensional plane map of the monitored area according to the laser-radar data corresponding to the current depth data and the current odometry information.
Odometry information refers to the angles executed and the number of rotations made by the motors and other motion mechanisms of the robot; any robot capable of moving records its odometry information internally. Localization is generally performed from the obstacle contours obtained by converting the current depth data; the current odometry information is used as an aid to ensure a sufficiently accurate position on the two-dimensional plane map.
For example, the current depth data may be the contour of a chair while three chairs exist in the two-dimensional plane map; the current odometry information is then needed to determine which chair it is, thereby locating the robot's current position on the map. The current odometry information may indicate, for instance, that the motors moved 10 meters to the left and then 5 meters to the right, so that the current position on the two-dimensional plane map can be located together with the chair contour parsed from the current depth data.
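As a concrete illustration of the AMCL scheme described above, the following minimal particle-filter sketch (in Python) propagates particles by the odometry increment and re-weights them against the scan; the noise parameters and the scan_likelihood sensor model are illustrative assumptions, not the implementation of the present invention (a real system would ray-cast the 2D map to score each pose).

    import math
    import random

    class Particle:
        def __init__(self, x, y, theta, w=1.0):
            self.x, self.y, self.theta, self.w = x, y, theta, w

    def motion_update(particles, d_trans, d_rot, trans_noise=0.05, rot_noise=0.02):
        # Propagate every particle by the odometry increment plus sampled noise.
        for p in particles:
            p.theta += d_rot + random.gauss(0.0, rot_noise)
            step = d_trans + random.gauss(0.0, trans_noise)
            p.x += step * math.cos(p.theta)
            p.y += step * math.sin(p.theta)

    def measurement_update(particles, scan, scan_likelihood):
        # Re-weight each particle by how well the laser scan fits the map there.
        total = 0.0
        for p in particles:
            p.w = scan_likelihood(p.x, p.y, p.theta, scan)
            total += p.w
        for p in particles:
            p.w = p.w / total if total > 0 else 1.0 / len(particles)

    def resample(particles):
        # Draw a new particle set with probability proportional to the weights.
        weights = [p.w for p in particles]
        chosen = random.choices(particles, weights=weights, k=len(particles))
        return [Particle(p.x, p.y, p.theta) for p in chosen]

    def estimate_pose(particles):
        # The weighted mean of the cloud approximates the robot's current pose.
        return (sum(p.x * p.w for p in particles),
                sum(p.y * p.w for p in particles))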
Besides locating the current position from the current depth data, the current odometry information and the two-dimensional plane map, the current depth data can also be used to judge whether an abnormal factor exists at the current position, for example: whether a person (i.e., a living body) is detected, or whether a new obstacle not marked on the two-dimensional plane map is found during the patrol. Different operations are then performed according to the different abnormal factors. If everything is normal, the robot continues patrolling along the monitoring route.
In this embodiment, the robot patrols the monitored area by itself according to the monitoring route, which reduces manpower and removes the need to deploy cameras in the monitored area; moreover, if an abnormal factor is found during the patrol, action can be taken promptly.
In another embodiment of the present invention, in addition to what is described above, as shown in Fig. 8, the following steps are further included before step S3:
S1, upon receiving a map-building instruction, the robot traverses the monitored area and builds the two-dimensional plane map of the monitored area according to the depth data of each obstacle in the monitored area acquired during the traversal and the odometry information corresponding to the depth data;
S2, planning the monitoring route according to a patrol start point, a patrol end point and the two-dimensional plane map.
Specifically, when the robot is to patrol a certain monitored area, it must first obtain the two-dimensional plane map of that area in order to plan the monitoring route, locate its current position during the patrol, and so on.
In this embodiment the two-dimensional plane map is built by the robot itself. Before the formal patrol, an operator controls the robot to walk through the monitored area to be patrolled once. While walking through the monitored area, the robot acquires the depth data of each object in the area through the depth camera mounted on its head, and then builds the two-dimensional plane map of the entire monitored area according to the odometry information corresponding to that depth data; the map can be built while walking, and once the monitored area has been walked through, the two-dimensional plane map is complete. Of course, before the formal patrol, a random-path program inside the robot may also control the robot to walk through the monitored area to build the two-dimensional plane map.
After the two-dimensional plane map is obtained, the operator can input the patrol start point and patrol end point, letting the robot plan the monitoring route by itself from the two-dimensional plane map, which is smarter and saves labor. When planning the monitoring route from the built two-dimensional plane map of the monitored area, the robot uses Dijkstra's optimal-path algorithm to compute the minimum-cost path on the map from the patrol start point to the patrol end point as the robot's monitoring route.
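A minimal sketch of this Dijkstra planning step is given below; the occupancy-grid encoding (0 = free, 1 = obstacle) and the 4-connected unit-cost moves are assumptions made for illustration, since the text only names the algorithm.

    import heapq

    def dijkstra(grid, start, goal):
        # Minimum-cost path on a 2D occupancy grid (0 = free, 1 = obstacle).
        # start/goal are (row, col) cells; returns the cell sequence or None.
        rows, cols = len(grid), len(grid[0])
        dist = {start: 0.0}
        prev = {}
        heap = [(0.0, start)]
        while heap:
            d, cell = heapq.heappop(heap)
            if cell == goal:
                path = [cell]
                while cell in prev:
                    cell = prev[cell]
                    path.append(cell)
                return path[::-1]
            if d > dist.get(cell, float("inf")):
                continue
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    nd = d + 1.0  # unit cost per 4-connected move
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = cell
                        heapq.heappush(heap, (nd, (nr, nc)))
        return None

    # Example: plan from patrol start A at (0, 0) to patrol end B at (3, 3).
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0]]
    print(dijkstra(grid, (0, 0), (3, 3)))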
Preferably, as shown in Fig. 8, step S1 specifically proceeds as follows:
S11, upon receiving a map-building instruction, the robot traverses the monitored area and acquires the depth data of each obstacle in the monitored area during the traversal;
S12, projecting the depth data within a preset height range onto a preset horizontal plane to obtain corresponding two-dimensional laser-radar data;
S13, building the two-dimensional plane map of the monitored area according to the laser-radar data and the odometry information corresponding to the laser-radar data (i.e., to the corresponding depth data).
Specifically, the two-dimensional plane map (i.e., two-dimensional grid map) of the unknown environment (i.e., the monitored area) is built with the Gmapping algorithm of the SLAM (simultaneous localization and mapping) family; the specific process is as follows:
1) While traversing the monitored area, the depth camera acquires depth images (i.e., depth data, or depth-distance data). By projecting the depth data within a preset height range onto the depth camera's horizontal plane, the three-dimensional depth data can be converted into two-dimensional laser-radar data.
For example, if the depth camera is at a height of Z = 50 cm above the ground and the preset height range is set to 0-100 cm, the depth data whose height falls within 0-100 cm is projected onto the horizontal plane at Z = 50 cm, giving the corresponding two-dimensional laser-radar data.
2) The Gmapping algorithm combines the converted laser-radar data with the robot's odometry information and uses particle filtering to finally construct the two-dimensional grid map of the unknown environment (i.e., the two-dimensional plane map of the monitored area).
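Step 1) can be sketched as follows; the axis convention (x forward, y left, z = height above the ground in metres, matching the 0-100 cm example) and the beam discretization are illustrative assumptions.

    import math

    def depth_points_to_scan(points, z_min=0.0, z_max=1.0,
                             angle_min=-0.5, angle_max=0.5, n_beams=180):
        # Keep only 3D points whose height lies in the preset band, and record
        # the nearest range per bearing - i.e. a synthetic planar laser scan.
        ranges = [float("inf")] * n_beams
        step = (angle_max - angle_min) / n_beams
        for x, y, z in points:
            if not (z_min <= z <= z_max) or x <= 0.0:
                continue  # outside the height band or behind the sensor
            bearing = math.atan2(y, x)
            if not (angle_min <= bearing < angle_max):
                continue
            i = int((bearing - angle_min) / step)
            ranges[i] = min(ranges[i], math.hypot(x, y))
        return ranges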
As shown in Fig. 2, the robot's security patrol process mainly involves the following four items of information:
① the two-dimensional plane map of the patrol environment (i.e., the monitored area) built by the robot;
② the robot's patrol start position (patrol start point) A;
③ the robot's patrol target position (patrol end point) B;
④ the patrol path (i.e., the monitoring route) planned by the robot from patrol start point A to patrol end point B.
In another embodiment of the present invention, in addition to what is described above, as shown in Fig. 7, step S4 of locating the robot's current position in the two-dimensional plane map according to the current depth data, current odometry information and the two-dimensional plane map of the monitored area, and judging whether an abnormal factor exists at the current position, comprises the following step:
S41, judging whether there is an obstacle not marked in the two-dimensional plane map; when an unmarked obstacle exists, the abnormal factor is deemed to exist, and when no unmarked obstacle exists, the abnormal factor is deemed not to exist;
step S5 comprises the following steps:
S510, when the unmarked obstacle exists, marking the unmarked obstacle in the two-dimensional plane map according to the current depth data and current odometry information, and updating the two-dimensional plane map;
S511, updating the monitoring route according to the current position and the updated two-dimensional plane map, and patrolling the monitored area according to the updated monitoring route.
Specifically, if there is no abnormal factor, the robot patrols according to the monitoring route in combination with its current position.
However, when an obstacle not marked in the two-dimensional plane map is encountered during travel, an abnormal factor is deemed to have been found: the obstacle is marked in the two-dimensional plane map so as to update the map of the monitored area. The method used to mark the unmarked obstacle in the two-dimensional plane map is the same as the method used to build the map.
After the two-dimensional plane map has been updated, the robot replans the monitoring route according to its current position, the patrol end point of the original monitoring route and the updated two-dimensional plane map, and continues the patrol according to the current position and the updated monitoring route.
The robot can thus update its own monitoring route according to the changed two-dimensional plane map, which makes the patrol more flexible and yields a better inspection result.
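The update in steps S510-S511 can be sketched as below; the world-to-cell conversion, the grid convention (0 free, 1 occupied) and the map origin/resolution are illustrative assumptions. Replanning then simply reruns the route planner (e.g. the Dijkstra sketch above) from the current position to the original patrol end point.

    def world_to_cell(x, y, origin, resolution):
        # Convert a world coordinate (metres) into a (row, col) grid cell.
        ox, oy = origin
        return int((y - oy) / resolution), int((x - ox) / resolution)

    def mark_obstacle_points(grid, points_xy, origin=(0.0, 0.0), resolution=0.05):
        # Mark newly observed obstacle points as occupied cells in the 2D map.
        for x, y in points_xy:
            r, c = world_to_cell(x, y, origin, resolution)
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]):
                grid[r][c] = 1
        return grid

    # One scan point at (0.12 m, 0.07 m) marks cell (1, 2) occupied; the
    # monitoring route is then replanned on the updated grid.
    grid = [[0] * 5 for _ in range(5)]
    mark_obstacle_points(grid, [(0.12, 0.07)])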
In another embodiment of the present invention, in addition to what is described above, as shown in Fig. 9, step S4 of locating the robot's current position in the two-dimensional plane map according to the current depth data, current odometry information and the two-dimensional plane map of the monitored area, and judging whether an abnormal factor exists at the current position, comprises the following step:
S42, judging whether human skeleton data is recognized; when the human skeleton data is recognized, the abnormal factor is deemed to exist, and when the human skeleton data is not recognized, the abnormal factor is deemed not to exist;
step S5 comprises the following steps:
S520, when human skeleton data is recognized, moving toward the living body corresponding to the human skeleton data;
S521, acquiring the current facial features of the living body;
S522, when the current facial features of the living body are successfully acquired, matching the current facial features against the preset facial features in a preset facial-feature database;
S523, when the matching succeeds, deeming that the abnormal factor does not exist;
S524, when the matching fails, performing a tracking operation on the living body and issuing a warning message.
Specifically, besides judging from the current depth data whether there is an obstacle not marked on the two-dimensional plane map, the robot also judges whether human skeleton data is recognized, thereby judging whether a living body is present during the patrol and whether an abnormal factor exists at the current position. It may first judge whether an unmarked obstacle exists and then judge whether human skeleton data is recognized; it may first judge whether human skeleton data is recognized and then judge whether an unmarked obstacle exists; or the two judgments may run in parallel in two threads. Their order is not limited.
As shown in Fig. 3, taking a human body as the living body, the human-skeleton-data recognition method comprises the following step: if the current depth image (i.e., the current depth data) acquired by the depth camera contains lines such as those shown in Fig. 3, human skeleton data is recognized.
When human skeleton data is recognized, the robot is deemed to have detected an abnormal factor, and the operation corresponding to recognized human skeleton data is performed.
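The presence test of step S42 can be sketched as follows; extract_skeleton_joints and the min_joints threshold are hypothetical placeholders for the depth camera's body-tracking output, which the text does not name.

    def extract_skeleton_joints(depth_frame):
        # Hypothetical hook: return the skeleton joints found in the current
        # depth frame; a real robot would call its depth camera's
        # body-tracking middleware here.
        return depth_frame.get("joints", [])

    def skeleton_recognized(depth_frame, min_joints=5):
        # S42: a living body is deemed present once enough joints are seen;
        # the min_joints cutoff is an assumed noise filter.
        return len(extract_skeleton_joints(depth_frame)) >= min_joints

    frame = {"joints": ["head", "neck", "torso", "l_hand", "r_hand", "l_foot"]}
    print(skeleton_recognized(frame))  # True -> approach the living body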
As shown in Fig. 4, taking a person as the living body, after human skeleton data is detected the robot must judge whether this person is a safe factor; it therefore needs to approach the person to acquire their current facial features and use face recognition to judge whether the current living body is a safe factor. The face recognition method comprises the following steps:
when the robot detects human skeleton data, it approaches the person and detects the person's current facial features through the robot's RGB camera;
the current facial features are matched against each preset facial feature stored in a preset facial-feature database; if the matching succeeds, identity verification is complete and no tracking is performed; if the matching fails, the person is identified as a dangerous factor (a stranger).
The preset facial-feature database stores multiple preset facial features belonging to different people who may appear in the monitored area. If the current facial features cannot be matched with any preset facial feature, the person is a stranger and a dangerous factor, so a tracking operation must be performed on them and a warning message issued. Issuing the warning message means performing the corresponding operation according to a pre-established security policy or a guardian operation; for example, the pre-established security policy may be to sound a buzzer alarm, and the guardian operation may be to send alarm information to the guardian.
If the current facial features can be successfully matched with one of the preset facial features, the person is not a dangerous factor, meaning no abnormal factor exists, and the robot can continue the patrol according to the monitoring route. If the robot deviated from the monitoring route while acquiring the current facial features, it can return to the original point after judging that no abnormal factor exists and continue the patrol according to the monitoring route; it can also locate its current position and replan the monitoring route according to the current position and the patrol end point, etc.
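A minimal sketch of the matching in steps S522-S524, assuming the facial features are compared as fixed-length embedding vectors under a cosine-similarity threshold; both the representation and the threshold value are illustrative assumptions.

    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def match_face(current_feature, preset_database, threshold=0.8):
        # A match with any stored feature means the person is known (S523);
        # otherwise the person is treated as a stranger (S524).
        for name, preset in preset_database.items():
            if cosine_similarity(current_feature, preset) >= threshold:
                return True, name
        return False, None

    db = {"guardian": np.array([0.9, 0.1, 0.3, 0.2])}
    print(match_face(np.array([0.88, 0.12, 0.31, 0.19]), db))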
Preferably, after step S521 the method further comprises the following steps:
S525, when the current facial features of the living body are not successfully acquired, requesting password information from the living body;
S526, matching the acquired password information against the preset password information in a preset password database;
S523, when the matching succeeds, deeming that the abnormal factor does not exist;
S524, when the matching fails, performing a tracking operation on the living body and issuing a warning message.
Specifically, at night or in poor light, when the RGB camera cannot properly recognize the face of the person to be verified, their identity can be verified by means of a spoken password.
When the robot finds that it has failed to acquire the current facial features of the living body, it requests password information from the living body; for example, the robot says to the living body, "Please state the password." On hearing this voice prompt, the living body states the password; after receiving the password stated by the living body through its microphone array, the robot matches it against the preset password information in the preset password database.
The preset password database may store multiple pieces of preset password information; as long as the password stated by the living body matches one of them, the matching is deemed successful, and when the password cannot be matched with any preset password information, the matching is deemed unsuccessful. The preset password information may be a sentence, the title of a song, and so on, set by the guardian (i.e., the user of the robot).
When the matching succeeds, the current living body is deemed not to be a dangerous factor, no abnormal factor exists, and the patrol continues according to the monitoring route; when the matching fails, the current living body is deemed a dangerous factor, a tracking operation is performed on it, and a warning message is issued. Here the warning operation means performing the corresponding operation according to a pre-established security policy or a guardian operation; for example, the pre-established security policy may be to sound a buzzer alarm, and the guardian operation may be to send alarm information to the guardian.
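The password fallback of steps S525-S526 reduces to a string match once the microphone-array audio has been transcribed; the normalization and the preset phrases below are illustrative assumptions.

    def normalize(phrase):
        # Case- and whitespace-insensitive comparison of spoken phrases.
        return " ".join(phrase.lower().split())

    def verify_password(heard_phrase, preset_passwords):
        # S526: the transcribed phrase need only match one preset password.
        heard = normalize(heard_phrase)
        return any(heard == normalize(p) for p in preset_passwords)

    presets = ["open sesame", "Blue Danube"]           # set by the guardian
    print(verify_password("  Open Sesame ", presets))  # True -> no abnormal factor
    print(verify_password("let me in", presets))       # False -> track and warn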
In another embodiment of the present invention, in addition to what is described above, as shown in Fig. 10, the method further comprises the following steps:
S7, while the robot patrols the monitored area according to the monitoring route, acquiring a current smoke concentration value each time a preset detection interval is reached;
S8, judging whether the current smoke concentration value exceeds a preset smoke concentration threshold;
S9, when the current smoke concentration value exceeds the preset smoke concentration threshold, issuing a warning message;
S10, when the current smoke concentration value does not exceed the preset smoke concentration threshold, continuing to patrol the monitored area according to the monitoring route.
Specifically, while the robot patrols according to the monitoring route, its smoke sensor also monitors the smoke concentration in the monitored area; the robot judges the acquired current smoke concentration value, and if it exceeds the preset smoke concentration threshold, it is deemed a dangerous factor and a warning message is issued.
The warning operation means performing the corresponding operation according to a pre-established security policy or a guardian operation; for example, the pre-established security policy may be to sound a buzzer alarm, and the guardian operation may be to send alarm information to the guardian.
Judging the current smoke concentration value, recognizing human skeleton data and detecting unmarked obstacles can be executed in parallel, or judged in a certain order. For example, when the current position is located, first judge whether the current smoke concentration value exceeds the preset smoke concentration threshold; if it does, issue a warning message and wait for the guardian to come and handle it. If it does not, judge whether human skeleton data is recognized; if so, acquire the current facial features or password information for matching. If the matching fails, perform the tracking operation and issue a warning message; if the matching succeeds, further judge whether an unmarked obstacle is detected. If none is detected, continue the patrol according to the monitoring route and repeat the above steps; if one is detected, update the two-dimensional plane map, then patrol according to the updated monitoring route and repeat the above steps.
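The serial ordering just described can be sketched as one decision cycle; every sensor and actuator hook on the PatrolRobot stub is a hypothetical placeholder for the robot's actual modules.

    class PatrolRobot:
        # Stub robot: every hook below stands in for the corresponding
        # sensor or module described in the text.
        def read_smoke(self): return 0.01
        def detect_skeleton(self): return False
        def match_face(self): return False
        def match_password(self): return False
        def detect_unmapped_obstacle(self): return False
        def warn(self, msg): print("warning:", msg)
        def track_and_warn(self): print("tracking stranger, warning issued")
        def update_map_and_route(self): print("map updated, route replanned")
        def continue_patrol(self): print("continuing along monitoring route")

    def patrol_step(robot, smoke_threshold=0.1):
        # One serial cycle: smoke -> living body -> unmarked obstacle.
        if robot.read_smoke() > smoke_threshold:
            robot.warn("smoke concentration over threshold")
            return                       # wait for the guardian to handle it
        if robot.detect_skeleton():
            if not (robot.match_face() or robot.match_password()):
                robot.track_and_warn()   # stranger: follow and alert
                return
        if robot.detect_unmapped_obstacle():
            robot.update_map_and_route()
        robot.continue_patrol()

    patrol_step(PatrolRobot())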
Using the robot for inspection reduces manpower and makes monitoring more flexible; the depth camera has a night-vision function, and abnormal factors and smoke concentration conditions are fed back, warned about and actively tracked, giving a good monitoring effect.
As shown in Fig. 5, the robot applying the security inspection method of the above technical solution comprises:
a depth camera 1, mainly used to acquire the depth data of indoor objects relative to the robot and the internal structure of living bodies, so as to build the two-dimensional grid map of the monitored area and achieve robot localization; since the depth camera uses infrared structured light for depth sensing, it still works normally in the dark at night;
an RGB color camera 2, used to acquire color images of the monitored area for face recognition and scene viewing;
a smoke sensor 3, used to sense smoke in the monitored area;
a microphone array 4, used to pick up external sounds or to roughly determine the direction of a sound source;
a speaker 5, used to play sounds such as inquiries and alarms.
In another embodiment of the present invention, a robot, as shown in Fig. 11, comprises: a data acquisition module 10, configured to acquire current depth data each time a preset capture interval is reached while patrolling the monitored area according to a monitoring route, the current depth data including the distance data of obstacles in the current monitored area relative to the robot and the internal structure of any living body (if present);
a judgment module 20, configured to locate the robot's current position in the two-dimensional plane map of the monitored area according to the current depth data, current odometry information and the two-dimensional plane map, and to judge whether an abnormal factor exists at the current position;
an execution module 30, configured to perform a corresponding operation according to the abnormal factor when the abnormal factor exists, and to continue patrolling the monitored area according to the monitoring route when it does not.
Specifically, when the robot starts a patrol, it has the two-dimensional plane map of the monitored area to be patrolled (which may be uploaded by the user, or drawn by the robot itself according to an instruction, etc.) and the monitoring route (which may be set by the user, or planned by the robot itself according to a patrol start point, a patrol end point and the two-dimensional plane map, etc.). The data acquisition module is the robot's depth camera.
While the robot patrols the monitored area according to the monitoring route, the depth camera mounted on it captures, at a certain capture frequency (which can also be understood as the preset capture interval), the depth image (i.e., the current depth data) visible from its current position. A depth image is the three-dimensional spatial coordinate data of the obstacles (or spatial objects) in the monitored area relative to the depth camera. The current depth data is converted into corresponding two-dimensional laser-radar data (laser-radar data can show the contours of obstacles), which is compared against the two-dimensional plane map together with the current odometry information, thereby locating the robot's current position in the two-dimensional plane map.
The robot matches the current depth data acquired by its depth camera against the previously built two-dimensional plane map, thereby locating its current position in the monitored area; the robot then moves and patrols along the planned monitoring route.
The robot's current position is located using adaptive Monte Carlo localization (AMCL): a particle filter tracks the robot's pose in the two-dimensional plane map of the monitored area according to the laser-radar data corresponding to the current depth data and the current odometry information.
Odometry information refers to the angles executed and the number of rotations made by the motors and other motion mechanisms of the robot; any robot capable of moving records its odometry information internally. Localization is generally performed from the obstacle contours obtained by converting the current depth data; the current odometry information is used as an aid to ensure a sufficiently accurate position on the two-dimensional plane map.
For example, the current depth data may be the contour of a chair while three chairs exist in the two-dimensional plane map; the current odometry information is then needed to determine which chair it is, thereby locating the robot's current position on the map. The current odometry information may indicate, for instance, that the motors moved 15 meters to the left and then 7 meters to the right, so that the current position on the two-dimensional plane map can be located together with the chair contour parsed from the current depth data.
Besides locating the current position from the current depth data, the current odometry information and the two-dimensional plane map, the current depth data can also be used to judge whether an abnormal factor exists at the current position, for example: whether a person (i.e., a living body) is detected, or whether a new obstacle not marked on the two-dimensional plane map is found during the patrol. Different operations are then performed according to the different abnormal factors. If everything is normal, the robot continues patrolling along the monitoring route.
In this embodiment, the robot patrols the monitored area by itself according to the monitoring route, which reduces manpower and removes the need to deploy cameras in the monitored area; moreover, if an abnormal factor is found during the patrol, action can be taken promptly.
In another embodiment of the present invention, in addition to what is described above, as shown in Fig. 12, the execution module 30 comprises:
a map-building submodule 31, configured to, upon receiving a map-building instruction, build the two-dimensional plane map of the monitored area while the robot traverses the monitored area, according to the depth data of each obstacle in the monitored area acquired during the traversal and the odometry information corresponding to the depth data;
a route-planning submodule 32, configured to plan the monitoring route according to a patrol start point, a patrol end point and the two-dimensional plane map.
Specifically, when the robot is to patrol a certain monitored area, it must first obtain the two-dimensional plane map of that area in order to plan the monitoring route, locate its current position during the patrol, and so on.
In this embodiment the two-dimensional plane map is built by the robot itself. Before the formal patrol, an operator controls the robot to walk through the monitored area to be patrolled once. While walking through the monitored area, the robot acquires the depth data of each object in the area through the depth camera mounted on its head, and then builds the two-dimensional plane map of the entire monitored area according to the odometry information corresponding to that depth data; the map can be built while walking, and once the monitored area has been walked through, the two-dimensional plane map is complete. Of course, before the formal patrol, a random-path program inside the robot may also control the robot to walk through the monitored area to build the two-dimensional plane map.
After the two-dimensional plane map is obtained, the operator can input the patrol start point and patrol end point, letting the robot plan the monitoring route by itself from the two-dimensional plane map, which is smarter and saves labor. When planning the monitoring route from the built two-dimensional plane map of the monitored area, the robot uses Dijkstra's optimal-path algorithm to compute the minimum-cost path on the map from the patrol start point to the patrol end point as the robot's monitoring route.
Preferably, the data acquisition module 10 is further configured to acquire the depth data of each obstacle in the monitored area while the robot traverses the monitored area upon receiving a map-building instruction;
the map-building submodule 31 being configured to build the two-dimensional plane map of the monitored area according to the depth data of each obstacle acquired during the traversal and the corresponding odometry information is specifically: the map-building submodule is configured to project the depth data within a preset height range onto a preset horizontal plane to obtain corresponding two-dimensional laser-radar data, and then to build the two-dimensional plane map of the monitored area according to the laser-radar data and the odometry information corresponding to the laser-radar data (i.e., to the corresponding depth data).
Specifically, the two-dimensional plane map (i.e., two-dimensional grid map) of the unknown environment (i.e., the monitored area) is built with the Gmapping algorithm of the SLAM (simultaneous localization and mapping) family; the specific process is as follows:
1) While traversing the monitored area, the depth camera acquires depth images (i.e., depth data, or depth-distance data). By projecting the depth data within a preset height range onto the depth camera's horizontal plane, the three-dimensional depth data can be converted into two-dimensional laser-radar data.
For example, if the depth camera is at a height of Z = 50 cm above the ground and the preset height range is set to 0-100 cm, the depth data whose height falls within 0-100 cm is projected onto the horizontal plane at Z = 50 cm, giving the corresponding two-dimensional laser-radar data.
2) The Gmapping algorithm combines the converted laser-radar data with the robot's odometry information and uses particle filtering to finally construct the two-dimensional grid map of the unknown environment (i.e., the two-dimensional plane map of the monitored area).
In another embodiment of the present invention, in addition to what is described above, the judgment module 20 being configured to locate the robot's current position in the two-dimensional plane map according to the current depth data, current odometry information and the two-dimensional plane map of the monitored area, and to judge whether an abnormal factor exists at the current position, comprises: the judgment module 20 is configured to judge whether there is an obstacle not marked in the two-dimensional plane map; when an unmarked obstacle exists, the abnormal factor is deemed to exist, and when no unmarked obstacle exists, the abnormal factor is deemed not to exist;
the execution module 30 being configured to perform a corresponding operation according to the abnormal factor when the abnormal factor exists comprises: the execution module 30 is configured to, when the unmarked obstacle exists, mark the unmarked obstacle in the two-dimensional plane map according to the current depth data and current odometry information and update the two-dimensional plane map; and then to update the monitoring route according to the current position and the updated two-dimensional plane map and patrol the monitored area according to the updated monitoring route.
Specifically, if there is no abnormal factor, the robot patrols according to the monitoring route in combination with its current position.
However, when an obstacle not marked in the two-dimensional plane map is encountered during travel, an abnormal factor is deemed to have been found: the obstacle is marked in the two-dimensional plane map so as to update the map of the monitored area. The method used to mark the unmarked obstacle in the two-dimensional plane map is the same as the method used to build the map.
After the two-dimensional plane map has been updated, the robot replans the monitoring route according to its current position, the patrol end point of the original monitoring route and the updated two-dimensional plane map, and continues the patrol according to the current position and the updated monitoring route.
The robot can thus update its own monitoring route according to the changed two-dimensional plane map, which makes the patrol more flexible and yields a better inspection result.
In another embodiment of the present invention, in addition to what is described above, the judgment module 20 being configured to locate the robot's current position in the two-dimensional plane map according to the current depth data, current odometry information and the two-dimensional plane map of the monitored area, and to judge whether an abnormal factor exists at the current position, comprises: the judgment module 20 is configured to judge whether human skeleton data is recognized; when the human skeleton data is recognized, the abnormal factor is deemed to exist, and when the human skeleton data is not recognized, the abnormal factor is deemed not to exist;
the execution module 30 being configured to perform a corresponding operation according to the abnormal factor comprises: the execution module is configured to, when human skeleton data is recognized, move toward the living body corresponding to the human skeleton data;
and to acquire the current facial features of the living body;
and, when the current facial features of the living body are successfully acquired, to match the current facial features against the preset facial features in a preset facial-feature database; when the matching succeeds, the abnormal factor is deemed not to exist; when the matching fails, a tracking operation is performed on the living body and a warning message is issued.
Specifically, besides judging from the current depth data whether there is an obstacle not marked on the two-dimensional plane map, the robot also judges whether human skeleton data is recognized, thereby judging whether a living body is present during the patrol and whether an abnormal factor exists at the current position. It may first judge whether an unmarked obstacle exists and then judge whether human skeleton data is recognized; it may first judge whether human skeleton data is recognized and then judge whether an unmarked obstacle exists; or the two judgments may run in parallel in two threads. Their order is not limited.
The current facial features of the living body are acquired by the robot's RGB color camera.
The preset facial-feature database stores multiple preset facial features belonging to different people who may appear in the monitored area. If the current facial features cannot be matched with any preset facial feature, the person is a stranger and a dangerous factor, so a tracking operation must be performed on them and a warning message issued. Issuing the warning message means performing the corresponding operation according to a pre-established security policy or a guardian operation; for example, the pre-established security policy may be to sound a buzzer (or speaker) alarm, and the guardian operation may be to send alarm information to the guardian.
If the current facial features can be successfully matched with one of the preset facial features, the person is not a dangerous factor, meaning no abnormal factor exists, and the robot can continue the patrol according to the monitoring route. If the robot deviated from the monitoring route while acquiring the current facial features, it can return to the original point after judging that no abnormal factor exists and continue the patrol according to the monitoring route; it can also locate its current position and replan the monitoring route according to the current position and the patrol end point, etc.
Preferably, the execution module 30 being configured to perform a corresponding operation according to the abnormal factor when the abnormal factor exists further comprises:
the execution module is configured to, when the current facial features of the living body are not successfully acquired, request password information from the living body;
and to match the acquired password information against the preset password information in a preset password database; when the matching succeeds, the abnormal factor is deemed not to exist; when the matching fails, a tracking operation is performed on the living body and a warning message is issued.
Specifically, the living body is questioned through the speaker, and the password information is then picked up through the microphone array.
At night or in poor light, when the RGB camera cannot properly recognize the face of the person to be verified, their identity can be verified by means of a spoken password.
When the robot finds that it has failed to acquire the current facial features of the living body, it requests password information from the living body; for example, the robot says to the living body, "Please state the password." On hearing this voice prompt, the living body states the password; after receiving the password stated by the living body through its microphone array, the robot matches it against the preset password information in the preset password database.
The preset password database may store multiple pieces of preset password information; as long as the password stated by the living body matches one of them, the matching is deemed successful, and when the password cannot be matched with any preset password information, the matching is deemed unsuccessful. The preset password information may be a sentence, the title of a song, and so on, set by the guardian (i.e., the user of the robot).
When the matching succeeds, the current living body is deemed not to be a dangerous factor, no abnormal factor exists, and the patrol continues according to the monitoring route; when the matching fails, the current living body is deemed a dangerous factor, a tracking operation is performed on it, and a warning message is issued. Here the warning operation means performing the corresponding operation according to a pre-established security policy or a guardian operation; for example, the pre-established security policy may be to sound a buzzer alarm, and the guardian operation may be to send alarm information to the guardian.
In another embodiment of the present invention, in addition to what is described above, as shown in Fig. 12, the robot further comprises:
a smoke detection module 40, configured to acquire a current smoke concentration value each time a preset detection interval is reached while the robot patrols the monitored area according to the monitoring route;
the judgment module 20 is further configured to judge whether the current smoke concentration value exceeds a preset smoke concentration threshold;
the execution module 30 is further configured to issue a warning message when the current smoke concentration value exceeds the preset smoke concentration threshold, and to continue patrolling the monitored area according to the monitoring route when it does not.
Specifically, the smoke detection module is the robot's smoke sensor. The smoke sensor collects the current smoke concentration value at a preset frequency; that is, while the robot patrols the monitored area according to the monitoring route, the smoke detection module acquires the current smoke concentration value each time the preset detection interval is reached.
While the robot patrols according to the monitoring route, its smoke sensor also monitors the smoke concentration in the monitored area; the robot judges the acquired current smoke concentration value, and if it exceeds the preset smoke concentration threshold, it is deemed a dangerous factor and a warning message is issued.
The warning operation means performing the corresponding operation according to a pre-established security policy or a guardian operation; for example, the pre-established security policy may be to sound a buzzer alarm, and the guardian operation may be to send alarm information to the guardian.
Judging the current smoke concentration value, recognizing human skeleton data and detecting unmarked obstacles can be executed in parallel, or judged in a certain order. For example, when the current position is located, first judge whether the current smoke concentration value exceeds the preset smoke concentration threshold; if it does, issue a warning message and wait for the guardian to come and handle it. If it does not, judge whether human skeleton data is recognized; if so, acquire the current facial features or password information for matching. If the matching fails, perform the tracking operation and issue a warning message; if the matching succeeds, further judge whether an unmarked obstacle is detected. If none is detected, continue the patrol according to the monitoring route and repeat the above steps; if one is detected, update the two-dimensional plane map, then patrol according to the updated monitoring route and repeat the above steps.
Using the robot for inspection reduces manpower and makes monitoring more flexible; the depth camera has a night-vision function, and abnormal factors and smoke concentration conditions are fed back, warned about and actively tracked, giving a good monitoring effect.
The above embodiments are only intended to illustrate the technical solutions of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be covered by the scope of the claims of the present invention.

Claims (14)

  1. A robot security inspection method based on an environment map, characterized by comprising the following steps:
    S3, while the robot patrols a monitored area according to a monitoring route, acquiring current depth data each time a preset capture interval is reached;
    S4, locating the robot's current position in a two-dimensional plane map of the monitored area according to the current depth data, current odometry information and the two-dimensional plane map, and judging whether an abnormal factor exists at the current position;
    S5, when the abnormal factor exists, performing a corresponding operation according to the abnormal factor;
    S6, when the abnormal factor does not exist, continuing to patrol the monitored area according to the monitoring route.
  2. The robot security inspection method based on an environment map according to claim 1, characterized in that before step S3 the method further comprises the following steps:
    S1, upon receiving a map-building instruction, the robot traverses the monitored area and builds the two-dimensional plane map of the monitored area according to the depth data of each obstacle in the monitored area acquired during the traversal and the odometry information corresponding to the depth data;
    S2, planning the monitoring route according to a patrol start point, a patrol end point and the two-dimensional plane map.
  3. The robot security inspection method based on an environment map according to claim 2, characterized in that step S1 specifically proceeds as follows:
    S11, upon receiving a map-building instruction, the robot traverses the monitored area and acquires the depth data of each obstacle in the monitored area during the traversal;
    S12, projecting the depth data within a preset height range onto a preset horizontal plane to obtain corresponding two-dimensional laser-radar data;
    S13, building the two-dimensional plane map of the monitored area according to the laser-radar data and the odometry information corresponding to the laser-radar data.
  4. The robot security inspection method based on an environment map according to claim 1, characterized in that:
    step S4 of locating the robot's current position in the two-dimensional plane map according to the current depth data, current odometry information and the two-dimensional plane map of the monitored area, and judging whether an abnormal factor exists at the current position, comprises the following step:
    S41, judging whether there is an obstacle not marked in the two-dimensional plane map; when an obstacle not marked in the two-dimensional plane map exists, the abnormal factor is deemed to exist, and when no obstacle not marked in the two-dimensional plane map exists, the abnormal factor is deemed not to exist;
    step S5 comprises the following steps:
    S510, when the unmarked obstacle exists, marking the unmarked obstacle in the two-dimensional plane map according to the current depth data and current odometry information, and updating the two-dimensional plane map;
    S511, updating the monitoring route according to the current position and the updated two-dimensional plane map, and patrolling the monitored area according to the updated monitoring route.
  5. The robot security inspection method based on an environment map according to claim 1, characterized in that:
    step S4 of locating the robot's current position in the two-dimensional plane map according to the current depth data, current odometry information and the two-dimensional plane map of the monitored area, and judging whether an abnormal factor exists at the current position, comprises the following step:
    S42, judging whether human skeleton data is recognized; when the human skeleton data is recognized, the abnormal factor is deemed to exist, and when the human skeleton data is not recognized, the abnormal factor is deemed not to exist;
    step S5 comprises the following steps:
    S520, when human skeleton data is recognized, moving toward the living body corresponding to the human skeleton data;
    S521, acquiring the current facial features of the living body;
    S522, when the current facial features of the living body are successfully acquired, matching the current facial features against the preset facial features in a preset facial-feature database;
    S523, when the matching succeeds, deeming that the abnormal factor does not exist;
    S524, when the matching fails, performing a tracking operation on the living body and issuing a warning message.
  6. The robot security inspection method based on an environment map according to claim 5, characterized in that after step S521 the method further comprises the following steps:
    S525, when the current facial features of the living body are not successfully acquired, requesting password information from the living body;
    S526, matching the acquired password information against the preset password information in a preset password database;
    S523, when the matching succeeds, deeming that the abnormal factor does not exist;
    S524, when the matching fails, performing a tracking operation on the living body and issuing a warning message.
  7. The robot security inspection method based on an environment map according to claim 1, characterized by further comprising the following steps:
    S7, while the robot patrols the monitored area according to the monitoring route, acquiring a current smoke concentration value each time a preset detection interval is reached;
    S8, judging whether the current smoke concentration value exceeds a preset smoke concentration threshold;
    S9, when the current smoke concentration value exceeds the preset smoke concentration threshold, issuing a warning message;
    S10, when the current smoke concentration value does not exceed the preset smoke concentration threshold, continuing to patrol the monitored area according to the monitoring route.
  8. A robot, characterized by comprising:
    a data acquisition module, configured to acquire current depth data each time a preset capture interval is reached while patrolling a monitored area according to a monitoring route;
    a judgment module, configured to locate the robot's current position in a two-dimensional plane map of the monitored area according to the current depth data, current odometry information and the two-dimensional plane map, and to judge whether an abnormal factor exists at the current position;
    an execution module, configured to perform a corresponding operation according to the abnormal factor when the abnormal factor exists, and to continue patrolling the monitored area according to the monitoring route when the abnormal factor does not exist.
  9. The robot according to claim 8, characterized in that the execution module comprises:
    a map-building submodule, configured to, upon receiving a map-building instruction, build the two-dimensional plane map of the monitored area while the robot traverses the monitored area, according to the depth data of each obstacle in the monitored area acquired during the traversal and the odometry information corresponding to the depth data;
    a route-planning submodule, configured to plan the monitoring route according to a patrol start point, a patrol end point and the two-dimensional plane map.
  10. The robot according to claim 9, characterized in that:
    the data acquisition module is further configured to acquire the depth data of each obstacle in the monitored area while the robot traverses the monitored area upon receiving a map-building instruction;
    the map-building submodule being configured to, upon receiving a map-building instruction, build the two-dimensional plane map of the monitored area according to the depth data of each obstacle acquired during the traversal and the corresponding odometry information is specifically: the map-building submodule is configured to project the depth data within a preset height range onto a preset horizontal plane to obtain corresponding two-dimensional laser-radar data, and then to build the two-dimensional plane map of the monitored area according to the laser-radar data and the odometry information corresponding to the laser-radar data.
  11. The robot according to claim 8, characterized in that:
    the judgment module being configured to locate the robot's current position in the two-dimensional plane map according to the current depth data, current odometry information and the two-dimensional plane map of the monitored area, and to judge whether an abnormal factor exists at the current position, comprises: the judgment module is configured to judge whether there is an obstacle not marked in the two-dimensional plane map; when an obstacle not marked in the two-dimensional plane map exists, the abnormal factor is deemed to exist, and when no such obstacle exists, the abnormal factor is deemed not to exist;
    the execution module being configured to perform a corresponding operation according to the abnormal factor when the abnormal factor exists comprises: the execution module is configured to, when the unmarked obstacle exists, mark the unmarked obstacle in the two-dimensional plane map according to the current depth data and current odometry information and update the two-dimensional plane map; and then to update the monitoring route according to the current position and the updated two-dimensional plane map and patrol the monitored area according to the updated monitoring route.
  12. The robot according to claim 8, characterized in that:
    the judgment module being configured to locate the robot's current position in the two-dimensional plane map according to the current depth data, current odometry information and the two-dimensional plane map of the monitored area, and to judge whether an abnormal factor exists at the current position, comprises: the judgment module is configured to judge whether human skeleton data is recognized; when the human skeleton data is recognized, the abnormal factor is deemed to exist, and when the human skeleton data is not recognized, the abnormal factor is deemed not to exist;
    the execution module being configured to perform a corresponding operation according to the abnormal factor when the abnormal factor exists comprises: the execution module is configured to, when human skeleton data is recognized, move toward the living body corresponding to the human skeleton data;
    and to acquire the current facial features of the living body;
    and, when the current facial features of the living body are successfully acquired, to match the current facial features against the preset facial features in a preset facial-feature database; when the matching succeeds, the abnormal factor is deemed not to exist; when the matching fails, a tracking operation is performed on the living body and a warning message is issued.
  13. The robot according to claim 12, characterized in that the execution module being configured to perform a corresponding operation according to the abnormal factor when the abnormal factor exists further comprises:
    the execution module is configured to, when the current facial features of the living body are not successfully acquired, request password information from the living body;
    and to match the acquired password information against the preset password information in a preset password database; when the matching succeeds, the abnormal factor is deemed not to exist; when the matching fails, a tracking operation is performed on the living body and a warning message is issued.
  14. The robot according to claim 8, characterized by further comprising:
    a smoke detection module, configured to acquire a current smoke concentration value each time a preset detection interval is reached while the robot patrols the monitored area according to the monitoring route;
    the judgment module is further configured to judge whether the current smoke concentration value exceeds a preset smoke concentration threshold;
    the execution module is further configured to issue a warning message when the current smoke concentration value exceeds the preset smoke concentration threshold, and to continue patrolling the monitored area according to the monitoring route when the current smoke concentration value does not exceed the preset smoke concentration threshold.
PCT/CN2017/108725 2016-12-14 2017-10-31 Robot security inspection method based on environment map and robot thereof WO2018107916A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/870,857 US20180165931A1 (en) 2016-12-14 2018-01-13 Robot security inspection method based on environment map and robot thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611154363.6 2016-12-14
CN201611154363.6A CN106598052A (zh) 2016-12-14 2016-12-14 Robot security inspection method based on environment map and robot thereof

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/870,857 Continuation US20180165931A1 (en) 2016-12-14 2018-01-13 Robot security inspection method based on environment map and robot thereof

Publications (1)

Publication Number Publication Date
WO2018107916A1 true WO2018107916A1 (zh) 2018-06-21

Family

ID=58802410

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108725 WO2018107916A1 (zh) 2016-12-14 2017-10-31 Robot security inspection method based on environment map and robot thereof

Country Status (2)

Country Link
CN (1) CN106598052A (zh)
WO (1) WO2018107916A1 (zh)

Also Published As

Publication number Publication date
CN106598052A (zh) 2017-04-26

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17880431

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17880431

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/05/2020)
