CN112405540B - Robot control method, device, robot and readable storage medium - Google Patents
- Publication number
- CN112405540B (application number CN202011259304.1A)
- Authority
- CN
- China
- Prior art keywords
- intersection
- robot
- target
- following target
- crowd
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Abstract
The application discloses a robot control method, a device, a robot, and a readable storage medium. The robot control method, applied to a robot, comprises the following steps: acquiring an intersection image of the intersection to be passed through where the robot is located; selecting a following target from the target crowd in the intersection image; determining, based on behavior information of the following target, whether the robot can pass through the intersection; and controlling the robot to pass through the intersection based on that behavior information. The application thereby addresses the technical problem of poor robot control performance.
Description
Technical Field
The present application relates to the field of intelligent device control technologies, and in particular, to a robot control method and apparatus, a robot, and a readable storage medium.
Background
With the rapid development of robots, robots are being applied ever more widely. When a robot works in a city, it frequently needs to cross roads. At present, an intersection image is captured by the robot's camera, and the traffic lights in the image are recognized to decide whether to cross. However, because the colors of traffic lights are easily affected by other light sources, such as building lights and vehicle lights, the accuracy with which the robot recognizes traffic lights is low; consequently, the accuracy of deciding whether to cross the road by recognizing the intersection's traffic lights is also low, which degrades the control performance of the robot.
Disclosure of Invention
The present application mainly aims to provide a robot control method, an apparatus, a robot, and a readable storage medium, so as to solve the technical problem of poor robot control performance in the prior art.
To achieve the above object, the present application provides a robot control method applied to a robot, the robot control method including:
acquiring an intersection image of the intersection to be passed through where the robot is located, and selecting a following target from the target crowd in the intersection image;
and controlling the robot to pass through the intersection to be passed through based on behavior information of the following target.
Optionally, the step of controlling the robot to pass through the intersection to be passed through based on the behavior information of the following target includes:
visually following the following target to analyze whether the following target is passing through the intersection to be passed through;
if so, controlling the robot to pass through the intersection to be passed through;
and if not, controlling the robot to wait in place.
Optionally, the step of visually following the following target to analyze whether the following target is passing through the intersection to be passed through includes:
performing image recognition on the intersection image to determine an intersection waiting area of the intersection to be passed through;
and visually following the following target, and analyzing whether the following target is passing through the intersection to be passed through by monitoring whether the following target moves beyond the intersection waiting area.
Optionally, after the step of visually following the following target and analyzing, by monitoring whether the following target moves beyond the intersection waiting area, whether the following target is passing through the intersection to be passed through, the robot control method further includes:
acquiring crowd movement behavior information of the target crowd corresponding to the following target;
judging whether the crowd movement behavior information matches the behavior information;
if not, returning to the step of selecting the following target from the target crowd in the intersection image;
and if so, executing the step of controlling the robot to pass through the intersection to be passed through based on the behavior information of the following target.
Optionally, the step of selecting a following target from the crowd in the intersection image includes:
if the intersection to be passed through is a multidirectional intersection, acquiring motion path information of the robot;
and selecting the following target from the crowd in the intersection image based on the motion path information.
Optionally, the step of selecting the following target from the crowd in the intersection image based on the motion path information includes:
determining a traveling direction of the robot based on the motion path information;
identifying face orientation information in the intersection image, and determining, based on the face orientation information, a target crowd in the intersection image that matches the traveling direction;
and selecting the following target from the target crowd.
Optionally, the face orientation information at least comprises a face orientation,
and the step of determining, based on the face orientation information, a target crowd in the intersection image that matches the traveling direction comprises:
selecting, from the face orientations, each target face orientation consistent with the traveling direction;
and selecting the target crowd from the intersection image based on each target face orientation.
The present application further provides a robot control device. The robot control device is applied to a robot and is a virtual device, and the robot control device includes:
a selection module, used for acquiring an intersection image of the intersection to be passed through where the robot is located, and selecting a following target from the target crowd in the intersection image;
and a control module, used for controlling the robot to pass through the intersection to be passed through based on behavior information of the following target.
Optionally, the control module is further configured to:
visually follow the following target to analyze whether the following target is passing through the intersection to be passed through;
if so, control the robot to pass through the intersection to be passed through;
and if not, control the robot to wait in place.
Optionally, the control module is further configured to:
perform image recognition on the intersection image to determine an intersection waiting area of the intersection to be passed through;
and visually follow the following target, and analyze whether the following target is passing through the intersection to be passed through by monitoring whether the following target moves beyond the intersection waiting area.
Optionally, the robot control apparatus is further configured to:
acquire crowd movement behavior information of the target crowd corresponding to the following target;
judge whether the crowd movement behavior information matches the behavior information;
if not, return to the step of selecting the following target from the target crowd in the intersection image;
and if so, execute the step of controlling the robot to pass through the intersection to be passed through based on the behavior information of the following target.
Optionally, the selecting module is further configured to:
if the intersection to be passed through is a multidirectional intersection, acquire motion path information of the robot;
and select the following target from the crowd in the intersection image based on the motion path information.
Optionally, the selecting module is further configured to:
determine a traveling direction of the robot based on the motion path information;
identify face orientation information in the intersection image, and determine, based on the face orientation information, a target crowd in the intersection image that matches the traveling direction;
and select the following target from the target crowd.
Optionally, the selecting module is further configured to:
select, from the face orientations, each target face orientation consistent with the traveling direction;
and select the target crowd from the intersection image based on each target face orientation.
The present application further provides a robot. The robot includes a robot control device, which is a physical device, and the robot control device includes: a memory, a processor, and a program of the robot control method stored on the memory and executable on the processor. The program of the robot control method, when executed by the processor, implements the steps of the robot control method described above.
The present application also provides a readable storage medium having stored thereon a program for implementing the robot control method, which, when executed by a processor, implements the steps of the robot control method described above.
Compared with the prior-art technique in which the robot decides whether to cross the road by recognizing traffic lights in the intersection image, the present application acquires an intersection image of the intersection to be passed through where the robot is located and selects a following target from the target crowd in the intersection image. The timing for crossing the road is then determined based on behavior information of the following target, and the robot is controlled to follow the following target through the intersection to be passed through, so that the robot crosses the road without relying on traffic-light recognition. This overcomes the technical defect that, because the colors of traffic lights are easily affected by other light sources, the robot recognizes traffic lights with low accuracy and therefore decides whether to cross the road with low accuracy, and thereby improves the control performance of the robot.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of a robot control method according to the present application;
FIG. 2 is a schematic flow chart of a robot control method according to a second embodiment of the present application;
fig. 3 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the robot control method of the present application, referring to fig. 1, the robot control method is applied to a robot and includes:
step S10, acquiring an intersection image of the intersection to be passed through where the robot is located, and selecting a following target from the target crowd in the intersection image;
in this embodiment, it should be noted that a camera is disposed on the robot and is configured to capture a surrounding image of the robot within a preset area range, where the preset area range may be a circular area with a preset radius and a circle center of the robot, and the following target at least includes one person in the target group.
An intersection image of the intersection to be passed through where the robot is located is acquired, and a following target is selected from the target crowd in the intersection image. Specifically, the camera on the robot captures an intersection image at the intersection to be passed through, and the people in the intersection image are recognized to obtain the candidate crowds. If the intersection to be passed through is a one-way intersection, one crowd is randomly selected from the candidate crowds as the target crowd, and a preset number of people are selected from the target crowd as the following target. If the intersection to be passed through is a multidirectional intersection, the crowd whose crossing direction is consistent with the traveling direction of the robot is selected from the candidate crowds as the target crowd based on the crossing directions of the candidate crowds, and a preset number of people are selected from the target crowd as the following target.
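The selection step above can be sketched as follows. The data layout (crowds as dicts with a crossing direction and member list) and the function name are assumptions for illustration, not part of the patent:

```python
import random

def select_following_target(intersection_type, candidate_crowds,
                            robot_direction=None, num_targets=1):
    """Select a target crowd and a preset number of following targets.

    candidate_crowds: list of dicts, each with a 'direction' (the crowd's
    crossing direction) and 'members' (person identifiers); this layout
    is illustrative, not specified by the patent.
    """
    if intersection_type == "one_way":
        # One-way intersection: any crowd will do, so pick one at random.
        target_crowd = random.choice(candidate_crowds)
    else:
        # Multidirectional intersection: the crowd's crossing direction
        # must be consistent with the robot's traveling direction.
        matching = [c for c in candidate_crowds
                    if c["direction"] == robot_direction]
        if not matching:
            return []
        target_crowd = matching[0]
    return target_crowd["members"][:num_targets]
```

For example, at a multidirectional intersection, only a crowd whose crossing direction equals the robot's traveling direction yields candidates for the following target.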
Further, in step S10, the step of selecting a following target from the people in the intersection image includes:
step S11, if the intersection to be passed through is a multidirectional intersection, the motion path information of the robot is obtained;
in this embodiment, it should be noted that the multi-directional intersection is an intersection where a plurality of routes go, and the motion path information is motion path information when the robot works, where the motion path information includes a traveling direction, a traveling speed, a traveling path, and the like of the robot.
And step S12, selecting the following target from the crowd in the intersection image based on the motion path information.
In this embodiment, the following target is selected from the crowd in the intersection image based on the motion path information. Specifically, the traveling direction of each crowd in the intersection image is recognized to obtain the crowd traveling direction of each crowd. The candidate crowd traveling directions consistent with the traveling direction of the robot are then selected from these crowd traveling directions. A target crowd is then selected from the crowds corresponding to the candidate crowd traveling directions, and a preset number of people are selected from the target crowd as the following target.
The step of recognizing the traveling direction of each crowd in the intersection image to obtain the crowd traveling direction of each crowd further includes:
and acquiring the face orientation of each person in each group of people in the intersection image. And then based on the face orientation of each person in each crowd, taking the face orientation of each person in each crowd as the individual advancing direction. And then the crowd traveling direction of each crowd can be determined based on the individual traveling direction of each person in each crowd. For example, assuming that there is a crowd S including 10 persons, where a person traveling direction of 9 persons out of the 10 persons is a and a traveling direction of one person is B, the ratio of the number of person traveling directions of the crowd having the person traveling direction a is 90%, and if the preset threshold value of the ratio of the number of person traveling directions is 80%, the direction a is taken as the crowd traveling direction because the ratio of the number of person traveling directions is greater than the preset threshold value of the ratio of the number of person traveling directions.
Further, in step S12, the step of selecting the following target from the crowd in the intersection image based on the motion path information includes:
step S121, determining the traveling direction of the robot based on the motion path information;
step S122, identifying face orientation information in the intersection image, and determining, based on the face orientation information, a target crowd in the intersection image that matches the traveling direction;
in this embodiment, the face orientation information at least includes a face orientation.
Face orientation information in the intersection image is identified, and a target crowd matching the traveling direction is determined in the intersection image based on the face orientation information. Specifically, the face orientations of all persons in the intersection image are recognized and taken as their respective individual traveling directions. All persons in the intersection image are then classified based on their individual traveling directions and the traveling direction of the robot: persons whose individual traveling directions are consistent with the traveling direction of the robot form the first group, and persons whose individual traveling directions differ from the traveling direction of the robot form the second group. The first group is then taken as the target crowd matching the face orientation information.
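A minimal sketch of this classification step follows; the mapping layout (person id to face orientation) is an assumed representation, not specified by the patent:

```python
def classify_by_direction(face_orientations, robot_direction):
    """Partition persons into the first group (face orientation consistent
    with the robot's traveling direction, i.e. the target crowd) and the
    second group (all others).

    face_orientations: mapping from person id to that person's face
    orientation, taken as the individual traveling direction.
    """
    first_group, second_group = [], []
    for person, orientation in face_orientations.items():
        if orientation == robot_direction:
            first_group.append(person)
        else:
            second_group.append(person)
    return first_group, second_group
```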
Further, in step S122, the face orientation information at least includes a face orientation,
and the step of determining, based on the face orientation information, a target crowd in the intersection image that matches the traveling direction includes:
step A10, selecting, from the face orientations, each target face orientation consistent with the traveling direction;
in the present embodiment, since a person usually has a face orientation as a traveling direction, and if the face orientation of the person coincides with the traveling direction of the robot, the traveling directions of the person and the robot should coincide with each other.
Step A20, selecting the target crowd from the intersection image based on each target face orientation.
In this embodiment, the target crowd is selected from the intersection image based on each target face orientation. Specifically, the crowd composed of the persons corresponding to the target face orientations in the intersection image is taken as the target crowd.
And step S123, selecting the following target from the target crowd.
In this embodiment, a preset number of people are selected from the target crowd as the following target.
And step S20, controlling the robot to pass through the intersection to be passed through based on the behavior information of the following target.
In this embodiment, the behavior information includes movement behavior information of the following target, where the movement behavior information includes a movement position, a movement path, a movement direction, and the like.
The robot is controlled to pass through the intersection to be passed through based on the behavior information of the following target. Specifically, the intersection image is recognized to identify the intersection waiting area of the intersection to be passed through, i.e., the area where the crowd waits to cross, and the following target is visually followed. During visual following, the movement behavior of the following target is monitored to obtain its movement behavior information, and whether the following target has moved beyond the intersection waiting area in the traveling direction of the robot is judged based on that information. If so, the following target has passed through the intersection to be passed through, so the robot is judged able to pass; the robot is then controlled to keep a preset distance from the following target and to follow it through the intersection to be passed through. If not, the following target has not passed through the intersection to be passed through, and the robot is judged unable to pass.
Compared with the prior-art technique in which the robot decides whether to cross the road by recognizing traffic lights in the intersection image, the embodiment of the present application provides a robot control method. In this embodiment, an intersection image of the intersection to be passed through where the robot is located is acquired, and a following target is selected from the target crowd in the intersection image. The timing for crossing the road is then determined based on behavior information of the following target, and the robot is controlled to follow the following target through the intersection to be passed through, so that the robot crosses the road without relying on traffic-light recognition. This overcomes the technical defect that, because the colors of traffic lights are easily affected by other light sources, the robot recognizes traffic lights with low accuracy and therefore decides whether to cross the road with low accuracy, and thereby improves the control performance of the robot.
Further, referring to fig. 2, in another embodiment of the robot control method based on the first embodiment of the present application, in step S20, the step of controlling the robot to pass through the intersection to be passed through based on the behavior information of the following target includes:
step S21, visually following the following target to analyze whether the following target is passing through the intersection to be passed through;
In this embodiment, it should be noted that when the intersection to be passed through cannot yet be crossed, the crowd generally waits in the intersection waiting area, and the behavior information includes movement behavior information.
The following target is visually followed to analyze whether it is passing through the intersection to be passed through. Specifically, the intersection image is recognized to identify the intersection waiting area of the intersection to be passed through, i.e., the area where the crowd waits to cross; the following target is then visually followed to monitor its movement behavior in the intersection waiting area. Based on this movement behavior, whether the following target has moved beyond the intersection waiting area is judged, and the judgment result is taken as the movement behavior information.
Further, in step S21, the step of visually following the following target to analyze whether the following target is passing through the intersection to be passed through includes:
step S211, performing image recognition on the intersection image to determine an intersection waiting area of the intersection to be passed through;
In this embodiment, the intersection image is recognized to determine the intersection waiting area of the intersection to be passed through. Specifically, image recognition is performed on the intersection image based on a preset image recognition model to recognize the position coordinates of the intersection waiting area in the intersection image, and the intersection waiting area of the intersection to be passed through is then determined from those position coordinates.
Step S212, visually following the following target, and analyzing whether the following target is passing through the intersection to be passed through by monitoring whether the following target moves beyond the intersection waiting area;
In this embodiment, the following target is visually followed, and whether it is passing through the intersection to be passed through is analyzed by monitoring whether it moves beyond the intersection waiting area. Specifically, the following target is visually followed so that its spatial position is monitored in real time, and whether that spatial position exceeds the intersection waiting area in the traveling direction of the robot is judged. If so, the following target is judged to have moved beyond the intersection waiting area, indicating that it is passing through the intersection to be passed through; if not, the following target is judged not to have moved beyond the intersection waiting area, indicating that it is not passing through.
Wherein, after the step of visually following the following target and analyzing, by monitoring whether the following target moves beyond the intersection waiting area, whether the following target is passing through the intersection to be passed through, the robot control method further includes:
step B21, acquiring crowd movement behavior information of the target crowd corresponding to the following target;
In this embodiment, the crowd movement behavior information of the target crowd corresponding to the following target is acquired. Specifically, the second movement behaviors of the other people in the target crowd corresponding to the following target are monitored. Based on these second movement behaviors, whether each of the other people has moved beyond the intersection waiting area is judged to obtain the second movement behavior judgment results corresponding to the second movement behaviors, and these judgment results are taken as the crowd movement behavior information.
Step B22, judging whether the crowd movement behavior information matches the behavior information;
In this embodiment, it should be noted that the behavior information includes movement behavior information.
Whether the crowd movement behavior information matches the movement behavior information is judged. Specifically, among the second movement behavior judgment results, each target judgment result consistent with the movement behavior judgment result of the following target is determined, and the number of target judgment results is counted. The ratio of this number to the number of second movement behavior judgment results is then calculated to obtain the judgment-result quantity ratio. Whether this ratio is greater than a preset judgment-result quantity ratio threshold is then judged: if so, the crowd movement behavior information is judged to match the movement behavior information; if not, it is judged not to match.
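This matching rule can be sketched as follows; the 0.5 default threshold is an assumption, since the patent only names a "preset judgment-result quantity ratio threshold":

```python
def crowd_matches_target(second_judgments, target_judgment,
                         ratio_threshold=0.5):
    """Judge whether the crowd's movement matches the following target's.

    second_judgments: one boolean per other person in the target crowd,
    True if that person moved beyond the intersection waiting area.
    target_judgment: the same boolean for the following target.
    Returns True when the share of agreeing judgments exceeds the
    preset threshold.
    """
    if not second_judgments:
        return False
    agreeing = sum(1 for j in second_judgments if j == target_judgment)
    return agreeing / len(second_judgments) > ratio_threshold
```

If the crowd does not match, the target is likely not crossing lawfully and a new following target is selected (step B23); if it matches, the robot proceeds based on the target's behavior (step B24).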
Step B23, if not, returning to the step of selecting the following target from the target crowd in the intersection image;
In this embodiment, if they do not match, the method returns to the step of selecting the following target from the target crowd in the intersection image. Specifically, if the crowd movement behavior information does not match the movement behavior information, it is determined that the following target is not passing through the intersection to be passed through in accordance with the traffic rules, and the method returns to the step of selecting the following target from the target crowd in the intersection image.
And B24, if yes, controlling the robot to pass through the intersection to be passed through based on the behavior information of the following target.
In this embodiment, if they match, the step of controlling the robot to pass through the intersection to be passed through based on the behavior information of the following target is executed. Specifically, if the crowd movement behavior information matches the movement behavior information, it is determined that the following target is passing through the intersection to be passed through in accordance with the traffic rules, and the step of controlling the robot to pass through the intersection to be passed through based on the behavior information of the following target is executed.
Step S22, if so, controlling the robot to pass through the intersection to be passed through;
In this embodiment, if so, the robot is controlled to pass through the intersection to be passed through. Specifically, if the following target is passing through the intersection to be passed through, it is established that the intersection can be crossed at this moment, and the robot is controlled to pass through the intersection to be passed through.
And step S23, if not, controlling the robot to wait in place.
In this embodiment, if not, the robot is controlled to wait in place. Specifically, if the following target is not passing through the intersection to be passed through, it is established that the intersection cannot be crossed at this moment, and the robot is controlled to wait in place.
In this embodiment, the following target is first visually followed to analyze whether it is passing through the intersection to be passed through; whether the following target is passing through the intersection constitutes the behavior information of the following target. If so, the robot is controlled to pass through the intersection to be passed through; if not, the robot is controlled to wait in place. The robot is thus controlled to pass through the intersection to be passed through based on the behavior information of the following target, without relying on traffic-light recognition to cross the road. This lays the foundation for overcoming the technical defect that, because the colors of traffic lights are easily affected by other light sources, the robot recognizes traffic lights with low accuracy and therefore decides whether to cross the road with low accuracy.
Referring to fig. 3, fig. 3 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 3, the robot control apparatus may include: a processor 1001 (for example, a CPU), a memory 1005, and a communication bus 1002; the robot control device may be a display terminal such as a smart TV or a smart phone. The communication bus 1002 is used for realizing connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the robot control device may further include a user interface, a network interface, a camera, RF (Radio Frequency) circuitry, a sensor, audio circuitry, a WiFi module, and the like. The user interface may comprise a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and may optionally also comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the configuration of the robotic control device illustrated in fig. 3 does not constitute a limitation of the robotic control device and may include more or fewer components than illustrated, or some components may be combined, or a different arrangement of components.
As shown in fig. 3, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, and a robot control method program. The operating system is a program that manages and controls the hardware and software resources of the device, and supports the operation of the robot control method program and other software and/or programs. The network communication module is used for realizing communication among the components within the memory 1005 and with other hardware and software in the robot control system.
In the robot control apparatus shown in fig. 3, the processor 1001 is configured to execute a robot control method program stored in the memory 1005 to implement the steps of the robot control method described in any one of the above.
The specific implementation of the robot control device of the present application is substantially the same as that of each embodiment of the robot control method, and is not described herein again.
An embodiment of the present application provides a robot control apparatus, the robot control apparatus being applied to a robot, the robot control apparatus comprising:
the selection module is used for acquiring an intersection image of the intersection to be passed where the robot is located, and selecting a following target from the target crowd in the intersection image;
and the control module is used for controlling the robot to pass through the intersection to be passed through based on the behavior information of the following target.
Optionally, the control module is further configured to:
visually following the following target to analyze whether the following target passes through the intersection to be passed;
if so, controlling the robot to pass through the intersection to be passed through;
and if not, controlling the robot to wait in situ.
Optionally, the control module is further configured to:
carrying out image recognition on the intersection image to determine an intersection waiting area in the intersection to be passed;
and visually following the following target, and analyzing whether the following target passes through the intersection to be passed through by monitoring whether the following target moves beyond the intersection waiting area.
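One simple way to realize the waiting-area check above is to model the intersection waiting area, determined by image recognition, as an axis-aligned rectangle in image coordinates and flag the target once its tracked position leaves that rectangle. This is a hypothetical sketch; the rectangle model and the helper names are assumptions, not the embodiment's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class WaitingArea:
    """Intersection waiting area as an axis-aligned rectangle in image
    coordinates (a simplifying modeling assumption)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def target_moved_beyond(area: WaitingArea,
                        track: list[tuple[float, float]]) -> bool:
    """True once any tracked position of the following target lies outside
    the waiting area, i.e. the target is passing the intersection."""
    return any(not area.contains(x, y) for x, y in track)
```

The tracked positions would come from the visual-following module; the check itself then reduces to a per-frame point-in-rectangle test.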
Optionally, the robot control apparatus is further configured to:
acquiring crowd movement behavior information of a target crowd corresponding to the following target;
judging whether the crowd moving behavior information is matched with the behavior information;
if not, returning to the step of selecting the following target from the target crowd in the intersection image;
and if so, executing a step of controlling the robot to pass through the intersection to be passed based on the behavior information of the following target.
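The matching between the crowd's movement behavior and the following target's behavior could, for example, be a direction-consistency test: the target is trusted only when its displacement direction agrees with the crowd's mean direction, and is otherwise reselected. The cosine-similarity criterion and threshold below are illustrative assumptions, not taken from the patent:

```python
import math

def _unit(dx: float, dy: float) -> tuple[float, float]:
    """Normalize a displacement vector (zero vectors map to themselves)."""
    n = math.hypot(dx, dy)
    return (dx / n, dy / n) if n else (0.0, 0.0)

def behaviors_match(target_disp: tuple[float, float],
                    crowd_disps: list[tuple[float, float]],
                    cos_threshold: float = 0.7) -> bool:
    """True when the following target moves in roughly the same direction
    as the crowd's mean displacement; if False, the following target
    should be reselected, as in the embodiment above."""
    if not crowd_disps:
        return False
    mx = sum(d[0] for d in crowd_disps) / len(crowd_disps)
    my = sum(d[1] for d in crowd_disps) / len(crowd_disps)
    tx, ty = _unit(*target_disp)
    cx, cy = _unit(mx, my)
    return tx * cx + ty * cy >= cos_threshold
```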
Optionally, the selecting module is further configured to:
if the intersection to be passed through is a multidirectional intersection, acquiring the motion path information of the robot;
and selecting the following target from the crowd in the intersection image based on the motion path information.
Optionally, the selecting module is further configured to:
determining a travel direction of the robot based on the motion path information;
identifying face orientation information in the intersection image, and determining a target crowd matched with the advancing direction in the intersection image based on the face orientation information;
and selecting the following target from the target crowd.
Optionally, the selecting module is further configured to:
selecting the orientation of each target face consistent with the advancing direction from the orientations of the faces;
and selecting the target crowd from the intersection image based on the orientation of each target face.
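The face-orientation selection above can be sketched as an angular tolerance test: a face whose orientation lies within some tolerance of the robot's travel direction is taken to be consistent with it, and those people form the target crowd. Representing orientations as angles in degrees, and the 30-degree tolerance, are illustrative assumptions:

```python
def angle_diff_deg(a: float, b: float) -> float:
    """Smallest signed difference between two angles, in degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

def select_target_crowd(face_orientations_deg: list[float],
                        travel_direction_deg: float,
                        tolerance_deg: float = 30.0) -> list[int]:
    """Indices of the people whose face orientation is consistent with the
    robot's travel direction; the following target would then be selected
    from this target crowd."""
    return [i for i, yaw in enumerate(face_orientations_deg)
            if abs(angle_diff_deg(yaw, travel_direction_deg)) <= tolerance_deg]
```

For example, with a travel direction of 0 degrees, faces at 0 and 350 degrees are within the 30-degree tolerance while faces at 90 and 180 degrees are not.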
The specific implementation of the robot control apparatus of the present application is substantially the same as that of the above embodiments of the robot control method, and is not described herein again.
The embodiment of the application provides a readable storage medium, and the readable storage medium stores one or more programs, which can be executed by one or more processors for implementing the steps of the robot control method described in any one of the above.
The specific implementation of the readable storage medium of the present application is substantially the same as that of each embodiment of the robot control method, and is not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.
Claims (8)
1. A robot control method, comprising:
acquiring an intersection image of the intersection to be passed where the robot is located, and selecting a following target from a target crowd in the intersection image;
carrying out image recognition on the intersection image to determine an intersection waiting area in the intersection to be passed;
visually following the following target, and analyzing whether the following target passes through the intersection to be passed through by monitoring whether the following target moves beyond the intersection waiting area;
if so, controlling the robot to pass through the intersection to be passed through;
and if not, controlling the robot to wait in situ.
2. The robot control method according to claim 1, wherein after the step of visually following the following target, analyzing whether the following target is passing through the intersection to be passed by monitoring whether the following target moves beyond the intersection waiting area, the robot control method further comprises:
acquiring crowd movement behavior information of a target crowd corresponding to the following target;
judging whether the crowd moving behavior information is matched with the behavior information;
if not, returning to the step of selecting the following target from the target crowd in the intersection image;
and if so, executing a step of controlling the robot to pass through the intersection to be passed based on the behavior information of the following target.
3. The robot control method according to claim 1, wherein the step of selecting a following target from the population in the intersection image comprises:
if the intersection to be passed through is a multidirectional intersection, acquiring the motion path information of the robot;
and selecting the following target from the crowd in the intersection image based on the motion path information.
4. The robot control method according to claim 3, wherein the step of selecting the following target among the population in the intersection image based on the movement path information includes:
determining a travel direction of the robot based on the motion path information;
identifying face orientation information in the intersection image, and determining a target crowd matched with the advancing direction in the intersection image based on the face orientation information;
and selecting the following target from the target group.
5. The robot control method according to claim 4, wherein the face orientation information includes at least a face orientation,
the step of determining a target crowd matching the traveling direction in the intersection image based on the face orientation information comprises:
selecting the orientation of each target face consistent with the advancing direction from the orientations of the faces;
and selecting the target crowd from the intersection image based on the orientation of each target face.
6. A robot control apparatus, comprising:
the selection module is used for acquiring an intersection image of the intersection to be passed where the robot is located and selecting a following target from the target crowd in the intersection image;
the control module is used for carrying out image recognition on the intersection image so as to determine an intersection waiting area in the intersection to be passed; visually following the following target, and analyzing whether the following target passes through the intersection to be passed through by monitoring whether the following target moves beyond the intersection waiting area; if so, controlling the robot to pass through the intersection to be passed through; and if not, controlling the robot to wait in situ.
7. A robot comprising a robot control apparatus, characterized in that the robot control apparatus comprises: a memory, a processor, and a program, stored on the memory, for implementing the robot control method,
the memory is used for storing a program for realizing the robot control method;
the processor is configured to execute a program implementing the robot control method to implement the steps of the robot control method according to any one of claims 1 to 5.
8. A readable storage medium, characterized in that the readable storage medium has stored thereon a program for implementing a robot control method, the program being executed by a processor to implement the steps of the robot control method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011259304.1A CN112405540B (en) | 2020-11-11 | 2020-11-11 | Robot control method, device, robot and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112405540A CN112405540A (en) | 2021-02-26 |
CN112405540B true CN112405540B (en) | 2022-01-07 |
Family
ID=74830796
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011259304.1A Active CN112405540B (en) | 2020-11-11 | 2020-11-11 | Robot control method, device, robot and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112405540B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107807652A (en) * | 2017-12-08 | 2018-03-16 | 灵动科技(北京)有限公司 | Merchandising machine people, the method for it and controller and computer-readable medium |
CN108351654A (en) * | 2016-02-26 | 2018-07-31 | 深圳市大疆创新科技有限公司 | System and method for visual target tracking |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10671058B2 (en) * | 2015-09-03 | 2020-06-02 | Nec Corporation | Monitoring server, distributed-processing determination method, and non-transitory computer-readable medium storing program |
CN105759839B (en) * | 2016-03-01 | 2018-02-16 | 深圳市大疆创新科技有限公司 | Unmanned plane visual tracking method, device and unmanned plane |
CN106155065A (en) * | 2016-09-28 | 2016-11-23 | 上海仙知机器人科技有限公司 | A kind of robot follower method and the equipment followed for robot |
CN106774315B (en) * | 2016-12-12 | 2020-12-01 | 深圳市智美达科技股份有限公司 | Autonomous navigation method and device for robot |
KR101907548B1 (en) * | 2016-12-23 | 2018-10-12 | 한국과학기술연구원 | Moving and searching method of mobile robot for following human |
CN107160392A (en) * | 2017-05-26 | 2017-09-15 | 深圳市天益智网科技有限公司 | Method, device and terminal device and robot that view-based access control model is positioned and followed |
CN109992008A (en) * | 2017-12-29 | 2019-07-09 | 深圳市优必选科技有限公司 | Target following method and device for robot |
CN108673501B (en) * | 2018-05-17 | 2022-06-07 | 中国科学院深圳先进技术研究院 | Method and device for target following of a robot |
CN108908287A (en) * | 2018-09-09 | 2018-11-30 | 张伟 | Follow robot and its control method and device |
CN111571591B (en) * | 2020-05-22 | 2021-07-30 | 中国科学院自动化研究所 | Four-eye bionic eye device, device and method for searching target |
- 2020-11-11: CN CN202011259304.1A, patent CN112405540B (en), status: Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102189262B1 (en) | Apparatus and method for collecting traffic information using edge computing | |
Guo et al. | Dense construction vehicle detection based on orientation-aware feature fusion convolutional neural network | |
US20080126031A1 (en) | System and Method for Measuring Performances of Surveillance Systems | |
CN112014845A (en) | Vehicle obstacle positioning method, device, equipment and storage medium | |
CN112419722B (en) | Traffic abnormal event detection method, traffic control method, device and medium | |
CN112200830A (en) | Target tracking method and device | |
CN109598977A (en) | Vehicle checking method, device, system and server based on video monitoring | |
CN110738867B (en) | Parking space detection method, device, equipment and storage medium | |
CN113910224B (en) | Robot following method and device and electronic equipment | |
CN113674523A (en) | Traffic accident analysis method, device and equipment | |
CN113611131B (en) | Vehicle passing method, device, equipment and computer readable storage medium | |
CN115366920A (en) | Decision method and apparatus, device and medium for autonomous driving of a vehicle | |
CN109615925A (en) | Vehicle parking control method, device, system and server based on video monitoring | |
Salma et al. | Smart parking guidance system using 360o camera and haar-cascade classifier on iot system | |
CN114241373A (en) | End-to-end vehicle behavior detection method, system, equipment and storage medium | |
Dinh et al. | Development of a tracking-based system for automated traffic data collection for roundabouts | |
CN110738169B (en) | Traffic flow monitoring method, device, equipment and computer readable storage medium | |
CN112405540B (en) | Robot control method, device, robot and readable storage medium | |
US20220300774A1 (en) | Methods, apparatuses, devices and storage media for detecting correlated objects involved in image | |
CN112784707A (en) | Information fusion method and device, integrated detection equipment and storage medium | |
CN113460040A (en) | Parking path determination method and device, vehicle and storage medium | |
Chen et al. | Managing edge AI cameras for traffic monitoring | |
Byzkrovnyi et al. | Comparison of Object Detection Algorithms for the Task of Person Detection on Jetson TX2 NX Platform | |
CN113435352B (en) | Civilized city scoring method and device, electronic equipment and storage medium | |
Chavan et al. | Pothole detection system using YOLO v4 algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: Unit 7-11, 6th Floor, Building B2, No. 999-8 Gaolang East Road, Wuxi Economic Development Zone, Wuxi City, Jiangsu Province, China 214000 Patentee after: Youdi Robot (Wuxi) Co.,Ltd. Country or region after: China Address before: 5D, Building 1, Tingwei Industrial Park, No. 6 Liufang Road, Xingdong Community, Xin'an Street, Bao'an District, Shenzhen City, Guangdong Province Patentee before: UDITECH Co.,Ltd. Country or region before: China |