
CN109164812B - Multi-behavior fusion enzyme numerical membrane control method for a mobile robot in an unknown environment - Google Patents

Multi-behavior fusion enzyme numerical membrane control method for a mobile robot in an unknown environment

Info

Publication number
CN109164812B
CN109164812B (application CN201811235990.1A / CN201811235990A)
Authority
CN
China
Prior art keywords
robot
wall
target
obstacle
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811235990.1A
Other languages
Chinese (zh)
Other versions
CN109164812A (en)
Inventor
张葛祥
黄振
王学渊
荣海娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN201811235990.1A priority Critical patent/CN109164812B/en
Publication of CN109164812A publication Critical patent/CN109164812A/en
Application granted granted Critical
Publication of CN109164812B publication Critical patent/CN109164812B/en

Links

Images

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Feedback Control In General (AREA)

Abstract



The invention discloses a multi-behavior fusion enzyme numerical membrane control method for a mobile robot in an unknown environment. The method comprises the steps of calculating the straight-line distance and angle between the target and the robot, judging the environment in which the robot is located, judging whether the straight-line distance between the robot and the target is the historical minimum, judging whether the target and the obstacle or wall are on the same side of the robot, performing multi-behavior selection, executing the selected behavior, and judging whether the mobile robot has reached the target. The invention combines the enzyme numerical membrane system with a multi-behavior fusion algorithm and introduces the robot-target distance judgment and the judgment of whether the target, obstacle or wall is on the same side of the robot, which to a certain extent solves the deadlock and detour problems of autonomous navigation of a mobile robot in an unknown environment. An enzyme numerical membrane controller is used to fuse multiple behavior controllers, so that the robot can adapt to complex environments.


Description

Multi-behavior fusion enzyme numerical membrane control method for a mobile robot in an unknown environment
Technical Field
The invention relates to the technical field of intelligent robot control, in particular to a multi-behavior fusion enzyme numerical membrane control method for a mobile robot in an unknown environment.
Background
Membrane computing (the membrane system) is the youngest branch of natural computing. It is a computational model abstracted from the structures and functions of cells, organs and tissues and from the cooperative information processing of cell populations, and it possesses excellent characteristics such as parallelism, nondeterminism and distribution. Research has shown that, in theory, membrane computing models have the same computational power as the Turing machine, and may even exceed the Turing limit. The enzyme numerical membrane system is one kind of membrane system; it has the excellent characteristics of distribution, parallelism, easy programming and modularity, and is well suited to robot control.
Disclosure of Invention
The invention aims to provide a multi-behavior fusion enzyme numerical membrane control method for a mobile robot in an unknown environment. The mobile robot acquires environment information with its sensors, classifies the different environments, and, according to the environment and the distance and angle between the target and the robot, selects and executes one of several behaviors until the target point is reached.
The technical scheme for realizing the purpose of the invention is as follows:
a method for controlling a multi-behavior fusion enzyme numerical value film of a mobile robot in an unknown environment is characterized by comprising the following steps
Step 1: calculating the linear distance CurDest between the target and the robot and the Angle between the target and the forward direction of the robot;
Step 2: from the information acquired by the robot's onboard sensors, judge the environment in which the robot is located. Eleven environments are distinguished: left wall, right wall, corridor, left wall corner, right wall corner, front wall, upper left, upper right, both sides, dead corner, and no obstacle, denoted by C_i = 0 with i = 1, 2, ..., 11 respectively. The environments are grouped into 4 types: wall type E_wa (left wall, right wall, corridor, left wall corner, right wall corner), obstacle type E_ob (front wall, upper left, upper right), dead-zone type E_de (both sides, dead corner), and obstacle-free type E_no (no obstacle);
Step 3: judge whether the straight-line distance CurDist between the robot and the target is the historical minimum MinDist: when MinDist > CurDist, the current distance is a new historical minimum; set the variable IfMin = 1 and let MinDist = CurDist; otherwise IfMin = 0.
Step 4: judge whether the target and the obstacle or wall are on the same side of the robot:
Judge from Angle whether the target is on the left or right of the robot: when Angle > 0 the target is on the right of the robot, and when Angle < 0 the target is on the left;
If C_1 = 0, C_4 = 0 or C_7 = 0, the obstacle or wall is judged to be on the left side of the robot, represented by the variable IfLeft = 1; if C_2 = 0, C_5 = 0 or C_8 = 0, the obstacle or wall is judged to be on the right side of the robot, represented by IfLeft = 0;
When Angle > 0 and IfLeft = 1, the target and the obstacle or wall are not on the same side of the robot, and the variable IfSame = 0; when Angle > 0 and IfLeft = 0, they are on the same side and IfSame = 1; when Angle < 0 and IfLeft = 1, IfSame = 1; when Angle < 0 and IfLeft = 0, IfSame = 0.
Step 5: after multi-behavior selection is carried out, the behavior variables are output to the execution system:
when the robot is in CiWhen the environment is 0, i is 9, 10, the type of the dead zone is determined, and the dead zone is selected
Figure BDA0001837526030000021
When the robot is in CiWhen the environment is 0 and i is 11, judging as a barrier-free type: selecting Com if IfMin is 1gr1, if ifMin is 0
Figure BDA0001837526030000022
When the robot is in CiWhen the environment is 0, i is 6,7 and 8, the obstacle type is judged:
and if IfMin is equal to 0, selecting a corresponding obstacle avoidance behavior: i-6 selection
Figure BDA0001837526030000023
i-7 selection
Figure BDA0001837526030000024
i-8 selection
Figure BDA0001837526030000025
If ifMin is equal to 1, further judging whether the target and the obstacle are on the same side of the robot, and when Ifsame is equal to 1, selecting a corresponding obstacle avoidance behavior: i-6 selection
Figure BDA0001837526030000026
i-7 selection
Figure BDA0001837526030000027
i-8 selection
Figure BDA0001837526030000028
Selecting Com when IfSame is 0gr=1;
When the robot is in CiWhen i is equal to 1,2, 3,4, 5, the environment is judgedThe wall surface type:
if IfMin is 0, selecting the corresponding wall-following behavior: i is selected from 1 or 4
Figure BDA0001837526030000029
i is 2 or 5 selected
Figure BDA00018375260300000210
i-3 selection
Figure BDA00018375260300000211
If ifMin is equal to 1, further judging whether the target and the wall surface are on the same side of the robot, and if Ifsame is equal to 1, selecting a corresponding wall-following behavior: i is selected from 1 or 4
Figure BDA00018375260300000212
i is 2 or 5 selected
Figure BDA00018375260300000213
i-3 selection
Figure BDA00018375260300000214
Selecting Com when IfSame is 0gr=1;
The above behavior variables are respectively: front obstacle avoiding barrier
Figure BDA00018375260300000215
Obstacle avoidance device for left obstacle
Figure BDA00018375260300000216
Obstacle avoidance device for right obstacle
Figure BDA00018375260300000217
Left side wall face following wall
Figure BDA00018375260300000218
Right side wall surface following wall
Figure BDA00018375260300000219
Crossing channel
Figure BDA00018375260300000220
Tendency target ComgrIn-situ U-turn ComdeAnd self-rotation
Figure BDA00018375260300000221
Step 6: the execution system executes the behavior output in step 5;
Step 7: judge whether the mobile robot has reached the target, i.e. whether CurDist equals 0: if it equals 0, the robot has reached the target and control ends; if not, return to step 1 and continue.
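The selection logic of step 5 reduces to a small decision table per control cycle. A minimal Python sketch follows; the behavior-variable names (Com_oa_f, Com_wf_l, etc.) are plain-text renderings chosen here, since the patent's own symbols appear only as images:

```python
# Environment index -> behavior name (plain-text renderings, assumed notation).
AVOID = {6: "Com_oa_f", 7: "Com_oa_l", 8: "Com_oa_r"}  # front/left/right obstacle avoidance
WALL = {1: "Com_wf_l", 4: "Com_wf_l",                  # left wall / left corner -> follow left wall
        2: "Com_wf_r", 5: "Com_wf_r",                  # right wall / right corner -> follow right wall
        3: "Com_wf_c"}                                 # corridor -> crossing behavior

def select_behavior(i, if_min, if_same):
    """Step 5: pick one behavior variable from the environment index i (1..11),
    IfMin (new historical minimum distance) and IfSame (target and obstacle/wall
    on the same side)."""
    if i in (9, 10):                    # dead-zone type: U-turn in place
        return "Com_de"
    if i == 11:                         # obstacle-free type
        return "Com_gr" if if_min else "Com_ro"
    table = AVOID if i in AVOID else WALL
    if if_min and not if_same:          # new minimum and target on the other side: head for the goal
        return "Com_gr"
    return table[i]
```

Note the asymmetry the patent relies on: goal-reaching preempts avoidance or wall-following only when the distance is a new historical minimum and the target lies on the opposite side of the obstacle or wall.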
Further, in step 2, the environment of the robot is determined from the information acquired by its onboard sensors as follows. The acquired information consists of 8 distances d_x, x = 1, 2, ..., 8, around the robot, where d_1 and d_2 are obtained by the sensors on the right front of the robot, d_3 by the sensor on the right, d_4 by the sensor on the right rear, d_5 by the sensor on the left rear, d_6 by the sensor on the left, and d_7 and d_8 by the sensors on the left front. When d_x < t, where t is a distance threshold, the sensor has detected an obstacle. Each distance d_x is binarized, with 1 meaning an obstacle is detected and 0 meaning none is detected:

d_x = 1 if d_x < t, and d_x = 0 otherwise.

The environment of the robot is then determined from the table.
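A sketch of this binarization and table lookup. Table 1 itself appears only as an image in the patent, so the pattern table below contains just the attested all-zero "no obstacle" row, and the threshold value t = 0.1 is a hypothetical placeholder:

```python
def binarize(readings, t):
    """Binarize 8 distances d_1..d_8: 1 if d_x < t (obstacle detected), else 0."""
    return tuple(1 if d < t else 0 for d in readings)

# Hypothetical pattern table: only the 'no obstacle' row is derivable from the
# text; the remaining ten rows would come from Table 1 of the patent.
ENV_TABLE = {
    (0, 0, 0, 0, 0, 0, 0, 0): 11,   # C_11: no obstacle
}

def classify_environment(readings, t=0.1):
    """Return the environment index i (the i for which C_i = 0), or None
    if the binary pattern matches no known environment."""
    return ENV_TABLE.get(binarize(readings, t))
```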
The invention provides a multi-behavior fusion enzyme numerical membrane control method for robot control and membrane-system control in an unknown environment. It combines the enzyme numerical membrane system with a multi-behavior fusion algorithm, introduces the judgment of the distance between robot and target and the judgment of whether the target and the obstacle (or wall) are on the same side of the robot, and to a certain extent solves the deadlock and detour problems of autonomous navigation of a mobile robot in an unknown environment. An enzyme numerical membrane controller fuses multiple behavior controllers, so that the robot can adapt to complex environments.
Drawings
FIG. 1 is a multi-behavior fusion flow diagram;
FIG. 2 is a diagram of eleven robot environment modes;
FIG. 3 is a robot distance sensor profile;
FIG. 4 is a diagram of the multi-behavior fusion enzyme numerical membrane of the present invention;
FIG. 5 is a graph of experimental results comparing the present invention and fuzzy logic control in a G-type and convoluted obstacle environment.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings.
A multi-behavior fusion enzyme numerical membrane control flow chart is shown in FIG. 1.
The specific related technologies adopted by the invention are as follows:
1) Take the unknown environment and the robot target as input. Place the mobile robot in the unknown environment and give it a target point (x_g, y_g) to reach. With (x_r, y_r) denoting the robot's current position, the distance formula

CurDist = sqrt((x_g - x_r)^2 + (y_g - y_r)^2)

gives the distance CurDist between the target and the mobile robot, and the angle Angle of the target relative to the robot's forward direction is computed from an angle formula (taking the right of the robot's forward direction as positive).
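A sketch of this computation, assuming a robot pose (x_r, y_r, theta) in a world frame with theta measured counter-clockwise. The patent gives the formulas only as images, so the frame convention here is an assumption; the sign is chosen so that the right of the forward direction is positive, matching the text:

```python
import math

def distance_and_angle(xr, yr, theta, xg, yg):
    """Return (CurDist, Angle) to the target (xg, yg). Angle > 0 means the
    target is to the right of the robot's forward direction, Angle < 0 left."""
    cur_dist = math.hypot(xg - xr, yg - yr)
    bearing = math.atan2(yg - yr, xg - xr)          # world-frame bearing to the target
    # theta - bearing, wrapped into [-pi, pi): right of heading comes out positive
    angle = (theta - bearing + math.pi) % (2 * math.pi) - math.pi
    return cur_dist, angle
```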
2) Control of mobile robot
a) Acquisition and determination of the environment in which a mobile robot is located
In order to let the mobile robot cope with complicated environments, the invention divides the environment types into eleven modes (as shown in fig. 2): left wall, right wall, corridor, left wall corner, right wall corner, front wall, upper left, upper right, both sides, dead corner, and no obstacle, represented by C_i, i = 1, ..., 11, where C_i = 0 indicates the corresponding environment mode.
The mobile robot judges its environment from the information acquired by the mounted sensors. Specifically, the mobile robot carries 8 distance sensors distributed in a ring around it (see fig. 3), which acquire 8 distance values d_1, d_2, ..., d_8 (d_1 and d_2 from the sensors on the right front, d_3 from the right, d_4 from the right rear, d_5 from the left rear, d_6 from the left, and d_7 and d_8 from the left front). A threshold t is set; when d_x < t (x = 1, 2, ..., 8), the sensor is considered to have detected an obstacle, and the corresponding distance value d_x is binarized: 1 means an obstacle is detected and 0 means none is. After this processing, the binary sensor values of the robot's current environment are obtained. The eleven environments correspond to eleven binary sensor patterns (see table 1). The current binary pattern is compared with those of the eleven environment types to judge the current environment, which is then output to the multi-behavior fusion enzyme numerical membrane system.
The enzyme numerical membrane system classifies the input environment into 4 broad types: the wall type (left wall, right wall, corridor, left corner and right corner), the obstacle type (front wall, upper left and upper right), the dead-zone type (both sides and dead corner), and the obstacle-free type (no obstacle), represented by the variables E_wa, E_ob, E_de and E_no respectively.
TABLE 1 Environment type and binary sensor value correspondence table
(Table 1 appears only as an image in the original publication.)
b) Judging whether the distance between the robot and the target is the historical minimum value or not
The historical minimum distance between the robot and the target is stored in the variable MinDist (initialized with CurDist). Whether the current distance CurDist is the minimum is judged by comparing CurDist with MinDist: when MinDist > CurDist, the current value is a new historical minimum (indicated by the variable IfMin = 1), and MinDist is set to CurDist.
c) Determine whether the target, the obstacle (or the wall surface) are on the same side of the robot
The input Angle determines whether the target is on the left or right of the robot: Angle > 0 means the target is on the right; Angle < 0 means the target is on the left. The position of the obstacle (or wall) is determined from the input environment C_i, i = 1, ..., 11. C_7 = 0 denotes the upper-left environment, i.e. the obstacle is on the robot's left (variable IfLeft = 1); C_8 = 0 denotes the upper-right environment, i.e. the obstacle is on the robot's right (IfLeft = 0). C_1 = 0 (left wall) or C_4 = 0 (left corner) means the wall is on the robot's left (IfLeft = 1); C_2 = 0 (right wall) or C_5 = 0 (right corner) means the wall is on the robot's right (IfLeft = 0).
From these judgments it can be determined whether the target and the obstacle (or wall) are on the same side of the robot. When Angle > 0 and IfLeft = 1, they are not on the same side (variable IfSame = 0); when Angle > 0 and IfLeft = 0, IfSame = 1; when Angle < 0 and IfLeft = 1, IfSame = 1; when Angle < 0 and IfLeft = 0, IfSame = 0. Summarized:

IfSame = 1 if (Angle > 0 and IfLeft = 0) or (Angle < 0 and IfLeft = 1); otherwise IfSame = 0.
d) Multiple behavior selection
The invention defines five behaviors: obstacle avoidance (of front, left and right obstacles), wall following (following the left or right wall and crossing a corridor), goal reaching, in-place U-turn, and self-rotation. They are represented by the obstacle-avoidance behavior variables Com_oa^f, Com_oa^l and Com_oa^r, the wall-following behavior variables Com_wf^l, Com_wf^r and Com_wf^c, the goal-reaching behavior variable Com_gr, the in-place U-turn behavior variable Com_de, and the self-rotation behavior variable Com_ro.
When a specific condition is satisfied, a corresponding behavior is selected (i.e., the corresponding behavior variable value is assigned 1), and a behavior variable is output.
When the robot is in an environment C_i = 0 (i = 9 or 10), the dead-zone type is determined: the robot's direction of travel is blocked, and the in-place U-turn behavior Com_de = 1 is selected.
When the robot is in the environment C_11 = 0, the obstacle-free type is determined. If IfMin = 1, the goal-reaching behavior Com_gr = 1 is selected; if IfMin = 0, the self-rotation behavior Com_ro = 1 is selected.
When the robot is in an environment C_i = 0 (i = 6, 7 or 8), the obstacle type is determined. If IfMin = 0, the corresponding obstacle-avoidance behavior (Com_oa^f, Com_oa^l or Com_oa^r) = 1 is selected. If IfMin = 1, whether the target and the obstacle are on the same side of the robot is judged: when IfSame = 1, the corresponding obstacle-avoidance behavior is selected; when IfSame = 0, the goal-reaching behavior Com_gr = 1 is selected.
When the robot is in an environment C_i = 0 (i = 1, 2, ..., 5), the wall type is determined. If IfMin = 0, the corresponding wall-following behavior (Com_wf^l, Com_wf^r or Com_wf^c) = 1 is selected. If IfMin = 1, whether the target and the wall are on the same side of the robot is judged: when IfSame = 1, the corresponding wall-following behavior is selected; when IfSame = 0, Com_gr = 1 is selected.
The multi-behavior fusion enzyme numerical membrane system outputs behavior variables to an execution system.
3) Perform corresponding actions
The behavior output by the enzyme numerical membrane system is executed.
4) Determining whether the mobile robot reaches a target
Judge whether the distance value CurDist equals 0. If it equals 0, the mobile robot has reached the target and control ends. If not, the mobile robot continues to acquire the environment state and determine the behavior to execute until the target is reached.
The invention was simulated on a PC (2.8 GHz CPU, 4 GB RAM) with the software platforms MATLAB 2012, Windows 7 OS and Webots. The mobile robot Epuck is taken as an example: the Epuck has 8 infrared sensors and is driven differentially by two wheels.
Referring to fig. 4 and 5, the invention adopts the following steps:
step 1, taking unknown environment and robot target as input
The mobile robot Epuck is placed in the unknown environment shown in fig. 5. From the distance and angle formulas, the Epuck obtains the current distance D_cur to the target and the angle Angle, which are input to the multi-behavior fusion enzyme numerical membrane system of fig. 4. Angle is stored in the variables A_gr1 and A_gr2: A_gr1 = Angle, A_gr2 = -Angle.
Step 2. control of the mobile robot
a) Acquisition and determination of the type of environment in which a mobile robot is located
The environment modes are divided into 11 types: left wall, right wall, corridor, left wall corner, right wall corner, front wall, upper left, upper right, both sides, dead corner, and no obstacle, represented by C_i, i = 1, ..., 11, where C_i = 0 indicates the corresponding environment mode. The Epuck acquires 8 distance values d_1, d_2, ..., d_8 around the robot through its 8 infrared sensors, which are distributed in a ring (as in fig. 3; d_1 and d_2 from the sensors on the right front, d_3 from the right, d_4 from the right rear, d_5 from the left rear, d_6 from the left, and d_7 and d_8 from the left front). The threshold is 70: when d_x > 70 (x = 1, 2, ..., 8), the sensor is considered to have detected an obstacle and d_x is set to 1; when d_x < 70, d_x is set to 0, meaning no obstacle is detected. (The Epuck sensor value decreases as the distance to the obstacle increases.) The 8 processed values form a set of binary sensor values; for example, with no obstacle the set is [0 0 0 0 0 0 0 0]. The currently obtained binary set is compared with the 11 defined environment binary sets (see table 1) to determine the robot's environment C_i, which is output to the enzyme numerical membrane system of fig. 4.
The enzyme numerical membrane system classifies the input environment into 4 broad types: wall type (left wall, right wall, corridor, left corner, right corner), obstacle type (front wall, upper left, upper right), dead-zone type (both sides, dead corner), and obstacle-free type (no obstacle), represented by the variables E_wa, E_ob, E_de and E_no respectively. This step is accomplished by the Judge Environment membrane of fig. 4, which contains 11 rules Pr_{i,Case}, i = 1, 2, ..., 11. Rules 1-5 classify the robot's environment into the wall type (E_wa = 1), rules 6, 7 and 8 into the obstacle type (E_ob = 1), rules 9 and 10 into the dead-zone type (the dead-zone type corresponds directly to the in-place U-turn behavior, so the assignment Com_de = 1 is made directly), and rule 11 into the obstacle-free type (E_no = 1).
Rule execution is illustrated with rule 9; the other rules execute in the same way. Rule 9 is Pr_{9,Case}: C_9 + 2 (E_c ->) 1|Com_de + 1|E_T. Pr_{9,Case} is the rule name, separated from the expression by a colon; C_9 + 2 is the value-generation (production) part and 1|Com_de + 1|E_T is the value-assignment (distribution) part. The enzyme variable E_c in parentheses controls activation: the rule executes only when the enzyme variable is greater than the variable in the production part, i.e. E_c > C_9 is required. When E_c > C_9, rule 9 is active: the production part generates the value C_9 + 2, after which the variable in the production part is cleared to 0 (i.e. C_9 becomes 0). The generated value is then distributed by the assignment part: C_9 + 2 is divided into 2 equal portions, one assigned to the variable Com_de and one to E_T, i.e. Com_de = (C_9 + 2)/2 and E_T = (C_9 + 2)/2.
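The execution of rule 9 can be sketched directly. This is a minimal illustration of one enzymatic numerical rule using a plain dict for the membrane's variables; the key spellings are plain-text renderings of the patent's symbols:

```python
def apply_rule_9(v):
    """Pr_9,Case : C_9 + 2 (E_c ->) 1|Com_de + 1|E_T
    Fires only when the enzyme E_c exceeds the production variable C_9."""
    if v["E_c"] > v["C_9"]:          # enzyme activation condition
        produced = v["C_9"] + 2      # production part
        v["C_9"] = 0                 # production variable is cleared to 0
        v["Com_de"] = produced / 2   # equal distribution: 1|Com_de + 1|E_T
        v["E_T"] = produced / 2
    return v
```

With C_9 = 0 (the dead-zone environment active) and E_c = 1, the rule fires and yields Com_de = 1 and E_T = 1, matching the text.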
b) Judging whether the distance between the robot and the target is the historical minimum value or not
This step is accomplished in the DistanceIfMinimal sub-membrane of fig. 4. Rule 1 judges whether the value is the historical minimum, and rule 2 outputs and stores the historical minimum. When rules 1 and 2 execute, D_min = 1 indicates the historical minimum, and D_cur is output.
c) Determine whether the target, the obstacle (or the wall surface) are on the same side of the robot
Whether the target and the obstacle are on the same side of the robot is judged by rules 2-7, 10 and 11 in the sub-membrane JudgeRobotStateObstacle of fig. 4.
Rules 4 and 5 determine the relative position of the target and the robot. When A_gr1 < 0 (i.e. Angle < 0), the enzyme condition E_a[0] > A_gr1 activates rule 4, indicating that the target is on the left of the robot. When A_gr2 < 0 (i.e. Angle > 0), the enzyme condition E_a[0] > A_gr2 activates rule 5, indicating that the target is on the right of the robot.
Rules Pr_{2,obstacle} and Pr_{3,obstacle} judge the positional relation between the obstacle and the robot. When the environment mode C_7 = 0 (upper left), the obstacle is on the robot's left, and rule Pr_{2,obstacle} is activated: the variables O_left[-1] = 1 and O_right[-1] = 0. When C_8 = 0 (upper right), the obstacle is on the robot's right, and rule Pr_{3,obstacle} is activated: O_right[-1] = 1 and O_left[-1] = 0.
Rules Pr_{6,obstacle}, Pr_{7,obstacle}, Pr_{10,obstacle} and Pr_{11,obstacle} obtain the relation between obstacle, target and robot from O_left, O_right and the target-side judgment. When O_left = 1 (obstacle on the robot's left) and the target is on the robot's right, rule Pr_{6,obstacle} is activated: enzyme variable EOG_lr = 2. When O_right = 1 and the target is on the left, rule Pr_{7,obstacle} is activated: enzyme variable EOG_rl = 2. When O_left = 1 and the target is also on the left, rule Pr_{10,obstacle} is activated: enzyme variable Eog_left = 2. When O_right = 1 and the target is also on the right, rule Pr_{11,obstacle} is activated: enzyme variable Eog_right = 2.
Whether the target and the wall are on the same side of the robot is judged by rules 2-7, 10 and 11 in JudgeRobotStateWall. The principle is the same and is therefore not repeated.
d) Selection of multiple behaviors
The invention defines 5 behaviors: obstacle avoidance (of front, left and right obstacles), wall following (following the left or right wall and crossing a corridor), goal reaching, in-place U-turn, and self-rotation, represented by the obstacle-avoidance behavior variables Com_oa^f, Com_oa^l and Com_oa^r, the wall-following behavior variables Com_wf^l, Com_wf^r and Com_wf^c, the goal-reaching behavior variable Com_gr, the in-place U-turn behavior variable Com_de, and the self-rotation behavior variable Com_ro (as in fig. 4). A behavior variable equal to 1 indicates that the behavior is selected.
When the robot is in the environment Ci = 0 (i = 9 or 10), the environment type is judged to be the dead-zone type: the robot's forward direction is blocked, so the in-place turn-around behavior Comde = 1 is selected. This is accomplished in part by rule 9 or 10 in the JudgeEnvironment membrane of Fig. 4. The variable ET = 1 stops the evolution of the membrane system, and Comde = 1 is output.
When the robot is in the environment C11 = 0, the type is judged to be obstacle-free (Eno = 1). In this case, if Dmin = 1, the goal-seeking behavior Comgr = 1 is chosen; if Dmin = 0, the rotation behavior Comro = 1 is chosen. This part is completed by the daughter membrane SelectGoalReachingCase of Fig. 4. If Dmin = 1, rule 1 cannot be executed and hence neither can rule 2; rule 3 has no enzyme variable and executes at every evolution step, which in turn lets rule 4 execute, so Comgr = 1 and ET = 1 (ET = 1 stops the evolution of the membrane system). If Dmin = 0, rules 1 and 2 execute, so Comro = 1 and ET = 1.
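A sketch of the SelectGoalReachingCase decision, assuming `d_min` is 1 exactly when the current distance to the target is a new historical minimum (a plain-code paraphrase of rules 1-4, not the membrane rules themselves):

```python
def select_goal_reaching(d_min: int) -> dict:
    """If the distance to the target is a new historical minimum (d_min = 1),
    head for the target (Com_gr = 1); otherwise rotate in place to search
    for a better heading (Com_ro = 1). E_T = 1 halts the membrane evolution."""
    com_gr = 1 if d_min == 1 else 0
    com_ro = 1 if d_min == 0 else 0
    return {"Com_gr": com_gr, "Com_ro": com_ro, "E_T": 1}

print(select_goal_reaching(1))   # -> {'Com_gr': 1, 'Com_ro': 0, 'E_T': 1}
```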
When the robot is in the environment Ci = 0 (i = 6, 7, 8), the type is judged to be the obstacle type (Eob = 1). The daughter membranes SelectObstacleAvoidanceCase and JudgeRobotStateObstacle of Fig. 4 decide which behavior to select. If Dmin = 0, rule 1 in SelectObstacleAvoidanceCase executes, then rule 2, 3 or 4 executes to select the corresponding obstacle-avoidance behavior (Comob_front, Comob_left or Comob_right = 1), and after rule 5 executes, ET = 1, making the rules in JudgeRobotStateObstacle unexecutable. If Dmin = 1, rule 1 in SelectObstacleAvoidanceCase cannot execute, and hence neither can rules 2, 3 and 4; after rule 5 executes, the rules in JudgeRobotStateObstacle become active. If the target and the obstacle are on the same side of the robot (EOGleft = 2 or EOGright = 2), rule 12 or 13 is activated and the corresponding obstacle-avoidance behavior is selected; if the target and the obstacle are not on the same side of the robot (EOGlr = 2 or EOGrl = 2), rule 8 or 9 is activated, Comgr = 1 and ET = 1.
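The obstacle-type decision just described reduces to a small dispatch; this is a paraphrase under the stated variable conventions, not the membrane rules themselves:

```python
def select_obstacle_case(env_i: int, d_min: int, if_same: int) -> dict:
    """env_i is 6 (front wall), 7 (upper-left) or 8 (upper-right).
    If the robot is not at its historical-minimum distance (d_min = 0),
    avoid the obstacle; if it is (d_min = 1), avoid only when the target
    lies on the same side as the obstacle, otherwise head for the target."""
    com = {"Com_ob_front": 0, "Com_ob_left": 0, "Com_ob_right": 0, "Com_gr": 0}
    avoid = (d_min == 0) or (if_same == 1)
    if avoid:
        key = {6: "Com_ob_front", 7: "Com_ob_left", 8: "Com_ob_right"}[env_i]
        com[key] = 1
    else:
        com["Com_gr"] = 1   # target on the opposite side: go straight for it
    com["E_T"] = 1          # stop membrane evolution and output the behavior
    return com
```

The wall-type case below follows the same pattern with the wall-following flags in place of the obstacle-avoidance flags.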
When the robot is in the environment Ci = 0 (i = 1, 2, ..., 5), the environment type is judged to be the wall type (Ewa = 1). The daughter membranes SelectWallFollowCase and JudgeRobotStateWall of Fig. 4 decide which behavior is selected. The principle is the same as in the obstacle-avoidance case and is therefore omitted.
Step 3, executing the corresponding behavior

The behavior output by the membrane system is executed.
Step 4, judging whether the mobile robot has reached the target

Judge whether the distance value Dcur equals 0. If it equals 0, the mobile robot has reached the target and the control ends. If not, the mobile robot continues to acquire the environment state and determine the behavior to execute, until the target is reached.
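Steps 1-4 together form a sense-decide-act loop. A hypothetical skeleton (the `robot` and `membrane_system` interfaces are assumptions for illustration, not part of the patent):

```python
def control_loop(robot, membrane_system):
    """Hypothetical top-level loop: the membrane system plays the role of
    the decision stage, mapping the sensed environment to one behavior."""
    min_dist = float("inf")
    while True:
        state = robot.sense()                # distances, CurDist, Angle
        if state.cur_dist == 0:              # step 4: target reached
            break
        d_min = 1 if state.cur_dist < min_dist else 0   # historical minimum?
        min_dist = min(min_dist, state.cur_dist)
        behavior = membrane_system.evolve(state, d_min)  # step 2: decide
        robot.execute(behavior)              # step 3: act
```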
The experimental results of Fig. 5 show that the path walked under multi-behavior fusion membrane control (MBCMC) is shorter (Fig. 5, left) and that the method escapes a dead zone from which fuzzy logic control cannot escape (Fig. 5, right).

Claims (2)

1. A multi-behavior fusion enzyme numerical membrane control method for a mobile robot in an unknown environment, characterized by comprising:

Step 1: calculating the straight-line distance CurDist between the target and the robot, and the angle Angle of the target relative to the forward direction of the robot;

Step 2: judging, from the information acquired by the sensors carried by the robot, the environment in which the robot is located, the environments comprising left wall, right wall, corridor, left corner, right corner, front wall, upper-left side, upper-right side, both sides, dead corner and no obstacle, denoted respectively by Ci = 0, i = 1, 2, ..., 11, where i = 1, 2, ..., 11 correspond to the above 11 environments; the environments are divided into 4 classes: the wall type Ewa, corresponding to the left-wall, right-wall, corridor, left-corner and right-corner environments; the obstacle type Eob, corresponding to the front-wall, upper-left and upper-right environments; the dead-zone type Ede, corresponding to the both-sides and dead-corner environments; and the no-obstacle type Eno, corresponding to the obstacle-free environment;

Step 3: judging whether the straight-line distance CurDist between the robot and the target is the historical minimum MinDist: when MinDist > CurDist it is the historical minimum, denoted by the variable IfMin = 1, and MinDist = CurDist is set; otherwise IfMin = 0;

Step 4: judging whether the target and the obstacle or wall are on the same side of the robot:

judging from the angle Angle whether the target is on the left or the right of the robot: when Angle > 0 the target is on the right of the robot, and when Angle < 0 the target is on the left of the robot;

if C1 = 0 or C4 = 0 or C7 = 0, the obstacle or wall is judged to be on the left of the robot, denoted by the variable IfLeft = 1;

if C2 = 0 or C5 = 0 or C8 = 0, the obstacle or wall is judged to be on the right of the robot, denoted by the variable IfLeft = 0; when Angle > 0 and IfLeft = 1, the target and the obstacle or wall are not on the same side of the robot, denoted by the variable IfSame = 0; when Angle > 0 and IfLeft = 0, the target and the obstacle or wall are on the same side of the robot, IfSame = 1; when Angle < 0 and IfLeft = 1, IfSame = 1; when Angle < 0 and IfLeft = 0, IfSame = 0;

Step 5: after multi-behavior selection, outputting the behavior variables to the execution system:

when the robot is in the environment Ci = 0, i = 9, 10, the dead-zone type is judged and Comde = 1 is selected;

when the robot is in the environment Ci = 0, i = 11, the no-obstacle type is judged: if IfMin = 1, Comgr = 1 is selected; if IfMin = 0, Comro = 1 is selected;

when the robot is in the environment Ci = 0, i = 6, 7, 8, the obstacle type is judged:

if IfMin = 0, the corresponding obstacle-avoidance behavior is selected: for i = 6 select Comob_front = 1, for i = 7 select Comob_left = 1, for i = 8 select Comob_right = 1;

if IfMin = 1, it is further judged whether the target and the obstacle are on the same side of the robot: when IfSame = 1 the corresponding obstacle-avoidance behavior is selected: for i = 6 select Comob_front = 1, for i = 7 select Comob_left = 1, for i = 8 select Comob_right = 1; when IfSame = 0, Comgr = 1 is selected;

when the robot is in the environment Ci = 0, i = 1, 2, 3, 4, 5, the wall type is judged:

if IfMin = 0, the corresponding wall-following behavior is selected: for i = 1 or 4 select Comwa_lwall = 1, for i = 2 or 5 select Comwa_rwall = 1, for i = 3 select Comwa_cross = 1;

if IfMin = 1, it is further judged whether the target and the wall are on the same side of the robot: when IfSame = 1 the corresponding wall-following behavior is selected: for i = 1 or 4 select Comwa_lwall = 1, for i = 2 or 5 select Comwa_rwall = 1, for i = 3 select Comwa_cross = 1; when IfSame = 0, Comgr = 1 is selected;

the above behavior variables are: front obstacle avoidance Comob_front, left obstacle avoidance Comob_left, right obstacle avoidance Comob_right, left-wall following Comwa_lwall, right-wall following Comwa_rwall, channel crossing Comwa_cross, goal seeking Comgr, in-place turn-around Comde and rotation Comro;

Step 6: the execution system executes the behavior output in step 5;

Step 7: judging whether the mobile robot has reached the target, i.e. whether CurDist equals 0: if CurDist equals 0, the robot has reached the target and the control ends; if not, return to step 1 and continue.
2. The multi-behavior fusion enzyme numerical membrane control method for a mobile robot in an unknown environment according to claim 1, characterized in that the method of judging, in step 2, the environment in which the robot is located from the information acquired by the sensors carried by the robot is: the information acquired by the sensors consists of 8 distances dx, x = 1, 2, ..., 8, around the robot, where d1 and d2 are the distances obtained by the sensors on the front-right side of the robot, d3 is the distance obtained by the sensor on the right side of the robot, d4 is the distance obtained by the sensor on the rear-right side of the robot, d5 is the distance obtained by the sensor on the rear-left side of the robot, d6 is the distance obtained by the sensor on the left side of the robot, and d7 and d8 are the distances obtained by the sensors on the front-left side of the robot; when dx < t, where t is a distance threshold, the sensor has detected an obstacle, and the corresponding distance dx is binarized: 1 means an obstacle is detected, 0 means no obstacle is detected;

[Lookup table mapping the binarized sensor readings to the 11 environments; it appears only as images in the original.]

The environment in which the robot is located is then determined according to the above table.
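The binarization of claim 2 can be sketched as follows; the distance threshold `t` and the environment lookup table are not reproduced in this text (the table is an image in the original), so `ENV_TABLE` below is a hypothetical one-row fragment for illustration only:

```python
def binarize(distances, t):
    """Claim 2: a reading below threshold t means 'obstacle detected' (1)."""
    return tuple(1 if d < t else 0 for d in distances)

# Hypothetical fragment of the environment lookup table: pattern of the
# binarized (d1..d8) readings -> environment index Ci. Only the trivial
# all-clear row is shown; the full table is an image in the original.
ENV_TABLE = {
    (0, 0, 0, 0, 0, 0, 0, 0): 11,   # nothing detected -> no obstacle (C11)
}

def classify(distances, t=0.5):
    """Return the environment index for a sensor reading, or None if the
    pattern is not in the (here partial) table; t = 0.5 is an assumed value."""
    return ENV_TABLE.get(binarize(distances, t))

print(classify([1.0] * 8))   # -> 11 (no-obstacle environment)
```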
CN201811235990.1A 2018-10-23 2018-10-23 Mobile robot multi-behavior fusion enzyme numerical film control method in unknown environment Expired - Fee Related CN109164812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811235990.1A CN109164812B (en) 2018-10-23 2018-10-23 Mobile robot multi-behavior fusion enzyme numerical film control method in unknown environment


Publications (2)

Publication Number Publication Date
CN109164812A CN109164812A (en) 2019-01-08
CN109164812B true CN109164812B (en) 2020-04-07

Family

ID=64879061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811235990.1A Expired - Fee Related CN109164812B (en) 2018-10-23 2018-10-23 Mobile robot multi-behavior fusion enzyme numerical film control method in unknown environment

Country Status (1)

Country Link
CN (1) CN109164812B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110456679B (en) * 2019-05-17 2021-05-14 西南交通大学 FPGA-based robot numerical membrane control system and its construction method
CN110427634B (en) * 2019-05-17 2022-08-02 西南交通大学 Communication system for realizing reaction system based on FPGA and construction method thereof
CN110147108B (en) * 2019-06-04 2021-08-03 西南交通大学 An obstacle avoidance control method for mobile robots based on membrane computing
CN110262481B (en) * 2019-06-04 2021-06-22 西南交通大学 A mobile robot obstacle avoidance control method based on enzyme numerical membrane system
CN110286685A (en) * 2019-07-23 2019-09-27 中科新松有限公司 A kind of mobile robot

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AT507035B1 (en) * 2008-07-15 2020-07-15 Airbus Defence & Space Gmbh SYSTEM AND METHOD FOR AVOIDING COLLISION
CN100568144C (en) * 2008-09-04 2009-12-09 湖南大学 A multi-behavior fusion automatic navigation method for mobile robots in unknown environments
CN101758827B (en) * 2010-01-15 2013-06-12 南京航空航天大学 Automatic obstacle avoiding method of intelligent detection vehicle based on behavior fusion in unknown environment
CN104050390B (en) * 2014-06-30 2017-05-17 西南交通大学 Mobile robot path planning method based on variable-dimension particle swarm membrane algorithm
CN104317297A (en) * 2014-10-30 2015-01-28 沈阳化工大学 Robot obstacle avoidance method under unknown environment
CN207824888U (en) * 2017-06-27 2018-09-07 安徽奇智科技有限公司 A kind of obstruction-avoiding control system of intelligent mobile robot
CN107480597B (en) * 2017-07-18 2020-02-07 南京信息工程大学 Robot obstacle avoidance method based on neural network model

Also Published As

Publication number Publication date
CN109164812A (en) 2019-01-08

Similar Documents

Publication Publication Date Title
CN109164812B (en) Mobile robot multi-behavior fusion enzyme numerical film control method in unknown environment
Hong et al. Application of fuzzy logic in mobile robot navigation
Pandey et al. Path planning navigation of mobile robot with obstacles avoidance using fuzzy logic controller
Liu et al. Situation-aware decision making for autonomous driving on urban road using online POMDP
Bao et al. A fuzzy behavior-based architecture for mobile robot navigation in unknown environments
Dongshu et al. Behavior-based hierarchical fuzzy control for mobile robot navigation in dynamic environment
Haider et al. Robust mobile robot navigation in cluttered environments based on hybrid adaptive neuro-fuzzy inference and sensor fusion
Mohanty et al. Path planning strategy for mobile robot navigation using MANFIS controller
Sanders et al. Improving steering of a powered wheelchair using an expert system to interpret hand tremor
Chhotray et al. Navigational control analysis of two-wheeled self-balancing robot in an unknown terrain using back-propagation neural network integrated modified DAYANI approach
Cao et al. Fuzzy logic control for an automated guided vehicle
Stavrinidis et al. An ANFIS-based strategy for autonomous robot collision-free navigation in dynamic environments
Lee et al. Fuzzy wall-following control of a wheelchair
Yu et al. Autonomous formation selection for ground moving multi-robot systems
Raj et al. Discussion on different controllers used for the navigation of mobile robot
Chen et al. Robust type-2 fuzzy control of an automatic guided vehicle for wall-following
Ullah et al. Integrated collision avoidance and tracking system for mobile robot
Cerbaro et al. WaiterBot: Comparison of fuzzy logic approaches for obstacle avoidance in dynamic unmapped environments using a laser scanning system (LiDAR)
Parasuraman Sensor fusion for mobile robot navigation: Fuzzy Associative Memory
LAOUICI et al. Hybrid method for the navigation of mobile robot using fuzzy logic and spiking neural networks
Hamad et al. Path Planning of Mobile Robot Based on Modification of Vector Field Histogram using Neuro-Fuzzy Algorithm.
Nia et al. Virtual force field algorithm for a behaviour-based autonomous robot in unknown environments
Nurmaini et al. Swarm robots control system based fuzzy-pso
Santiago et al. Interval type-2 fuzzy and PID dual-mode controller for an autonomous mobile robot
Wijanto Design of Deliberative and Reactive Hybrid Control System for Autonomous Stuff-Delivery Robot Rover

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200407

Termination date: 20211023