
CN115235470A - Agricultural robot positioning and navigation method and system based on visual detection - Google Patents

Info

Publication number
CN115235470A
CN115235470A (application CN202210806544.1A)
Authority
CN
China
Prior art keywords
agricultural robot
map
pose
environment
navigation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210806544.1A
Other languages
Chinese (zh)
Inventor
张仁远
吴雄伟
刘恩博
王增辉
唐炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202210806544.1A
Publication of CN115235470A
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A40/25: Greenhouse technology, e.g. cooling systems therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides an agricultural robot positioning and navigation method and system based on visual detection, belonging to the technical field of agricultural robots. The method comprises: at the beginning of the crop growth cycle in a greenhouse environment, building an environment map of the greenhouse with the three-dimensional lidar mounted on the agricultural robot, and deriving from that map the poses, in the world coordinate system, of the visual tag codes placed between the ridges; determining the pose of the agricultural robot by visual detection, based on the environment map and the tag poses in the world coordinate system; and using the environment map as the robot's navigation map to navigate the robot autonomously based on its pose. The invention thereby obtains a vision-aided positioning method for agricultural robots that is suitable for fast-growing crops.

Description

A method and system for agricultural robot positioning and navigation based on visual detection

Technical Field

The invention relates to the technical field of agricultural robots, and in particular to a method and system for agricultural robot positioning and navigation based on visual detection.

Background

Current indoor navigation systems include automatic line following, wireless positioning, and laser SLAM positioning, among other approaches. In an agricultural greenhouse environment, however, and especially where the ground is unpaved, line-following guides are inconvenient to lay, wireless positioning suffers from signal interference, and laser SLAM localization depends to a large degree on how well the laser scans match the map. Since the environment changes greatly over the different growth stages of the crops, a map built at an earlier stage cannot be used for navigation later in the cycle.

Summary of the Invention

In view of the above problems, the invention proposes an agricultural robot positioning and navigation method based on visual detection, comprising:

at the beginning of the crop growth cycle in the greenhouse environment, building an environment map of the greenhouse with the three-dimensional lidar mounted on the agricultural robot, and deriving from the map the poses of the visual tag codes between the greenhouse ridges in the world coordinate system;

determining the pose of the agricultural robot by visual detection, based on the environment map and the tag poses in the world coordinate system;

using the environment map as the navigation map of the agricultural robot, and navigating the robot autonomously based on its pose.

Optionally, the agricultural robot moves on crawler tracks; a three-dimensional lidar is mounted at the centre of the top of the robot body, a camera is mounted on a liftable pan-tilt on top of the body, an IMU attitude sensor is mounted at the centre of the chassis, and encoders are mounted on the wheels on both sides of the body.

Optionally, building the environment map of the greenhouse with the robot's three-dimensional lidar comprises:

emitting multi-line laser beams with the three-dimensional lidar and building a 3D point cloud map of the greenhouse with a 3D mapping algorithm; projecting the 3D point cloud map according to the height of the greenhouse ridges; and deriving from this projection the 2D grid map used for robot navigation.

The environment map comprises the 3D point cloud map and the 2D grid map.
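The ridge-height projection from the 3D point cloud to the 2D occupancy grid could be sketched as follows. This is a minimal illustration under assumed grid parameters (ridge height, resolution, map extent), not the patent's actual implementation:

```python
import numpy as np

def project_to_grid(points, ridge_height=0.3, resolution=0.05,
                    x_range=(0.0, 10.0), y_range=(0.0, 10.0)):
    """Project 3D lidar points above the ridge height onto a 2D occupancy grid.

    points: (N, 3) array of x, y, z coordinates in the map frame.
    Cells hit by at least one point at or above `ridge_height` are marked
    occupied (1); everything else stays free (0).
    """
    nx = int((x_range[1] - x_range[0]) / resolution)
    ny = int((y_range[1] - y_range[0]) / resolution)
    grid = np.zeros((ny, nx), dtype=np.int8)

    # Keep only points tall enough to be obstacles for the robot
    obstacles = points[points[:, 2] >= ridge_height]
    ix = ((obstacles[:, 0] - x_range[0]) / resolution).astype(int)
    iy = ((obstacles[:, 1] - y_range[0]) / resolution).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[iy[valid], ix[valid]] = 1
    return grid
```

In practice the grid would then be exported in a map format the navigation stack accepts; the height threshold controls whether low crop foliage is treated as free space.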

Optionally, the method further comprises storing the world-coordinate poses of the visual tag codes in a tag code array.

Optionally, determining the pose of the agricultural robot by visual detection, based on the environment map and the tag poses in the world coordinate system, comprises:

recognising the visual tag codes between the greenhouse ridges with the camera mounted on the robot; if the camera recognises a tag code, looking up that tag's world-coordinate pose in the tag code array and correcting the robot's pose by visual detection, thereby determining the robot's pose.

Optionally, if the camera does not recognise a visual tag code, the robot is localised with the encoders and the IMU attitude sensor.

Optionally, using the environment map as the navigation map and navigating the robot comprises:

loading the environment map and taking its 2D grid map as the navigation map;

navigating the robot based on the navigation map and the ROS navigation package.

Optionally, the navigation package includes a pre-planned motion path of the robot between the greenhouse ridges.

In another aspect, the invention also proposes an agricultural robot positioning and navigation system based on visual detection, comprising:

a data acquisition unit for building, at the beginning of the crop growth cycle in the greenhouse environment, an environment map of the greenhouse with the three-dimensional lidar mounted on the agricultural robot, and deriving from the map the world-coordinate poses of the visual tag codes between the greenhouse ridges;

a positioning unit for determining the pose of the agricultural robot by visual detection, based on the environment map and the tag poses in the world coordinate system;

a navigation unit for using the environment map as the navigation map of the agricultural robot and navigating the robot autonomously based on its pose.

Compared with the prior art, the beneficial effects of the invention are:

By building an environment map of the greenhouse with the robot's three-dimensional lidar at the beginning of the crop growth cycle, deriving from the map the world-coordinate poses of the visual tag codes between the ridges, aiding the robot's localisation visually based on the map and the tag poses, and using the environment map as the robot's navigation map, the invention obtains a vision-aided positioning method for agricultural robots suited to fast-growing crops. The invention requires no navigation lines to be laid, is essentially unaffected by the growth of crops and weeds, and enables autonomous positioning and navigation of the agricultural robot over the full crop growth cycle in a greenhouse.

Brief Description of the Drawings

FIG. 1 is a flow chart of the method according to an embodiment of the invention;

FIG. 2 is a schematic structural diagram of the agricultural robot according to an embodiment of the invention;

FIG. 3 is a schematic diagram of the placement of visual tag codes between ridges in the greenhouse environment according to an embodiment of the invention;

FIG. 4 is a flow chart of the detailed implementation of step S1 according to an embodiment of the invention;

FIG. 5 is a flow chart of the detailed implementation of step S2 according to an embodiment of the invention;

FIG. 6 is a structural diagram of the system according to an embodiment of the invention.

Detailed Description

Exemplary embodiments of the invention are now described with reference to the accompanying drawings. The invention may, however, be embodied in many different forms and is not limited to the embodiments described here; these embodiments are provided so that the disclosure is thorough and complete and fully conveys the scope of the invention to those skilled in the art. The terms used in the exemplary embodiments shown in the drawings do not limit the invention. In the drawings, the same units/elements carry the same reference numerals.

Unless otherwise defined, terms used here (including scientific and technical terms) have the meanings commonly understood by those skilled in the art. Terms defined in commonly used dictionaries should further be construed as having meanings consistent with their context in the related art, not in an idealised or overly formal sense.

Embodiment 1:

The invention proposes an agricultural robot positioning and navigation method based on visual detection, as shown in FIG. 1, comprising:

Step S1: at the beginning of the crop growth cycle in the greenhouse environment, build an environment map of the greenhouse with the three-dimensional lidar mounted on the agricultural robot, and derive from the map the world-coordinate poses of the visual tag codes between the greenhouse ridges;

Step S2: determine the pose of the agricultural robot by visual detection, based on the environment map and the tag poses in the world coordinate system;

Step S3: use the environment map as the navigation map of the agricultural robot, and navigate the robot autonomously based on its pose.

The agricultural robot used in step S1, as shown in FIG. 2, moves on crawler tracks; a three-dimensional lidar is mounted at the centre of the top of the robot body, a camera is mounted on a liftable pan-tilt on top of the body, an IMU attitude sensor is mounted at the centre of the chassis, and encoders are mounted on the wheels on both sides of the body.

In step S1, building the environment map of the greenhouse with the robot's three-dimensional lidar comprises:

emitting multi-line laser beams with the three-dimensional lidar and building a 3D point cloud map of the greenhouse with a 3D mapping algorithm; projecting the 3D point cloud map according to the height of the greenhouse ridges; and deriving from this projection the 2D grid map used for robot navigation.

The environment map comprises the 3D point cloud map and the 2D grid map.

The detailed implementation of step S1, as shown in FIG. 4, comprises:

SS21: during the initial mapping run, only the three-dimensional lidar, combined with the IMU and the encoders, is needed for SLAM mapping; the robot (the agricultural robot) is driven between the greenhouse ridges until the map of the entire greenhouse is complete, after which the 3D point cloud map is converted into the 2D grid map provided to the subsequent navigation function;

SS22: during multi-line lidar mapping, the camera is kept on to recognise visual tag codes in real time; as soon as a tag code is recognised, its position relative to the camera is computed and recorded;

SS23: the tag pose in the environment is calibrated from the robot pose matched in real time by the laser SLAM algorithm and the tag-to-camera relation obtained in step SS22. Specifically, the TF from the robot body frame B to the world frame W published by the SLAM algorithm is converted into a homogeneous transformation matrix TBW; the relative position of the camera frame C with respect to the body frame B is measured, giving the homogeneous transformation TCB of C relative to B; and when the camera observes a tag code, the homogeneous transformation TCM of the camera frame C relative to the tag frame M can be computed. Defining TMW as the homogeneous transformation of the tag frame M relative to the world frame W, TMW is computed as:

TMW = TBW · TCB · (TCM)^(-1)

The translation and rotation are then extracted from TMW to obtain the tag pose in the world coordinate system, PM = (xM, yM, θM);
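The tag-pose calibration above, TMW = TBW·TCB·(TCM)^-1, and the extraction of (x, y, θ) could be sketched in numpy as follows. This is a planar (SE(2), 3x3-matrix) simplification of the 4x4 homogeneous transforms used in the patent, written for illustration only; the frame names follow the text:

```python
import numpy as np

def se2_matrix(x, y, theta):
    """Homogeneous transform for a planar pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def tag_pose_in_world(T_BW, T_CB, T_CM):
    """Compute T_MW = T_BW @ T_CB @ inv(T_CM), then extract (x, y, theta).

    T_BW: robot body frame B -> world frame W (from the SLAM TF)
    T_CB: camera frame C relative to body frame B (measured mounting offset)
    T_CM: camera frame C relative to tag frame M (from the tag detection)
    """
    T_MW = T_BW @ T_CB @ np.linalg.inv(T_CM)
    x, y = T_MW[0, 2], T_MW[1, 2]
    theta = np.arctan2(T_MW[1, 0], T_MW[0, 0])  # rotation part -> heading
    return x, y, theta
```

With the robot at (1, 2) and the camera mounted 0.2 m ahead of the body origin, a camera that coincides with the tag (T_CM = identity) places the tag at (1.2, 2.0).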

SS24: count the recorded observations for each tag ID; once a tag has more than 100 observations, execute SS25, otherwise return to SS22;

SS25: compute the mean pose of each tag ID over its observations, discard the observations that deviate substantially from this mean, and compute the mean again over the remaining observations;

SS26: store the resulting mean pose of each visual tag code, as its actual pose in the world coordinate system, in the tag code array.
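The two-pass averaging of step SS25 might look like the following sketch. The rejection threshold is an assumed value (the patent does not specify one), and the naive averaging of θ ignores angle wrap-around near ±π, which a full implementation would handle with a circular mean:

```python
import numpy as np

def robust_mean_pose(poses, threshold=1.0):
    """Two-pass mean: average, drop samples far from the average, re-average.

    poses: (N, 3) array of (x, y, theta) observations of one tag ID.
    threshold: assumed rejection distance (metres) on the (x, y) part.
    """
    poses = np.asarray(poses, dtype=float)
    mean = poses.mean(axis=0)
    dist = np.linalg.norm(poses[:, :2] - mean[:2], axis=1)
    inliers = poses[dist <= threshold]
    if len(inliers) == 0:        # everything rejected: fall back to first mean
        return mean
    return inliers.mean(axis=0)  # note: theta averaged naively here
```

A single gross outlier pulls the first mean slightly, is rejected in the second pass, and the re-averaged pose recovers the true value.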

In step S2, aiding the localisation of the agricultural robot visually, based on the environment map and the tag poses in the world coordinate system, comprises:

recognising the visual tag codes between the greenhouse ridges with the camera mounted on the robot (the placement of the tag codes in the greenhouse is shown in FIG. 3); if the camera recognises a tag code, its world-coordinate pose stored in the tag code array is looked up and the robot's pose is corrected with it, completing the vision-aided localisation of the robot.

In step S2, if the camera does not recognise a visual tag code, the robot is localised with the encoders and the IMU attitude sensor.

The detailed implementation of step S2, as shown in FIG. 5, comprises:

SS31: receive the sensor data, including the encoder, IMU, and vision camera data;

SS32: determine whether a visual tag code has been recognised in the data; if not, execute SS33, otherwise execute SS34;

SS33: if the camera does not observe a tag code at time t, compute the difference Δθ between the heading θk obtained from the visual tag code and the heading γk obtained from the IMU at time k (the last time a tag code was observed). Δθ is added to the IMU heading γt at time t as a correction, so the corrected robot heading at time t is θt = γt + Δθ. Meanwhile, record the left and right wheel speeds vL and vR measured by the encoders at time t-1 and compute the robot speed v:

v = (vL + vR) / 2

Then compute the distance Δd moved by the robot from time t-1 to time t and its components Δx and Δy along the x and y directions:

Δd = v · Δt

Δx = Δd · cos θt-1

Δy = Δd · sin θt-1

Finally, compute the robot position at time t as xt = xt-1 + Δx and yt = yt-1 + Δy, so the robot pose at time t is Pt = (xt, yt, θt);
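The dead-reckoning update of step SS33 can be collected into one short function. This is a sketch whose variable names follow the formulas above; the speed model v = (vL + vR)/2 is the standard differential-drive assumption for a tracked vehicle:

```python
import numpy as np

def dead_reckon(x_prev, y_prev, theta_prev, gamma_t, delta_theta,
                v_left, v_right, dt):
    """One dead-reckoning step used when no tag is visible (step SS33).

    gamma_t:         IMU heading at time t
    delta_theta:     heading offset theta_k - gamma_k recorded at the
                     last tag sighting, used to correct IMU drift
    v_left, v_right: wheel speeds from the encoders at time t-1
    dt:              time elapsed since the previous step
    """
    theta_t = gamma_t + delta_theta        # IMU heading corrected by tag offset
    v = 0.5 * (v_left + v_right)           # body speed of a tracked drive
    d = v * dt                             # distance travelled over the step
    x_t = x_prev + d * np.cos(theta_prev)  # advance along the previous heading
    y_t = y_prev + d * np.sin(theta_prev)
    return x_t, y_t, theta_t
```

Called once per sensor cycle, this propagates the pose between tag sightings; the next tag detection (SS34/SS35) then replaces the accumulated estimate.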

SS34: if the camera can observe a tag code at time t, solve for the rotation matrix R and translation vector t of the camera frame C relative to the tag frame M with the PnP algorithm, and assemble them into the homogeneous transformation matrix TCM:

TCM = [ R  t ]
      [ 0  1 ]

SS35: from the tag pose calibrated during mapping (step SS23), obtain the homogeneous transformation TMW of the tag frame M relative to the world frame W; from the measured relative position of the camera frame C and the robot body frame B, obtain the homogeneous transformation TBC of the body frame B relative to the camera frame C. Defining TBW as the homogeneous transformation of the body frame relative to the world frame:

TBW = TMW · TCM · TBC

Finally, extract the translation and rotation from TBW to obtain the robot pose relative to the world at time t, Pt = (xt, yt, θt);
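The pose recovery of step SS35 is a single chain of matrix products. The sketch below uses planar 3x3 homogeneous transforms as a simplification of the patent's 4x4 matrices; TCM would in practice come from a PnP solve on the detected tag corners:

```python
import numpy as np

def robot_pose_from_tag(T_MW, T_CM, T_BC):
    """Compute T_BW = T_MW @ T_CM @ T_BC (step SS35), then extract (x, y, theta).

    T_MW: tag frame M -> world frame W (calibrated during mapping)
    T_CM: camera frame C relative to tag frame M (from PnP on the tag)
    T_BC: body frame B relative to camera frame C (measured mounting offset)
    """
    T_BW = T_MW @ T_CM @ T_BC
    x, y = T_BW[0, 2], T_BW[1, 2]
    theta = np.arctan2(T_BW[1, 0], T_BW[0, 0])  # heading from rotation part
    return x, y, theta
```

When both the camera-tag and body-camera offsets are identity, the robot pose collapses to the calibrated tag pose, which is a convenient sanity check.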

SS36: publish the robot pose in the world coordinate system at time t.

In step S3, using the environment map as the navigation map of the agricultural robot and navigating the robot autonomously based on its pose comprises:

loading the environment map and taking its 2D grid map as the navigation map;

navigating the robot based on the navigation map and the ROS navigation package.

The navigation package includes a pre-planned motion path of the robot between the greenhouse ridges.
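As one hypothetical illustration of such a pre-planned inter-ridge path (the patent does not specify the planner; the row spacing, row length, and back-and-forth sweep pattern here are all assumptions), the waypoints could be generated as a simple boustrophedon sequence over the ridge rows:

```python
def ridge_waypoints(n_rows, row_spacing, row_length):
    """Generate a back-and-forth (boustrophedon) waypoint path over ridge rows.

    Each row contributes an entry and an exit waypoint; the travel direction
    alternates so the robot sweeps every inter-ridge corridor exactly once.
    Returns a list of (x, y) waypoints in map coordinates.
    """
    waypoints = []
    for i in range(n_rows):
        y = i * row_spacing
        if i % 2 == 0:                      # even rows: travel in +x
            waypoints += [(0.0, y), (row_length, y)]
        else:                               # odd rows: travel back in -x
            waypoints += [(row_length, y), (0.0, y)]
    return waypoints
```

Each waypoint would then be sent in turn as a goal to the navigation stack, which plans the local motion on the 2D grid map.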

Based on the same inventive concept, the invention also proposes an agricultural robot positioning and navigation system 200 based on visual detection, as shown in FIG. 6, comprising:

a data acquisition unit 201 for building, at the beginning of the crop growth cycle in the greenhouse environment, an environment map of the greenhouse with the three-dimensional lidar mounted on the agricultural robot, and deriving from the map the world-coordinate poses of the visual tag codes between the greenhouse ridges;

a positioning unit 202 for determining the pose of the agricultural robot by visual detection, based on the environment map and the tag poses in the world coordinate system;

a navigation unit 203 for using the environment map as the navigation map of the agricultural robot and navigating the robot autonomously based on its pose.

Compared with the prior art, the beneficial effects of the invention are:

By building an environment map of the greenhouse with the robot's three-dimensional lidar at the beginning of the crop growth cycle, deriving from the map the world-coordinate poses of the visual tag codes between the ridges, aiding the robot's localisation visually based on the map and the tag poses, and using the environment map as the robot's navigation map, the invention obtains a vision-aided positioning method for agricultural robots suited to fast-growing crops. The invention requires no navigation lines to be laid, is essentially unaffected by the growth of crops and weeds, and enables autonomous positioning and navigation of the agricultural robot over the full crop growth cycle in a greenhouse.

本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.

本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block in the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general purpose computer, special purpose computer, embedded processor or other programmable data processing device to produce a machine such that the instructions executed by the processor of the computer or other programmable data processing device produce Means for implementing the functions specified in a flow or flow of a flowchart and/or a block or blocks of a block diagram.

这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory result in an article of manufacture comprising instruction means, the instructions The apparatus implements the functions specified in the flow or flow of the flowcharts and/or the block or blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, causing a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

The above are merely embodiments of the present invention and are not intended to limit it. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of the claims of the present invention.

Claims (9)

1. An agricultural robot positioning and navigation method based on visual detection, characterized by comprising the following steps:
in the initial stage of a crop growth cycle in a greenhouse environment, establishing an environment map of the greenhouse environment by using a three-dimensional laser radar mounted on the agricultural robot, and calculating, based on the environment map, the pose in a world coordinate system of the visual tag codes placed between the ridges of the greenhouse environment;
determining the pose of the agricultural robot through a visual detection method based on the environment map and the pose of the visual tag codes in the world coordinate system;
and taking the environment map as the navigation map of the agricultural robot, and performing autonomous navigation of the agricultural robot based on the pose of the agricultural robot.
2. The positioning and navigation method according to claim 1, wherein the agricultural robot moves on crawler tracks, a three-dimensional laser radar is mounted at the center of the top of the robot body, a camera is mounted on a lifting platform on the top of the body, an IMU attitude sensor is mounted at the center of the chassis, and encoders are mounted on the wheels on both sides of the body.
3. The positioning and navigation method according to claim 1, wherein establishing the environment map of the greenhouse environment by using the three-dimensional laser radar mounted on the agricultural robot comprises:
emitting multi-line laser beams through the three-dimensional laser radar, constructing a 3D point cloud map of the greenhouse environment based on a three-dimensional mapping algorithm, projecting the 3D point cloud map according to the height of the ridges of the greenhouse environment, and determining from the projection a 2D grid map for navigation of the agricultural robot;
the environment map comprises: the 3D point cloud map and the 2D grid map.
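The projection step of claim 3 can be illustrated with a minimal sketch: points above the ridge height (e.g. returns from the greenhouse roof) are discarded, and the remaining points mark their (x, y) cell as occupied. The ridge-height filter, grid resolution, and map size here are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def project_to_grid(points, ridge_height, resolution=0.05, size=(200, 200)):
    """Project a 3D point cloud onto a 2D occupancy grid.

    points       : iterable of (x, y, z) coordinates in meters
    ridge_height : points above this height are ignored (assumed filter)
    resolution   : cell edge length in meters
    """
    grid = np.zeros(size, dtype=np.uint8)  # 0 = free, 1 = occupied
    for x, y, z in points:
        if z > ridge_height:               # drop structure above the ridges
            continue
        i = int(x / resolution)
        j = int(y / resolution)
        if 0 <= i < size[0] and 0 <= j < size[1]:
            grid[i, j] = 1
    return grid
```

In a ROS context the resulting array would typically be published as a `nav_msgs/OccupancyGrid` for the navigation stack to consume.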
4. The positioning and navigation method according to claim 1, further comprising: storing the pose of the visual tag codes in the world coordinate system into a tag code array.
5. The positioning and navigation method according to claim 1, wherein determining the pose of the agricultural robot through the visual detection method based on the environment map and the pose of the visual tag code in the world coordinate system comprises:
identifying, by a camera mounted on the agricultural robot, the visual tag codes between the ridges of the greenhouse environment; if the camera identifies a visual tag code, retrieving the pose of that visual tag code in the world coordinate system stored in the tag code array, and correcting the pose of the agricultural robot through the visual detection method to determine the pose of the agricultural robot.
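One common way to realize the pose correction described in claim 5 is to chain homogeneous transforms: the tag's stored world pose, the camera's measured pose of the tag, and the camera's fixed mounting pose on the robot. The frame names and the 4x4-matrix convention below are assumptions for illustration, not details taken from the patent:

```python
import numpy as np

def robot_pose_from_tag(T_world_tag, T_cam_tag, T_robot_cam):
    """Recover the robot's world pose from one detected tag.

    T_world_tag : stored pose of the tag in the world frame (4x4)
    T_cam_tag   : tag pose measured by the camera detector (4x4)
    T_robot_cam : fixed camera mounting pose on the robot (4x4)

    Chain: T_world_robot = T_world_tag * inv(T_cam_tag) * inv(T_robot_cam)
    """
    return T_world_tag @ np.linalg.inv(T_cam_tag) @ np.linalg.inv(T_robot_cam)
```

For example, with the camera mounted at the robot origin and a tag detected 1 m ahead whose stored world position is x = 2 m, the chain places the robot at x = 1 m.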
6. The positioning and navigation method according to claim 5, wherein the camera mounted on the agricultural robot recognizes the visual tag codes between the ridges of the greenhouse environment, and if the camera does not recognize a visual tag code, the agricultural robot is positioned by the encoders and the IMU attitude sensor.
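The encoder/IMU fallback of claim 6 amounts to dead reckoning. A minimal sketch, under the assumption that the IMU supplies an absolute yaw and the encoders supply per-track displacements since the last update:

```python
import math

def dead_reckon(pose, d_left, d_right, yaw_imu):
    """One dead-reckoning update from wheel encoders and an IMU heading.

    pose            : (x, y, theta) current estimate
    d_left, d_right : distance each track travelled since the last update
    yaw_imu         : absolute heading from the IMU, used in place of the
                      encoder-derived heading, which drifts faster on tracks
    """
    x, y, _ = pose
    d = 0.5 * (d_left + d_right)       # mean forward displacement
    x += d * math.cos(yaw_imu)
    y += d * math.sin(yaw_imu)
    return (x, y, yaw_imu)
```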
7. The positioning and navigation method according to claim 1, wherein taking the environment map as the navigation map of the agricultural robot and navigating the agricultural robot comprises:
retrieving the environment map, and taking the 2D grid map in the environment map as the navigation map;
and navigating the agricultural robot based on the navigation map by calling the navigation function package of ROS.
8. The positioning and navigation method according to claim 7, wherein the navigation function package comprises: a pre-planned movement path of the agricultural robot between the ridges of the greenhouse environment.
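The pre-planned inter-ridge path of claim 8 would, in the ROS navigation stack, come from a global planner searching the 2D grid map. As a simplified stand-in for such a planner (the actual ROS planners in the navigation function package are considerably more elaborate), a breadth-first search over an occupancy grid:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on a 2D occupancy grid (0 = free, 1 = occupied).

    Returns a list of (row, col) cells from start to goal, or None if
    the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:        # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None
```

On a 3x3 grid with the center cell occupied (one "ridge" cell), the planner routes around the obstacle from one corner to the opposite corner.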
9. An agricultural robot positioning and navigation system based on visual detection, characterized in that the system comprises:
the data acquisition unit is used for establishing, in the initial stage of the crop growth cycle in the greenhouse environment, an environment map of the greenhouse environment by using the three-dimensional laser radar mounted on the agricultural robot, and calculating, based on the environment map, the pose in a world coordinate system of the visual tag codes between the ridges of the greenhouse environment;
the positioning unit is used for determining the pose of the agricultural robot through a visual detection method based on the environment map and the pose of the visual tag code in a world coordinate system;
and the navigation unit is used for taking the environment map as the navigation map of the agricultural robot and performing autonomous navigation of the agricultural robot based on the pose of the agricultural robot.
CN202210806544.1A 2022-07-08 2022-07-08 A kind of agricultural robot positioning and navigation method and system based on visual detection Pending CN115235470A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210806544.1A CN115235470A (en) 2022-07-08 2022-07-08 A kind of agricultural robot positioning and navigation method and system based on visual detection

Publications (1)

Publication Number Publication Date
CN115235470A true CN115235470A (en) 2022-10-25

Family

ID=83671313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210806544.1A Pending CN115235470A (en) 2022-07-08 2022-07-08 A kind of agricultural robot positioning and navigation method and system based on visual detection

Country Status (1)

Country Link
CN (1) CN115235470A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2662664A2 (en) * 2012-05-11 2013-11-13 Honeywell International Inc. Systems and methods for landmark selection for navigation
CN112650255A (en) * 2020-12-29 2021-04-13 杭州电子科技大学 Robot indoor and outdoor positioning navigation system method based on vision and laser radar information fusion
CN113403942A (en) * 2021-07-07 2021-09-17 西北工业大学 Label-assisted bridge detection unmanned aerial vehicle visual navigation method
CN113902888A (en) * 2021-09-30 2022-01-07 中国电子技术标准化研究院 A method and system for rapid construction of machine vision environment
WO2022007385A1 (en) * 2020-07-09 2022-01-13 上海思岚科技有限公司 Laser and visual positioning fusion method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119022927A (en) * 2024-08-14 2024-11-26 浙江大学 A method, device, medium and product for automatic inspection of robots in greenhouse
CN119022927B (en) * 2024-08-14 2025-11-25 浙江大学 A method, equipment, medium, and product for automated inspection of greenhouses using robots
CN120333463A (en) * 2025-06-17 2025-07-18 西北工业大学 Spatial positioning method and device of wall-climbing robot based on visual recognition

Similar Documents

Publication Publication Date Title
CN111673755B (en) Picking robot control system and method based on visual servo
CN112740274B (en) System and method for VSLAM scale estimation using optical flow sensors on robotic devices
JP2022542241A (en) Systems and methods for augmenting visual output from robotic devices
JP2019537077A (en) Simultaneous positioning map creation navigation method, apparatus and system using indicators
CN113610910B (en) A mobile robot obstacle avoidance method
CN110433467B (en) Operation method and device of table tennis ball picking robot based on binocular vision and ant colony algorithm
CN111766603B (en) Laser SLAM method, system, medium and equipment for mobile robot based on vision-assisted positioning of AprilTag code
US11112780B2 (en) Collaborative determination of a load footprint of a robotic vehicle
CN114714357A (en) Sorting and carrying method, sorting and carrying robot and storage medium
CN110673608A (en) Robot navigation method
CN115235470A (en) A kind of agricultural robot positioning and navigation method and system based on visual detection
Yang et al. Two-stage multi-sensor fusion positioning system with seamless switching for cooperative mobile robot and manipulator system
CN118276585A (en) Automatic obstacle avoidance target recognition method for robots
CN118097592A (en) Robotic arm grasping point positioning method, device, replenishment robot, equipment and medium
CN115836262B (en) Image-based trajectory planning method and motion control method and mobile machine using the same
CN117032215A (en) Mobile robot object identification and positioning method based on binocular vision
CN115289966A (en) Goods shelf detecting and positioning system and method based on TOF camera
CN119690082A (en) Position determining method and device, mowing robot and medium
CN118603087A (en) A mobile robot positioning method integrating laser radar and visual feature code
CN117665811A (en) Autonomous navigation method and device in low-density canopy environment
CN115533890B (en) Adaptive mobile operation device and method
CN117774708A (en) Orchard vehicle automatic navigation system based on laser radar and machine vision fusion
Gu et al. Navigation of biped wall-climbing robots using BIM and ArUco markers
CN116734833A (en) SLAM method and system based on 2D lidar and camera in medical environment
CN115902977A (en) Dual positioning method and system for substation robot based on vision and GPS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination