
CN110517287A - Method, device, equipment and storage medium for obtaining motion trajectory of robotic fish - Google Patents

Method, device, equipment and storage medium for obtaining motion trajectory of robotic fish

Info

Publication number
CN110517287A
Authority
CN
China
Prior art keywords
robotic fish
image
fish
neuron
seed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910410101.9A
Other languages
Chinese (zh)
Inventor
王学伟
王琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Graphic Communication
Original Assignee
Beijing Institute of Graphic Communication
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Graphic Communication filed Critical Beijing Institute of Graphic Communication
Priority to CN201910410101.9A priority Critical patent/CN110517287A/en
Publication of CN110517287A publication Critical patent/CN110517287A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, device, equipment, and storage medium for obtaining the motion trajectory of a robotic fish, which are used to solve the problem in the related art that the position of a robotic fish is lost while tracking the motion trajectory of the underwater robotic fish. The method for obtaining the motion trajectory of the robotic fish includes: acquiring an image of the robotic fish swimming in water; segmenting the image to obtain a plurality of sub-images; identifying the robotic fish in the plurality of sub-images to determine the position of the robotic fish in the image; when the robotic fish cannot be identified in any of the plurality of sub-images, controlling the robotic fish to perform a preset action, then acquiring an image of the robotic fish again and determining the position of the robotic fish in the re-acquired image; and determining the motion trajectory of the robotic fish from the positions of the robotic fish obtained in at least two images. The invention can effectively track the position of a robotic fish underwater.

Description

Method, device, equipment and storage medium for obtaining the motion trajectory of a robotic fish

Technical Field

The present invention relates to the technical field of trajectory tracking, and in particular to a method, device, equipment, and storage medium for obtaining the motion trajectory of a robotic fish.

Background

In recent years, with the continuous progress of bionics, research on underwater robots that imitate fish propulsion (robotic fish) has attracted increasing attention and has become one of the hot topics in the field of underwater robotics. Robotic fish can not only perform underwater operations, ocean monitoring, and reconnaissance in complex environments, but also provide a new line of thinking for developing novel underwater vehicles.

At present, research at home and abroad focuses mainly on individual robotic fish, whereas in practical applications the complexity, uncertainty, and concurrency of tasks require multiple robotic fish to cooperate. Since a robotic fish itself has no positioning or telemetry capability, the vision system is its only "organ" for perceiving the environment; for example, images collected by a camera (e.g., a CCD (Charge Coupled Device) camera) are processed and analyzed to extract useful information as the basis for decision-making and control. Only by tracking the positions and moving directions of the robotic fish and the moving targets quickly and accurately can the decision and control module make the corresponding decisions in time and ensure that multi-robotic-fish cooperative tasks are completed. One of the key technologies for completing such cooperative tasks is real-time tracking of multiple robotic fish, that is, finding the robotic fish in the video images and, after matching the fish across frames one by one, displaying their respective position sequences. However, when tracking the motion trajectory of an underwater robotic fish, the position of the robotic fish is often lost and cannot be obtained.

Summary of the Invention

In view of this, an object of the present invention is to provide a method, device, equipment, and storage medium for obtaining the motion trajectory of a robotic fish; the method can effectively track the position of a robotic fish underwater.

According to a first aspect of the present invention, a method for obtaining the motion trajectory of a robotic fish is provided, including: acquiring an image of a robotic fish swimming in water; segmenting the image to obtain a plurality of sub-images; identifying the robotic fish in the plurality of sub-images to determine the position of the robotic fish in the image; when the robotic fish cannot be identified in any of the plurality of sub-images, controlling the robotic fish to perform a preset action, then acquiring an image of the robotic fish again and determining the position of the robotic fish in the re-acquired image; and determining the motion trajectory of the robotic fish from the positions of the robotic fish obtained in at least two images.

Optionally, segmenting the image includes: taking the color vector of each pixel in the image to be segmented as the input vector of an input neuron, and taking the color vectors of the eight pixels adjacent to that pixel as the feature vectors of a radial basis function (RBF); determining the seed neurons in the image to be segmented, where a pixel is a seed pixel when the maximum Manhattan distance from the pixel to its adjacent pixels is smaller than a first threshold, the neuron corresponding to the seed pixel is a seed neuron, and the seed neurons in the image to be segmented form seed regions; growing the seed regions according to a preset growth rule to obtain a plurality of grouped regions; computing the average feature vector of each grouped region; replacing the feature vectors contained in all neurons of a grouped region with the computed average feature vector of that region; if there is a neuron not connected to any grouped region and the distance from that neuron to an adjacent grouped region is smaller than a second threshold, connecting the neuron to its nearest grouped region; merging adjacent grouped regions to obtain a plurality of regions to be segmented; and segmenting the image to be segmented according to the plurality of regions to obtain the plurality of sub-images.

Optionally, merging adjacent grouped regions includes: when the area of a region to be merged is smaller than a preset area and the color distance of the region to be merged is smaller than a preset color distance threshold, merging the region to be merged into an adjacent region.

Optionally, determining the position of the robotic fish includes: computing, with a Meanshift-based target tracking algorithm, the probabilities of pixel feature values in the target region and the candidate region of the image to obtain a target model description and a candidate model description; measuring the similarity between the target model and the candidate model of the current frame with a similarity function; selecting the candidate model that maximizes the similarity function and obtaining the Meanshift vector of the target model; and iteratively computing the Meanshift vector until convergence to obtain the position of the robotic fish.

Optionally, controlling the robotic fish to perform a preset action includes: controlling the robotic fish to turn by a preset angle in a preset direction.

According to a second aspect of the present invention, a device for obtaining the motion trajectory of a robotic fish is provided, including: a first acquisition module configured to acquire an image of a robotic fish swimming in water; a segmentation module configured to segment the image to obtain a plurality of sub-images; an identification module configured to identify the robotic fish in the plurality of sub-images to determine the position of the robotic fish in the image; a second acquisition module configured to, when the robotic fish cannot be identified in any of the plurality of sub-images, control the robotic fish to perform a preset action, then acquire an image of the robotic fish again and determine the position of the robotic fish in the re-acquired image; and a determination module configured to determine the motion trajectory of the robotic fish from the positions of the robotic fish obtained in at least two images.

Optionally, the segmentation module includes: a setting unit configured to take the color vector of each pixel in the image to be segmented as the input vector of an input neuron and take the color vectors of the eight pixels adjacent to that pixel as the feature vectors of a radial basis function (RBF); a determination unit configured to determine the seed neurons in the image to be segmented, where a pixel is a seed pixel when the maximum Manhattan distance from the pixel to its adjacent pixels is smaller than a first threshold, the neuron corresponding to the seed pixel is a seed neuron, and the seed neurons in the image to be segmented form seed regions; a generation unit configured to grow the seed regions according to a preset growth rule to obtain a plurality of grouped regions; a first computation unit configured to compute the average feature vector of each grouped region; a replacement unit configured to replace the feature vectors contained in all neurons of a grouped region with the computed average feature vector of that region; a connection unit configured to, if there is a neuron not connected to any grouped region and the distance from that neuron to an adjacent grouped region is smaller than a second threshold, connect the neuron to its nearest grouped region; a merging unit configured to merge adjacent grouped regions to obtain a plurality of regions to be segmented; and a segmentation unit configured to segment the image to be segmented according to the plurality of regions to obtain the plurality of sub-images.

Optionally, the merging unit is configured to: when the area of a region to be merged is smaller than a preset area and the color distance of the region to be merged is smaller than a preset color distance threshold, merge the region to be merged into an adjacent region.

According to a third aspect of the present invention, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements any of the methods for obtaining the motion trajectory of a robotic fish provided in the first aspect of the present invention.

According to a fourth aspect of the present invention, a non-transitory computer-readable storage medium is provided, which stores computer instructions for causing a computer to execute any of the methods for obtaining the motion trajectory of a robotic fish provided in the first aspect of the present invention.

As can be seen from the above, in the method for obtaining the motion trajectory of a robotic fish provided by the present invention, when tracking the motion trajectory of a robotic fish swimming in water, the acquired underwater image of the robotic fish is first segmented and the robotic fish is then identified in the segmented images. This improves the recognition rate of the robotic fish and thus helps avoid losing the position of the robotic fish while tracking its motion trajectory.

Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of a method for obtaining the motion trajectory of a robotic fish according to an exemplary embodiment;

Fig. 2 is a schematic diagram of a full-vision real-time underwater multi-target positioning and tracking system according to an exemplary embodiment;

Fig. 3 is a top view of a robotic fish according to an exemplary embodiment;

Fig. 4 is a bottom view of a robotic fish according to an exemplary embodiment;

Fig. 5 is a flowchart of segmenting an image according to an exemplary embodiment;

Fig. 6 is a flowchart of a method for obtaining the motion trajectory of a robotic fish according to an exemplary embodiment;

Fig. 7 is a block diagram of a device for obtaining the motion trajectory of a robotic fish according to an exemplary embodiment.

Detailed Description

To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.

It should be noted that all expressions using "first", "second", "third", and "fourth" in the embodiments of the present invention are used to distinguish two entities or parameters that have the same name but are not identical. Thus "first", "second", "third", and "fourth" are only for convenience of expression and should not be construed as limiting the embodiments of the present invention; this will not be explained again in the subsequent embodiments.

Fig. 1 is a flowchart of a method for obtaining the motion trajectory of a robotic fish according to an exemplary embodiment. As shown in Fig. 1, the method includes:

Step 101: acquiring an image of a robotic fish swimming in water;

In step 101, the image of the robotic fish swimming in water may be acquired by a visual sensor.

Step 102: segmenting the image to obtain a plurality of sub-images;

Step 103: identifying the robotic fish in the plurality of sub-images to determine the position of the robotic fish in the image;

In step 103, an existing image recognition algorithm may be used to identify the robotic fish in the sub-images obtained in step 102, one by one.

Step 104: when the robotic fish cannot be identified in any of the plurality of sub-images, controlling the robotic fish to perform a preset action, then acquiring an image of the robotic fish again and determining the position of the robotic fish in the re-acquired image;

In step 104, if the robotic fish cannot be identified in any of the sub-images, its position has been lost. Possible causes are ripples on the water surface, special lighting on the surface, or a swimming posture that makes the fish's image at the water surface too small. In this case, the robotic fish may be controlled to perform a specified action so that it changes its posture in the water; for example, it may be controlled to make the largest possible turn to the left or right, the largest turn being the maximum turning angle allowed by the fish's physical structure. After the robotic fish has performed the preset action, acquiring an image again and identifying the robotic fish in it increases the probability that the robotic fish is successfully identified.

It should be noted that in step 104, after the robotic fish is controlled to perform the preset action, an image of the robotic fish swimming in the water may be acquired again, the re-acquired image may be segmented into a plurality of sub-images, and the robotic fish may be identified in these sub-images.

Step 105: obtaining the motion trajectory of the robotic fish from the positions of the robotic fish obtained in at least two images.
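As a minimal illustration of step 105 (the data structure and function names are assumptions; the patent does not prescribe them), the trajectory can simply be the ordered sequence of positions determined from at least two images, from which quantities such as the path length can be derived:

```python
from math import hypot
from typing import List, Tuple

Position = Tuple[float, float]  # (x, y) in image coordinates

def build_trajectory(positions: List[Position]) -> List[Position]:
    """Step 105: the trajectory is the ordered sequence of positions
    determined from at least two acquired images."""
    if len(positions) < 2:
        raise ValueError("at least two positions are needed for a trajectory")
    return list(positions)

def path_length(trajectory: List[Position]) -> float:
    """Total length of the polyline through the recorded positions."""
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:]))
```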

As can be seen from the above, in the method for obtaining the motion trajectory of a robotic fish provided by the present invention, when tracking the motion trajectory of a robotic fish swimming in water, the acquired underwater image of the robotic fish is first segmented and the robotic fish is then identified in the segmented images. This improves the recognition rate of the robotic fish and thus helps avoid losing the position of the robotic fish while tracking its motion trajectory.

Fig. 2 is a schematic diagram of a full-vision real-time underwater multi-target positioning and tracking system according to an exemplary embodiment; the above method can be applied to this system. As shown in Fig. 2, the system includes: a visual sensor (1), an aluminum support frame and tank guardrail (2), a water tank (3), and a robotic fish (4). Fig. 3 is a top view of the robotic fish and Fig. 4 is a bottom view. As shown in Figs. 3 and 4, the robotic fish (4) includes: pectoral fins (5), a waterproof fish skin (6), an antenna (7), a charging plug (8), an aluminum skeleton (9), a communication module (10), a fish fin (11), a battery (12), a control system (13), a first joint (14), a second joint (15), and a third joint (16).

The end of the visual sensor (1) is directly connected to the top of the system frame so that it can capture an image of the entire pool; the tank guardrail (2) is fixed around the bottom of the system frame and protects and supports the surroundings of the water tank (3);

the robotic fish (4) can swim at any position inside the water tank (3), and its posture information at any moment is captured by the visual sensor (1);

the communication module (10) and the battery (12) are located at the front end of the aluminum skeleton (9) and rigidly connected to it, so as to balance the first joint (14), the second joint (15), and the third joint (16);

the waterproof fish skin (6) completely wraps the antenna (7), the aluminum skeleton (9), the communication module (10), the battery (12), the control system (13), the first joint (14), the second joint (15), and the third joint (16), isolating these components from the water.

The first joint (14), the second joint (15), and the third joint (16) may be multi-stage servos.

In one implementation, segmenting the image may include: taking the color vector of each pixel in the image to be segmented as the input vector of an input neuron, and taking the color vectors of the eight pixels adjacent to that pixel as the feature vectors of a radial basis function (RBF); determining the seed neurons in the image to be segmented, where a pixel is a seed pixel when the maximum Manhattan distance from the pixel to its adjacent pixels is smaller than a first threshold, the neuron corresponding to the seed pixel is a seed neuron, and the seed neurons in the image to be segmented form seed regions; growing the seed regions according to a preset growth rule to obtain a plurality of grouped regions; computing the average feature vector of each grouped region; replacing the feature vectors contained in all neurons of a grouped region with the computed average feature vector of that region; if there is a neuron not connected to any grouped region and the distance from that neuron to an adjacent grouped region is smaller than a second threshold, connecting the neuron to its nearest grouped region; merging adjacent grouped regions to obtain a plurality of regions to be segmented; and segmenting the image to be segmented according to the plurality of regions to obtain the plurality of sub-images.

In the above image segmentation procedure, any pixel s that satisfies formula (1) can serve as a seed pixel.

μ_s < θ_μ        (1)

where s is the pixel index, j is the index of a neighboring pixel, ||x_s - c_j|| is the Manhattan distance given by formula (2), θ_μ is a preset threshold (i.e., the first threshold mentioned above), and μ_s is the maximum distance between pixel s and its 8 neighboring pixels.
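To make this seed-selection rule concrete, the following Python sketch (an illustrative assumption: numpy, normalized RGB values, and the function name are mine, not the patent's) marks a pixel as a seed when the largest Manhattan distance between its color vector and those of its eight neighbors stays below θ_μ, whose normalized value is set to 1/52 as explained in the next paragraph.

```python
import numpy as np

def select_seed_pixels(img_rgb, theta_mu=1.0 / 52.0):
    """Mark seed pixels: max Manhattan distance to the 8 neighbors < theta_mu.

    img_rgb: H x W x 3 array with values in [0, 255].
    Returns a boolean H x W mask of seed pixels (border pixels excluded).
    """
    x = img_rgb.astype(np.float64) / 255.0          # normalized color vectors
    h, w, _ = x.shape
    mu = np.zeros((h, w))                           # mu_s per pixel
    # Take the maximum Manhattan distance over the 8 neighbor offsets.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(x, dy, axis=0), dx, axis=1)
            dist = np.abs(x - shifted).sum(axis=2)  # ||x_s - c_j|| (L1 norm)
            mu = np.maximum(mu, dist)
    seeds = mu < theta_mu                           # formula (1): mu_s < theta_mu
    seeds[0, :] = seeds[-1, :] = False              # ignore wrap-around at the borders
    seeds[:, 0] = seeds[:, -1] = False
    return seeds
```

Connected groups of seed pixels in the returned mask can then serve as the seed regions that the growth rule expands.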

In the above image segmentation process, each pixel is connected to its 8 neighboring pixels. In color space, according to formula (1), a pixel is a seed pixel when its maximum Manhattan distance to its neighboring pixels is smaller than the preset threshold (i.e., the first threshold). A seed pixel therefore has features similar to those of its neighborhood. In the M-PCNN image segmentation algorithm, a seed pixel is connected to its neighboring pixels to form a seed region, and the seed region keeps expanding by capturing pixels with the same features. It should be noted that seed pixel selection and seed region expansion proceed in parallel; this process is the initial stage of region merging in image segmentation, and repeated segmentation is used to avoid losing important details of the image. In formula (1), the threshold θ_μ is set to a small value. According to color quantization theory, each RGB channel can be divided into 26 quantization levels to distinguish 17576 colors; this color resolution is high enough for human visual perception of about 17000 colors. Based on these figures, θ_μ equals the radius of the quantization interval in RGB space, and its normalized value is set to 1/52. The image segmentation procedure is further described below with reference to Fig. 5. As shown in Fig. 5, the procedure may include:

Step 501: after the image to be segmented is input, take the color vector x of each pixel as the input vector μ of an input neuron and the color vectors of the 8 pixels adjacent to the current pixel as the feature vectors c of the radial basis function (RBF); determine the initial seed points using the seed selection condition; grow the seed regions using the growth rule, taking the selected seed points as starting points and formula (1) as the decision condition, and grow towards the 8 pixels adjacent to the current pixel until a pixel no longer satisfies formula (1), obtaining the region group numbers;

Step 502: compute the average feature vector σ_g of each grouped region as in formula (6), where σR_g, σG_g, and σB_g are the averages of the red, green, and blue components respectively, and M is the number of pixels in the grouped region numbered g. Replace the feature vectors contained in all neurons of that region with the obtained average feature vector (an illustrative sketch of this step, together with the merging of steps 505 to 507, follows the procedure below);

Step 503: determine whether there are unconnected neurons. If an unconnected neuron exists and the difference between the unconnected neuron and an adjacent region (i.e., the distance between the two) is smaller than a given threshold θ_i, go to step 504: connect it to the closest adjacent region using formula (7); otherwise, go to step 505;

where x_μ is the feature vector of the pixel μ to be connected and σ_gj is the average feature vector of the adjacent region numbered gj. The connection operation is performed on all unconnected neurons in parallel, the threshold is updated as θ_{i+1} = θ_i + Δθ_i (θ_{i+1} is the new threshold and Δθ_i is the threshold increment), and step 502 above is repeated;

Step 505: compute the region area R_s and the color distance R_d;

Step 506: determine whether there is a region whose area R_s and color distance R_d satisfy R_s < θ_s and R_d < θ_d; if so, go to step 507; otherwise, the procedure ends;

Step 507: merge the resulting adjacent regions, with all spatial regions merged in parallel;

Merging rule: when the area R_s and the color distance R_d of a region satisfy R_s < θ_s and R_d < θ_d, where θ_s and θ_d are the preset area threshold and color distance threshold respectively, the region is merged into an adjacent region. If the color distances of several neighboring regions are all below the set threshold, exactly one (arbitrary) region is merged at each step;

Step 507 is repeated until the region-merging stop condition is satisfied, completing the color image segmentation.
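As a rough illustration of step 502 and of the merging rule of steps 505 to 507, the following Python sketch is offered under explicit assumptions (numpy arrays, a label map with 0 meaning "not yet connected", 4-connectivity for region adjacency, and the L1 norm as the color distance); it is not the exact procedure of the patent.

```python
import numpy as np

def average_region_features(x, labels):
    """Step 502: replace each pixel's feature vector with the average
    feature vector sigma_g of its grouped region.
    x: H x W x 3 normalized color vectors; labels: H x W integer region ids,
    with 0 meaning 'not yet connected'."""
    out = x.copy()
    for g in np.unique(labels):
        if g == 0:
            continue                          # skip unconnected neurons
        mask = labels == g
        sigma_g = x[mask].mean(axis=0)        # (sigma_R_g, sigma_G_g, sigma_B_g)
        out[mask] = sigma_g
    return out

def merge_small_regions(labels, x, theta_s, theta_d):
    """Steps 505-507: merge one region per pass whose area R_s < theta_s and
    whose color distance R_d to a neighboring region is < theta_d, repeating
    until no region qualifies (the stop condition)."""
    def region_mean(g):
        return x[labels == g].mean(axis=0)

    def neighbors_of(g):
        mask = labels == g
        touch = set()                         # regions 4-adjacent to g
        touch.update(np.unique(labels[:-1, :][mask[1:, :]]))
        touch.update(np.unique(labels[1:, :][mask[:-1, :]]))
        touch.update(np.unique(labels[:, :-1][mask[:, 1:]]))
        touch.update(np.unique(labels[:, 1:][mask[:, :-1]]))
        touch.discard(g)
        return touch

    merged = True
    while merged:
        merged = False
        for g in np.unique(labels):
            if g == 0 or (labels == g).sum() >= theta_s:
                continue                      # R_s not below the area threshold
            best, best_d = None, theta_d
            for n in neighbors_of(g):
                if n == 0:
                    continue
                d = np.abs(region_mean(g) - region_mean(n)).sum()  # color distance R_d
                if d < best_d:
                    best, best_d = n, d
            if best is not None:
                labels[labels == g] = best    # merge exactly one region per step
                merged = True
                break
    return labels
```

Here theta_s plays the role of the preset area threshold θ_s (a pixel count) and theta_d that of the preset color distance threshold θ_d; exactly one region is merged per pass, matching the rule stated above.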

In one implementation, merging adjacent grouped regions may include: when the area of a region to be merged is smaller than a preset area and the color distance of the region to be merged is smaller than a preset color distance threshold, merging the region to be merged into an adjacent region.

In one implementation, determining the position of the robotic fish may include: computing, with a Meanshift-based target tracking algorithm, the probabilities of pixel feature values in the target region and the candidate region of the image to obtain a target model description and a candidate model description; measuring the similarity between the target model and the candidate model of the current frame with a similarity function; selecting the candidate model that maximizes the similarity function and obtaining the Meanshift (mean shift) vector of the target model; and iteratively computing the Meanshift vector until convergence to obtain the position of the robotic fish.
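One common concrete realization of such Meanshift-based localization uses OpenCV's histogram back-projection together with cv2.meanShift; the sketch below is only an illustration of the idea (the hue-histogram feature and the rectangular search window are assumptions, not the patent's exact target and candidate models).

```python
import cv2
import numpy as np

def track_with_meanshift(frames, init_window):
    """Track a target window across frames with histogram back-projection
    and OpenCV's meanShift. init_window is (x, y, w, h) around the fish in
    the first frame; returns the window center per frame."""
    x, y, w, h = init_window
    roi = frames[0][y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Target model: hue histogram of the target region (the "pixel feature
    # value probabilities" of the target model description).
    roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    window = init_window
    centers = []
    for frame in frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Candidate model: back-projected probability of each pixel.
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        # Iterate the Meanshift vector until the termination criterion is met.
        _, window = cv2.meanShift(back_proj, window, term_crit)
        x, y, w, h = window
        centers.append((x + w // 2, y + h // 2))
    return centers
```

Here the normalized hue histogram of the initial window stands in for the target model, the back-projected probability image for the candidate model, and cv2.meanShift iterates the Meanshift vector until convergence, yielding the window whose center is taken as the position of the fish.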

In one implementation, controlling the robotic fish to perform a preset action may include: controlling the robotic fish to turn by a preset angle in a preset direction. The preset angle is, for example, the maximum turning angle allowed by the physical structure of the robotic fish, and the preset direction may be to the left or to the right of the robotic fish's current direction of motion.

Fig. 6 is a flowchart of a method for obtaining the motion trajectory of a robotic fish according to an exemplary embodiment; the method of the present invention is exemplarily described below with reference to Fig. 6. As shown in Fig. 6, the method includes:

Step 601: obtain an image collected by the visual sensor (also called the original image);

Step 602: segment the image obtained in step 601 to obtain a plurality of sub-images;

Step 603: binarize the plurality of sub-images obtained in step 602;

Step 604: perform image segmentation processing on each binarized sub-image;

Step 605: identify the robotic fish in the processed images to obtain a recognition result;

Step 606: execute the control program to control the motion of the robotic fish;

Step 607: determine whether the position of the robotic fish has been lost;

Step 608: if the position of the robotic fish is lost, have the robotic fish stop swimming;

Step 609: control the robotic fish to turn left to the maximum angle;

Step 610: re-acquire the image collected by the visual sensor and return to step 605.
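Putting the Fig. 6 flow together, a skeleton of the tracking loop might look like the following; every callable is a placeholder supplied by the caller for the sensor, segmentation, recognition, and control components described above (hypothetical names, not components defined by the patent).

```python
from typing import Callable, List, Optional, Tuple

Position = Tuple[float, float]

def track_robotic_fish(
    get_frame: Callable[[], object],
    segment_and_binarize: Callable[[object], list],
    identify_fish: Callable[[list], Optional[Position]],
    perform_preset_action: Callable[[], None],
    control_motion: Callable[[Position], None],
    max_steps: int = 1000,
) -> List[Position]:
    """Skeleton of the Fig. 6 flow; the callables stand in for the visual
    sensor, segmentation, recognition, and control components."""
    trajectory: List[Position] = []
    for _ in range(max_steps):
        frame = get_frame()                        # step 601: original image from the visual sensor
        sub_images = segment_and_binarize(frame)   # steps 602-604: sub-images, binarization, segmentation
        position = identify_fish(sub_images)       # step 605: recognition result
        if position is None:                       # step 607: position lost
            perform_preset_action()                # steps 608-609: stop, turn left to the maximum angle
            continue                               # step 610: re-acquire the image and retry
        trajectory.append(position)                # positions from at least two frames give the trajectory
        control_motion(position)                   # step 606: run the control program
    return trajectory
```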

Fig. 7 is a block diagram of a device for obtaining the motion trajectory of a robotic fish according to an exemplary embodiment. As shown in Fig. 7, the device 70 includes:

a first acquisition module 71 configured to acquire an image of a robotic fish swimming in water;

a segmentation module 72 configured to segment the image to obtain a plurality of sub-images;

an identification module 73 configured to identify the robotic fish in the plurality of sub-images to determine the position of the robotic fish in the image;

a second acquisition module 74 configured to, when the robotic fish cannot be identified in any of the plurality of sub-images, control the robotic fish to perform a preset action, then acquire an image of the robotic fish again and determine the position of the robotic fish in the re-acquired image;

a determination module 75 configured to determine the motion trajectory of the robotic fish from the positions of the robotic fish obtained in at least two images.

In one implementation, the segmentation module includes: a setting unit configured to take the color vector of each pixel in the image to be segmented as the input vector of an input neuron and take the color vectors of the eight pixels adjacent to that pixel as the feature vectors of a radial basis function (RBF); a determination unit configured to determine the seed neurons in the image to be segmented, where a pixel is a seed pixel when the maximum Manhattan distance from the pixel to its adjacent pixels is smaller than a first threshold, the neuron corresponding to the seed pixel is a seed neuron, and the seed neurons in the image to be segmented form seed regions; a generation unit configured to grow the seed regions according to a preset growth rule to obtain a plurality of grouped regions; a first computation unit configured to compute the average feature vector of each grouped region; a replacement unit configured to replace the feature vectors contained in all neurons of a grouped region with the computed average feature vector of that region; a connection unit configured to, if there is a neuron not connected to any grouped region and the distance from that neuron to an adjacent grouped region is smaller than a second threshold, connect the neuron to its nearest grouped region; a merging unit configured to merge adjacent grouped regions to obtain a plurality of regions to be segmented; and a segmentation unit configured to segment the image to be segmented according to the plurality of regions to obtain the plurality of sub-images.

In one implementation, the merging unit may be configured to: when the area of a region to be merged is smaller than a preset area and the color distance of the region to be merged is smaller than a preset color distance threshold, merge the region to be merged into an adjacent region.

Optionally, determining the position of the robotic fish may include: computing, with a Meanshift-based target tracking algorithm, the probabilities of pixel feature values in the target region and the candidate region of the image to obtain a target model description and a candidate model description; measuring the similarity between the target model and the candidate model of the current frame with a similarity function; selecting the candidate model that maximizes the similarity function and obtaining the Meanshift vector of the target model; and iteratively computing the Meanshift vector until convergence to obtain the position of the robotic fish.

Optionally, controlling the robotic fish to perform a preset action may include: controlling the robotic fish to turn by a preset angle in a preset direction.

The devices of the above embodiments are used to implement the corresponding methods of the foregoing embodiments and have the beneficial effects of the corresponding method embodiments, which are not repeated here.

Based on the same inventive concept, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the method for obtaining the motion trajectory of a robotic fish described in any of the above embodiments.

Based on the same inventive concept, an embodiment of the present invention further provides a non-transitory computer-readable storage medium, which stores computer instructions for causing a computer to execute the method for obtaining the motion trajectory of a robotic fish described in any of the above embodiments.

Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is merely exemplary and is not intended to imply that the scope of the present disclosure (including the claims) is limited to these examples. Within the spirit of the present invention, technical features in the above embodiments or in different embodiments may also be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the present invention as described above, which are not provided in detail for the sake of brevity.

In addition, to simplify the description and discussion and so as not to obscure the present invention, well-known power/ground connections to integrated circuit (IC) chips and other components may or may not be shown in the provided drawings. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the present invention, which also takes into account the fact that details of the implementation of such block-diagram devices are highly dependent on the platform on which the present invention is to be implemented (i.e., these details should be well within the understanding of those skilled in the art). Where specific details (e.g., circuits) have been set forth to describe exemplary embodiments of the present invention, it will be apparent to those skilled in the art that the present invention can be implemented without these specific details or with variations of them. Accordingly, these descriptions should be regarded as illustrative rather than restrictive.

Although the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art from the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.

The embodiments of the present invention are intended to cover all such alternatives, modifications, and variations that fall within the broad scope of the appended claims. Therefore, any omission, modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A method for obtaining the motion trajectory of a robotic fish, comprising:
acquiring an image of a robotic fish swimming in water;
segmenting the image to obtain a plurality of sub-images;
identifying the robotic fish in the plurality of sub-images to determine the position of the robotic fish in the image;
when the robotic fish cannot be identified in any of the plurality of sub-images, controlling the robotic fish to perform a preset action, then acquiring an image of the robotic fish again and determining the position of the robotic fish in the re-acquired image; and
determining the motion trajectory of the robotic fish according to the positions of the robotic fish in the images obtained at least twice.

2. The method according to claim 1, wherein segmenting the image comprises:
taking the color vector of each pixel in the image to be segmented as the input vector of an input neuron, and taking the color vectors of the eight pixels adjacent to that pixel as the feature vectors of a radial basis function (RBF);
determining the seed neurons in the image to be segmented, wherein a pixel is a seed pixel when the maximum Manhattan distance from the pixel to its adjacent pixels is smaller than a first threshold, the neuron corresponding to the seed pixel is a seed neuron, and the seed neurons in the image to be segmented form seed regions;
growing the seed regions according to a preset growth rule to obtain a plurality of grouped regions;
computing the average feature vector of each grouped region;
replacing the feature vectors contained in all neurons of a grouped region with the computed average feature vector of that region;
if there is a neuron not connected to any grouped region and the distance from that neuron to an adjacent grouped region is smaller than a second threshold, connecting the neuron to its nearest grouped region;
merging adjacent grouped regions to obtain a plurality of regions to be segmented; and
segmenting the image to be segmented according to the plurality of regions to obtain the plurality of sub-images.

3. The method according to claim 2, wherein merging adjacent grouped regions comprises:
when the area of a region to be merged is smaller than a preset area and the color distance of the region to be merged is smaller than a preset color distance threshold, merging the region to be merged into an adjacent region.

4. The method according to claim 1, wherein determining the position of the robotic fish comprises:
computing, with a Meanshift-based target tracking algorithm, the probabilities of pixel feature values in the target region and the candidate region of the image to obtain a target model description and a candidate model description;
measuring the similarity between the target model and the candidate model of the current frame with a similarity function;
selecting the candidate model that maximizes the similarity function and obtaining the Meanshift vector of the target model; and
iteratively computing the Meanshift vector until convergence to obtain the position of the robotic fish.

5. The method according to any one of claims 1 to 4, wherein controlling the robotic fish to perform a preset action comprises:
controlling the robotic fish to turn by a preset angle in a preset direction.

6. A device for obtaining the motion trajectory of a robotic fish, comprising:
a first acquisition module configured to acquire an image of a robotic fish swimming in water;
a segmentation module configured to segment the image to obtain a plurality of sub-images;
an identification module configured to identify the robotic fish in the plurality of sub-images to determine the position of the robotic fish in the image;
a second acquisition module configured to, when the robotic fish cannot be identified in any of the plurality of sub-images, control the robotic fish to perform a preset action, then acquire an image of the robotic fish again and determine the position of the robotic fish in the re-acquired image; and
a determination module configured to determine the motion trajectory of the robotic fish according to the positions of the robotic fish in the images obtained at least twice.

7. The device according to claim 6, wherein the segmentation module comprises:
a setting unit configured to take the color vector of each pixel in the image to be segmented as the input vector of an input neuron and take the color vectors of the eight pixels adjacent to that pixel as the feature vectors of a radial basis function (RBF);
a determination unit configured to determine the seed neurons in the image to be segmented, wherein a pixel is a seed pixel when the maximum Manhattan distance from the pixel to its adjacent pixels is smaller than a first threshold, the neuron corresponding to the seed pixel is a seed neuron, and the seed neurons in the image to be segmented form seed regions;
a generation unit configured to grow the seed regions according to a preset growth rule to obtain a plurality of grouped regions;
a first computation unit configured to compute the average feature vector of each grouped region;
a replacement unit configured to replace the feature vectors contained in all neurons of a grouped region with the computed average feature vector of that region;
a connection unit configured to, if there is a neuron not connected to any grouped region and the distance from that neuron to an adjacent grouped region is smaller than a second threshold, connect the neuron to its nearest grouped region;
a merging unit configured to merge adjacent grouped regions to obtain a plurality of regions to be segmented; and
a segmentation unit configured to segment the image to be segmented according to the plurality of regions to obtain the plurality of sub-images.

8. The device according to claim 7, wherein the merging unit is configured to:
when the area of a region to be merged is smaller than a preset area and the color distance of the region to be merged is smaller than a preset color distance threshold, merge the region to be merged into an adjacent region.

9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method for obtaining the motion trajectory of a robotic fish according to any one of claims 1 to 5.

10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method for obtaining the motion trajectory of a robotic fish according to any one of claims 1 to 5.
CN201910410101.9A 2019-05-17 2019-05-17 Method, device, equipment and storage medium for obtaining motion trajectory of robotic fish Pending CN110517287A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910410101.9A CN110517287A (en) 2019-05-17 2019-05-17 Method, device, equipment and storage medium for obtaining motion trajectory of robotic fish

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910410101.9A CN110517287A (en) 2019-05-17 2019-05-17 Method, device, equipment and storage medium for obtaining motion trajectory of robotic fish

Publications (1)

Publication Number Publication Date
CN110517287A true CN110517287A (en) 2019-11-29

Family

ID=68622512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910410101.9A Pending CN110517287A (en) 2019-05-17 2019-05-17 Method, device, equipment and storage medium for obtaining motion trajectory of robotic fish

Country Status (1)

Country Link
CN (1) CN110517287A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113917927A (en) * 2021-10-26 2022-01-11 沈阳航天新光集团有限公司 Bionic robot fish control system based on Leap Motion interaction

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140266205A1 (en) * 2013-03-12 2014-09-18 MRI Interventions, Inc. Intra-body medical devices for use in mri environments
CN104599262A (en) * 2014-12-18 2015-05-06 浙江工业大学 Multichannel pulse coupling neural network based color image segmentation technology
CN104931091A (en) * 2015-06-24 2015-09-23 金陵科技学院 Bionic robot fish measuring platform and using method thereof
CN107186708A (en) * 2017-04-25 2017-09-22 江苏安格尔机器人有限公司 Trick servo robot grasping system and method based on deep learning image Segmentation Technology
CN109241985A (en) * 2017-07-11 2019-01-18 普天信息技术有限公司 A kind of image-recognizing method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140266205A1 (en) * 2013-03-12 2014-09-18 MRI Interventions, Inc. Intra-body medical devices for use in mri environments
CN104599262A (en) * 2014-12-18 2015-05-06 浙江工业大学 Multichannel pulse coupling neural network based color image segmentation technology
CN104931091A (en) * 2015-06-24 2015-09-23 金陵科技学院 Bionic robot fish measuring platform and using method thereof
CN107186708A (en) * 2017-04-25 2017-09-22 江苏安格尔机器人有限公司 Trick servo robot grasping system and method based on deep learning image Segmentation Technology
CN109241985A (en) * 2017-07-11 2019-01-18 普天信息技术有限公司 A kind of image-recognizing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张鑫 等: "UAV目标跟踪预测算法研究", 《计算机与数学工程》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113917927A (en) * 2021-10-26 2022-01-11 沈阳航天新光集团有限公司 Bionic robot fish control system based on Leap Motion interaction

Similar Documents

Publication Publication Date Title
CN111179324B (en) Object pose estimation method based on fusion of color and depth information in six degrees of freedom
CN111640157B (en) Checkerboard corner detection method based on neural network and application thereof
CN106940704B (en) Positioning method and device based on grid map
CN106780631B (en) Robot closed-loop detection method based on deep learning
CN115816460B (en) Mechanical arm grabbing method based on deep learning target detection and image segmentation
CN110480637B (en) An Image Recognition and Grabbing Method of Robot Arm Parts Based on Kinect Sensor
US10572762B2 (en) Image processing method for performing pattern matching for detecting a position of a detection target
CN114332214A (en) Object pose estimation method, device, electronic device and storage medium
CN111127522B (en) Depth optical flow prediction method, device, equipment and media based on monocular camera
CN108550162B (en) Object detection method based on deep reinforcement learning
CN111368759B (en) Monocular vision-based mobile robot semantic map construction system
CN111080537B (en) Underwater robot intelligent control methods, media, equipment and systems
CN108776989A (en) Low texture plane scene reconstruction method based on sparse SLAM frames
CN112347900B (en) An automatic grasping method of monocular vision underwater target based on distance estimation
CN114387513A (en) Robot grasping method, device, electronic device and storage medium
CN108074251A (en) Mobile Robotics Navigation control method based on monocular vision
CN107351080A (en) A kind of hybrid intelligent research system and control method based on array of camera units
Duffhauss et al. Mv6d: Multi-view 6d pose estimation on rgb-d frames using a deep point-wise voting network
Jiang et al. 3-d scene flow estimation on pseudo-lidar: Bridging the gap on estimating point motion
Wang et al. Joint unsupervised learning of optical flow and depth by watching stereo videos
Chen et al. Agg-net: Attention guided gated-convolutional network for depth image completion
Shao et al. Real-time tracking of moving objects on a water surface
CN110517287A (en) Method, device, equipment and storage medium for obtaining motion trajectory of robotic fish
Liu et al. MonoTAKD: Teaching assistant knowledge distillation for monocular 3D object detection
CN110322476A (en) A kind of method for tracking target improving the optimization of STC and SURF characteristic binding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191129