CN110675449B - A rip current detection method based on binocular camera - Google Patents
A rip current detection method based on binocular camera
- Publication number
- CN110675449B, CN201910821151.6A, CN201910821151A
- Authority
- CN
- China
- Prior art keywords
- rip current
- image
- rip
- current
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a rip current detection method based on a binocular camera, belonging to the fields of computer vision and safety-assurance technology. The method comprises the following steps: first, a convolutional neural network for rip current recognition is trained; next, during actual detection, the trained convolutional neural network is used to determine whether a rip current is present in the sea-wave images captured by the binocular camera; finally, the identified rip current is located. The method can obtain three-dimensional information of the rip current based on binocular stereo vision and can obtain relatively accurate rip current position information in a timely manner. The method is simple and highly operable; on the basis of the three-dimensional information of the rip current, the relevant staff can mark the corresponding beach locations and warn visitors, reducing drowning incidents caused by rip currents.
Description
Technical Field
The invention belongs to the fields of computer vision and safety-assurance technology, and in particular relates to a rip current detection method based on a binocular camera.
Background Art
Rip current velocities are mostly 0.3-1 m/s, reaching up to 3 m/s; their length can reach 30-100 m or more, and their flow direction is almost perpendicular to the shoreline, so they can rapidly drag even strong swimmers into deep water and cause drowning. After storm surges and ocean waves, rip currents have become another marine hazard endangering coastal tourism. About 90% of seaside drownings are caused by rip currents. Rip currents create numerous problems for maintaining the attractiveness of coastal tourism, for beach management, and for handling accidents and disputes, seriously affecting the healthy development of the coastal tourism economy. In China, technical assessment and safety management of rip current hazards have only just begun; investigation and assessment, risk evaluation, refined forecasting, safety management, and public education and warning are all severely lacking. Public understanding of rip currents also has blind spots and misconceptions; cognitive errors and a lack of vigilance have led to many drowning incidents in popular tourist areas, increasing the rescue workload and the difficulty of coastal tourism safety management. An efficient and simple rip current detection method is therefore urgently needed. At present, the traditional detection method for rip currents is to place buoys or current meters near the shore.
Summary of the Invention
In view of the above technical problems in the prior art, the present invention proposes a rip current detection method based on a binocular camera, which is reasonably designed, overcomes the deficiencies of the prior art, and achieves good results.
To achieve the above object, the present invention adopts the following technical solution:
A rip current detection method based on a binocular camera, comprising the following steps:
Step 1: collect an image set for training a convolutional neural network, and train the convolutional neural network;
Step 2: use the image recognition algorithm of the trained convolutional neural network to identify rip currents in the captured images and determine whether a rip current is present;
if the judgment result is that a rip current is present, find the feature points of the rip current and extract them; the feature points include the midpoint and the left and right edge points of the rip current;
if the judgment result is that no rip current is present, capture new images for processing;
Step 3: use a binocular positioning algorithm to locate the identified feature points of the rip current, and use the positions of the feature points to mark the position of the rip current;
the three-dimensional coordinates of the rip current are obtained based on binocular stereo vision;
Step 4: based on the position information of the feature points of the rip current, delimit a danger zone and feed it back to the relevant staff, who warn visitors by placing markers.
Preferably, in step 2, the binocular camera comprises a left camera and a right camera; since the data collected by the two cameras are essentially the same, the image from a single camera is used for rip current recognition.
Preferably, the specific steps of training the convolutional neural network in step 1 are as follows:
Step 1.1: image data preprocessing, which specifically includes the following steps:
Step 1.1.1: process the training data collected in step 1 and convert it into a data format that TensorFlow can recognize;
Step 1.1.2: add a label to each image, place the images and labels into arrays, and convert the arrays into a format that TensorFlow can recognize;
Step 1.1.3: standardize the images, including cropping and padding;
Step 1.2: build a convolutional neural network model based on the TensorFlow framework;
The classic LeNet-5 convolutional neural network model is adopted. The model has a seven-layer structure: convolutional layer - pooling layer - convolutional layer - pooling layer - fully connected layer - fully connected layer - fully connected output layer. The convolutional layers extract preliminary rip current features, the pooling layers extract the main rip current features, and the fully connected layers aggregate the features of each part;
Step 1.3: use the constructed convolutional neural network model to train the network for identifying rip currents.
Preferably, step 3 specifically includes the following steps:
Step 3.1: calibrate the left and right cameras;
The cameras are calibrated with Zhang's calibration method, obtaining the intrinsic and extrinsic parameters of the left and right cameras;
Step 3.2: rectify the left and right camera images;
Using the intrinsic and extrinsic parameters obtained from calibration, perform distortion correction and stereo rectification on the images;
Step 3.3: perform stereo matching on the images;
Use the SGBM algorithm to perform stereo matching on the images and obtain a disparity map;
Step 3.4: obtain the three-dimensional coordinate information of the rip current;
Using the disparity map and the intrinsic parameters of the left and right cameras, obtain a depth image; according to the camera model and the parameters obtained from calibration, obtain the three-dimensional coordinates of the rip current feature points;
Step 3.5: take the position of the midpoint of the rip current as the position of the rip current, and delimit a danger zone from the obtained left and right edge points of the rip current, i.e. expand the rip current region according to the nature of the rip current and the actual situation to obtain the danger zone.
Preferably, in step 3.3, SGBM is a semi-global block matching algorithm, which specifically includes the following steps:
S1: image preprocessing;
Process the image with a horizontal Sobel operator and map the pixel values to obtain a new image; the preprocessing yields the gradient information of the original image;
S2: cost computation;
The cost computation has two parts: first, a gradient cost is computed from the gradient information of the preprocessed image by a sampling-based method; second, a SAD cost is computed from the original image by a sampling-based method;
S3: dynamic programming;
Energy is accumulated along each direction following the idea of dynamic programming, and the matching costs of all directions are then summed to obtain the total matching cost;
S4: post-processing;
The post-processing includes a uniqueness check, sub-pixel interpolation, and a left-right consistency check.
Beneficial technical effects of the present invention:
The invention captures images with a binocular camera, identifies rip currents with an image recognition algorithm based on a convolutional neural network, and locates them using the principle of binocular positioning, providing a new approach to rip current detection. The binocular-camera-based detection method can obtain relatively accurate rip current position information in a timely manner; it is simple, low in labor cost, and highly operable. Using the obtained rip current position information, the relevant staff can mark the corresponding beach locations and warn visitors, reducing drowning incidents caused by rip currents and providing a practical means for coastal tourism management.
Brief Description of the Drawings
Fig. 1 is a flowchart of a rip current detection method based on a binocular camera;
Fig. 2 is a flowchart of the binocular positioning algorithm;
Fig. 3 is a flowchart of the image recognition algorithm;
Fig. 4 is a schematic diagram of a rip current model.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments:
As shown in Fig. 1, a rip current detection method based on a binocular camera includes the following steps:
Step 1: collect an image set for training a convolutional neural network, and train the convolutional neural network.
The training data provide the basis for subsequent image recognition. They may be nearshore wave images captured with a binocular camera, with or without a rip current; nearshore wave image data available on the Internet, with or without a rip current; or computer-generated nearshore wave images of an idealized rip current model.
The specific steps for training the convolutional neural network in step 1 are as follows:
Step 1.1: image data preprocessing, which specifically includes the following steps:
Step 1.1.1: process the training data collected in step 1 and convert it into a data format that TensorFlow can recognize;
Step 1.1.2: add a label to each image, place the images and labels into arrays, and convert the arrays into a format that TensorFlow can recognize;
Step 1.1.3: standardize the images, including cropping and padding;
Step 1.2: build a convolutional neural network model based on the TensorFlow framework;
The classic LeNet-5 convolutional neural network model is adopted. The model has a seven-layer structure: convolutional layer - pooling layer - convolutional layer - pooling layer - fully connected layer - fully connected layer - fully connected output layer. The convolutional layers extract preliminary rip current features, the pooling layers extract the main rip current features, and the fully connected layers aggregate the features of each part;
Step 1.3: use the constructed convolutional neural network model to train the network for identifying rip currents.
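By way of illustration only (not part of the original disclosure), the following is a minimal TensorFlow/Keras sketch of a LeNet-5-style network of the kind described above (conv - pool - conv - pool - FC - FC - output). The 32x32 grayscale input, filter counts, and the binary rip-current/no-rip-current output are assumptions of the sketch, not values specified by the patent.

```python
import tensorflow as tf

# Minimal LeNet-5-style model for binary rip current classification.
# Input size, filter counts and training settings are illustrative assumptions.
def build_rip_current_lenet(input_shape=(32, 32, 1)):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(6, 5, activation='relu', input_shape=input_shape),  # convolutional layer
        tf.keras.layers.MaxPooling2D(2),                                           # pooling layer
        tf.keras.layers.Conv2D(16, 5, activation='relu'),                          # convolutional layer
        tf.keras.layers.MaxPooling2D(2),                                           # pooling layer
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(120, activation='relu'),                             # fully connected layer
        tf.keras.layers.Dense(84, activation='relu'),                              # fully connected layer
        tf.keras.layers.Dense(1, activation='sigmoid'),                            # output: rip current / no rip current
    ])

model = build_rip_current_lenet()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# train_images: (N, 32, 32, 1) float32 in [0, 1]; train_labels: (N,) 0/1 labels
# model.fit(train_images, train_labels, epochs=10, batch_size=32)
```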
Step 2: after training is completed, carry out the actual operation. The images captured of actual ocean waves are called actual data, i.e. the images collected during actual rip current observation. Using the image recognition algorithm of the trained convolutional neural network (the flow of which is shown in Fig. 2), identify rip currents in the captured images and determine whether a rip current is present;
if the judgment result is that a rip current is present, find the feature points of the rip current; the feature points include the midpoint and the left and right edge points of the rip current;
if the judgment result is that no rip current is present, capture new images for processing;
The binocular camera comprises a left camera and a right camera; since the data collected by the two cameras are essentially the same, the image from a single camera is used for rip current recognition.
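A possible sketch of this recognition step on a single (left-camera) frame, reusing the model from the previous sketch; the file name, resize target, and decision threshold are illustrative assumptions.

```python
import cv2
import numpy as np

# Load one frame from the left camera (file name is illustrative).
frame = cv2.imread('left_frame.jpg', cv2.IMREAD_GRAYSCALE)
x = cv2.resize(frame, (32, 32)).astype(np.float32) / 255.0   # match the assumed training input size
x = x.reshape(1, 32, 32, 1)

prob = float(model.predict(x)[0, 0])
if prob > 0.5:                                               # illustrative threshold
    print('Rip current detected; extract midpoint and edge feature points.')
else:
    print('No rip current detected; capture a new image.')
```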
Step 3: use the binocular positioning algorithm (the flow of which is shown in Fig. 3) to locate the identified feature points of the rip current, and use their positions to mark the position of the rip current;
The three-dimensional coordinates of the rip current are obtained based on binocular stereo vision;
This specifically includes the following steps:
Step 3.1: calibrate the left and right cameras;
The cameras are calibrated with Zhang's calibration method, obtaining the intrinsic and extrinsic parameters of the left and right cameras. Zhang's method is a camera calibration method based on a moving planar template; it lies between the traditional camera calibration methods and camera self-calibration, overcoming the shortcomings of both while combining their advantages. The specific steps include computing the homography matrices, the intrinsic matrices, the extrinsic matrices, and the distortion parameters.
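OpenCV's calibration routines implement Zhang's planar-template method; the sketch below (an illustration, not the patent's prescribed procedure) assumes a 9x6 chessboard with 25 mm squares and image pairs under the hypothetical paths calib/left_*.jpg and calib/right_*.jpg.

```python
import glob
import cv2
import numpy as np

pattern_size, square = (9, 6), 0.025                    # assumed chessboard geometry (metres)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square

objpoints, imgpoints_l, imgpoints_r = [], [], []
for fl, fr in zip(sorted(glob.glob('calib/left_*.jpg')), sorted(glob.glob('calib/right_*.jpg'))):
    gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(gl, pattern_size)
    ok_r, corners_r = cv2.findChessboardCorners(gr, pattern_size)
    if ok_l and ok_r:                                   # keep only pairs where the board is found in both views
        objpoints.append(objp)
        imgpoints_l.append(corners_l)
        imgpoints_r.append(corners_r)

image_size = gl.shape[::-1]                             # (width, height)
_, K_l, D_l, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, image_size, None, None)
_, K_r, D_r, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, image_size, None, None)
_, K_l, D_l, K_r, D_r, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r, K_l, D_l, K_r, D_r, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)                      # R, T: extrinsics between left and right camera
```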
Step 3.2: rectify the left and right camera images;
Using the intrinsic and extrinsic parameters obtained from calibration, perform distortion correction and stereo rectification on the images;
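One way this rectification step is commonly done in OpenCV, continuing from the calibration sketch above (variable names and the frame file names carry over and are illustrative assumptions).

```python
# Stereo rectification: compute rectification transforms and remapping tables
# from the calibrated parameters, then undistort and rectify each frame.
R_l, R_r, P_l, P_r, Q, _, _ = cv2.stereoRectify(K_l, D_l, K_r, D_r, image_size, R, T)
map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, D_l, R_l, P_l, image_size, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, D_r, R_r, P_r, image_size, cv2.CV_32FC1)

left_raw = cv2.imread('left_frame.jpg', cv2.IMREAD_GRAYSCALE)    # illustrative file names
right_raw = cv2.imread('right_frame.jpg', cv2.IMREAD_GRAYSCALE)
left_rect = cv2.remap(left_raw, map_lx, map_ly, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_raw, map_rx, map_ry, cv2.INTER_LINEAR)
```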
Step 3.3: perform stereo matching on the images;
The SGBM algorithm is used to perform stereo matching on the images and obtain a disparity map, from which the actual three-dimensional coordinates of points in the image are finally obtained. SGBM is a semi-global block matching algorithm characterized by good disparity quality and high speed. The steps of the SGBM algorithm are as follows:
S1: image preprocessing;
Process the image with a horizontal Sobel operator and map the pixel values to obtain a new image; the preprocessing yields the gradient information of the original image;
S2: cost computation;
The cost computation has two parts: first, a gradient cost is computed from the gradient information of the preprocessed image by a sampling-based method; second, a SAD cost is computed from the original image by a sampling-based method;
S3: dynamic programming;
Energy is accumulated along each direction following the idea of dynamic programming, and the matching costs of all directions are then summed to obtain the total matching cost;
S4: post-processing;
The post-processing includes a uniqueness check, sub-pixel interpolation, and a left-right consistency check.
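OpenCV exposes this pipeline through cv2.StereoSGBM_create; the sketch below uses the rectified pair from the previous sketch, and the parameter values are common illustrative starting points rather than values prescribed by the patent.

```python
# Semi-global block matching on the rectified pair; OpenCV returns disparities scaled by 16.
block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,            # search range; must be divisible by 16
    blockSize=block,
    P1=8 * block * block,          # smoothness penalties used in the dynamic-programming stage
    P2=32 * block * block,
    disp12MaxDiff=1,               # left-right consistency check
    uniquenessRatio=10,            # uniqueness check
    speckleWindowSize=100,
    speckleRange=32)
disparity = sgbm.compute(left_rect, right_rect).astype(np.float32) / 16.0
```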
Step 3.4: obtain the three-dimensional coordinates of the rip current;
Using the disparity map and the intrinsic parameters of the left and right cameras, obtain a depth image; according to the camera model and the parameters obtained from calibration, obtain the three-dimensional coordinates of the rip current feature points;
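A sketch of recovering the 3D coordinates of the identified feature points from the disparity map, using the reprojection matrix Q produced by cv2.stereoRectify above; the pixel coordinates of the midpoint and edge points are illustrative placeholders for values returned by the recognition step.

```python
# Reproject the disparity map to 3D (coordinates in the left-camera frame,
# in the same units as the calibration chessboard squares, here metres).
points_3d = cv2.reprojectImageTo3D(disparity, Q)

# (u, v) pixel coordinates of the rip current feature points -- illustrative values.
feature_pixels = {'midpoint': (640, 360), 'left_edge': (520, 362), 'right_edge': (760, 358)}
feature_coords = {name: points_3d[v, u] for name, (u, v) in feature_pixels.items()}
for name, (X, Y, Z) in feature_coords.items():
    print(f'{name}: X={X:.2f} m, Y={Y:.2f} m, Z={Z:.2f} m')
```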
Step 3.5: take the position of the midpoint of the rip current as the position of the rip current, and delimit a danger zone from the obtained left and right edge points of the rip current, i.e. expand the rip current region appropriately according to the nature of the rip current to obtain the danger zone, for example by extending 5 m to the left of the left edge point and 5 m to the right of the right edge point.
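As an illustration of this step, the danger zone could be delimited by expanding the coordinates of the edge points by the 5 m margin mentioned above; treating X as the alongshore axis is an assumption of the sketch.

```python
# Expand the rip current extent by a 5 m safety margin on each side (per the example above)
# along the assumed alongshore axis X.
margin = 5.0
danger_left = feature_coords['left_edge'][0] - margin
danger_right = feature_coords['right_edge'][0] + margin
rip_position = feature_coords['midpoint']          # reported position of the rip current
print(f'Danger zone (alongshore): {danger_left:.1f} m to {danger_right:.1f} m')
```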
Step 4: based on the position information of the feature points of the rip current, delimit a danger zone and feed it back to the relevant staff, who warn visitors by placing markers.
Of course, the above description is not a limitation of the present invention, and the present invention is not limited to the above examples. Changes, modifications, additions, or substitutions made by those skilled in the art within the essential scope of the present invention also fall within the protection scope of the present invention.
Claims (3)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910821151.6A CN110675449B (en) | 2019-09-02 | 2019-09-02 | A rip current detection method based on binocular camera |
PCT/CN2019/115513 WO2021042490A1 (en) | 2019-09-02 | 2019-11-05 | Offshore current detection method based on binocular camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910821151.6A CN110675449B (en) | 2019-09-02 | 2019-09-02 | A rip current detection method based on binocular camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110675449A CN110675449A (en) | 2020-01-10 |
CN110675449B true CN110675449B (en) | 2020-12-08 |
Family
ID=69076671
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910821151.6A Active CN110675449B (en) | 2019-09-02 | 2019-09-02 | A rip current detection method based on binocular camera |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110675449B (en) |
WO (1) | WO2021042490A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110763426B (en) * | 2019-09-29 | 2021-09-10 | 哈尔滨工程大学 | Method and device for simulating offshore flow in pool |
CN112950610A (en) * | 2021-03-18 | 2021-06-11 | 河海大学 | Method and system for monitoring and early warning of fission flow |
CN113936248B (en) * | 2021-10-12 | 2023-10-03 | 河海大学 | A hazard early warning method for beach personnel based on image recognition |
CN115100153B (en) * | 2022-06-29 | 2024-09-24 | 武汉工程大学 | In-pipe detection method, device, electronic equipment and medium based on binocular matching |
CN115663665B (en) * | 2022-12-08 | 2023-04-18 | 国网山西省电力公司超高压变电分公司 | Binocular vision-based protection screen cabinet air-open state checking device and method |
CN117131799B (en) * | 2023-08-17 | 2024-02-23 | 浙江大学 | Bottom bed shear stress calculation method based on image |
CN117395377B (en) * | 2023-12-06 | 2024-03-22 | 上海海事大学 | Multi-view fusion-based coastal bridge sea side safety monitoring method, system and medium |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ITRM20010045A1 (en) * | 2001-01-29 | 2002-07-29 | Consiglio Nazionale Ricerche | SYSTEM AND METHOD FOR DETECTING THE RELATIVE POSITION OF AN OBJECT COMPARED TO A REFERENCE POINT. |
US20050271266A1 (en) * | 2001-06-01 | 2005-12-08 | Gregory Perrier | Automated rip current detection system |
US9165453B2 (en) * | 2012-01-12 | 2015-10-20 | Earl Senchuk | Rip current sensor and warning system with anchor |
KR101191944B1 (en) * | 2012-03-27 | 2012-10-17 | 대한민국(국토해양부 국립해양조사원장) | Method for issuing notice of warning for rip currents |
CN103308000B (en) * | 2013-06-19 | 2015-11-18 | 武汉理工大学 | Based on the curve object measuring method of binocular vision |
CN104933718B (en) * | 2015-06-23 | 2019-02-15 | 广东省智能制造研究所 | A physical coordinate positioning method based on binocular vision |
CN105389468B (en) * | 2015-11-06 | 2017-05-10 | 中国海洋大学 | A method of rip current prediction |
JP2017133901A (en) * | 2016-01-27 | 2017-08-03 | ソニー株式会社 | Monitoring device and monitoring method, and program |
KR101947782B1 (en) * | 2017-02-22 | 2019-02-13 | 한국과학기술원 | Apparatus and method for depth estimation based on thermal image, and neural network learning method |
CN106982359B (en) * | 2017-04-26 | 2019-11-05 | 深圳先进技术研究院 | Binocular video monitoring method and system and computer readable storage medium |
CN107092893B (en) * | 2017-04-28 | 2018-06-19 | 杨荧 | A kind of recognition methods based on image procossing |
CN108154134B (en) * | 2018-01-11 | 2019-07-23 | 天格科技(杭州)有限公司 | Pornographic image detection method is broadcast live in internet based on depth convolutional neural networks |
CN108665484B (en) * | 2018-05-22 | 2021-07-09 | 国网山东省电力公司电力科学研究院 | A method and system for hazard identification based on deep learning |
CN109048926A (en) * | 2018-10-24 | 2018-12-21 | 河北工业大学 | A kind of intelligent robot obstacle avoidance system and method based on stereoscopic vision |
CN109903507A (en) * | 2019-03-04 | 2019-06-18 | 上海海事大学 | A fire intelligent monitoring system and method based on deep learning |
CN110060299A (en) * | 2019-04-18 | 2019-07-26 | 中国测绘科学研究院 | Danger source identifies and positions method in passway for transmitting electricity based on binocular vision technology |
2019
- 2019-09-02 CN CN201910821151.6A patent/CN110675449B/en active Active
- 2019-11-05 WO PCT/CN2019/115513 patent/WO2021042490A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2021042490A1 (en) | 2021-03-11 |
CN110675449A (en) | 2020-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110675449B (en) | A rip current detection method based on binocular camera | |
CN108364280B (en) | Method and equipment for automatically describing structural crack and accurately measuring width | |
CN111476767B (en) | A Defect Recognition Method of High-speed Rail Fasteners Based on Heterologous Image Fusion | |
CN107025432B (en) | A kind of efficient lane detection tracking and system | |
CN111951392A (en) | A Terrain Reconstruction Method Above Low Water Level in Continental Beaches Based on Time Series Remote Sensing Images and Water Level Monitoring Data | |
CN102389361B (en) | Blindman outdoor support system based on computer vision | |
CN105204411B (en) | A kind of ship berthing auxiliary system and method based on binocular stereo vision | |
CN107884767A (en) | A kind of method of binocular vision system measurement ship distance and height | |
CN109580649B (en) | A method and system for identification and projection correction of surface cracks in engineering structures | |
CN106228579B (en) | A method for extracting dynamic water level information from video images based on geographic spatiotemporal scenes | |
CN114396921B (en) | Method for measuring tidal height and propagation speed of Yangtze river on basis of unmanned aerial vehicle | |
CN105225229A (en) | Fish based on vision signal cross dam movement locus locating device and method | |
CN105913013A (en) | Binocular vision face recognition algorithm | |
CN113936248B (en) | A hazard early warning method for beach personnel based on image recognition | |
CN111582084B (en) | A method and system for detecting foreign objects on rails from a space-based perspective based on weakly supervised learning | |
CN106156758B (en) | A kind of tidal saltmarsh method in SAR seashore image | |
CN107578397A (en) | A new non-contact contact wire wear detection method | |
CN107358632A (en) | Underwater Camera scaling method applied to underwater binocular stereo vision | |
CN104361627A (en) | SIFT-based (scale-invariant feature transform) binocular vision three-dimensional image reconstruction method of asphalt pavement micro-texture | |
CN113159042A (en) | Laser vision fusion unmanned ship bridge opening passing method and system | |
CN116778227B (en) | Target detection method, system and device based on infrared image and visible light image | |
CN111611912A (en) | A detection method for abnormal head bowing behavior of pedestrians based on human joint points | |
CN114663344A (en) | Train wheel set tread defect identification method and device based on image fusion | |
CN114241310A (en) | Intelligent identification method of dike piping hazards based on improved YOLO model | |
CN115984687A (en) | Water boundary measurement method, device, equipment and medium for dynamic bed model test of river engineering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20221130
Address after: 266555 Building 50, No. 1208, Qichangcheng Road, Huangdao District, Qingdao, Shandong
Patentee after: Qingdao Jianguo Zhongji Surveying and Mapping Technology Information Co.,Ltd.
Address before: 579 Qianwangang Road, Huangdao District, Qingdao City, Shandong Province
Patentee before: Shandong University of Science and Technology