
CN110069993B - A target vehicle detection method based on deep learning - Google Patents

A target vehicle detection method based on deep learning

Info

Publication number
CN110069993B
CN110069993B (application CN201910206458.5A)
Authority
CN
China
Prior art keywords
target vehicle
neural network
convolutional neural
training
deep convolutional
Prior art date
Legal status
Active
Application number
CN201910206458.5A
Other languages
Chinese (zh)
Other versions
CN110069993A (en)
Inventor
瞿三清
许仲聪
卢凡
陈广
董金虎
陈凯
Current Assignee
Tongji University
Original Assignee
Tongji University
Priority date
Filing date
Publication date
Application filed by Tongji University
Priority to CN201910206458.5A
Publication of CN110069993A
Application granted
Publication of CN110069993B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/242 Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a target vehicle detection method based on deep learning, comprising the following steps: 1) collecting tail feature point cloud data of the target vehicle through two single-line laser radars arranged at the tail of the parking robot, and preprocessing the data to obtain a binary image; 2) annotating the binary image to obtain the position of the tail of the target vehicle therein, thereby generating a training data set; 3) constructing a deep convolutional neural network suitable for target vehicle detection together with its loss function; 4) augmenting the training data set, inputting it into the deep convolutional neural network, training and updating the network parameters according to the difference between the output values and the training ground truth to obtain the optimal network parameters, and performing detection with the trained deep convolutional neural network. Compared with the prior art, the present invention has the advantages of high robustness, no dependence on hand-crafted features, and low detection cost.

Description

Target vehicle detection method based on deep learning
Technical Field
The invention relates to the technical field of intelligent parking, in particular to a target vehicle detection method based on deep learning.
Background
In the field of intelligent driving, detection of the target vehicle is one of the key tasks for guaranteeing the safe driving of an unmanned vehicle. In the technical field of intelligent parking, detecting the position of the target vehicle is a key step in enabling a parking robot to align accurately with the target vehicle. Because the laser radar is only slightly influenced by the environment and can acquire accurate point cloud data of a target vehicle, it has become the most important sensor for detecting and positioning vehicles in the field of intelligent parking.
At present, in the technical field of intelligent parking, target vehicle detection mainly relies on traditional detection algorithms based on hand-crafted features of the target vehicle. Although such algorithms require little computation and run fast, in many scenes the actual characteristics of the target vehicle cannot be matched well by hand-crafted features, so traditional target vehicle detection algorithms suffer from a low recall rate and poor robustness.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a target vehicle detection method based on deep learning.
The purpose of the invention can be realized by the following technical scheme:
a target vehicle detection method based on deep learning is used for realizing intelligent parking, and comprises the following steps:
1) acquiring tail characteristic point cloud data of a target vehicle through two single-line laser radars arranged at the tail of the parking robot, and preprocessing the data to obtain a binary image;
2) labeling the binary image to obtain the position of the tail of the target vehicle, so as to generate a training data set;
3) constructing a deep convolutional neural network suitable for target vehicle detection and a loss function thereof;
4) augmenting the training data set and inputting it into the deep convolutional neural network, training and updating the parameters of the convolutional neural network according to the difference between the output values and the training ground truth to obtain the optimal network parameters, and performing detection with the trained deep convolutional neural network.
The step 1) specifically comprises the following steps:
11) converting the collected point cloud data from the polar coordinate system with the single-line laser radar as the coordinate origin into a globally unified Cartesian coordinate system;
12) gridding the coordinate-converted point cloud data and converting it into a binary image.
In the step 11), the conversion expression is:
(x_j1, y_j1) = (r_j·cos φ_j, r_j·sin φ_j)
(x_j0, y_j0) = (x_j1, y_j1)·R + t
wherein (r_j, φ_j) are the polar coordinates of point j in the original point cloud data, (x_j1, y_j1) are the coordinates of point j converted into the Cartesian coordinate system with the laser radar as the coordinate origin, (x_j0, y_j0) are the coordinates in the globally unified Cartesian coordinate system, R is the conversion rotation matrix, and t is the conversion translation vector.
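As an illustration, the conversion can be written as a short numpy sketch; the function name is ours, and the assumption that R and t come from each laser radar's extrinsic calibration is not stated in the patent:

```python
import numpy as np

def polar_to_global(r, phi, R, t):
    """Convert one single-line lidar scan from polar coordinates to the
    globally unified Cartesian frame of the parking robot.

    r, phi : (N,) arrays of ranges r_j and bearing angles phi_j
    R      : (2, 2) conversion rotation matrix of this lidar
    t      : (2,)   conversion translation vector of this lidar
    """
    # Polar -> lidar-centred Cartesian: (x_j1, y_j1) = (r_j cos(phi_j), r_j sin(phi_j))
    pts_lidar = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)  # (N, 2)
    # Lidar frame -> global frame (row-vector convention): (x_j0, y_j0) = (x_j1, y_j1) R + t
    return pts_lidar @ R + t
```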
In the step 2), the annotation content includes pixel-level annotation of the image and bounding-box annotation of the target vehicle.
The deep convolutional neural network is a Faster R-CNN convolutional neural network that takes a binary image of a set size as input and outputs the position and confidence of the target vehicle in the input binary image.
The loss function of the deep convolutional neural network is expressed as:
Loss = L_cls(p, u) + λ[u = 1]·L_loc(t_u, v)
L_cls(p, u) = -log(p)
L_loc(t_u, v) = smooth_L1(x), where smooth_L1(x) = 0.5·x^2 if |x| < 1, and |x| - 0.5 otherwise
x = t_u - v
wherein L_cls(p, u) is the target classification loss sub-function, L_loc(t_u, v) is the distance loss sub-function, p is the prediction factor for the target class, u is the actual factor of the corresponding class, and λ is the weighting of the loss function; u = 1 indicates that the region of interest is the target vehicle and u = 0 indicates that the region of interest is background; t_u is the predicted position factor, v is the true position factor in the training sample, and x is the deviation of the predicted value from the true value.
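A minimal numerical sketch of this loss follows; it assumes, as in Faster R-CNN, that t_u and v are four-dimensional bounding-box regression factors, which the patent does not state explicitly:

```python
import numpy as np

def smooth_l1(x):
    # Smoothed Manhattan (smooth L1) distance: 0.5*x^2 if |x| < 1, else |x| - 0.5
    a = np.abs(x)
    return np.where(a < 1.0, 0.5 * a ** 2, a - 0.5)

def detection_loss(p, u, t_u, v, lam=1.0):
    """Loss = L_cls(p, u) + lam * [u == 1] * L_loc(t_u, v).

    p   : predicted probability for the target class of the region of interest
    u   : 1 if the region contains the target vehicle, 0 if it is background
    t_u : predicted position factors (assumed: 4 box-regression values)
    v   : true position factors from the training sample
    """
    l_cls = -np.log(p)                                 # L_cls(p, u) = -log(p)
    x = np.asarray(t_u, float) - np.asarray(v, float)  # x = t_u - v
    l_loc = smooth_l1(x).sum()                         # L_loc(t_u, v)
    return l_cls + lam * (1.0 if u == 1 else 0.0) * l_loc
```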
The specific steps for augmenting the training data set are:
randomly flipping the images horizontally, cropping them and uniformly scaling them to a fixed size, and applying the corresponding flipping, cropping and scaling to the annotation data.
Training the deep convolutional neural network specifically comprises:
iteratively updating the parameters of the deep convolutional neural network with the gradient-descent back-propagation method according to the loss function, and taking the network parameters obtained after iterating to the maximum set number of times as the optimal network parameters to complete the training.
Compared with the prior art, the invention has the following advantages:
Firstly, high robustness: the laser radar can acquire accurate point cloud data of the target vehicle under various complex working conditions, and together with a highly robust target vehicle detection algorithm this ensures that the vehicle detection method remains robust and the detection results stay relatively accurate under complex working conditions.
Secondly, no dependence on hand-crafted features: compared with traditional target vehicle detection algorithms, the method of the invention learns the target features through a deep neural network, does not rely on hand-crafted features, and achieves a high recall rate.
Thirdly, low detection cost: the invention adopts two single-line laser radars as sensors, which is inexpensive compared with a multi-line laser radar.
Drawings
FIG. 1 is a flow chart of the detection method of the present invention.
Fig. 2 is a schematic structural diagram of an intelligent parking robot in an embodiment of the invention.
Fig. 3 is a schematic diagram of a deep convolutional network structure of a target vehicle in an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
The invention provides a method for detecting a target vehicle by utilizing a single-line laser radar based on deep learning. As shown in fig. 1, the method comprises the steps of:
(1) acquiring tail characteristic point cloud data of the target vehicle with 2 single-line laser radars and preprocessing the acquired point cloud data;
(2) manually marking the position of the tail of the target vehicle in the acquired data to construct a data set for training;
(3) constructing a deep convolutional neural network and a loss function suitable for target vehicle detection;
(4) augmenting the data used for training in step (2) and inputting it into the deep convolutional neural network constructed in step (3); the parameters in the convolutional neural network are trained and updated according to the difference between the output values and the training ground truth, finally yielding satisfactory network parameters.
In this embodiment, the preprocessing of the point cloud data in step (1) includes two steps, coordinate transformation and image transformation, as follows:
(1-1) The 2 single-line laser radars in this embodiment are located on the two sides of the rear of the intelligent parking robot, whose structure is shown in fig. 2. The two single-line laser radars collect surrounding point cloud data at a fixed frame rate, and the acquisition results are stored in polar coordinates with each laser radar as the coordinate origin. The collected point cloud data are then converted into the globally unified Cartesian coordinate system of the parking robot.
The conversion expression is as follows:
(x_j1, y_j1) = (r_j·cos φ_j, r_j·sin φ_j)
(x_j0, y_j0) = (x_j1, y_j1)·R + t
In the above formulas, (r_j, φ_j) are the polar coordinates of a point in the acquired raw point cloud data, (x_j1, y_j1) is the representation of that point converted into the Cartesian coordinate system with the laser radar as the coordinate origin, and (x_j0, y_j0) is the representation of the corresponding point in the globally unified Cartesian coordinate system of the parking robot. R is the conversion rotation matrix and t is the conversion translation vector.
(1-2) Rasterizing the point cloud data. In this embodiment, the upper limit of the acquisition distance of the laser radar is set to 10 m; the coordinate-converted point cloud data are gridded and mapped to a binary image of size 250 × 250, where a pixel is set to 1 if there is at least one data point in the corresponding grid cell and to 0 otherwise.
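A numpy sketch of this rasterization step is given below; the assumption that the 10 m limit defines a square window centred on the robot is ours, as the patent does not fix the grid extent:

```python
import numpy as np

def points_to_binary_image(points, max_range=10.0, size=250):
    """Rasterize global-frame 2-D points into a size x size binary image:
    a pixel is 1 if at least one point falls in its grid cell, else 0."""
    img = np.zeros((size, size), dtype=np.uint8)
    # Keep only points inside the [-max_range, max_range] sensing window.
    pts = points[(np.abs(points) < max_range).all(axis=1)]
    # Map metric coordinates to integer grid indices.
    idx = ((pts + max_range) / (2.0 * max_range) * size).astype(int)
    idx = np.clip(idx, 0, size - 1)
    img[idx[:, 1], idx[:, 0]] = 1  # row = y index, column = x index
    return img
```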
In the present embodiment, step (2) constructs the data set required for deep learning training. After the collected laser radar point cloud data have been processed, the training data must be manually annotated to form the training data set. The annotation mode includes, but is not limited to, pixel-level annotation of the image and bounding-box annotation of the target vehicle. The annotation must at least include the position of the target vehicle, and can optionally be extended with the attitude information of the target vehicle.
In this embodiment, step (3) constructs the deep convolutional neural network and the loss function suitable for target vehicle detection. The construction of the network is directly related to the training data set prepared in step (2); since step (2) in this embodiment uses bounding-box annotation of the target vehicle, the deep convolutional neural network is structured after Faster R-CNN, the main structure follows Faster R-CNN, and the structure of the convolutional neural network is shown in fig. 3.
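The patent gives no layer configuration beyond referring to fig. 3, so as a hedged stand-in the off-the-shelf Faster R-CNN from torchvision (v0.13+ API) can illustrate the construction; repeating the single occupancy channel three times is our workaround for the stock RGB backbone, not part of the patent:

```python
import torch
import torchvision

# Two classes: 0 = background, 1 = target vehicle.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=2)

# A 250 x 250 single-channel binary occupancy image; the stock backbone
# expects three channels, so the channel is simply repeated.
binary_image = torch.randint(0, 2, (1, 250, 250)).float()
inputs = [binary_image.repeat(3, 1, 1)]

model.eval()
with torch.no_grad():
    detections = model(inputs)  # list of dicts: 'boxes', 'labels', 'scores'
print(detections[0]["boxes"].shape, detections[0]["scores"].shape)
```

With untrained weights the detections are of course meaningless; the snippet only illustrates the input/output interface described above, namely box positions plus confidence scores.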
In this embodiment, the loss function of the deep convolutional neural network in step (3) is constructed as a two-part weighted sum:
Loss = L_cls(p, u) + λ[u = 1]·L_loc(t_u, v)
(3-1) Construction of the object classification loss sub-function L_cls(p, u), where p is the prediction factor for the target class and u represents the actual factor of the corresponding class. It is usually constructed as a log-loss function, where p represents the predicted probability of the class; the closer p is to 1, the higher the confidence and the smaller the loss:
L_cls(p, u) = -log(p)
(3-2) Construction of the target detection distance loss sub-function L_loc(t_u, v), where λ is the weighting of the loss function and can generally be taken as λ = 1. The term [u = 1] takes the value 1 when the region of interest is the target vehicle and 0 when the region of interest is background, i.e. if the current region of interest is an environment-irrelevant object, its distance loss is not considered. Here t_u denotes the predicted position factor and v the true position factor in the training sample. The distance loss sub-function is usually expressed as a smoothed Manhattan (smooth L1) distance, with x = (t_u - v) denoting the deviation of the predicted value from the true value:
L_loc(t_u, v) = smooth_L1(x)
smooth_L1(x) = 0.5·x^2 if |x| < 1, and |x| - 0.5 otherwise
In this embodiment, augmenting the training data in step (4) mainly consists of randomly flipping the images horizontally, cropping them and uniformly scaling them to a fixed size, with the corresponding flipping, cropping and scaling applied to the annotation data; on this basis the resulting images are normalized per channel. The fixed size adopted in this embodiment is 250 × 250.
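A sketch of such an augmentation step is shown below; the (x0, y0, x1, y1) pixel box format, the top-left-only crop that keeps the box inside the image, and the omission of the per-channel normalization are our simplifying assumptions:

```python
import numpy as np

def augment(img, box, out_size=250, rng=np.random.default_rng()):
    """Randomly flip and crop an occupancy image, rescale it to
    out_size x out_size, and transform the box (x0, y0, x1, y1) alike."""
    h, w = img.shape
    x0, y0, x1, y1 = box
    if rng.random() < 0.5:                       # random horizontal flip
        img = img[:, ::-1]
        x0, x1 = w - x1, w - x0
    cx = int(rng.integers(0, max(int(x0), 1)))   # crop left of the box
    cy = int(rng.integers(0, max(int(y0), 1)))   # crop above the box
    img = img[cy:, cx:]
    x0, x1, y0, y1 = x0 - cx, x1 - cx, y0 - cy, y1 - cy
    sy, sx = out_size / img.shape[0], out_size / img.shape[1]
    yy = (np.arange(out_size) / sy).astype(int)  # nearest-neighbour rescale
    xx = (np.arange(out_size) / sx).astype(int)
    img = img[np.ix_(yy, xx)]
    return img, np.array([x0 * sx, y0 * sy, x1 * sx, y1 * sy])
```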
In this embodiment, when the training network model in step (4) is initialized, the object feature extraction network is pre-trained with a SoftMax loss function on ImageNet or another image classification data set, and the resulting parameter values are used as the initial parameters of the network.
In this embodiment, when the network is trained in step (4), the weighted loss function is used to compute a comprehensive loss value, the gradients are then computed by back-propagation, the network parameters are updated with an optimizer such as Adam, and the final result is obtained after a set number of iterations. The final parameters are set as the network model parameters of the target vehicle detector and used for detecting the target vehicle.
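A hedged sketch of this training loop follows, reusing the torchvision model from above; `loader`, the learning rate and the iteration budget are placeholder assumptions, and torchvision's built-in classification and box-regression losses stand in for the weighted loss constructed in step (3):

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
max_iters, it = 40000, 0            # placeholder iteration budget

model.train()
while it < max_iters:
    # `loader` is assumed to yield (images, targets) pairs, where each
    # target dict holds the annotated 'boxes' and 'labels' of an image.
    for images, targets in loader:
        loss_dict = model(images, targets)   # per-term detection losses
        loss = sum(loss_dict.values())       # comprehensive loss value
        optimizer.zero_grad()
        loss.backward()                      # back-propagate gradients
        optimizer.step()                     # gradient-based update
        it += 1
        if it >= max_iters:
            break

# The final parameters become the network model of the target vehicle detector.
torch.save(model.state_dict(), "target_vehicle_detector.pt")
```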
The invention provides a method for detecting a target vehicle with a single-line laser radar based on deep learning. The method offers excellent detection performance, high robustness and low implementation cost, and is easy to deploy on existing intelligent parking robots for target vehicle detection.
It will be readily apparent to those skilled in the art that various modifications to these embodiments may be made, and the generic principles described herein may be applied to other embodiments without the use of the inventive faculty. Therefore, the present invention is not limited to the embodiments described herein, and those skilled in the art should make improvements and modifications within the scope of the present invention based on the disclosure of the present invention.

Claims (1)

1. A target vehicle detection method based on deep learning, used to realize intelligent parking, characterized in that it comprises the following steps:
1) collecting tail feature point cloud data of the target vehicle through two single-line laser radars arranged at the tail of the parking robot, and preprocessing the data to obtain a binary image, specifically comprising the following steps:
11) converting the collected point cloud data from the polar coordinate system with the single-line laser radar as the coordinate origin into a globally unified Cartesian coordinate system, the conversion expressions being:
(x_j1, y_j1) = (r_j·cos φ_j, r_j·sin φ_j)
(x_j0, y_j0) = (x_j1, y_j1)·R + t
wherein (r_j, φ_j) are the polar coordinates of point j in the original point cloud data, (x_j1, y_j1) are the coordinates of point j converted into the Cartesian coordinate system with the laser radar as the coordinate origin, (x_j0, y_j0) are the coordinates in the globally unified Cartesian coordinate system, R is the conversion rotation matrix, and t is the conversion translation vector;
12) gridding the coordinate-converted point cloud data and converting it into a binary image;
2) annotating the binary image and obtaining the position of the tail of the target vehicle therein, thereby generating a training data set, the annotation content including pixel-level annotation of the image and bounding-box annotation of the target vehicle;
3) constructing a deep convolutional neural network suitable for target vehicle detection and its loss function, the deep convolutional neural network being a Faster R-CNN convolutional neural network that takes a binary image of a set size as input and outputs the position and confidence of the target vehicle in the input binary image, the loss function of the deep convolutional neural network being expressed as:
Loss = L_cls(p, u) + λ[u = 1]·L_loc(t_u, v)
L_cls(p, u) = -log(p)
L_loc(t_u, v) = smooth_L1(x), where smooth_L1(x) = 0.5·x^2 if |x| < 1, and |x| - 0.5 otherwise
x = t_u - v
wherein L_cls(p, u) is the target classification loss sub-function, L_loc(t_u, v) is the distance loss sub-function, p is the prediction factor for the target class, u is the actual factor of the corresponding class, and λ is the weighting of the loss function; u = 1 indicates that the region of interest is the target vehicle and u = 0 indicates that the region of interest is background; t_u is the predicted position factor, v is the true position factor in the training sample, and x is the deviation of the predicted value from the true value;
4) augmenting the training data set and inputting it into the deep convolutional neural network, training and updating the parameters of the convolutional neural network according to the difference between the output values and the training ground truth to obtain the optimal network parameters, and performing detection with the trained deep convolutional neural network, wherein augmenting the training data set specifically comprises:
randomly flipping the images horizontally, cropping them and uniformly scaling them to a fixed size, with the corresponding flipping, cropping and scaling applied to the annotation data;
and training the deep convolutional neural network specifically comprises:
iteratively updating the parameters of the deep convolutional neural network with the gradient-descent back-propagation method according to the loss function, and taking the network parameters obtained after iterating to the maximum set number of times as the optimal network parameters to complete the training.
CN201910206458.5A 2019-03-19 2019-03-19 A target vehicle detection method based on deep learning Active CN110069993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910206458.5A CN110069993B (en) 2019-03-19 2019-03-19 A target vehicle detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910206458.5A CN110069993B (en) 2019-03-19 2019-03-19 A target vehicle detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN110069993A CN110069993A (en) 2019-07-30
CN110069993B true CN110069993B (en) 2021-10-08

Family

ID=67366360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910206458.5A Active CN110069993B (en) 2019-03-19 2019-03-19 A target vehicle detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN110069993B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560834B (en) * 2019-09-26 2024-05-10 武汉金山办公软件有限公司 Coordinate prediction model generation method and device and pattern recognition method and device
CN111178213B (en) * 2019-12-23 2022-11-18 大连理工大学 Aerial photography vehicle detection method based on deep learning
CN111177297B (en) * 2019-12-31 2022-09-02 信阳师范学院 Dynamic target speed calculation optimization method based on video and GIS
CN111523403B (en) * 2020-04-03 2023-10-20 咪咕文化科技有限公司 Method and device for acquiring target area in picture and computer readable storage medium
CN111539347B (en) * 2020-04-27 2023-08-08 北京百度网讯科技有限公司 Method and device for detecting target
CN111653103A (en) * 2020-05-07 2020-09-11 浙江大华技术股份有限公司 Target object identification method and device
CN111783844B (en) * 2020-06-10 2024-05-28 广东正扬传感科技股份有限公司 Deep learning-based target detection model training method, device and storage medium
US11834066B2 (en) * 2020-12-29 2023-12-05 GM Global Technology Operations LLC Vehicle control using neural network controller in combination with model-based controller
CN113313201B (en) * 2021-06-21 2024-10-15 南京挥戈智能科技有限公司 Multi-target detection and ranging method based on Swin transducer and ZED camera
CN114399742A (en) * 2021-12-03 2022-04-26 南京佑驾科技有限公司 Neural network-based sheltered vehicle tail frame regression method and device and storage medium
CN114219073A (en) * 2021-12-08 2022-03-22 浙江大华技术股份有限公司 Method, device, storage medium and electronic device for determining attribute information
CN114692720B (en) * 2022-02-25 2023-05-23 广州文远知行科技有限公司 Image classification method, device, equipment and storage medium based on aerial view
CN117557912A (en) * 2023-12-18 2024-02-13 大连海事大学 Iceberg scene identification method based on improved YoloV7 model

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050240323A1 (en) * 2004-03-31 2005-10-27 Honda Motor Co., Ltd. Parking lot attendant robot system
CN106355194A (en) * 2016-08-22 2017-01-25 广东华中科技大学工业技术研究院 A surface target processing method for unmanned boats based on laser imaging radar
CN106650809A (en) * 2016-12-20 2017-05-10 福州大学 Method and system for classifying vehicle-borne laser-point cloud targets
CN107239794A (en) * 2017-05-18 2017-10-10 深圳市速腾聚创科技有限公司 Point cloud data segmentation method and terminal
CN108009509A (en) * 2017-12-12 2018-05-08 河南工业大学 Vehicle target detection method
CN108830188A (en) * 2018-05-30 2018-11-16 西安理工大学 Vehicle checking method based on deep learning
US20190004508A1 (en) * 2017-07-03 2019-01-03 Volvo Car Corporation Method and system for automatic parking of a vehicle
CN109270543A (en) * 2018-09-20 2019-01-25 同济大学 System and method for detecting the position information of vehicles surrounding a target vehicle
CN109270544A (en) * 2018-09-20 2019-01-25 同济大学 Mobile robot self-localization system based on shaft identification
CN109324616A (en) * 2018-09-20 2019-02-12 同济大学 Alignment method of unmanned parking and handling robot based on on-board sensors
CN109386155A (en) * 2018-09-20 2019-02-26 同济大学 Alignment method of an unmanned parking transfer robot for automated parking lots

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9710714B2 (en) * 2015-08-03 2017-07-18 Nokia Technologies Oy Fusion of RGB images and LiDAR data for lane classification
CN109118500B (en) * 2018-07-16 2022-05-10 重庆大学产业技术研究院 Image-based three-dimensional laser scanning point cloud data segmentation method
CN109063753B (en) * 2018-07-18 2021-09-14 北方民族大学 Three-dimensional point cloud model classification method based on convolutional neural network
CN109344786A (en) * 2018-10-11 2019-02-15 深圳步智造科技有限公司 Target identification method, device and computer readable storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050240323A1 (en) * 2004-03-31 2005-10-27 Honda Motor Co., Ltd. Parking lot attendant robot system
CN106355194A (en) * 2016-08-22 2017-01-25 广东华中科技大学工业技术研究院 A surface target processing method for unmanned boats based on laser imaging radar
CN106650809A (en) * 2016-12-20 2017-05-10 福州大学 Method and system for classifying vehicle-borne laser-point cloud targets
CN107239794A (en) * 2017-05-18 2017-10-10 深圳市速腾聚创科技有限公司 Point cloud data segmentation method and terminal
US20190004508A1 (en) * 2017-07-03 2019-01-03 Volvo Car Corporation Method and system for automatic parking of a vehicle
CN108009509A (en) * 2017-12-12 2018-05-08 河南工业大学 Vehicle target detection method
CN108830188A (en) * 2018-05-30 2018-11-16 西安理工大学 Vehicle checking method based on deep learning
CN109270543A (en) * 2018-09-20 2019-01-25 同济大学 System and method for detecting the position information of vehicles surrounding a target vehicle
CN109270544A (en) * 2018-09-20 2019-01-25 同济大学 Mobile robot self-localization system based on shaft identification
CN109324616A (en) * 2018-09-20 2019-02-12 同济大学 Alignment method of unmanned parking and handling robot based on on-board sensors
CN109386155A (en) * 2018-09-20 2019-02-26 同济大学 Alignment method of an unmanned parking transfer robot for automated parking lots

Also Published As

Publication number Publication date
CN110069993A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN110069993B (en) A target vehicle detection method based on deep learning
CN110097047B (en) A vehicle detection method based on deep learning using single-line lidar
CN110287849B (en) Lightweight depth network image target detection method suitable for raspberry pi
CN113052109B (en) A 3D object detection system and a 3D object detection method thereof
CN111179314B (en) A Target Tracking Method Based on Residual Dense Siamese Network
CN111507982B (en) A point cloud semantic segmentation method based on deep learning
CN112183203B (en) Real-time traffic sign detection method based on multi-scale pixel feature fusion
CN110781967A (en) A real-time text detection method based on differentiable binarization
CN111046781B (en) A Robust 3D Object Detection Method Based on Ternary Attention Mechanism
CN113888547A (en) Unsupervised Domain Adaptive Remote Sensing Road Semantic Segmentation Method Based on GAN Network
CN112052860A (en) Three-dimensional target detection method and system
CN106096561A (en) Infrared pedestrian detection method based on image block degree of depth learning characteristic
CN113536920A (en) A semi-supervised 3D point cloud object detection method
CN109242019B (en) Rapid detection and tracking method for optical small target on water surface
CN111414954B (en) A method and system for retrieving rock images
CN114266977A (en) Multi-AUV underwater target identification method based on super-resolution selectable network
CN110097599B (en) A Workpiece Pose Estimation Method Based on Part Model Expression
CN114120115A (en) A point cloud target detection method that fuses point features and grid features
CN112785636A (en) Multi-scale enhanced monocular depth estimation method
CN116862964A (en) Semantic feature guided scene depth estimation method for fisheye camera
Zhang et al. Depth monocular estimation with attention-based encoder-decoder network from single image
TW202225730A (en) High-efficiency LiDAR object detection method based on deep learning through direct processing of 3D point data to obtain a concise and fast 3D feature to solve the shortcomings of complexity and time-consuming of the current voxel network model
CN114706087A (en) A method and system for underwater terrain matching and positioning of three-dimensional imaging sonar point cloud
CN111798461B (en) Pixel-level remote sensing image cloud area detection method for guiding deep learning by coarse-grained label
CN114022516A (en) Bimodal visual tracking method based on high rank characteristics and position attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant