
CN117437808A - A method for over-the-horizon sensing and early warning in blind spots in underground parking lots - Google Patents


Info

Publication number
CN117437808A
CN117437808A (application CN202311151480.7A)
Authority
CN
China
Prior art keywords
vehicle
image
perspective
data
early warning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311151480.7A
Other languages
Chinese (zh)
Other versions
CN117437808B (en)
Inventor
赵聪
宋安迪
卢亚利
郑丹妮
周紫蕾
董家瑞
吴鹏展
杜豫川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority to CN202311151480.7A
Publication of CN117437808A
Application granted
Publication of CN117437808B
Legal status: Active

Classifications

    • G06F30/20 Design optimisation, verification or simulation
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06T5/00 Image enhancement or restoration
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/764 Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G08B21/24 Reminder alarms, e.g. anti-loss alarms
    • G08B31/00 Predictive alarm systems characterised by extrapolation or other computation using updated historic data
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0129 Traffic data processing for creating historical data or processing based on historical data
    • G08G1/0137 Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Emergency Management (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a method for beyond-visual-range sensing and early warning in the blind spots of an underground parking lot. The method collects video images covering the whole parking lot; derives vehicle pose information with the surveillance camera as the coordinate origin; builds a virtual-scene simulation platform of the whole parking lot; deploys virtual-scene display devices that present, from the driver's first-person view, a virtual picture with the occluding wall columns removed, thereby achieving blind-spot see-through; predicts the change of the vehicle pose over a future time window, measures the relation between the current viewing angle and the predicted pose data, and designs the optimal viewing-angle change; and evaluates vehicle collision risk and issues early-warning prompts. The invention detects and warns of vehicles and other targets in blind spots with low latency and low cost, effectively addressing the blind-spot safety problem; relying on environment perception and a virtual space, it realizes beyond-visual-range sensing and early warning for underground parking-lot blind spots and provides an important tool and means for curbing the frequent accidents at blind-spot conflict points.

Description

A method for over-the-horizon sensing and early warning in the blind spots of underground parking lots

Technical Field

The invention relates to the field of transportation technology, and in particular to a method for over-the-horizon sensing and early warning in the blind spots of underground parking lots.

Background

To preserve the space required for urban functions and transportation, underground parking lots keep emerging in cities. Their interiors, however, are densely packed with wall columns, contain many sharp bends and steep ramps, and are dimly lit, creating numerous potential operational conflicts and safety risks and a high rate of blind-spot accidents. To widen the driver's field of view and reduce the impact of blind spots, the traditional countermeasure is to mount wide-angle road mirrors on key wall columns. Such mirrors are easily affected by lighting, viewing range, and mounting angle, so their reliability is low; they distort the image and are not intuitive, and drivers, usually focused on finding a parking space, often fail to notice them; and the extra field of view they provide is limited, leaving the driver little reaction time. The actual warning effect of road wide-angle mirrors is therefore often heavily discounted, and their contribution to accident avoidance is extremely limited.

At present, digital see-through A-pillar systems have already been deployed by related companies: a screen on the A-pillar displays a transformed image of the blind zone, producing a see-through effect that eliminates the A-pillar occlusion and markedly reduces accidents caused by it. If this idea of eliminating blind spots were transplanted directly to the parking-lot blind-spot environment, it would fail: the cameras are fixed while the driver's position changes constantly, so a picture rendered in real time from the first-person view of the served driver cannot be displayed, and the problem that wide-angle-mirror images are not intuitive would remain. There is therefore an urgent need to run environment perception and a virtual platform synchronously for visualization: the virtual-platform picture with the wall-column occlusion removed is transmitted to LED screens mounted on the columns, together with warning marks for dangerous vehicles, finally realizing beyond-visual-range sensing and early warning for parking-lot blind spots and offering a new approach and new means for the stubborn problem of frequent accidents at blind-spot conflict points in underground parking lots.

Summary of the Invention

The purpose of the present invention is to propose a method with beyond-visual-range sensing and early warning functions inside a parking lot, so as to solve the above technical problems.

To achieve the above purpose, the present invention proposes a method for over-the-horizon sensing and early warning in the blind spots of underground parking lots, comprising the following steps:

S1: collect video images covering the whole parking lot;

S2: perform depth estimation on the image data with the DORN algorithm and derive vehicle pose information with the surveillance camera as the coordinate origin;

S3: build a virtual-scene simulation platform covering the whole parking lot;

S4: deploy virtual-scene display devices in the underground parking lot; based on the vehicle pose information and the virtual-scene simulation platform, the displays present a virtual picture from the driver's first-person view with the occluding wall columns removed, achieving blind-spot see-through;

S5: based on the vehicle pose data, predict the vehicle's pose changes over a future time window, measure the relation between the current viewing angle and the predicted pose data, and design the optimal viewing-angle change;

S51: perceive user needs with a trajectory-analysis prediction algorithm and extract the pose data of the user's vehicle, P(t) = (x(t), y(t), z(t), φ(t), θ(t), ψ(t)), where x(t), y(t), z(t) are the position coordinates of the vehicle in three-dimensional space, φ(t), θ(t), ψ(t) are the vehicle's attitude angles (rotations about the x, y, z axes), and t is time; from the pose data over a past window, {P(t1), P(t2), …, P(tn)}, predict the pose data P'(tn+1) for the coming period through a trajectory-analysis prediction function f, P'(tn+1) = f(P(t1), …, P(tn));

S52: measure the relation between the current viewing angle and the predicted pose data through an objective function G(V(t), P'(tn+1)), where the current viewing angle V(t) = (vx(t), vy(t), vz(t)) and vx(t), vy(t), vz(t) are its components along the x, y, z axes; the larger the objective value, the better the current viewing angle satisfies the driver's needs;

S53: set the optimal viewing-angle change ΔV(t) that maximizes the objective function G(V(t), P'(tn+1)), i.e. ΔV(t) = argmax over ΔV of G(V(t) + ΔV, P'(tn+1)), and update the position and attitude of the camera in the virtual platform by V(tn+1) = V(t) + ΔV(t), thereby switching the display's viewing angle according to user needs;

S6: based on the time-series information of the vehicle position data, evaluate vehicle collision risk, generate annotation data, and issue inter-vehicle collision warnings.

Further, in S1 the video images are collected by the surveillance cameras deployed in the parking lot, and S1 also includes frame extraction and preprocessing of the collected video images.

Further, the preprocessing includes enhancing the contrast, brightness, and sharpness of the video images.

Further, S2 comprises the following steps:

S21: estimate the depth of the image sequence with the DORN algorithm to obtain depth images; the algorithm estimates a single image with a deep ordinal regression network and uses inter-frame information to obtain depth information for the continuous image sequence;

S22: feed the depth images and the original RGB images into the D4LCN model and extract image features with depth-guided convolution; the model uses a multi-branch architecture that performs multi-scale, multi-channel convolution on the input depth and RGB images to extract image features, thereby localizing and recognizing vehicle position information;

S23: on the basis of the extracted image features, identify and locate vehicles with D4LCN's built-in non-maximum suppression detection module; the algorithm classifies and regresses the targets in the image to obtain their position, size, and orientation; meanwhile, the DeepSORT tracking algorithm tracks the vehicles to ensure they are accurately identified and located across consecutive frames.

Further, S3 comprises the following steps:

S31: obtain the basic spatial information of the underground parking lot, including the number and positions of wall columns, the number and positions of parking spaces, and the positions and angles of the cameras; render the scene with the Vue framework and the three.js engine and provide a user interface that displays the basic parking-lot information;

S32: build a program on the Flask framework to extract the camera-perceived vehicle pose data, retrieve the pose records sharing the same ID, sort them by time into trajectories, and organize them through perspective transformation into the trajectory data set of the vehicles in the virtual panoramic view, P = {P1, P2, …, Pj, …, Pm}, where Pj = {(xj(t), yj(t), zj(t), φj(t), θj(t), ψj(t))} is the trajectory of vehicle j, xj, yj, zj are the three-dimensional coordinates of vehicle j at time t, and φj, θj, ψj are the attitude angles of vehicle j at time t;

S33: the front end and the back end exchange data through a RESTful API, enabling panoramic operation of the virtual platform.

Further, S4 comprises the following steps:

S41: according to the spatial layout of the parking lot and the direction of vehicle travel, mount virtual-scene display devices on the wall columns, with their height, size, orientation, and brightness all chosen to facilitate the driver's observation;

S42: track the driver's viewpoint on the virtual-space platform and present on the display a virtual picture from the driver's first-person view with the occluding wall columns removed, achieving blind-spot see-through.

Further, S6 comprises the following steps:

S61: from the vehicle pose data, extract the time series of the corresponding motion trajectory and speed, and obtain the vehicle's speed and acceleration from vf = vi + at and s = (vi + vf)t/2, where vi and vf are the vehicle's initial and final speeds, a is its acceleration, t is the time interval between adjacent frames, and s is the distance travelled, obtainable from the vehicle's positions in adjacent frames;

S62: evaluate the collision risk by computing the time to collision of two vehicles travelling at their current speeds, TTC = L/(v1 − v2), where L is the distance between vehicle 1 and vehicle 2 and v1 and v2 are their respective speeds;

S63: set an inter-vehicle distance threshold according to traffic conditions in the parking lot; when L falls below the threshold, label different risk levels according to the TTC value and complete the collision warning.

Compared with the prior art, the advantages of the present invention are:

1. The invention detects and warns of vehicles and other targets in blind spots, with low latency and low cost, effectively solving the blind-spot safety problem; relying on environment perception and virtual space, it realizes beyond-visual-range sensing and early warning for underground parking-lot blind spots, providing an important tool and means for curbing the frequent accidents at blind-spot conflict points.

2. The invention locates indoor vehicles accurately, and the display devices provide a precise viewing angle based on the vehicle information, effectively guaranteeing the safety of vehicles driving in the underground garage.

Brief Description of the Drawings

Figure 1 is a flow chart of the over-the-horizon sensing and early warning method for underground parking-lot blind spots in an embodiment of the present invention.

Figure 2 is a schematic diagram of image preprocessing in the method in an embodiment of the present invention.

Figure 3 is a schematic diagram of target detection in the method in an embodiment of the present invention.

Figure 4 is a schematic diagram of a practical application of the method in an embodiment of the present invention.

Detailed Description of the Embodiments

To make the purpose, technical solution, and advantages of the present invention clearer, the technical solution of the present invention is further described below.

As shown in Figure 1, the present invention is a method for over-the-horizon sensing and early warning in the blind spots of underground parking lots, comprising the following steps.

S1: read in the surveillance video data from the cameras deployed across the whole parking lot and preprocess the video images.

S11: the parking-lot surveillance video-stream data are transmitted in real time to the environment-perception module through an interface;

S12: the video is split into images frame by frame; frame-by-frame splitting is performed with a video-processing library (OpenCV) to obtain the video image sequence;
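A minimal sketch of this step (the stream URL and function name are illustrative assumptions, not part of the patent):

```python
# Minimal sketch: split a surveillance stream into frames with OpenCV.
import cv2

def read_frames(stream_url: str):
    """Yield frames from a video stream or file, frame by frame."""
    cap = cv2.VideoCapture(stream_url)  # stream URL is an assumed example
    try:
        while True:
            ok, frame = cap.read()
            if not ok:          # stream ended or a frame was dropped
                break
            yield frame         # BGR image as a NumPy array
    finally:
        cap.release()

# Usage: for frame in read_frames("rtsp://garage-cam-01/stream"): ...
```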

S13: image-enhancement preprocessing. As shown in Figure 2, the video frames are processed with the image-enhancement tools in the Python Pillow library; the enhancement algorithms cover contrast, brightness, and sharpness. Image-enhancement preprocessing improves image quality and reduces the loss of feature detail during depth extraction.

Further, the specific algorithms of step S13 are:

Contrast enhancement:

Oi,j = (Ii,j − μ)/σ × k + μ

where Oi,j is the enhanced pixel value, Ii,j is the original pixel value, μ is the mean gray value of the image, σ is the standard deviation of the image, and k is the contrast gain.

Brightness enhancement:

Oi,j = Ii,j × k + b

where b is the brightness offset and the remaining variables are as above.

Sharpness enhancement:

Oi,j = Ii,j + Ii,j × k − Li,j × k

where Li,j is the pixel value after Laplacian filtering, k is the sharpness gain, and the remaining variables are as above.
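These three enhancements map onto Pillow's ImageEnhance module; a minimal sketch follows. The factor values are illustrative assumptions, and Pillow's brightness enhancer is multiplicative rather than additive like the offset b above:

```python
# Minimal sketch: contrast/brightness/sharpness preprocessing with Pillow.
from PIL import Image, ImageEnhance

def enhance_frame(img: Image.Image,
                  contrast: float = 1.3,     # assumed illustrative factors
                  brightness: float = 1.2,
                  sharpness: float = 1.5) -> Image.Image:
    img = ImageEnhance.Contrast(img).enhance(contrast)      # stretch around mean gray
    img = ImageEnhance.Brightness(img).enhance(brightness)  # multiplicative gain
    img = ImageEnhance.Sharpness(img).enhance(sharpness)    # unsharp-mask style sharpening
    return img

# Usage: enhanced = enhance_frame(Image.open("frame_000123.png"))
```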

S2: perform depth estimation on the image data with the DORN algorithm, perform three-dimensional target detection on the resulting depth images and the original RGB images with the D4LCN model, and output vehicle pose information with the surveillance camera as the coordinate origin.

S21: use the DORN algorithm to perform depth estimation on the image sequence and obtain depth images. Suppose the input RGB image is I ∈ R^(h×w×3), where h and w are the height and width of the image and 3 is the number of channels. The output of DORN is a depth image D ∈ R^(h×w), in which the value of each pixel represents the depth at that point. The depth is computed as

D(x,y) = Σk dk × pk(x,y)

where pk(x,y) is the probability that pixel (x,y) belongs to the k-th depth interval and dk is the representative depth of that interval. The resulting depth map is normalized and converted into a gray-scale depth image.

The algorithm estimates a single image with a deep ordinal regression network and uses inter-frame information to obtain depth information for the continuous image sequence, improving the accuracy and robustness of the depth estimation.
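As an illustration of the decoding step, a minimal sketch under the assumption of a (K, h, w) probability tensor and a uniform bin layout (DORN itself learns its discretization; the bin range here is an assumption):

```python
# Minimal sketch: turn per-bin probabilities into a depth map and a
# grayscale depth image.
import numpy as np

def decode_depth(p: np.ndarray, d_min: float = 0.5, d_max: float = 30.0) -> np.ndarray:
    """p: (K, h, w) probabilities that each pixel lies in depth bin k."""
    K = p.shape[0]
    centers = np.linspace(d_min, d_max, K)          # representative depth d_k per bin
    depth = np.tensordot(centers, p, axes=(0, 0))   # D(x,y) = sum_k d_k * p_k(x,y)
    return depth

def to_grayscale(depth: np.ndarray) -> np.ndarray:
    """Normalize a depth map to 0-255 for a grayscale depth image."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return (d * 255).astype(np.uint8)
```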

S22: feed the depth image and the original RGB image into the D4LCN model and extract image features with depth-guided convolution. The depth information in the depth map serves as the guidance of the convolution module:

I'(i,j) = Σ over gi, gj ∈ [−k/2, k/2] of I(i+gi, j+gj) × w(gi, gj) × D(i+gi, j+gj)

where I' is the output image feature information, k is the convolution kernel size, gi and gj are the pixel translation offsets within the kernel, I is the input image matrix, and D is the depth image matrix.

The model uses a multi-branch architecture that performs multi-scale, multi-channel convolution on the input depth and RGB images to extract image features, thereby localizing and recognizing vehicle position information.
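A naive single-channel sketch of this depth modulation, only to illustrate the formula above (D4LCN's actual operator is learned, multi-branch, and multi-channel):

```python
# Minimal sketch: depth-guided local convolution in NumPy; borders are
# cropped for brevity.
import numpy as np

def depth_guided_conv(I: np.ndarray, D: np.ndarray, w: np.ndarray) -> np.ndarray:
    """I, D: (H, W) image and depth map; w: (k, k) kernel. The depth map
    modulates each kernel tap, so scene geometry steers the response."""
    k = w.shape[0]
    r = k // 2
    H, W = I.shape
    out = np.zeros((H - 2 * r, W - 2 * r))
    for i in range(r, H - r):
        for j in range(r, W - r):
            patch_I = I[i - r:i + r + 1, j - r:j + r + 1]
            patch_D = D[i - r:i + r + 1, j - r:j + r + 1]
            out[i - r, j - r] = np.sum(patch_I * w * patch_D)
    return out
```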

S23: on the basis of the extracted image features, identify and locate vehicles with D4LCN's built-in non-maximum suppression (NMS) detection module. The resulting vehicle position information is [xf,i yf,i zf,i]3D, the three-dimensional coordinates of the vehicle in the viewing coordinate system of the surveillance camera, where f is the index of the frame in the preprocessed image sequence and i indexes the target signals contained in a single frame; likewise, the vehicle bounding-box information is [hf,i wf,i lf,i]3D, where h, w, and l denote the bounding-box height (vertical direction), length (parallel to the direction of target motion), and width. The algorithm classifies and regresses the targets in the image to obtain their position, size, and orientation. Meanwhile, the DeepSORT tracking algorithm tracks the vehicles to ensure they are accurately identified and located across consecutive frames.
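For illustration, a minimal greedy IoU-based NMS sketch of the kind such detection modules apply (D4LCN's module operates on its own 3D outputs; 2D boxes are used here for brevity):

```python
# Minimal sketch: greedy non-maximum suppression over (x1, y1, x2, y2) boxes.
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.5) -> list:
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter + 1e-8)
        order = order[1:][iou <= iou_thr]   # drop heavily overlapping boxes
    return keep
```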

S3: as shown in Figure 3, obtain the basic information of the parking-lot spatial layout, filter and process the pose data matched to vehicles from the camera perception data set, build the simulation platform, and realize panoramic operation of the virtual platform.

S31: obtain the basic spatial information of the underground parking lot, including but not limited to the number and positions of wall columns, the number and positions of parking spaces, and the positions and angles of the cameras; render the scene with the Vue framework and the three.js engine and provide a user interface that displays the basic parking-lot information;

S32: build a program on the Flask framework to extract the camera-perceived vehicle pose data, retrieve the pose records sharing the same ID, sort them by time into trajectories, and organize them through perspective transformation into the trajectory data set of the vehicles in the virtual panoramic view, P = {P1, P2, …, Pj, …, Pm}, where Pj = {(xj(t), yj(t), zj(t), φj(t), θj(t), ψj(t))} is the trajectory of vehicle j, xj, yj, zj are the three-dimensional coordinates of vehicle j at time t, and φj, θj, ψj are the attitude angles of vehicle j at time t;

S33: the front end and the back end exchange data through a RESTful API, enabling panoramic operation of the virtual platform.
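A minimal sketch of this data exchange (the route name, field names, and in-memory store are illustrative assumptions): a Flask endpoint groups pose records by vehicle ID, sorts them by time, and serves them as trajectories to the Vue/three.js front end.

```python
# Minimal sketch: serve per-vehicle trajectories over a RESTful endpoint.
from collections import defaultdict
from flask import Flask, jsonify

app = Flask(__name__)

# pose_records: filled by the perception pipeline, one dict per detection
pose_records = []   # e.g. {"id": 7, "t": 12.4, "x": 1.0, "y": 2.0, "z": 0.0,
                    #       "roll": 0.0, "pitch": 0.0, "yaw": 1.57}

@app.route("/api/trajectories")
def trajectories():
    by_id = defaultdict(list)
    for rec in pose_records:
        by_id[rec["id"]].append(rec)
    # Sort each vehicle's records by time to form its trajectory P_j
    return jsonify({vid: sorted(recs, key=lambda r: r["t"])
                    for vid, recs in by_id.items()})

if __name__ == "__main__":
    app.run(port=5000)
```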

S4: deploy the virtual-scene display devices and track the in-vehicle user's viewpoint in the virtual platform; by removing the wall-column occlusion models from the virtual platform, give the driver a wide field of view and eliminate blind-spot hazards.

S41: according to the spatial layout of the parking lot and the direction of vehicle travel, mount virtual-scene display devices on the wall columns, with their height, size, orientation, and brightness all chosen to facilitate the driver's observation;

S42: track the driver's viewpoint on the virtual-space platform and present on the display a virtual picture from the driver's first-person view with the occluding wall columns removed, achieving blind-spot see-through.

S5: based on the vehicle pose data, predict the vehicle's pose changes over a future time window, measure the relation between the current viewing angle and the predicted pose data, and design the optimal viewing-angle change.

S51: perceive user needs with a trajectory-analysis prediction algorithm and extract the pose data of the user's vehicle, P(t) = (x(t), y(t), z(t), φ(t), θ(t), ψ(t)), where x(t), y(t), z(t) are the position coordinates of the vehicle in three-dimensional space, φ(t), θ(t), ψ(t) are the vehicle's attitude angles (rotations about the x, y, z axes), and t is time; from the pose data over a past window, {P(t1), P(t2), …, P(tn)}, predict the pose data P'(tn+1) for the coming period through a trajectory-analysis prediction function f, P'(tn+1) = f(P(t1), …, P(tn));

S52: measure the relation between the current viewing angle and the predicted pose data through an objective function G(V(t), P'(tn+1)), where the current viewing angle V(t) = (vx(t), vy(t), vz(t)) and vx(t), vy(t), vz(t) are its components along the x, y, z axes; the larger the objective value, the better the current viewing angle satisfies the driver's needs;

S53: set the optimal viewing-angle change ΔV(t) that maximizes the objective function G(V(t), P'(tn+1)), i.e. ΔV(t) = argmax over ΔV of G(V(t) + ΔV, P'(tn+1)), and update the position and attitude of the camera in the virtual platform by V(tn+1) = V(t) + ΔV(t), thereby switching the display's viewing angle according to user needs.
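A minimal sketch of S51-S53, with a constant-velocity extrapolation standing in for the unspecified prediction function f and a direction-alignment score standing in for the unspecified objective G (both are assumptions, as is the candidate grid):

```python
# Minimal sketch: predict the next pose and grid-search the view-angle
# change that maximizes an assumed objective G.
import numpy as np

def predict_pose(history: np.ndarray) -> np.ndarray:
    """history: (n, 6) rows of (x, y, z, roll, pitch, yaw), n >= 2.
    Constant-velocity extrapolation as a stand-in for f."""
    return history[-1] + (history[-1] - history[-2])

def G(view: np.ndarray, pose: np.ndarray) -> float:
    """Assumed objective: alignment of the viewing direction with the
    direction from the camera (origin) toward the predicted position."""
    to_vehicle = pose[:3] / (np.linalg.norm(pose[:3]) + 1e-8)
    v = view / (np.linalg.norm(view) + 1e-8)
    return float(np.dot(v, to_vehicle))

def best_view_change(view: np.ndarray, pose: np.ndarray,
                     step: float = 0.05) -> np.ndarray:
    """Grid-search dV maximizing G(V + dV, P'), per S53."""
    candidates = [np.array([dx, dy, dz])
                  for dx in (-step, 0, step)
                  for dy in (-step, 0, step)
                  for dz in (-step, 0, step)]
    return max(candidates, key=lambda dv: G(view + dv, pose))

# Update: V_next = view + best_view_change(view, predict_pose(history))
```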

S6: based on the time-series information of the vehicle position data, evaluate vehicle collision risk, generate annotation data, and issue inter-vehicle collision warnings.

S61: from the vehicle pose data, extract the time series of the corresponding motion trajectory and speed, and obtain the vehicle's speed and acceleration from vf = vi + at and s = (vi + vf)t/2, where vi and vf are the vehicle's initial and final speeds, a is its acceleration, t is the time interval between adjacent frames, and s is the distance travelled, obtainable from the vehicle's positions in adjacent frames;

S62: evaluate the collision risk by computing the time to collision of two vehicles travelling at their current speeds, TTC = L/(v1 − v2), where L is the distance between vehicle 1 and vehicle 2 and v1 and v2 are their respective speeds;

S63: set an inter-vehicle distance threshold according to traffic conditions in the parking lot; when L falls below the threshold, label different risk levels according to the TTC value and complete the collision warning.

This example takes an underground parking lot in Pudong as the case-study scenario. The lot contains ramps, sharp bends, and dense wall columns, with many visual blind spots and many safety hazards. Multiple cameras are deployed to achieve full-coverage monitoring and environment-perception detection; the acquired vehicle pose data are processed to render the motion state of the vehicles in the virtual space, and user needs are further perceived to realize personalized viewpoint switching and collision-warning annotation.

This embodiment applies the above technical solution, as shown in Figure 4. Its main procedure is as follows.

Step 1: read in the surveillance video data from the cameras deployed across the whole parking lot and preprocess the video images.

Step 1.1: the parking-lot surveillance video-stream data are transmitted in real time to the information-processing module through an interface;

Step 1.2: the video is split into images frame by frame; frame-by-frame splitting is performed with a video-processing library (OpenCV) to obtain the video image sequence;

Step 1.3: image-enhancement preprocessing. The video frames are processed with the image-enhancement tools in the Python Pillow library; the enhancement algorithms cover contrast, brightness, and sharpness. Image-enhancement preprocessing improves image quality and reduces the loss of feature detail during depth extraction.

Further, the specific algorithms of step 1.3 are:

Contrast enhancement:

Oi,j = (Ii,j − μ)/σ × k + μ

where Oi,j is the enhanced pixel value, Ii,j is the original pixel value, μ is the mean gray value of the image, σ is the standard deviation of the image, and k is the contrast gain.

Brightness enhancement:

Oi,j = Ii,j × k + b

where b is the brightness offset and the remaining variables are as above.

Sharpness enhancement:

Oi,j = Ii,j + Ii,j × k − Li,j × k

where Li,j is the pixel value after Laplacian filtering, k is the sharpness gain, and the remaining variables are as above.

Step 2: perform depth estimation on the image data with the DORN algorithm, perform three-dimensional target detection on the resulting depth images and the original RGB images with the D4LCN model, and output vehicle pose information with the surveillance camera as the coordinate origin.

Step 2.1: use the DORN algorithm to perform depth estimation on the image sequence and obtain depth images. Suppose the input RGB image is I ∈ R^(h×w×3), where h and w are the height and width of the image and 3 is the number of channels. The output of DORN is a depth image D ∈ R^(h×w), in which the value of each pixel represents the depth at that point. The depth is computed as

D(x,y) = Σk dk × pk(x,y)

where pk(x,y) is the probability that pixel (x,y) belongs to the k-th depth interval and dk is the representative depth of that interval. The resulting depth map is normalized and converted into a gray-scale depth image.

The algorithm estimates a single image with a deep ordinal regression network and uses inter-frame information to obtain depth information for the continuous image sequence, improving the accuracy and robustness of the depth estimation.

Step 2.2: feed the depth image and the original RGB image into the D4LCN model and extract image features with depth-guided convolution. The depth information in the depth map serves as the guidance of the convolution module:

I'(i,j) = Σ over gi, gj ∈ [−k/2, k/2] of I(i+gi, j+gj) × w(gi, gj) × D(i+gi, j+gj)

where I' is the output image feature information, k is the convolution kernel size, gi and gj are the pixel translation offsets within the kernel, I is the input image matrix, and D is the depth image matrix.

The model uses a multi-branch architecture that performs multi-scale, multi-channel convolution on the input depth and RGB images to extract image features, thereby localizing and recognizing vehicle position information.

Step 2.3: on the basis of the extracted image features, identify and locate vehicles with D4LCN's built-in non-maximum suppression (NMS) detection module. The resulting vehicle position information is [xf,i yf,i zf,i]3D, the three-dimensional coordinates of the vehicle in the viewing coordinate system of the surveillance camera, where f is the index of the frame in the preprocessed image sequence and i indexes the target signals contained in a single frame; likewise, the vehicle bounding-box information is [hf,i wf,i lf,i]3D, where h, w, and l denote the bounding-box height (vertical direction), length (parallel to the direction of target motion), and width. The algorithm classifies and regresses the targets in the image to obtain their position, size, and orientation. Meanwhile, the DeepSORT tracking algorithm tracks the vehicles to ensure they are accurately identified and located across consecutive frames.

Step 3: obtain the basic information of the parking-lot spatial layout, filter and process the pose data matched to vehicles from the camera perception data set, combine the two to build the simulation platform, and realize panoramic operation of the virtual platform.

Step 3.1: obtain the basic spatial information of the underground parking lot, including but not limited to the number and positions of wall columns, the number and positions of parking spaces, and the positions and angles of the cameras; render the scene with the Vue framework and the three.js engine and provide a user interface that displays the basic parking-lot information;

Step 3.2: build a program on the Flask framework to extract the camera-perceived vehicle pose data, retrieve the pose records sharing the same ID, sort them by time into trajectories, and organize them through perspective transformation into the trajectory data set of the vehicles in the virtual panoramic view, P = {P1, P2, …, Pj, …, Pm}, where Pj = {(xj(t), yj(t), zj(t), φj(t), θj(t), ψj(t))} is the trajectory of vehicle j, xj, yj, zj are the three-dimensional coordinates of vehicle j at time t, and φj, θj, ψj are the attitude angles of vehicle j at time t;

Step 3.3: the front end and the back end exchange data through a RESTful API, enabling panoramic operation of the virtual platform.

Step 4: deploy the virtual-scene display devices and track the in-vehicle user's viewpoint in the virtual platform; by removing the wall-column occlusion models from the virtual platform, give the driver a wide field of view and eliminate blind-spot hazards.

Step 4.1: according to the spatial layout of the parking lot and the direction of vehicle travel, mount virtual-scene display devices on the wall columns, with their height, size, orientation, and brightness all chosen to facilitate the driver's observation;

Step 4.2: track the driver's viewpoint on the virtual-space platform and present on the display a virtual picture from the driver's first-person view with the occluding wall columns removed, achieving blind-spot see-through.

Step 5: based on the vehicle pose data, predict the vehicle's pose changes over a future time window, measure the relation between the current viewing angle and the predicted pose data, and design the optimal viewing-angle change.

Step 5.1: perceive user needs with a trajectory-analysis prediction algorithm and extract the pose data of the user's vehicle, P(t) = (x(t), y(t), z(t), φ(t), θ(t), ψ(t)), where x(t), y(t), z(t) are the position coordinates of the vehicle in three-dimensional space, φ(t), θ(t), ψ(t) are the vehicle's attitude angles (rotations about the x, y, z axes), and t is time; from the pose data over a past window, {P(t1), P(t2), …, P(tn)}, predict the pose data P'(tn+1) for the coming period through a trajectory-analysis prediction function f, P'(tn+1) = f(P(t1), …, P(tn));

Step 5.2: measure the relation between the current viewing angle and the predicted pose data through an objective function G(V(t), P'(tn+1)), where the current viewing angle V(t) = (vx(t), vy(t), vz(t)) and vx(t), vy(t), vz(t) are its components along the x, y, z axes; the larger the objective value, the better the current viewing angle satisfies the driver's needs;

Step 5.3: set the optimal viewing-angle change ΔV(t) that maximizes the objective function G(V(t), P'(tn+1)), i.e. ΔV(t) = argmax over ΔV of G(V(t) + ΔV, P'(tn+1)), and update the position and attitude of the camera in the virtual platform by V(tn+1) = V(t) + ΔV(t), thereby switching the display's viewing angle according to user needs.

Step 6: based on the time-series information of the vehicle position data, evaluate vehicle collision risk, generate annotation data, and issue inter-vehicle collision warnings.

Step 6.1: from the vehicle pose data, extract the time series of the corresponding motion trajectory and speed, and obtain the vehicle's speed and acceleration from vf = vi + at and s = (vi + vf)t/2, where vi and vf are the vehicle's initial and final speeds, a is its acceleration, t is the time interval between adjacent frames, and s is the distance travelled, obtainable from the vehicle's positions in adjacent frames;

Step 6.2: evaluate the collision risk by computing the time to collision of two vehicles travelling at their current speeds, TTC = L/(v1 − v2), where L is the distance between vehicle 1 and vehicle 2 and v1 and v2 are their respective speeds;

Step 6.3: according to the traffic conditions in the parking lot, set the inter-vehicle distance threshold to 10 m. When L falls below this threshold, label risk levels according to the TTC value: TTC ≥ 3 s is marked as low risk, 1.5 s < TTC < 3 s as medium risk, and TTC ≤ 1.5 s as high risk. Annotation data are generated from this decision rule to complete the collision warning.
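A minimal sketch of steps 6.1-6.3 using the thresholds stated above (function and parameter names are illustrative):

```python
# Minimal sketch: speed from adjacent frames, TTC, and the 10 m / 3 s /
# 1.5 s thresholds given in this embodiment.
import math

def speed(p_prev, p_curr, dt: float) -> float:
    """Speed from (x, y, z) positions in two adjacent frames, dt apart."""
    s = math.dist(p_prev, p_curr)       # distance travelled between frames
    return s / dt

def risk_level(L: float, v1: float, v2: float,
               L_thr: float = 10.0) -> str:
    """Label collision risk for a follower (v1) closing on a leader (v2)."""
    if L >= L_thr or v1 <= v2:          # far apart or not closing: no alert
        return "none"
    ttc = L / (v1 - v2)                 # time to collision at current speeds
    if ttc <= 1.5:
        return "high"
    if ttc < 3.0:
        return "medium"
    return "low"

# Usage: risk_level(L=6.0, v1=4.0, v2=1.0)  -> TTC = 2.0 s -> "medium"
```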

In summary, addressing the frequent accidents in underground parking-lot blind spots, the present invention proposes beyond-visual-range sensing and early warning for underground parking lots. It combines the vehicle pose data perceived by the cameras with virtual-space simulation to virtualize the real scene, and provides a wide field of view by removing the wall-column occlusions in the virtual space, eliminating blind-spot hazards; it analyzes the running state of vehicle trajectories and, constrained by the pose change closest to the prediction, switches personalized viewpoints automatically; and, based on the time series of vehicle position changes, it sets thresholds on distance and time to collision, evaluates collision risk, generates risk annotations, and completes the warning prompts. The invention builds a virtual-space platform for underground parking-lot blind spots and, together with hardware such as cameras and display screens, detects the movement of vehicles and pedestrians in the blind spots and shows a virtual picture running synchronously with the real scene, giving users intelligent, precise, and personalized prompts and warnings. It detects and warns of vehicles and other targets in blind spots with low latency and low cost, effectively solving the blind-spot safety problem; relying on environment perception and virtual space, it realizes beyond-visual-range sensing and early warning for underground parking-lot blind spots, providing an important tool and means for curbing the frequent accidents at blind-spot conflict points.

The above are only preferred embodiments of the present invention and do not limit the present invention in any way. Any equivalent substitution, modification, or other change to the technical solutions and technical content disclosed herein, made by a person skilled in the art without departing from the scope of the technical solutions of the present invention, remains within the protection scope of the present invention.

Claims (7)

1.一种地下停车场盲区超视距感知及预警方法,其特征在于,包括以下步骤:1. A method for over-the-horizon sensing and early warning in blind spots in underground parking lots, which is characterized by including the following steps: S1:采集停车场全域的视频图像;S1: Collect video images of the entire parking lot; S2:将图像数据基于DORN算法进行深度估计,得出以监控为原点的车辆位姿信息;S2: Use the image data for depth estimation based on the DORN algorithm to obtain the vehicle pose information with the monitoring as the origin; S3:搭建停车场全域的虚拟场景仿真平台;S3: Build a virtual scene simulation platform for the entire parking lot; S4:在地下停车场内布设虚拟场景显示屏装置,所述显示屏装置基于所述车辆位姿信息以及虚拟场景仿真平台,在显示屏上提供撤去墙柱遮挡的第一驾驶人视角的虚拟画面,实现盲区透视;S4: Arrange a virtual scene display screen device in the underground parking lot. Based on the vehicle posture information and the virtual scene simulation platform, the display screen device provides a virtual picture from the perspective of the first driver without the obstruction of the wall pillar on the display screen. , to achieve blind spot perspective; S5:基于车辆位姿数据,预测车辆未来一段时间内的位姿数据变化,衡量当前视角与预测位姿数据之间的关系,设计最佳视角变化;S5: Based on the vehicle's posture data, predict the vehicle's posture data changes in the future, measure the relationship between the current perspective and the predicted posture data, and design the optimal perspective change; S51、通过轨迹分析预测算法感知用户需求,提取用户车辆的位姿数据其中x(t),y(t),z(t)为车辆在三维空间中的位置坐标,/>为车辆的姿态角(绕x,y,z轴的旋转角度),t为时间。根据过去一段时间内的位姿数据/>通过轨迹分析的预测函数/>预测未来一段时间内的位姿数据P'(tn+1);S51. Perceive the user's needs through the trajectory analysis and prediction algorithm, and extract the position and attitude data of the user's vehicle. Among them, x(t), y(t), z(t) are the position coordinates of the vehicle in the three-dimensional space,/> is the attitude angle of the vehicle (the rotation angle around the x, y, and z axes), and t is the time. Based on the pose data in the past period of time/> Prediction functions via trajectory analysis/> Predict the pose data P'(t n+1 ) in the future; S52、通过目标函数G(V(t),P'(tn+1))衡量当前视角与预测位姿数据之间的关系,当前视角V(t)=(vx(t),vy(t),vz(t)),其中vx(t),vy(t),vz(t)分别表示当前视角在x,y,z轴的方向,目标函数值越大,表示当前视角越能满足驾驶员的需求;S52. Measure the relationship between the current perspective and the predicted pose data through the objective function G(V(t),P'(t n+1 )). The current perspective V(t)=(v x (t), v y (t), v z (t)), where v x (t), v y (t), v z (t) respectively represent the direction of the current perspective in the x, y, and z axes. The larger the value of the objective function, the greater the The more the current perspective meets the driver’s needs; S53、设置最佳视角变化ΔV(t),使得目标函数G(V(t),P'(tn+1))值最大,即并通过V(tn+1)=V(t)+ΔV(t)更新虚拟平台中相机的位置和姿态,从而实现根据用户需求切换显示屏视角;S53. Set the optimal viewing angle change ΔV(t) to maximize the value of the objective function G(V(t),P'(t n+1 )), that is And update the position and posture of the camera in the virtual platform through V(t n+1 )=V(t)+ΔV(t), thereby switching the display perspective according to user needs; S6:根据车辆位置数据的时序信息,评估车辆碰撞风险,生成标注数据,实现车辆之间碰撞预警的提示。S6: Based on the time series information of the vehicle position data, evaluate the vehicle collision risk, generate annotation data, and implement collision warning prompts between vehicles. 2.根据权利要求1所述的地下停车场盲区超视距感知及预警方法,其特征在于,所述S1中,所述视频图像通过停车场布设的监控进行采集;所述S1中,还包括对采集的视频图像进行切帧和预处理。2. 
The over-the-horizon sensing and early warning method for blind spots in underground parking lots according to claim 1, characterized in that in S1, the video images are collected through monitoring of the parking lot layout; in S1, it also includes Perform frame cutting and preprocessing on the collected video images. 3.根据权利要求2所述的地下停车场盲区超视距感知及预警方法,其特征在于,所述预处理包括对视频图像进行对比度、亮度以及锐度的增强。3. The over-the-horizon sensing and early warning method for blind spots in underground parking lots according to claim 2, wherein the preprocessing includes enhancing the contrast, brightness and sharpness of the video image. 4.根据权利要求1所述的地下停车场盲区超视距感知及预警方法,其特征在于,所述S2包括以下步骤:4. The over-the-horizon sensing and early warning method for blind spots in underground parking lots according to claim 1, characterized in that the S2 includes the following steps: S21、使用DORN算法对图像序列进行深度估计,获得深度图像;该算法利用深度回归神经网络对单张图像进行估计,并利用帧间信息获得连续图像序列的深度信息;S21. Use the DORN algorithm to estimate the depth of the image sequence and obtain the depth image; the algorithm uses a deep regression neural network to estimate a single image, and uses inter-frame information to obtain the depth information of the continuous image sequence; S22、将深度图像和原始RGB图像输入D4LCN模型,利用深度引导卷积提取图像特征;该模型采用多分支架构,对输入的深度图像和RGB图像进行多尺度、多通道的卷积,提取图像特征,进而实现对车辆位置信息的定位和识别;S22. Input the depth image and original RGB image into the D4LCN model, and use depth-guided convolution to extract image features; the model uses a multi-branch architecture to perform multi-scale and multi-channel convolution on the input depth image and RGB image to extract image features. , thereby realizing the positioning and identification of vehicle location information; S23、在图像特征提取的基础上,使用D4LCN自带的非极大值抑制目标检测模块对车辆进行识别和定位;该算法通过对图像中的目标进行分类和回归,获得目标的位置和尺寸及方向信息;同时,利用Deepsort跟踪算法对车辆进行跟踪,保证在连续帧中能够准确地识别和定位车辆。S23. Based on the image feature extraction, use the non-maximum suppression target detection module of D4LCN to identify and locate the vehicle; this algorithm obtains the position, size and location of the target by classifying and regressing the targets in the image. Direction information; at the same time, the Deepsort tracking algorithm is used to track the vehicle to ensure that the vehicle can be accurately identified and positioned in consecutive frames. 5.根据权利要求1所述的地下停车场盲区超视距感知及预警方法,其特征在于,所述S3包括以下步骤:5. The over-the-horizon sensing and early warning method for blind spots in underground parking lots according to claim 1, characterized in that the S3 includes the following steps: S31、获取相应地下停车场的基础空间信息,包括墙柱数量和位置,停车位数量和位置以及摄像头布设位置和角度,使用Vue框架和three.is引擎进行场景渲染,提供用户交互界面,展示停车场基础信息;S31. Obtain the basic spatial information of the corresponding underground parking lot, including the number and location of wall columns, the number and location of parking spaces, and the location and angle of camera layout. Use the Vue framework and three.is engine to perform scene rendering, provide a user interaction interface, and display parking. Basic information about the field; S32、使用Flask框架搭建程序提取摄像头感知的车辆位姿数据,检索出同一ID的车辆位姿信息,并按时间排序构成轨迹,通过透视变换整理形成车辆在虚拟全景视角中的轨迹数据集P={P1,P2,···,Pj,···,Pm},其中为车辆j的轨迹,xj,yj,zj为t时刻车辆j的三维坐标,/>为t时刻车辆j的姿态角;S32. 
S33: the front end and the back end exchange data through a RESTful API, so that the virtual platform runs as a full panoramic view.

6. The method for over-the-horizon sensing and early warning in blind spots in underground parking lots according to claim 1, characterized in that S4 comprises the following steps:

S41: according to the spatial layout of the parking lot and the directions of vehicle movement, mount the virtual-scene display devices on the walls and pillars; their height, size, orientation and brightness should all follow the principle of being easy for the driver to observe;

S42: track the driver's viewpoint on the basis of the virtual-space platform and present on the display a virtual first-person driver's view with the occluding walls and pillars removed, achieving blind-spot see-through.

7. The method for over-the-horizon sensing and early warning in blind spots in underground parking lots according to claim 1, characterized in that S6 comprises the following steps (a worked example follows the claims):

S61: extract the time-series information of the corresponding motion trajectory and speed from the vehicle pose data, and obtain the speed and acceleration of the vehicle through v_f = v_i + a·t and s = (v_i + v_f)·t/2, where v_i and v_f denote the initial and final speeds of the vehicle, a denotes the acceleration of the vehicle, t denotes the time interval between adjacent frames, and s denotes the distance travelled by the vehicle, obtainable from the vehicle's positions in adjacent frames;

S62: evaluate the vehicle collision risk by computing the time to collision for the two vehicles travelling at their current speeds, TTC = L/(v_1 - v_2), where L is the distance between vehicle 1 and vehicle 2, and v_1 and v_2 respectively denote the speeds of vehicle 1 and vehicle 2;

S63: set a distance threshold between two vehicles according to the vehicle operating conditions in the parking lot; when L falls below the threshold, label different risk levels according to the TTC value, completing the collision-warning prompt.
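
The prediction function f and the objective function G in S51-S53 are left open by the claims. The Python sketch below is therefore only one possible instantiation, assuming constant-velocity extrapolation for f and a cosine-alignment objective for G, with the argmax over ΔV approximated by random sampling; every function and parameter name here is illustrative and not part of the patented method.

```python
import numpy as np

def predict_pose(past_poses, dt):
    """Stand-in for the prediction function f of S51 (assumption:
    constant-velocity extrapolation of the last observed pose).
    past_poses: (n, 6) array of [x, y, z, phi, theta, psi] rows."""
    velocity = (past_poses[-1] - past_poses[-2]) / dt
    return past_poses[-1] + velocity * dt

def objective(view_dir, predicted_pose):
    """Assumed form of G(V, P'): cosine alignment between the viewing
    direction and the direction from the camera (the origin of the
    pose frame) to the predicted vehicle position."""
    target = predicted_pose[:3]
    target = target / (np.linalg.norm(target) + 1e-9)
    view = view_dir / (np.linalg.norm(view_dir) + 1e-9)
    return float(view @ target)

def best_view_change(view, predicted_pose, step=0.05, n_samples=200, seed=0):
    """S53: approximate argmax over dV by sampling small perturbations."""
    rng = np.random.default_rng(seed)
    best_dv, best_g = np.zeros(3), objective(view, predicted_pose)
    for _ in range(n_samples):
        dv = rng.normal(scale=step, size=3)
        g = objective(view + dv, predicted_pose)
        if g > best_g:
            best_dv, best_g = dv, g
    return best_dv

past = np.array([[1.0, 2.0, 0.0, 0.0, 0.0, 0.10],
                 [1.2, 2.1, 0.0, 0.0, 0.0, 0.12]])
p_next = predict_pose(past, dt=0.1)                 # P'(t_{n+1})
v_now = np.array([1.0, 0.0, 0.0])                   # V(t)
v_next = v_now + best_view_change(v_now, p_next)    # V(t_{n+1}) = V(t) + dV(t)
print(v_next)
```

A gradient-based or grid search over ΔV would serve equally well; sampling is used here only to keep the sketch free of extra dependencies.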
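Claims 2 and 3 call for frame extraction plus contrast, brightness and sharpness enhancement without fixing an implementation. One conventional realisation, assuming OpenCV is available, is a linear gain/offset followed by an unsharp mask; the values of alpha, beta and the sharpening weight below are illustrative only.

```python
import cv2
import numpy as np

def enhance(frame, alpha=1.3, beta=15, sharpen=0.5):
    """Claim 3 preprocessing: contrast (gain alpha), brightness
    (offset beta), then sharpness via a standard unsharp mask."""
    out = cv2.convertScaleAbs(frame, alpha=alpha, beta=beta)
    blur = cv2.GaussianBlur(out, (0, 0), sigmaX=3)
    return cv2.addWeighted(out, 1.0 + sharpen, blur, -sharpen, 0)

frame = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)  # dummy frame
print(enhance(frame).shape)
```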
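Claim 4 chains three published models (DORN, D4LCN with its NMS detection head, and DeepSORT), none of which ships with a single standard API. The sketch below shows only the per-frame data flow of S21-S23; the stub classes are hypothetical placeholders for whatever implementations of those models are plugged in.

```python
class StubDepthModel:
    """Placeholder for a DORN wrapper (S21); returns a dummy depth map."""
    def estimate(self, frame):
        return [[1.0] * len(frame[0]) for _ in frame]

class StubDetector:
    """Placeholder for D4LCN with depth-guided convolution and its
    built-in NMS detection module (S22/S23)."""
    def detect(self, frame, depth):
        return [{"box": (0, 0, 10, 10), "pos": (1.0, 2.0, 0.5), "yaw": 0.0}]

class StubTracker:
    """Stands in for DeepSORT (S23); here it merely numbers detections
    instead of doing appearance-and-motion data association."""
    def update(self, detections):
        return [dict(det, track_id=i) for i, det in enumerate(detections)]

def run_pipeline(frames, depth_model, detector, tracker):
    """Per-frame data flow of claim 4: depth estimation (S21), then
    depth-guided detection (S22/S23), then cross-frame tracking (S23)."""
    results = []
    for frame in frames:
        depth = depth_model.estimate(frame)          # S21: dense depth map
        detections = detector.detect(frame, depth)   # S22/S23: boxes + 3D pose
        results.append(tracker.update(detections))   # S23: consistent IDs
    return results

frames = [[[0] * 4] * 3, [[0] * 4] * 3]              # two tiny dummy "frames"
print(run_pipeline(frames, StubDepthModel(), StubDetector(), StubTracker()))
```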
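For S32-S33 of claim 5, a minimal Flask back end might look as follows. The endpoint path, storage layout and field names are assumptions: the claim only requires that pose records sharing a vehicle ID be retrieved, sorted by time into a trajectory P_j, and exchanged with the front end over a RESTful API; the perspective transformation into the panoramic view is omitted here.

```python
from collections import defaultdict
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the pose store; in the claimed system these
# records come from the camera-side perception pipeline (S2).
poses = defaultdict(list)  # vehicle id -> [(t, x, y, z, phi, theta, psi), ...]

def add_pose(vid, record):
    poses[vid].append(record)

@app.route("/trajectory/<int:vid>")
def trajectory(vid):
    """S32/S33: return the trajectory P_j of vehicle j as its pose
    records sorted by timestamp, served as JSON over REST."""
    track = sorted(poses[vid], key=lambda r: r[0])
    return jsonify([{"t": t, "x": x, "y": y, "z": z,
                     "phi": phi, "theta": theta, "psi": psi}
                    for (t, x, y, z, phi, theta, psi) in track])

if __name__ == "__main__":
    add_pose(7, (0.0, 1.0, 2.0, 0.0, 0.0, 0.0, 0.10))
    add_pose(7, (0.1, 1.2, 2.1, 0.0, 0.0, 0.0, 0.12))
    app.run(port=5000)
```

A Vue/three.js front end would poll such endpoints to place and orient the vehicle models in the rendered scene.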
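The arithmetic of claim 7 can be checked in a few lines. The claims fix the formulas v_f = v_i + a·t, s = (v_i + v_f)·t/2 and TTC = L/(v_1 - v_2) and the thresholding scheme, but not the distance threshold or the TTC cut-offs between risk levels, so the numeric values below are illustrative assumptions.

```python
def speed_and_acceleration(p0, p1, p2, dt):
    """S61: speeds from displacements between adjacent frames, then
    a = (v_f - v_i) / dt from the two speed samples."""
    def dist(a, b):
        return ((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5
    v_i = dist(p0, p1) / dt
    v_f = dist(p1, p2) / dt
    return v_i, v_f, (v_f - v_i) / dt

def collision_warning(L, v1, v2, d_threshold=15.0):
    """S62/S63: TTC = L / (v1 - v2) for vehicle 1 closing on vehicle 2,
    labelled only once the gap L is below the distance threshold.
    The 2 s / 5 s cut-offs are illustrative, not from the claims."""
    if L >= d_threshold:
        return "no warning"
    closing = v1 - v2            # > 0 means the gap is shrinking
    if closing <= 0:
        return "monitor"         # not closing, so TTC is not finite
    ttc = L / closing
    if ttc < 2.0:
        return "high risk"
    if ttc < 5.0:
        return "medium risk"
    return "low risk"

v_i, v_f, a = speed_and_acceleration((0, 0), (0.4, 0), (0.9, 0), dt=0.1)
print(v_i, v_f, a)                              # 4.0 m/s, 5.0 m/s, 10 m/s^2
print(collision_warning(8.0, v1=4.0, v2=1.0))   # TTC ≈ 2.7 s -> "medium risk"
```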
CN202311151480.7A 2023-09-07 2023-09-07 A method for sensing and warning blind spots in underground parking lots beyond visual range Active CN117437808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311151480.7A CN117437808B (en) 2023-09-07 2023-09-07 A method for sensing and warning blind spots in underground parking lots beyond visual range

Publications (2)

Publication Number Publication Date
CN117437808A true CN117437808A (en) 2024-01-23
CN117437808B (en) 2024-12-31

Family

ID=89557263

Country Status (1)

Country Link
CN (1) CN117437808B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014166394A1 (en) * 2013-04-10 2014-10-16 Zhang Youbin Traffic information operation system and method
CN115171413A (en) * 2022-05-25 2022-10-11 北京国家新能源汽车技术创新中心有限公司 Control method and system for traffic light shielding scene based on vehicle-road perception fusion technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHIEN-CHUAN LIN et al.: "Topview Transform Model for The Vehicle Parking Assistance System", 2010 International Computer Symposium (ICS2010), 10 January 2011 (2011-01-10), pages 306-308 *
TAN ZHAOYI; CHEN BAIFAN; SU WEI; GE JING: "Reconstructing the vehicle blind-zone field of view from the first-person perspective" (基于第一视角的车辆盲区视野重现), Software Guide (软件导刊), no. 09, 15 September 2020 (2020-09-15), pages 202-206 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710909A (en) * 2024-02-02 2024-03-15 多彩贵州数字科技股份有限公司 Rural road intelligent monitoring system based on target detection and instance segmentation
CN117710909B (en) * 2024-02-02 2024-04-12 多彩贵州数字科技股份有限公司 Rural road intelligent monitoring system based on target detection and instance segmentation

Also Published As

Publication number Publication date
CN117437808B (en) 2024-12-31

Similar Documents

Publication Publication Date Title
Datondji et al. A survey of vision-based traffic monitoring of road intersections
CA2747337C (en) Multiple object speed tracking system
CA2638416A1 (en) Method and apparatus for evaluating an image
CN106373430A (en) Intersection pass early warning method based on computer vision
CN201402413Y (en) Vehicle control assistant device
JP4609603B2 (en) 3D information display device and 3D information display method
US12182964B2 (en) Vehicle undercarriage imaging
CN104427255A (en) Image processing method of vehicle camera and image processing apparatus using the same
JP4991384B2 (en) Approaching object detection device and approaching object detection program
CN105046948A (en) System and method of monitoring illegal traffic parking in yellow grid line area
JPH0933232A (en) Object observation method and object observation apparatus using this method, as well as traffic-flow measuring apparatus using this apparatus, and parking-lot observation apparatus
Cualain et al. Multiple-camera lane departure warning system for the automotive environment
CN117437808A (en) A method for over-the-horizon sensing and early warning in blind spots in underground parking lots
Matsuda et al. A system for real-time on-street parking detection and visualization on an edge device
Munajat et al. Vehicle detection and tracking based on corner and lines adjacent detection features
CN204884166U (en) Regional violating regulations parking monitoring devices is stopped to traffic taboo
CN116824549B (en) Target detection method and device based on multi-detection network fusion and vehicle
CN117058512A (en) Multi-mode data fusion method and system based on traffic big model
CN115100903A (en) Highway curve bidirectional early warning system based on YOLOV3 target detection algorithm
CN115565155A (en) Training method of neural network model, generation method of vehicle view and vehicle
JP2004251886A (en) Device for detecting surrounding object
CN118898920B (en) 360-Degree monitoring, prompting and early warning method and device for bus
US20240428574A1 (en) Apparatus and method for generating data for traning of neural network and storage medium storing instructions to perform method for generating data for traning of neural network
van de Wouw et al. Development and analysis of a real-time system for automated detection of improvised explosive device indicators from ground vehicles
JP2007265012A (en) Windshield range detection device, method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant