
CN116168508B - A driving fatigue detection and early warning control method and device for human-machine co-driving - Google Patents


Info

Publication number
CN116168508B
CN116168508B (application CN202211598471.8A)
Authority
CN
China
Prior art keywords
fatigue
information
driving
image
steering wheel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211598471.8A
Other languages
Chinese (zh)
Other versions
CN116168508A (en)
Inventor
陈振斌
欧阳颖
杨峥
赖佳琴
张天虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan University
Original Assignee
Hainan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan University
Publication of CN116168508A
Application granted
Publication of CN116168508B
Legal status: Active
Anticipated expiration


Classifications

    • G08B21/06 Alarms for ensuring the safety of persons, indicating a condition of sleep, e.g. anti-dozing alarms
    • G06N3/02, G06N3/08 Computing arrangements based on biological models; neural networks; learning methods
    • G06V10/764 Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V10/82 Image or video recognition using neural networks
    • G06V20/588 Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06V40/169 Face feature extraction; holistic features and representations, i.e. based on the facial image taken as a whole
    • G06V40/171 Face feature extraction; local features and components, e.g. occluding parts such as glasses
    • G06V40/172 Face classification, e.g. identification
    • G08B7/06 Combined audible and visible signalling using electric transmission, e.g. sound and light sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Emergency Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a driving fatigue detection and early warning control method and device for human-machine co-driving. Facial features (eye state, mouth state and head pitch angle) and a heart rate value are obtained by recognition from the in-cab image, and the in-cab image together with the extracted facial features and heart rate value is fed into a preset time-series neural network to obtain first fatigue information. The road-surface image outside the cab is fed into a YOLOP model to obtain the lane-line information of the road surface; the line-pressing driving time is extracted, and second fatigue information is obtained from it. Steering-wheel jitter features are computed from the steering-wheel angle information, yielding third fatigue information. The fatigue degree is calculated from the first, second and third fatigue information; when it exceeds a preset warning value, an audible/visual alarm is issued and an expert-database system is started. The application eliminates the drawbacks of any single detection method and improves the reliability of the early-warning system.

Description

A driving fatigue detection and early warning control method and device for human-machine co-driving

Technical Field

The present invention relates to the technical field of safe driving, and in particular to a driving fatigue detection and early warning control method and device for human-machine co-driving.

Background

With the rapid development of the economy and the continuous improvement of living standards, the car has gradually become an everyday means of transport for ordinary households. While bringing convenience to people's lives and promoting social progress, it has also increased the occurrence of all kinds of traffic accidents, and the hidden dangers behind these accidents cannot be ignored. According to incomplete statistics, 20% of road traffic accidents are caused by fatigued driving. To protect drivers' travel safety and property, driver fatigue is therefore usually detected and warned against. At present, driver fatigue detection is mainly realized by three kinds of methods: methods based on physiological signals, methods based on the driver's facial information, and methods based on vehicle behavior. Detection based on physiological signals generally requires the driver to wear instruments that acquire reference indicators such as the electroencephalogram (EEG) and heart rate; it offers high accuracy and continuous availability, but at a high cost. Detection based on facial information judges the driver's fatigue state by tracking key positions of the eyes, mouth and head in real time; its greatest advantages are that it is non-invasive and accurate, but it is easily affected by occlusion, lighting and the like. Detection based on vehicle behavior continuously monitors dynamic parameters such as lane departure and steering-wheel angle to judge the driver's drowsiness, but it depends too heavily on the driver's personal driving habits.

Traditional fatigue-driving warning approaches therefore rely on a single information source and offer low warning accuracy.

Summary of the Invention

To solve the above technical problems, the present invention proposes a driving fatigue detection and early warning control method and device for human-machine co-driving. The method and device eliminate the drawbacks of any single detection method, improve the reliability of the warning device, and ensure the driver's safety.

To achieve the above objects, the technical solution of the present invention is as follows:

A driving fatigue detection and early warning control method for human-machine co-driving, comprising the following steps:

collecting in-cab images, road-surface images outside the cab, and steering-wheel angle information in real time;

obtaining facial features and a heart rate value by recognition from the in-cab image, the facial features comprising the eye state, the mouth state and the head pitch angle, and feeding the in-cab image together with the extracted facial features and heart rate value into a preset time-series neural network to obtain first fatigue information;

feeding the road-surface image outside the cab into a trained YOLOP model, obtaining the lane-line information of the road surface, extracting the line-pressing driving time from the lane-line information, and obtaining second fatigue information based on the line-pressing driving time;

computing steering-wheel jitter features from the steering-wheel angle information, and obtaining third fatigue information based on the steering-wheel jitter features;

building a deep learning model trained with an adaptive-weight algorithm, obtaining the weight values α, β and γ corresponding to the first, second and third fatigue information respectively, and calculating the fatigue degree by combining the first, second and third fatigue information with the corresponding weights α, β and γ; when the fatigue degree exceeds a preset warning value, issuing an audible/visual alarm and starting the expert-database system.

Preferably, obtaining the facial features comprises the following steps:

preprocessing the in-cab image;

performing face recognition on the preprocessed in-cab image to obtain a face image;

extracting the coordinates of the feature key points in the face image and feeding them into a preset multi-task classification neural network, which recognizes the eye state, the mouth state and the head pitch angle.

Preferably, the loss function of the multi-task classification neural network is:

Floss_face_total = Floss_leye + Floss_mouse + Floss_angle

where the individual loss terms compare the true values of the eye state, mouth state and head pitch angle with the corresponding network predictions, and c1 = c2 = c3 denotes the L2 regularization weight.

Preferably, obtaining the heart rate value comprises the following steps:

extracting the forehead region based on the coordinates of the feature key points;

computing the mean of the G channel in the forehead region and performing blind source separation on the G-channel mean signal to obtain the separated G-channel source signal;

performing a fast Fourier transform on the separated G-channel source signal to obtain the heart rate value.

Preferably, the loss function of the time-series neural network is:

where Lface_+ denotes the loss of positive samples and Lface_- the loss of negative samples, p denotes the probability value output by the time-series network, ε+ denotes the positive-sample weight parameter, ε- denotes the negative-sample weight parameter, and γ1, γ2, γ3 denote the loss weights of the eyes, mouth and pitch angle respectively.

Preferably, feeding the road-surface image outside the cab into the trained YOLOP model, obtaining the lane-line information of the road surface, extracting the line-pressing driving time from the lane-line information, and obtaining the second fatigue information based on the line-pressing driving time comprises the following steps:

preprocessing the road-surface image outside the cab;

feeding the preprocessed road-surface image into the trained YOLOP model to detect the lane-line information of the road surface;

recording, during detection, the time for which the vehicle drives on a lane line with the turn signal on, and determining the second fatigue information based on the relation between the line-pressing time and a preset driving threshold.

Preferably, the preprocessing of the road-surface image outside the cab comprises one or more of image denoising, image dehazing, image rain removal, image deblurring, image color enhancement, image brightness enhancement and image detail enhancement.

Preferably, computing the steering-wheel jitter features from the steering-wheel angle information and obtaining the third fatigue information based on the steering-wheel jitter features comprises the following steps:

filtering and denoising the steering-wheel angle information to obtain denoised steering-wheel angle information;

computing, from the denoised steering-wheel angle information, the approximate entropy of the steering-wheel angle within a time window, the approximate entropy serving as the steering-wheel jitter feature value;

determining the third fatigue information based on the relation between the approximate entropy and a preset jitter threshold.

Preferably, starting the expert-database system comprises the following steps:

tracking and predicting, in real time, the offset of the host vehicle from the lane centerline and its distance to the vehicle ahead, and obtaining prediction results;

performing a human-machine takeover when the prediction result exceeds a preset offset threshold or distance threshold.

Based on the above, the present invention further discloses a driving fatigue detection and early warning control device for human-machine co-driving, comprising an image acquisition device, an information acquisition module, a control module and a warning device, wherein:

the image acquisition device is used to collect in-cab images and road-surface images outside the cab in real time and transmit them to the control module;

the information acquisition module is used to collect the steering-wheel angle information during driving and transmit it to the control module;

the control module is used to obtain facial features and a heart rate value by recognition from the in-cab image, the facial features comprising the eye state, the mouth state and the head pitch angle, and to feed the in-cab image together with the extracted facial features and heart rate value into a preset time-series neural network to obtain first fatigue information; to feed the road-surface image outside the cab into a trained YOLOP model, obtain the lane-line information of the road surface, extract the line-pressing driving time from the lane-line information, and obtain second fatigue information based on the line-pressing driving time; to compute steering-wheel jitter features from the steering-wheel angle information and obtain third fatigue information based on them; and to build a deep learning model trained with an adaptive-weight algorithm, obtain the weight values α, β and γ corresponding to the first, second and third fatigue information, calculate the fatigue degree from the three kinds of fatigue information combined with the weights α, β and γ, and send a warning instruction to the warning device according to the fatigue degree;

the warning device is used to receive the warning instruction, issue a warning according to it, and start the expert-database system.

Based on the above technical solution, the beneficial effects of the present invention are:

1) The present invention uses remote photoplethysmography (rPPG), a video-based remote heart-rate measurement technique, to remove the need for the wearable devices required by contact methods. A deep-learning fusion algorithm then combines the driver's facial features (eyes, mouth, head pitch angle), heart rate, road lane-line features and steering-wheel angle information with suitable weights to monitor the driver's fatigue state in real time; if fatigued driving is detected, a voice announcement and a warning light remind the driver to rest. This eliminates the drawbacks of any single detection method and improves the reliability of the warning system.

2) When driver fatigue is detected, the present invention does not merely issue an audible/visual alarm: it also tracks and predicts, in real time, the offset of the host vehicle from the lane centerline and its distance to the vehicle ahead, and performs a human-machine takeover based on the relation between the prediction results and the preset offset or distance thresholds, automatically adjusting the speed and steering-wheel angle to ensure the driver's safety in the fatigued state.

Description of the Drawings

Figure 1 is flow chart 1 of a driving fatigue detection and early warning control method for human-machine co-driving in one embodiment;

Figure 2 is flow chart 2 of a driving fatigue detection and early warning control method for human-machine co-driving in one embodiment;

Figure 3 is a flow chart of the detection method based on facial features and heart rate in a driving fatigue detection and early warning control method for human-machine co-driving in one embodiment;

Figure 4 is a flow chart of the detection method based on the vehicle driving process in a driving fatigue detection and early warning control method for human-machine co-driving in one embodiment;

Figure 5 is a schematic structural diagram of a driving fatigue detection and early warning control device for human-machine co-driving in one embodiment.

Detailed Description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.

As shown in Figures 1 and 2, this embodiment provides a driving fatigue detection and early warning control method for human-machine co-driving, comprising the following steps:

Step 1: collect in-cab images, road-surface images outside the cab, and steering-wheel angle information in real time.

In this embodiment, an infrared camera films the cab interior in real time, ensuring that reasonably clear face images can be captured at night; frames are sampled from the video at 30 fps to obtain the in-cab images. A high-definition camera films the scene outside the cab in real time, likewise sampled at 30 fps, to obtain the road-surface images.

Steering-wheel angle data are collected by a steering-wheel sensor, with sampling instants aligned to the capture times of the in-cab images and the road-surface images.

Step 2: obtain facial features and a heart rate value by recognition from the in-cab image, the facial features comprising the eye state, the mouth state and the head pitch angle, and feed the in-cab image together with the extracted facial features and heart rate value into a preset time-series neural network to obtain the first fatigue information.

Referring to Figure 3, the in-cab image is first fed into a pretrained light-enhancement neural network for preprocessing, which outputs the light-enhanced image; the formula is as follows:

Xenhance = ZeroDCE(Xraw)

where Xraw denotes the original image captured by the infrared camera, and Xenhance the optimized image obtained from the pretrained ZeroDCE light-enhancement network.

Next, a face-detection algorithm locates the driver's face in the current image, and the face is cropped from the light-enhanced image as the region of interest (ROI). The ROI is fed into the pretrained multi-task classification neural network (Q2L-SPP-MobileViT), which outputs the positions of the driver's eyes and mouth and the head pitch angle. From the extracted forehead ROI, the mean of the G channel is computed; an improved wavelet-transform filtering algorithm is applied, followed by a fast Fourier transform to obtain the heart rate value. Finally, the driver's blink frequency, yawning frequency, the variation of the head pitch angle, and whether the heart rate exceeds the set threshold are used to judge whether the driver is fatigued. The details are as follows:

    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def FaceDetect(Xenhance):
        gray = cv2.cvtColor(Xenhance, cv2.COLOR_BGR2GRAY)
        rects = detector(gray, 1)           # detected face boxes (image upsampled once)
        rect = rects[0]                     # the driver is assumed to be the only face
        landmark68 = predictor(gray, rect)  # 68 key-point coordinates from the face box
        return rect, landmark68

    rect, landmark68 = FaceDetect(Xenhance)

The dlib algorithm detects the face: the input is the light-enhanced image Xenhance; the outputs are rect, the detected face box given by the top-left corner (x1, y1) and bottom-right corner (x2, y2), and landmark68, the coordinates of the 68 facial feature key points.

Based on the 68 key-point coordinates, the left eye Xleye and the mouth Xmouse are cropped out and fed into the designed neural network, which is trained to output Eleye, the left-eye state (open or closed), and Emouse, the mouth state (yawning or mouth closed).

The mean XforeheadG_mean of the G channel of the cropped forehead region Xforehead is obtained:

    XforeheadG_mean = numpy.mean(Xforehead[:, :, 1])

where the numpy.mean() function averages its argument, and Xforehead[:, :, 1] traverses all values of the G channel over the forehead region Xforehead.

Blind source separation is performed on the raw G-channel mean signal to obtain clean, filtered mean data.

A fast Fourier transform is then performed to obtain the final heart rate value.
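
As an illustration of this step only, the following minimal Python sketch (not the patented implementation) picks the dominant spectral peak of the filtered G-channel signal inside a plausible heart-rate band and converts it to beats per minute; the 30 Hz sampling rate matches the 30 fps capture above, while the 0.7-4.0 Hz band limits are assumptions:

    import numpy as np

    def estimate_heart_rate(g_signal, fs=30.0):
        # Remove the DC component so the spectral peak reflects the pulse, not brightness
        g = np.asarray(g_signal, dtype=float)
        g = g - g.mean()
        spectrum = np.abs(np.fft.rfft(g))
        freqs = np.fft.rfftfreq(len(g), d=1.0 / fs)
        # Keep only the physiologically plausible band (0.7-4.0 Hz, i.e. 42-240 bpm)
        band = (freqs >= 0.7) & (freqs <= 4.0)
        peak_freq = freqs[band][spectrum[band].argmax()]
        return peak_freq * 60.0  # heart rate in beats per minute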

Finally, the 4-dimensional vector of the obtained left-eye state, mouth state, heart rate value and driver pitch angle serves as part of the input of the fatigue recognition network, whose output Y1{fatigue, normal} indicates a fatigued or normal state.

V = {Eleye, Emouse, Ehr, Eangle}

The input of the multi-task classification neural network is I1:

I1 = {V, Xenhance}

Y1 = Net(I1)

where Net() denotes the multi-task classification neural network.

The loss function of the multi-task classification neural network is:

Floss_face_total = Floss_leye + Floss_mouse + Floss_angle

where the individual loss terms compare the true values of the eye state, mouth state and head pitch angle with the corresponding network predictions, and c1 = c2 = c3 indicates that the L2 regularization weight is set to 0.01.

The loss function of the time-series neural network:

where Lface_+ denotes the loss of positive samples (eyes open, yawning) and Lface_- the loss of negative samples (eyes closed, mouth closed); p denotes the probability value output by the time-series network; the positive-sample weight parameter ε+ is set to 1 and the negative-sample weight parameter ε- to 2; and the loss weights γ1, γ2, γ3 of the eyes, mouth and pitch angle are set to 0.3, 0.3 and 0.1 respectively.
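
The stated parameters can be illustrated with a simplified sketch. The version below reduces each term to a weighted binary cross-entropy using the values above (ε+ = 1, ε- = 2; task weights 0.3, 0.3, 0.1); the exact functional form printed in the patent may differ:

    import numpy as np

    def weighted_bce(p, y, eps_pos=1.0, eps_neg=2.0):
        # y = 1 for positive samples (eyes open / yawning), y = 0 for negative samples
        return -(eps_pos * y * np.log(p + 1e-8)
                 + eps_neg * (1.0 - y) * np.log(1.0 - p + 1e-8))

    def face_total_loss(p_eye, y_eye, p_mouth, y_mouth, p_pitch, y_pitch):
        # Per-task loss weights for eyes, mouth and pitch angle (0.3, 0.3, 0.1)
        return (0.3 * weighted_bce(p_eye, y_eye)
                + 0.3 * weighted_bce(p_mouth, y_mouth)
                + 0.1 * weighted_bce(p_pitch, y_pitch))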

In this embodiment, the input of the time-series neural network is not only the driver's facial image captured by the camera but also the outputs of the preceding multi-task classification network, i.e. the eye open/closed state, the mouth open/closed state and the pitch angle. Feeding these into the time-series network together with the in-cab image effectively adds regularization and keeps the network from learning spurious cues. This avoids the weakness of typical end-to-end networks that simply feed the whole camera image into the network, ignoring that the background is complex and changeable during normal driving, which degrades accuracy.

Step 3: feed the road-surface image outside the cab into the trained YOLOP model, obtain the lane-line information of the road surface, extract the line-pressing driving time from the lane-line information, and obtain the second fatigue information based on the line-pressing driving time.

Referring to Figure 4, the road-surface image outside the cab is first fed into the trained multi-task denoising/dehazing/deraining model IPT (image processing transformer) to obtain a clear road image, which is then fed into the trained YOLOP model to detect the lane-line information. If the vehicle is detected to have its turn signal on but still drives on a lane line, the line-pressing time is recorded; if it exceeds the threshold, the driver is judged to be fatigued.

The processing of road images from the high-definition camera follows the same principle as the infrared-camera pipeline for the driver's facial features, finally yielding the output Y2{fatigue, normal}.
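
A minimal sketch of the decision logic for this step is given below; the per-frame update interval dt follows from the 30 fps capture, while the 3-second threshold is an assumed value, not one stated in the patent:

    def update_line_pressing(timer_s, signal_on, pressing_line, dt=1.0 / 30, threshold_s=3.0):
        # Accumulate the time the vehicle keeps driving on a lane line with the
        # turn signal on; reset the timer as soon as the condition clears
        timer_s = timer_s + dt if (signal_on and pressing_line) else 0.0
        y2 = "fatigue" if timer_s > threshold_s else "normal"
        return timer_s, y2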

Step 4: compute the steering-wheel jitter features from the steering-wheel angle information, and obtain the third fatigue information based on the steering-wheel jitter features.

In this embodiment, a filtering algorithm denoises the steering-wheel angle signal to obtain the denoised steering-wheel angle information. From the denoised signal, the approximate entropy of the steering-wheel angle within a time window is computed; approximate entropy is used here to quantify the irregularity of the random sequence of steering-wheel angles over a period of time, and the computed approximate entropy serves as the steering-wheel jitter feature value. If the approximate entropy persistently exceeds the preset jitter threshold, the driver is judged to be fatigued, and the result is output as Y3{fatigue, normal}.

The approximate entropy of the steering-wheel angle is computed as follows:

Ci^m(r) = Bi / (N - m + 1),   Φ^m(r) = (1 / (N - m + 1)) Σi ln Ci^m(r),   ApEn(m, r, N) = Φ^m(r) - Φ^(m+1)(r)

where m is the number of embedding dimensions, r is the similarity tolerance threshold, N is the total number of data points in the observation window (the length of the input time series), Bi denotes the number of indices j for which the distance between the two vectors X(i) and X(j) is less than the similarity tolerance threshold r, and X(·) denotes an m-dimensional vector reconstructed from the input time series.

X(i) = [XSWA(i), XSWA(i+1), ..., XSWA(i+m-1)] ∈ R^m

X(j) = [XSWA(j), XSWA(j+1), ..., XSWA(j+m-1)] ∈ R^m
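
A direct Python sketch of this computation is shown below; it follows the standard approximate-entropy definition with the Chebyshev distance between embedded vectors, and the m = 2 and r = 0.2·std defaults are assumptions rather than values from the patent:

    import numpy as np

    def approximate_entropy(x_swa, m=2, r=None):
        # x_swa: steering-wheel angle samples XSWA within one time window
        x = np.asarray(x_swa, dtype=float)
        n = len(x)
        if r is None:
            r = 0.2 * x.std()  # similarity tolerance threshold

        def phi(m):
            # Build the (n - m + 1) overlapping m-dimensional vectors X(i)
            emb = np.array([x[i:i + m] for i in range(n - m + 1)])
            # B_i: how many X(j) lie within Chebyshev distance r of X(i)
            dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            c = (dist <= r).mean(axis=1)   # C_i^m(r), self-match included
            return np.log(c).mean()        # Φ^m(r)

        return phi(m) - phi(m + 1)         # ApEn(m, r, N)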

Step 5: build a deep learning model trained with an adaptive-weight algorithm, obtain the weight values α, β and γ corresponding to the first, second and third fatigue information respectively, and calculate the fatigue degree by combining the first, second and third fatigue information with the corresponding weights α, β and γ. When the fatigue degree exceeds the preset warning value, issue an audible/visual alarm and start the expert-database system.

In this embodiment, the fatigue degree is calculated as follows:

Y{fatigue, normal} = αY1{fatigue, normal} + βY2{fatigue, normal} + γY3{fatigue, normal}

Specifically, the initial weight of the face-image detection result is preset to α = 0.7, the weight of the lane-line detection result to β = 0.1, and the weight of the steering-wheel-angle detection result to γ = 0.2. As the system keeps running, it trains the deep learning model on the accumulated historical data: the Y output serves as the input of the adaptive network, which outputs the values of the weights α, β and γ. This realizes adaptive weighting, accounts for the driver's personal habits, and improves the robustness and detection accuracy of the model.
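
A minimal sketch of the weighted fusion and threshold check follows; here fatigue decisions are encoded as 1 and normal as 0, and the 0.5 warning value is an assumption (the patent only states that a preset warning value is used):

    def fatigue_degree(y1, y2, y3, alpha=0.7, beta=0.1, gamma=0.2, warn_value=0.5):
        # y1, y2, y3: per-channel decisions, 1 for "fatigue" and 0 for "normal";
        # alpha, beta, gamma: the preset (later adapted) channel weights
        degree = alpha * y1 + beta * y2 + gamma * y3
        return degree, degree > warn_value  # (fatigue degree, trigger alarm?)

With the preset initial weights, a fatigue decision from the facial channel alone (0.7) already crosses this assumed warning value, while either vehicle-behavior channel alone (0.1 or 0.2) does not.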

Referring to Figure 2, Q2L-SPP-MobileViT is the concrete model of the multi-task neural network. A typical end-to-end model extracts features of the driver's face into an image sequence and reasons over it directly with an LSTM network. The present invention instead feeds the in-cab image together with the multi-task network's outputs (eye state, mouth state, pitch angle) and the heart rate value into an improved LSTM network with an added self-attention mechanism, so that the model attends as much as possible to features that reflect the fatigue state. Finally, a fatigue degree is output from the first, second and third fatigue information and the corresponding weights α, β and γ obtained with the adaptive-weight algorithm; it adapts to the driver's style, making the model more flexible and greatly reducing the false-detection rate.
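
The following PyTorch sketch illustrates the kind of architecture described: an LSTM over per-frame features with a self-attention layer on top. All dimensions, the backbone producing the frame embeddings, and the last-step pooling are assumptions; it is not the patented Q2L-SPP-MobileViT pipeline itself:

    import torch
    import torch.nn as nn

    class FatigueTemporalNet(nn.Module):
        def __init__(self, img_feat_dim=128, aux_dim=4, hidden=64):
            super().__init__()
            # aux_dim = 4: eye state, mouth state, pitch angle, heart rate
            self.lstm = nn.LSTM(img_feat_dim + aux_dim, hidden, batch_first=True)
            self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
            self.head = nn.Linear(hidden, 2)  # logits for {fatigue, normal}

        def forward(self, img_feats, aux):
            # img_feats: (B, T, img_feat_dim) frame embeddings from an image backbone
            # aux: (B, T, 4) multi-task outputs concatenated as auxiliary input
            h, _ = self.lstm(torch.cat([img_feats, aux], dim=-1))
            h, _ = self.attn(h, h, h)   # self-attention over the time axis
            return self.head(h[:, -1])  # classify from the last time step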

Merely issuing an audible/visual alarm when driver fatigue is detected protects the driver only to a very limited extent. Suppose the driver is on a highway: when the system detects fatigued driving, an alarm alone cannot restore the driver from the fatigued state, yet stopping to rest at will on a highway is not possible. If the fatigued driver has to drive to the nearest service area to rest, the driver remains fatigued throughout, with slowed reaction times, so the probability of a traffic accident is still high. On top of the audible and visual alarm, this system therefore proposes an expert-database system based on the driver's fatigue state, which takes over the vehicle in dangerous emergency situations. The concrete implementation is as follows:

When the driver is fatigued, the expert-database system is started to track and predict, in real time, the speed of the vehicle, the offset of the host vehicle from the lane centerline, and the distance to the vehicle ahead.

Let T = [x_1, x_2, ..., x_t] be the sequence of offsets of the host vehicle from the lane centerline within a time window W. The corrective-steering mechanism model in the expert database is used; its training data are the vehicle's offset from the lane centerline and the steering-wheel angle when the turn signal is not on, and the model takes a non-increasing offset sequence T within the window W as its bonus function (the larger the host vehicle's offset from the lane centerline over a period W, the higher the assumed probability of a traffic accident). During actual operation, the expert model predicts the driving trajectory over the window W from the current offset sequence: the offset sequence within W is fed into the model, which outputs the decision result, i.e. whether to correct the driving trajectory. When the offset remains greater than the threshold A while the turn signal is off, a human-machine takeover is performed and the steering-wheel angle is controlled to keep the vehicle within the safe band of lane-line offset.
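
As a simple sketch of the takeover trigger at the end of this step (the learned corrective-steering model itself is not reproduced), with the threshold A value and the window contents as assumptions:

    def needs_steering_takeover(offsets, signal_on, threshold_a=0.5):
        # offsets: sequence T = [x_1, ..., x_t] of lane-centerline offsets (m)
        # within the window W; take over only when the turn signal is off and
        # the offset stays above threshold A for the whole window
        return (not signal_on) and all(abs(x) > threshold_a for x in offsets)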

For dangerous closing situations with the vehicle ahead, the distance-keeping mechanism model in the expert database is invoked. Its training data are the host vehicle's speed, the accelerator and brake pedal signals, and the distance between the host vehicle and the vehicle ahead; the model's bonus function is the time headway, where RS denotes the distance between the host vehicle and the vehicle ahead, Vm the host vehicle's speed, and Ts the minimum safe time. The host vehicle's speed and its distance to the vehicle ahead within the window W are fed into the distance-keeping model, which outputs the decision result, i.e. whether to intervene on the accelerator and brake pedal signals; if the result is yes, the corrected accelerator-pedal value is also output. Finally, the result is fed back to the control module, which adjusts the accelerator and brake pedals so that the host vehicle keeps a safe distance from the vehicle ahead.
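
The time-headway test can be sketched as below. Reading the bonus function as the standard time headway THW = RS / Vm is an assumption consistent with the symbols defined above, as is the 2-second default for the minimum safe time Ts:

    def needs_distance_takeover(rs_m, vm_mps, ts_s=2.0):
        # rs_m: distance RS to the vehicle ahead (m); vm_mps: host speed Vm (m/s);
        # ts_s: minimum safe time Ts (s)
        if vm_mps <= 0:
            return False               # stationary host vehicle: no headway conflict
        thw = rs_m / vm_mps            # time headway in seconds
        return thw < ts_s              # True -> intervene on accelerator/brake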

It should be understood that although the steps in the above flow charts are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flow charts may comprise multiple sub-steps or stages that are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential: they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.

In one embodiment, as shown in Figure 5, a driving fatigue detection and early warning control device for human-machine co-driving is provided, comprising an image acquisition device 110, an information acquisition module 120, a control module 130 and a warning device 140, wherein:

the image acquisition device 110 is used to collect in-cab images and road-surface images outside the cab in real time and transmit them to the control module;

the information acquisition module 120 is used to collect the steering-wheel angle information during driving and transmit it to the control module;

the control module 130 is used to obtain facial features and a heart rate value by recognition from the in-cab image, the facial features comprising the eye state, the mouth state and the head pitch angle, and to feed the in-cab image together with the extracted facial features and heart rate value into a preset time-series neural network to obtain first fatigue information; to feed the road-surface image outside the cab into a trained YOLOP model, obtain the lane-line information of the road surface, extract the line-pressing driving time from the lane-line information, and obtain second fatigue information based on the line-pressing driving time; to compute steering-wheel jitter features from the steering-wheel angle information and obtain third fatigue information based on them; and to build a deep learning model trained with an adaptive-weight algorithm, obtain the weight values α, β and γ corresponding to the first, second and third fatigue information, calculate the fatigue degree from the three kinds of fatigue information combined with the weights α, β and γ, and send a warning instruction to the warning device 140 according to the fatigue degree;

the warning device 140 is used to receive the warning instruction, issue a warning according to it, and start the expert-database system.

Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).

The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.

The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be determined by the appended claims.

Claims (9)

1. A driving fatigue detection and early warning control method for human-machine co-driving, characterized by comprising the following steps:
acquiring in-cab images, road-surface images outside the cab and steering-wheel angle information in real time;
obtaining facial features and a heart rate value by recognition from the in-cab image, wherein the facial features comprise the eye state, the mouth state and the head pitch angle, and feeding the in-cab image together with the extracted facial features and heart rate value into a preset time-series neural network for processing to obtain first fatigue information;
feeding the road-surface image outside the cab into a trained YOLOP model, acquiring the lane-line information of the road surface, extracting the line-pressing driving time from the lane-line information, and acquiring second fatigue information based on the line-pressing driving time;
computing steering-wheel jitter features from the steering-wheel angle information, and obtaining third fatigue information based on the steering-wheel jitter features;
constructing a deep learning model and training it with an adaptive-weight algorithm, respectively acquiring the weight values α, β and γ corresponding to the first fatigue information, the second fatigue information and the third fatigue information, and calculating the fatigue degree by combining the first, second and third fatigue information with the corresponding weights α, β and γ; when the fatigue degree exceeds a preset warning value, issuing an audible/visual alarm, tracking and predicting in real time the offset of the host vehicle from the lane centerline and the distance to the vehicle ahead, and acquiring a prediction result; and performing a human-machine takeover when the prediction result exceeds a preset offset threshold or distance threshold.
2. The driving fatigue detection and early warning control method for human-machine co-driving according to claim 1, characterized in that obtaining the facial features comprises the following steps:
preprocessing the in-cab image;
performing face recognition on the preprocessed in-cab image to obtain a face image;
and extracting the coordinates of the feature key points in the face image, feeding them into a preset multi-task classification neural network, and recognizing the eye state, the mouth state and the head pitch angle.
3. The driving fatigue detection and early warning control method for human-machine co-driving according to claim 2, characterized in that the loss function of the multi-task classification neural network is:
Floss_face_total = Floss_leye + Floss_mouse + Floss_angle
where the individual loss terms compare the true values of the eye state, mouth state and head pitch angle with the corresponding network predictions, and c1 = c2 = c3 represents the L2 regularization.
4. The driving fatigue detection and early warning control method for human-machine co-driving according to claim 2, characterized in that obtaining the heart rate value comprises the following steps:
extracting the forehead region based on the feature key-point coordinates;
calculating the mean of the G channel in the forehead region, and performing blind source separation on the G-channel mean signal to obtain the separated G-channel source signal;
and performing a fast Fourier transform on the separated G-channel source signal to obtain the heart rate value.
5. The driving fatigue detection and early warning control method for human-machine co-driving according to claim 1, characterized in that, in the loss function of the time-series neural network, Lface_+ denotes the loss of positive samples, Lface_- denotes the loss of negative samples, p denotes the probability value output by the time-series network, ε+ denotes the positive-sample weight parameter, ε- denotes the negative-sample weight parameter, and γ1, γ2, γ3 respectively denote the loss weights of the eyes, mouth and pitch angle.
6. The driving fatigue detection and early warning control method for human-machine co-driving according to claim 1, characterized in that feeding the road-surface image outside the cab into the trained YOLOP model, obtaining the lane-line information of the road surface, extracting the line-pressing driving time from the lane-line information, and obtaining the second fatigue information based on the line-pressing driving time comprises the following steps:
preprocessing the road-surface image outside the cab;
feeding the preprocessed road-surface image into the trained YOLOP model to detect the lane-line information of the road surface;
and recording, during detection, the time for which the vehicle drives on a lane line with the turn signal on, and determining the second fatigue information based on the relation between the line-pressing time and a preset driving threshold.
7. The driving fatigue detection and early warning control method for human-machine co-driving according to claim 6, characterized in that the preprocessing of the road-surface image outside the cab comprises one or more of image denoising, image dehazing, image rain removal, image deblurring, image color enhancement, image brightness enhancement or image detail enhancement.
8. The driving fatigue detection and early warning control method for human-machine co-driving according to claim 1, characterized in that computing the steering-wheel jitter features from the steering-wheel angle information and obtaining the third fatigue information based on the steering-wheel jitter features comprises the following steps:
filtering and denoising the steering-wheel angle information to obtain denoised steering-wheel angle information;
calculating, from the denoised steering-wheel angle information, the approximate entropy of the steering-wheel angle within a time window, the approximate entropy serving as the steering-wheel jitter feature value;
and determining the third fatigue information based on the relation between the approximate entropy and a preset jitter threshold.
9. A driving fatigue detection and early warning control device for human-machine co-driving, characterized by comprising: an image acquisition device, an information acquisition module, a control module and an early warning device, wherein
the image acquisition device is used for acquiring in-cab images and road surface images outside the cab in real time and transmitting them to the control module;
the information acquisition module is used for acquiring steering wheel angle information in the driving process and transmitting the steering wheel angle information to the control module;
the control module is used for: obtaining facial features and a heart rate value by recognition from the in-cab image, the facial features comprising the eye state, mouth state and head pitch angle, and inputting the in-cab image together with the extracted facial features and heart rate value into a preset time-sequence neural network for processing to obtain first fatigue information; inputting the road surface image outside the cab into a trained YOLOP model to obtain lane line information of the road surface, extracting the line-pressing driving time from the lane line information, and obtaining second fatigue information based on the line-pressing driving time; calculating the steering wheel jitter feature from the steering wheel angle information, and obtaining third fatigue information based on the steering wheel jitter feature; and constructing a deep learning model trained with an adaptive weight algorithm to obtain the weight values alpha, beta and gamma corresponding to the first, second and third fatigue information respectively, calculating the fatigue degree by combining the first, second and third fatigue information with their corresponding weights, and sending an early warning instruction to the early warning device according to the fatigue degree;
the early warning device is used for receiving the early warning instruction, issuing a sound/light warning according to the instruction, tracking and predicting in real time the offset of the host vehicle from the lane center line and the distance to the vehicle ahead to obtain a prediction result, and triggering a human-machine takeover of driving control when the prediction result exceeds a preset offset threshold or distance threshold.
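A minimal sketch of the control module's weighted fusion and warning dispatch; the threshold values and instruction names below are illustrative assumptions, while alpha, beta and gamma stand for the adaptively learned weights from the claim.

```python
def fatigue_degree(f1, f2, f3, alpha, beta, gamma):
    """Fuse the three fatigue cues with their adaptively learned weights."""
    return alpha * f1 + beta * f2 + gamma * f3

def warning_instruction(degree, light=0.5, severe=0.8):
    """Map the fused fatigue degree to an early-warning instruction."""
    if degree >= severe:
        return "SOUND_AND_LIGHT_ALERT"       # severe fatigue: strongest alert
    if degree >= light:
        return "LIGHT_ALERT"                 # mild fatigue: visual cue only
    return "NONE"
```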
CN202211598471.8A 2022-05-20 2022-12-12 A driving fatigue detection and early warning control method and device for human-machine co-driving Active CN116168508B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202221233399 2022-05-20
CN2022212333994 2022-05-20

Publications (2)

Publication Number Publication Date
CN116168508A CN116168508A (en) 2023-05-26
CN116168508B true CN116168508B (en) 2023-10-24

Family

ID=86415348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211598471.8A Active CN116168508B (en) 2022-05-20 2022-12-12 A driving fatigue detection and early warning control method and device for human-machine co-driving

Country Status (1)

Country Link
CN (1) CN116168508B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351648B * 2023-10-08 2024-06-25 Hainan University Driver fatigue monitoring and early warning method and system
CN117612142B * 2023-11-14 2024-07-12 China University of Mining and Technology Head posture and fatigue state detection method based on multi-task joint model


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017052413A (en) * 2015-09-09 2017-03-16 株式会社デンソー Vehicle control device
CN114002669A (en) * 2021-10-21 2022-02-01 北京理工大学重庆创新中心 Road target detection system based on radar and video fusion perception
CN114387576A (en) * 2021-12-09 2022-04-22 杭州电子科技大学信息工程学院 Lane line identification method, system, medium, device and information processing terminal
CN114418895A (en) * 2022-01-25 2022-04-29 合肥英睿系统技术有限公司 Driving assistance method and device, vehicle-mounted device and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719431A (en) * 2016-03-09 2016-06-29 深圳市中天安驰有限责任公司 Fatigue driving detection system
CN106274483A (en) * 2016-11-10 2017-01-04 合肥工业大学 The Vehicular automatic driving switching device differentiated based on driving behavior of diverting one's attention and method
CN206841301U (en) * 2017-05-03 2018-01-05 成都中飞易驾电子有限责任公司 A kind of automobile assistant driving system
CN107512264A (en) * 2017-07-25 2017-12-26 武汉依迅北斗空间技术有限公司 The keeping method and device of a kind of vehicle lane
CN108216229A (en) * 2017-09-08 2018-06-29 北京市商汤科技开发有限公司 The vehicles, road detection and driving control method and device
CN109466552A (en) * 2018-10-26 2019-03-15 中国科学院自动化研究所 Intelligent driving lane keeping method and system
KR102096617B1 (en) * 2018-12-12 2020-04-02 충남대학교산학협력단 Driver drowsiness detection system using image and ppg data based on multimodal deep learning
CN110276273A (en) * 2019-05-30 2019-09-24 福建工程学院 A driver fatigue detection method based on fusion of facial features and image pulse heart rate estimation
CN110796207A (en) * 2019-11-08 2020-02-14 中南大学 Fatigue driving detection method and system
CN112733270A (en) * 2021-01-08 2021-04-30 浙江大学 System and method for predicting vehicle running track and evaluating risk degree of track deviation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Lane departure warning system based on machine vision; Yu Bing; Zhang Weigong; Gong Zongyang; Journal of Southeast University (Natural Science Edition) (05); 928-932 *
Zhang Defeng. TensorFlow Deep Learning: From Beginner to Advanced. China Machine Press, 2020, p. 315. *
Tu Ming, Jin Zhiyong. Deep Learning and Object Detection: Tools, Principles and Algorithms. China Machine Press, 2021, pp. 174-175. *
Qiu Xipeng. Neural Networks and Deep Learning. China Machine Press, 2020, p. 28. *

Also Published As

Publication number Publication date
CN116168508A (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN111428699B (en) Driving fatigue detection method and system combined with pseudo 3D convolutional neural network and attention mechanism
CN110077414B (en) A vehicle driving safety guarantee method and system based on driver state monitoring
CN116168508B (en) A driving fatigue detection and early warning control method and device for human-machine co-driving
CN105261153A (en) Vehicle running monitoring method and device
US20200070848A1 (en) Method and System for Initiating Autonomous Drive of a Vehicle
CN106128032A (en) A kind of fatigue state monitoring and method for early warning and system thereof
Neeraja et al. DL-based somnolence detection for improved driver safety and alertness monitoring
Hasan et al. State-of-the-art analysis of modern drowsiness detection algorithms based on computer vision
Suryawanshi et al. Driver drowsiness detection system based on lbp and haar algorithm
Flores-Monroy et al. Visual-based real time driver drowsiness detection system using CNN
Yarlagadda et al. Driver drowsiness detection using facial parameters and rnns with lstm
Guo et al. Monitoring and detection of driver fatigue from monocular cameras based on Yolo v5
CN115713754A (en) Staged grading intervention method and system based on driver fear recognition
JP2004334786A (en) State detection device and state detection system
Pachouly et al. Driver Drowsiness Detection using Machine Learning
Mašanović et al. Driver monitoring using the in-vehicle camera
Mishra Driver drowsiness detection
US10945651B2 (en) Arousal level determination device
Tarba et al. The driver's attention level
CN114267169A (en) A speed limit control method for preventing fatigue driving based on machine vision
Malathy et al. Extraction of eye features of driver for detecting fatigue using OpenCV
Das et al. Vision-Based Fatigue Detection In Drivers Using Multi-Facial Feature Fusion
Vinodhini et al. A behavioral approach to detect somnolence of CAB drivers using convolutional neural network
Subbaiah et al. Driver drowsiness detection methods: A comprehensive survey
CN116118813B (en) Intelligent monitoring and early warning method and system for running safety of railway locomotive

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant