
CN118587689B - Driver fatigue status detection method and system - Google Patents


Info

Publication number
CN118587689B
CN118587689B (application CN202410761066.6A)
Authority
CN
China
Prior art keywords
driver
rectangular frame
aspect ratio
eye
adaptive threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410761066.6A
Other languages
Chinese (zh)
Other versions
CN118587689A (en)
Inventor
羊杰
裴沛
刘立军
董钊志
庞鑫
吴映潼
王超
朱贵杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wohang Technology Nanjing Co ltd
Original Assignee
Wohang Technology Nanjing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wohang Technology Nanjing Co ltd filed Critical Wohang Technology Nanjing Co ltd
Priority to CN202410761066.6A
Publication of CN118587689A
Application granted
Publication of CN118587689B

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Image Analysis (AREA)

Abstract


The present invention provides a driver fatigue state detection method and system, comprising: collecting a driver's facial image; inputting the driver's facial image into a deep learning fast face detection algorithm to obtain the left eye center coordinates, right eye center coordinates and binocular distance; calculating a left eye circumscribed rectangular frame and a right eye circumscribed rectangular frame from the left eye center coordinates, right eye center coordinates and binocular distance; determining an eye rectangular frame from the left eye and right eye circumscribed rectangular frames; inputting the eye rectangular frame into a pupil detection model to obtain pupil region coordinates; calculating an adaptive threshold from the pupil region coordinates; calculating the mean aspect ratio within a preset time period; and comparing the mean aspect ratio with the adaptive threshold and determining the driver's current state from the comparison result. This solves the problems of inaccurate fatigue detection and weak robustness caused by factors such as individual driver differences or complex lighting in the prior art.

Description

Driver fatigue state detection method and system
Technical Field
The invention relates to the technical field of new energy automobiles, in particular to a method and a system for detecting fatigue states of drivers.
Background
Current driver fatigue detection methods fall into two categories. The first is based on the driver's physiological signals, such as the electrocardiogram and electroencephalogram: various sensors are attached to the driver's body, and fatigue is judged from the acquired physiological data. This approach has high detection accuracy but is expensive, complicated to operate, and requires the driver's cooperation. The second is based on visual technology: the driver's facial information is analysed to judge fatigue driving, for example by judging the degree of eye closure from the eyelid aspect ratio. The conventional eyelid aspect ratio method, however, is strongly affected by individual driver differences and complex illumination, such as small eyes, sudden squinting under strong light, eyeglasses, or the driver turning the head, and therefore easily produces false detections and missed detections.
In summary, existing methods comprise sensor detection and image detection. Sensor detection offers high accuracy but is expensive and complicated to operate; image detection judges the degree of eye closure by analysing the eyelid aspect ratio and is reasonably accurate, but it is strongly affected by individual driver differences and complex illumination and easily produces false and missed detections.
Disclosure of Invention
In view of the above, the present invention aims to provide a method and a system for detecting the fatigue state of a driver, which solve the problems of inaccurate fatigue detection and weak robustness caused by individual driver differences or complex illumination in the prior art.
In a first aspect, an embodiment of the present invention provides a method for detecting a fatigue state of a driver, the method including:
collecting a driver face image, and inputting the driver face image into a multi-scale face detection algorithm of deep learning to obtain face positioning information under different illumination conditions;
Inputting the face image of the driver into a fast face detection algorithm of deep learning to obtain a left eye center coordinate, a right eye center coordinate and a binocular distance;
Calculating a left-eye external rectangular frame and a right-eye external rectangular frame according to the left-eye center coordinate, the right-eye center coordinate and the binocular distance;
Determining an eye rectangular frame according to the left eye external rectangular frame and the right eye external rectangular frame;
Inputting the eye rectangular frame into a pupil detection model to obtain pupil region coordinates;
calculating a self-adaptive threshold according to the pupil region coordinates;
Counting an aspect ratio average value in a preset time period;
and comparing the aspect ratio average value with the adaptive threshold value, and determining the current state of the driver according to a comparison result.
Further, calculating an adaptive threshold according to the pupil region coordinates includes:
Extracting a pupil area from the pupil area coordinates and outputting boundary frame parameters, wherein the boundary frame parameters comprise center coordinates of a boundary frame, the height of the boundary frame and the width of the boundary frame;
calculating the height-width ratio of the pupil according to the height of the boundary frame and the width of the boundary frame;
Taking the aspect ratio of the pupil as statistics;
and calculating the adaptive threshold according to the statistic.
Further, calculating the adaptive threshold from the statistic includes:
when the driver is driving normally, retrieving the first t0 minutes of the initial driving trip and calculating the pupil aspect ratio of each frame;
and averaging the per-frame pupil aspect ratios to obtain the adaptive threshold.
Further, counting the average value of the aspect ratio in a preset time period includes:
Examining the t1 seconds preceding time t;
counting said aspect ratio average over t2 consecutive seconds within said t1 seconds;
Wherein t is greater than t1, and t1 is greater than t2.
Further, comparing the aspect ratio average value with the adaptive threshold value, and determining the current state of the driver according to the comparison result includes:
When the aspect ratio average is less than the adaptive threshold, the driver is in a fatigue state;
the driver is in a normal state when the aspect ratio average is greater than or equal to the adaptive threshold.
Further, before inputting the driver face image to the deep learning rapid face detection algorithm, the method further comprises:
performing image size processing on the driver face image to obtain a processed driver face image;
And carrying out normalization processing on the processed driver face image to obtain a normalized driver face image.
In a second aspect, embodiments of the present invention provide a driver fatigue state detection system, the system comprising:
the acquisition module is used for acquiring a face image of a driver, inputting the face image of the driver into a multi-scale face detection algorithm of deep learning, and obtaining face positioning information under different illumination conditions;
the first input module is used for inputting the face image of the driver into a fast face detection algorithm of deep learning to obtain a left eye center coordinate, a right eye center coordinate and a binocular distance;
the external rectangular frame calculation module is used for calculating a left-eye external rectangular frame and a right-eye external rectangular frame according to the left-eye center coordinate, the right-eye center coordinate and the binocular distance;
The determining module is used for determining an eye rectangular frame according to the left eye external rectangular frame and the right eye external rectangular frame;
the second input module is used for inputting the eye rectangular frame into a pupil detection model to obtain pupil region coordinates;
the self-adaptive threshold calculating module is used for calculating a self-adaptive threshold according to the pupil region coordinates;
the statistics module is used for counting the average value of the aspect ratio in a preset time period;
and the comparison module is used for comparing the aspect ratio mean value with the self-adaptive threshold value and determining the current state of the driver according to a comparison result.
Further, the adaptive threshold calculation module is specifically configured to:
Extracting a pupil area from the pupil area coordinates and outputting boundary frame parameters, wherein the boundary frame parameters comprise center coordinates of a boundary frame, the height of the boundary frame and the width of the boundary frame;
calculating the height-width ratio of the pupil according to the height of the boundary frame and the width of the boundary frame;
Taking the aspect ratio of the pupil as statistics;
and calculating the adaptive threshold according to the statistic.
Further, the adaptive threshold calculation module is specifically configured to:
when the driver is driving normally, retrieving the first t0 minutes of the initial driving trip and calculating the pupil aspect ratio of each frame;
and averaging the per-frame pupil aspect ratios to obtain the adaptive threshold.
Further, the statistics module is specifically configured to:
Examining the t1 seconds preceding time t;
counting said aspect ratio average over t2 consecutive seconds within said t1 seconds;
Wherein t is greater than t1, and t1 is greater than t2.
Further, the comparison module is specifically configured to:
When the aspect ratio average is less than the adaptive threshold, the driver is in a fatigue state;
the driver is in a normal state when the aspect ratio average is greater than or equal to the adaptive threshold.
Further, before inputting the driver face image to the deep-learning rapid face detection algorithm, the system further includes:
the preprocessing module is used for carrying out image size processing on the driver face image to obtain a processed driver face image;
and the normalization processing module is used for carrying out normalization processing on the processed driver face image to obtain a normalized driver face image.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, and a processor, where the memory stores a computer program that can run on the processor, and the processor implements the above-mentioned driver fatigue state detection method when executing the computer program.
In a fourth aspect, embodiments of the present invention provide a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the driver fatigue status detection method.
The embodiment of the invention provides a method and a system for detecting the fatigue state of a driver. The method comprises: collecting a face image of the driver; inputting the face image into a deep learning fast face detection algorithm to obtain the left eye center coordinates, the right eye center coordinates and the binocular distance; calculating a left-eye circumscribed rectangular frame and a right-eye circumscribed rectangular frame from these values; determining an eye rectangular frame from the two circumscribed frames; inputting the eye rectangular frame into a pupil detection model to obtain pupil region coordinates; calculating an adaptive threshold from the pupil region coordinates; counting the aspect ratio mean within a preset time period; and comparing the aspect ratio mean with the adaptive threshold and determining the driver's current state from the comparison result. This solves the problems of inaccurate fatigue detection and weak robustness caused by individual driver differences or complex illumination in the prior art.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario of a driver fatigue state detection system according to a first embodiment of the present invention;
Fig. 2 is a schematic diagram of an application scenario of another driver fatigue status detection system according to the second embodiment of the present invention;
FIG. 3 is a flowchart of a method for detecting fatigue status of a driver according to a third embodiment of the present invention;
fig. 4 is a schematic diagram of a human eye pupil detection bounding box according to a third embodiment of the present invention;
FIG. 5 is a schematic diagram of an aspect ratio average calculation process according to a third embodiment of the present invention;
fig. 6 is a schematic diagram of a driver fatigue state detection system according to a fourth embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
The icons are: 1, infrared camera; 2, vehicle-mounted system equipment.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, road traffic accidents are among the leading causes of harm to global public health, and a large share of them is caused by fatigue driving: research shows that light fatigue is involved in 36.8% of accidents and heavy fatigue in 31.1%. A driver's reaction speed, judgment and attention all drop sharply in a fatigued state. By detecting the driver's fatigue level, fatigue can be discovered in time and the driver reminded, effectively preventing traffic accidents and ensuring driving safety. Fatigue detection also helps drivers keep track of their own physical condition, avoid excessive fatigue, and reduce health risks.
Existing detection methods comprise sensor detection and image detection. Sensor detection has high accuracy but is expensive and complicated to operate; image detection judges the degree of eye closure by analysing the eyelid aspect ratio and is reasonably accurate, but it is strongly affected by individual driver differences and complex illumination, which easily causes false and missed detections. The application therefore aims to provide a driver fatigue state detection method and system that applies to a wide range of drivers and is more robust. The method comprises: collecting the driver's facial image; inputting it into a deep learning fast face detection algorithm to obtain the left eye center coordinates, the right eye center coordinates and the binocular distance; calculating the left-eye and right-eye circumscribed rectangular frames from these values; determining an eye rectangular frame from the two circumscribed frames; inputting the eye rectangular frame into a pupil detection model to obtain pupil region coordinates; calculating an adaptive threshold from the pupil region coordinates; counting the aspect ratio mean within a preset time period; and comparing the aspect ratio mean with the adaptive threshold and determining the driver's current state from the comparison result, thereby solving the problems of inaccurate fatigue detection and poor robustness caused by individual driver differences or complex illumination in the prior art.
In order to facilitate understanding of the present embodiment, the following describes embodiments of the present invention in detail.
Embodiment one:
Fig. 1 is a schematic view of an application scenario of a driver fatigue state detection system according to an embodiment of the present invention.
Referring to fig. 1, an image of the driver's face is captured in real time by an infrared camera mounted on the vehicle A-pillar (driver side); the image size may be 1280 pixels × 800 pixels × 3 channels and can be set as required.
The vehicle-mounted system equipment pre-processes the captured face image of the driver, namely, adjusts the size of the image, and then normalizes the image so as to be suitable for the input of a visual model based on a deep learning network. The visual model based on the deep learning network comprises a fast face detection algorithm and a pupil detection model of deep learning.
Image normalization performs a series of standard transformations that convert an image into a fixed standard form, called the normalized image. For example, normalizing the pixel value range [0, 255] of an image to [0, 1] makes subsequent image processing easier. The specific formula is X' = (X - X_min) / (X_max - X_min),
where X' is the normalized data, X is the raw data, and X_min and X_max are the minimum and maximum values of the raw data, respectively.
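As a concrete illustration, a minimal NumPy/OpenCV sketch of this resize-and-normalize preprocessing step might look as follows; the 640×640 target size is an assumption for illustration, since the model input resolution is not specified here:

```python
import cv2
import numpy as np

def preprocess(face_img: np.ndarray, target_size=(640, 640)) -> np.ndarray:
    """Resize a captured face image and min-max normalize it to [0, 1].

    target_size is an assumed model input resolution, not taken from the text.
    """
    resized = cv2.resize(face_img, target_size)
    x = resized.astype(np.float32)
    x_min, x_max = x.min(), x.max()
    # X' = (X - X_min) / (X_max - X_min), as in the formula above;
    # guard against a constant image where X_max == X_min.
    if x_max > x_min:
        x = (x - x_min) / (x_max - x_min)
    else:
        x = np.zeros_like(x)
    return x
```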
The driver's face image is input into the deep learning fast face detection algorithm to obtain the left eye center coordinates, the right eye center coordinates and the binocular distance. The left-eye and right-eye circumscribed rectangular frames are calculated from these values, and the eye rectangular frame is determined from the two circumscribed frames. The eye rectangular frame is input into the pupil detection model to obtain the pupil region coordinates, the adaptive threshold is calculated from the pupil region coordinates, and the aspect ratio mean within a preset time period is counted. The aspect ratio mean is compared with the adaptive threshold: when the mean is smaller than the threshold the driver is in a fatigue state, and when it is greater than or equal to the threshold the driver is in a normal state. The fatigue state detection system overcomes the shortcomings of inaccurate fatigue detection and weak robustness caused by individual driver differences or complex illumination in the prior art, and has wider applicability and stronger robustness.
Embodiment two:
fig. 2 is a schematic diagram of an application scenario of another driver fatigue status detection system according to the second embodiment of the present invention.
Referring to fig. 2, the trained YOLOv fast face detection algorithm and pupil detection model are deployed into the vehicle-mounted system, and inference is accelerated by the vehicle-mounted NPU.
The infrared camera is installed at the middle of the A-pillar (driver side) and captures the driver's facial image at 1280 pixels × 800 pixels × 3 channels; the image is preprocessed by resizing and then normalized.
The vehicle-mounted system device performs face detection with the YOLOv fast face detection algorithm and outputs the left eye center coordinates (x1, y1), the right eye center coordinates (x2, y2) and the binocular distance L.
Using the binocular distance and the eye center coordinates, the circumscribed rectangular frames of the left and right eyes are calculated and the eye rectangular frame of the region of interest is output; the image is then resized and normalized again to meet the input requirements of the pupil detection model, as sketched below.
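A minimal sketch of how the two circumscribed eye rectangles and their union could be derived from the detected eye centers and the binocular distance L; the box proportions (0.6·L wide, 0.4·L high) are illustrative assumptions, as the exact ratios are not stated:

```python
def eye_boxes(left_center, right_center, L, w_ratio=0.6, h_ratio=0.4):
    """Return (left_box, right_box) as (x_min, y_min, x_max, y_max) tuples.

    w_ratio and h_ratio scale the binocular distance L into a box size;
    both values are illustrative assumptions, not taken from the text.
    """
    boxes = []
    half_w, half_h = w_ratio * L / 2, h_ratio * L / 2
    for (cx, cy) in (left_center, right_center):
        boxes.append((cx - half_w, cy - half_h, cx + half_w, cy + half_h))
    return tuple(boxes)

def merge_boxes(a, b):
    """Union of the two eye rectangles: the eye rectangular frame (ROI)."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))
```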
The eye rectangular box is input into a YOLOv pupil detection model, eye information is detected and analyzed, and pupil region coordinates are output.
Extracting pupil region from the pupil region coordinates, and outputting boundary frame parameters, wherein the boundary frame parameters comprise center coordinates (x, y) of the boundary frame, and the height h and the width w of the boundary frame.
Taking the pupil aspect ratio as the statistic z = h/w, the first t0 minutes of the initial trip under normal driving are retrieved, the pupil aspect ratio of each frame is calculated, and the values are averaged to obtain an adaptive threshold z0 = h0/w0 suited to each individual driver.
To detect whether the driver is fatigued at time t, the t1 seconds preceding t are retrieved. If within those t1 seconds the mean pupil aspect ratio z(t) over t2 consecutive seconds falls below the adaptive threshold z0, the driver is judged to be in a fatigued driving state at that moment; otherwise the driver is judged to be in a normal state.
Embodiment III:
Fig. 3 is a flowchart of a method for detecting fatigue state of a driver according to a third embodiment of the present invention.
Referring to fig. 3, the method includes the steps of:
Step S101, collecting a face image of a driver, wherein the face image of the driver is input into a multi-scale face detection algorithm of deep learning to obtain face positioning information under different illumination conditions;
Here, the image of the driver's face is captured in real time by an infrared camera mounted on the A-pillar (driver side) of the vehicle; the image size may be 1280 pixels × 800 pixels × 3 channels and can be set as required.
The captured driver facial image is preprocessed, i.e., resized, and then normalized for application to the deep learning network-based visual model input. The visual model based on the deep learning network comprises a fast face detection algorithm and a pupil detection model of deep learning.
In addition, the face image of the driver is input into a deep learning multi-scale face detection algorithm so as to improve the face positioning accuracy under different illumination conditions. The algorithm should be able to adapt to face images of different sizes and handle the effect of illumination changes on feature extraction.
Step S102, inputting a face image of a driver into a fast face detection algorithm of deep learning to obtain a left eye center coordinate, a right eye center coordinate and a binocular distance;
Specifically, the preprocessed driver face image is input into the vehicle-mounted system in which the deep-learning-based fast face detection algorithm is deployed. Even vehicle-mounted system devices with limited computing power can run a deep learning model trained on a high-performance server by using the vehicle-mounted NPU inference acceleration technology.
The constructed deep-learning-based fast face detection algorithm detects and analyses the preprocessed driver face image and outputs the left eye center coordinates (x1, y1), the right eye center coordinates (x2, y2) and the binocular distance L. The algorithm adopts a YOLOv model framework, which is trained on face data collected in the actual application scene; the resulting model weights are deployed into the vehicle-mounted system device.
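For illustration only, a hedged sketch of what this detection step could look like with an ultralytics-style YOLO keypoint model; the weights file name, the two-keypoint layout, and the use of the ultralytics package are all assumptions, not details taken from this description:

```python
import numpy as np
from ultralytics import YOLO  # assumes an ultralytics-style YOLO deployment

# "face_eyes.pt" is a hypothetical weights file trained on in-scene face data
model = YOLO("face_eyes.pt")

def detect_eyes(image: np.ndarray):
    """Run face detection and return ((x1, y1), (x2, y2), L):
    left eye center, right eye center, and binocular distance."""
    result = model(image)[0]
    # assumes the model predicts two keypoints per face: left and right eye
    left, right = result.keypoints.xy[0][:2].tolist()
    L = float(np.hypot(right[0] - left[0], right[1] - left[1]))
    return tuple(left), tuple(right), L
```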
Step S103, calculating a left-eye external rectangular frame and a right-eye external rectangular frame according to the left-eye center coordinates, the right-eye center coordinates and the binocular distance;
Step S104, determining an eye rectangular frame according to the left eye external rectangular frame and the right eye external rectangular frame;
Here, after the eye rectangular frame is determined, the image must be resized and normalized to meet the input requirements of the pupil detection model.
Step S105, inputting an eye rectangular frame into a pupil detection model to obtain pupil region coordinates;
specifically, after the ocular rectangular frame is normalized, the ocular rectangular frame is input into the built pupil detection model, so that pupil region coordinates are output.
The pupil detection model adopts YOLOv model frames, pupil data are collected in a scene of practical application for training, model weights are obtained, and the pupil detection model is deployed into vehicle-mounted system equipment.
Step S106, calculating an adaptive threshold according to pupil region coordinates;
Step S107, counting the average value of the aspect ratio in a preset time period;
Step S108, comparing the aspect ratio mean value with the adaptive threshold value, and determining the current state of the driver according to the comparison result.
Specifically, the pupil aspect ratio is used as the statistic z = h/w. The adaptive threshold for judging eye closure is computed from the aspect ratio distribution within a preset time period, and the aspect ratio mean is compared with the adaptive threshold to judge whether the driver is fatigued.
Further, step S106 includes the steps of:
step S201, extracting a pupil area from the pupil area coordinates and outputting boundary frame parameters, wherein the boundary frame parameters comprise the center coordinates of the boundary frame, the height of the boundary frame and the width of the boundary frame;
step S202, calculating the height-width ratio of the pupil according to the height of the boundary frame and the width of the boundary frame;
step S203, taking the aspect ratio of the pupil as statistics;
step S204, calculating the self-adaptive threshold according to the statistic.
Specifically, the pupil region is extracted from the pupil region coordinates and the bounding box parameters are output in xywh format, where xy is the center coordinates (x, y) of the bounding box and wh is its width w and height h; see fig. 4. This format describes the position and size of the bounding box relative to the image, which makes the detected object's position and size intuitive to read.
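A short sketch of holding the xywh bounding-box parameters and deriving from them the pupil aspect ratio statistic z = h/w used in the following steps:

```python
from dataclasses import dataclass

@dataclass
class PupilBox:
    x: float  # center x of the bounding box
    y: float  # center y of the bounding box
    w: float  # bounding-box width
    h: float  # bounding-box height

    @property
    def aspect_ratio(self) -> float:
        """Statistic z = h/w; shrinks toward 0 as the eyelid closes."""
        return self.h / self.w

    def corners(self):
        """Convert xywh to (x_min, y_min, x_max, y_max) corner format."""
        return (self.x - self.w / 2, self.y - self.h / 2,
                self.x + self.w / 2, self.y + self.h / 2)
```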
Further, step S204 includes the steps of:
Step S301, when the driver is driving normally, retrieving the first t0 minutes of the initial driving trip and calculating the pupil aspect ratio of each frame;
Step S302, averaging the per-frame pupil aspect ratios to obtain the adaptive threshold.
Specifically, the pupil aspect ratio is used as the statistic z = h/w, and the adaptive threshold for judging eye closure is computed from the aspect ratio distribution within a preset time period.
When the driver is driving normally, the first t0 minutes of the initial trip are retrieved, the pupil aspect ratio of each frame is calculated, and the values are averaged to obtain an adaptive threshold z0 = h0/w0 suited to each individual driver; see fig. 5.
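A minimal sketch of this calibration step, assuming the per-frame pupil aspect ratios z = h/w from the first t0 minutes of the trip have already been collected into a list:

```python
def calibrate_threshold(frame_ratios):
    """Average the per-frame pupil aspect ratios from the first t0 minutes
    of normal driving to obtain the driver-specific adaptive threshold z0."""
    if not frame_ratios:
        raise ValueError("no calibration frames available")
    return sum(frame_ratios) / len(frame_ratios)

# usage: z0 = calibrate_threshold(ratios_from_first_t0_minutes)
```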
Further, step S107 includes the steps of:
Step S401, examining the t1 seconds preceding time t;
Step S402, counting the aspect ratio average over t2 consecutive seconds within those t1 seconds;
Wherein t is greater than t1, and t1 is greater than t2.
Specifically, to detect whether the driver is fatigued at time t, the t1 seconds preceding t are retrieved; if within those t1 seconds the mean pupil aspect ratio z(t) over t2 consecutive seconds is smaller than the adaptive threshold z0, the driver's state at that moment is judged to be fatigued driving.
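A sketch of this windowed decision, assuming a fixed camera frame rate fps and a chronological list of per-frame aspect ratios covering the t1 seconds before time t; the frame-rate bookkeeping is an illustrative assumption:

```python
def is_fatigued(ratios_last_t1_seconds, z0, t2, fps=30):
    """Return True if, anywhere in the last t1 seconds, the mean pupil
    aspect ratio over a t2-second window falls below the threshold z0."""
    win = int(t2 * fps)             # number of frames in t2 seconds
    r = ratios_last_t1_seconds
    if len(r) < win:
        return False                # not enough history yet
    for i in range(len(r) - win + 1):
        window = r[i:i + win]
        if sum(window) / win < z0:  # mean z(t) below adaptive threshold z0
            return True
    return False
```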
Further, step S108 includes the steps of:
Step S501, when the aspect ratio average value is smaller than the adaptive threshold value, the driver is in a fatigue state;
In step S502, when the aspect ratio average value is greater than or equal to the adaptive threshold value, the driver is in a normal state.
Further, before the driver face image is input to the deep learning rapid face detection algorithm, the method further comprises the following steps:
step S601, performing image size processing on the driver face image to obtain a processed driver face image;
step S602, performing normalization processing on the processed driver face image, to obtain a normalized driver face image. The image normalization process refers to a process of performing a series of standard process transformations on an image to transform the image into a fixed standard form, and the standard image is called a normalized image.
The embodiment of the invention provides a driver fatigue state detection method. The method comprises: collecting a face image of the driver; inputting the face image into a deep learning fast face detection algorithm to obtain the left eye center coordinates, the right eye center coordinates and the binocular distance; calculating the left-eye and right-eye circumscribed rectangular frames from these values; determining an eye rectangular frame from the two circumscribed frames; inputting the eye rectangular frame into a pupil detection model to obtain pupil region coordinates; calculating an adaptive threshold from the pupil region coordinates; counting the aspect ratio mean within a preset time period; and comparing the aspect ratio mean with the adaptive threshold and determining the driver's current state from the comparison result. This solves the problems of inaccurate fatigue detection and weak robustness caused by individual driver differences or complex illumination in the prior art.
In addition, driver fatigue can also be detected by monitoring the driver's mental state while the vehicle is running, so that fatigued driving is discovered and the driver reminded in time. The current running information of the target vehicle is obtained to determine its continuous running condition; when the current running information meets a preset condition, it is judged whether fatigue detection should be performed on the driver. A fatigue detection voice prompt is generated from the current running information and broadcast to the driver of the target vehicle, so that the driver's mental state is checked through voice interaction. The current running information includes, but is not limited to, driving mileage, driving speed and continuous driving duration. The preset condition may be whether the continuous driving mileage in the current trip exceeds a preset mileage threshold, or whether the continuous driving duration exceeds a preset duration threshold.
Embodiment four:
fig. 6 is a schematic diagram of a driver fatigue state detection system according to a fourth embodiment of the present invention.
Referring to fig. 6, the system includes:
The acquisition module is used for acquiring the face image of the driver;
Specifically, an image of the driver's face is captured in real time by an infrared camera mounted on the A-pillar (driver side) of the vehicle; the image size may be 1280 pixels × 800 pixels × 3 channels and can be set as required.
The captured driver facial image is preprocessed, i.e., resized, and then normalized for application to the deep learning network-based visual model input. The visual model based on the deep learning network comprises a fast face detection algorithm and a pupil detection model of deep learning.
The first input module is used for inputting the face image of the driver into a fast face detection algorithm of deep learning to obtain a left eye center coordinate, a right eye center coordinate and a binocular distance;
Specifically, the preprocessed driver face image is input into the vehicle-mounted system in which the deep-learning-based fast face detection algorithm is deployed. Even vehicle-mounted system devices with limited computing power can run a deep learning model trained on a high-performance server by using the vehicle-mounted NPU inference acceleration technology.
The constructed deep-learning-based fast face detection algorithm detects and analyses the preprocessed driver face image and outputs the left eye center coordinates (x1, y1), the right eye center coordinates (x2, y2) and the binocular distance L. The algorithm adopts a YOLOv model framework, which is trained on face data collected in the actual application scene; the resulting model weights are deployed into the vehicle-mounted system device.
The external rectangular frame calculation module is used for calculating a left-eye external rectangular frame and a right-eye external rectangular frame according to the left-eye center coordinate, the right-eye center coordinate and the binocular distance;
The determining module is used for determining an eye rectangular frame according to the left eye external rectangular frame and the right eye external rectangular frame;
The second input module is used for inputting the eye rectangular frame into the pupil detection model to obtain pupil region coordinates;
specifically, after the ocular rectangular frame is normalized, the ocular rectangular frame is input into the built pupil detection model, so that pupil region coordinates are output.
The pupil detection model adopts YOLOv model frames, pupil data are collected in a scene of practical application for training, model weights are obtained, and the pupil detection model is deployed into vehicle-mounted system equipment.
The self-adaptive threshold calculating module is used for calculating a self-adaptive threshold according to the pupil region coordinates;
the statistics module is used for counting the average value of the aspect ratio in a preset time period;
And the comparison module is used for comparing the aspect ratio mean value with the self-adaptive threshold value and determining the current state of the driver according to the comparison result.
Specifically, the pupil aspect ratio is used as the statistic z = h/w. The adaptive threshold for judging eye closure is computed from the aspect ratio distribution within a preset time period, and the aspect ratio mean is compared with the adaptive threshold to judge whether the driver is fatigued.
Further, the adaptive threshold calculation module is specifically configured to:
Extracting a pupil region from the pupil region coordinates and outputting boundary frame parameters, wherein the boundary frame parameters comprise the center coordinates of the boundary frame, the height of the boundary frame and the width of the boundary frame;
Calculating the height-width ratio of the pupil according to the height of the boundary frame and the width of the boundary frame;
Taking the aspect ratio of the pupil as statistics;
an adaptive threshold is calculated from the statistics.
Specifically, the pupil region is extracted from the pupil region coordinates and the bounding box parameters are output in xywh format, where xy is the center coordinates (x, y) of the bounding box and wh is its width w and height h. This format describes the position and size of the bounding box relative to the image, which makes the detected object's position and size intuitive to read.
Further, the adaptive threshold calculation module is specifically configured to:
when the driver is driving normally, retrieving the first t0 minutes of the initial driving trip and calculating the pupil aspect ratio of each frame;
and averaging the per-frame pupil aspect ratios to obtain the adaptive threshold.
Specifically, the pupil aspect ratio is used as the statistic z = h/w, and the adaptive threshold for judging eye closure is computed from the aspect ratio distribution within a preset time period.
When the driver is driving normally, the first t0 minutes of the initial trip are retrieved, the pupil aspect ratio of each frame is calculated, and the values are averaged to obtain an adaptive threshold z0 = h0/w0 suited to each individual driver.
Further, the statistics module is specifically configured to:
Examining the t1 seconds preceding time t;
counting the aspect ratio average over t2 consecutive seconds within those t1 seconds;
Wherein t is greater than t1, and t1 is greater than t2.
Specifically, to detect whether the driver is fatigued at time t, the t1 seconds preceding t are retrieved; if within those t1 seconds the mean pupil aspect ratio z(t) over t2 consecutive seconds is smaller than the adaptive threshold z0, the driver's state at that moment is judged to be fatigued driving.
Further, the comparison module is specifically configured to:
When the aspect ratio average is less than the adaptive threshold, the driver is in a fatigue state;
When the aspect ratio average is greater than or equal to the adaptive threshold, the driver is in a normal state.
Further, before the driver face image is input to the deep learning rapid face detection algorithm, the system further comprises:
A preprocessing module (not shown) for performing image size processing on the driver face image to obtain a processed driver face image;
And the normalization processing module (not shown) is used for performing normalization processing on the processed driver face image to obtain a normalized driver face image. The image normalization process refers to a process of performing a series of standard process transformations on an image to transform the image into a fixed standard form, and the standard image is called a normalized image.
The embodiment of the invention provides a driver fatigue state detection system, which: collects a face image of the driver; inputs the face image into a deep learning fast face detection algorithm to obtain the left eye center coordinates, the right eye center coordinates and the binocular distance; calculates the left-eye and right-eye circumscribed rectangular frames from these values; determines an eye rectangular frame from the two circumscribed frames; inputs the eye rectangular frame into a pupil detection model to obtain pupil region coordinates; calculates an adaptive threshold from the pupil region coordinates; counts the aspect ratio mean within a preset time period; and compares the aspect ratio mean with the adaptive threshold, determining the driver's current state from the comparison result. This solves the problems of inaccurate fatigue detection and weak robustness caused by individual driver differences or complex illumination in the prior art.
Fifth embodiment:
Fig. 7 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Referring to fig. 7, electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected by a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 7 are exemplary only and not limiting, as the electronic device may have other components and structures as desired.
The processor 102 may be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA) or a programmable logic array (PLA); it may be a central processing unit (CPU) or another form of processing unit with data processing and/or instruction execution capability, or a combination of several of these, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random Access Memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer readable storage medium that can be executed by the processor 102 to implement client functions and/or other desired functions in embodiments of the present invention as described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, mouse, microphone, touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may capture images (e.g., photographs, videos, etc.) desired by the user and store the captured images in the storage device 104 for use by other components.
The embodiment of the invention also provides electronic equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the steps of the driver fatigue state detection method provided by the embodiment are realized when the processor executes the computer program.
The embodiment of the invention also provides a computer readable medium with non-volatile program code executable by a processor, wherein the computer readable medium stores a computer program which executes the steps of the driver fatigue state detection method of the embodiment when being executed by the processor.
The computer program product provided by the embodiment of the present invention includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to perform the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment and will not be described herein.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected, mechanically connected, electrically connected, directly connected, indirectly connected via an intermediate medium, or in communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a U disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, an optical disk, or other various media capable of storing program codes.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It should be noted that the foregoing embodiments are merely illustrative of the present invention and not restrictive, and the scope of the invention is not limited to them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications, variations or substitutions of some of the technical features described therein may still be made without departing from the spirit and scope of the technical solutions of the embodiments of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A driver fatigue state detection method, characterized by comprising:
collecting a driver face image, and inputting the driver face image into a multi-scale face detection algorithm of deep learning to obtain face positioning information under different illumination conditions;
Inputting the face image of the driver into a fast face detection algorithm of deep learning to obtain a left eye center coordinate, a right eye center coordinate and a binocular distance;
Calculating a left-eye external rectangular frame and a right-eye external rectangular frame according to the left-eye center coordinate, the right-eye center coordinate and the binocular distance;
Determining an eye rectangular frame according to the left eye external rectangular frame and the right eye external rectangular frame;
Inputting the eye rectangular frame into a pupil detection model to obtain pupil region coordinates;
calculating a self-adaptive threshold according to the pupil region coordinates;
Counting an aspect ratio average value in a preset time period;
Comparing the aspect ratio mean value with the self-adaptive threshold value, and determining the current state of the driver according to a comparison result, wherein a YOLOv model framework is adopted in the rapid face detection algorithm;
counting an aspect ratio average value within a preset time period, comprising:
Examining the t1 seconds preceding time t;
counting said aspect ratio average over t2 consecutive seconds within said t1 seconds;
Wherein t is greater than t1, and t1 is greater than t2.
2. The driver fatigue state detection method according to claim 1, wherein calculating an adaptive threshold from the pupil region coordinates includes:
Extracting a pupil area from the pupil area coordinates and outputting boundary frame parameters, wherein the boundary frame parameters comprise center coordinates of a boundary frame, the height of the boundary frame and the width of the boundary frame;
calculating the height-width ratio of the pupil according to the height of the boundary frame and the width of the boundary frame;
Taking the aspect ratio of the pupil as statistics;
and calculating the adaptive threshold according to the statistic.
3. The driver fatigue state detection method according to claim 2, wherein calculating the adaptive threshold from the statistic includes:
When the driver normally runs, calling the first t0 minutes of the initial driving stroke, and calculating the aspect ratio of pupils of each frame;
And carrying out averaging on the aspect ratio of the pupils of each frame to obtain the self-adaptive threshold value.
4. The driver fatigue state detection method according to claim 1, wherein comparing the aspect ratio average with the adaptive threshold and determining the current state of the driver according to the comparison result comprises:
determining that the driver is in a fatigue state when the aspect ratio average is less than the adaptive threshold; and
determining that the driver is in a normal state when the aspect ratio average is greater than or equal to the adaptive threshold.
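The sketch below ties the statistics step of claim 1 to the decision rule of claim 4: keep a rolling buffer covering consecutive t2 seconds, average it, and compare the mean against the adaptive threshold (with t > t1 > t2). The deque-based rolling window and the chosen t2 and fps values are implementation assumptions, not claimed details.

```python
from collections import deque

class FatigueJudge:
    """Windowed aspect-ratio mean compared against the adaptive threshold."""

    def __init__(self, threshold: float, t2_seconds: float = 3.0, fps: int = 30):
        self.threshold = threshold
        # Rolling buffer holding the most recent t2 seconds of per-frame ratios.
        self.window = deque(maxlen=int(t2_seconds * fps))

    def update(self, pupil_aspect_ratio: float) -> str:
        self.window.append(pupil_aspect_ratio)
        mean = sum(self.window) / len(self.window)
        return "fatigued" if mean < self.threshold else "normal"
```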
5. The driver fatigue state detection method according to claim 1, wherein before inputting the driver face image into the deep-learning rapid face detection algorithm, the method further comprises:
performing image size processing on the driver face image to obtain a processed driver face image; and
performing normalization processing on the processed driver face image to obtain a normalized driver face image.
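A sketch of the preprocessing in claim 5: resize the driver face image to the detector's input resolution, then normalize pixel values. The 640x640 input size and the scaling to [0, 1] follow common YOLO-style conventions and are assumptions here; the claim specifies neither.

```python
import cv2
import numpy as np

def preprocess(face_img: np.ndarray, size=(640, 640)) -> np.ndarray:
    resized = cv2.resize(face_img, size)        # image size processing
    return resized.astype(np.float32) / 255.0   # normalization to [0, 1]
```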
6. A driver fatigue state detection system, the system comprising:
an acquisition module, configured to collect a driver face image and input the driver face image into a deep-learning multi-scale face detection algorithm to obtain face positioning information under different illumination conditions;
a first input module, configured to input the driver face image into a deep-learning rapid face detection algorithm to obtain a left-eye center coordinate, a right-eye center coordinate and a binocular distance;
a circumscribed rectangular frame calculation module, configured to calculate a left-eye circumscribed rectangular frame and a right-eye circumscribed rectangular frame according to the left-eye center coordinate, the right-eye center coordinate and the binocular distance;
a determining module, configured to determine an eye rectangular frame according to the left-eye circumscribed rectangular frame and the right-eye circumscribed rectangular frame;
a second input module, configured to input the eye rectangular frame into a pupil detection model to obtain pupil region coordinates;
an adaptive threshold calculation module, configured to calculate an adaptive threshold according to the pupil region coordinates;
a statistics module, configured to compute an aspect ratio average within a preset time period; and
a comparison module, configured to compare the aspect ratio average with the adaptive threshold and determine the current state of the driver according to the comparison result, wherein the rapid face detection algorithm adopts a YOLOv model framework;
wherein the statistics module is specifically configured to:
at a time t, examine the preceding t1 seconds;
within the preceding t1 seconds, compute the aspect ratio average over consecutive t2 seconds;
wherein t is greater than t1, and t1 is greater than t2.
7. The driver fatigue state detection system according to claim 6, wherein the adaptive threshold calculation module is specifically configured to:
extract a pupil region from the pupil region coordinates and output bounding-box parameters, wherein the bounding-box parameters comprise center coordinates of the bounding box, a height of the bounding box and a width of the bounding box;
calculate a height-to-width aspect ratio of the pupil according to the height of the bounding box and the width of the bounding box;
take the aspect ratio of the pupil as a statistic; and
calculate the adaptive threshold according to the statistic.
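For the system claims, one plausible decomposition wires the claimed modules into a per-frame pipeline, as sketched below. Class, method and attribute names are illustrative (the claims specify only module responsibilities), and the pupil_aspect_ratio function and FatigueJudge class refer to the earlier sketches.

```python
class FatigueDetectionSystem:
    """Illustrative wiring of the modules recited in claim 6."""

    def __init__(self, face_detector, eye_locator, pupil_model, judge):
        self.face_detector = face_detector  # acquisition module: multi-scale face detection
        self.eye_locator = eye_locator      # eye centers, binocular distance, rectangles
        self.pupil_model = pupil_model      # second input module: pupil bounding box
        self.judge = judge                  # statistics + comparison modules

    def process_frame(self, frame) -> str:
        face = self.face_detector(frame)          # face positioning information
        eye_frame = self.eye_locator(face)        # eye rectangular frame
        pupil_box = self.pupil_model(eye_frame)   # pupil region coordinates
        ratio = pupil_aspect_ratio(pupil_box)     # statistic from the claim 2 sketch
        return self.judge.update(ratio)           # windowed mean vs. adaptive threshold
```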
8. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, characterized in that the processor, when executing the computer program, implements the driver fatigue state detection method according to any one of claims 1-5.
9. A computer-readable medium having processor-executable non-volatile program code, wherein the program code causes the processor to perform the driver fatigue state detection method according to any one of claims 1-5.
CN202410761066.6A 2024-06-13 2024-06-13 Driver fatigue status detection method and system Active CN118587689B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410761066.6A CN118587689B (en) 2024-06-13 2024-06-13 Driver fatigue status detection method and system

Publications (2)

Publication Number Publication Date
CN118587689A CN118587689A (en) 2024-09-03
CN118587689B true CN118587689B (en) 2025-01-17

Family

ID=92534952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410761066.6A Active CN118587689B (en) 2024-06-13 2024-06-13 Driver fatigue status detection method and system

Country Status (1)

Country Link
CN (1) CN118587689B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119107630A (en) * 2024-10-29 2024-12-10 沃行科技(南京)有限公司 Eye state identification method and device and electronic equipment
CN119206846B (en) * 2024-11-04 2025-03-14 浙江海亮科技有限公司 On-hook detection method, device, storage medium and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840565A (2019-06-04) A blink detection method based on the aspect ratio of eye contour feature points
CN113780125A (en) * 2021-08-30 2021-12-10 武汉理工大学 A driver's multi-feature fusion method and device for fatigue state detection
CN116824558A (en) * 2023-07-10 2023-09-29 湖南大学 Fatigue driving behavior identification method for 3D point cloud image data

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361452B (en) * 2021-06-24 2023-06-20 中国科学技术大学 A real-time detection method and system for driver fatigue driving based on deep learning
KR102520188B1 (en) * 2021-10-29 2023-04-10 전남대학교 산학협력단 Vehicle device for determining a driver's condition using artificial intelligence and control method thereof
CN114359879B (en) * 2021-12-31 2024-11-26 西安航空学院 A driver fatigue detection method based on YOLO neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant