CN114162130B - Driving assistance mode switching method, device, equipment and storage medium - Google Patents
- Publication number
- CN114162130B (application CN202111251279A)
- Authority
- CN
- China
- Prior art keywords
- determining
- driving assistance
- driver
- region
- assistance mode
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/0098—Details of control systems ensuring comfort, safety or stability not otherwise provided for
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
- B60W2040/0818—Inactivity or incapacity of driver
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2540/00—Input parameters relating to occupants
- B60W2540/225—Direction of gaze
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/80—Technologies aiming to reduce greenhouse gasses emissions common to all road transportation technologies
- Y02T10/84—Data processing systems or methods, management, administration
Abstract
The invention belongs to the technical field of vehicle control, and discloses a driving assistance mode switching method, device, equipment and storage medium. The method comprises the following steps: acquiring an environment image around a vehicle and a driver face image; determining a region of interest according to the environment image; determining a driver sight line region according to the driver face image; determining a target driving assistance mode according to the region of interest and the driver sight line region; and switching the current driving assistance mode to the target driving assistance mode. By the above method, the driver sight line region is determined from the driver face image, the region requiring attention is derived from the current surroundings of the vehicle, and whether the driving assistance mode needs to be switched is determined according to the degree of overlap between the two regions.
Description
Technical Field
The present invention relates to the field of vehicle control technologies, and in particular, to a driving assistance mode switching method, device, apparatus, and storage medium.
Background
Current fatigue monitoring methods detect driver states such as whether the driver is making a phone call, whether the driver is smoking, and the number of blinks within a time period. These cues cannot detect the driver's state effectively: some drivers habitually smoke or blink frequently, so the existing fatigue monitoring methods do not suit them. Moreover, a driver may appear normal while the eyes are actually absent, for example staring blankly straight ahead; in such a case the system should recognize that the driver is in a fatigue state.
At the present stage, the driving assistance mode is switched only according to the driver's manual settings. However, the driving assistance mode is strongly related to the driver's state and the environment around the vehicle: when the driver's state is poor, the vehicle environment is high-risk and the driver is unaware of it, the driving assistance mode should be adaptively adjusted to the highest level so as to ensure the safety of the driver and the vehicle to the greatest extent.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a driving assistance mode switching method, which aims to solve the technical problem in the prior art of how to accurately judge the driving state of a driver and switch the driving assistance mode accordingly.
To achieve the above object, the present invention provides a driving assistance mode switching method including the steps of:
acquiring an environment image around a vehicle and a driver face image;
determining a region of interest according to the environmental image;
determining a driver sight line region from the driver face image;
determining a target driving assistance mode according to the region of interest and the driver sight line region;
the current driving assistance mode is switched to the target driving assistance mode.
Optionally, the determining the region of interest according to the environment image includes:
determining an initial global saliency threshold and an initial search radius according to the environment image;
searching the environment image according to the initial global saliency threshold and the initial searching radius to obtain a searching result;
and determining a region of interest according to the search result.
Optionally, searching the environmental image according to the initial global saliency threshold and the initial search radius to obtain a search result, including:
determining a search area in the environment image according to the initial search radius;
comparing the pixel value of each pixel point in the search area with the initial global saliency threshold value to obtain a comparison value;
when the comparison value is in a preset threshold value interval, reducing the initial searching radius according to a preset reduction value, and searching the environment image according to the reduced initial searching radius;
and when the comparison value is equal to a preset threshold value, generating a search result according to the initial search radius corresponding to the comparison value.
Optionally, determining a driver sight line area according to the driver face image includes:
dividing the driver face image into a plurality of face candidate regions;
determining gray values of the respective face candidate regions;
taking a face candidate region corresponding to a gray value larger than a gray value threshold as a pupil candidate region;
determining pupil center characteristics according to the pupil candidate areas;
and determining the driver sight line region according to the pupil center characteristics.
Optionally, the determining the driver sight line area according to the pupil center feature includes:
determining a pupil center feature vector and a gaze direction vector according to the pupil center feature;
determining a target mapping relation of the pupil center feature vector and the gazing direction vector;
and determining the driver sight line region according to the target mapping relation and the pupil center feature vector.
Optionally, the determining the target mapping relationship of the pupil center feature vector and the gaze direction vector includes:
establishing a target loss function of the pupil center feature vector and the gazing direction vector;
differentiating the target loss function to obtain a first-order derivative function;
and determining the target mapping relation of the pupil center feature vector and the gazing direction vector according to the first order derivative function and a preset value.
Optionally, the determining the target driving assistance mode according to the region of interest and the driver sight line region includes:
if the region of interest is equal to the driver sight line region, taking a first driving assistance mode as a target driving assistance mode;
if the region of interest belongs to the driver sight line region, taking a second driving assistance mode as a target driving assistance mode;
and if the region of interest does not belong to the driver sight line region, setting a third driving assistance mode as a target driving assistance mode.
In addition, in order to achieve the above object, the present invention also proposes a driving assistance mode switching apparatus including:
a face acquisition module for acquiring an environment image around the vehicle and a driver face image;
the area determining module is used for determining a region of interest according to the environment image;
a sight line determination module for determining a driver sight line region from the driver face image;
a mode determination module for determining a target driving assistance mode according to the region of interest and the driver sight line region;
and the mode switching module is used for switching the current driving assistance mode into the target driving assistance mode.
In addition, to achieve the above object, the present invention also proposes a driving assistance mode switching apparatus including: a memory, a processor and a driving assistance mode switching program stored on the memory and executable on the processor, the driving assistance mode switching program being configured to implement the steps of the driving assistance mode switching method as described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon a driving assistance mode switching program which, when executed by a processor, implements the steps of the driving assistance mode switching method as described above.
The invention acquires the environment image around the vehicle and the driver face image; determines a region of interest according to the environment image; determines a driver sight line region from the driver face image; determines a target driving assistance mode according to the region of interest and the driver sight line region; and switches the current driving assistance mode to the target driving assistance mode. By this method, the driver sight line region is determined from the driver face image, the region requiring attention is derived from the current surroundings of the vehicle, and whether the driving assistance mode needs to be switched is determined according to the degree of overlap between the two regions.
Drawings
Fig. 1 is a schematic structural diagram of a driving assistance mode switching apparatus of a hardware operation environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a driving assistance mode switching method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a driving assistance mode switching method according to a second embodiment of the present invention;
fig. 4 is a block diagram showing the construction of a first embodiment of the driving assistance mode switching apparatus of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a driving assistance mode switching device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the driving assistance mode switching apparatus may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, a memory 1005. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The Memory 1005 may be a high-speed random access Memory (Random Access Memory, RAM) Memory or a stable nonvolatile Memory (NVM), such as a disk Memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the driving assistance mode switching device, which may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a driving assistance mode switching program may be included in the memory 1005 as one type of storage medium.
In the driving assistance mode switching apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the driving assistance mode switching apparatus of the present invention may be provided in the driving assistance mode switching apparatus, which invokes the driving assistance mode switching program stored in the memory 1005 through the processor 1001 and executes the driving assistance mode switching method provided by the embodiment of the present invention.
An embodiment of the present invention provides a driving assistance mode switching method, and referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of a driving assistance mode switching method according to the present invention.
In this embodiment, the driving assistance mode switching method includes the steps of:
step S10: an image of the environment surrounding the vehicle and an image of the driver's face are acquired.
The execution subject of this embodiment is a vehicle-mounted terminal, which can analyze and calculate based on data acquired by vehicle sensors so as to realize the corresponding functions. In this embodiment, a first camera for capturing the driver's face image in real time is provided above the vehicle cab. A second camera is provided at the front exterior of the vehicle; it captures the environment image around the vehicle that the driver can observe from the cab, and it may be a wide-angle camera.
It should be noted that, after the first camera and the second camera are installed, the two cameras need to be calibrated so that the image content captured by each can be converted into the same world coordinate system. During calibration, the positions of the two cameras are first converted into the same world coordinate system, and the transformation from each camera's image content to the world coordinate system is calculated respectively.
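As a minimal illustration of this calibration step, each camera's extrinsic parameters (a rotation R and translation t) can be used to map a point from that camera's coordinate frame into the shared world coordinate system. The rotation and translation values below are hypothetical, chosen only to show the transformation, and are not from the patent:

```python
import numpy as np

def camera_to_world(point_cam, R, t):
    """Map a 3-D point from a camera's coordinate frame into the
    world frame using that camera's extrinsics (rotation R, translation t)."""
    return R @ np.asarray(point_cam, dtype=float) + t

# Hypothetical extrinsics: the driver-facing camera is rotated 180 degrees
# about the vertical axis and mounted 1.2 m above the world origin.
R_cam1 = np.array([[-1.0, 0.0, 0.0],
                   [ 0.0, 1.0, 0.0],
                   [ 0.0, 0.0, -1.0]])
t_cam1 = np.array([0.0, 1.2, 0.0])

p_world = camera_to_world([0.5, 0.0, 2.0], R_cam1, t_cam1)
```

Once both cameras' image contents are expressed in this common frame, the driver sight line region and the region of interest become directly comparable.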
Step S20: and determining a region of interest according to the environment image.
It should be noted that after the vehicle-mounted terminal acquires the environment image around the vehicle from the second camera, it first identifies the objects requiring attention in the environment image; these may be real objects that can affect driving, such as vehicles, pedestrians, lane lines, and traffic lights. For identification, the environment image can be input into a trained object recognition model.
It will be appreciated that after identification, the objects requiring attention are marked in the environment image, and the continuous region formed by all marked positions is then taken as the region of interest.
Step S30: and determining a driver sight line area according to the driver face image.
The driver sight line area is an area of interest to eyes of the driver when the driver is driving, and in order to more accurately determine the driver sight line area from the face image of the driver, step S30 includes: dividing the driver face image into a plurality of face candidate regions; determining gray values of the respective face candidate regions; taking a face candidate region corresponding to a gray value larger than a gray value threshold as a pupil candidate region; determining pupil center characteristics according to the pupil candidate areas; and determining the vision area of the driver according to the pupil center characteristics.
First, the driver face image is threshold-segmented into a plurality of face candidate regions. The thresholding method may be one of the Otsu method, an adaptive thresholding method, a maximum entropy thresholding method, or an iterative thresholding method. The Otsu (maximum between-class variance) method uses the idea of clustering: it divides the image's gray levels into two parts such that the gray-value difference between the two parts is maximum and the difference within each part is minimum, finding a suitable gray level to split at by computing the variance. Therefore, during binarization the Otsu algorithm can automatically select the threshold. Iterative thresholding first guesses an initial threshold and then refines it through repeated calculations on the image: the thresholding operation is repeated, dividing the image into multiple classes, and the threshold is then improved using the gray levels within each class.
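The Otsu selection described above can be sketched as follows. This is a plain NumPy implementation for illustration; a production system would more likely call an image-processing library such as OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level that maximizes between-class variance
    for an 8-bit image (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum_w = np.cumsum(prob)                   # class-0 weight w0(t)
    cum_m = np.cumsum(prob * np.arange(256))  # class-0 cumulative mean
    mean_g = cum_m[-1]                        # global mean gray level
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_w[t], 1.0 - cum_w[t]
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_m[t] / w0
        mu1 = (mean_g - cum_m[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two clearly separated gray populations: the threshold lands between them.
img = np.concatenate([np.full(500, 30, np.uint8), np.full(500, 220, np.uint8)])
t = otsu_threshold(img.reshape(25, 40))
```

The same function applied to a real face image yields the split used to form the face candidate regions.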
After the driver face image is threshold-segmented into a plurality of face candidate regions, the gray value of each face candidate region is calculated. In this embodiment, analysis of the gray histograms of a large amount of driver face image data shows that the gray value of the driver's pupil region is generally stable above 220, and that such gray values occupy a small portion of the face image, lying near the right end of the histogram's cumulative distribution function. Therefore 220 is selected from the cumulative gray histogram distribution as the gray value threshold, and the face candidate regions whose gray values exceed this threshold are selected as pupil candidate regions.
It can be understood that the pupil center is the position with the minimum cost among all position points in the pupil candidate region, so the relationship between the pupil center position and all position points in the region is:

$$c = \arg\min_{c} \sum_{i=1}^{N} \| x_i - c \|^2 \quad \text{(formula 1)}$$

In formula 1, $x_i$ denotes the pixel positions in the pupil region, $i \in \{1, 2, \ldots, N\}$, and $c$ is the pupil center position. The pupil center feature is obtained from the pupil center position $c$ that minimizes this cost function.
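A minimal sketch of this minimum-cost selection follows. The squared Euclidean distance is used as the assumed cost (the patent does not spell out the cost function): among the candidate pixel positions, pick the one whose total squared distance to all positions in the region is smallest:

```python
import numpy as np

def pupil_center(points):
    """Pick the candidate position c (from the region's own pixels) that
    minimizes sum_i ||x_i - c||^2, in the spirit of formula 1."""
    pts = np.asarray(points, dtype=float)
    # diffs[i, j] = x_i - pts[j]; costs[j] = sum_i ||x_i - pts[j]||^2
    diffs = pts[:, None, :] - pts[None, :, :]
    costs = (diffs ** 2).sum(axis=-1).sum(axis=0)
    return pts[np.argmin(costs)]

# A symmetric cluster of pupil pixels: the middle point has the least cost.
region = [(4, 5), (6, 5), (5, 4), (5, 6), (5, 5)]
center = pupil_center(region)
```

For a roughly circular pupil blob this selects a pixel near the centroid, which then serves as the pupil center feature.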
Further, the determining the driver sight line area according to the pupil center feature includes: determining a pupil center feature vector and a gaze direction vector according to the pupil center feature; determining a target mapping relation of the pupil center feature vector and the gaze direction vector; and determining the driver sight line region according to the target mapping relation and the pupil center feature vector.
It should be noted that the feature regression from the pupil center feature to the driver's gaze angle can be regarded as establishing the target mapping relationship between the image feature space and the gaze direction space; it is therefore necessary to determine this target mapping relationship. Let $X = [x_1, x_2, \ldots, x_n]$ be the pupil center feature vector and $Y = [y_1, y_2, \ldots, y_n]$ the gaze direction vector.
Further, the determining the target mapping relationship of the pupil center feature vector and the gaze direction vector includes: establishing a target loss function of the pupil center feature vector and the gaze direction vector; differentiating the target loss function to obtain a first-order derivative function; and determining the target mapping relationship of the pupil center feature vector and the gaze direction vector according to the first-order derivative function and a preset value. The feature regression method aims to learn, by linear regression, an optimal mapping $\beta'$ (i.e., the target mapping relationship) from $X$ to $Y$ that minimizes the loss function (i.e., the target loss function):

$$E(\beta) = \| Y - \beta^{T} X \|^2 + \lambda \| \beta \|^2 \quad \text{(formula 2)}$$

In formula 2, $E(\beta)$ is the target loss function, $\lambda$ is a regularization parameter, and $\beta$ is an intermediate variable in calculating the target mapping relationship.
After differentiating the target loss function to obtain a first-order derivative function, the first-order derivative function is set equal to the preset value, which is 0, giving:

$$\beta' = (X X^{T} + \lambda I)^{-1} X Y^{T} \quad \text{(formula 3)}$$

In formula 3, $\beta'$ is the target mapping relationship.
The driver sight line region is then obtained from the pupil center feature vector and the target mapping relationship:

$$Y = \beta'^{T} X \quad \text{(formula 4)}$$

In formula 4, $X$ is the pupil center feature vector and $Y$ is the driver sight line region.
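The closed form of formulas 2 to 4 is ordinary ridge regression and can be sketched as follows. Columns of X are pupil center feature vectors and columns of Y the corresponding gaze directions; the sample values are synthetic, used only to show that the closed-form fit recovers a known mapping:

```python
import numpy as np

def fit_gaze_mapping(X, Y, lam=1e-6):
    """beta' = (X X^T + lam I)^(-1) X Y^T, as in formula 3."""
    d = X.shape[0]
    return np.linalg.solve(X @ X.T + lam * np.eye(d), X @ Y.T)

def predict_gaze(beta, X):
    """Y = beta'^T X, as in formula 4."""
    return beta.T @ X

# Synthetic calibration data generated from a known linear mapping.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50))          # 3-D pupil features, 50 samples
B_true = np.array([[1.0, 0.5],
                   [0.0, 2.0],
                   [-1.0, 0.3]])      # true 3x2 mapping
Y = B_true.T @ X                      # 2-D gaze directions

beta = fit_gaze_mapping(X, Y)
Y_hat = predict_gaze(beta, X)
```

The regularization parameter lam plays the role of lambda in formula 2; with noiseless data and a tiny lam, the fitted beta matches the true mapping almost exactly.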
Step S40: and determining a target driving auxiliary mode according to the region to be focused and the driver sight line region.
In a specific implementation, the region of interest is compared with the driver sight line region to obtain the relationship between the two, and the vehicle switches to a different driving assistance mode depending on that relationship.
Further, step S40 includes: if the region of interest is equal to the driver sight line region, taking a first driving assistance mode as a target driving assistance mode; if the region of interest belongs to the driver sight line region, taking a second driving assistance mode as a target driving assistance mode; and if the region of interest does not belong to the driver sight line region, setting a third driving assistance mode as a target driving assistance mode.
It can be understood that the driving assistance mode is divided into a first driving assistance mode, a second driving assistance mode and a third driving assistance mode, corresponding to level I, level II and level III respectively. Level I corresponds to a slow (relaxed) mode, in which the driving assistance thresholds are adjusted down and the braking distances of functions such as emergency braking and adaptive cruise are adjusted to the minimum, that is, the minimum boundary of the safe distance. Level II corresponds to a normal mode, in which the driving assistance thresholds and the braking distances of emergency braking, adaptive cruise and the like are set to medium. Level III corresponds to an emergency mode, in which the driving assistance thresholds are adjusted up and the braking distances are adjusted to the maximum, that is, the maximum boundary of the safe distance. The mode switching logic is:

$$\text{Mode} = \begin{cases} \text{I}, & \text{if } Y = R \\ \text{II}, & \text{if } Y \cap R \neq \varnothing \text{ and } Y \neq R \\ \text{III}, & \text{if } Y \cap R = \varnothing \end{cases} \quad \text{(formula 5)}$$

In formula 5, Mode is the target driving assistance mode, I is the first driving assistance mode, II is the second driving assistance mode, III is the third driving assistance mode, Y is the driver sight line region, R is the region of interest, and "if" denotes the selection condition; for example, when Y = R the target driving assistance mode is the first driving assistance mode.
It can be understood that when the driver sight line region completely coincides with the region of interest, that is, the driver keeps the sight line on the region of interest within a short time, the current driver state is considered good and the driver is paying attention to all objects requiring attention, so the current driving assistance mode is adjusted to the slow mode. When the driver sight line region does not completely coincide with the region of interest but partially overlaps it, the current driver state is considered fairly good and the driver is paying attention to part of the targets requiring attention, so the current driving assistance mode is adjusted to the normal mode. When the driver sight line region has no intersection with the region of interest, that is, the driver's sight line is not on the region of interest, the current driver state is poor and the driver is not paying attention to the objects requiring attention, so the current driving assistance mode is adjusted to the emergency mode.
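This three-way decision reduces to simple set relations between the two regions. A sketch, representing each region as a set of grid cells (a representation assumed here purely for illustration; full coincidence gives level I, any partial overlap level II, no intersection level III):

```python
def target_mode(sight_region, attention_region):
    """Map the overlap between the driver sight line region Y and the
    region of interest R to a driving assistance level."""
    if sight_region == attention_region:
        return "I"    # full overlap: slow (relaxed) mode
    if sight_region & attention_region:
        return "II"   # partial overlap: normal mode
    return "III"      # no intersection: emergency mode

# Regions as sets of (row, col) grid cells.
mode_full = target_mode({(0, 0), (0, 1)}, {(0, 0), (0, 1)})
mode_part = target_mode({(0, 0), (0, 1)}, {(0, 1), (1, 1)})
mode_none = target_mode({(0, 0)}, {(5, 5)})
```

Only the relation between the two sets matters, so the same logic works whether the regions are pixel masks, bounding boxes rasterized to cells, or any other discretization.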
Step S50: the current driving assistance mode is switched to the target driving assistance mode.
If the current driving assistance mode is not the target driving assistance mode, the current mode is switched to the target mode; if it already is the target driving assistance mode, no switching is necessary.
This embodiment acquires an environment image around the vehicle and a driver face image; determines a region of interest according to the environment image; determines a driver sight line region from the driver face image; determines a target driving assistance mode according to the region of interest and the driver sight line region; and switches the current driving assistance mode to the target driving assistance mode. In this way, the driver sight line region is determined from the driver face image, the region requiring attention is derived from the current surroundings of the vehicle, and whether the driving assistance mode needs to be switched is determined according to the degree of overlap between the two regions.
Referring to fig. 3, fig. 3 is a flowchart illustrating a driving assistance mode switching method according to a second embodiment of the present invention.
Based on the above-described first embodiment, the driving assistance mode switching method of the present embodiment includes, at the step S20:
step S21: and determining an initial global significance threshold and an initial search radius according to the environment image.
In a specific implementation, the gray maximum of the environment image is first calculated and used as the initial global saliency threshold. The initial search radius determines the search range of the first search in the environment image; for example, when the initial search radius is 100 pixels, the search range is a circle with a radius of 100 pixels.
Step S22: and searching the environment image according to the initial global significance threshold and the initial searching radius to obtain a searching result.
Further, step S22 includes: determining a search area in the environment image according to the initial search radius; comparing the pixel value of each pixel point in the search area with the initial global saliency threshold value to obtain a comparison value; when the comparison value is in a preset threshold value interval, reducing the initial searching radius according to a preset reduction value, and searching the environment image according to the reduced initial searching radius; and when the comparison value is equal to a preset threshold value, generating a search result according to the initial search radius corresponding to the comparison value.
In this embodiment, the initial search radius is set to 1/2 of the side length of the environment image, and the search area is searched in the environment image according to the initial search radius and the initial global saliency threshold:

k(r, T) = num{P(x, y) ∈ R(r) | P(x, y) ≥ T} / num{P(x, y) ∈ R(r)}   (Formula 6)

In Formula 6, num() counts the pixels in a set, R(r) denotes the search area with search radius r, P(x, y) denotes a pixel point, and k(r, T) is the proportion of pixels in the search area whose pixel values are at or above the global saliency threshold T, i.e., the comparison value. The value range of k(r, T) is 0 to 1. When k(r, T) = 1, every pixel value in the search area is at or above the global saliency threshold. When 0 < k(r, T) < 1, not all pixel values in the search area are at or above the threshold, so the search area still contains a certain proportion of non-salient pixels and is not yet the required region. In that case, the initial search radius is reduced by the preset reduction value and the environment image is searched again with the reduced radius, until k(r, T) approaches 1; the area found at that point is taken as the part the driver should pay attention to.
Step S23: and determining a region of interest according to the search result.
It should be noted that the search result contains the final search radius; the area within that radius is the region of interest.
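The search-and-shrink procedure of steps S22-S23 can be sketched as follows (a minimal illustration assuming a NumPy saliency map and a circular search area centred on the image; the function name, the centring choice, and the shrink step are assumptions):

```python
import numpy as np

def saliency_search(saliency: np.ndarray, threshold: float,
                    init_radius: float, step: float = 1.0,
                    tol: float = 1e-3) -> float:
    """Shrink the circular search area until k(r, T), the fraction of
    pixels at or above the global saliency threshold T, approaches 1."""
    h, w = saliency.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2, xx - w / 2)   # distance from image centre
    r = init_radius
    while r > step:
        in_region = dist <= r                              # pixels of R(r)
        k = float(np.mean(saliency[in_region] >= threshold))  # Formula 6
        if k >= 1.0 - tol:
            return r           # search result: the final search radius
        r -= step              # reduce the radius by the preset value
    return r
```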
In this embodiment, an initial global saliency threshold and an initial search radius are determined according to the environment image; the environment image is searched according to the initial global saliency threshold and the initial search radius to obtain a search result; and the region of interest is determined according to the search result. In this way, the region the driver should pay attention to is obtained by a threshold-based search of the environment image, so that the part of the environment requiring attention can be identified.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium stores a driving assistance mode switching program, and the driving assistance mode switching program realizes the steps of the driving assistance mode switching method when being executed by a processor.
Because the storage medium adopts all the technical solutions of all the embodiments above, it has at least all the beneficial effects brought by those technical solutions, which are not repeated here.
Referring to fig. 4, fig. 4 is a block diagram showing the structure of a first embodiment of the driving assistance mode switching apparatus of the present invention.
As shown in fig. 4, the driving assistance mode switching apparatus provided by the embodiment of the present invention includes:
the face acquisition module 10 is used for acquiring an environment image around the vehicle and a driver face image.
The area determining module 20 is configured to determine an area of interest according to the environmental image.
A gaze determination module 30 for determining a driver gaze area from said driver facial image.
A mode determination module 40 for determining a target driving assistance mode from the region of interest and the driver's gaze region.
The mode switching module 50 is used for switching the current driving assistance mode to the target driving assistance mode.
In an embodiment, the area determining module 20 is further configured to determine an initial global saliency threshold and an initial search radius according to the environmental image; searching the environment image according to the initial global saliency threshold and the initial searching radius to obtain a searching result; and determining a region of interest according to the search result.
In an embodiment, the area determining module 20 is further configured to determine a search area in the environmental image according to the initial search radius; comparing the pixel value of each pixel point in the search area with the initial global saliency threshold value to obtain a comparison value; when the comparison value is in a preset threshold value interval, reducing the initial searching radius according to a preset reduction value, and searching the environment image according to the reduced initial searching radius; and when the comparison value is equal to a preset threshold value, generating a search result according to the initial search radius corresponding to the comparison value.
In an embodiment, the gaze determination module 30 is further configured to segment the driver face image into a plurality of face candidate regions; determine the gray value of each face candidate region; take a face candidate region whose gray value is larger than a gray value threshold as a pupil candidate region; determine pupil center characteristics according to the pupil candidate regions; and determine the driver sight line area according to the pupil center characteristics.
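This segmentation step can be illustrated as follows (the block-wise splitting and mean-gray comparison are assumptions for illustration; note that on a raw image pupils are dark, so a real pipeline would typically work on an inverted or otherwise preprocessed image):

```python
import numpy as np

def pupil_candidate_regions(face: np.ndarray, block: int, threshold: float):
    """Split the face image into block x block regions and keep those
    whose mean gray value exceeds the threshold as pupil candidates."""
    h, w = face.shape
    candidates = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if face[y:y + block, x:x + block].mean() > threshold:
                candidates.append((y, x))  # top-left corner of the region
    return candidates
```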
In an embodiment, the gaze determination module 30 is further configured to determine a pupil center feature vector and a gazing direction vector according to the pupil center feature; determine a target mapping relation of the pupil center feature vector and the gazing direction vector; and determine the driver sight line area according to the target mapping relation and the pupil center feature vector.
In an embodiment, the gaze determination module 30 is further configured to establish a target loss function for the pupil center feature vector and the gaze direction vector; deriving the target loss function to obtain a first-order derivative function; and determining the target mapping relation of the pupil center feature vector and the gazing direction vector according to the first order derivative function and a preset value.
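The patent only states that the mapping is obtained by setting the first-order derivative of a target loss function against a preset value. Under the common assumption of a quadratic loss with the preset value zero, this reduces to the linear least-squares normal equations, which can be sketched as:

```python
import numpy as np

def fit_gaze_mapping(features: np.ndarray, gazes: np.ndarray) -> np.ndarray:
    # W minimises L(W) = ||features @ W - gazes||^2; setting dL/dW = 0
    # yields the normal equations, solved here via least squares.
    W, *_ = np.linalg.lstsq(features, gazes, rcond=None)
    return W
```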
In an embodiment, the mode determining module 40 is further configured to set the first driving assistance mode as the target driving assistance mode if the region of interest is equal to the driver's sight line region; if the region of interest belongs to the driver sight line region, taking a second driving assistance mode as a target driving assistance mode; and if the region of interest does not belong to the driver sight line region, setting a third driving assistance mode as a target driving assistance mode.
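The three-way decision above can be illustrated by modelling both regions as sets of pixel coordinates (the set representation and the mode labels are assumptions for illustration; the patent does not fix a representation):

```python
def choose_mode(region_of_interest: set, gaze_region: set) -> str:
    if region_of_interest == gaze_region:
        return "first"   # gaze exactly covers the region of interest
    if region_of_interest <= gaze_region:
        return "second"  # region of interest contained in the gaze region
    return "third"       # region of interest not covered by the gaze
```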
It should be understood that the foregoing is illustrative only and not limiting; in specific applications, a person skilled in the art may configure the invention as needed, which is not limited here.
In the present embodiment, an environment image around the vehicle and a facial image of the driver are acquired; a region of interest is determined according to the environment image; a driver sight line region is determined from the driver facial image; a target driving assistance mode is determined according to the region of interest and the driver sight line region; and the current driving assistance mode is switched to the target driving assistance mode. In this way, the driver's sight line region is determined from the facial image, the region requiring attention is derived from the vehicle's current surroundings, and whether the driving assistance mode needs to be switched is decided according to the overlap between the sight line region and the region requiring attention.
It should be noted that the above working process is merely illustrative and does not limit the scope of the present invention; in practical applications, a person skilled in the art may select part or all of it according to actual needs to achieve the purpose of this embodiment, which is not limited here.
In addition, technical details not described in detail in the present embodiment may refer to the driving assistance mode switching method provided in any embodiment of the present invention, and are not described herein.
Furthermore, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment methods may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods according to the embodiments of the present invention.
The foregoing description covers only the preferred embodiments of the present invention and does not limit its patent scope; any equivalent structure or equivalent process transformation made using the contents of this specification and drawings, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.
Claims (7)
1. A driving assistance mode switching method, characterized by comprising:
acquiring an environment image around a vehicle and a driver face image;
determining a region of interest according to the environmental image;
determining a driver sight line region from the driver face image;
determining a target driving assistance mode according to the region of interest and the driver sight line region;
switching the current driving assistance mode to a target driving assistance mode;
the determining a driver sight line region from the driver face image includes:
dividing the driver face image into a plurality of face candidate regions;
determining gray values of the respective face candidate regions;
taking a face candidate region corresponding to a gray value larger than a gray value threshold as a pupil candidate region;
determining pupil center characteristics according to the pupil candidate areas;
determining a driver sight line area according to the pupil center characteristics;
the determining the driver sight line area according to the pupil center feature comprises the following steps:
determining a pupil center feature vector and a gaze direction vector according to the pupil center feature;
determining a target mapping relation of the pupil center feature vector and the gazing direction vector;
determining a driver sight line region according to the target mapping relation and the pupil center feature vector;
the determining the target mapping relation of the pupil center feature vector and the gazing direction vector comprises the following steps:
establishing a target loss function of the pupil center feature vector and the gazing direction vector;
deriving the target loss function to obtain a first-order derivative function;
and determining the target mapping relation of the pupil center feature vector and the gazing direction vector according to the first order derivative function and a preset value.
2. The method of claim 1, wherein the determining a region of interest from the environmental image comprises:
determining an initial global saliency threshold and an initial search radius according to the environment image;
searching the environment image according to the initial global saliency threshold and the initial searching radius to obtain a searching result;
and determining a region of interest according to the search result.
3. The method of claim 2, wherein searching the environmental image according to the initial global saliency threshold and the initial search radius to obtain a search result comprises:
determining a search area in the environment image according to the initial search radius;
comparing the pixel value of each pixel point in the search area with the initial global saliency threshold value to obtain a comparison value;
when the comparison value is in a preset threshold value interval, reducing the initial searching radius according to a preset reduction value, and searching the environment image according to the reduced initial searching radius;
and when the comparison value is equal to a preset threshold value, generating a search result according to the initial search radius corresponding to the comparison value.
4. A method according to any one of claims 1-3, wherein said determining a target driving assistance pattern from said region of interest and said driver gaze region comprises:
if the region of interest is equal to the driver sight line region, taking a first driving assistance mode as a target driving assistance mode;
if the region of interest belongs to the driver sight line region, taking a second driving assistance mode as a target driving assistance mode;
and if the region of interest does not belong to the driver sight line region, setting a third driving assistance mode as a target driving assistance mode.
5. A driving assistance mode switching device, characterized by comprising:
a face acquisition module for acquiring an environment image around the vehicle and a driver face image;
the area determining module is used for determining a region of interest according to the environment image;
a sight line determination module for determining a driver sight line region from the driver face image;
a mode determination module for determining a target driving assistance mode according to the region of interest and the driver sight line region;
a mode switching module for switching a current driving assistance mode to a target driving assistance mode;
the line-of-sight determination module is further configured to segment the driver face image into a plurality of face candidate regions; determining gray values of the respective face candidate regions; taking a face candidate region corresponding to a gray value larger than a gray value threshold as a pupil candidate region; determining pupil center characteristics according to the pupil candidate areas; determining a driver sight line area according to the pupil center characteristics;
the sight line determining module is further used for determining a pupil center feature vector and a gazing direction vector according to the pupil center feature; determining a target mapping relation of the pupil center feature vector and the gazing direction vector; and determining the driver sight line region according to the target mapping relation and the pupil center feature vector;
the sight line determining module is further used for establishing a target loss function of the pupil center feature vector and the gazing direction vector; deriving the target loss function to obtain a first-order derivative function; and determining the target mapping relation of the pupil center feature vector and the gazing direction vector according to the first order derivative function and a preset value.
6. A driving assistance mode switching apparatus, characterized in that the apparatus comprises: a memory, a processor, and a driving assistance mode switching program stored on the memory and executable on the processor, the driving assistance mode switching program configured to implement the driving assistance mode switching method according to any one of claims 1 to 4.
7. A storage medium having stored thereon a driving assistance mode switching program which, when executed by a processor, implements the driving assistance mode switching method according to any one of claims 1 to 4.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111251279.7A CN114162130B (en) | 2021-10-26 | 2021-10-26 | Driving assistance mode switching method, device, equipment and storage medium |
PCT/CN2022/080961 WO2023071024A1 (en) | 2021-10-26 | 2022-03-15 | Driving assistance mode switching method, apparatus, and device, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114162130A CN114162130A (en) | 2022-03-11 |
CN114162130B true CN114162130B (en) | 2023-06-20 |
Family
ID=80477386
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111251279.7A Active CN114162130B (en) | 2021-10-26 | 2021-10-26 | Driving assistance mode switching method, device, equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114162130B (en) |
WO (1) | WO2023071024A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114162130B (en) * | 2021-10-26 | 2023-06-20 | 东风柳州汽车有限公司 | Driving assistance mode switching method, device, equipment and storage medium |
CN115909254B (en) * | 2022-12-27 | 2024-05-10 | 钧捷智能(深圳)有限公司 | DMS system based on camera original image and image processing method thereof |
CN117197786B (en) * | 2023-11-02 | 2024-02-02 | 安徽蔚来智驾科技有限公司 | Driving behavior detection method, control device and storage medium |
CN117533349A (en) * | 2023-11-30 | 2024-02-09 | 岚图汽车科技有限公司 | Method, device, equipment and readable storage medium for allocating driving rights under human-machine co-driving |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107539318A (en) * | 2016-06-28 | 2018-01-05 | 松下知识产权经营株式会社 | Drive assistance device and driving assistance method |
WO2019029195A1 (en) * | 2017-08-10 | 2019-02-14 | 北京市商汤科技开发有限公司 | Driving state monitoring method and device, driver monitoring system, and vehicle |
CN109492514A (en) * | 2018-08-28 | 2019-03-19 | 初速度(苏州)科技有限公司 | A kind of method and system in one camera acquisition human eye sight direction |
CN109664891A (en) * | 2018-12-27 | 2019-04-23 | 北京七鑫易维信息技术有限公司 | Auxiliary driving method, device, equipment and storage medium |
CN111169483A (en) * | 2018-11-12 | 2020-05-19 | 奇酷互联网络科技(深圳)有限公司 | Driving assisting method, electronic equipment and device with storage function |
CN111931579A (en) * | 2020-07-09 | 2020-11-13 | 上海交通大学 | Automated driving assistance system and method using eye tracking and gesture recognition technology |
DE102020123658A1 (en) * | 2019-09-11 | 2021-03-11 | Mando Corporation | DRIVER ASSISTANCE DEVICE AND PROCEDURE FOR IT |
CN112965502A (en) * | 2020-05-15 | 2021-06-15 | 东风柳州汽车有限公司 | Visual tracking confirmation method, device, equipment and storage medium |
CN113378771A (en) * | 2021-06-28 | 2021-09-10 | 济南大学 | Driver state determination method and device, driver monitoring system and vehicle |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006172215A (en) * | 2004-12-16 | 2006-06-29 | Fuji Photo Film Co Ltd | Driving support system |
CN103770733B (en) * | 2014-01-15 | 2017-01-11 | 中国人民解放军国防科学技术大学 | Method and device for detecting safety driving states of driver |
TWI653170B (en) * | 2017-11-01 | 2019-03-11 | 宏碁股份有限公司 | Driving notification method and driving notification system |
CN114162130B (en) * | 2021-10-26 | 2023-06-20 | 东风柳州汽车有限公司 | Driving assistance mode switching method, device, equipment and storage medium |
Non-Patent Citations (6)
Title |
---|
Research on the correlation between commercial vehicle steering systems and vehicle handling stability; Chang Jian; Long Yulin; Shandong Industrial Technology (Issue 05); full text *
Reconstruction of a three-dimensional gridded temperature field from Argo profiles, SST and SLA data; Li Zhilong; Zuo Juncheng; Ji Qiyan; Luo Fengyun; Zhuang Yuan; Marine Forecasts (Issue 04); full text *
Research on gaze tracking technology based on deep neural networks; Mao Yunfeng; Shen Wenzhong; Teng Tong; Modern Electronics Technique (Issue 16); full text *
Image region-of-interest detection based on adaptive radius search; Zhang Libao; Li Hao; Chinese Journal of Lasers (Issue 07); full text *
Design and implementation of a head-mounted eye tracking system; Gong Delin; Shi Jiadong; Zhang Guangyue; Wang Jianzhong; Technology Innovation and Application (Issue 31); full text *
Gaze point compensation method for gaze tracking systems under head movement; Zhu Bo; Chi Jiannan; Zhang Tianxia; Journal of Highway and Transportation Research and Development (Issue 10); full text *
Also Published As
Publication number | Publication date |
---|---|
WO2023071024A1 (en) | 2023-05-04 |
CN114162130A (en) | 2022-03-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114162130B (en) | Driving assistance mode switching method, device, equipment and storage medium | |
US11511759B2 (en) | Information processing system, information processing device, information processing method, and non-transitory computer readable storage medium storing program | |
CN110789517A (en) | Automatic driving lateral control method, device, equipment and storage medium | |
EP3539054A1 (en) | Neural network image processing apparatus | |
EP2620930A1 (en) | Track estimation device and program | |
CN110678873A (en) | Attention detection method, computer device and computer-readable storage medium based on cascaded neural network | |
CN110789520B (en) | Driving control method and device and electronic equipment | |
CN108090425B (en) | Lane line detection method, device and terminal | |
CN116486386A (en) | Sight line distraction range determination method and device | |
CN117746490A (en) | Sight gaze region determination method, device, vehicle and storage medium | |
CN112950590B (en) | Terrain image suitability analysis method, equipment and readable storage medium | |
US20230267752A1 (en) | Electronic device, information processing apparatus, method for inference, and program for inference | |
CN113942511A (en) | Method, device and equipment for controlling passing of driverless vehicle and storage medium | |
CN113807407A (en) | Target detection model training method, model performance detection method and device | |
CN113536949B (en) | Accident risk level assessment method, device and computer readable storage medium | |
CN118387093B (en) | Obstacle avoidance method and device for vehicle | |
Lin et al. | Design of a lane detection and departure warning system using functional-link-based neuro-fuzzy networks | |
EP4471450A1 (en) | Radar snr distribution return descriptor for use in object detection and ground clutter removal | |
CN115457515A (en) | Method and device for judging head state of driver | |
CN115661773A (en) | Automatic driving method and device for vehicle, vehicle and storage medium | |
Jin | Comparing YOLO Models for Self-Driving Car Object Detection | |
WO2025045468A1 (en) | Driver monitoring method and system | |
CN118298407A (en) | Fatigue detection method, apparatus, computer device, and computer-readable storage medium | |
CN116142206A (en) | Vehicle vision processing and controlling method, device, equipment and automobile | |
CN116069416A (en) | Display control method and device for user interface, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||