CN110928620B - Evaluation method and system for driving distraction caused by automobile HMI design - Google Patents
- Publication number: CN110928620B (application CN201911060382.6A)
- Authority: CN (China)
- Prior art keywords: hmi, driving, data, picture, distraction
- Prior art date: 2019-11-01
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING
  - G06F9/00—Arrangements for program control, e.g. control units > G06F9/06—using stored programs, i.e. using an internal store of processing equipment to receive or retain programs > G06F9/44—Arrangements for executing specific programs > G06F9/451—Execution arrangements for user interfaces
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
  - G06V10/00—Arrangements for image or video recognition or understanding > G06V10/20—Image preprocessing > G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
  - G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data > G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands > G06V40/18—Eye characteristics, e.g. of the iris
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS > Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE > Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
  - Y02T10/00—Road transport of goods or passengers > Y02T10/10—Internal combustion engine [ICE] based vehicles > Y02T10/40—Engine management systems
Abstract
The invention discloses a driving distraction evaluation method for automobile HMI design, comprising the following steps: an HMI visual foothold detection model analyzes the input size of the HMI display device, its position data in the cabin, and the input HMI picture data to obtain the driver attention regions on the HMI picture and marks them; an HMI operation distraction evaluation model outputs the number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping data for the HMI picture. The invention also provides a driving distraction evaluation system for automobile HMI design, comprising an HMI visual foothold detection model, an HMI operation distraction evaluation model, and an HMI operation driving safety influence level classification model. With this technical scheme, the collected data are accurately labeled on the corresponding HMI pictures, and the target detection method can trace each analysis target to a specific design element, guiding designers to fix problems precisely.
Description
Technical Field
The invention relates to a driving distraction evaluation method and system for automobile HMI design, and belongs to the technical field of automobile research, development, and design.
Background
With the continued adoption of intelligent connected-vehicle technology in the automobile industry, more and more vehicle models, with Tesla as a representative example, are equipped with intelligent in-vehicle infotainment systems whose functions keep diversifying: navigation, music playback, air-conditioning control, video, games, audiobooks, and other applications are now carried on these platforms. The driving scene, however, differs from the PC or mobile phone: driving safety is paramount, and the distraction caused by an unreasonable HMI (Human Machine Interface) design is very dangerous to the driver. Yet not all of the OEMs, design companies, and content providers responsible for HMI design and development have the capability to evaluate an HMI design, and it is difficult for them to verify whether a design meets safety standards. Current evaluation is mostly carried out by testing on real roads, where the safety of the tester is hard to guarantee and the cost is high, so the safety test and evaluation stage is often skipped, and the resulting HMI designs carry significant hidden safety risks.

The problems faced by OEMs and application developers that do have HMI design capability are: 1. quantitative evaluation is difficult when it relies on designers' subjective judgment and established safety design rules; 2. only a finished product can be tested under real road conditions, so the risk and trial-and-error cost are very high and such testing is hard to carry out in practice.
Disclosure of Invention
Therefore, the object of the invention is to provide a driving distraction evaluation method and system for automobile HMI design that uses an automated intelligent model to evaluate the safety of newly designed human-computer interfaces in batches, at any time and in any place.
To achieve the above object, the driving distraction evaluation method for automobile HMI design according to the invention comprises the following steps:
the HMI visual foothold detection model analyzes the input size of the HMI display device, its position data in the cabin, and the input HMI picture data to obtain the driver attention regions on the HMI picture and marks them;
the HMI operation distraction evaluation model outputs the number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping data of the HMI picture;
the HMI operation driving safety influence level classification model outputs the distraction influence level of the HMI picture on driving safety.
The steps of constructing the HMI visual foothold detection model and the HMI operation distraction evaluation model comprise:
capturing the driver's eye movement data during driving with an eye tracker and collecting vehicle data with a bench device; analyzing these data to obtain, for each HMI picture, the driver's attention regions, number of gaze points, number of reviews, and gaze duration, together with the corresponding lane offset and speed-keeping data; and marking the attention regions, number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping information on the corresponding HMI picture;
and calling the labeled HMI picture data to train the HMI visual foothold detection model and the HMI operation distraction evaluation model.
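For illustration only, a labeled training record of the kind described above could be represented as a simple data structure; the following minimal Python sketch is one possible encoding and is not part of the claimed invention (all field names are assumptions).

```python
# Hypothetical schema for one labeled HMI training sample; field names are assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LabeledHMISample:
    image_path: str                       # path to the HMI picture
    driving_scene: str                    # e.g. "urban", "highway", "rain", "night"
    attention_regions: List[Tuple[int, int, int, int]] = field(default_factory=list)  # (x, y, w, h) boxes
    gaze_point_count: int = 0             # number of attention regions on the picture
    review_count: int = 0                 # number of glances back to the HMI
    gaze_duration_s: float = 0.0          # total fixation time on the picture, seconds
    lane_offset_m: float = 0.0            # lane deviation recorded by the bench device, metres
    speed_variation_kmh: float = 0.0      # speed-keeping deviation, km/h
    safety_impact_level: int = 0          # distraction influence level assigned by evaluators
```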
The steps of constructing the HMI operation driving safety influence level classification model comprise:
evaluating and labeling the distraction influence level on driving safety of each HMI picture already marked with the attention regions or attention elements, number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping information;
and calling the labeled HMI picture data to train the HMI operation driving safety influence level classification model.
The invention also provides a driving distraction evaluation system for automobile HMI design, which comprises the following components:
the HMI visual foothold detection model, used to analyze the input size of the HMI display device, its position data in the cabin, and the input HMI picture data to obtain the driver attention regions on the HMI picture and mark them;
the HMI operation distraction evaluation model, used to output the number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping information of the attention regions under different driving scenes.
The driving distraction evaluation system for automobile HMI design further comprises an HMI operation driving safety influence level classification model, used to output the distraction influence level on driving safety of the HMI pictures under different driving scenes.
The driving distraction evaluation system for automobile HMI design further comprises a model training unit.
The driving distraction evaluation system for automobile HMI design further comprises a training data acquisition unit.
The training data acquisition unit comprises a virtual field for automobile driving simulation. The virtual field comprises a real vehicle fitted with at least an HMI display device, the information of which includes the size of the HMI display interface and its position in the cabin; and a bench device for collecting vehicle data including at least lane offset and speed-keeping information during driving, the bench device including an eye tracker for capturing the driver's eye movement data during driving and a surround-screen device for displaying the virtual driving scene.
The input HMI picture is an HMI picture marked with the target position.
With the above technical scheme, the driving distraction evaluation method and system for automobile HMI design have the following beneficial effects:
1. by constructing simulated road conditions and driving scenes and having the driver perform the simulation test in a real vehicle, safety is higher, the collected results are closer to the real situation, and the validity of the data is ensured;
2. the eye tracker and the bench device collect actual data and can capture the details of the test, so the results are more objective, avoiding the subjective judgment that comes from relying only on questionnaires, manual records, or interviews;
3. the collected objective and valid data are used for machine learning modeling to form an evaluation system, so a design interface fed into the evaluation system yields an objective and scientific evaluation result, avoiding the subjectivity of manual evaluation;
4. the invention accurately labels the collected data on the HMI pictures, and the target detection method can trace the analysis target to a specific design element, thereby guiding designers to fix problems precisely.
Drawings
FIG. 1 is a flow chart of a system model construction in the present invention.
Fig. 2 is a flow chart of the acquisition of training data in the present invention.
Fig. 3 is a flowchart of a driving distraction evaluation method of the present invention for an automotive HMI design.
FIG. 4 is a diagram showing an example of the output result of the system according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and the detailed description.
As shown in fig. 1, the driving distraction evaluation method for automobile HMI design according to the invention first builds a virtual field for automobile driving simulation, collects data on how users operate the HMI in different simulated driving scenes, and trains the intelligent analysis models of the system using the collected and labeled picture data as training data.
As shown in fig. 2, to acquire training data, driving-scene simulation is first performed on the screen of the virtual field according to the experimental requirements. The scene simulation constructs various driving scenes with 3D modeling technology, and the driving simulation environment is built by using surround screens and projection, connecting a real vehicle on site, and adjusting the bench device.
The virtual field for automobile driving simulation comprises an HMI display device, an eye tracker for capturing the driver's eye movement data during driving, a surround-screen device for displaying the virtual driving scene, and a bench device for collecting vehicle data, the bench device collecting at least lane offset and speed-keeping information during driving. The information of the HMI display device comprises its size and its position in the cabin. Drivers are invited into the virtual field, and the HMI interfaces corresponding to different in-vehicle functions are tested against different driving scenes during driving.
During acquisition, the eye tracker captures the driver's eye movement data, from which the attention regions, number of gaze points, number of reviews, and gaze duration for each HMI picture are obtained by analysis, and the HMI picture is labeled accordingly: for example, the attention regions are marked on the HMI picture, and the number of gaze points, number of reviews, and gaze duration are attached to it. The bench device collects vehicle data, from which the size of the HMI display device, its position in the cabin, the lane offset, and the speed-keeping data are obtained.
Each attention region contains either key information the driver wants to focus on or information browsed while searching. The number of gaze points is the number of attention regions on each HMI picture; a large number suggests that the driver likely had difficulty finding the key information and had to scan multiple regions, or that the HMI picture is too cluttered and repeatedly drew the driver's attention. The number of reviews is how many times the driver looked back at each HMI picture: while driving, the driver mainly watches the road ahead, and for safety a glance at the HMI can only rest there briefly before returning to the road, so if the information on the HMI picture is complex or the interface design is unreasonable, the driver cannot obtain the key information in one glance and has to review the HMI many times. The gaze duration is how long the driver's attention rests on each HMI picture; the longer it is, the more likely the driver has difficulty obtaining the key information in a short time. The lane offset data are the parameters of the vehicle's deviation from the lane. The speed-keeping data are the variation of the driving speed: during the test the driver tries to drive at a set speed, and distraction causes that speed to change. Based on these data, the influence of each HMI picture on the driver's behavior can be evaluated manually, the distraction influence on driving safety can be graded, and the resulting distraction influence level is added as a label to the corresponding HMI picture.
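As a purely illustrative sketch of how such indicators could be derived from raw eye-tracker output (the fixation record format, field names, and counting rules below are assumptions, not the patented procedure):

```python
# Hypothetical derivation of gaze metrics from eye-tracker fixations; field names are assumptions.
from typing import Dict, List, Tuple

def box_contains(box: Tuple[int, int, int, int], x: float, y: float) -> bool:
    bx, by, bw, bh = box
    return bx <= x <= bx + bw and by <= y <= by + bh

def gaze_metrics(fixations: List[Dict], attention_regions: List[Tuple[int, int, int, int]]):
    """fixations: time-ordered records like {'x': px, 'y': px, 'duration': s, 'on_hmi': bool}."""
    # Number of gaze points: attention regions that received at least one fixation.
    gaze_points = sum(
        1 for region in attention_regions
        if any(f["on_hmi"] and box_contains(region, f["x"], f["y"]) for f in fixations)
    )
    # Gaze duration: total fixation time spent on the HMI picture.
    gaze_duration = sum(f["duration"] for f in fixations if f["on_hmi"])
    # Number of reviews: each return of the gaze to the HMI after it has left counts once.
    glances, previously_on_hmi = 0, False
    for f in fixations:
        if f["on_hmi"] and not previously_on_hmi:
            glances += 1
        previously_on_hmi = f["on_hmi"]
    reviews = max(glances - 1, 0)  # the first glance is not a review
    return gaze_points, reviews, gaze_duration
```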
Once these data are obtained, they can be used as training data to build the intelligent analysis models. After the models are built, a newly designed HMI picture fed into them automatically yields its attention regions, number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping information under different driving scenes, and finally the distraction influence level that the HMI picture, by drawing the driver's attention, may impose on driving safety, guiding practitioners to design and optimize the vehicle HMI for its application and driving scene.
The intelligent analysis models comprise the HMI visual foothold detection model, the HMI operation distraction evaluation model, and the HMI operation driving safety influence level classification model.
The HMI visual foothold detection model is mainly used to detect the driver attention regions on each interface of a brand-new HMI design. The model is trained and validated with the collected HMI picture data labeled with attention regions. Common target detection methods include Fast R-CNN and the YOLO v1-v3 models, which output the detected target regions and mark them with bounding boxes.
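For illustration, a detector of this family could be set up as in the following sketch; it uses torchvision's Faster R-CNN as a stand-in for the Fast R-CNN / YOLO variants named above, and the image size, class count, and score threshold are assumptions.

```python
# Illustrative sketch only: adapting a torchvision Faster R-CNN to detect driver attention
# regions on HMI pictures. The replaced head must be fine-tuned on the labeled HMI data.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_attention_region_detector(num_classes: int = 2):
    # num_classes = background + "attention region"; weights="DEFAULT" needs torchvision >= 0.13
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

detector = build_attention_region_detector()
detector.eval()
with torch.no_grad():
    hmi_image = torch.rand(3, 720, 1280)        # stand-in for a normalized HMI screenshot
    prediction = detector([hmi_image])[0]       # dict with 'boxes', 'labels', 'scores'
    attention_boxes = prediction["boxes"][prediction["scores"] > 0.5]
```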
The HMI operation distraction evaluation model outputs, for the attention regions, the number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping condition. Its training data come from the collected HMI pictures labeled with attention regions, number of gaze points, number of reviews, gaze duration, lane offset, and speed keeping. A plain CNN model can be used for simple classification prediction, and the Fast R-CNN model is chained to the CNN classification model after target detection.
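A minimal sketch of such a CNN head is shown below; the architecture, the extra mask channel used to encode the detected attention regions, and the input resolution are assumptions rather than the patented model.

```python
# Hypothetical CNN that regresses the five distraction indicators from an HMI picture
# plus a mask channel marking its detected attention regions.
import torch
import torch.nn as nn

class DistractionIndicatorNet(nn.Module):
    """Outputs: gaze points, reviews, gaze duration, lane offset, speed-keeping variation."""
    def __init__(self, num_outputs: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 3 RGB + 1 mask channel
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_outputs)

    def forward(self, image_and_mask: torch.Tensor) -> torch.Tensor:
        x = self.features(image_and_mask)
        return self.head(x.flatten(1))

net = DistractionIndicatorNet()
dummy = torch.rand(1, 4, 224, 224)   # HMI screenshot plus attention-region mask, batch of 1
indicators = net(dummy)              # tensor of shape (1, 5)
```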
With the HMI visual foothold detection model and the HMI operation distraction evaluation model, once the HMI picture, the size of the HMI display device, and its position data in the cabin are input, the system automatically outputs the driver's attention regions, number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping information under different driving scenes. Outputting this information alone, however, still requires a person to judge its influence on driving safety, so the result is not yet automatic and intuitive enough. These outputs can therefore be used as intermediate process data for further analysis.
The obtained data, comprising the driving scene type, HMI picture, attention regions, number of gaze points, number of reviews, gaze duration, and driving safety distraction influence level, are then used as input to train the HMI operation driving safety influence level classification model with a machine learning algorithm. Traditional machine learning algorithms such as decision trees and support vector machines can be used, or a deep learning algorithm such as a CNN can be used directly. The HMI operation driving safety influence level classification model outputs and labels the distraction influence level on driving safety caused by the driver operation behavior corresponding to the HMI pictures under different driving scenes.
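As a hedged example of this classification step, the sketch below trains a scikit-learn decision tree (a support vector machine would be a drop-in alternative) on placeholder data; the feature layout and level scale are assumptions.

```python
# Illustrative training of the safety influence level classifier on placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
# from sklearn.svm import SVC  # a support vector machine would be a drop-in alternative

# Each row: [scene id, gaze points, review count, gaze duration, lane offset, speed variation]
X = np.random.rand(200, 6)             # placeholder features; real rows come from the labeled data
y = np.random.randint(0, 4, size=200)  # placeholder distraction influence levels 0-3

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```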
After the models are established, as shown in fig. 3, the newly designed HMI picture, the size of the HMI display device, and its position in the cabin are input into the HMI visual foothold detection model, which automatically detects and marks the interface design areas or elements the user focuses on. The HMI picture marked with the driver attention regions is then input, together with the different driving scenes, into the HMI operation distraction evaluation model, which outputs the number of gaze points, number of reviews, gaze duration, lane offset, and speed keeping under each scene. The factors corresponding to the obtained attention regions are then input into the HMI operation driving safety influence level classification model, which yields the degree to which the human-machine interface design affects driving safety under different scenes, guiding the optimization of the automobile HMI user interface and effectively reducing the driving distraction caused by the HMI design.
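The three-stage flow can be summarized in the following sketch; the function names and signatures are illustrative, and the three model objects are assumed to be already trained.

```python
def evaluate_hmi_design(hmi_image, display_size, cabin_position, driving_scene,
                        foothold_detector, distraction_model, safety_classifier):
    """Chains the three trained models described above; all names are illustrative."""
    # 1. Detect and mark the driver attention regions on the new HMI picture.
    attention_regions = foothold_detector(hmi_image, display_size, cabin_position)
    # 2. Predict the distraction indicators for the marked picture in this driving scene.
    indicators = distraction_model(hmi_image, attention_regions, driving_scene)
    # 3. Classify the distraction influence level on driving safety.
    influence_level = safety_classifier(driving_scene, indicators)
    return attention_regions, indicators, influence_level
```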
The HMI picture input into the HMI visual foothold detection model is an HMI picture marked with target position information. Depending on its purpose or the user's needs, an HMI picture may be designed with many such "target positions", and experiments and data collection are then carried out on these target positions to assess the rationality of the design. For example, the button positions on an interface, or the road names, congestion information, and intersection information on a navigation interface, are all target positions the driver may focus on when operating the HMI while driving, so the designer can design these target positions in a more targeted way. The input HMI picture is therefore marked with the target position information of the current HMI operation. During driving, the eye tracker may capture many attention regions; these may overlap the operation's target positions, or they may fall on other elements outside the target positions and thus indicate distraction, and the captured information together provides training data for the HMI visual foothold detection model.
In the example output of fig. 4, the HMI picture is shown on the left. The pre-designed target position a is marked with an oval frame, and the detected attention regions A to G are marked with boxes. Attention region G overlaps the target position, while regions A, B, C, D, E, and F do not belong to the target position but drew the driver's attention during actual driving. Analysis shows that regions B and C are road names near the navigation route, regions D and E lie on the line from the navigation start point to the end point, and G is mileage information, all of which the driver needs to acquire; region F outlines a function key, and if the driver does not actually press that key after focusing on it, the key may be distracting the driver. The right side shows, for each driving scene, the analysis of the number of gaze points, number of reviews, gaze duration, and the degree to which they distract attention from driving safety. This analysis result can guide designers to optimize the design scheme and reduce the HMI's distraction of driving attention.
The invention builds virtual scenes manually and uses real-vehicle tests to collect HMI usage data under different driving scenes: the eye tracker collects the driver's eye movement data, from which the number of gaze points, number of reviews, and gaze duration on the HMI are obtained, and the bench device collects vehicle data such as lane offset and speed keeping; these data are then used for subsequent evaluation and model training.
The eye tracker captures the hot-spot areas on the HMI, helping to identify and label the interface elements the user focuses on, so that machine modeling can be done at the level of individual elements (instead of the whole HMI); the number of gaze points, number of reviews, and gaze duration can then effectively guide designers to optimize a specific design element.
The bench device collects vehicle data, and the lane offset and speed-keeping data are used as driving safety indicators that reflect the degree of distraction during driving.
The trained models are separated by driving scene, so functional designs can be selected and adjusted according to the demands of each scene; for example, some functions may be disabled or unnecessary in rain or at night, in which case the evaluation result for that scene can be set aside for the moment and the design problems in other scenes optimized first.
The number of gaze points, number of reviews, and gaze duration of the interface elements the driver focuses on, together with the lane offset and speed-keeping change caused by the operation, are used as intermediate output results; after reclassification with a machine learning classification model, an evaluation of each interface element's degree of influence on driving safety is obtained, which directly helps designers make scientific design decisions and avoids being misled by guesswork.
It is apparent that the above examples are given by way of illustration only and do not limit the embodiments. Other variations or modifications based on the above description will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to exhaust all embodiments here, and such obvious variations or modifications remain within the scope of the invention.
Claims (4)
1. A driving distraction evaluation method for automobile HMI design, characterized by comprising the following steps:
inputting the newly designed HMI picture, the size of the HMI display device, and its position information in the cabin into an HMI visual foothold detection model, the HMI visual foothold detection model analyzing the input HMI display device size, the position data in the cabin, and the input HMI picture data to obtain the driver attention regions on the HMI picture and mark them;
inputting the HMI picture marked with the driver attention regions, together with different driving scenes, into an HMI operation distraction evaluation model, the HMI operation distraction evaluation model outputting the number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping data of the HMI picture;
inputting the number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping data corresponding to the obtained HMI attention regions into an HMI operation driving safety influence level classification model, the HMI operation driving safety influence level classification model outputting the distraction influence level of the HMI picture on driving safety under different driving scenes;
wherein the steps of constructing the HMI visual foothold detection model and the HMI operation distraction evaluation model comprise:
capturing the driver's eye movement data during driving with an eye tracker and collecting vehicle data with a bench device; analyzing these data to obtain, for each HMI picture, the driver's attention regions, number of gaze points, number of reviews, and gaze duration, together with the corresponding lane offset and speed-keeping data; and marking the attention regions, number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping information on the corresponding HMI picture;
calling the labeled HMI picture data to train the HMI visual foothold detection model and the HMI operation distraction evaluation model;
and wherein the steps of constructing the HMI operation driving safety influence level classification model comprise:
evaluating and labeling the distraction influence level on driving safety of the HMI picture marked with the attention regions, number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping information;
and calling the labeled HMI picture data to train the HMI operation driving safety influence level classification model.
2. The driving distraction evaluation method for automobile HMI design according to claim 1, wherein: the input HMI picture is an HMI picture marked with the target positions.
3. A driving distraction evaluation system for automobile HMI design, for implementing the driving distraction evaluation method for automobile HMI design according to claim 1 or 2, comprising:
an HMI visual foothold detection model, used to analyze the input size of the HMI display device, its position data in the cabin, and the input HMI picture data to obtain the driver attention regions on the HMI picture and mark them;
an HMI operation distraction evaluation model, used to output the number of gaze points, number of reviews, gaze duration, lane offset, and speed-keeping information of the attention regions under different driving scenes;
an HMI operation driving safety influence level classification model, used to output the distraction influence level on driving safety of the HMI pictures under different driving scenes;
the system further comprising a model training unit and a training data acquisition unit;
the training data acquisition unit comprising a virtual field for automobile driving simulation, the virtual field comprising a real vehicle fitted with at least an HMI display device, the information of which includes the size of the HMI display interface and its position in the cabin; and a bench device for collecting vehicle data including at least lane offset and speed-keeping information during driving, the bench device including an eye tracker for capturing the driver's eye movement data during driving and a surround-screen device for displaying the virtual driving scene.
4. The driving distraction evaluation system for automobile HMI design according to claim 3, wherein: the input HMI picture is an HMI picture marked with the target positions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911060382.6A CN110928620B (en) | 2019-11-01 | 2019-11-01 | Evaluation method and system for driving distraction caused by automobile HMI design |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110928620A CN110928620A (en) | 2020-03-27 |
CN110928620B (en) | 2023-09-01 |
Family
ID=69850056
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911060382.6A (CN110928620B, active) | Evaluation method and system for driving distraction caused by automobile HMI design | 2019-11-01 | 2019-11-01 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110928620B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114333289B (en) * | 2020-09-28 | 2023-12-22 | 沃尔沃汽车公司 | Vehicle starting reminding equipment, system and method |
CN112053610A (en) * | 2020-10-29 | 2020-12-08 | 延安大学 | A VR virtual driving training and test method based on deep learning |
CN113360371B (en) * | 2021-05-17 | 2022-10-25 | 同济大学 | A Machine Learning-Based Vehicle Human-Computer Interaction Evaluation System |
CN114185775B (en) * | 2021-11-25 | 2024-11-05 | 中国汽车技术研究中心有限公司 | Method and system for external observation evaluation of interface elements based on real vehicle-in-the-loop simulation |
CN114852091B (en) * | 2022-04-08 | 2024-10-29 | 长安大学 | Safety evaluation system and method of vehicle HMI system based on real vehicle |
CN117197786B (en) * | 2023-11-02 | 2024-02-02 | 安徽蔚来智驾科技有限公司 | Driving behavior detection method, control device and storage medium |
US20250208000A1 (en) * | 2023-12-22 | 2025-06-26 | Kingfar International Inc. | Method for evaluating human-machine interaction of vehicle, system, edge computing device, and medium |
CN118865280B (en) * | 2024-09-24 | 2024-12-27 | 宁波大学 | Pedestrian crossing attention allocation mechanism evaluation method based on virtual reality simulation |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102693349A (en) * | 2011-03-25 | 2012-09-26 | 北京航空航天大学 | Pilot attention distribution work efficiency evaluation system and method based on airplane cockpit display interface |
CN103714254A (en) * | 2013-12-27 | 2014-04-09 | 北京航空航天大学 | System and method for testing characteristics of influence of layout of airplane cockpit information display interface on situational awareness of pilot |
CN106043311A (en) * | 2016-06-27 | 2016-10-26 | 观致汽车有限公司 | Method and system for judging whether driver is distracted or not |
CN107428244A (en) * | 2015-03-13 | 2017-12-01 | 普罗杰克特雷有限公司 | For making user interface adapt to user's notice and the system and method for riving condition |
CN108367678A (en) * | 2015-12-03 | 2018-08-03 | 宾利汽车有限公司 | Response type man-machine interface |
CN110096328A (en) * | 2019-05-09 | 2019-08-06 | 中国航空工业集团公司洛阳电光设备研究所 | A kind of HUD interface optimization layout adaptive approach and system based on aerial mission |
Also Published As
Publication number | Publication date |
---|---|
CN110928620A (en) | 2020-03-27 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
2023-08-03 | TA01 | Transfer of patent application right | Address after: 1/F, Building B, New City Center, No. 3 Wanhui Road, Zhongbei Town, Xiqing District, Tianjin, 300000. Applicant after: Zhongqi Zhilian Technology Co.,Ltd. Address before: Room 12-17, Block B1, New City Center, No. 3 Wanhui Road, Zhongbei Town, Xiqing District, Tianjin. Applicant before: TIANJIN KADAKE DATA CO.,LTD. |
| GR01 | Patent grant | |