
CN114435356A - Dangerous scene processing method and device - Google Patents


Info

Publication number
CN114435356A
Authority
CN
China
Prior art keywords
scene
driver
threshold
state
dangerous
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011194437.5A
Other languages
Chinese (zh)
Other versions
CN114435356B (en)
Inventor
龚胜波
徐小龙
苗瑞秋
熊健
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202011194437.5A
Publication of CN114435356A
Application granted
Publication of CN114435356B
Legal status: Active
Anticipated expiration

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W 30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W 30/095: Predicting travel path or likelihood of collision
    • B60W 30/0956: Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60W 40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W 40/08: Estimation or calculation of such parameters related to drivers or passengers
    • B60W 2040/0872: Driver physiology
    • B60W 2540/00: Input parameters relating to occupants
    • B60W 2540/221: Physiology, e.g. weight, heartbeat, health or special needs

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract


The present application provides a dangerous scene processing method and apparatus for avoiding both false identification and missed identification of dangerous scenes, thereby improving the safety and reliability of driving. In the method, a first scene is determined to be a non-dangerous scene or a dangerous scene according to the motion trajectory of a first vehicle and the motion trajectory of a first target, where the first scene carries a potential safety hazard that is lower than that of a dangerous scene. When the first scene is a non-dangerous scene, the state of the driver is determined according to the driver's heart rate, and the driver's operation intention is determined to be acceleration, deceleration, or idling according to the acceleration of the first vehicle; if the driver is in a panic state and the operation intention is acceleration or idling, an emergency measure is executed.


Description

Dangerous scene processing method and device
Technical Field
The present application relates to the field of automotive technologies, and in particular, to a dangerous scene processing method and apparatus.
Background
More and more vehicles are equipped with an automatic emergency braking (AEB) system. AEB helps ensure safe driving and plays an important role in the field of driving safety. It works as follows: when AEB recognizes that a collision is about to occur in front of the vehicle, it warns the driver through sound, images, or the like, reminding the driver to take measures to avoid the collision. If the driver does not respond properly to the warning in time and the risk of collision becomes very urgent, AEB may also avoid or mitigate the collision by braking automatically.
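As a rough illustration of the warn-then-brake behavior described above, the following sketch maps a time-to-collision value to an AEB response. The threshold values and the function name are illustrative assumptions, not values taken from this application.

```python
# Illustrative sketch of the warn-then-brake AEB behavior; thresholds
# are assumed values, not taken from this application.

WARN_TTC_S = 2.5    # warn the driver below this time-to-collision (assumed)
BRAKE_TTC_S = 1.0   # brake automatically below this (assumed)

def aeb_action(time_to_collision_s: float) -> str:
    """Return the AEB response for a given time-to-collision in seconds."""
    if time_to_collision_s <= BRAKE_TTC_S:
        return "auto_brake"   # the driver did not avert the danger in time
    if time_to_collision_s <= WARN_TTC_S:
        return "warn"         # audible/visual warning to the driver
    return "none"
```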
At present, AEB identifies dangerous scenes (such as collisions and rear-end collisions) with low accuracy: it may fail to recognize a scene that is in fact dangerous, and therefore fail to respond and protect the driver, which undermines driving safety. How to improve the accuracy with which AEB identifies dangerous scenes is therefore an urgent problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a dangerous scene processing method and apparatus for improving the accuracy with which AEB recognizes dangerous scenes.
In a first aspect, an embodiment of the present application provides a dangerous scene processing method, which may be implemented by a dangerous scene processing apparatus located in an on-board device or a roadside device. The method includes: determining that a first scene is a non-dangerous scene or a dangerous scene according to the motion trajectory of a first vehicle and the motion trajectory of a first target, where the first scene is the scene in which the first vehicle and the first target are located, the first scene carries a potential safety hazard, and the hazard of a dangerous scene is higher than that of the first scene; when the first scene is a non-dangerous scene, determining the state of the driver according to the driver's heart rate, and determining the driver's operation intention to be acceleration, deceleration, or idling according to the acceleration of the first vehicle, where the driver is in a non-panic state if the heart rate is less than a first threshold and in a panic state if the heart rate is greater than or equal to the first threshold, the first threshold being the heart-rate threshold corresponding to the first scene; and executing an emergency measure when the driver is in a panic state and the operation intention is acceleration or idling.
In the above technical solution, a first scene with a potential safety hazard is further analyzed so that it can be determined to be a non-dangerous scene or a dangerous scene. Because the potential safety hazard of a dangerous scene is higher than that of the first scene, false identification of dangerous scenes can be avoided. When the first scene is a non-dangerous scene, emergency measures may still be executed according to the state and operation intention of the driver. For example, even though the first scene is recognized as non-dangerous, if the driver is in a panic state and the operation intention is acceleration or idling, the driver has poor emergency-response ability when facing a scene with a safety hazard and may easily cause an accident through misoperation. Deciding whether to execute emergency measures in combination with the driver's driving style therefore prevents dangerous scenes from being missed, improves the accuracy with which the AEB recognizes dangerous scenes, and ensures the safe driving of the vehicle.
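The first-aspect decision flow can be sketched as follows. The concrete heart-rate threshold, the acceleration dead-band used to distinguish idling, and the assumption that a dangerous scene always triggers a response are illustrative assumptions, not values specified by this application.

```python
# Hedged sketch of the first-aspect decision flow: combine the scene
# classification with driver state (from heart rate) and operation
# intention (from acceleration). All numeric values are assumptions.

FIRST_THRESHOLD_BPM = 110.0  # panic heart-rate threshold (assumed value)
IDLE_ACCEL_BAND = 0.2        # |a| below this (m/s^2) counts as idling (assumed)

def driver_state(heart_rate_bpm: float) -> str:
    return "panic" if heart_rate_bpm >= FIRST_THRESHOLD_BPM else "non_panic"

def operation_intention(acceleration_mps2: float) -> str:
    if acceleration_mps2 > IDLE_ACCEL_BAND:
        return "acceleration"
    if acceleration_mps2 < -IDLE_ACCEL_BAND:
        return "deceleration"
    return "idling"

def should_execute_emergency(scene: str, heart_rate_bpm: float,
                             acceleration_mps2: float) -> bool:
    if scene == "dangerous":
        return True  # assumed: a dangerous scene always triggers a response
    # Non-dangerous scene: act only on panic plus acceleration or idling.
    return (driver_state(heart_rate_bpm) == "panic"
            and operation_intention(acceleration_mps2) in ("acceleration", "idling"))
```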
In one possible design, the non-panic state includes a normal state and a stress state, where the state of the driver is the normal state when the driver's heart rate is less than a second threshold, and the stress state when the heart rate is greater than or equal to the second threshold and less than the first threshold. The second threshold is a heart-rate threshold corresponding to the first scene, and the second threshold is less than the first threshold.
With this design, the first and second thresholds divide the driver's state into a normal state, a stress state, and a panic state. This finer-grained division improves the accuracy of identifying the driver's state and, in turn, the accuracy of identifying dangerous scenes.
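A minimal sketch of this two-threshold, three-state division follows; the concrete bpm values are illustrative assumptions, since the design only fixes the ordering (second threshold less than first threshold).

```python
# Sketch of the two-threshold, three-state driver-state division.
# Default bpm values are assumed for illustration only.

def classify_driver_state(heart_rate_bpm: float,
                          second_threshold: float = 90.0,   # normal/stress boundary (assumed)
                          first_threshold: float = 110.0    # stress/panic boundary (assumed)
                          ) -> str:
    assert second_threshold < first_threshold
    if heart_rate_bpm < second_threshold:
        return "normal"
    if heart_rate_bpm < first_threshold:
        return "stress"
    return "panic"
```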
In one possible design, the method may further include: correcting the first threshold and/or the second threshold according to first information, where the first information includes second information and/or the operation intention of the driver, and the second information indicates that the first scene is a non-dangerous scene or a dangerous scene.

With this design, the thresholds used to determine the driver's state are corrected according to the second information and/or the driver's operation intention. Because drivers with different driving styles may differ in state and emergency-response ability when facing different scenes, correcting the heart-rate thresholds for different scenes according to driving style improves the accuracy of identifying the driver's state.
In one possible design, determining that the first scene is a non-dangerous scene or a dangerous scene according to the motion trajectory of the first vehicle and the motion trajectory of the first target may include: determining, from the two motion trajectories, a first collision time and a first deceleration, where the first collision time is the time required for the first vehicle to come into contact with the first target while traveling at its current speed, and the first deceleration is the minimum deceleration required for the first vehicle to stop moving by the time it would come into contact with the first target; and determining, based on the first collision time and the first deceleration, that the first scene is a non-dangerous scene or a dangerous scene.

With this design, the first collision time and the first deceleration can be computed from the motion trajectory of the first vehicle and the motion trajectory of the first target, and the first scene is then determined to be a non-dangerous scene or a dangerous scene based on these two quantities.
In one possible design, determining that the first scene is a non-dangerous scene or a dangerous scene based on the first collision time and the first deceleration may include: determining that the first scene is a non-dangerous scene if the first collision time is greater than a third threshold and/or the first deceleration is less than a fourth threshold; or determining that the first scene is a dangerous scene if the first collision time is less than or equal to the third threshold and the first deceleration is greater than or equal to the fourth threshold. Here, the third threshold is the minimum collision time required to avoid a collision in the first scene, and the fourth threshold is the maximum deceleration the vehicle supports for avoiding a collision in the first scene.

With this design, when the first collision time is greater than the third threshold, the driver has relatively sufficient time to cope with the first scene, so the first scene is non-dangerous and no emergency operation is needed. When the first deceleration is less than the fourth threshold, the first vehicle can decelerate from its current speed without contacting the first target, so again the first scene is non-dangerous and no emergency operation is needed. When the first collision time is less than or equal to the third threshold and the first deceleration is greater than or equal to the fourth threshold, the remaining time is too short for the driver to react and the required deceleration exceeds what the first vehicle can achieve, so the first scene is dangerous and an emergency operation is needed.
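Under a simplifying straight-line, constant-speed assumption (the application works from full motion trajectories), the two quantities reduce to the standard formulas: first collision time = gap / closing speed, and first deceleration = closing_speed**2 / (2 * gap), the minimum constant deceleration that closes the approach with zero relative speed exactly at the target. The sketch below computes them and applies the third/fourth-threshold test; all numeric thresholds are assumptions.

```python
# Straight-line, constant-speed sketch of the first collision time (TTC),
# the first deceleration, and the third/fourth-threshold scene test.
# Numeric threshold defaults are assumptions for illustration.

def first_collision_time(gap_m: float, closing_speed_mps: float) -> float:
    """Time for the first vehicle to reach the target at the current speeds."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing: no collision on this trajectory
    return gap_m / closing_speed_mps

def first_deceleration(gap_m: float, closing_speed_mps: float) -> float:
    """Minimum constant deceleration that stops the approach exactly at the target."""
    if closing_speed_mps <= 0:
        return 0.0
    return closing_speed_mps ** 2 / (2.0 * gap_m)

def classify_scene(ttc_s: float, decel_mps2: float,
                   third_threshold: float = 2.0,   # min TTC to avoid collision (assumed)
                   fourth_threshold: float = 6.0   # max supported deceleration (assumed)
                   ) -> str:
    if ttc_s > third_threshold or decel_mps2 < fourth_threshold:
        return "non_dangerous"
    return "dangerous"
```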
In one possible design, the method may further include: modifying the third threshold and/or the fourth threshold according to third information, where the third information includes one or more of the following: second information indicating that the first scene is a non-dangerous scene or a dangerous scene; the state of the driver; or the operation intention of the driver.

With this design, because drivers with different driving styles may differ in state and emergency-response ability when facing different scenes, modifying the third threshold and/or the fourth threshold according to the third information improves the accuracy of dangerous scene identification.
In one possible design, in the case where the third information includes the second information, the state of the driver, and the operation intention of the driver, modifying the third threshold according to the third information may include: reducing the third threshold when the first scene is a non-dangerous scene, the state of the driver is a stress state or a panic state, and the operation intention of the driver is deceleration; or increasing the third threshold when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration.

With this design, when the first scene is classified as non-dangerous but the driver is in a stress or panic state and the operation intention is deceleration, the driver has a braking intention and has in effect treated the first scene as dangerous. That the scene was nevertheless not identified as dangerous means the minimum collision time used to judge dangerous scenes is too long; reducing it prevents dangerous scenes from being missed and improves the accuracy of identifying dangerous scenes.

Conversely, when the first scene is classified as dangerous but the driver is in a normal state and the operation intention is idling or acceleration, the driver shows no apparent braking intention, which suggests the first scene may in fact be non-dangerous. That the scene was identified as dangerous means the minimum collision time used to judge dangerous scenes is too short; increasing it prevents dangerous scenes from being falsely identified and improves the accuracy of identifying dangerous scenes.
In one possible design, in the case where the third information includes the second information, the state of the driver, and the operation intention of the driver, modifying the fourth threshold according to the third information may include: reducing the fourth threshold when the first scene is a non-dangerous scene, the state of the driver is a stress state or a panic state, and the operation intention of the driver is deceleration; or increasing the fourth threshold when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration.

With this design, when the first scene is classified as non-dangerous but the driver is in a stress or panic state and the operation intention is deceleration, the driver has a braking intention and has in effect treated the first scene as dangerous. That the scene was nevertheless not identified as dangerous means the maximum deceleration used to judge dangerous scenes is too large; reducing it prevents dangerous scenes from being missed and improves the accuracy of identifying dangerous scenes.

Conversely, when the first scene is classified as dangerous but the driver is in a normal state and the operation intention is idling or acceleration, the driver shows no apparent braking intention, which suggests the first scene may in fact be non-dangerous. That the scene was identified as dangerous means the maximum deceleration used to judge dangerous scenes is too small; increasing it prevents dangerous scenes from being falsely identified and improves the accuracy of identifying dangerous scenes.
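The correction rules for the third and fourth thresholds in the two designs above can be combined into one sketch. The adjustment step size is an assumption; the designs only specify the direction of each change.

```python
# Combined sketch of the third/fourth-threshold correction rules.
# The step size is assumed; only the direction of change comes from
# the designs above.

def adjust_thresholds(scene: str, state: str, intention: str,
                      third_threshold: float, fourth_threshold: float,
                      step: float = 0.1):
    """Return (third_threshold, fourth_threshold) after one correction."""
    missed = (scene == "non_dangerous" and state in ("stress", "panic")
              and intention == "deceleration")
    false_alarm = (scene == "dangerous" and state == "normal"
                   and intention in ("idling", "acceleration"))
    if missed:
        # The driver braked in a scene judged non-dangerous: reduce both.
        third_threshold -= step
        fourth_threshold -= step
    elif false_alarm:
        # A calm, non-braking driver in a scene judged dangerous: increase both.
        third_threshold += step
        fourth_threshold += step
    return third_threshold, fourth_threshold
```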
In a second aspect, an embodiment of the present application provides a dangerous scene processing apparatus, which may be located in an on-board device or a roadside device. The dangerous scene processing apparatus includes a dangerous scene recognition module, a driver state monitoring module, a driver operation perception module, and an execution module. The dangerous scene recognition module is configured to determine that a first scene is a non-dangerous scene or a dangerous scene according to the motion trajectory of the first vehicle and the motion trajectory of the first target, where the first scene is the scene in which the first vehicle and the first target are located, the first scene carries a potential safety hazard, and the hazard of a dangerous scene is higher than that of the first scene. The driver state monitoring module is configured to determine the state of the driver according to the driver's heart rate when the first scene is a non-dangerous scene, where the driver is in a non-panic state if the heart rate is less than a first threshold and in a panic state if the heart rate is greater than or equal to the first threshold, the first threshold being the heart-rate threshold corresponding to the first scene. The driver operation perception module is configured to determine, when the first scene is a non-dangerous scene, that the driver's operation intention is acceleration, deceleration, or idling according to the acceleration of the first vehicle. The execution module is configured to execute an emergency measure when the driver is in a panic state and the operation intention is acceleration or idling.
In one possible design, the non-panic state includes a normal state and a stress state, where the state of the driver is the normal state when the driver's heart rate is less than a second threshold, and the stress state when the heart rate is greater than or equal to the second threshold and less than the first threshold. The second threshold is a heart-rate threshold corresponding to the first scene, and the second threshold is less than the first threshold.
In one possible design, the driver state monitoring module is further configured to correct the first threshold and/or the second threshold according to first information, where the first information includes second information and/or the operation intention of the driver, and the second information indicates that the first scene is a non-dangerous scene or a dangerous scene.
In one possible design, the dangerous scene recognition module is specifically configured to: determine, from the motion trajectory of the first vehicle and the motion trajectory of the first target, a first collision time and a first deceleration, where the first collision time is the time required for the first vehicle to come into contact with the first target while traveling at its current speed, and the first deceleration is the minimum deceleration required for the first vehicle to stop moving by the time it would come into contact with the first target; and determine, based on the first collision time and the first deceleration, that the first scene is a non-dangerous scene or a dangerous scene.

In one possible design, the dangerous scene recognition module is specifically configured to: determine that the first scene is a non-dangerous scene if the first collision time is greater than a third threshold and/or the first deceleration is less than a fourth threshold; or determine that the first scene is a dangerous scene if the first collision time is less than or equal to the third threshold and the first deceleration is greater than or equal to the fourth threshold, where the third threshold is the minimum collision time required to avoid a collision in the first scene, and the fourth threshold is the maximum deceleration the vehicle supports for avoiding a collision in the first scene.

In one possible design, the dangerous scene recognition module is further configured to modify the third threshold and/or the fourth threshold according to third information, where the third information includes one or more of the following: second information indicating that the first scene is a non-dangerous scene or a dangerous scene; the state of the driver; or the operation intention of the driver.
In one possible design, in the case where the third information includes the second information, the state of the driver, and the operation intention of the driver, the dangerous scene recognition module is specifically configured to: reduce the third threshold when the first scene is a non-dangerous scene, the state of the driver is a stress state or a panic state, and the operation intention of the driver is deceleration; or increase the third threshold when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration.

In one possible design, in the case where the third information includes the second information, the state of the driver, and the operation intention of the driver, the dangerous scene recognition module is specifically configured to: reduce the fourth threshold when the first scene is a non-dangerous scene, the state of the driver is a stress state or a panic state, and the operation intention of the driver is deceleration; or increase the fourth threshold when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration.
In a third aspect, an embodiment of the present application provides a dangerous scene processing apparatus that includes a processor configured to implement the method of the first aspect. The dangerous scene processing apparatus may further include a memory for storing program instructions and data. The memory is coupled to the processor, and the processor may invoke and execute the program instructions stored in the memory to implement any of the methods of the first aspect.

In a fourth aspect, the present application further provides a computer-readable storage medium storing a computer program or instructions that, when executed, implement the method of any design example of the first aspect.

In a fifth aspect, an embodiment of the present application further provides a chip system that includes a processor and may further include a memory, configured to implement the method of any design example of the first aspect. The chip system may consist of a chip, or may include a chip and other discrete devices.

In a sixth aspect, the present application further provides a computer program product including instructions that, when run on a computer, cause the computer to perform the method of any design example of the first aspect.

For the advantageous effects of the second to sixth aspects and their implementations, refer to the description of the advantageous effects of the first aspect and its implementations.
Drawings
FIG. 1 is a schematic diagram of a following scenario in an embodiment of the present application;
FIG. 2 is a schematic diagram of a target cut-in scenario in an embodiment of the present application;
FIG. 3 is a schematic diagram of a host vehicle cutting out a scene in an embodiment of the present application;
FIG. 4 is a schematic diagram of an embodiment of the present application illustrating a target traversing a scene;
fig. 5 is a structural diagram of a dangerous scene recognition apparatus according to an embodiment of the present application;
fig. 6 is a flowchart illustrating a method for determining that a first scene is a dangerous scene according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a reference coordinate system in an embodiment of the present application;
FIG. 8 is a schematic illustration of a first vehicle motion profile and a first target motion profile in an embodiment of the present application;
fig. 9 is a schematic flowchart of a method for processing a dangerous scene according to an embodiment of the present application;
fig. 10 is a schematic diagram of another dangerous scene recognition apparatus according to an embodiment of the present application.
Detailed Description
The present application provides a dangerous scene processing method and apparatus aimed at identifying dangerous scenes more accurately, so that the system can respond in a timely manner and driving safety is ensured. The method and the apparatus are based on the same technical conception; because the principles by which they solve the problem are similar, the embodiments of the apparatus and the method may refer to each other, and repeated descriptions are omitted.
Some terms of the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
1) A potentially dangerous scene, which may be referred to as a dangerous scene for short, may refer to a scene in which the first vehicle and the target are located, or a scene that the first vehicle and the target will face, and in which there is a high potential safety hazard: if the driver or the AEB does not respond (or does not respond in time), the first vehicle may collide, which affects its safety. For example, a scene with a potential safety hazard, such as a car-following scene, a target cut-in scene, a host-vehicle cut-out scene, or a target crossing scene, may develop into a dangerous scene; that is, the potential safety hazard of the dangerous scene is higher than that of the car-following scene (or of the target cut-in, host-vehicle cut-out, or target crossing scene).
The car-following scene may refer to a scene in which the first vehicle travels behind another vehicle. As shown in fig. 1, the first vehicle follows vehicle A along the arrow direction. Because vehicle A is ahead of the first vehicle and blocks the view of the road ahead, the driver of the first vehicle cannot perceive the road condition in front in time; when vehicle A suddenly decelerates or brakes, the driver may have no time to respond (e.g., by decelerating or braking), or may mistakenly step on the accelerator under tension, creating a potential safety hazard (such as a rear-end collision) that affects the safety of the first vehicle.
The target cut-in scene may refer to a scene in which another vehicle cuts into the lane in which the first vehicle is located. As shown in fig. 2, the first vehicle travels in lane 1 in the traveling direction, and vehicle A cuts into lane 1 from lane 2 in the direction of the arrow. Because vehicle A accelerates and cuts in at a short distance, the driver of the first vehicle may have no time to respond (e.g., by decelerating or braking), or may mistakenly step on the accelerator under tension, creating a potential safety hazard (such as a collision) that is not conducive to the safe driving of the first vehicle.
The host-vehicle cut-out scene may refer to a scene in which the first vehicle cuts from its current lane into an adjacent lane, that is, a lane-change scene. As shown in fig. 3, the traveling direction of lanes 1 and 2 is opposite to that of lanes 3 and 4, and the first vehicle may cut from lane 2 into lane 1 or lane 3. For example, the first vehicle accelerates and cuts at a short distance, in the direction of the arrow, into lane 1 where vehicle A is located; because the first vehicle changes lanes at a short distance while accelerating, the driver of vehicle A may have no time to respond (e.g., by decelerating or braking), or may mistakenly step on the accelerator under tension, creating a safety hazard (such as a collision) that is not conducive to the safe driving of the first vehicle. For another example, the first vehicle makes a U-turn in the arrow direction into lane 3 where vehicle C is located; because of the short-distance lane change, the driver of vehicle C may have no time to respond (e.g., by decelerating, changing lanes, or braking), or may mistakenly step on the accelerator under tension, likewise creating a safety hazard that is not conducive to the safe driving of the first vehicle.
The object crossing scenario may refer to a scenario in which an object crosses the traveling direction of the first vehicle, where the object may be a pedestrian, an animal, a vehicle, or the like. As shown in fig. 4, the first vehicle drives forward in the traveling direction while a pedestrian crosses its path; when the first vehicle is traveling fast or the pedestrian crosses suddenly, the driver may have no time to respond (such as by decelerating, braking, or changing lanes), or may mistakenly step on the accelerator under tension, so that a safety hazard (such as a collision) occurs and the safety of the first vehicle is affected.
2) A potentially dangerous object, also referred to simply as a target, may refer to an object that is located around the first vehicle and may collide with the first vehicle; such an object may cause a safety hazard that affects the safety of the first vehicle. For example, the target may be the followed vehicle in a following scene, such as the vehicle a in fig. 1; or the target that cuts into the lane where the first vehicle is located in a target cut-in scene, such as the vehicle a in fig. 2; or the target on the lane the first vehicle cuts into in a host vehicle cut-out scene, such as the vehicle a or the vehicle C in fig. 3; or the object that crosses the traveling direction of the first vehicle in an object crossing scene, such as the pedestrian in fig. 4.
The target or potentially dangerous object may be a vehicle, a pedestrian, a building, or the like, where the vehicle may be a motor vehicle (e.g., a car) and/or a non-motor vehicle (e.g., an electric bicycle). The traveling direction of the target may be the same as or different from the traveling direction of the first vehicle; this is not limited in the embodiment of the present application.
Because the target is located around the first vehicle in a dangerous scene, the driver may not sense in time that the first vehicle is, or is about to be, in a dangerous scene, and thus cannot respond in time (such as by braking or avoiding) to the target in scenes such as close-distance rapid following, cut-in, or crossing, causing a potential safety hazard and affecting the safety of the first vehicle. Therefore, accurately identifying dangerous scenes is very important, so that the driver can respond in time or in advance.
3) The lateral direction, which may be referred to simply as the lateral direction, may refer to a direction perpendicular to the first vehicle head.
4) The longitudinal direction, which may be referred to simply as the longitudinal direction, may refer to a direction parallel to the first vehicle head.
In the embodiment of the present application, a vehicle mainly refers to a running automobile, and unless otherwise specified, the driver refers to the driver of the first vehicle.
The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include expressions such as "one or more" unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The embodiments of the present application relate to at least one, including one or more; wherein a plurality means greater than or equal to two. It is to be understood that the terms "first," "second," and the like, in the description of the present application, are used for distinguishing between descriptions and not necessarily for describing a sequential or chronological order, or for indicating or implying a relative importance.
In order to facilitate understanding of the embodiments of the present application, the technical features related to the present application will be described.
The AEB is used to ensure the safe driving of a vehicle and plays an important role in the field of safe driving. At present, the accuracy of the AEB in identifying dangerous scenes (such as collisions and rear-end collisions) is low, which is not conducive to the safe driving of the vehicle. For example, in the case where a dangerous scene is not identified (for example, the driver mistakenly steps on the accelerator in a dangerous scene), the AEB determines that the vehicle is performing normal acceleration behavior and does not respond, so the AEB cannot guarantee the safe driving of the vehicle, and the safety and reliability of the AEB are reduced. For another example, in the case where an upcoming dangerous scene is identified too late, the response time of the AEB is reduced, so that the AEB has no time to warn the driver, the safe driving of the vehicle cannot be guaranteed, and the safety and reliability of the AEB are reduced. For still another example, in the case where a dangerous scene is misidentified (e.g., a driver with an aggressive driving style overtaking at close distance), a danger may not actually occur, but the AEB may frequently warn the driver, which may greatly disturb the driver (e.g., a distracted driver turns off the warnings issued by the AEB) and is not conducive to safe driving.
In view of this, embodiments of the present application provide a method and an apparatus for processing a dangerous scene, so as to avoid misidentifying and missing identification of the dangerous scene, thereby accurately identifying the dangerous scene, and improving safety and reliability of safe driving.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Fig. 5 is a structural diagram of a dangerous scene recognition apparatus according to an embodiment of the present application. The dangerous scene recognition device can be positioned in vehicle-mounted equipment or road side equipment. As shown in fig. 5, the dangerous scene recognition apparatus may include a dangerous scene recognition module 100, a driver state monitoring module 200, a driver operation perception module 300, and an execution module 400.
The in-vehicle device may be a device installed in a vehicle for safe driving; for example, the in-vehicle device includes an AEB. The form of the in-vehicle device is not limited in the embodiment of the present application. The roadside device may be a device installed at the roadside for safe driving, for example in a lane control system; the form of the roadside device is not limited in the embodiment of the present application either.
It should be noted that, in the embodiment of the present application, the division manner and naming of the modules of the dangerous scene recognition apparatus are merely an example, and the embodiment of the present application is not limited thereto. For example, the dangerous scene recognition device may be further divided into a recognition module, a sensing module and an emergency module, where the recognition module may be configured to implement the function implemented by the dangerous scene recognition module 100 in the embodiment of the present application, the sensing module may be configured to implement the functions implemented by the driver state monitoring module 200 and the driver operation sensing module 300 in the embodiment of the present application, and the emergency module may be configured to implement the function implemented by the execution module 400 in the embodiment of the present application. Alternatively, the dangerous scene recognition device may have other module division modes. In addition, the names of the modules are not limited in the embodiment of the present application, and the names of the dangerous scene recognition module 100, the driver state monitoring module 200, the driver operation sensing module 300, the execution module 400, and the like in the embodiment of the present application are only examples.
The dangerous scene recognition module 100 according to the embodiment of the present application may be configured to determine the first target, predict a motion trajectory of the first vehicle and a motion trajectory of the first target, and determine whether a scene where the first vehicle and the first target are located is a dangerous scene or determine whether a scene where the first vehicle and the first target will face is a dangerous scene according to the motion trajectory of the first vehicle and the motion trajectory of the first target.
For convenience of description, a scene in which the first vehicle and the first object are located or a scene to which the first vehicle and the first object will face is hereinafter simply referred to as a first scene.
It should be noted that the first scenario in the embodiment of the present application may refer to a scenario in which a safety hazard exists; for example, the first scenario may be a following scenario, a target cut-in scenario, a host vehicle cut-out scenario, or a target crossing scenario. The first scene may be a dangerous scene, or may be a scene whose potential safety hazard is lower than that of a dangerous scene.
For example, the process of the dangerous scene recognition module 100 determining whether the first scene is a dangerous scene may be as shown in fig. 6. As shown in fig. 6, the process may include:
step S601: the hazard scene recognition module 100 determines a movement trajectory of the first vehicle and a movement trajectory of the first object.
For example, the danger scene recognition module 100 may acquire the traveling data of the first vehicle through one or more of a speed sensor, a steering wheel sensor, or a sensor of an on-board system (e.g., a radar system and/or a vision system) installed in the first vehicle, and predict the movement trajectory of the first vehicle according to the traveling data. The motion trajectory of the first vehicle may include a transverse motion trajectory of the first vehicle and a longitudinal motion trajectory of the first vehicle, which may be denoted as x_e(t) and y_e(t), respectively. The traveling data may include one or more of traveling speed, lane line information, traveling direction, and the like.
For example, the danger scene recognition module 100 may determine the historical movement trajectory of the first target relative to the first vehicle through a radar system and/or a vision system installed on the first vehicle, restore this historical movement trajectory to a reference coordinate system, and predict the movement trajectory of the first target through an algorithm such as particle swarm optimization. The motion trajectory of the first target may include a transverse motion trajectory of the first target and a longitudinal motion trajectory of the first target, which may be denoted as x_o(t) and y_o(t), respectively. The reference coordinate system may refer to the coordinate system used for predicting the movement trajectory of the first vehicle, for example a coordinate system that uses the first vehicle as the origin, a direction perpendicular to the head of the first vehicle as the transverse coordinate axis, and a direction parallel to the head of the first vehicle as the longitudinal coordinate axis, as shown in fig. 7.
Taking the target cut-in scenario as an example, the first target intends to cut from lane 1 to lane 2 where the first vehicle is located, and the first target is vehicle a, as shown in fig. 8. The dangerous scene recognition module 100 may predict a moving trajectory of the first vehicle, as indicated by a thin dotted arrow in fig. 8, from the traveling data of the first vehicle, and predict a moving trajectory of the vehicle a, as indicated by a thick dotted arrow in fig. 8, from a historical moving trajectory of the vehicle a.
Step S602: the hazardous scene identification module 100 determines a first scene from the lateral motion trajectory of the first vehicle and the lateral trajectory of the first target.
For example, the danger scene recognition module 100 may determine the first scene from the transverse movement trajectory x_e(t) of the first vehicle and the transverse movement trajectory x_o(t) of the first target. The first scenario in the embodiment of the present application may be a scenario with a potential safety hazard. For example, the first scene may include, but is not limited to, a following scene, a target cut-in scene, a host vehicle cut-out scene, a target crossing scene, and the like, as shown in table 1.
Table 1: first scene
Scene  Position of first target  Transverse trajectory of first vehicle  Transverse trajectory of first target
Following scene  In front  ≤ fifth threshold  ≤ sixth threshold
Target cut-in scene  On either side  ≤ fifth threshold  > sixth threshold
Host vehicle cut-out scene  On either side  > fifth threshold  ≤ sixth threshold
Target crossing scene  In front  ≤ fifth threshold  > seventh threshold
For example, in a case where the first target is located in front of the first vehicle, the lateral motion trajectory of the first vehicle is less than or equal to the fifth threshold, and the lateral motion trajectory of the first target is less than or equal to the sixth threshold, the dangerous scene recognition module 100 may determine that the first scene is a following scene, which may be as shown in fig. 1. The fifth threshold and the sixth threshold may be the same or different. The fifth threshold may be preset. The sixth threshold value may be set in advance.
For example, in a case where the first target is located on either side of the first vehicle, the lateral motion trajectory of the first vehicle is less than or equal to the fifth threshold, and the lateral motion trajectory of the first target is greater than the sixth threshold, the dangerous scene recognition module 100 may determine that the first scene is a target cut-in scene, which may be as shown in fig. 2.
For example, in a case where the first target is located on either side of the first vehicle, the lateral motion trajectory of the first vehicle is greater than the fifth threshold, and the lateral motion trajectory of the first target is less than or equal to the sixth threshold, the dangerous scene recognition module 100 may determine that the first scene is a host vehicle cut-out scene, which may be as shown in fig. 3. Further, the dangerous scene recognition module 100 may also determine that the first scene is a host vehicle cut-out scene in combination with the data collected by the steering wheel sensor.
For another example, when the first target is located in front of the first vehicle on either side, the lateral motion trajectory of the first vehicle is less than or equal to the fifth threshold, and the lateral motion trajectory of the first target is greater than the seventh threshold, the dangerous scene recognition module 100 may determine that the first scene is a target crossing scene, which may be as shown in fig. 4. The seventh threshold may be preset, and may be greater than or equal to the sixth threshold.
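The threshold comparisons of Table 1 can be sketched as follows. This is only an illustrative sketch: the function name, the 'front'/'side' position encoding, and the threshold values passed in are assumptions, since the patent leaves the fifth, sixth, and seventh thresholds to be preset.

```python
def classify_scene(target_pos, ego_lateral, target_lateral, th5, th6, th7):
    """Map the lateral-motion comparisons of Table 1 to a scene label.

    target_pos: 'front' if the first target is ahead of the first vehicle,
    'side' if it is on either side. th5/th6/th7 are the fifth, sixth, and
    seventh thresholds (preset; values used here are illustrative)."""
    if target_pos == 'front':
        if ego_lateral <= th5 and target_lateral <= th6:
            return 'following'
        if ego_lateral <= th5 and target_lateral > th7:
            return 'target crossing'
    elif target_pos == 'side':
        if ego_lateral <= th5 and target_lateral > th6:
            return 'target cut-in'
        if ego_lateral > th5 and target_lateral <= th6:
            return 'host vehicle cut-out'
    return None  # no scene with a potential safety hazard identified
```

For example, a target ahead with both lateral trajectories below the thresholds maps to the following scene, while a large lateral trajectory of a side target maps to the target cut-in scene.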
Step S603: the hazard scene recognition module 100 determines a first collision time and a first deceleration from the longitudinal motion profile of the first vehicle and the longitudinal motion profile of the first object.
For example, the first collision time may be the time at which the first vehicle comes into contact with the first target while traveling at its current speed. The first deceleration may be the minimum deceleration that enables the first vehicle to stop closing on the first target before coming into contact with it. For example, the hazard scenario identification module 100 may determine the first collision time based on the longitudinal movement trajectory of the first vehicle, the longitudinal movement trajectory of the first object, and a preset safe distance. For example, the first collision time may be the root of equation (1).
y_e(ttc) = y_o(ttc) + safeDis    (1)
Wherein safeDis may represent the preset safe distance, and ttc may represent the first collision time. y_e(ttc) may represent the distance the first vehicle moves in the longitudinal direction within ttc while traveling at its current speed, and y_o(ttc) may represent the distance the first target moves in the longitudinal direction within ttc while traveling at its current speed.
When the equation (1) has the root, the danger scene recognition module 100 may determine the first deceleration according to the longitudinal movement locus of the first vehicle, the longitudinal movement locus of the first target, the preset safe distance, and the first collision time. For example, the first deceleration may be calculated by formula (2) and formula (3).
y'_e(t_coll) - dea_min × t_coll = y'_o(t_coll)    (2)
y_e(t_coll) = y_o(t_coll) + safeDis    (3)
Wherein dea_min may represent the first deceleration, and t_coll may represent the time required for the longitudinal speed of the first vehicle to decelerate to the longitudinal speed of the first target. y_e(t_coll) may represent the distance the first vehicle moves in the longitudinal direction within t_coll, and y_o(t_coll) may represent the distance the first target moves in the longitudinal direction within t_coll. Equation (2) may be used to indicate that the longitudinal speed of the first vehicle becomes the same as the longitudinal speed of the first target.
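Under the simplifying assumptions of constant current speeds, a constant deceleration, and an initial longitudinal gap d0 between the two vehicles, equations (1) to (3) reduce to closed forms. The sketch below is illustrative only; the function names and the gap-based formulation are assumptions, not part of the patent.

```python
def first_collision_time(d0, v_e, v_o, safe_dis):
    """Equation (1) under constant speeds: time until the longitudinal
    gap d0 shrinks to the preset safe distance.
    Returns None when the first vehicle is not closing on the target."""
    closing = v_e - v_o          # relative longitudinal speed
    if closing <= 0:
        return None              # gap never shrinks; eq. (1) has no root
    if d0 <= safe_dis:
        return 0.0               # already within the safe distance
    return (d0 - safe_dis) / closing

def first_deceleration(d0, v_e, v_o, safe_dis):
    """Equations (2)-(3) under constant deceleration: the minimum
    deceleration such that the gap still equals safe_dis at the moment
    the two longitudinal speeds match (assumes d0 > safe_dis)."""
    closing = v_e - v_o
    if closing <= 0:
        return 0.0               # no braking needed
    return closing ** 2 / (2.0 * (d0 - safe_dis))
```

For example, with a 30 m gap, a 2 m safe distance, and speeds of 20 m/s (first vehicle) and 10 m/s (first target), the sketch gives a first collision time of 2.8 s.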
Step S604: the hazardous scene identification module 100 determines whether the first scene is a hazardous scene based on the first time-to-collision and the first deceleration.
For example, the hazardous scene identification module 100 may compare the first time to collision with a third threshold and the first deceleration with a fourth threshold to determine whether the first scene is a hazardous scene. Wherein the third threshold may be a minimum time to collision in the first scenario and the fourth threshold may be a maximum deceleration in the first scenario. For example, the manufacturer may collect data such as the traveling speed, whether a collision occurs, deceleration, and safe distance of each vehicle in the first scene through big data, analyze the data to obtain the minimum collision time and the maximum deceleration in the first scene, and configure the minimum collision time and the maximum deceleration in the first scene to the vehicle when the vehicle is shipped from the factory.
For example, in the event that the first collision time is less than or equal to the third threshold and the first deceleration is greater than or equal to the fourth threshold, the dangerous scene recognition module 100 may determine that the first scene is a dangerous scene. For example, the dangerous scene may satisfy equation (4):
ttc ≤ t_c & dea_min ≥ dea_c    (4)
Wherein t_c may represent the third threshold, and dea_c may represent the fourth threshold.
For example, in the event that the first collision time is greater than the third threshold, the dangerous scene recognition module 100 may determine that the first scene is not yet truly dangerous and is not a dangerous scene, i.e., equation (4) is not satisfied. Likewise, in the event that the first deceleration is less than the fourth threshold, the dangerous scene recognition module 100 may determine that the first scene is not a dangerous scene, i.e., equation (4) is not satisfied. For another example, in the event that the first collision time is greater than the third threshold and the first deceleration is less than the fourth threshold, the dangerous scene recognition module 100 may likewise determine that the first scene is not a dangerous scene, i.e., equation (4) is not satisfied.
For example, in a following scene, the minimum collision time may be 1.7 seconds(s), the maximum deceleration may be 0.4g (g represents the gravitational acceleration), and in a case where the first collision time is less than or equal to 1.7s and the first deceleration is greater than or equal to 0.4g, the dangerous scene recognition module 100 may determine that the following scene is a dangerous scene; in the target cut-in scene, the minimum collision time may be 1.2s, the maximum deceleration may be 0.6g, and in the case that the first collision time is less than or equal to 1.2s and the first deceleration is greater than or equal to 0.6g, the dangerous scene recognition module 100 may determine that the target cut-in scene is a dangerous scene; in the host-vehicle cut-out scene, the minimum collision time may be 1.2s, the maximum deceleration may be 0.8g, and in the case where the first collision time is less than or equal to 1.2s and the first deceleration is greater than or equal to 0.8g, the dangerous scene recognition module 100 may determine that the host-vehicle cut-out scene is a dangerous scene; in the target crossing scenario, the minimum collision time may be 1.5s, the maximum deceleration may be 0.4g, and in the case where the first collision time is less than or equal to 1.5s and the first deceleration is greater than or equal to 0.4g, the hazardous scenario identification module 100 may determine that the target crossing scenario is a hazardous scenario; the minimum time to collision and the maximum deceleration for different scenarios may be as shown in table 2. The minimum collision time and the maximum deceleration in different scenarios may be set when the vehicle leaves a factory, or may be configured by a server, such as a server corresponding to an in-vehicle application related to dangerous scenario identification, which is not limited in this embodiment of the present application.
Table 2: minimum time to collision and maximum deceleration under different scenarios
Scene  Minimum time to collision t_c (s)  Maximum deceleration dea_c
Following scene  1.7  0.4g
Target cut-in scene  1.2  0.6g
Host vehicle cut-out scene  1.2  0.8g
Target crossing scene  1.5  0.4g
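Equation (4) together with the per-scene thresholds of Table 2 can be sketched as follows. This is illustrative only; the dictionary layout and the value g = 9.8 m/s² are assumptions.

```python
G = 9.8  # gravitational acceleration in m/s^2 (assumed value)

# Per-scene (minimum time to collision t_c in s, maximum deceleration dea_c),
# taken from Table 2.
SCENE_THRESHOLDS = {
    'following':            (1.7, 0.4 * G),
    'target cut-in':        (1.2, 0.6 * G),
    'host vehicle cut-out': (1.2, 0.8 * G),
    'target crossing':      (1.5, 0.4 * G),
}

def is_dangerous(scene, ttc, dea_min):
    """Equation (4): the first scene is dangerous when ttc <= t_c
    and dea_min >= dea_c for that scene."""
    t_c, dea_c = SCENE_THRESHOLDS[scene]
    return ttc <= t_c and dea_min >= dea_c
```

Keeping the thresholds in one table makes it straightforward to reconfigure them, e.g. when a server pushes updated per-scene values.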
At this point, the dangerous scene recognition module 100 completes the judgment of whether the first scene is a dangerous scene.
In the process shown in fig. 6, the dangerous scene recognition module 100 determines, according to the motion trajectory of the first vehicle and the motion trajectory of the first object, that a scene where the first vehicle and the first object are located (or a scene where the first vehicle and the first object will face) is a scene (i.e., a first scene) with a safety hazard, and further determines, according to the minimum collision time in the first scene and the maximum deceleration in the first scene, whether the first scene is a dangerous scene, where the safety hazard of the dangerous scene is higher than that in the first scene. Because the minimum collision time and the maximum deceleration are set for different scenes with potential safety hazards, the scenes with the potential safety hazards can be further identified through differential processing, so that false identification and missing identification of the dangerous scenes can be avoided, the accuracy of identifying the dangerous scenes is ensured, and the safety and the reliability of AEB can be improved.
In one possible implementation, the hazard scenario identification module 100 identifies an environment in which the first vehicle is located, may identify one or more objects, and may determine the first target based on the one or more objects. For example, the danger scene recognition module 100 may recognize an environment in which the first vehicle is located according to data collected by a radar system, a vision system, or the like, and obtain one or more objects. For example, as shown in FIG. 1, the hazard scene recognition module 100 recognizes an object, namely, vehicle A. For another example, as shown in fig. 2, the hazardous scene recognition module 100 may recognize a plurality of objects, i.e., a vehicle a and a vehicle B.
For example, when there are multiple objects, the dangerous scene recognition module 100 may select one of the multiple objects as the first target according to priority; for example, the object with the highest priority among the multiple objects is the first target. For example, the first priority may be whether a scene with a safety hazard exists, the second priority may be the collision time, the third priority may be the deceleration, and the fourth priority may be the object type, as shown in table 3. The object types may include trucks, cars, battery cars, pedestrians, green belts, buildings, and the like, where, for example, a truck has a higher priority than a green belt.
Table 3: priority level
First priority Second priority Third priority Fourth priority
Scene with potential safety hazard  Time to collision ttc  Deceleration dea_min  Object type
For example, if there is a safety hazard (e.g., a car following scene) in the scene where the object 1 and the first vehicle are located, and there is no safety hazard in the scene where the object 2 and the first vehicle are located, the dangerous scene recognition module 100 may determine that the object 1 is the first target. If the scene where the object 1 and the first vehicle are located and the scene where the object 2 and the first vehicle are located have potential safety hazards, and the collision time required for the object 1 is less than the collision time required for the object 2, the dangerous scene recognition module 100 may determine that the object 1 is the first target. The hazardous scene recognition module 100 may determine that the object 1 is the first target if the scene in which the object 1 and the first vehicle are located and the scene in which the object 2 and the first vehicle are located have safety hazards, the collision time required for the object 1 is equal to the collision time required for the object 2, and the deceleration required for the object 1 is greater than the deceleration required for the object 2. If the scene in which the object 1 and the first vehicle are located and the scene in which the object 2 and the first vehicle are located have a safety hazard, the collision time required for the object 1 is equal to the collision time required for the object 2, the deceleration required for the object 1 is equal to the deceleration required for the object 2, and the type priority of the object 1 is higher than that of the object 2 (if the object 1 is a truck and the object 2 is a green belt), the dangerous scene recognition module 100 may determine that the object 1 is the first target.
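The tie-breaking rules above can be sketched as a lexicographic selection. This is an illustrative sketch; the candidate representation and the type-priority map are assumptions.

```python
def select_first_target(candidates, type_priority):
    """Pick the first target per Table 3: prefer objects whose scene has a
    safety hazard; break ties by smaller collision time, then larger
    required deceleration, then higher object-type priority.

    Each candidate is a dict with keys 'hazard' (bool), 'ttc', 'dea_min',
    and 'type'; type_priority maps a type to a rank (lower = higher)."""
    hazardous = [c for c in candidates if c['hazard']]
    pool = hazardous if hazardous else candidates
    return min(pool, key=lambda c: (c['ttc'],
                                    -c['dea_min'],
                                    type_priority.get(c['type'], 99)))
```

Negating dea_min inside the sort key lets a single min() express "smaller collision time first, larger deceleration first".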
In one possible embodiment, the dangerous scene recognition module 100 may send the recognition result to the execution module 400, so that the execution module 400 determines whether to execute a corresponding emergency measure, such as issuing an alarm, emergency braking, etc., according to the recognition result. Wherein, the identification result comprises second information. The second information may be used to indicate whether the first scene is a dangerous scene or a non-dangerous scene. Optionally, one or more of a first scene (e.g., a following scene, a target cut-in scene, a host vehicle cut-out scene, or a target crossing scene), a first collision time, a first deceleration, or a first target identification may also be included in the recognition result.
In another possible implementation, the dangerous scene recognition module 100 may send the recognition result to the driver state monitoring module 200, so that the driver state monitoring module 200 corrects the heart rate threshold in different scenes according to the recognition result, so as to improve the accuracy of recognizing the driver state. Wherein, the identification result comprises second information. The second information may be used to indicate whether the first scene is a dangerous scene or a non-dangerous scene. Optionally, the recognition result may further include one or more of a first scene (e.g., a following scene, a target cut-in scene, a vehicle cut-out scene, or a target crossing scene), a first collision time, a first deceleration, or a first target identifier.
The driver state monitoring module 200 of the embodiment of the application may be used to identify the state of the driver. For example, the driver state monitoring module 200 may determine the heart rate of the driver and determine, based on the heart rate, that the state of the driver is normal, stressed, or panicked. For example, the driver state monitoring module 200 may acquire the heart rate of the driver through a wearable device. The wearable device may be a watch, a bracelet, or the like, used to detect the heart rate of the driver in real time; the embodiment of the present application is not limited thereto. The heart rate of the driver may refer to the real-time heart rate of the driver, or may refer to the average heart rate of the driver within a set time length; the embodiment of the present application is not limited thereto either. Optionally, in the case where the heart rate of the driver is the real-time heart rate, the driver state monitoring module 200 may determine the state of the driver according to the heart rate over a set number of consecutive readings (e.g., 3 readings), so as to determine the state of the driver accurately.
For example, the driver state monitoring module 200 may determine the state of the driver based on the second threshold, the first threshold, and the heart rate of the driver, i.e., by comparing the heart rate of the driver with the second threshold and the first threshold. The second threshold and the first threshold may be heart rate thresholds in the first scenario: the second threshold may be the heart rate threshold used to determine whether the driver is in a normal state, and the first threshold may be the heart rate threshold used to determine whether the driver is in a panic state. For example, in the case where the heart rate of the driver is less than the second threshold, the driver state monitoring module 200 may determine that the state of the driver is a normal state; in the case where the heart rate of the driver is greater than or equal to the second threshold and less than the first threshold, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; and in the case where the heart rate of the driver is greater than or equal to the first threshold, the driver state monitoring module 200 may determine that the state of the driver is a panic state.
For example, as shown in table 4, in the following scenario, if the heart rate of the driver is less than 80 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a normal state; if the heart rate of the driver is greater than or equal to 80 times/minute and less than 120 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; and if the heart rate of the driver is greater than or equal to 120 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a panic state.
In the target cut-in scene, if the heart rate of the driver is less than 90 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a normal state; if the heart rate of the driver is greater than or equal to 90 times/minute and less than 120 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; if the driver's signaling is greater than or equal to 120 times/minute, the driver state monitoring module 200 may determine that the driver's state is a panic state.
In the vehicle cutting scene, if the heart rate of the driver is less than 100 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a normal state; if the heart rate of the driver is greater than or equal to 100 times/minute and less than 140 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; if the driver signaling is greater than or equal to 140 times per minute, the driver status monitoring module 200 may determine that the status of the driver is a panic state.
In the target crossing scenario, if the heart rate of the driver is less than 100 times/min, the driver state monitoring module 200 may determine that the state of the driver is a normal state; if the heart rate of the driver is greater than or equal to 100 times/minute and less than 120 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; if the driver's signaling is greater than or equal to 120 times/minute, the driver state monitoring module 200 may determine that the driver's state is a panic state.
Table 4: heart rate threshold under different scenarios
Scene                        Second threshold (beats/minute)    First threshold (beats/minute)
Following scene              80                                 120
Target cut-in scene          90                                 120
Host vehicle cut-out scene   100                                140
Target crossing scene        100                                120
For example, the heart rate thresholds in different scenes may be determined from human stressed-state and panic-state heart rate baselines and configured when the automobile leaves the factory.
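The per-scene heart-rate classification described above can be expressed as a short sketch. The function name, scene keys, and dictionary layout below are illustrative assumptions, not part of the embodiment; the threshold values follow table 4.

```python
# (second_threshold, first_threshold) per scene, in beats/minute, per table 4.
# The scene keys are hypothetical labels for the four scenes in the text.
HEART_RATE_THRESHOLDS = {
    "following": (80, 120),
    "target_cut_in": (90, 120),
    "host_cut_out": (100, 140),
    "target_crossing": (100, 120),
}

def classify_driver_state(scene: str, heart_rate: float) -> str:
    """Map a heart rate to normal / stressed / panic for the given scene."""
    second_threshold, first_threshold = HEART_RATE_THRESHOLDS[scene]
    if heart_rate < second_threshold:
        return "normal"       # below the second threshold
    if heart_rate < first_threshold:
        return "stressed"     # between the two thresholds
    return "panic"            # at or above the first threshold
```

A factory calibration step would simply replace the values in the dictionary with the configured baselines.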
In one possible embodiment, the driver status monitoring module 200 may send the status of the driver to the execution module 400, so that the execution module 400 determines whether to execute a corresponding emergency measure, such as issuing an alarm, emergency braking, etc., according to the status of the driver.
The driver operation sensing module 300 of the embodiment of the application can be used to identify the operation intention of the driver. For example, the driver operation sensing module 300 may determine the operation intention of the driver according to the force with which the driver presses the accelerator pedal and/or the brake pedal. For example, the driver operation sensing module 300 may obtain the force on the accelerator pedal through an accelerator pedal sensor. Similarly, the driver operation sensing module 300 can obtain the force on the brake pedal through a brake pedal sensor. The operation intention of the driver may be deceleration, idling, or acceleration.
Since the force on the accelerator pedal and/or the brake pedal corresponds to the acceleration of the vehicle, the driver operation sensing module 300 determining the operation intention of the driver from the pedal force can be understood as the driver operation sensing module 300 determining the operation intention of the driver from the acceleration of the first vehicle. The correspondence between the force on the accelerator pedal and/or the brake pedal and the acceleration of the automobile can be configured when the automobile leaves the factory.
For example, the driver operation sensing module 300 may collect data such as the force and frequency with which the driver presses the accelerator pedal and/or the brake pedal in daily driving, analyze the data according to an algorithm such as density clustering, and obtain the first acceleration threshold and the second acceleration threshold by combining the correspondence between pedal force and acceleration. The first acceleration threshold may be used to determine whether the driver has a clear intention to decelerate, and the second acceleration threshold may be used to determine whether the driver has a clear intention to accelerate. For example, in the case where the acceleration of the first vehicle is less than or equal to the first acceleration threshold, the driver has a clear intention to decelerate; in the case where the acceleration of the first vehicle is greater than or equal to the second acceleration threshold, the driver has a clear intention to accelerate; in the case where the acceleration of the first vehicle is greater than the first acceleration threshold and less than the second acceleration threshold, the driver has a clear intention to idle. For example, as shown in table 5, when the acceleration of the first vehicle is less than or equal to a, the driver operation sensing module 300 may determine that the driver's operation intention is deceleration. When the acceleration of the first vehicle is greater than a and less than b, the driver operation sensing module 300 may determine that the driver's operation intention is idling. When the acceleration of the first vehicle is greater than or equal to b, the driver operation sensing module 300 may determine that the driver's operation intention is acceleration.
Wherein a is a first acceleration threshold, b is a second acceleration threshold, and a is less than b.
Table 5: acceleration threshold of operational intention
Deceleration: less than or equal to a
Idling: greater than a and less than b
Acceleration: greater than or equal to b
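The rule in table 5 can be sketched directly. The function name is an illustrative assumption; a and b are the first and second acceleration thresholds from the text, with a less than b.

```python
def classify_operation_intention(acceleration: float, a: float, b: float) -> str:
    """Classify the driver's operation intention from vehicle acceleration.

    a: first acceleration threshold (deceleration boundary)
    b: second acceleration threshold (acceleration boundary), with a < b.
    """
    if acceleration <= a:
        return "deceleration"
    if acceleration < b:
        return "idling"
    return "acceleration"
```

The thresholds a and b would come from the density-clustering analysis of daily pedal data described above; the values passed in below are only examples.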
In one possible embodiment, the driver operation sensing module 300 may transmit the operation intention of the driver to the dangerous scene recognition module 100, so that the dangerous scene recognition module 100 corrects the minimum collision time and/or the maximum deceleration in different scenes according to the operation intention of the driver, to improve the accuracy of recognizing dangerous scenes.
In one possible implementation, the driver operation sensing module 300 may send the operation intention of the driver to the driver state monitoring module 200, so that the driver state monitoring module 200 corrects the heart rate threshold value in different scenes according to the operation intention of the driver, so as to improve the accuracy of identifying the state of the driver.
In one possible embodiment, the driver operation sensing module 300 may transmit the operation intention of the driver to the execution module 400, so that the execution module 400 determines whether to execute a corresponding emergency measure, such as issuing a warning, emergency braking, etc., according to the operation intention of the driver.
In another possible implementation, the driver operation sensing module 300 may send the pressure value of the accelerator pedal to the execution module 400, so that the execution module 400 determines whether the driver mistakenly steps on the accelerator according to the pressure value of the accelerator pedal. For example, the driver operation sensing module 300 may acquire the pressure value of the accelerator pedal through a sensor of the accelerator pedal.
The execution module 400 of the embodiment of the application may determine whether to execute a corresponding emergency measure, such as issuing an alarm, emergency braking, etc., according to the recognition result, the state of the driver, and the operation intention of the driver. For example, the execution module 400 may acquire the recognition result through the dangerous scene recognition module 100, acquire the state of the driver through the driver state monitoring module 200, and acquire the operation intention of the driver through the driver operation perception module 300.
For example, the execution module 400 may determine whether the corresponding emergency measure needs to be executed according to the recognition result, the state of the driver, and the operation intention of the driver. For example, in the case where the first scene is a following scene but not a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or deceleration, the execution module 400 may determine that no corresponding emergency measure needs to be taken. For another example, in the case where the first scenario is a target cut-in scenario but is not a dangerous scenario, the state of the driver is a normal state, and the operation intention of the driver is idling or deceleration, the execution module 400 may determine that the corresponding emergency measure does not need to be executed.
For example, the scenario for the execution module 400 to determine to execute the emergency measure may be as follows:
scene 1: in the case that the first scenario is a dangerous scenario, the execution module 400 may determine to execute the corresponding emergency measure. For example, the execution module 400 may execute emergency braking or the like according to the first collision time and the first deceleration.
In scenario 1, the first scenario is a dangerous scenario, which means that the driver has no time to respond and has lost the ability to control the dangerous scenario, so the execution module 400 may perform emergency braking to ensure the safety of the driver and reduce losses.
Scene 2: in the case where the first scenario is a non-dangerous scenario, the state of the driver is a panic state, and the operation intention of the driver is idling or accelerating, the execution module 400 may determine to execute a corresponding emergency measure. For example, the execution module 400 issues an alert. For another example, the execution module 400 may perform emergency braking or the like based on the first collision time and the first deceleration.
In scene 2, the first scene is a scene with a potential safety hazard, such as a following scene, a target cut-in scene, a host vehicle cut-out scene, a target crossing scene, and the like. Although the first scene is not a dangerous scene, the driver is in a panic state and the operation intention of the driver is idling or accelerating, which means that the emergency-response capability of the driver is poor and the driver is prone to misoperation in a scene with a potential safety hazard, so the probability of an accident is greatly increased. Therefore, the execution module 400 can execute corresponding emergency measures, such as issuing a warning or emergency braking, to avoid accidents and ensure safe driving of the vehicle.
For example, in the case that the first scene is a following scene, a target cut-in scene or a target crossing scene, the state of the driver is a panic state, and the operation intention of the driver is acceleration, the execution module 400 may determine that the driver mistakenly steps on the accelerator and immediately execute the corresponding emergency measure. For example, the driver is prompted to mistakenly step on the accelerator pedal, so that the driver recovers control over the vehicle, misoperation of the driver can be corrected, accidents are avoided, and safe driving of the vehicle is guaranteed. For another example, the execution module 400 may directly execute emergency braking to avoid an accident and ensure safe driving of the vehicle.
For example, in a case that the first scene is a following scene, a target cut-in scene or a target crossing scene, the state of the driver is a panic state, and the operation intention of the driver is acceleration, the execution module 400 may determine whether the driver mistakenly steps on the accelerator in combination with the pressure value of the accelerator pedal. For example, the execution module 400 may acquire the pressure value of the accelerator pedal through the driver operation sensing module 300. For example, in the case that the first scene is a following scene, a target cut-in scene or a target crossing scene, the state of the driver is a panic state, the operation intention of the driver is acceleration, and the pressure value of the accelerator pedal is greater than or equal to the set pressure threshold value within the predetermined time, the execution module 400 may determine that the driver mistakenly steps on the accelerator pedal and loses the control of the vehicle. In this case, the execution module 400 may directly execute emergency braking to avoid an accident and ensure safe driving of the vehicle.
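The decision logic of the execution module 400 described in scenarios 1 and 2, including the accelerator-misstep check combined with the pedal pressure value, can be sketched as follows. The function interface, scene labels, and return values are illustrative assumptions.

```python
def decide_emergency_measure(is_dangerous, scene, driver_state, intention,
                             pedal_pressure=None, pressure_threshold=None):
    """Return the action the execution module would take, per scenarios 1 and 2."""
    if is_dangerous:
        # Scenario 1: dangerous scene, brake according to the first
        # collision time and first deceleration.
        return "emergency_brake"
    # Scenes in which a sustained accelerator press suggests a misstep.
    risky_scenes = {"following", "target_cut_in", "target_crossing"}
    if driver_state == "panic" and intention in ("idling", "acceleration"):
        # Scenario 2: potential hazard plus poor driver reaction.
        if (intention == "acceleration" and scene in risky_scenes
                and pedal_pressure is not None
                and pressure_threshold is not None
                and pedal_pressure >= pressure_threshold):
            # Driver presumed to have mistakenly pressed the accelerator.
            return "emergency_brake"
        return "alert"
    return "no_action"
```

In a real system the pedal pressure would be checked over the predetermined time window mentioned in the text rather than as a single sample.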
In another possible embodiment, the hazard scene identification module 100 may modify the third threshold and/or the fourth threshold according to the third information. Wherein, the third information may include one or more of the following items: the recognition result, the state of the driver, or the operation intention of the driver. The identification result includes first information, and the first information may be used to indicate that the first scene is a dangerous scene or a non-dangerous scene. For example, the danger scene recognition module 100 may acquire that the status of the driver is a normal status, a stress status, or a panic status through the driver status monitoring module 200. For example, the dangerous scene recognition module 100 may acquire the operation intention of the driver as deceleration, or as idling, or as acceleration through the driver operation perception module 300.
For example, in the case where the recognition result, the state of the driver, and the operation intention of the driver are included in the third information, the manner in which the dangerous scene recognition module 100 modifies the third threshold value and/or the fourth threshold value according to the third information may be as follows:
The first mode: in the case where the first scene is a non-dangerous scene, the state of the driver is a stressed state or a panic state, and the operation intention of the driver is deceleration, the dangerous scene recognition module 100 may decrease the third threshold, or decrease the fourth threshold, or decrease both the third threshold and the fourth threshold. For example, the dangerous scene recognition module 100 may modify the third threshold according to equation (5) and the fourth threshold according to equation (6).
tc1 = tc0 - α1 (5)
deac1 = deac0 - β1 (6)
Wherein, tc0 may represent the third threshold to be modified, tc1 may represent the modified third threshold, α1 may be a learning rate for correcting the third threshold, and α1 may be a preset positive number. deac0 may represent the fourth threshold to be modified, deac1 may represent the modified fourth threshold, β1 may be a learning rate for correcting the fourth threshold, and β1 may be a preset positive number.
For example, when the first scene is a following scene but not a dangerous scene, the state of the driver is a stressed state or a panic state, and the operation intention of the driver is deceleration, the dangerous scene recognition module 100 may determine that the third threshold to be corrected is 1.7s and the fourth threshold to be corrected is 0.4g according to table 2. Further, the dangerous scene recognition module 100 may correct the third threshold to be corrected and the fourth threshold to be corrected according to α1 and β1, obtaining a corrected third threshold of (1.7 - α1) s and a corrected fourth threshold of (0.4 - β1) g, and update table 2.
In the first mode, the operation intention of the driver is deceleration, that is, the driver has a braking intention, and the state of the driver is a stressed state or a panic state, which indicates that the driver has recognized the first scene as a dangerous scene and has performed a deceleration operation. However, the dangerous scene recognition module 100 did not recognize a dangerous scene, which means that the minimum collision time for determining a dangerous scene is too large, or the maximum deceleration for determining a dangerous scene is too large, or both are too large. Therefore, the dangerous scene recognition module 100 may decrease the third threshold and/or the fourth threshold to avoid missed recognition of dangerous scenes and improve the accuracy of recognizing dangerous scenes.
The second mode: in the case where the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration, the dangerous scene recognition module 100 may increase the third threshold, or increase the fourth threshold, or increase both the third threshold and the fourth threshold. For example, the dangerous scene recognition module 100 may modify the third threshold according to equation (7) and the fourth threshold according to equation (8).
tc1 = tc0 + α2 (7)
deac1 = deac0 + β2 (8)
Wherein α2 may be a learning rate for correcting the third threshold, and α2 may be a preset positive number. β2 may be a learning rate for correcting the fourth threshold, and β2 may be a preset positive number. α1 and α2 may be the same or different. β1 and β2 may be the same or different.
For example, in the case where the first scene is the target cut-in scene and is a dangerous scene, but the state of the driver is a normal state and the operation intention of the driver is idling or acceleration, the dangerous scene recognition module 100 may determine that the third threshold to be corrected is 1.2s and the fourth threshold to be corrected is 0.6g according to table 2. Further, the dangerous scene recognition module 100 may correct the third threshold to be corrected and the fourth threshold to be corrected according to α2 and β2, obtaining a corrected third threshold of (1.2 + α2) s and a corrected fourth threshold of (0.6 + β2) g, and update table 2.
In the second mode, the operation intention of the driver is idling or acceleration, that is, the driver has no obvious braking intention, and the state of the driver is a normal state, meaning that the first scene may be a non-dangerous scene. However, the dangerous scene recognition module 100 recognizes the dangerous scene, which means that the minimum collision time for determining the dangerous scene is too small, or the maximum deceleration for determining the dangerous scene is too small, or both the minimum collision time and the maximum deceleration for determining the dangerous scene are too small. Therefore, the dangerous scene recognition module 100 may increase the third threshold and/or the fourth threshold to avoid misidentifying the dangerous scene, thereby improving the accuracy of recognizing the dangerous scene.
The third mode: in the case where the first scene is a dangerous scene, the state of the driver is a stressed state or a panic state, and the operation intention of the driver is deceleration, the dangerous scene recognition module 100 may correct the third threshold, or the fourth threshold, or both the third threshold and the fourth threshold, to obtain the optimal threshold. For example, the dangerous scene recognition module 100 may modify the third threshold and/or the fourth threshold by a greedy algorithm; for example, the third threshold according to equation (9) and the fourth threshold according to equation (10).
tc1 = tc0 + ε*z*α3 (9)
deac1 = deac0 + ε*z*β3 (10)
Wherein z follows a uniform distribution, e.g., z ~ U[-1, 1]. ε may be a preset constant. α3 may be a learning rate for correcting the third threshold, and β3 may be a learning rate for correcting the fourth threshold. α3 and α2 may be the same or different. β3 and β2 may be the same or different.
For example, when the first scene is the host vehicle cut-out scene and is a dangerous scene, the state of the driver is a stressed state or a panic state, and the operation intention of the driver is deceleration, it means that the dangerous scene recognition module 100 recognized correctly. To obtain the optimal threshold, the dangerous scene recognition module 100 may determine that the third threshold to be modified is 1.2s and the fourth threshold to be modified is 0.8g according to table 2. Further, the dangerous scene recognition module 100 may correct the third threshold to be corrected and the fourth threshold to be corrected according to ε, z, α3 and β3, obtaining a corrected third threshold of (1.2 + ε*z*α3) s and a corrected fourth threshold of (0.8 + ε*z*β3) g, and update table 2.
In the third mode, the state of the driver is a stressed state or a panic state, and the operation intention of the driver is deceleration, that is, the driver has a clear braking intention, which means that the driver has recognized the first scene as a dangerous scene and has performed a deceleration operation. The dangerous scene recognition module 100 also recognized the dangerous scene, i.e., the dangerous scene recognition module 100 recognized correctly. To obtain the optimal threshold, the dangerous scene recognition module 100 may still modify the third threshold and/or the fourth threshold.
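The three correction modes, applying equations (5)/(6), (7)/(8), and (9)/(10) respectively, can be sketched in one function. The interface and the default values of the learning rates α, β and the constant ε are illustrative assumptions; in the embodiment they would be preset per equation index (α1..α3, β1..β3).

```python
import random

def correct_thresholds(t_c0, dea_c0, is_dangerous, driver_state, intention,
                       alpha=0.05, beta=0.01, epsilon=0.02, rng=random):
    """Return corrected (minimum collision time, maximum deceleration)."""
    braking = intention == "deceleration"
    alarmed = driver_state in ("stressed", "panic")
    if not is_dangerous and alarmed and braking:
        # Mode 1, equations (5)/(6): dangerous scene was missed,
        # thresholds are too large, so decrease them.
        return t_c0 - alpha, dea_c0 - beta
    if is_dangerous and driver_state == "normal" and not braking:
        # Mode 2, equations (7)/(8): scene was likely misrecognized as
        # dangerous, thresholds are too small, so increase them.
        return t_c0 + alpha, dea_c0 + beta
    if is_dangerous and alarmed and braking:
        # Mode 3, equations (9)/(10): recognition was correct; apply a
        # small random perturbation to search for the optimal threshold.
        z = rng.uniform(-1.0, 1.0)
        return t_c0 + epsilon * z * alpha, dea_c0 + epsilon * z * beta
    return t_c0, dea_c0  # no correction rule applies
```

For the mode-1 example in the text, starting from (1.7s, 0.4g) this yields (1.7 - α, 0.4 - β).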
In the above-described embodiment, the dangerous scene recognition module 100 corrects the minimum collision time and/or the maximum deceleration for determining a dangerous scene according to the state of the driver and the operation intention of the driver. This means that when judging a dangerous scene, not only the motion trajectory of the first vehicle and the motion trajectory of the first target are considered, but also the different states and the different operation intentions of drivers with different driving styles facing a dangerous scene, so that dangerous scenes can be accurately recognized, accidents caused by misrecognition or missed recognition of dangerous scenes can be avoided, and the reliability and safety of the AEB can be improved.
As an example, in the case where the third information includes the recognition result, or the state of the driver, or the operation intention of the driver, the dangerous scene recognition module 100 may correct the third threshold and/or the fourth threshold according to the third information to obtain the optimal threshold. For example, the dangerous scene recognition module 100 may correct the third threshold according to equation (9) and the fourth threshold according to equation (10).
As an example, in the case where the third information includes the recognition result and the state of the driver, the dangerous scene recognition module 100 may correct the third threshold and/or the fourth threshold according to the recognition result and the state of the driver to obtain the optimal threshold. For example, the dangerous scene recognition module 100 may correct the third threshold according to equation (9) and the fourth threshold according to equation (10).
As an example, in a case where the recognition result and the operation intention of the driver are included in the third information, the dangerous scene recognition module 100 may correct the third threshold value and/or the fourth threshold value according to the recognition result and the operation intention of the driver to obtain the optimal threshold value. For example, the dangerous scene recognition module 100 may correct the third threshold according to equation (9). For example, the dangerous scene recognition module 100 may correct the fourth threshold according to equation (10).
As one example, in the case where the third information includes the state of the driver and the operation intention of the driver, the dangerous scene recognition module 100 may correct the third threshold and/or the fourth threshold according to the state of the driver and the operation intention of the driver to obtain the optimal threshold. For example, the dangerous scene recognition module 100 may correct the third threshold according to equation (9) and the fourth threshold according to equation (10).
In another possible embodiment, the driver state monitoring module 200 may modify the second threshold, or modify the first threshold, or modify both the second threshold and the first threshold based on the first information. The first information comprises an identification result and/or an operation intention of a driver, the identification result comprises second information, and the second information is used for indicating that the first scene is a dangerous scene or a non-dangerous scene. For example, the driver state monitoring module 200 may acquire the recognition result through the dangerous scene recognition module 100. For example, the driver state monitoring module 200 may acquire the operation intention of the driver as deceleration, or as idling, or as acceleration through the driver operation perception module 300.
As an example, in the case where the first information includes the recognition result and the operation intention of the driver, the driver state monitoring module 200 may label the state of the driver in the first scene for different recognition results and operation intentions.
For example, in the case where the first scene is a non-dangerous scene and the driver's operation intention is deceleration, the driver state monitoring module 200 may not label the state of the driver. In the case where the first scene is a non-dangerous scene and the driver's operation intention is idling, the driver state monitoring module 200 may label the state of the driver as a normal state. In the case where the first scene is a non-dangerous scene and the operation intention of the driver is acceleration, the driver state monitoring module 200 may label the state of the driver as a normal state. In the case where the first scene is a dangerous scene, the operation intention of the driver is deceleration, and the force on the brake pedal is less than the set threshold, the driver state monitoring module 200 may label the state of the driver as a stressed state. In the case where the first scene is a dangerous scene, the operation intention of the driver is deceleration, and the force on the brake pedal is greater than or equal to the set threshold, the driver state monitoring module 200 may label the state of the driver as a panic state. In the case where the first scene is a dangerous scene and the driver's operation intention is idling, the driver state monitoring module 200 may not label the state of the driver. In the case where the first scene is a dangerous scene and the driver's operation intention is acceleration, the driver state monitoring module 200 may not label the state of the driver. The labeling results under different recognition results and operation intentions can be as shown in table 6.
Table 6: marking results under different recognition results and operation intentions
Recognition result     Operation intention                         Labeled state
Non-dangerous scene    Deceleration                                Not labeled
Non-dangerous scene    Idling                                      Normal state
Non-dangerous scene    Acceleration                                Normal state
Dangerous scene        Deceleration (brake force < threshold)      Stressed state
Dangerous scene        Deceleration (brake force >= threshold)     Panic state
Dangerous scene        Idling                                      Not labeled
Dangerous scene        Acceleration                                Not labeled
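The labeling rules described above can be sketched as a small function. The function name, the use of None for "not labeled", and the default brake-force threshold are illustrative assumptions.

```python
def label_driver_state(is_dangerous, intention, brake_force=None,
                       force_threshold=100.0):
    """Return 'normal', 'stressed', 'panic', or None (not labeled),
    following the per-case labeling rules summarized in table 6."""
    if not is_dangerous:
        if intention == "deceleration":
            return None            # no label in a safe braking case
        return "normal"            # idling or accelerating in a safe scene
    if intention == "deceleration":
        if brake_force is not None and brake_force < force_threshold:
            return "stressed"      # gentle braking in a dangerous scene
        return "panic"             # hard braking in a dangerous scene
    return None                    # dangerous scene, idling or accelerating
```

These labels, paired with the measured heart rates, form the labeled set used by the semi-supervised learning step described next in the text.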
For example, the driver state monitoring module 200 may modify the heart rate thresholds in different scenes through a semi-supervised algorithm, a nearest-neighbor algorithm, or the like. For example, the driver state monitoring module 200 may learn from the labeled data and the unlabeled data through a semi-supervised K-nearest-neighbor (KNN) algorithm and obtain the corrected heart rate thresholds in different scenes. The labeled data may include the labeled state of the driver (as shown in table 6) and the heart rate of the driver, and the unlabeled data may include the heart rate of the driver.
For example, the driver state monitoring module 200 may record the labeled data as L = {(xi, yi)} and the unlabeled data as U = {xj}. Wherein xi may represent the heart rate of the i-th sample, and yi may indicate whether the i-th sample is a sample in a normal state, a stressed state, or a panic state; for example, 0 may indicate a normal-state sample, 1 a stressed-state sample, and 2 a panic-state sample. xj may represent the heart rate of the j-th sample. Further, the driver state monitoring module 200 may train on the labeled data and the unlabeled data using a semi-supervised KNN algorithm. Specifically, for any j, the K samples in L nearest to xj are found and used to vote to obtain the predicted label of xj; training the labeled and unlabeled data with the semi-supervised KNN algorithm thus labels the previously unlabeled data and corrects L and U. Then, the driver state monitoring module 200 may correct the heart rate thresholds corresponding to different scenes according to the corrected L and U, that is, update table 4.
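A minimal sketch of the 1-D semi-supervised KNN voting step described above, assuming heart-rate samples as plain numbers and labels 0/1/2 as in the text; the function name and data values are illustrative.

```python
from collections import Counter

def knn_self_label(labeled, unlabeled, k=3):
    """Assign each unlabeled heart-rate sample the majority label of its
    k nearest labeled samples, i.e. the voting step that corrects L and U."""
    predictions = []
    for x_j in unlabeled:
        # k nearest labeled samples by absolute heart-rate distance
        nearest = sorted(labeled, key=lambda s: abs(s[0] - x_j))[:k]
        votes = Counter(y for _, y in nearest)
        predictions.append((x_j, votes.most_common(1)[0][0]))
    return predictions

# 0 = normal, 1 = stressed, 2 = panic (example data only)
labeled = [(70, 0), (75, 0), (95, 1), (100, 1), (130, 2), (140, 2)]
print(knn_self_label(labeled, [72, 98, 135]))  # → [(72, 0), (98, 1), (135, 2)]
```

After self-labeling, the per-scene thresholds could be re-estimated from the boundaries between the label clusters in the enlarged labeled set.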
In the above example, the driver state monitoring module 200 corrects the heart rate thresholds for determining the state of the driver according to the recognition result and the operation intention of the driver. This means that when judging the state of the driver, not only the different degrees of tension of drivers with different driving styles facing a dangerous scene are considered, but also their different operation intentions, so that the state of the driver can be accurately identified, accidents caused by misidentifying the state of the driver can be avoided, and the reliability and safety of the AEB can be improved.
As an example, in the case where the recognition result is included in the first information or the operation intention of the driver is included, the driver state monitoring module 200 may correct the second threshold value and/or the first threshold value according to the first information. For example, the driver status monitoring module 200 may label the status of the driver according to the identification result, and the specific labeling process may refer to the description of table 6, and learn labeled data and unlabeled data according to the semi-supervised KNN algorithm, so as to obtain the modified second threshold and/or the modified first threshold. For another example, the driver state monitoring module 200 may label the state of the driver according to the operation intention of the driver, and the specific labeling process may refer to the description of table 6, and learn labeled data and unlabeled data according to a semi-supervised KNN algorithm, so as to obtain the modified second threshold and/or the modified first threshold.
Fig. 9 is a flowchart schematically illustrating a dangerous scene processing method provided by an embodiment of the present application, where the method may be implemented by a dangerous scene recognition apparatus. As shown in fig. 9, the flow of the method may include:
S901: the dangerous scene recognition module 100 may determine a first target.
For example, the dangerous scene recognition module 100 may recognize one or more objects around the first vehicle via the first vehicle's on-board radar system and/or vision system, and determine the first target based on the one or more objects. When there are multiple objects, the first target may be the object with the highest priority among them; the priorities may be as shown in table 3.
S902: the dangerous scene recognition module 100 may determine, according to the transverse motion trajectory of the first target and the transverse motion trajectory of the first vehicle, that the first vehicle and the first target are in a first scene.
For example, the first scene may be a scene in which a safety hazard exists, such as a following scene, a target cut-in scene, a host vehicle cut-out scene, or a target crossing scene, as shown in fig. 1 to 4. For the specific implementation of step S902, reference may be made to the content described in S602 in fig. 6, and details are not described herein again.
S903: the dangerous scene recognition module 100 determines whether the first scene is a dangerous scene according to the longitudinal motion trajectory of the first target and the longitudinal motion trajectory of the first vehicle, and obtains a recognition result.
For example, the recognition result includes second information, and the second information can be used to indicate that the first scene is a dangerous scene or a non-dangerous scene. Optionally, the recognition result may further include one or more of the first scene, a first collision time, a first deceleration, a first target identifier, or the like.
For example, the dangerous scene recognition module 100 may determine a first collision time and a first deceleration based on the longitudinal motion trajectory of the first target and the longitudinal motion trajectory of the first vehicle, and determine whether the first scene is a dangerous scene based on the first collision time and the first deceleration. For example, in the case where the first collision time is less than or equal to the third threshold and the first deceleration is greater than or equal to the fourth threshold, the dangerous scene recognition module 100 may determine that the first scene is a dangerous scene. Conversely, in the case where the first collision time is greater than the third threshold and/or the first deceleration is less than the fourth threshold, the dangerous scene recognition module 100 may determine that the first scene is a non-dangerous scene. The third threshold may be the minimum collision time in the first scene, and the fourth threshold may be the maximum deceleration in the first scene.
The specific implementation process of step S903 may refer to the content described in step S603 and step S604 in fig. 6, and is not described herein again.
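Under the stated comparisons, the decision in S903 reduces to a single conjunction. The sketch below illustrates it; the default threshold values are placeholders, not values from the embodiment's tables.

```python
def classify_scene(first_collision_time, first_deceleration,
                   third_threshold=2.0, fourth_threshold=4.0):
    """A scene is dangerous only when the first collision time is at or below
    the third threshold AND the first deceleration is at or above the fourth
    threshold; every other combination is non-dangerous.
    The default thresholds here are illustrative placeholders."""
    if (first_collision_time <= third_threshold
            and first_deceleration >= fourth_threshold):
        return "dangerous"
    return "non-dangerous"
```

Note that either condition failing on its own is enough to classify the scene as non-dangerous, which matches the enumeration of cases in the description above.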
S904: the dangerous scene recognition module 100 may transmit the recognition result to the execution module 400. Accordingly, the execution module 400 receives the recognition result.
S905: the driver status monitoring module 200 determines the heart rate of the driver.
For example, the driver state monitoring module 200 may acquire the heart rate of the driver through a wearable device. The wearable device may be a watch, a bracelet, or the like, and is used to detect the heart rate of the driver in real time; the embodiment of the present application is not limited thereto. The heart rate of the driver may refer to the real-time heart rate of the driver, or to the average heart rate of the driver within a set duration; the embodiment of the present application is not limited in this respect either.
S906: the driver status monitoring module 200 determines the status of the driver based on the heart rate of the driver.
For example, the driver state monitoring module 200 may determine that the state of the driver is a normal state, a stressed state, or a panic state based on the heart rate of the driver, the second threshold, and the first threshold. For example, in the case where the heart rate of the driver is less than the second threshold, the driver state monitoring module 200 may determine that the state of the driver is a normal state; in the case where the heart rate of the driver is greater than or equal to the second threshold and less than the first threshold, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; in the case where the heart rate of the driver is greater than or equal to the first threshold, the driver state monitoring module 200 may determine that the state of the driver is a panic state. The second threshold and the first threshold may be heart rate thresholds in the first scene, as shown in table 4.
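The three-way comparison in S906 can be sketched as follows; the per-scene threshold values passed in are hypothetical, since table 4 is not reproduced in this excerpt.

```python
def driver_state(heart_rate, second_threshold, first_threshold):
    """Map a heart rate to the driver state using the per-scene thresholds,
    where second_threshold < first_threshold (both from table 4)."""
    if heart_rate < second_threshold:
        return "normal"
    if heart_rate < first_threshold:  # second_threshold <= hr < first_threshold
        return "stressed"
    return "panic"                    # hr >= first_threshold
```

The boundary cases matter: a heart rate exactly at the second threshold is classified as stressed, and one exactly at the first threshold as panicked, matching the "greater than or equal to" comparisons in the description.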
S907: the driver status monitoring module 200 may send the status of the driver to the execution module 400. Accordingly, the execution module 400 receives a status of the driver.
S908: the driver operation sensing module 300 may determine the operation intention of the driver from the acceleration of the first vehicle.
For example, the driver operation sensing module 300 may determine that the operation intention of the driver is deceleration, idling, or acceleration according to the acceleration of the first vehicle. The acceleration of the first vehicle can be determined from the force with which the driver presses the accelerator pedal and/or the brake pedal. For example, in the case where the acceleration of the first vehicle is less than or equal to a first acceleration threshold, the driver has a clear intention to decelerate; in the case where the acceleration of the first vehicle is greater than or equal to a second acceleration threshold, the driver has a clear intention to accelerate; in the case where the acceleration of the first vehicle is greater than the first acceleration threshold and less than the second acceleration threshold, the driver's intention is to idle, as shown in table 5.
S909: the driver operation perception module 300 may transmit the operation intention of the driver to the execution module 400. Accordingly, the execution module 400 receives the operator intent of the driver.
S910: the execution module 400 determines whether to execute the emergency measure according to the recognition result, the state of the driver, and the operation intention of the driver.
For example, in the case where the first scene is a dangerous scene, the execution module 400 may determine to execute a corresponding emergency measure. For another example, in the case where the first scene is not a dangerous scene, the state of the driver is a panic state, and the operation intention of the driver is idling or acceleration, the execution module 400 may determine to execute a corresponding emergency measure.
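Combining the two cases above, the decision in S910 can be sketched as a single predicate. The string labels are illustrative, not identifiers from the embodiment.

```python
def should_execute_emergency(scene, driver_state, intention):
    """Execute an emergency measure when the scene is dangerous, or when a
    panicked driver is idling or accelerating in a non-dangerous scene."""
    if scene == "dangerous":
        return True
    return driver_state == "panic" and intention in ("idle", "accelerate")
```

A panicked driver who is already decelerating in a non-dangerous scene does not trigger an emergency measure, since the driver's own operation already addresses the situation.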
In one possible implementation, the driver state monitoring module 200 may send the state of the driver to the dangerous scene recognition module 100, and the driver operation sensing module 300 may send the operation intention of the driver to the dangerous scene recognition module 100. Accordingly, the dangerous scene recognition module 100 may receive the state of the driver and the operation intention of the driver. Further, the dangerous scene recognition module 100 may correct the third threshold and/or the fourth threshold, i.e., update table 2, according to the state of the driver and the operation intention of the driver.
For example, in the case where the first scene is a non-dangerous scene, the state of the driver is a stressed state or a panic state, and the operation intention of the driver is deceleration, the dangerous scene recognition module 100 may decrease the third threshold, or decrease the fourth threshold, or decrease the third threshold and the fourth threshold, as shown in equation (5) or equation (6). For example, in the case where the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration, the dangerous scene recognition module 100 may increase the third threshold, or increase the fourth threshold, or increase the third threshold and the fourth threshold, as shown in equation (7) and equation (8). For another example, in the case where the first scene is a dangerous scene, the state of the driver is a stressed state or a panic state, and the operation intention of the driver is deceleration, the dangerous scene recognition module 100 may correct the third threshold, or correct the fourth threshold, or correct the third threshold and the fourth threshold, as shown in equation (9) or equation (10).
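The three correction cases can be sketched as below. Equations (5) through (10) are not reproduced in this excerpt, so the fixed step sizes and the unchanged third case are purely illustrative assumptions.

```python
STEP_TTC = 0.5    # illustrative step for the third threshold (collision time)
STEP_DECEL = 1.0  # illustrative step for the fourth threshold (deceleration)

def update_thresholds(third, fourth, scene, driver_state, intention):
    """Adjust the third/fourth thresholds per the cases described above."""
    if (scene == "non-dangerous" and driver_state in ("stressed", "panic")
            and intention == "decelerate"):
        # Decrease per equations (5)/(6).
        return third - STEP_TTC, fourth - STEP_DECEL
    if (scene == "dangerous" and driver_state == "normal"
            and intention in ("idle", "accelerate")):
        # Increase per equations (7)/(8).
        return third + STEP_TTC, fourth + STEP_DECEL
    # A dangerous scene with a stressed/panicked, decelerating driver is
    # corrected per equations (9)/(10), which are outside this excerpt;
    # the thresholds are left unchanged in this sketch.
    return third, fourth
```

In effect, a disagreement between the scene classification and the driver's reaction nudges the thresholds, so that repeated mismatches adapt table 2 to the individual driving style.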
In another possible implementation, the dangerous scene recognition module 100 may send the recognition result to the driver state monitoring module 200, and the driver operation sensing module 300 may send the operation intention of the driver to the driver state monitoring module 200. Accordingly, the driver state monitoring module 200 may receive the recognition result and the operation intention of the driver. Further, the driver state monitoring module 200 may correct the second threshold and/or the first threshold according to the recognition result and the operation intention of the driver, that is, update table 4, so that the state of the driver can be determined for different driving styles, which improves the accuracy of recognizing the state of the driver.
It should be noted that the execution sequence of the steps in fig. 9 is only an example, and the embodiment of the present application does not limit it. For example, the execution module 400 may acquire the recognition result before the driver state monitoring module 200 determines the state of the driver, after the driver state monitoring module 200 determines the state of the driver, or while the driver state monitoring module 200 determines the state of the driver.
As shown in fig. 10, an embodiment of the present application further provides another schematic structural diagram of a dangerous scene processing apparatus, where the apparatus may implement the functions of the modules shown in fig. 5 in the foregoing embodiment, or implement the method provided in the embodiment shown in fig. 9 in the foregoing embodiment. The apparatus may include, among other things, a processor 1001. The processor 1001 is configured to implement the scheme provided in the embodiment shown in fig. 9 in the above embodiment, or implement the functions of the modules shown in fig. 5 in the above embodiment. The modules include a dangerous scene recognition module 100, a driver state monitoring module 200, a driver operation perception module 300 and an execution module 400.
Optionally, the apparatus further comprises a memory 1002, the memory 1002 being for storing computer programs or instructions. The memory 1002 may be internal to the processor or external to the processor. In the case where the unit modules described in fig. 10 are implemented by software, software or program codes required for the processor 1001 to perform the corresponding actions are stored in the memory 1002. The processor 1001 is configured to execute the program or instructions in the memory 1002 to implement the steps shown in fig. 9 in the above embodiment or implement the functions of the modules shown in fig. 5 in the above embodiment.
Optionally, the apparatus further comprises a communication interface 1003, and the communication interface 1003 may be used for communication between the apparatus and other apparatuses, for example, acquiring data collected by a vision system or a radar system, acquiring data of a sensor, and the like. The processor 1001 is configured to execute the program or instructions in the memory 1002, and the processor 1001 is coupled to the communication interface, and configured to implement the scheme provided by the embodiment shown in fig. 9 in the above embodiment or implement the functions of the modules shown in fig. 5 in the above embodiment. The modules include a dangerous scene recognition module 100, a driver state monitoring module 200, a driver operation perception module 300 and an execution module 400.
For example, the processor 1001 may be configured to: determine, according to a motion trajectory of a first vehicle and a motion trajectory of a first target, that a first scene is a non-dangerous scene or a dangerous scene, where the first scene is the scene in which the first vehicle and the first target are located, a potential safety hazard exists in the first scene, and the potential safety hazard of the dangerous scene is higher than that of the first scene; in the case where the first scene is a non-dangerous scene, determine the state of the driver according to the heart rate of the driver, and determine the operation intention of the driver to be acceleration, deceleration, or idling according to the acceleration of the first vehicle, where in the case where the heart rate of the driver is less than a second threshold, the state of the driver is a normal state, in the case where the heart rate of the driver is greater than or equal to the second threshold and less than a first threshold, the state of the driver is a stressed state, in the case where the heart rate of the driver is greater than or equal to the first threshold, the state of the driver is a panic state, and the second threshold and the first threshold are heart rate thresholds corresponding to the first scene; and execute an emergency measure in the case where the state of the driver is a panic state and the operation intention is acceleration or idling.
Optionally, the processor 1001 may obtain, through the communication interface 1003, the force with which the driver presses the accelerator pedal and/or the brake pedal, and determine the acceleration of the first vehicle according to that force. There is a correspondence between the acceleration of the first vehicle and the pressure value on the accelerator pedal and/or the brake pedal.
Optionally, the processor 1001 may obtain the heart rate monitored by the wearable device in real time through the communication interface 1003.
In one possible implementation, the processor 1001 may further be configured to: and correcting the second threshold and/or the first threshold according to the first information, wherein the first information comprises second information and/or the operation intention of the driver, and the second information is used for indicating that the first scene is a non-dangerous scene or a dangerous scene.
In one possible implementation, the processor 1001 may be configured to: determining a first collision time, which is a time required for the first vehicle to contact the first target while traveling at the current speed, and a first deceleration, which is a minimum deceleration required to enable the first vehicle to stop moving when the first vehicle comes into contact with the first target, from the movement locus of the first vehicle and the movement locus of the first target; based on the first time to collision and the first deceleration, the first scene is determined to be a non-hazardous scene or a hazardous scene.
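Under a constant-speed, same-lane model, these two quantities have simple closed forms: the first collision time is the gap divided by the closing speed, and the first deceleration is the constant deceleration that brings the closing speed to zero exactly at the point of contact (from v² = 2·a·d). This kinematic model, and measuring the deceleration against the relative rather than absolute speed, are simplifying assumptions for illustration; the embodiment derives both quantities from the motion trajectories.

```python
def first_collision_time(gap_m, v_ego_mps, v_target_mps):
    """Time for the ego vehicle to reach the target at the current speeds;
    infinite when the ego vehicle is not closing on the target."""
    closing = v_ego_mps - v_target_mps
    return gap_m / closing if closing > 0 else float("inf")

def first_deceleration(gap_m, v_ego_mps, v_target_mps):
    """Minimum constant deceleration that zeroes the closing speed over the
    gap: a = closing**2 / (2 * gap)."""
    closing = v_ego_mps - v_target_mps
    return closing ** 2 / (2 * gap_m) if closing > 0 else 0.0
```

For example, a 20 m gap with the ego vehicle closing at 10 m/s gives a 2 s collision time and a required deceleration of 2.5 m/s².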
Alternatively, the processor 1001 may receive data collected by a sensor (such as an acceleration sensor, a radar system, or a vision system) through the communication interface to determine the movement track of the first vehicle and the movement track of the first target.
In one possible implementation, the processor 1001 may be configured to: determining that the first scene is a non-dangerous scene if the first time to collision is greater than a third threshold and/or the first deceleration is less than a fourth threshold; or determining that the first scene is a dangerous scene under the condition that the first collision time is less than or equal to a third threshold value and the first deceleration is greater than or equal to a fourth threshold value; wherein the third threshold is a minimum time to collision in the first scenario and the fourth threshold is a maximum deceleration in the first scenario.
In one possible implementation, the processor 1001 may further be configured to: and modifying the third threshold value and/or the fourth threshold value according to third information, wherein the third information comprises one or more of the following items: second information for indicating that the first scene is a non-dangerous scene or a dangerous scene; the state of the driver; or, the operation intention of the driver.
In one possible embodiment, in the case that the third information includes the second information, the state of the driver, and the operation intention of the driver, the processor 1001 may be configured to: reducing the third threshold value when the first scene is a non-dangerous scene, the state of the driver is a tension state or a panic state, and the operation intention of the driver is deceleration; alternatively, when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration, the third threshold value is increased.
In one possible embodiment, in the case that the third information includes the second information, the state of the driver, and the operation intention of the driver, the processor 1001 may be configured to: reducing the fourth threshold value when the first scene is a non-dangerous scene, the state of the driver is a tension state or a panic state, and the operation intention of the driver is deceleration; alternatively, when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration, the fourth threshold value is increased.
In the case where the memory 1002 is disposed outside the processor, the memory 1002, the processor 1001, and the communication interface 1003 are connected to each other by a bus 1004, and the bus 1004 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. It should be understood that the bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 10, but that does not indicate only one bus or one type of bus.
An embodiment of the present application further provides a chip system, including: a processor coupled to a memory for storing a program or instructions that, when executed by the processor, cause the system-on-chip to implement the method of any of the above method embodiments.
Optionally, the system on a chip may have one or more processors. The processor may be implemented by hardware or by software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory.
Optionally, the memory in the system-on-chip may also be one or more. The memory may be integrated with the processor or may be separate from the processor, which is not limited in this application. For example, the memory may be a non-transitory processor, such as a read only memory ROM, which may be integrated with the processor on the same chip or separately disposed on different chips, and the type of the memory and the arrangement of the memory and the processor are not particularly limited in this application.
The system-on-chip may be, for example, a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a system on chip (SoC), a Central Processing Unit (CPU), a Network Processor (NP), a digital signal processing circuit (DSP), a Microcontroller (MCU), a Programmable Logic Device (PLD), or other integrated chips.
It will be appreciated that the steps of the above described method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The embodiment of the present application further provides a computer-readable storage medium, where computer-readable instructions are stored in the computer-readable storage medium, and when the computer-readable instructions are read and executed by a computer, the computer is enabled to execute the method in any of the above method embodiments.
The embodiments of the present application further provide a computer program product, which when read and executed by a computer, causes the computer to execute the method in any of the above method embodiments.
It should be understood that the processor mentioned in the embodiments of the present application may be a Central Processing Unit (CPU), and may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory referred to in the embodiments of the present application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct rambus RAM (DR RAM).
It should be noted that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (memory module) is integrated in the processor.
It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of the processes should be determined by their functions and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1.一种危险场景处理方法,其特征在于,包括:1. A method for processing dangerous scenes, comprising: 根据第一车辆的运动轨迹和第一目标的运动轨迹确定第一场景为非危险场景或危险场景,其中,所述第一场景为所述第一车辆和所述第一目标所处的场景,所述第一场景中存在安全隐患,所述危险场景的安全隐患高于所述第一场景;It is determined that the first scene is a non-dangerous scene or a dangerous scene according to the motion trajectory of the first vehicle and the motion trajectory of the first target, wherein the first scene is a scene where the first vehicle and the first target are located, There is a potential safety hazard in the first scenario, and the potential safety hazard of the dangerous scenario is higher than that of the first scenario; 在所述第一场景为所述非危险场景的情况下,根据驾驶员的心率确定驾驶员的状态,以及,根据第一车辆的加速度确定驾驶员的操作意图为加速、减速或怠速,其中,在所述驾驶员的心率小于第一阈值的情况下,所述驾驶员的状态为非慌张状态,在所述驾驶员的心率大于或等于所述第一阈值的情况下,所述驾驶员的状态为慌张状态,所述第一阈值为所述第一场景对应的心率阈值;When the first scene is the non-dangerous scene, the state of the driver is determined according to the heart rate of the driver, and the operation intention of the driver is determined according to the acceleration of the first vehicle to accelerate, decelerate or idle, wherein, When the driver's heart rate is less than the first threshold, the driver's state is a non-panic state, and when the driver's heart rate is greater than or equal to the first threshold, the driver's The state is a panic state, and the first threshold is the heart rate threshold corresponding to the first scene; 在所述驾驶员的状态为所述慌张状态,以及,所述操作意图为所述加速或所述怠速的情况下,执行应急措施。When the state of the driver is the panic state, and the operation intention is the acceleration or the idling, emergency measures are executed. 2.如权利要求1所述的方法,其特征在于,2. 
The method of claim 1, wherein 所述非慌张状态包括正常状态和紧张状态,其中,在所述驾驶员的心率小于第二阈值的情况下,所述驾驶员的状态为所述正常状态,在所述驾驶员的心率大于或等于所述第二阈值,且小于所述第一阈值的情况下,所述驾驶员的状态为所述紧张状态,所述第二阈值为所述第一场景对应的心率阈值,所述第一阈值小于所述第二阈值。The non-panic state includes a normal state and a nervous state, wherein, when the driver's heart rate is less than the second threshold, the driver's state is the normal state, and when the driver's heart rate is greater than or is equal to the second threshold and less than the first threshold, the state of the driver is the nervous state, the second threshold is the heart rate threshold corresponding to the first scene, and the first The threshold value is smaller than the second threshold value. 3.如权利要求2所述的方法,其特征在于,所述方法还包括:3. The method of claim 2, wherein the method further comprises: 根据第一信息修正所述第一阈值和/或所述第二阈值,其中,所述第一信息包括第二信息和/或所述驾驶员的操作意图,所述第二信息用于指示所述第一场景为所述非危险场景或所述危险场景。The first threshold and/or the second threshold are modified according to first information, wherein the first information includes second information and/or the driver's operation intention, and the second information is used to indicate the The first scene is the non-dangerous scene or the dangerous scene. 4.如权利要求2或3所述的方法,其特征在于,所述根据第一车辆的运动轨迹和第一目标的运动轨迹确定第一场景为非危险场景或危险场景,包括:4. 
The method according to claim 2 or 3, wherein the determining that the first scene is a non-dangerous scene or a dangerous scene according to the motion track of the first vehicle and the motion track of the first target, comprising: 根据所述第一车辆的运动轨迹和所述第一目标的运动轨迹确定第一碰撞时间和第一减速度,所述第一碰撞时间是所述第一车辆在以当前速度行驶的情况下接触所述第一目标所需的时间,所述第一减速度是使得所述第一车辆与所述第一目标接触时所述第一车辆能够停止运动所需的最小减速度;A first collision time and a first deceleration are determined according to the movement trajectory of the first vehicle and the movement trajectory of the first target, and the first collision time is when the first vehicle contacts the vehicle at the current speed the time required for the first target, the first deceleration being the minimum deceleration required to enable the first vehicle to stop moving when the first vehicle is in contact with the first target; 根据所述第一碰撞时间和所述第一减速度,确定所述第一场景为非危险场景或危险场景。According to the first collision time and the first deceleration, it is determined that the first scene is a non-dangerous scene or a dangerous scene. 5.如权利要求4所述的方法,其特征在于,根据所述第一碰撞时间和所述第一减速度,确定所述第一场景为非危险场景或危险场景,包括:5. 
The method of claim 4, wherein determining the first scene as a non-hazardous scene or a dangerous scene according to the first collision time and the first deceleration, comprising: 在所述第一碰撞时间大于第三阈值,和/或,所述第一减速度小于第四阈值的情况下,确定所述第一场景为非危险场景;或者,When the first collision time is greater than a third threshold, and/or the first deceleration is less than a fourth threshold, it is determined that the first scene is a non-hazardous scene; or, 在所述第一碰撞时间小于或等于第三阈值,且所述第一减速度大于或等于第四阈值的情况下,确定所述第一场景为危险场景;When the first collision time is less than or equal to a third threshold, and the first deceleration is greater than or equal to a fourth threshold, determining that the first scene is a dangerous scene; 其中,所述第三阈值是第一场景下避免发生碰撞所需的最小碰撞时间,所述第四阈值是所述第一场景下避免发生碰撞汽车所支持的最大减速度。Wherein, the third threshold is the minimum collision time required to avoid collision in the first scenario, and the fourth threshold is the maximum deceleration supported by the collision-avoiding car in the first scenario. 6.如权利要求5所述的方法,其特征在于,所述方法还包括:6. The method of claim 5, wherein the method further comprises: 根据第三信息修正所述第三阈值和/或所述第四阈值,其中,所述第三信息包括如下一项或多项:The third threshold and/or the fourth threshold are modified according to third information, wherein the third information includes one or more of the following: 第二信息,所述第二信息用于指示所述第一场景为所述非危险场景或所述危险场景;second information, where the second information is used to indicate that the first scene is the non-dangerous scene or the dangerous scene; 所述驾驶员的状态;或,the status of said driver; or, 所述驾驶员的操作意图。The driver's operation intention. 7.如权利要求6所述的方法,其特征在于,在所述第三信息包括所述第二信息、所述驾驶员的状态和所述驾驶员的操作意图的情况下,根据第三信息修正所述第三阈值,包括:7. 
The method according to claim 6, wherein, in the case that the third information includes the second information, the state of the driver, and the operation intention of the driver, modifying the third threshold according to the third information comprises: decreasing the third threshold when the first scene is a non-dangerous scene, the state of the driver is a nervous state or a panic state, and the operation intention of the driver is deceleration; or increasing the third threshold when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration.

8. The method according to claim 6 or 7, wherein, in the case that the third information includes the second information, the state of the driver, and the operation intention of the driver, modifying the fourth threshold according to the third information comprises: decreasing the fourth threshold when the first scene is a non-dangerous scene, the state of the driver is a nervous state or a panic state, and the operation intention of the driver is deceleration; or increasing the fourth threshold when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration.

9.
A dangerous scene processing apparatus, comprising: a dangerous scene identification module, a driver state monitoring module, a driver operation perception module, and a dangerous scene execution module; wherein the dangerous scene identification module is configured to determine that a first scene is a non-dangerous scene or a dangerous scene according to the motion trajectory of a first vehicle and the motion trajectory of a first target, wherein the first scene is the scene in which the first vehicle and the first target are located, a potential safety hazard exists in the first scene, and the potential safety hazard of the dangerous scene is higher than that of the first scene; the driver state monitoring module is configured to determine the state of the driver according to the driver's heart rate when the first scene is the non-dangerous scene, wherein the state of the driver is a non-panic state when the driver's heart rate is less than a first threshold, and the state of the driver is a panic state when the driver's heart rate is greater than or equal to the first threshold, the first threshold being the heart rate threshold corresponding to the first scene; the driver operation perception module is configured to determine, when the first scene is the non-dangerous scene, whether the operation intention of the driver is acceleration, deceleration, or idling according to the acceleration of the first vehicle; and the execution module is configured to execute an emergency measure when the state of the driver is the panic state and the operation intention is the
acceleration or the idling.

10. The apparatus according to claim 9, wherein the non-panic state includes a normal state and a nervous state, wherein the state of the driver is the normal state when the driver's heart rate is less than a second threshold, and the state of the driver is the nervous state when the driver's heart rate is greater than or equal to the second threshold and less than the first threshold, the second threshold being the heart rate threshold corresponding to the first scene and being less than the first threshold.

11. The apparatus according to claim 10, wherein the driver state monitoring module is further configured to: modify the first threshold and/or the second threshold according to first information, wherein the first information includes second information and/or the operation intention of the driver, and the second information is used to indicate that the first scene is the non-dangerous scene or the dangerous scene.

12.
The apparatus according to claim 10 or 11, wherein the dangerous scene identification module is specifically configured to: determine a first collision time and a first deceleration according to the motion trajectory of the first vehicle and the motion trajectory of the first target, wherein the first collision time is the time required for the first vehicle to contact the first target when traveling at its current speed, and the first deceleration is the minimum deceleration required for the first vehicle to stop moving by the time it contacts the first target; and determine that the first scene is a non-dangerous scene or a dangerous scene according to the first collision time and the first deceleration.

13. The apparatus according to claim 12, wherein the dangerous scene identification module is specifically configured to: determine that the first scene is a non-dangerous scene when the first collision time is greater than a third threshold and/or the first deceleration is less than a fourth threshold; or determine that the first scene is a dangerous scene when the first collision time is less than or equal to the third threshold and the first deceleration is greater than or equal to the fourth threshold; wherein the third threshold is the minimum collision time required to avoid a collision in the first scene, and the fourth threshold is the maximum deceleration the vehicle supports for avoiding a collision in the first scene.

14.
The apparatus according to claim 13, wherein the dangerous scene identification module is further configured to: modify the third threshold and/or the fourth threshold according to third information, wherein the third information includes one or more of the following: second information, the second information being used to indicate that the first scene is the non-dangerous scene or the dangerous scene; the state of the driver; or the operation intention of the driver.

15. The apparatus according to claim 14, wherein, in the case that the third information includes the second information, the state of the driver, and the operation intention of the driver, the dangerous scene identification module is specifically configured to: decrease the third threshold when the first scene is a non-dangerous scene, the state of the driver is a nervous state or a panic state, and the operation intention of the driver is deceleration; or increase the third threshold when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration.

16.
The apparatus according to claim 14 or 15, wherein, in the case that the third information includes the second information, the state of the driver, and the operation intention of the driver, the dangerous scene identification module is specifically configured to: decrease the fourth threshold when the first scene is a non-dangerous scene, the state of the driver is a nervous state or a panic state, and the operation intention of the driver is deceleration; or increase the fourth threshold when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration.

17. A dangerous scene processing apparatus, comprising a processor coupled to a memory, the memory being configured to store a program or instructions which, when executed by the processor, cause the dangerous scene processing apparatus to perform the method according to any one of claims 1 to 8.

18. A computer storage medium, comprising computer instructions which, when run on an apparatus, cause the apparatus to perform the dangerous scene processing method according to any one of claims 1 to 8.
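Read together, claims 4 through 10 describe a compact decision procedure: classify the scene from time-to-collision and required deceleration, classify the driver from heart rate, intervene when a panicked driver fails to brake, and adapt the thresholds from outcomes. A minimal Python sketch of that reading follows; the function names, state labels, and numeric step sizes are illustrative assumptions for exposition, not the patentee's implementation:

```python
# Illustrative sketch of the decision logic recited in claims 4-10.
# All names and step sizes are assumptions, not taken from the patent.

def classify_scene(ttc, required_decel, third_threshold, fourth_threshold):
    """Claims 4-5: the scene is non-dangerous if the time-to-collision
    exceeds the third threshold and/or the deceleration needed to stop
    before contact is below the fourth threshold; otherwise dangerous."""
    if ttc > third_threshold or required_decel < fourth_threshold:
        return "non-dangerous"
    return "dangerous"


def driver_state(heart_rate, first_threshold, second_threshold):
    """Claims 9-10: panic at or above the first (higher) heart-rate
    threshold, nervous between the two thresholds, normal below both."""
    if heart_rate >= first_threshold:
        return "panic"
    if heart_rate >= second_threshold:
        return "nervous"
    return "normal"


def should_intervene(scene, state, intent):
    """Claim 9: even in a scene rated non-dangerous, execute an
    emergency measure if the driver panics but is not braking."""
    return (scene == "non-dangerous" and state == "panic"
            and intent in ("accelerate", "idle"))


def adjust_thresholds(third_threshold, fourth_threshold, scene, state, intent,
                      step_t=0.1, step_d=0.2):
    """Claims 6-8: shrink both thresholds after an apparent false alarm
    (non-dangerous scene, nervous or panicked driver braking); grow them
    after an apparent miss (dangerous scene, calm driver idling or
    accelerating). step_t and step_d are illustrative step sizes."""
    if scene == "non-dangerous" and state in ("nervous", "panic") and intent == "decelerate":
        return third_threshold - step_t, fourth_threshold - step_d
    if scene == "dangerous" and state == "normal" and intent in ("idle", "accelerate"):
        return third_threshold + step_t, fourth_threshold + step_d
    return third_threshold, fourth_threshold
```

Under this reading, the scene classifier and the driver-state monitor run independently, and the threshold adjustment closes the loop by using the driver's reaction as feedback on the classifier's verdict.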
CN202011194437.5A 2020-10-30 2020-10-30 A method and device for handling dangerous scenes Active CN114435356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011194437.5A CN114435356B (en) 2020-10-30 2020-10-30 A method and device for handling dangerous scenes


Publications (2)

Publication Number Publication Date
CN114435356A true CN114435356A (en) 2022-05-06
CN114435356B CN114435356B (en) 2024-11-22

Family

ID=81358233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011194437.5A Active CN114435356B (en) 2020-10-30 2020-10-30 A method and device for handling dangerous scenes

Country Status (1)

Country Link
CN (1) CN114435356B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114954384A (en) * 2022-05-16 2022-08-30 Haomo Zhixing Technology Co., Ltd. Brake control method, device and system, and vehicle
CN116605226A (en) * 2023-07-07 2023-08-18 China FAW Co., Ltd. Emergency braking method and device for vehicle, electronic equipment and storage medium
CN120510715A (en) * 2025-07-22 2025-08-19 Xi'an Thermal Power Research Institute Co., Ltd. Rule-judgment-based vehicle abnormal driving behavior identification method and identification system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008062852A (en) * 2006-09-08 2008-03-21 Fujitsu Ten Ltd Vehicle controller
CN107316436A (en) * 2017-07-31 2017-11-03 Nubia Technology Co., Ltd. Dangerous driving state processing method, electronic equipment and storage medium
CN108263359A (en) * 2016-12-29 2018-07-10 Bosch Automotive Products (Suzhou) Co., Ltd. System and method for preventing a vehicle from accidentally accelerating
CN109801511A (en) * 2017-11-16 2019-05-24 Huawei Technologies Co., Ltd. Anti-collision warning method and device
CN109835348A (en) * 2019-01-25 2019-06-04 China Automotive Technology and Research Center Co., Ltd. Screening method and device for dangerous road traffic scenes
CN110088816A (en) * 2016-12-21 2019-08-02 Samsung Electronics Co., Ltd. Electronic device and method for operating the electronic device
CN110942671A (en) * 2019-12-04 2020-03-31 Beijing Jingdong Qianshi Technology Co., Ltd. Vehicle dangerous driving detection method, device and storage medium
CN111016914A (en) * 2019-11-22 2020-04-17 East China Jiaotong University Dangerous driving scene identification system based on portable terminal information and identification method thereof
CN111186444A (en) * 2018-11-13 2020-05-22 Hyundai Motor Company Vehicle and control method thereof
CN111354183A (en) * 2018-12-20 2020-06-30 Shanghai Pateo Yuezhen Electronic Equipment Manufacturing Co., Ltd. Early warning information push method and terminal



Also Published As

Publication number Publication date
CN114435356B (en) 2024-11-22

Similar Documents

Publication Publication Date Title
US8849557B1 (en) Leveraging of behavior of vehicles to detect likely presence of an emergency vehicle
US9925989B2 (en) Vehicle control apparatus and method for operation in passing lane
US20220083056A1 (en) Alerting predicted accidents between driverless cars
CN104386063B (en) Drive assist system based on artificial intelligence
US20170248949A1 (en) Alerting predicted accidents between driverless cars
CN110164183A (en) A kind of safety assistant driving method for early warning considering his vehicle driving intention under the conditions of truck traffic
CN102074096B (en) Method and control device for fatigue recognition
CN108099819B (en) A lane departure warning system and method
CN107783536B (en) Enhanced lane detection
CN114435356B (en) A method and device for handling dangerous scenes
JP7434882B2 (en) Dangerous driving determination device, dangerous driving determination method, and dangerous driving determination program
CN107851377A (en) Drive apparatus for evaluating
CN111526311B (en) Method and system for judging driving user behavior, computer equipment and storage medium
CN110675633B (en) Method for determining an illegal driving behavior, control unit and storage medium
CN110103955A (en) A kind of vehicle early warning method, device and electronic equipment
CN115042782B (en) Vehicle cruise control method, system, equipment and medium
CN106383918A (en) System for distinguishing reasonability of emergency braking behavior and distinguishing method
WO2023132055A1 (en) Evaluation device, evaluation method, and program
CN111591294B (en) Early warning method for vehicle lane change in different traffic environments
CN117272690B (en) Method, equipment and medium for extracting dangerous cut-in scene of automatic driving vehicle
JP2024152782A (en) Information processing device, information processing method, program, and storage medium
CN117601858A (en) Method, equipment and system for avoiding rear-end collision of vehicle
CN117593874A (en) Traffic light indication system with suppression notice for vehicle
CN117864166A (en) Vehicle avoidance control method and device, automobile and storage medium
US20220386091A1 (en) Motorcycle monitoring system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant