Detailed Description
The application provides a dangerous scene processing method and device, aiming at identifying dangerous scenes more accurately so as to respond in time and ensure driving safety. The method and the device are based on the same technical conception; because the principles by which the method and the device solve the problem are similar, the embodiments of the device and the method may refer to each other, and repeated parts are not described again.
Some terms of the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
1) A potentially dangerous scene, which may be referred to as a dangerous scene for short, may refer to a scene in which the first vehicle and the target are located, or a scene that the first vehicle and the target will face, in which there is a high potential safety hazard; that is, if the driver or the AEB does not respond (or does not respond in time), the first vehicle may collide, affecting the safety of the first vehicle. For example, a scene with a potential safety hazard, such as a following scene, a target cut-in scene, a host-vehicle cut-out scene, or a target crossing scene, may develop into a dangerous scene; that is, the potential safety hazard of the dangerous scene is higher than that of the following scene (or the target cut-in scene, the host-vehicle cut-out scene, or the target crossing scene).
The following scene may refer to a scene in which the first vehicle travels behind another vehicle. As shown in fig. 1, the first vehicle travels in the arrow direction following vehicle A. Because vehicle A is located in front of the first vehicle and blocks the view ahead, the driver of the first vehicle cannot sense the road condition ahead in time; when vehicle A suddenly decelerates or brakes, the driver may have no time to respond (e.g., decelerate or brake) or may mistakenly step on the accelerator under tension, so that a potential safety hazard (e.g., a rear-end collision) occurs and the safety of the first vehicle is affected.
The target cut-in scene may refer to a scene in which another vehicle cuts into the lane in which the first vehicle is located. As shown in fig. 2, the first vehicle travels in the traveling direction in lane 1, and vehicle A cuts into lane 1 from lane 2 in the arrow direction. Because vehicle A accelerates to cut in at a short distance, the driver of the first vehicle may have no time to respond (e.g., decelerate or brake) or may mistakenly step on the accelerator under tension, so that a potential safety hazard (e.g., a collision) occurs, which is not conducive to the safe driving of the first vehicle.
The host-vehicle cut-out scene may refer to a scene in which the first vehicle cuts into an adjacent lane from the current lane, that is, a scene in which the first vehicle changes lanes. As shown in fig. 3, the traveling direction of lanes 1 and 2 is opposite to that of lanes 3 and 4, and the first vehicle may cut into lane 1 or lane 3 from lane 2. For example, the first vehicle accelerates to cut into lane 1, where vehicle A is located, at a short distance in the arrow direction; because the first vehicle accelerates to change lanes at a short distance, the driver of vehicle A may have no time to respond (e.g., decelerate or brake) or may mistakenly step on the accelerator under tension, so that a safety hazard (e.g., a collision) occurs, which is not conducive to the safe driving of the first vehicle. For another example, the first vehicle turns around in the arrow direction to cut into lane 3, where vehicle C is located; because the first vehicle changes lanes at a short distance, the driver of vehicle C may have no time to respond (e.g., decelerate, change lanes, or brake) or may mistakenly step on the accelerator under tension, so that a safety hazard (e.g., a collision) occurs, which is not conducive to the safe driving of the first vehicle.
The target crossing scene may refer to a scene in which a target crosses the traveling direction of the first vehicle, where the target may be a pedestrian, an animal, a vehicle, or the like. As shown in fig. 4, when the first vehicle is driving forward in the traveling direction and a pedestrian crosses the traveling direction of the first vehicle, if the first vehicle is fast or the pedestrian crosses suddenly, the driver may have no time to respond (e.g., decelerate, brake, or change lanes) or may mistakenly step on the accelerator under tension, so that a safety hazard (e.g., a collision) occurs and the safety of the first vehicle is affected.
2) A potentially dangerous object, also referred to as a target for short, may refer to an object that is located around the first vehicle and that may collide with the first vehicle; such an object may cause a safety hazard that affects the safety of the first vehicle. For example, the target may be the followed object in a following scene, such as vehicle A in fig. 1; or the target that cuts into the lane of the first vehicle in a target cut-in scene, such as vehicle A in fig. 2; or the target in the lane into which the first vehicle cuts in a host-vehicle cut-out scene, such as vehicle A or vehicle C in fig. 3; or the object that crosses the traveling direction of the first vehicle in a target crossing scene, such as the pedestrian in fig. 4.
The target, or potentially dangerous object, may be a vehicle, a pedestrian, a building, etc., where the vehicle may be a motor vehicle (i.e., an automobile) and/or a non-motor vehicle (e.g., an electric bicycle). The traveling direction of the target may be the same as or different from that of the first vehicle; this is not limited in the embodiment of the present application.
Since the target is located around the first vehicle in a dangerous scene, the driver may not sense in time that the first vehicle is located, or will be located, in a dangerous scene, and thus cannot respond in time (e.g., by braking or avoiding) to the target in scenes such as close-distance rapid following, cut-in, or crossing; this causes a potential safety hazard and affects the safety of the first vehicle. Therefore, accurately identifying dangerous scenes is very important, so that the driver can respond in time or in advance.
3) The lateral direction, referred to as lateral for short, may refer to the direction perpendicular to the head of the first vehicle.
4) The longitudinal direction, referred to as longitudinal for short, may refer to the direction parallel to the head of the first vehicle.
In the embodiments of the present application, a vehicle mainly refers to a traveling automobile. Unless otherwise specified, the driver refers to the driver of the first vehicle.
The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes the association relationship of the associated objects, indicating that three relationships may exist. For example, "A and/or B" may represent: A alone, both A and B, and B alone, where A and B may each be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The embodiments of the present application refer to "at least one", which includes one or more, wherein "a plurality" means two or more. It is to be understood that the terms "first", "second", and the like in the description of the present application are used for distinguishing between descriptions and are not to be construed as indicating or implying a sequential or chronological order or relative importance.
In order to facilitate understanding of the embodiments of the present application, the technical features related to the present application will be described.
The AEB is used to ensure the safe driving of a vehicle and plays an important role in the field of safe driving. At present, the accuracy of the AEB in identifying dangerous scenes (e.g., collision, rear-end collision, and the like) is low, which is not conducive to the safe driving of the vehicle. For example, in the case that a dangerous scene is not identified (e.g., the driver mistakenly steps on the accelerator in the dangerous scene), the AEB determines that the vehicle is performing normal acceleration behavior and does not respond, so the AEB cannot guarantee the safe driving of the vehicle, and the safety and reliability of the AEB are reduced. For another example, in the case that an upcoming dangerous scene is recognized too late, the response time of the AEB is reduced, so the AEB has no time to warn the driver, cannot guarantee the safe driving of the vehicle, and its safety and reliability are reduced. For another example, in the case of false recognition of a dangerous scene (e.g., a driver with an aggressive driving style overtaking close to another car), a danger may not actually occur, but the AEB may frequently warn the driver, which may greatly disturb the driver (e.g., the driver, whose attention is disturbed, turns off the warning given by the AEB) and is not conducive to safe driving.
In view of this, embodiments of the present application provide a method and an apparatus for processing a dangerous scene, so as to avoid false identification and missed identification of dangerous scenes, thereby accurately identifying dangerous scenes and improving the safety and reliability of driving.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Fig. 5 is a structural diagram of a dangerous scene recognition apparatus according to an embodiment of the present application. The dangerous scene recognition apparatus may be located in a vehicle-mounted device or a roadside device. As shown in fig. 5, the dangerous scene recognition apparatus may include a dangerous scene recognition module 100, a driver state monitoring module 200, a driver operation perception module 300, and an execution module 400.
The in-vehicle device may be a device installed in a vehicle for safe driving; for example, the in-vehicle device includes an AEB. The form of the in-vehicle device is not limited in the embodiment of the present application. The roadside device may be a device installed at the roadside for safe driving, for example, as part of a lane control system; the form of the roadside device is likewise not limited in the embodiment of the present application.
It should be noted that, in the embodiment of the present application, the division manner and naming of the modules of the dangerous scene recognition apparatus are merely an example, and the embodiment of the present application is not limited thereto. For example, the dangerous scene recognition device may be further divided into a recognition module, a sensing module and an emergency module, where the recognition module may be configured to implement the function implemented by the dangerous scene recognition module 100 in the embodiment of the present application, the sensing module may be configured to implement the functions implemented by the driver state monitoring module 200 and the driver operation sensing module 300 in the embodiment of the present application, and the emergency module may be configured to implement the function implemented by the execution module 400 in the embodiment of the present application. Alternatively, the dangerous scene recognition device may have other module division modes. In addition, the names of the modules are not limited in the embodiment of the present application, and the names of the dangerous scene recognition module 100, the driver state monitoring module 200, the driver operation sensing module 300, the execution module 400, and the like in the embodiment of the present application are only examples.
The dangerous scene recognition module 100 according to the embodiment of the present application may be configured to determine the first target, predict a motion trajectory of the first vehicle and a motion trajectory of the first target, and determine whether a scene where the first vehicle and the first target are located is a dangerous scene or determine whether a scene where the first vehicle and the first target will face is a dangerous scene according to the motion trajectory of the first vehicle and the motion trajectory of the first target.
For convenience of description, a scene in which the first vehicle and the first object are located or a scene to which the first vehicle and the first object will face is hereinafter simply referred to as a first scene.
It should be noted that the first scene in the embodiment of the present application may refer to a scene in which a potential safety hazard exists; for example, the first scene may be a following scene, a target cut-in scene, a host-vehicle cut-out scene, or a target crossing scene. The first scene may develop into a dangerous scene, and the potential safety hazard of the first scene is lower than that of the dangerous scene.
For example, the process of the dangerous scene recognition module 100 determining whether the first scene is a dangerous scene may be as shown in fig. 6. As shown in fig. 6, the process may include:
Step S601: the dangerous scene recognition module 100 determines the movement trajectory of the first vehicle and the movement trajectory of the first target.
For example, the dangerous scene recognition module 100 may acquire the traveling data of the first vehicle through one or more of a speed sensor, a steering wheel sensor, or a sensor of an on-board system (e.g., a radar system and/or a vision system) installed on the first vehicle, and predict the movement trajectory of the first vehicle according to the traveling data. The movement trajectory of the first vehicle may include a lateral movement trajectory and a longitudinal movement trajectory of the first vehicle, which may be respectively denoted as x_e(t) and y_e(t). The traveling data may include one or more of traveling speed, lane line information, traveling direction, and the like.
For example, the dangerous scene recognition module 100 may determine a historical movement trajectory of the first target relative to the first vehicle through a radar system and/or a vision system installed on the first vehicle, restore that historical movement trajectory to a reference coordinate system, and predict the movement trajectory of the first target through an algorithm such as particle swarm optimization. The movement trajectory of the first target may include a lateral movement trajectory and a longitudinal movement trajectory of the first target, which may be respectively denoted as x_o(t) and y_o(t). The reference coordinate system may refer to the coordinate system used for predicting the movement trajectory of the first vehicle; for example, the reference coordinate system takes the first vehicle as the origin, the direction perpendicular to the head of the first vehicle as the lateral coordinate axis, and the direction parallel to the head of the first vehicle as the longitudinal coordinate axis, as shown in fig. 7.
Taking the target cut-in scenario as an example, the first target intends to cut from lane 1 to lane 2 where the first vehicle is located, and the first target is vehicle a, as shown in fig. 8. The dangerous scene recognition module 100 may predict a moving trajectory of the first vehicle, as indicated by a thin dotted arrow in fig. 8, from the traveling data of the first vehicle, and predict a moving trajectory of the vehicle a, as indicated by a thick dotted arrow in fig. 8, from a historical moving trajectory of the vehicle a.
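The trajectory prediction of step S601 can be sketched under a simple constant-velocity assumption; the embodiment itself allows richer predictors (e.g., particle swarm optimization), so the function name, sampling horizon, and constant-velocity model below are illustrative assumptions only, expressed in the reference frame of fig. 7 (origin at the first vehicle, x lateral, y longitudinal).

```python
def predict_trajectory(x0, y0, vx, vy, horizon=3.0, dt=0.1):
    """Predict lateral x(t) and longitudinal y(t) positions.

    x0, y0: current position (m); vx, vy: lateral/longitudinal speed (m/s).
    Returns a list of (t, x, y) samples over the prediction horizon,
    assuming constant velocity (an illustrative simplification).
    """
    samples = []
    t = 0.0
    while t <= horizon + 1e-9:
        samples.append((t, x0 + vx * t, y0 + vy * t))
        t += dt
    return samples

# First vehicle at the origin moving longitudinally at 15 m/s.
ego = predict_trajectory(0.0, 0.0, 0.0, 15.0)
# Hypothetical target (vehicle A): 3.5 m to the side, 20 m ahead,
# drifting laterally toward the first vehicle's lane at 1.2 m/s.
target = predict_trajectory(3.5, 20.0, -1.2, 12.0)
```

A real implementation would fit these trajectories from the sensed driving data and the restored historical trajectory rather than from hand-picked constants.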
Step S602: the dangerous scene recognition module 100 determines the first scene according to the lateral movement trajectory of the first vehicle and the lateral movement trajectory of the first target.
For example, the dangerous scene recognition module 100 may determine the first scene according to the lateral movement trajectory x_e(t) of the first vehicle and the lateral movement trajectory x_o(t) of the first target. The first scene in the embodiment of the present application may be a scene with a potential safety hazard. For example, the first scene may include, but is not limited to, a following scene, a target cut-in scene, a host-vehicle cut-out scene, a target crossing scene, and the like, as shown in table 1.
Table 1: first scene
For example, in a case where the first target is located in front of the first vehicle, the lateral movement trajectory of the first vehicle is less than or equal to a fifth threshold, and the lateral movement trajectory of the first target is less than or equal to a sixth threshold, the dangerous scene recognition module 100 may determine that the first scene is a following scene, which may be as shown in fig. 1. The fifth threshold and the sixth threshold may each be preset, and may be the same or different.
For example, in a case where the first target is located to either side of the first vehicle, the lateral movement trajectory of the first vehicle is less than or equal to the fifth threshold, and the lateral movement trajectory of the first target is greater than the sixth threshold, the dangerous scene recognition module 100 may determine that the first scene is a target cut-in scene, which may be as shown in fig. 2.
For example, in a case where the first target is located to either side of the first vehicle, the lateral movement trajectory of the first vehicle is greater than the fifth threshold, and the lateral movement trajectory of the first target is less than or equal to the sixth threshold, the dangerous scene recognition module 100 may determine that the first scene is a host-vehicle cut-out scene, which may be as shown in fig. 3. Further, the dangerous scene recognition module 100 may also determine that the first scene is a host-vehicle cut-out scene in combination with data collected by the steering wheel sensor.
For another example, when the first target is located in front of or to either side of the first vehicle, the lateral movement trajectory of the first vehicle is less than or equal to the fifth threshold, and the lateral movement trajectory of the first target is greater than a seventh threshold, the dangerous scene recognition module 100 may determine that the first scene is a target crossing scene, which may be as shown in fig. 4. The seventh threshold may be preset and may be greater than or equal to the sixth threshold.
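The scene-determination rules of step S602 can be summarized in a short sketch. The threshold values, the position encoding, and the function name are assumptions for illustration, not values from the embodiment; the lateral quantities are taken as the magnitude of predicted lateral displacement over the horizon.

```python
def classify_first_scene(target_position, ego_lat, target_lat,
                         th5=0.5, th6=0.5, th7=1.0):
    """Return the first-scene label from lateral motion.

    target_position: 'front' or 'side' relative to the first vehicle
    (an assumed encoding). ego_lat / target_lat: lateral displacement
    magnitudes (m). th5/th6/th7: the fifth/sixth/seventh thresholds
    (illustrative values; th7 >= th6 as stated in the text).
    """
    if target_position == 'front' and ego_lat <= th5 and target_lat <= th6:
        return 'following'
    if target_position == 'side' and ego_lat <= th5 and target_lat > th6:
        return 'target cut-in'
    if target_position == 'side' and ego_lat > th5 and target_lat <= th6:
        return 'host-vehicle cut-out'
    if target_position == 'front' and ego_lat <= th5 and target_lat > th7:
        return 'target crossing'
    return 'none'
```

In practice the host-vehicle cut-out branch could additionally consult steering wheel sensor data, as the text notes.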
Step S603: the dangerous scene recognition module 100 determines the first collision time and the first deceleration according to the longitudinal movement trajectory of the first vehicle and the longitudinal movement trajectory of the first target.
For example, the first collision time may be the time until the first vehicle contacts the first target when traveling at the current speed. The first deceleration may be the minimum deceleration required for the first vehicle to stop closing on the first target before contacting it. For example, the dangerous scene recognition module 100 may determine the first collision time based on the longitudinal movement trajectory of the first vehicle, the longitudinal movement trajectory of the first target, and a preset safe distance. For example, the first collision time may be the root of equation (1).
y_e(ttc) = y_o(ttc) + safeDis  (1)

where safeDis may represent the preset safe distance, and ttc may represent the first collision time. y_e(ttc) may represent the distance that the first vehicle moves in the longitudinal direction when traveling at the current speed for ttc, and y_o(ttc) may represent the distance that the first target moves in the longitudinal direction when traveling at the current speed for ttc.
When equation (1) has a root, the dangerous scene recognition module 100 may determine the first deceleration according to the longitudinal movement trajectory of the first vehicle, the longitudinal movement trajectory of the first target, the preset safe distance, and the first collision time. For example, the first deceleration may be calculated by equations (2) and (3).
y'_e(t_coll) − dea_min × t_coll = y'_o(t_coll)  (2)

y_e(t_coll) = y_o(t_coll) + safeDis  (3)

where dea_min may represent the first deceleration, and t_coll may represent the time required for the longitudinal speed of the first vehicle to decelerate to the longitudinal speed of the first target. y_e(t_coll) may represent the distance the first vehicle moves in the longitudinal direction when traveling for t_coll, and y_o(t_coll) may represent the distance the first target moves in the longitudinal direction when traveling for t_coll. Equation (2) may be used to indicate that, at t_coll, the longitudinal speed of the first vehicle is the same as the longitudinal speed of the first target.
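Under a constant-longitudinal-speed assumption, equations (1) through (3) have closed-form solutions, which the following sketch computes. The function names, the default safe distance, and the constant-speed simplification are illustrative assumptions, not part of the embodiment.

```python
def first_collision_time(gap, v_e, v_o, safe_dis=2.0):
    """Solve eq. (1) for ttc assuming constant longitudinal speeds.

    gap: current longitudinal distance to the target (m); v_e, v_o:
    longitudinal speeds (m/s) of the first vehicle and the target.
    Returns ttc in seconds, or None when eq. (1) has no positive root
    (the first vehicle is not closing, or the gap is already unsafe).
    """
    closing = v_e - v_o
    if closing <= 0.0 or gap <= safe_dis:
        return None
    return (gap - safe_dis) / closing

def first_deceleration(gap, v_e, v_o, safe_dis=2.0):
    """Solve eqs. (2)-(3) for dea_min under the same assumption.

    The minimum deceleration brings the closing speed to zero (eq. (2))
    exactly when the gap has shrunk to the safe distance (eq. (3)):
    dea_min = (v_e - v_o)**2 / (2 * (gap - safe_dis)).
    """
    closing = v_e - v_o
    if closing <= 0.0:
        return 0.0          # already opening; no braking needed
    if gap <= safe_dis:
        return float('inf') # gap already inside the safe distance
    return closing ** 2 / (2.0 * (gap - safe_dis))
```

For example, with a 20 m gap, v_e = 15 m/s, v_o = 10 m/s, and a 2 m safe distance, ttc = 3.6 s and dea_min = 25/36 ≈ 0.69 m/s².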
Step S604: the dangerous scene recognition module 100 determines whether the first scene is a dangerous scene according to the first collision time and the first deceleration.
For example, the dangerous scene recognition module 100 may compare the first collision time with a third threshold and the first deceleration with a fourth threshold to determine whether the first scene is a dangerous scene. The third threshold may be the minimum collision time in the first scene, and the fourth threshold may be the maximum deceleration in the first scene. For example, the manufacturer may collect, through big data, the traveling speed, whether a collision occurred, the deceleration, the safe distance, and other data of vehicles in the first scene, analyze the data to obtain the minimum collision time and the maximum deceleration in the first scene, and configure these values into the vehicle when it leaves the factory.
For example, in the event that the first collision time is less than or equal to the third threshold and the first deceleration is greater than or equal to the fourth threshold, the dangerous scene recognition module 100 may determine that the first scene is a dangerous scene. For example, the dangerous scene may satisfy equation (4):
ttc ≤ t_c & dea_min ≥ dea_c  (4)

where t_c may represent the third threshold, and dea_c may represent the fourth threshold.
For example, in the event that the first collision time is greater than the third threshold, the dangerous scene recognition module 100 may determine that the first scene is not yet truly dangerous and is not a dangerous scene, i.e., equation (4) is not satisfied. Likewise, in the event that the first deceleration is less than the fourth threshold, the dangerous scene recognition module 100 may determine that the first scene is not a dangerous scene. For another example, where the first collision time is greater than the third threshold and the first deceleration is less than the fourth threshold, the dangerous scene recognition module 100 may likewise determine that the first scene is not yet truly dangerous, i.e., equation (4) is not satisfied.
For example, in the following scene, the minimum collision time may be 1.7 seconds (s) and the maximum deceleration may be 0.4g (g represents the gravitational acceleration); in the case where the first collision time is less than or equal to 1.7 s and the first deceleration is greater than or equal to 0.4g, the dangerous scene recognition module 100 may determine that the following scene is a dangerous scene. In the target cut-in scene, the minimum collision time may be 1.2 s and the maximum deceleration may be 0.6g; in the case where the first collision time is less than or equal to 1.2 s and the first deceleration is greater than or equal to 0.6g, the dangerous scene recognition module 100 may determine that the target cut-in scene is a dangerous scene. In the host-vehicle cut-out scene, the minimum collision time may be 1.2 s and the maximum deceleration may be 0.8g; in the case where the first collision time is less than or equal to 1.2 s and the first deceleration is greater than or equal to 0.8g, the dangerous scene recognition module 100 may determine that the host-vehicle cut-out scene is a dangerous scene. In the target crossing scene, the minimum collision time may be 1.5 s and the maximum deceleration may be 0.4g; in the case where the first collision time is less than or equal to 1.5 s and the first deceleration is greater than or equal to 0.4g, the dangerous scene recognition module 100 may determine that the target crossing scene is a dangerous scene. The minimum collision time and the maximum deceleration for different scenes may be as shown in table 2. The minimum collision time and the maximum deceleration in different scenes may be set when the vehicle leaves the factory, or may be configured by a server, such as a server corresponding to an in-vehicle application related to dangerous scene recognition; this is not limited in the embodiment of the present application.
Table 2: minimum time to collision and maximum deceleration under different scenarios
| Scene | Minimum time to collision t_c (s) | Maximum deceleration dea_c |
| Following scene | 1.7 | 0.4g |
| Target cut-in scene | 1.2 | 0.6g |
| Host-vehicle cut-out scene | 1.2 | 0.8g |
| Target crossing scene | 1.5 | 0.4g |
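The decision rule of step S604 combined with the per-scene thresholds can be sketched as follows. The numeric thresholds are those given for the four example scenes; the dictionary layout, function name, and the use of g = 9.81 m/s² are illustrative assumptions.

```python
G = 9.81  # gravitational acceleration, m/s^2

# scene -> (t_c in seconds, dea_c in m/s^2), per the example thresholds
SCENE_THRESHOLDS = {
    'following':            (1.7, 0.4 * G),
    'target cut-in':        (1.2, 0.6 * G),
    'host-vehicle cut-out': (1.2, 0.8 * G),
    'target crossing':      (1.5, 0.4 * G),
}

def is_dangerous(scene, ttc, dea_min):
    """Apply eq. (4): ttc <= t_c AND dea_min >= dea_c for the scene."""
    t_c, dea_c = SCENE_THRESHOLDS[scene]
    return ttc <= t_c and dea_min >= dea_c
```

Because each scene type carries its own threshold pair, the same (ttc, dea_min) measurement can be dangerous in one scene and not in another, which is the differentiated processing the text describes.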
At this point, the dangerous scene recognition module 100 has completed the judgment of whether the first scene is a dangerous scene.
In the process shown in fig. 6, the dangerous scene recognition module 100 determines, according to the movement trajectory of the first vehicle and the movement trajectory of the first target, that the scene in which the first vehicle and the first target are located (or the scene that the first vehicle and the first target will face) is a scene with a potential safety hazard (i.e., the first scene), and further determines whether the first scene is a dangerous scene according to the minimum collision time and the maximum deceleration in the first scene, where the potential safety hazard of the dangerous scene is higher than that of the first scene. Because the minimum collision time and the maximum deceleration are set separately for different scenes with potential safety hazards, such scenes can be further identified through differentiated processing, so that false identification and missed identification of dangerous scenes can be avoided, the accuracy of identifying dangerous scenes is ensured, and the safety and reliability of the AEB can be improved.
In one possible implementation, the dangerous scene recognition module 100 may identify one or more objects in the environment in which the first vehicle is located, and may determine the first target from the one or more objects. For example, the dangerous scene recognition module 100 may recognize the environment in which the first vehicle is located according to data collected by a radar system, a vision system, or the like, and obtain one or more objects. For example, as shown in fig. 1, the dangerous scene recognition module 100 recognizes one object, namely vehicle A. For another example, as shown in fig. 2, the dangerous scene recognition module 100 may recognize a plurality of objects, namely vehicle A and vehicle B.
For example, when there are a plurality of objects, the dangerous scene recognition module 100 may select one of the plurality of objects as the first target according to priority; for example, the object with the highest priority among the plurality of objects is taken as the first target. For example, the first priority may be whether the scene has a potential safety hazard, the second priority may be the collision time, the third priority may be the deceleration, and the fourth priority may be the object type, as shown in table 3. Object types may include trucks, cars, battery cars, pedestrians, green belts, buildings, and the like, where, for example, a truck has a higher priority than a green belt.
Table 3: priority level
| First priority | Second priority | Third priority | Fourth priority |
| Scene with potential safety hazard | Collision time ttc | Deceleration dea_min | Object type |
For example, if there is a potential safety hazard in the scene where object 1 and the first vehicle are located (e.g., a following scene) and there is no potential safety hazard in the scene where object 2 and the first vehicle are located, the dangerous scene recognition module 100 may determine that object 1 is the first target. If both scenes have potential safety hazards and the collision time required for object 1 is less than that required for object 2, the dangerous scene recognition module 100 may determine that object 1 is the first target. If both scenes have potential safety hazards, the collision time required for object 1 is equal to that required for object 2, and the deceleration required for object 1 is greater than that required for object 2, the dangerous scene recognition module 100 may determine that object 1 is the first target. If both scenes have potential safety hazards, the collision times are equal, the decelerations are equal, and the type priority of object 1 is higher than that of object 2 (e.g., object 1 is a truck and object 2 is a green belt), the dangerous scene recognition module 100 may determine that object 1 is the first target.
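The tie-breaking rules above amount to one lexicographic comparison over the four priorities of table 3. The following sketch encodes them as a sort key; the object fields and the type ranking are illustrative assumptions.

```python
# Lower rank = higher priority; the ordering beyond "truck before
# green belt" is an assumption for the example.
TYPE_RANK = {'truck': 0, 'car': 1, 'battery car': 2, 'pedestrian': 3,
             'green belt': 4, 'building': 5}

def select_first_target(objects):
    """Pick the first target from candidate objects.

    Each object is a dict with keys: 'hazard' (bool, whether its scene
    has a potential safety hazard), 'ttc' (s), 'dea_min' (m/s^2), 'type'.
    """
    return min(objects, key=lambda o: (
        not o['hazard'],        # 1st: hazardous scene sorts first
        o['ttc'],               # 2nd: smaller collision time first
        -o['dea_min'],          # 3rd: larger required deceleration first
        TYPE_RANK[o['type']],   # 4th: object type ranking
    ))
```

Because Python compares tuples element by element, each later priority only matters when all earlier ones tie, matching the worked comparisons of object 1 and object 2 in the text.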
In one possible embodiment, the dangerous scene recognition module 100 may send the recognition result to the execution module 400, so that the execution module 400 determines, according to the recognition result, whether to execute a corresponding emergency measure, such as issuing an alarm or emergency braking. The recognition result includes second information, which may be used to indicate whether the first scene is a dangerous scene or a non-dangerous scene. Optionally, the recognition result may also include one or more of a first scene (e.g., a following scene, a target cut-in scene, a host vehicle cut-out scene, or a target crossing scene), a first collision time, a first deceleration, or a first target identifier.
In another possible implementation, the dangerous scene recognition module 100 may send the recognition result to the driver state monitoring module 200, so that the driver state monitoring module 200 corrects the heart rate threshold in different scenes according to the recognition result, to improve the accuracy of recognizing the driver state. The recognition result includes the second information, which may be used to indicate whether the first scene is a dangerous scene or a non-dangerous scene. Optionally, the recognition result may further include one or more of a first scene (e.g., a following scene, a target cut-in scene, a host vehicle cut-out scene, or a target crossing scene), a first collision time, a first deceleration, or a first target identifier.
The driver state monitoring module 200 of the embodiment of the application can be used for identifying the state of the driver. For example, the driver state monitoring module 200 may determine the heart rate of the driver and, based on the heart rate, determine that the state of the driver is a normal state, a stressed state, or a panic state. For example, the driver state monitoring module 200 may acquire the heart rate of the driver through a wearable device. The wearable device may be a watch, a bracelet, or the like, used to detect the driver's heart rate in real time; the embodiment of the application is not limited thereto. The heart rate of the driver may refer to the real-time heart rate of the driver, or to the average heart rate of the driver within a set time length; the embodiment of the application is not limited in this respect either. Optionally, in the case that the heart rate of the driver is the real-time heart rate, the driver state monitoring module 200 may determine the state of the driver according to a set number of consecutive heart rate readings (e.g., 3), so as to determine the state of the driver accurately.
For example, the driver state monitoring module 200 may determine the state of the driver based on the second threshold, the first threshold, and the heart rate of the driver, by comparing the heart rate of the driver with the second threshold and the first threshold. The second threshold and the first threshold may be heart rate thresholds in the first scenario: the second threshold may be the heart rate threshold used to determine whether the driver is in a normal state in the first scenario, and the first threshold may be the heart rate threshold used to determine whether the driver is in a panic state in the first scenario. For example, in the case where the heart rate of the driver is less than the second threshold, the driver state monitoring module 200 may determine that the state of the driver is a normal state; in the case where the heart rate of the driver is greater than or equal to the second threshold and less than the first threshold, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; in the case where the heart rate of the driver is greater than or equal to the first threshold, the driver state monitoring module 200 may determine that the state of the driver is a panic state.
For example, as shown in table 4, in the following scene, if the heart rate of the driver is less than 80 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a normal state; if the heart rate of the driver is greater than or equal to 80 times/minute and less than 120 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; if the heart rate of the driver is greater than or equal to 120 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a panic state.
In the target cut-in scene, if the heart rate of the driver is less than 90 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a normal state; if the heart rate of the driver is greater than or equal to 90 times/minute and less than 120 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; if the heart rate of the driver is greater than or equal to 120 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a panic state.
In the host vehicle cut-out scene, if the heart rate of the driver is less than 100 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a normal state; if the heart rate of the driver is greater than or equal to 100 times/minute and less than 140 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; if the heart rate of the driver is greater than or equal to 140 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a panic state.
In the target crossing scene, if the heart rate of the driver is less than 100 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a normal state; if the heart rate of the driver is greater than or equal to 100 times/minute and less than 120 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a stressed state; if the heart rate of the driver is greater than or equal to 120 times/minute, the driver state monitoring module 200 may determine that the state of the driver is a panic state.
Table 4: heart rate thresholds under different scenarios
| Scenario | Normal state | Stressed state | Panic state |
| Following | Less than 80 times/minute | 80 or more and less than 120 times/minute | 120 times/minute or more |
| Target cut-in | Less than 90 times/minute | 90 or more and less than 120 times/minute | 120 times/minute or more |
| Host vehicle cut-out | Less than 100 times/minute | 100 or more and less than 140 times/minute | 140 times/minute or more |
| Target crossing | Less than 100 times/minute | 100 or more and less than 120 times/minute | 120 times/minute or more |
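The two-threshold comparison, together with the optional rule of confirming a state only after a set number of consecutive identical readings, can be sketched as follows. This is a minimal illustration, not the application's implementation; the class name, the state labels, and the default confirm count of 3 are assumptions:

```python
from collections import deque

class DriverStateMonitor:
    """Classify driver state from heart rate using per-scene thresholds."""

    def __init__(self, second_threshold, first_threshold, confirm_count=3):
        # second_threshold < first_threshold, e.g. 80 and 120 for the following scene
        self.second = second_threshold
        self.first = first_threshold
        self.recent = deque(maxlen=confirm_count)

    def classify(self, heart_rate):
        """Single-reading classification per the two-threshold rule."""
        if heart_rate < self.second:
            return "normal"
        if heart_rate < self.first:
            return "stressed"
        return "panic"

    def update(self, heart_rate):
        """Return a state only after confirm_count consecutive identical readings."""
        self.recent.append(self.classify(heart_rate))
        if len(self.recent) == self.recent.maxlen and len(set(self.recent)) == 1:
            return self.recent[0]
        return None
```

Instantiated with the following-scene thresholds from table 4, three consecutive readings below 80 would confirm a normal state, while a single reading of 130 would classify as panic.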
For example, the heart rate thresholds in different scenarios may be determined from human stress and panic heart rate baselines and configured at the time of automobile shipment.
In one possible embodiment, the driver status monitoring module 200 may send the status of the driver to the execution module 400, so that the execution module 400 determines whether to execute a corresponding emergency measure, such as issuing an alarm, emergency braking, etc., according to the status of the driver.
The driver operation sensing module 300 of the embodiment of the application can be used for identifying the operation intention of the driver. For example, the driver operation sensing module 300 may determine the operation intention of the driver according to the force with which the driver steps on the accelerator pedal and/or the brake pedal. For example, the driver operation sensing module 300 may obtain the driver's force on the accelerator pedal through an accelerator pedal sensor; similarly, it may obtain the driver's force on the brake pedal through a brake pedal sensor. The operation intention of the driver may be deceleration, idling, or acceleration.
Since the force applied to the accelerator pedal and/or the brake pedal corresponds to an acceleration of the automobile, the driver operation sensing module 300 determining the operation intention of the driver according to the force with which the driver steps on the accelerator pedal and/or the brake pedal can be understood as the driver operation sensing module 300 determining the operation intention of the driver according to the acceleration of the first vehicle. The correspondence between the force applied to the accelerator pedal and/or the brake pedal and the acceleration of the automobile can be configured when the automobile leaves the factory.
For example, the driver operation sensing module 300 may collect data such as the force and frequency with which the driver steps on the accelerator pedal and/or the brake pedal in daily driving, analyze that data with an algorithm such as density clustering, and obtain the first acceleration threshold and the second acceleration threshold by combining the correspondence between pedal force and acceleration. The first acceleration threshold may be used to determine whether the driver has a clear intention to decelerate, and the second acceleration threshold may be used to determine whether the driver has a clear intention to accelerate. For example, in the case where the acceleration of the first vehicle is less than or equal to the first acceleration threshold, the driver has a clear intention to decelerate; in the case where the acceleration of the first vehicle is greater than or equal to the second acceleration threshold, the driver has a clear intention to accelerate; in the case where the acceleration of the first vehicle is greater than the first acceleration threshold and less than the second acceleration threshold, the driver has a clear intention to idle. For example, as shown in table 5, when the acceleration of the first vehicle is less than or equal to a, the driver operation sensing module 300 may determine that the driver's operation intention is deceleration; when the acceleration of the first vehicle is greater than a and less than b, that the operation intention is idling; and when the acceleration of the first vehicle is greater than or equal to b, that the operation intention is acceleration.
Wherein a is a first acceleration threshold, b is a second acceleration threshold, and a is less than b.
Table 5: acceleration thresholds for operation intentions
| Deceleration | Idling | Acceleration |
| Less than or equal to a | Greater than a and less than b | Greater than or equal to b |
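The mapping in table 5 can be written as a small classifier. This is a sketch only; the acceleration is assumed to be in m/s², and the threshold values used in the usage note are hypothetical, not values from the application:

```python
def classify_operation_intention(acceleration, a, b):
    """Classify the driver's operation intention from vehicle acceleration,
    per table 5: a is the first (deceleration) threshold, b is the second
    (acceleration) threshold, with a < b."""
    if a >= b:
        raise ValueError("first threshold a must be less than second threshold b")
    if acceleration <= a:
        return "deceleration"
    if acceleration < b:
        return "idling"
    return "acceleration"
```

With assumed thresholds a = -1.0 m/s² and b = 1.0 m/s², an acceleration of -2.0 classifies as deceleration and 0.0 as idling.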
In one possible embodiment, the driver operation sensing module 300 may transmit the operation intention of the driver to the dangerous scene recognition module 100, so that the dangerous scene recognition module 100 corrects the minimum collision time and/or the maximum deceleration in different scenes according to the operation intention of the driver, to improve the accuracy of recognizing the dangerous scene.
In one possible implementation, the driver operation sensing module 300 may send the operation intention of the driver to the driver state monitoring module 200, so that the driver state monitoring module 200 corrects the heart rate threshold value in different scenes according to the operation intention of the driver, so as to improve the accuracy of identifying the state of the driver.
In one possible embodiment, the driver operation sensing module 300 may transmit the operation intention of the driver to the execution module 400, so that the execution module 400 determines whether to execute a corresponding emergency measure, such as issuing a warning, emergency braking, etc., according to the operation intention of the driver.
In another possible implementation, the driver operation sensing module 300 may send the pressure value of the accelerator pedal to the execution module 400, so that the execution module 400 determines whether the driver mistakenly steps on the accelerator according to the pressure value of the accelerator pedal. For example, the driver operation sensing module 300 may acquire the pressure value of the accelerator pedal through a sensor of the accelerator pedal.
The execution module 400 of the embodiment of the application may determine whether to execute a corresponding emergency measure, such as issuing an alarm, emergency braking, etc., according to the recognition result, the state of the driver, and the operation intention of the driver. For example, the execution module 400 may acquire the recognition result through the dangerous scene recognition module 100, acquire the state of the driver through the driver state monitoring module 200, and acquire the operation intention of the driver through the driver operation perception module 300.
For example, the execution module 400 may determine whether the corresponding emergency measure needs to be executed according to the recognition result, the state of the driver, and the operation intention of the driver. For example, in the case where the first scene is a following scene but not a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or deceleration, the execution module 400 may determine that no corresponding emergency measure needs to be taken. For another example, in the case where the first scenario is a target cut-in scenario but is not a dangerous scenario, the state of the driver is a normal state, and the operation intention of the driver is idling or deceleration, the execution module 400 may determine that the corresponding emergency measure does not need to be executed.
For example, the scenario for the execution module 400 to determine to execute the emergency measure may be as follows:
scene 1: in the case that the first scenario is a dangerous scenario, the execution module 400 may determine to execute the corresponding emergency measure. For example, the execution module 400 may execute emergency braking or the like according to the first collision time and the first deceleration.
In scenario 1, the first scenario is a dangerous scenario, which means that the driver has no time to respond and has lost the ability to control the dangerous scenario, so the execution module 400 may perform emergency braking to ensure the safety of the driver and reduce losses.
Scene 2: in the case where the first scenario is a non-dangerous scenario, the state of the driver is a panic state, and the operation intention of the driver is idling or accelerating, the execution module 400 may determine to execute a corresponding emergency measure. For example, the execution module 400 issues an alert. For another example, the execution module 400 may perform emergency braking or the like based on the first collision time and the first deceleration.
In scenario 2, the first scene is a scene with a potential safety hazard, such as a following scene, a target cut-in scene, a host vehicle cut-out scene, or a target crossing scene. Although the first scene is not a dangerous scene, the driver is in a panic state and the operation intention of the driver is idling or accelerating, which means that the driver's emergency-response capability is poor and the driver is prone to misoperation in a scene with a potential safety hazard, greatly increasing the probability of an accident. Therefore, the execution module 400 can execute a corresponding emergency measure, such as issuing a warning or emergency braking, to avoid an accident and ensure the safe driving of the vehicle.
For example, in the case that the first scene is a following scene, a target cut-in scene, or a target crossing scene, the state of the driver is a panic state, and the operation intention of the driver is acceleration, the execution module 400 may determine that the driver has mistakenly stepped on the accelerator and immediately execute the corresponding emergency measure. For example, the driver is prompted that the accelerator pedal has been pressed by mistake, so that the driver regains control of the vehicle; the driver's misoperation can thus be corrected, accidents avoided, and the safe driving of the vehicle ensured. For another example, the execution module 400 may directly execute emergency braking to avoid an accident and ensure safe driving of the vehicle.
For example, in the case that the first scene is a following scene, a target cut-in scene, or a target crossing scene, the state of the driver is a panic state, and the operation intention of the driver is acceleration, the execution module 400 may determine whether the driver has mistakenly stepped on the accelerator in combination with the pressure value of the accelerator pedal. For example, the execution module 400 may acquire the pressure value of the accelerator pedal through the driver operation sensing module 300. For example, in the case that the first scene is a following scene, a target cut-in scene, or a target crossing scene, the state of the driver is a panic state, the operation intention of the driver is acceleration, and the pressure value of the accelerator pedal remains greater than or equal to the set pressure threshold for a predetermined time, the execution module 400 may determine that the driver has mistakenly stepped on the accelerator pedal and lost control of the vehicle. In this case, the execution module 400 may directly execute emergency braking to avoid an accident and ensure safe driving of the vehicle.
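The mistaken-accelerator check described above can be sketched as a predicate over the scene, the driver's state and intention, and a window of accelerator-pedal pressure readings. The scene-name strings, label strings, and threshold values in the usage note are hypothetical:

```python
def is_mistaken_accelerator(scene, driver_state, intention,
                            pedal_pressures, pressure_threshold):
    """Detect the pattern described above: a following, target cut-in, or
    target crossing scene, a panicked driver, an acceleration intention, and
    accelerator-pedal pressure that stays at or above the threshold over the
    whole observed window (the predetermined time)."""
    hazard_prone = {"following", "target_cut_in", "target_crossing"}
    return (scene in hazard_prone
            and driver_state == "panic"
            and intention == "acceleration"
            and bool(pedal_pressures)
            and all(p >= pressure_threshold for p in pedal_pressures))
```

A single pressure sample dipping below the threshold within the window is enough to withhold the mistaken-accelerator verdict, which matches the "greater than or equal to the set pressure threshold within the predetermined time" condition.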
In another possible embodiment, the dangerous scene recognition module 100 may modify the third threshold and/or the fourth threshold according to third information. The third information may include one or more of the following items: the recognition result, the state of the driver, or the operation intention of the driver. The recognition result includes first information, and the first information may be used to indicate that the first scene is a dangerous scene or a non-dangerous scene. For example, the dangerous scene recognition module 100 may acquire, through the driver state monitoring module 200, that the state of the driver is a normal state, a tension state, or a panic state. For example, the dangerous scene recognition module 100 may acquire, through the driver operation sensing module 300, that the operation intention of the driver is deceleration, idling, or acceleration.
For example, in the case where the recognition result, the state of the driver, and the operation intention of the driver are included in the third information, the manner in which the dangerous scene recognition module 100 modifies the third threshold value and/or the fourth threshold value according to the third information may be as follows:
Mode one: in the case where the first scene is a non-dangerous scene, the state of the driver is a tension state or a panic state, and the operation intention of the driver is deceleration, the dangerous scene recognition module 100 may decrease the third threshold, or decrease the fourth threshold, or decrease both the third threshold and the fourth threshold. For example, the dangerous scene recognition module 100 may correct the third threshold according to equation (5), and may correct the fourth threshold according to equation (6).
t_c1 = t_c0 - α_1 (5)
dea_c1 = dea_c0 - β_1 (6)
Where t_c0 may represent the third threshold to be corrected, t_c1 may represent the corrected third threshold, and α_1 may be the learning rate for correcting the third threshold; α_1 may be a preset positive number. dea_c0 may represent the fourth threshold to be corrected, dea_c1 may represent the corrected fourth threshold, and β_1 may be the learning rate for correcting the fourth threshold; β_1 may be a preset positive number.
For example, when the first scene is a following scene but not a dangerous scene, the state of the driver is a tension state or a panic state, and the operation intention of the driver is deceleration, the dangerous scene recognition module 100 may determine from table 2 that the third threshold to be corrected is 1.7 s and the fourth threshold to be corrected is 0.4 g. Further, the dangerous scene recognition module 100 may correct the third threshold to be corrected and the fourth threshold to be corrected according to α_1 and β_1, obtaining a corrected third threshold of (1.7 - α_1) s and a corrected fourth threshold of (0.4 - β_1) g, and update table 2.
In mode one, the operation intention of the driver is deceleration, that is, the driver has a braking intention, and the state of the driver is a tension state or a panic state, which indicates that the driver has recognized that the first scene is a dangerous scene and has performed the deceleration operation. However, the dangerous scene recognition module 100 did not recognize the dangerous scene, which means that the minimum collision time for determining the dangerous scene is too large, or the maximum deceleration for determining the dangerous scene is too large, or both are too large. Therefore, the dangerous scene recognition module 100 may reduce the third threshold and/or the fourth threshold to avoid missed recognition of the dangerous scene and improve the accuracy of recognizing the dangerous scene.
Mode two: in the case where the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration, the dangerous scene recognition module 100 may increase the third threshold, or increase the fourth threshold, or increase both the third threshold and the fourth threshold. For example, the dangerous scene recognition module 100 may correct the third threshold according to equation (7), and may correct the fourth threshold according to equation (8).
t_c1 = t_c0 + α_2 (7)
dea_c1 = dea_c0 + β_2 (8)
Where α_2 may be the learning rate for correcting the third threshold and may be a preset positive number, and β_2 may be the learning rate for correcting the fourth threshold and may be a preset positive number. α_1 and α_2 may be the same or different; β_1 and β_2 may be the same or different.
For example, in the case where the first scene is the target cut-in scene and is a dangerous scene, but the state of the driver is a normal state and the operation intention of the driver is idling or acceleration, the dangerous scene recognition module 100 may determine from table 2 that the third threshold to be corrected is 1.2 s and the fourth threshold to be corrected is 0.6 g. Further, the dangerous scene recognition module 100 may correct the third threshold to be corrected and the fourth threshold to be corrected according to α_2 and β_2, obtaining a corrected third threshold of (1.2 + α_2) s and a corrected fourth threshold of (0.6 + β_2) g, and update table 2.
In the second mode, the operation intention of the driver is idling or acceleration, that is, the driver has no obvious braking intention, and the state of the driver is a normal state, meaning that the first scene may be a non-dangerous scene. However, the dangerous scene recognition module 100 recognizes the dangerous scene, which means that the minimum collision time for determining the dangerous scene is too small, or the maximum deceleration for determining the dangerous scene is too small, or both the minimum collision time and the maximum deceleration for determining the dangerous scene are too small. Therefore, the dangerous scene recognition module 100 may increase the third threshold and/or the fourth threshold to avoid misidentifying the dangerous scene, thereby improving the accuracy of recognizing the dangerous scene.
Mode three: in the case where the first scene is a dangerous scene, the state of the driver is a tension state or a panic state, and the operation intention of the driver is deceleration, the dangerous scene recognition module 100 may correct the third threshold, or the fourth threshold, or both, to obtain an optimal threshold. For example, the dangerous scene recognition module 100 may correct the third threshold and/or the fourth threshold through a greedy algorithm, correcting the third threshold according to equation (9) and the fourth threshold according to equation (10).
t_c1 = t_c0 + ε·z·α_3 (9)
dea_c1 = dea_c0 + ε·z·β_3 (10)
Where z follows a uniform distribution, e.g., z ~ U[-1, 1], and ε may be a preset constant. α_3 may be the learning rate for correcting the third threshold, and β_3 may be the learning rate for correcting the fourth threshold. α_3 and α_2 may be the same or different; β_3 and β_2 may be the same or different.
For example, when the first scene is the host vehicle cut-out scene and is a dangerous scene, the state of the driver is a tension state or a panic state, and the operation intention of the driver is deceleration, it means that the dangerous scene recognition module 100 recognized correctly. To obtain the optimal threshold, the dangerous scene recognition module 100 may determine from table 2 that the third threshold to be corrected is 1.2 s and the fourth threshold to be corrected is 0.8 g. Further, the dangerous scene recognition module 100 may correct the third threshold to be corrected and the fourth threshold to be corrected according to ε, z, α_3, and β_3, obtaining a corrected third threshold of (1.2 + ε·z·α_3) s and a corrected fourth threshold of (0.8 + ε·z·β_3) g, and update table 2.
In the third mode, the state of the driver is a tense state or a panic state, and the operation intention of the driver is deceleration, that is, the driver has a distinct braking intention, which means that the driver has recognized that the first scene is a dangerous scene, and has performed the deceleration operation. The dangerous scene recognition module 100 also recognizes the dangerous scene, i.e. the dangerous scene recognition module 100 recognizes correctly. To obtain the optimal threshold, the hazardous scene identification module 100 may modify the third threshold and/or the fourth threshold.
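The three correction modes, equations (5) through (10), can be sketched together as one update rule. The learning rates, ε, and the state/intention label strings below are assumed illustrative values, not figures from the application:

```python
import random

def correct_thresholds(t_c, dea_c, is_dangerous, driver_state, intention,
                       alpha=0.1, beta=0.02, epsilon=0.05, rng=random):
    """Correct the minimum collision time t_c (s) and maximum deceleration
    dea_c (g). Returns the (possibly unchanged) pair (t_c, dea_c)."""
    braking = intention == "deceleration"
    alarmed = driver_state in ("tension", "panic")
    if not is_dangerous and alarmed and braking:
        # Mode one: missed detection -> decrease both thresholds, eqs. (5)-(6)
        return t_c - alpha, dea_c - beta
    if is_dangerous and driver_state == "normal" and not braking:
        # Mode two: likely false alarm -> increase both thresholds, eqs. (7)-(8)
        return t_c + alpha, dea_c + beta
    if is_dangerous and alarmed and braking:
        # Mode three: correct detection -> small random exploration around the
        # current values toward an optimal threshold, z ~ U[-1, 1], eqs. (9)-(10)
        z = rng.uniform(-1.0, 1.0)
        return t_c + epsilon * z * alpha, dea_c + epsilon * z * beta
    return t_c, dea_c  # no correction rule applies
```

In mode three the perturbation is bounded by ε times the learning rate, so repeated corrections drift the thresholds only slightly around a correct operating point.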
In the above-described embodiment, the dangerous scene recognition module 100 corrects the minimum collision time and/or the maximum deceleration for determining the dangerous scene according to the state of the driver and the operation intention of the driver. That means that when judging a dangerous scene, not only the motion trail of the AEB and the motion trail of the first target are considered, but also the different states and different operation intentions of drivers with different driving styles facing a dangerous scene, so that the dangerous scene can be accurately identified, accidents caused by misrecognition or missed recognition of the dangerous scene can be avoided, and the reliability and safety of the AEB can be improved.
As an example, in the case where the third information includes only the recognition result, or only the state of the driver, or only the operation intention of the driver, the dangerous scene recognition module 100 may correct the third threshold and/or the fourth threshold according to the third information to obtain the optimal threshold. For example, the dangerous scene recognition module 100 may correct the third threshold according to equation (9). For example, the dangerous scene recognition module 100 may correct the fourth threshold according to equation (10).
As an example, in the case where the third information includes the recognition result and the state of the driver, the dangerous scene recognition module 100 may correct the third threshold and/or the fourth threshold according to the recognition result and the state of the driver to obtain the optimal threshold. For example, the dangerous scene recognition module 100 may correct the third threshold according to equation (9). For example, the dangerous scene recognition module 100 may correct the fourth threshold according to equation (10).
As an example, in a case where the recognition result and the operation intention of the driver are included in the third information, the dangerous scene recognition module 100 may correct the third threshold value and/or the fourth threshold value according to the recognition result and the operation intention of the driver to obtain the optimal threshold value. For example, the dangerous scene recognition module 100 may correct the third threshold according to equation (9). For example, the dangerous scene recognition module 100 may correct the fourth threshold according to equation (10).
As one example, in a case where the third information includes the state of the driver and the operation intention of the driver, the dangerous scene recognition module 100 may correct the third threshold and/or the fourth threshold according to the state of the driver and the operation intention of the driver to obtain the optimal threshold. For example, the dangerous scene recognition module 100 may correct the third threshold according to equation (9). For example, the dangerous scene recognition module 100 may correct the fourth threshold according to equation (10).
In another possible embodiment, the driver state monitoring module 200 may modify the second threshold, or modify the first threshold, or modify both the second threshold and the first threshold based on the first information. The first information comprises an identification result and/or an operation intention of a driver, the identification result comprises second information, and the second information is used for indicating that the first scene is a dangerous scene or a non-dangerous scene. For example, the driver state monitoring module 200 may acquire the recognition result through the dangerous scene recognition module 100. For example, the driver state monitoring module 200 may acquire the operation intention of the driver as deceleration, or as idling, or as acceleration through the driver operation perception module 300.
As an example, in the case where the first information includes the recognition result and the operation intention of the driver, the driver state monitoring module 200 may label the state of the driver in the first scene for different recognition results and operation intentions.
For example, in the case where the first scene is a non-dangerous scene and the operation intention of the driver is deceleration, the driver state monitoring module 200 may not label the state of the driver. In the case where the first scene is a non-dangerous scene and the operation intention of the driver is idling, the driver state monitoring module 200 may label the state of the driver as a normal state. In the case where the first scene is a non-dangerous scene and the operation intention of the driver is acceleration, the driver state monitoring module 200 may label the state of the driver as a normal state. In the case where the first scene is a dangerous scene, the operation intention of the driver is deceleration, and the force on the brake pedal is smaller than the set threshold, the driver state monitoring module 200 may label the state of the driver as a tension state. In the case where the first scene is a dangerous scene, the operation intention of the driver is deceleration, and the force on the brake pedal is greater than or equal to the set threshold, the driver state monitoring module 200 may label the state of the driver as a panic state. In the case where the first scene is a dangerous scene and the operation intention of the driver is idling, the driver state monitoring module 200 may not label the state of the driver. In the case where the first scene is a dangerous scene and the operation intention of the driver is acceleration, the driver state monitoring module 200 may not label the state of the driver. The labeling results under the different recognition results and operation intentions can be shown in table 6.
Table 6: marking results under different recognition results and operation intentions
For example, the driver state monitoring module 200 may correct the heart rate thresholds in different scenes through a semi-supervised algorithm, such as a nearest-neighbor algorithm. For example, the driver state monitoring module 200 may learn from the labeled data and the unlabeled data through a semi-supervised k-nearest neighbors (KNN) algorithm to obtain corrected heart rate thresholds in different scenes. The labeled data may include the labeled state of the driver (as shown in table 6) together with the heart rate of the driver, and the unlabeled data may include only the heart rate of the driver.
For example, the driver state monitoring module 200 may record the labeled data as L = {(x_i, y_i)} and the unlabeled data as U = {x_j}. Here, x_i may represent the heart rate of the i-th labeled sample, and y_i may indicate whether the i-th sample is a sample in a normal state, a tense state, or a panic state, for example 0 for the normal state, 1 for the tense state, and 2 for the panic state; x_j may represent the heart rate of the j-th unlabeled sample. Further, the driver state monitoring module 200 may train on the labeled data and the unlabeled data using a semi-supervised KNN algorithm. Specifically, for any j, the K samples in L nearest to x_j are found and vote to obtain the predicted label of x_j. In this way, training on the labeled and unlabeled data with the semi-supervised KNN algorithm labels the previously unlabeled data, so that L and U can be corrected. Then, the driver state monitoring module 200 may correct the heart rate thresholds corresponding to different scenes according to the corrected L and U, that is, update table 4.
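The self-training step described above could be sketched as follows, assuming one-dimensional heart-rate samples and integer state labels (0 normal, 1 tense, 2 panic). All names and the sample values in the usage note are illustrative assumptions.

```python
def semi_supervised_knn(labeled_x, labeled_y, unlabeled_x, k=3):
    """Label unlabeled heart-rate samples by a majority vote of the k
    labeled samples nearest in heart rate (the self-training step above)."""
    predictions = []
    for x in unlabeled_x:
        # In one dimension the distance is simply the absolute difference.
        nearest = sorted(range(len(labeled_x)),
                         key=lambda i: abs(labeled_x[i] - x))[:k]
        votes = [labeled_y[i] for i in nearest]
        # Majority vote among the k nearest labeled samples.
        predictions.append(max(set(votes), key=votes.count))
    return predictions
```

For instance, with labeled heart rates [65, 70, 95, 100, 120, 125] carrying labels [0, 0, 1, 1, 2, 2], the unlabeled samples [68, 98, 122] would be assigned [0, 1, 2]; the corrected thresholds could then be re-derived from the boundaries between the labeled groups.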
In the above example, the driver state monitoring module 200 corrects the heart rate thresholds used to determine the state of the driver according to the recognition result and the operation intention of the driver. In other words, when judging the state of the driver, the method considers not only the different degrees of tension that drivers of different driving styles exhibit when facing a dangerous scene, but also their different operation intentions in that scene. The state of the driver can therefore be identified accurately, accidents caused by misidentifying the driver's state can be avoided, and the reliability and safety of the AEB can be improved.
As an example, in the case where the first information includes the recognition result or the operation intention of the driver, the driver state monitoring module 200 may correct the second threshold and/or the first threshold according to the first information. For example, the driver state monitoring module 200 may label the state of the driver according to the recognition result (the specific labeling process may refer to the description of table 6) and learn from the labeled data and unlabeled data with the semi-supervised KNN algorithm, so as to obtain the corrected second threshold and/or first threshold. For another example, the driver state monitoring module 200 may label the state of the driver according to the operation intention of the driver in the same manner, and likewise obtain the corrected second threshold and/or first threshold.
Fig. 9 is a flowchart schematically illustrating a dangerous scene processing method provided by an embodiment of the present application, where the method may be implemented by a dangerous scene recognition apparatus. As shown in fig. 9, the flow of the method may include:
S901: the dangerous scene recognition module 100 may determine a first target.
For example, the dangerous scene recognition module 100 may identify one or more objects around the first vehicle via a radar system and/or a vision system of the first vehicle, and determine the first target from the one or more objects. When there are multiple objects, the first target may be the object with the highest priority among them; the priorities may be as shown in table 3.
S902: the dangerous scene recognition module 100 may determine, according to the transverse motion trajectory of the first object and the transverse motion trajectory of the first vehicle, that the first vehicle and the first object are in a first scene.
For example, the first scenario may be a scenario in which a security risk exists. For example, the first scene may be a following scene, a target cut-in scene, a host vehicle cut-out scene, a target crossing scene, or the like, as shown in fig. 1 to 4. The specific implementation process of step S902 may refer to the content described in S602 in fig. 6, and is not described herein again.
S903: the dangerous scene recognition module 100 determines whether the first scene is a dangerous scene according to the longitudinal motion trajectory of the first target and the longitudinal motion trajectory of the first vehicle, and obtains a recognition result.
For example, the second information is included in the recognition result, and the second information can be used for indicating that the first scene is a dangerous scene or a non-dangerous scene. Optionally, the recognition result may further include one or more of a first scenario, a first collision time, a first deceleration, a first target identifier, or the like.
For example, the dangerous scene recognition module 100 may determine a first collision time and a first deceleration based on the longitudinal motion trajectory of the first target and the longitudinal motion trajectory of the first vehicle, and determine whether the first scene is a dangerous scene based on the first collision time and the first deceleration. In the case where the first collision time is less than or equal to the third threshold and the first deceleration is greater than or equal to the fourth threshold, the dangerous scene recognition module 100 may determine that the first scene is a dangerous scene. In the case where the first collision time is greater than the third threshold, or the first deceleration is less than the fourth threshold, or both, the dangerous scene recognition module 100 may determine that the first scene is a non-dangerous scene. The third threshold may be the minimum collision time in the first scene, and the fourth threshold may be the maximum deceleration in the first scene.
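A minimal sketch of this decision rule follows; the function and parameter names are illustrative, with ttc_min standing for the third threshold and decel_max for the fourth threshold.

```python
def classify_scene(time_to_collision, required_deceleration, ttc_min, decel_max):
    """Classify the first scene as dangerous or non-dangerous.

    Dangerous only when BOTH hold: little time remains until contact AND
    the deceleration needed exceeds the scene's maximum deceleration.
    """
    if time_to_collision <= ttc_min and required_deceleration >= decel_max:
        return "dangerous"
    # All other combinations are non-dangerous.
    return "non-dangerous"
```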
The specific implementation process of step S903 may refer to the content described in step S603 and step S604 in fig. 6, and is not described herein again.
S904: the dangerous scene recognition module 100 may transmit the recognition result to the execution module 400. Accordingly, the execution module 400 receives the recognition result.
S905: the driver status monitoring module 200 determines the heart rate of the driver.
For example, the driver state monitoring module 200 may acquire the heart rate of the driver through a wearable device. The wearable device may be a watch, a bracelet, or the like, for detecting the heart rate of the driver in real time; the embodiment of the present application is not limited thereto. The heart rate of the driver may refer to the real-time heart rate of the driver, or to the average heart rate of the driver within a set time period; the embodiment of the present application is not limited thereto either.
S906: the driver status monitoring module 200 determines the status of the driver based on the heart rate of the driver.
For example, the driver state monitoring module 200 may determine that the state of the driver is a normal state, a tense state, or a panic state based on the heart rate of the driver, the second threshold, and the first threshold. In the case where the heart rate of the driver is less than the second threshold, the driver state monitoring module 200 may determine that the state of the driver is a normal state; in the case where the heart rate of the driver is greater than or equal to the second threshold and less than the first threshold, the driver state monitoring module 200 may determine that the state of the driver is a tense state; in the case where the heart rate of the driver is greater than or equal to the first threshold, the driver state monitoring module 200 may determine that the state of the driver is a panic state. The second threshold and the first threshold may be the heart rate thresholds in the first scene, as shown in table 4.
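The heart-rate-to-state mapping can be sketched as below, assuming the second threshold lies below the first threshold as the description implies; the names and the threshold values used in the test are illustrative.

```python
def driver_state(heart_rate, second_threshold, first_threshold):
    """Map a heart rate to a driver state using the two scene-specific
    thresholds, where second_threshold < first_threshold."""
    if heart_rate < second_threshold:
        return "normal"
    if heart_rate < first_threshold:
        # second_threshold <= heart_rate < first_threshold
        return "tense"
    # heart_rate >= first_threshold
    return "panic"
```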
S907: the driver status monitoring module 200 may send the status of the driver to the execution module 400. Accordingly, the execution module 400 receives a status of the driver.
S908: the driver operation sensing module 300 may determine the operation intention of the driver from the acceleration of the first vehicle.
For example, the driving operation sensing module 300 may determine that the driver's operation is intended to be decelerating, or idling, or accelerating, according to the acceleration of the first vehicle. The acceleration of the first vehicle can be determined by the force with which the driver steps on the accelerator pedal and/or the brake pedal. For example, in the case where the acceleration of the first vehicle is less than or equal to the first acceleration threshold, the driver has a clear intention to decelerate; in the case where the acceleration of the first vehicle is greater than or equal to the second acceleration threshold, the driver has a clear intention to accelerate; in the event that the acceleration of the first vehicle is greater than the first acceleration threshold and less than the second acceleration threshold, the driver has a clear intent to idle, as shown in table 5.
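Under the acceleration thresholds of table 5, the intent classification might look like the following sketch; the threshold values in the test are placeholders, not values from the source.

```python
def operation_intent(acceleration, first_accel_threshold, second_accel_threshold):
    """Infer the driver's operation intention from the vehicle's longitudinal
    acceleration (m/s^2), given the two acceleration thresholds of table 5,
    where first_accel_threshold < second_accel_threshold."""
    if acceleration <= first_accel_threshold:
        return "decelerate"
    if acceleration >= second_accel_threshold:
        return "accelerate"
    # Between the two thresholds the vehicle is neither clearly braking
    # nor clearly accelerating.
    return "idle"
```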
S909: the driver operation perception module 300 may transmit the operation intention of the driver to the execution module 400. Accordingly, the execution module 400 receives the operator intent of the driver.
S910: the execution module 400 determines whether to execute the emergency measure according to the recognition result, the state of the driver, and the operation intention of the driver.
For example, the execution module 400 may determine whether to execute the emergency measure according to the recognition result, the state of the driver, and the operation intention of the driver. For example, in the case where the first scenario is a dangerous scenario, the execution module 400 may determine to execute a corresponding emergency measure. For another example, in the case where the first scene is not a dangerous scene, the state of the driver is a panic state, and the operation intention of the driver is idling or acceleration, the execution module 400 may determine to execute a corresponding emergency measure.
In one possible implementation, the driver state monitoring module 200 may send the state of the driver to the dangerous scene recognition module 100, and the driver operation perception module 300 may send the operation intention of the driver to the dangerous scene recognition module 100. Accordingly, the dangerous scene recognition module 100 may receive the state of the driver and the operation intention of the driver. Further, the dangerous scene recognition module 100 may modify the third threshold and/or the fourth threshold, i.e., update table 2, according to the state of the driver and the operation intention of the driver.
For example, in the case where the first scene is a non-dangerous scene, the state of the driver is a stressed state or a panic state, and the operation intention of the driver is deceleration, the dangerous scene recognition module 100 may decrease the third threshold, or decrease the fourth threshold, or decrease the third threshold and the fourth threshold, as shown in equation (5) or equation (6). For example, in the case where the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration, the dangerous scene recognition module 100 may increase the third threshold, or increase the fourth threshold, or increase the third threshold and the fourth threshold, as shown in equation (7) and equation (8). For another example, in the case where the first scene is a dangerous scene, the state of the driver is a stressed state or a panic state, and the operation intention of the driver is deceleration, the dangerous scene recognition module 100 may correct the third threshold, or correct the fourth threshold, or correct the third threshold and the fourth threshold, as shown in equation (9) or equation (10).
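The first two correction rules above could be sketched as follows. The multiplicative step is a placeholder assumption; the actual corrections are given by equations (5) to (10), which are not reproduced here, so the third rule (a dangerous scene with a tense or panicked driver who is braking) is left unchanged in this sketch.

```python
def adjust_scene_thresholds(is_dangerous, state, intent,
                            ttc_min, decel_max, step=0.9):
    """Correct the third threshold (ttc_min) and fourth threshold (decel_max)
    from the scene label, driver state, and operation intention."""
    if not is_dangerous and state in ("tense", "panic") and intent == "decelerate":
        # Scene judged non-dangerous but the driver reacted strongly:
        # decrease the thresholds (cf. equations (5)/(6)).
        return ttc_min * step, decel_max * step
    if is_dangerous and state == "normal" and intent in ("idle", "accelerate"):
        # Scene judged dangerous but the driver stayed calm:
        # increase the thresholds (cf. equations (7)/(8)).
        return ttc_min / step, decel_max / step
    # Other combinations, including the correction of equations (9)/(10),
    # are left unchanged in this sketch.
    return ttc_min, decel_max
```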
In another possible implementation, the dangerous scene recognition module 100 may send the recognition result to the driver state monitoring module 200, and the driver operation perception module 300 may send the operation intention of the driver to the driver state monitoring module 200. Accordingly, the driver state monitoring module 200 may receive the recognition result and the operation intention of the driver. Further, the driver state monitoring module 200 may modify the second threshold and/or the first threshold according to the recognition result and the operation intention of the driver, that is, update table 4, so that the state of the driver can be determined in accordance with different driving styles and the accuracy of recognizing the state of the driver can be improved.
It should be noted that the execution sequence of the steps in fig. 9 is only an example, and the embodiment of the present application does not limit this. For example, the dangerous scene recognition module 100 may obtain the recognition result before, after, or at the same time as the driver state monitoring module 200 determines the state of the driver.
As shown in fig. 10, an embodiment of the present application further provides another schematic structural diagram of a dangerous scene processing apparatus, where the apparatus may implement the functions of the modules shown in fig. 5 in the foregoing embodiment, or implement the method provided in the embodiment shown in fig. 9 in the foregoing embodiment. The apparatus may include, among other things, a processor 1001. The processor 1001 is configured to implement the scheme provided in the embodiment shown in fig. 9 in the above embodiment, or implement the functions of the modules shown in fig. 5 in the above embodiment. The modules include a dangerous scene recognition module 100, a driver state monitoring module 200, a driver operation perception module 300 and an execution module 400.
Optionally, the apparatus further comprises a memory 1002, the memory 1002 being for storing computer programs or instructions. The memory 1002 may be internal to the processor or external to the processor. In the case where the unit modules described in fig. 10 are implemented by software, software or program codes required for the processor 1001 to perform the corresponding actions are stored in the memory 1002. The processor 1001 is configured to execute the program or instructions in the memory 1002 to implement the steps shown in fig. 9 in the above embodiment or implement the functions of the modules shown in fig. 5 in the above embodiment.
Optionally, the apparatus further comprises a communication interface 1003, which may be used for communication between the apparatus and other apparatuses, for example, acquiring data collected by a vision system or a radar system, acquiring sensor data, and the like. The processor 1001 is coupled to the communication interface 1003 and is configured to execute the program or instructions in the memory 1002 to implement the scheme provided by the embodiment shown in fig. 9 or the functions of the modules shown in fig. 5. The modules include the dangerous scene recognition module 100, the driver state monitoring module 200, the driver operation perception module 300, and the execution module 400.
For example, the processor 1001 may be configured to determine, according to a motion trajectory of a first vehicle and a motion trajectory of a first target, that a first scene is a non-dangerous scene or a dangerous scene, where the first scene is a scene in which the first vehicle and the first target are located, a potential safety hazard exists in the first scene, and the potential safety hazard of a dangerous scene is higher than that of the first scene; in the case where the first scene is a non-dangerous scene, to determine the state of the driver according to the heart rate of the driver, and to determine the operation intention of the driver (acceleration, deceleration, or idling) according to the acceleration of the first vehicle, where the state of the driver is a normal state when the heart rate of the driver is less than a second threshold, a tense state when the heart rate of the driver is greater than or equal to the second threshold and less than a first threshold, and a panic state when the heart rate of the driver is greater than or equal to the first threshold, the second threshold and the first threshold being the heart rate thresholds corresponding to the first scene; and to execute an emergency measure in the case where the state of the driver is a panic state and the operation intention is acceleration or idling.
Optionally, the processor 1001 may obtain the driver's force of stepping on the accelerator pedal and/or the brake pedal through the communication interface 1003, and determine the acceleration of the first vehicle according to the driver's force of stepping on the accelerator pedal and/or the brake pedal. Wherein, there is a corresponding relation between the acceleration of the first vehicle and the pressure value of the accelerator pedal and/or the brake pedal.
Optionally, the processor 1001 may obtain the heart rate monitored by the wearable device in real time through the communication interface 1003.
In one possible implementation, the processor 1001 may further be configured to: and correcting the second threshold and/or the first threshold according to the first information, wherein the first information comprises second information and/or the operation intention of the driver, and the second information is used for indicating that the first scene is a non-dangerous scene or a dangerous scene.
In one possible implementation, the processor 1001 may be configured to: determining a first collision time, which is a time required for the first vehicle to contact the first target while traveling at the current speed, and a first deceleration, which is a minimum deceleration required to enable the first vehicle to stop moving when the first vehicle comes into contact with the first target, from the movement locus of the first vehicle and the movement locus of the first target; based on the first time to collision and the first deceleration, the first scene is determined to be a non-hazardous scene or a hazardous scene.
Alternatively, the processor 1001 may receive data collected by a sensor (such as an acceleration sensor, a radar system, or a vision system) through the communication interface to determine the movement track of the first vehicle and the movement track of the first target.
In one possible implementation, the processor 1001 may be configured to: determining that the first scene is a non-dangerous scene if the first time to collision is greater than a third threshold and/or the first deceleration is less than a fourth threshold; or determining that the first scene is a dangerous scene under the condition that the first collision time is less than or equal to a third threshold value and the first deceleration is greater than or equal to a fourth threshold value; wherein the third threshold is a minimum time to collision in the first scenario and the fourth threshold is a maximum deceleration in the first scenario.
In one possible implementation, the processor 1001 may further be configured to: and modifying the third threshold value and/or the fourth threshold value according to third information, wherein the third information comprises one or more of the following items: second information for indicating that the first scene is a non-dangerous scene or a dangerous scene; the state of the driver; or, the operation intention of the driver.
In one possible embodiment, in the case that the third information includes the second information, the state of the driver, and the operation intention of the driver, the processor 1001 may be configured to: reducing the third threshold value when the first scene is a non-dangerous scene, the state of the driver is a tension state or a panic state, and the operation intention of the driver is deceleration; alternatively, when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration, the third threshold value is increased.
In one possible embodiment, in the case that the third information includes the second information, the state of the driver, and the operation intention of the driver, the processor 1001 may be configured to: reducing the fourth threshold value when the first scene is a non-dangerous scene, the state of the driver is a tension state or a panic state, and the operation intention of the driver is deceleration; alternatively, when the first scene is a dangerous scene, the state of the driver is a normal state, and the operation intention of the driver is idling or acceleration, the fourth threshold value is increased.
In the case where the memory 1002 is disposed outside the processor, the memory 1002, the processor 1001, and the communication interface 1003 are connected to each other by a bus 1004, and the bus 1004 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. It should be understood that the bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 10, but that does not indicate only one bus or one type of bus.
An embodiment of the present application further provides a chip system, including: a processor coupled to a memory for storing a program or instructions that, when executed by the processor, cause the system-on-chip to implement the method of any of the above method embodiments.
Optionally, the system on a chip may have one or more processors. The processor may be implemented by hardware or by software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory.
Optionally, there may also be one or more memories in the system-on-chip. The memory may be integrated with the processor or may be separate from the processor, which is not limited in this application. For example, the memory may be a non-transitory memory, such as a read-only memory (ROM), which may be integrated with the processor on the same chip or separately disposed on different chips; the type of the memory and the arrangement of the memory and the processor are not particularly limited in this application.
The system-on-chip may be, for example, a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a system on chip (SoC), a Central Processing Unit (CPU), a Network Processor (NP), a digital signal processing circuit (DSP), a Microcontroller (MCU), a Programmable Logic Device (PLD), or other integrated chips.
It will be appreciated that the steps of the above described method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The embodiment of the present application further provides a computer-readable storage medium, where computer-readable instructions are stored in the computer-readable storage medium, and when the computer-readable instructions are read and executed by a computer, the computer is enabled to execute the method in any of the above method embodiments.
The embodiments of the present application further provide a computer program product, which when read and executed by a computer, causes the computer to execute the method in any of the above method embodiments.
It should be understood that the processor mentioned in the embodiments of the present application may be a Central Processing Unit (CPU), and may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory referred to in the embodiments of the application may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
It should be noted that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (memory module) is integrated in the processor.
It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.