
CN115837919A - Interactive behavior decision method and device for automatic driving vehicle and automatic driving vehicle - Google Patents


Info

Publication number: CN115837919A
Application number: CN202211435273.XA
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 刘金鑫, 李文博, 彭亮, 王超, 包帅, 姚萌
Applicant and current assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Prior art keywords: vehicle, interaction, interactive, behavior decision, decision model

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The disclosure provides an interactive behavior decision method and apparatus for an autonomous vehicle, and an autonomous vehicle, relating to the field of artificial intelligence and in particular to autonomous driving technology. The method comprises: acquiring driving environment information of the autonomous vehicle, the driving environment information comprising motion state information of the autonomous vehicle, motion state information of traffic participants around the autonomous vehicle, and road information; determining, according to the driving environment information, the interaction scene in which the autonomous vehicle is located and the degree of interaction between the autonomous vehicle and an interaction object; and determining a target vehicle interactive behavior decision model based on the interaction scene and the interaction degree, and outputting a decision instruction through the target vehicle interactive behavior decision model, wherein the target model is one of a plurality of pre-trained vehicle interactive behavior decision models. The method improves the generalization ability of the autonomous vehicle's interactive behavior decisions in dynamic, uncertain environments.

Description

Interactive behavior decision method and device for automatic driving vehicle and automatic driving vehicle
Technical Field
The present disclosure relates to autonomous driving technology in the field of artificial intelligence, and in particular to an interactive behavior decision method and apparatus for an autonomous vehicle, and to an autonomous vehicle.
Background
With the rapid development of autonomous driving technology and the growing adoption of autonomous vehicles, a major challenge in achieving high-level autonomy is the dynamics and uncertainty of the real traffic environment, reflected mainly in the dynamic, uncertain behavior of traffic participants. When an autonomous vehicle interacts with other traffic participants, how it makes interactive behavior decisions in this dynamic, uncertain environment is both a key problem of autonomous driving technology and an important link in achieving intelligent safety.
Current interactive behavior decision methods mainly perform situation cognition and then behavior decision in sequence on perceived surrounding-environment data, the situation cognition chiefly comprising vehicle intention recognition, vehicle trajectory prediction, and environmental risk assessment; however, the generalization ability of such methods for interactive behavior decisions in dynamic, uncertain environments is poor.
Disclosure of Invention
The disclosure provides an interactive behavior decision method and apparatus for an autonomous vehicle, and an autonomous vehicle, with high generalization ability in dynamic, uncertain environments.
According to a first aspect of the present disclosure, there is provided an interactive behaviour decision method for an autonomous vehicle, comprising:
acquiring driving environment information of an autonomous vehicle, wherein the driving environment information comprises motion state information of the autonomous vehicle, motion state information of traffic participants around the autonomous vehicle and road information;
determining, according to the driving environment information, the interaction scene in which the autonomous vehicle is located and the degree of interaction between the autonomous vehicle and an interaction object;
determining a target vehicle interactive behavior decision model based on the interaction scene and the interaction degree, and outputting a decision instruction through the target vehicle interactive behavior decision model, wherein the target vehicle interactive behavior decision model is one of a plurality of pre-trained vehicle interactive behavior decision models.
According to a second aspect of the present disclosure, there is provided an interactive behaviour decision making apparatus for an autonomous vehicle, comprising:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring driving environment information of an automatic driving vehicle, and the driving environment information comprises motion state information of the automatic driving vehicle, motion state information of traffic participants around the automatic driving vehicle and road information;
the first processing module is used for determining an interaction scene where the automatic driving vehicle is located and the interaction degree of the automatic driving vehicle and an interaction object according to the driving environment information;
and the second processing module is used for determining a target vehicle interactive behavior decision model based on the interactive scene and the interactive degree so as to output a decision instruction through the target vehicle interactive behavior decision model, wherein the target vehicle interactive behavior decision model is one of a plurality of vehicle interactive behavior decision models trained in advance.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the first aspect described above.
According to a fifth aspect of the present disclosure, there is provided a computer program product, the program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program, execution of the computer program by the at least one processor causing the electronic device to perform the method of the first aspect.
According to a sixth aspect of the present disclosure, there is provided an autonomous vehicle comprising the electronic device according to the third aspect.
The technical solution of the present disclosure improves the generalization ability of interactive behavior decisions of autonomous vehicles in dynamic, uncertain environments.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flowchart of an interactive behavior decision method for an autonomous vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a parallel merging scene according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an oblique merging scene according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a T-shaped merging scene according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a lane increase scene according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a lane decrease scene according to an embodiment of the present disclosure;
FIG. 7 is a schematic framework diagram of an interactive behavior decision method for an autonomous vehicle according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an interactive behavior decision apparatus for an autonomous vehicle according to an embodiment of the present disclosure;
FIG. 9 is a schematic block diagram of an electronic device for implementing the interactive behavior decision method for an autonomous vehicle according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Current interactive behavior decision methods are mainly based on a layered decision framework: perceived surrounding-environment data are used to perform situation cognition on the interacting vehicle and then to make a behavior decision, the situation cognition chiefly comprising vehicle intention recognition, vehicle trajectory prediction, and environmental risk assessment. However, the behavior of traffic participants is highly dynamic and random, and interaction scenes are complex, so the generalization ability of such methods for interactive behavior decisions in dynamic, uncertain environments is poor.
To make fuller use of the perceived surrounding-environment data, some schemes provide an end-to-end decision framework: an interactive behavior decision model is obtained using a deep neural network or deep reinforcement learning, the perceived data serve as the model's input, and the model directly outputs the decision result. This approach, however, still suffers from poor generalization ability.
Therefore, the present disclosure obtains a plurality of interactive behavior decision models through pre-training for different driving environments. At application time, the interaction scene in which the autonomous vehicle is located is first identified, the degree of interaction between the autonomous vehicle and an interaction object in that scene is determined, and subsequent processing based on the identified scene and degree selects an appropriate model from the plurality of interactive behavior decision models for behavior decision, thereby improving the generalization ability of interactive behavior decisions in dynamic, uncertain environments.
Hereinafter, the interactive behavior decision method of the autonomous vehicle provided by the present disclosure will be described in detail by specific embodiments. It is to be understood that the following detailed description may be combined with other embodiments, and that the same or similar concepts or processes may not be repeated in some embodiments.
FIG. 1 is a schematic flowchart of an interactive behavior decision method for an autonomous vehicle according to an embodiment of the present disclosure. The method is executed by an interactive behavior decision apparatus of an autonomous vehicle, which may be implemented in software and/or hardware. As shown in FIG. 1, the method includes:
s101, acquiring driving environment information of the automatic driving vehicle, wherein the driving environment information comprises motion state information of the automatic driving vehicle, motion state information of traffic participants around the automatic driving vehicle and road information.
The driving environment information is a basis for interactive behavior decision of the autonomous vehicle, and the driving environment information may be acquired by the autonomous vehicle through a sensing system, such as a camera, a sensor, a radar, or the like, or may also be acquired by the autonomous vehicle from a high-precision map, or may also be acquired by directly or indirectly performing information interaction with roadside equipment.
Alternatively, the motion state information of the autonomous vehicle and the surrounding traffic participants of the autonomous vehicle may include position, speed and acceleration, the surrounding traffic participants may be vehicles or pedestrians, and when the surrounding traffic participants are vehicles, the motion state information may further include types of vehicles, such as non-motorized vehicles, special-purpose vehicles (e.g., road sweepers, sprinklers, fire trucks, ambulances, etc.), regular motor vehicles, etc. The road information may include road structure information as well as road traffic regulations, such as dashed solid lines, highest speed limits, turn lane markings, etc.
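As an illustrative sketch only (the patent does not specify any data structures; all field names below are our own assumptions), the driving environment information described above might be grouped as follows:

```python
from dataclasses import dataclass

@dataclass
class MotionState:
    # Position in a map frame, plus speed and acceleration along the lane.
    x: float
    y: float
    speed: float
    acceleration: float
    vehicle_type: str = "regular"  # e.g. "non-motorized", "special-purpose", "regular"

@dataclass
class RoadInfo:
    lane_markings: list   # road structure, e.g. ["dashed", "solid"]
    speed_limit: float    # highest speed limit, m/s
    turn_markings: list   # turn-lane markings

@dataclass
class DrivingEnvironment:
    ego: MotionState      # the autonomous vehicle itself
    participants: list    # surrounding traffic participants (MotionState each)
    road: RoadInfo

env = DrivingEnvironment(
    ego=MotionState(0.0, 0.0, 15.0, 0.0),
    participants=[MotionState(20.0, 3.5, 14.0, 0.5)],
    road=RoadInfo(["dashed"], 33.3, ["left", "straight"]),
)
```

In practice these fields would be populated from the perception system, the high-precision map, or roadside equipment, as described above.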
S102, determining an interaction scene where the automatic driving vehicle is located and the interaction degree of the automatic driving vehicle and the interaction object according to the driving environment information.
Traffic rules may differ between interaction scenes, and so may the behavior of traffic participants, so the interaction scene in which the autonomous vehicle is currently located needs to be determined from the driving environment information. Within an interaction scene, the degree to which the autonomous vehicle and the interaction object influence each other also varies; different interaction degrees characterize this mutual influence, which directly affects the subsequent behavior decision of the autonomous vehicle. Therefore, after the interaction scene is determined, the degree of interaction between the autonomous vehicle and the interaction object must also be determined.
S103, determining a target vehicle interactive behavior decision model based on the interactive scene and the interactive degree, and outputting a decision instruction through the target vehicle interactive behavior decision model, wherein the target vehicle interactive behavior decision model is one of a plurality of vehicle interactive behavior decision models trained in advance.
The vehicle interactive behavior decision models in this embodiment are models pre-trained for different driving environments. Based on the interaction scene and the interaction degree, the autonomous vehicle further assesses the driving environment and selects an appropriate target vehicle interactive behavior decision model from the plurality of models, thereby obtaining the most suitable decision instruction.
In the interactive behavior decision method described above, the interaction scene in which the autonomous vehicle is located is identified, the degree of interaction between the autonomous vehicle and the interaction object in that scene is determined, and subsequent processing based on the identified scene and degree selects an appropriate model from the plurality of interactive behavior decision models for behavior decision. This improves the generalization ability of interactive behavior decisions in dynamic, uncertain environments, realizes interactive behavior decisions that address the dynamic uncertainty of the actual traffic environment, and ensures that the autonomous vehicle runs safely and reasonably.
On the basis of the above-described embodiments, how to determine the interaction scenario and the interaction degree will be described first.
Optionally, the interaction scene in which the autonomous vehicle is located is determined according to the road structure information, the interaction scene being any one of the following: a parallel merging scene, an oblique merging scene, a T-shaped merging scene, a lane increase scene, or a lane decrease scene; and the degree of interaction between the autonomous vehicle and the interaction object is determined according to the interaction scene, the motion state information of the autonomous vehicle, and the motion state information of the surrounding traffic participants.
In this embodiment, the interaction scenes are classified on the basis of the road structure information; FIGS. 2 to 6 illustrate, in order, vehicle interaction diagrams of the parallel merging, oblique merging, T-shaped merging, lane increase, and lane decrease scenes. Besides the interaction scene itself, the degree of interaction between the autonomous vehicle and the interaction object also influences the subsequent decision. Therefore, after the interaction scene is determined, an interaction object that interacts with the autonomous vehicle is identified among the surrounding traffic participants, and the degree of interaction between the autonomous vehicle and that object is determined from the interaction scene and the motion state information of the autonomous vehicle and the surrounding traffic participants.
For example, as shown in FIGS. 2 to 6, vehicle A is the autonomous vehicle, vehicles B, C, and D are surrounding traffic participants, and vehicle B is the interaction object that interacts with vehicle A. The positions, speeds, accelerations and the like of the autonomous vehicle and the interaction object all affect the interaction degree. Optionally, the interaction scene, the motion state information of the autonomous vehicle, and the motion state information of the surrounding traffic participants are input into an interaction degree discrimination model to obtain the degree of interaction between the autonomous vehicle and the interaction object. The interaction degree may be expressed as a numerical value or as a level; for example, the levels from low to high may be weak, medium, and strong, these levels being merely illustrative and not limiting. The interaction degree discrimination model may be developed based on rules or on classical machine learning methods. In this way, subsequent decision processing is performed in a targeted manner for different interaction scenes and interaction degrees, improving scene generalization ability.
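The interaction degree discrimination model is described only abstractly; a minimal rule-based sketch, with invented features and thresholds, might look like this (the real model could equally be a learned classifier):

```python
def interaction_degree(gap_m, closing_speed_mps, time_to_merge_s):
    """Illustrative rule-based stand-in for the interaction degree
    discrimination model. The features (gap, closing speed, time to
    merge) and the thresholds are invented for this example and are
    not taken from the disclosure."""
    if gap_m > 50 or time_to_merge_s > 10:
        return "weak"      # interaction object barely affects the ego vehicle
    if closing_speed_mps > 2 and time_to_merge_s < 4:
        return "strong"    # imminent conflict in the merge area
    return "medium"

# A distant, slow-closing vehicle yields a weak interaction degree:
print(interaction_degree(60.0, 0.0, 12.0))
```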
After the interaction scene and the interaction degree are determined, how to determine the target vehicle interaction behavior decision model based on the interaction scene and the interaction degree is further explained.
Optionally, if the interaction degree is greater than or equal to a preset degree, whether a preset event exists in the interaction scene is determined according to the driving environment information. If a preset event exists, a third vehicle interactive behavior decision model is determined as the target vehicle interactive behavior decision model, the third model being a pre-trained vehicle interactive behavior decision model for the preset-event case, whose input is the driving environment information and whose output is a decision instruction. If no preset event exists, the movement intention of the interaction object is determined from the motion state information of the interaction object, and the target vehicle interactive behavior decision model is determined based on that movement intention.
Optionally, if the interaction degree is smaller than the preset degree, no subsequent interactive behavior decision is needed, and the autonomous vehicle can simply make a normal driving decision.
It can be understood that an interaction degree below the preset degree indicates that the interaction object has no or minimal influence on the driving of the autonomous vehicle, or that no interaction object exists in the scene, so no behavior decision based on vehicle interaction is required. If the interaction degree is greater than or equal to the preset degree, the interaction object may influence the driving of the autonomous vehicle, so a subsequent interactive behavior decision is needed.
When the interaction degree is greater than or equal to the preset degree, the autonomous vehicle further judges whether a preset event, i.e. a predefined special event, exists in the interaction scene. For example, the preset event includes at least one of the following: a vehicle reversing in the interaction area corresponding to the interaction scene; a large motor vehicle starting or stopping in that interaction area; the interaction object being a vulnerable road user (a non-motorized vehicle or a pedestrian) or a special-purpose vehicle; or the traffic flow in the interaction area being greater than a preset traffic flow. A preset event may increase the driving risk of the autonomous vehicle and requires special handling, so after the interaction scene and interaction degree are determined, the existence of a preset event is checked. If one exists, the third vehicle interactive behavior decision model is taken directly as the target model to make the interactive behavior decision for this special case. Optionally, the third vehicle interactive behavior decision model may be developed based on rules. This realizes targeted handling of special events in the actual dynamic traffic environment.
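A minimal sketch of the preset-event check, assuming a plain dictionary of perception flags and an invented traffic-flow threshold (the patent specifies the event list but no concrete representation):

```python
def has_preset_event(area):
    """Returns True if any preset event from the disclosure is present:
    reversing in the interaction area, a large motor vehicle starting or
    stopping, a vulnerable road user or special-purpose vehicle as the
    interaction object, or traffic flow above a preset threshold.
    `area` is a dict of perception outputs; keys and the threshold
    value are assumptions for illustration."""
    PRESET_FLOW = 30  # vehicles/min, invented threshold
    return bool(
        area.get("reversing_in_area", False)
        or area.get("large_vehicle_start_stop", False)
        or area.get("interaction_object_type")
            in ("pedestrian", "non-motorized", "special-purpose")
        or area.get("traffic_flow", 0) > PRESET_FLOW
    )
```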
If no preset event exists, the movement intention of the interaction object is determined from its motion state information. The movement intention, which may also be called the driving intention, can be, for example, cutting in or going straight; recognizing it allows the driving risk caused by the uncertainty of the interaction object's intention to be reasonably avoided. Optionally, the motion state information of the interaction object is input into a vehicle movement intention recognition model to obtain an intention recognition result, i.e. the probability that the interaction object will adopt a certain movement intention. The vehicle movement intention recognition model may be developed based on a deep neural network or a dynamic Bayesian network.
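The patent leaves the intention recognition model abstract (a deep neural network or dynamic Bayesian network); as a toy stand-in, a logistic function of invented features can illustrate the probabilistic output:

```python
import math

def cut_in_probability(lateral_speed_mps, lateral_offset_m):
    """Toy stand-in for the vehicle movement intention recognition model.
    The features and weights are invented for illustration; a real model
    would be learned. Returns the probability that the interaction object
    intends to cut in (vs. going straight)."""
    z = 2.0 * lateral_speed_mps + 0.5 * lateral_offset_m - 1.0
    return 1.0 / (1.0 + math.exp(-z))

def recognize_intention(lateral_speed_mps, lateral_offset_m):
    p = cut_in_probability(lateral_speed_mps, lateral_offset_m)
    return ("cut-in" if p >= 0.5 else "straight"), p
```

A vehicle drifting toward the ego lane with noticeable lateral speed yields a high cut-in probability, while a vehicle tracking its lane center yields a low one.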
How to determine the target vehicle interaction behavior decision model based on the movement intention of the interaction object is further described below.
Optionally, if the movement intention of the interaction object is a preset high-risk movement intention, a risk value of the environment in which the autonomous vehicle is located is determined from the driving environment information and the movement intention. If the risk value is greater than or equal to a preset risk value, a first vehicle interactive behavior decision model is determined as the target model: a pre-trained vehicle interactive behavior decision model for the case of irrational driving by the interaction object, whose input is the driving environment information and whose output is a decision instruction. If the risk value is smaller than the preset risk value, a second vehicle interactive behavior decision model is determined as the target model: a pre-trained vehicle interactive behavior decision model for the case of rational driving by the interaction object, likewise taking the driving environment information as input and outputting a decision instruction.
When the movement intention of the interaction object is a high-risk intention, for example a high-risk cut-in intention, the risk value of the environment is evaluated directly, determining the real-time risk of the dynamic traffic environment so as to ensure a safe vehicle decision. Optionally, the driving environment information and the movement intention of the interaction object are input into a risk assessment model to obtain the risk value of the current environment. If the risk value is greater than or equal to the preset risk value, the vehicle interactive behavior decision model for the irrational-driving case is used for the behavior decision; otherwise the model for the rational-driving case is used. Optionally, the first vehicle interactive behavior decision model may be developed based on rules, and the second based on a data-driven method.
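The threshold dispatch between the first and second models can be sketched directly (the preset risk value of 0.7 and the model names are assumptions for illustration, not values from the disclosure):

```python
def select_decision_model(risk_value, preset_risk=0.7):
    """Dispatch for the high-risk-intention branch: a risk value at or
    above the preset threshold selects the first model (irrational-driving
    case), otherwise the second model (rational-driving case)."""
    if risk_value >= preset_risk:
        return "first_model_irrational"
    return "second_model_rational"
```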
Optionally, if the movement intention of the interaction object is a preset low-risk movement intention, the predicted trajectory of the interaction object is determined from its motion state information and movement intention, so as to avoid driving risks caused by the uncertainty of the vehicle's motion trajectory; the motion characteristic of the interaction object is then determined from its motion state information, movement intention, and predicted trajectory. If the motion characteristic is irrational driving, the first vehicle interactive behavior decision model is determined as the target model; if the motion characteristic is rational driving, the second vehicle interactive behavior decision model is determined as the target model. In either case, the input of the model is the driving environment information and the output is a decision instruction.
When the movement intention of the interaction object is a preset low-risk movement intention, the risk value of the environment need not be evaluated directly; instead, the trajectory of the interaction object is predicted. Optionally, the motion state information and movement intention of the interaction object are input into a vehicle motion prediction model to obtain the predicted track points of the interaction object over a future period. The vehicle motion prediction model may be developed based on a deep neural network or a conventional machine learning method. The motion characteristic of the interaction object is then determined from its motion state information, movement intention, and predicted trajectory; optionally, motion characteristics are divided into rational driving and irrational driving. Optionally, these three inputs are fed into a vehicle motion characteristic recognition model to obtain the motion characteristic, which may also be called the driving style and reflects the style of the interaction object's driver. If the characteristic is irrational driving, the decision model for the irrational-driving case is used; if rational, the model for the rational-driving case is used. Analyzing the motion characteristic avoids the risk that uncertainty in the driver's style poses to the autonomous vehicle's decision.
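As an illustrative stand-in for the learned vehicle motion prediction model, a constant-velocity extrapolation shows the shape of the predicted track points (the real model would be a deep neural network or classical machine-learning predictor):

```python
def predict_trajectory(x, y, vx, vy, horizon_s=3.0, dt=0.5):
    """Constant-velocity extrapolation as a stand-in for the vehicle
    motion prediction model: returns predicted (x, y) track points for
    the interaction object over a future period. Horizon and step are
    invented values for the example."""
    steps = int(horizon_s / dt)
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(1, steps + 1)]

# Six track points at 0.5 s intervals for a vehicle moving at 10 m/s:
trajectory = predict_trajectory(0.0, 0.0, 10.0, 0.0)
```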
FIG. 7 is a schematic framework diagram of the interactive behavior decision method for an autonomous vehicle according to an embodiment of the present application. As shown in FIG. 7, the framework includes four parts. The driving environment information part comprises the motion state information of the autonomous vehicle, the motion state information of the surrounding traffic participants, and the road information. The interaction scene preprocessing part comprises interaction scene classification, interaction degree judgment, and special scene handling, where special scene handling is the preset-event handling described above. The interacting-vehicle motion cognition part comprises movement intention recognition, motion trajectory prediction, and motion characteristic analysis. The vehicle interactive behavior decision part comprises dynamic environment risk assessment, the vehicle interactive behavior decision for the irrational-driving case, the decision for the rational-driving case, and the decision for special scenes. The processing logic of each part is as described in the preceding embodiments.
The interactive behavior decision method for an autonomous vehicle provided by the present disclosure is further described below with reference to a specific example, taking the parallel convergence scene shown in fig. 2 as an example. Vehicle A is an autonomous vehicle, vehicles B, C, and D are surrounding traffic participants, and vehicle B is the interactive object that interacts with vehicle A.
Step 1, driving environment information of the vehicle A is acquired. The acquired driving environment information mainly comprises: the motion state information of the vehicle A, including position, speed, acceleration, and the like; the motion state information of the other vehicles around the vehicle A (the vehicle B, the vehicle C, and the vehicle D), including vehicle type, position, speed, acceleration, and the like; and the road structure information of the current interaction area, including dashed lines, solid lines, curbs, greenery, fences, and the like of the road surface, as well as traffic rules, including the highest speed limit, road surface steering signs, and the like.
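The driving environment information enumerated in step 1 can be sketched as a plain data structure. The field names and units below are illustrative assumptions for exposition and are not prescribed by the disclosure:

```python
from dataclasses import dataclass


@dataclass
class VehicleState:
    vehicle_type: str     # e.g. "car", "truck"
    position: tuple       # (x, y) in meters
    speed: float          # m/s
    acceleration: float   # m/s^2


@dataclass
class DrivingEnvironment:
    ego: VehicleState        # motion state of the autonomous vehicle (vehicle A)
    surrounding: list        # VehicleState of vehicles B, C, D, ...
    lane_markings: list      # e.g. ["dashed", "solid"]
    speed_limit: float       # highest speed limit, m/s
```

Downstream modules (scene classification, interaction degree judgment, the decision models) would all consume an object of this shape.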
Step 2, interactive scene preprocessing, which comprises the following steps:
Step 2.1, interactive scene classification.
Based on the road structure information, the interaction scene where the vehicle A is located is determined to be a parallel convergence scene.
Step 2.2, interaction degree judgment.
Based on the parallel convergence scene, the relative positions and relative speeds between the vehicles are calculated according to the motion state information of the vehicle A and the surrounding vehicles (the vehicle B, the vehicle C, and the vehicle D), and the strength of the driving-behavior interaction between the vehicles is then judged.
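The interaction degree judgment in step 2.2 can be sketched as a threshold rule over relative distance and closing speed. The thresholds and the two-level grading below are illustrative assumptions; the disclosure does not fix how the grade is computed:

```python
import math


def interaction_degree(ego_pos, ego_vel, other_pos, other_vel,
                       dist_threshold=30.0, closing_threshold=0.0):
    """Grade interaction strength from relative distance and closing speed."""
    dx = other_pos[0] - ego_pos[0]
    dy = other_pos[1] - ego_pos[1]
    dist = math.hypot(dx, dy)
    # Closing speed: projection of the relative velocity onto the
    # line of sight; positive means the vehicles are approaching.
    rvx = other_vel[0] - ego_vel[0]
    rvy = other_vel[1] - ego_vel[1]
    closing = -(rvx * dx + rvy * dy) / dist if dist > 0 else 0.0
    if dist < dist_threshold and closing > closing_threshold:
        return "strong"
    return "weak"
```

For example, a slower lead vehicle 20 m ahead of a faster ego vehicle yields a positive closing speed and a "strong" grade, so the flow continues to the special-scene check and motion cognition steps.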
Step 2.3, special scene processing.
When the interaction degree level is strong, it is determined whether a special scene, that is, the preset event in the foregoing embodiments, exists. If the traffic flow of the current interaction area is greater than or equal to the preset traffic flow, the method proceeds to step 4.4; if the traffic flow in the interaction area is less than the preset traffic flow, no special scene needs to be processed, and step 3 is executed.
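Combining step 2.3 with the preset-event list given later in the disclosure (reversing vehicles, large vehicles starting or stopping, vulnerable road users or special-purpose vehicles, excessive traffic flow), the check can be sketched as follows. The event labels and default threshold are illustrative assumptions:

```python
# Illustrative labels for the preset events enumerated in the disclosure.
PRESET_EVENTS = {
    "reversing_vehicle",
    "large_vehicle_start_stop",
    "vulnerable_road_user",
    "special_purpose_vehicle",
}


def needs_special_scene_handling(observed_events, traffic_flow, preset_flow=20):
    """Return True when any preset event is observed in the interaction
    area, or the traffic flow reaches the preset threshold, so that the
    flow routes to the special-scene decision model (step 4.4)."""
    return bool(PRESET_EVENTS & set(observed_events)) or traffic_flow >= preset_flow
```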
Step 3, interactive vehicle motion cognition, where the interactive vehicle is the interaction object.
Step 3.1, movement intention recognition of the interactive vehicle.
Based on the motion state information of the vehicle B, a movement intention recognition result of the vehicle B, that is, the probability that the vehicle B adopts each possible movement intention, is output using the vehicle movement intention recognition model. In the parallel convergence scene shown in fig. 2, the possible movement intentions of the vehicle B are going straight (lane keeping) and changing to the left lane, with recognized intention probabilities {P_LK, P_LCL}. If the output lane-keeping probability P_LK is high, that is, the movement intention is going straight, the method proceeds to step 3.2; if the output left-lane-change probability P_LCL is high, that is, the movement intention is changing to the left lane and a certain collision risk exists, the method proceeds to step 4.1, dynamic environment risk assessment.
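The branching in step 3.1 on {P_LK, P_LCL} can be sketched as a small router. Which intentions count as high-risk, and the names below, are illustrative assumptions:

```python
def route_by_intention(p_lk, p_lcl, high_risk_intents=("LCL",)):
    """Pick the next step from the intention probabilities {P_LK, P_LCL}.

    LK = lane keeping (going straight); LCL = left lane change.
    A high-risk intention routes to dynamic environment risk assessment
    (step 4.1); a low-risk one routes to trajectory prediction (step 3.2).
    """
    intent = "LK" if p_lk >= p_lcl else "LCL"
    if intent in high_risk_intents:
        return intent, "step_4.1_risk_assessment"
    return intent, "step_3.2_trajectory_prediction"
```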
Step 3.2, motion trajectory prediction of the interactive vehicle.
Based on the motion state information and the movement intention of the vehicle B, the predicted track points {x_t, y_t}_{t:t+Δt} of the vehicle B over a future period Δt from the current time t are output using the vehicle motion prediction model.
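A minimal stand-in for the prediction model in step 3.2 is a constant-velocity rollout producing the track points {x_t, y_t} over the horizon. This is only a placeholder under the assumption of straight-line motion; the disclosure's model may be a learned predictor:

```python
def predict_track_points(x0, y0, vx, vy, dt=0.5, steps=6):
    """Constant-velocity rollout standing in for the vehicle motion
    prediction model: returns predicted points {x_t, y_t} sampled every
    dt seconds over the horizon Δt = steps * dt."""
    return [(x0 + vx * k * dt, y0 + vy * k * dt) for k in range(1, steps + 1)]
```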
Step 3.3, motion characteristic analysis of the interactive vehicle.
Based on the motion state information, the movement intention, and the predicted trajectory of the vehicle B, probability values of the vehicle B exhibiting different motion characteristics are output using the vehicle motion characteristic recognition model. If the rational-driving probability P_Norm output by the vehicle motion characteristic recognition model is high, that is, the motion characteristic is rational driving, the method proceeds to step 4.3; if the output irrational-driving probability P_AB_Norm is high, that is, the motion characteristic is irrational driving, the method proceeds to step 4.2, the vehicle interactive behavior decision under the irrational driving condition.
Step 4, vehicle interactive behavior decision. A decision instruction of the vehicle A at the next time t+1 is output, including a longitudinal decision instruction and a lateral decision instruction of the vehicle A.
Step 4.1, dynamic environment risk assessment.
Real-time risk assessment of the dynamic traffic environment of the vehicle A is performed based on the parallel convergence scene. The motion state information of the vehicle A and the surrounding vehicles (the vehicle B, the vehicle C, and the vehicle D), the road structure information and traffic rule information, and the movement intention of the vehicle B are input into the risk assessment model, and the risk value of the current traffic environment is output. If the risk value exceeds the preset risk value, the method proceeds to step 4.2; if the risk value is lower than the preset risk value, the method proceeds to step 4.3.
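The thresholding in step 4.1 reduces to a one-line router once the risk model has produced a scalar risk value. The preset threshold of 0.7 below is an illustrative assumption:

```python
def route_by_risk(risk_value, preset_risk=0.7):
    """Threshold the assessed environment risk: at or above the preset
    value, use the decision model for the irrational driving condition
    (step 4.2); below it, use the rational-driving model (step 4.3)."""
    if risk_value >= preset_risk:
        return "step_4.2_irrational_model"
    return "step_4.3_rational_model"
```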
Step 4.2, vehicle interactive behavior decision under the irrational driving condition.
The driving environment information is input into the vehicle interactive behavior decision model under the irrational driving condition, and the interactive behavior decision result of the vehicle A, that is, a decision instruction, is output; the output decision instruction is then transmitted to the automatic driving planning system of the vehicle A.
Step 4.3, vehicle interactive behavior decision under the rational driving condition.
The driving environment information is input into the vehicle interactive behavior decision model under the rational driving condition, and the interactive behavior decision result of the vehicle A, that is, a decision instruction, is output; the output decision instruction is then transmitted to the automatic driving planning system of the vehicle A.
Step 4.4, vehicle interactive behavior decision in a special scene.
The driving environment information is input into the vehicle interactive behavior decision model in the special scene, that is, the vehicle interactive behavior decision model under the condition of the preset event, and the interactive behavior decision result of the vehicle A, that is, a decision instruction, is output; the output decision instruction is then transmitted to the automatic driving planning system of the vehicle A.
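Steps 2 through 4 of the worked example condense into a single dispatcher that selects which pre-trained decision model outputs the decision instruction. The function and model names below are illustrative; the flow itself follows the steps above:

```python
def select_target_model(degree_strong, preset_event, intention_high_risk,
                        risk_high=None, irrational=None):
    """Condense the decision flow of the worked example.

    - Weak interaction degree: no interactive decision is needed.
    - Preset event present: use the special-scene model (step 4.4).
    - High-risk intention: risk assessment picks the irrational- or
      rational-driving model (steps 4.1 -> 4.2/4.3).
    - Low-risk intention: characteristic analysis picks the model
      (steps 3.2-3.3 -> 4.2/4.3).
    """
    if not degree_strong:
        return None
    if preset_event:
        return "third_model_special_scene"
    if intention_high_risk:
        return "first_model_irrational" if risk_high else "second_model_rational"
    return "first_model_irrational" if irrational else "second_model_rational"
```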
The interactive behavior decision method for an autonomous vehicle in an uncertain environment provided by the present disclosure can develop a corresponding interactive behavior decision model for each dynamic uncertain scene faced by an autonomous vehicle during deployment. It can handle the uncertainty of the dynamic scene, the uncertainty of the driving behaviors of vehicles in the scene, and the uncertainty of sudden collision risks in the scene; it has high generalization capability and can ensure the safe driving of the autonomous vehicle. In addition, the method has high interpretability, which allows problems in the development of an actual system to be traced to their source, and strong maintainability and expansibility, which allow it to handle a variety of different interaction scenes and risk scenes; it therefore has very high practical application value.
Fig. 8 is a schematic structural diagram of an interactive behavior decision device of an autonomous vehicle according to an embodiment of the present disclosure. As shown in fig. 8, the interactive behavior decision device 800 of the autonomous vehicle includes:
an obtaining module 801, configured to obtain driving environment information of an autonomous vehicle, where the driving environment information includes motion state information of the autonomous vehicle, motion state information of traffic participants around the autonomous vehicle, and road information;
the first processing module 802 is configured to determine an interaction scene where the autonomous vehicle is located and an interaction degree between the autonomous vehicle and an interaction object according to the driving environment information;
the second processing module 803 is configured to determine a target vehicle interaction behavior decision model based on the interaction scenario and the interaction degree, and output a decision instruction through the target vehicle interaction behavior decision model, where the target vehicle interaction behavior decision model is one of a plurality of vehicle interaction behavior decision models trained in advance.
Optionally, the road information includes road structure information, and the first processing module 802 includes:
the first processing unit is used for determining an interactive scene where the automatic driving vehicle is located according to the road structure information, wherein the interactive scene is any one of the following scenes: a parallel convergence scene, an oblique convergence scene, a T-shaped convergence scene, a lane increase scene and a lane decrease scene;
and the second processing unit is used for determining the interaction degree of the automatic driving vehicle and the interaction object according to the interaction scene, the motion state information of the automatic driving vehicle and the motion state information of the traffic participants around the automatic driving vehicle.
Optionally, the second processing module 803 includes:
the third processing unit is used for determining whether a preset event exists in the interactive scene according to the driving environment information if the interactive degree is greater than or equal to the preset degree;
the fourth processing unit is used for determining the movement intention of the interactive object according to the movement state information of the interactive object if the preset event does not exist;
and the fifth processing unit is used for determining a target vehicle interactive behavior decision model based on the movement intention of the interactive object.
Optionally, the fifth processing unit includes:
the first processing subunit is used for determining a risk value of the environment where the automatic driving vehicle is located according to the driving environment information and the movement intention if the movement intention of the interactive object is a preset high-risk movement intention;
the second processing subunit is used for determining the first vehicle interactive behavior decision model as a target vehicle interactive behavior decision model if the risk value is greater than or equal to a preset risk value, wherein the first vehicle interactive behavior decision model is a vehicle interactive behavior decision model under the condition of irrational driving of an interactive object trained in advance;
and the third processing subunit is used for determining the second vehicle interactive behavior decision model as the target vehicle interactive behavior decision model if the risk value is smaller than the preset risk value, wherein the second vehicle interactive behavior decision model is a pre-trained vehicle interactive behavior decision model under the condition of interactive object rational driving.
Optionally, the device 800 for deciding interactive behavior of an autonomous vehicle further includes:
the fourth processing subunit is used for determining the predicted track of the interactive object according to the motion state information and the motion intention of the interactive object if the motion intention of the interactive object is a preset low-risk motion intention;
the fifth processing subunit is used for determining the motion characteristic of the interactive object according to the motion state information, the motion intention and the predicted track of the interactive object;
the sixth processing subunit is used for determining the first vehicle interactive behavior decision model as a target vehicle interactive behavior decision model if the motion characteristic is irrational driving;
and the seventh processing subunit is used for determining the second vehicle interactive behavior decision model as the target vehicle interactive behavior decision model if the motion characteristic is rational driving.
Optionally, the device 800 for deciding interactive behavior of an autonomous vehicle further includes:
and the sixth processing unit is used for determining the third vehicle interactive behavior decision model as the target vehicle interactive behavior decision model if the preset event exists, wherein the third vehicle interactive behavior decision model is a vehicle interactive behavior decision model under the condition of the pre-trained preset event.
Optionally, the preset event includes at least one of: the method comprises the following steps that a vehicle backs up in an interaction area corresponding to an interaction scene, a large-scale motor vehicle starts and stops in the interaction area corresponding to the interaction scene, an interaction object is a vulnerable road user or a special-purpose vehicle, and the traffic flow in the interaction area corresponding to the interaction scene is larger than the preset traffic flow.
The device of the embodiment of the present disclosure may be used to execute the interactive behavior decision method for an autonomous vehicle in the above method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
The present disclosure also provides an electronic device and a non-transitory computer-readable storage medium storing computer instructions, according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides an autonomous vehicle including the foregoing electronic device.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, and the execution of the computer program by the at least one processor causes the electronic device to perform the solutions provided by any of the above embodiments.
Fig. 9 is a schematic block diagram of an electronic device for implementing an automated driving vehicle interactive behavior decision method according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device 900 includes a computing unit 901, which can perform various appropriate actions and processes in accordance with a computer program stored in a read-only memory (ROM) 902 or a computer program loaded from a storage unit 908 into a random access memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A number of components in the electronic device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 901 performs the various methods and processes described above, such as an interactive behavior decision method of an autonomous vehicle. For example, in some embodiments, the automated vehicle's interactive behavior decision method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 908. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 900 via ROM 902 and/or communications unit 909. When loaded into RAM 903 and executed by computing unit 901, may perform one or more of the steps of the above described automated vehicle's interactive behavior decision method. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the automated vehicle's interactive behavior decision method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and addresses the defects of high management difficulty and weak service extensibility in traditional physical host and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (18)

1. An interactive behavior decision method for an autonomous vehicle, comprising:
acquiring driving environment information of an autonomous vehicle, wherein the driving environment information comprises motion state information of the autonomous vehicle, motion state information of traffic participants around the autonomous vehicle and road information;
determining an interaction scene where the automatic driving vehicle is located and an interaction degree of the automatic driving vehicle and an interaction object according to the driving environment information;
determining a target vehicle interaction behavior decision model based on the interaction scene and the interaction degree, and outputting a decision instruction through the target vehicle interaction behavior decision model, wherein the target vehicle interaction behavior decision model is one of a plurality of vehicle interaction behavior decision models trained in advance.
2. The method of claim 1, wherein the road information includes road structure information, the determining an interaction scenario in which the autonomous vehicle is located and a degree of interaction of the autonomous vehicle with an interaction object according to the driving environment information includes:
determining an interactive scene where the automatic driving vehicle is located according to the road structure information, wherein the interactive scene is any one of the following scenes: a parallel convergence scene, an oblique convergence scene, a T-shaped convergence scene, a lane increase scene and a lane decrease scene;
and determining the interaction degree of the automatic driving vehicle and the interaction object according to the interaction scene, the motion state information of the automatic driving vehicle and the motion state information of the traffic participants around the automatic driving vehicle.
3. The method of claim 1 or 2, wherein the determining a target vehicle interaction behavior decision model based on the interaction scenario and the degree of interaction comprises:
if the interaction degree is greater than or equal to a preset degree, determining whether a preset event exists in the interaction scene according to the driving environment information;
if the preset event does not exist, determining the movement intention of the interactive object according to the movement state information of the interactive object;
and determining a target vehicle interaction behavior decision model based on the movement intention of the interaction object.
4. The method of claim 3, wherein the determining a target vehicle interaction behavior decision model based on the intent to move of the interaction object comprises:
if the movement intention of the interactive object is a preset high-risk movement intention, determining a risk value of the environment where the automatic driving vehicle is located according to the driving environment information and the movement intention;
if the risk value is larger than or equal to a preset risk value, determining a first vehicle interactive behavior decision model as the target vehicle interactive behavior decision model, wherein the first vehicle interactive behavior decision model is a vehicle interactive behavior decision model under the condition of pre-trained irrational driving of an interactive object;
and if the risk value is smaller than a preset risk value, determining a second vehicle interactive behavior decision model as the target vehicle interactive behavior decision model, wherein the second vehicle interactive behavior decision model is a vehicle interactive behavior decision model under the condition of interactive object rational driving trained in advance.
5. The method of claim 4, further comprising:
if the movement intention of the interactive object is a preset low-risk movement intention, determining a predicted track of the interactive object according to the movement state information of the interactive object and the movement intention;
determining the motion characteristic of the interactive object according to the motion state information of the interactive object, the motion intention and the predicted track;
if the motion characteristic is irrational driving, determining the first vehicle interactive behavior decision model as the target vehicle interactive behavior decision model;
and if the motion characteristic is rational driving, determining the second vehicle interactive behavior decision model as the target vehicle interactive behavior decision model.
6. The method of claim 3, further comprising:
and if the preset event exists, determining a third vehicle interactive behavior decision model as the target vehicle interactive behavior decision model, wherein the third vehicle interactive behavior decision model is a vehicle interactive behavior decision model under the condition of the preset event which is trained in advance.
8. The method according to any one of claims 3-6, wherein the preset event comprises at least one of: a vehicle backs up in an interaction area corresponding to the interaction scene, a large motor vehicle starts and stops in the interaction area corresponding to the interaction scene, the interaction object is a vulnerable road user or a special-purpose vehicle, and the traffic flow in the interaction area corresponding to the interaction scene is larger than a preset traffic flow.
8. An interactive behavior decision-making device for an autonomous vehicle, comprising:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring driving environment information of an automatic driving vehicle, and the driving environment information comprises motion state information of the automatic driving vehicle, motion state information of traffic participants around the automatic driving vehicle and road information;
the first processing module is used for determining an interaction scene where the automatic driving vehicle is located and the interaction degree of the automatic driving vehicle and an interaction object according to the driving environment information;
and the second processing module is used for determining a target vehicle interactive behavior decision model based on the interactive scene and the interactive degree so as to output a decision instruction through the target vehicle interactive behavior decision model, wherein the target vehicle interactive behavior decision model is one of a plurality of vehicle interactive behavior decision models trained in advance.
9. The apparatus of claim 8, wherein the road information includes road structure information, the first processing module comprising:
a first processing unit, configured to determine an interaction scenario in which the autonomous vehicle is located according to the road structure information, where the interaction scenario is any one of: a parallel convergence scene, an oblique convergence scene, a T-shaped convergence scene, a lane increase scene and a lane decrease scene;
and the second processing unit is used for determining the interaction degree of the automatic driving vehicle and the interaction object according to the interaction scene, the motion state information of the automatic driving vehicle and the motion state information of the traffic participants around the automatic driving vehicle.
10. The apparatus of claim 8 or 9, wherein the second processing module comprises:
the third processing unit is used for determining whether a preset event exists in the interactive scene according to the driving environment information if the interactive degree is greater than or equal to a preset degree;
the fourth processing unit is used for determining the movement intention of the interactive object according to the movement state information of the interactive object if the preset event does not exist;
and the fifth processing unit is used for determining a target vehicle interactive behavior decision model based on the movement intention of the interactive object.
11. The apparatus of claim 10, wherein the fifth processing unit comprises:
the first processing subunit is used for determining a risk value of the environment where the automatic driving vehicle is located according to the driving environment information and the movement intention if the movement intention of the interactive object is a preset high-risk movement intention;
the second processing subunit is configured to determine a first vehicle interactive behavior decision model as the target vehicle interactive behavior decision model if the risk value is greater than or equal to a preset risk value, where the first vehicle interactive behavior decision model is a vehicle interactive behavior decision model pre-trained for the case in which the interaction object drives irrationally;
and the third processing subunit is configured to determine a second vehicle interactive behavior decision model as the target vehicle interactive behavior decision model if the risk value is smaller than the preset risk value, where the second vehicle interactive behavior decision model is a vehicle interactive behavior decision model pre-trained for the case in which the interaction object drives rationally.
12. The apparatus of claim 11, further comprising:
the fourth processing subunit is configured to determine, if the motion intention of the interactive object is a preset low-risk motion intention, a predicted trajectory of the interactive object according to the motion state information of the interactive object and the motion intention;
the fifth processing subunit is used for determining the motion characteristic of the interactive object according to the motion state information of the interactive object, the motion intention and the predicted track;
the sixth processing subunit is configured to determine the first vehicle interaction behavior decision model as the target vehicle interaction behavior decision model if the motion characteristic is irrational driving;
and the seventh processing subunit is configured to determine the second vehicle interaction behavior decision model as the target vehicle interaction behavior decision model if the motion characteristic is rational driving.
13. The apparatus of claim 10, further comprising:
and the sixth processing unit is configured to determine a third vehicle interactive behavior decision model as the target vehicle interactive behavior decision model if the preset event exists, where the third vehicle interactive behavior decision model is a vehicle interactive behavior decision model pre-trained for the preset event.
14. The apparatus according to any one of claims 10-13, wherein the preset event comprises at least one of the following: a vehicle reversing in the interaction area corresponding to the interaction scene, a large motor vehicle starting or stopping in the interaction area corresponding to the interaction scene, the interaction object being a vulnerable road user or a special-purpose vehicle, or the traffic flow in the interaction area corresponding to the interaction scene being greater than a preset traffic flow.
15. An electronic device, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-7.
18. An autonomous vehicle comprising the electronic device of claim 15.
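Apparatus claims 10-14 above together define a selection tree over the three pre-trained models. A minimal sketch of that tree follows; the function names, the default threshold, and the boolean inputs (which the claims derive from the movement intention, risk value, and motion characteristic) are illustrative assumptions, not the patent's implementation:

```python
from enum import Enum, auto

class Model(Enum):
    IRRATIONAL = auto()    # first model: interaction object drives irrationally
    RATIONAL = auto()      # second model: interaction object drives rationally
    PRESET_EVENT = auto()  # third model: pre-trained for preset events

def select_model(preset_event: bool,
                 high_risk_intention: bool,
                 risk_value: float,
                 drives_rationally: bool,
                 preset_risk: float = 0.7) -> Model:
    """Pick the target vehicle interactive behavior decision model."""
    if preset_event:               # claim 13: preset event overrides everything
        return Model.PRESET_EVENT
    if high_risk_intention:        # claim 11: gate on the environment risk value
        return Model.RATIONAL if risk_value < preset_risk else Model.IRRATIONAL
    # claim 12: low-risk intention -> gate on the motion characteristic instead
    return Model.RATIONAL if drives_rationally else Model.IRRATIONAL
```

Note the asymmetry the claims describe: under a high-risk intention the choice between the first and second models depends on the risk value, while under a low-risk intention it depends on the rationality of the predicted motion.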
CN202211435273.XA 2022-11-16 2022-11-16 Interactive behavior decision method and device for automatic driving vehicle and automatic driving vehicle Pending CN115837919A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211435273.XA CN115837919A (en) 2022-11-16 2022-11-16 Interactive behavior decision method and device for automatic driving vehicle and automatic driving vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211435273.XA CN115837919A (en) 2022-11-16 2022-11-16 Interactive behavior decision method and device for automatic driving vehicle and automatic driving vehicle

Publications (1)

Publication Number Publication Date
CN115837919A true CN115837919A (en) 2023-03-24

Family

ID=85575664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211435273.XA Pending CN115837919A (en) 2022-11-16 2022-11-16 Interactive behavior decision method and device for automatic driving vehicle and automatic driving vehicle

Country Status (1)

Country Link
CN (1) CN115837919A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118004196A (en) * 2024-01-31 2024-05-10 清华大学 Human decision-making behavior recognition method and device for multiple driving modes under dangerous conditions
CN118701099A (en) * 2024-02-08 2024-09-27 北京科技大学 Surrounding vehicle trajectory prediction method driven by behavior cognition in multi-vehicle interference scenarios

Similar Documents

Publication Publication Date Title
CN113753077A (en) Method and device for predicting movement locus of obstacle and automatic driving vehicle
US11242050B2 (en) Reinforcement learning with scene decomposition for navigating complex environments
CN115837919A (en) Interactive behavior decision method and device for automatic driving vehicle and automatic driving vehicle
CN114022973B (en) Method, device, equipment and storage medium for processing vehicle faults
CN113978465A (en) Lane-changing track planning method, device, equipment and storage medium
CN114475656A (en) Travel track prediction method, travel track prediction device, electronic device, and storage medium
CN115973179A (en) Model training method, vehicle control method, device, electronic equipment and vehicle
CN117373285A (en) Risk early warning model training method, risk early warning method and automatic driving vehicle
CN115959154A (en) Method and device for generating lane change track and storage medium
CN114333416A (en) Vehicle risk early warning method and device based on neural network and automatic driving vehicle
CN113052047A (en) Traffic incident detection method, road side equipment, cloud control platform and system
CN115447617B (en) Vehicle control method, device, equipment and medium
CN114973656B (en) Traffic interaction performance evaluation method, device, equipment, medium and product
CN117912295A (en) Vehicle data processing method and device, electronic equipment and storage medium
CN113147794A (en) Method, device and equipment for generating automatic driving early warning information and automatic driving vehicle
CN112907949A (en) Traffic anomaly detection method, model training method and device
CN116842392B (en) Track prediction method and training method, device, equipment and medium of model thereof
CN114572233B (en) Model set-based prediction method, electronic equipment and automatic driving vehicle
CN115571165B (en) Vehicle control method, device, electronic equipment and computer readable medium
CN113791564B (en) Remote control method, device, equipment, cloud server and control system
EP3965017A1 (en) Knowledge distillation for autonomous vehicles
EP4102481A1 (en) Method and apparatus for controlling vehicle, device, medium, and program product
CN116279589A (en) Training method of automatic driving decision model, vehicle control method and device
CN119672950A (en) Method and device for generating vehicle intersection information
CN116596051A (en) Scene representation model training method, obstacle marking method and automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination