Disclosure of Invention
The method and the device for controlling a vehicle can mitigate the reduction in safety caused by the view ahead of the vehicle being blocked.
The method for controlling a vehicle comprises: when the view ahead of the host vehicle is blocked and an intersection or crosswalk exists behind the occluding object, evaluating the risk of collision with a moving object on the intersection or crosswalk, and controlling the host vehicle based on the evaluated risk.
In some embodiments, the state of the host vehicle includes the speed of the host vehicle and the maximum braking acceleration of the host vehicle, and the risk is quantified as a risk rate calculated according to equation (1);
wherein risk is the risk rate, and risk is 0 when vc² - 2a(pi - po) <= 0; k is the prior probability, ro is the occlusion rate, vc is the speed of the host vehicle, po is the position of the occluding object, pi is the position of the intersection or crosswalk, and a is the maximum braking acceleration of the host vehicle.
In some embodiments, controlling the host vehicle based on the assessed risk comprises:
when it is assessed that a risk exists, generating a plurality of driving strategies and a previewed future scene under each driving strategy based on the current environment information and vehicle state information;
evaluating the previewed future scene under each driving strategy; and
determining, based on the evaluation results of the previewed future scenes, an optimal driving strategy for controlling the host vehicle.
In some embodiments, a deep learning network is utilized to generate a plurality of driving strategies and future scenes that are previewed under each driving strategy based on the current environmental information and vehicle state information.
In some embodiments, the method further comprises:
determining whether an occluding object is present in the view ahead of the host vehicle;
when an occluding object is present, comparing the occlusion rate with a preset first threshold;
performing risk prevention control when the occlusion rate is greater than the first threshold;
when the occlusion rate is less than the first threshold, determining whether an intersection or crosswalk exists behind the occluding object; and
when an intersection or crosswalk exists, performing the operation of evaluating the risk.
In some embodiments, controlling the host vehicle based on the assessed risk includes performing a risk prevention control on the host vehicle when the risk rate is greater than a preset second threshold.
In some embodiments, the method further comprises detecting the occluding object using a deep-learning bird's-eye-view (BEV) detection scheme and calculating the occlusion rate.
The computer device of the embodiments of the invention comprises a memory, a processor, and a computer program/instructions stored in the memory; the processor executes the computer program/instructions to implement the steps of the method of the embodiments of the invention.
A computer readable storage medium of an embodiment of the present invention has stored thereon a computer program/instruction which, when executed by a processor, implements the steps of the method of the embodiment of the present invention.
A computer program product of an embodiment of the invention comprises a computer program/instruction which, when executed by a processor, implements the steps of the method of the embodiment of the invention.
The embodiment of the invention has the beneficial effects that:
When an occluding object exists in the view ahead of the host vehicle and an intersection or crosswalk exists behind it, the risk of collision with a moving object assumed to exist on the intersection or crosswalk is evaluated based on the occlusion rate, the state of the host vehicle, the position of the occluding object, and the position of the intersection or crosswalk. The host vehicle is then controlled based on the evaluation result, for example by warning the driver or reducing the vehicle speed. In this way, the risk of collision can be reduced and safety improved even in scenes where the view of the vehicle is limited, such as a "ghost probe" (a moving object suddenly emerging from behind an occluder) or an occluded intersection.
Detailed Description
In order to make the technical problems, technical schemes and beneficial effects to be solved more clear and obvious, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the particular embodiments described herein are illustrative only and are not limiting upon the invention.
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. Moreover, the terms "first," "second," and the like, are used to distinguish between similar objects and do not necessarily describe a particular order or precedence. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or otherwise described herein.
Fig. 1 is a flow chart of an embodiment of the method of controlling a vehicle of the present invention.
In some embodiments, the method may be used in a vehicle equipped with an automated driving function, for example a vehicle with L2-L4 level driving automation. In particular, the method may be integrated into an AEB (Automatic Emergency Braking) function to increase its safety. In other embodiments, the method may be performed by a controller responsible for automated driving functions, such as an automated driving controller.
As shown in fig. 1, the method for controlling a vehicle specifically includes:
Step S10: identifying that an occluding object exists in the view ahead of the host vehicle and that an intersection or crosswalk exists behind the occluding object.
In step S10, whether there is an occluding object in the view ahead, with an intersection or crosswalk behind it, may be identified based on data acquired by a radar and/or an image sensor (e.g., a camera) mounted at the front of the vehicle. Fig. 5 shows such a scenario: in the forward direction of the host vehicle 50 there is an occluding object 51, and behind it an intersection or crosswalk 52.
In some embodiments, a multi-sensor fusion scheme based on a deep-learning bird's-eye view (BEV) may be employed to detect intersections, vehicles, pedestrians, occluding objects, and the like, in order to identify whether the host vehicle is in the scene described in step S10. The detected data can also be used in the risk assessment of step S12 and in the selection of the vehicle control strategy in step S14.
Specifically, fig. 2 shows a flow diagram of environment detection using a multi-sensor fusion scheme based on a deep-learning bird's-eye view (BEV). In fig. 2, image data acquired by an image sensor and radar data acquired by a radar are input into a CNN (convolutional neural network) and an MLP (multilayer perceptron), respectively, to obtain image features and radar features. The image features and radar features are then fed into an LSS (Lift, Splat, Shoot) algorithm, which converts them into BEV features; finally, CNN processing is applied to the BEV features to output a detection result. The detection result can include, but is not limited to, the positions, shapes, and speeds of vehicles and pedestrians, the position and size of an occluding object, whether an intersection exists and its position, and whether a crosswalk exists and its position. Based on the detection result, it can be determined whether there is an occluding object in the view ahead of the host vehicle and, from the relative positions of the occluding object and the intersection or crosswalk, whether an intersection or crosswalk exists behind the occluding object.
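As a rough illustration of the "lift, splat" idea in such a pipeline (the array shapes, grid layout, and function name below are assumptions for this sketch, not details taken from fig. 2), per-pixel image features can be weighted by a per-pixel depth distribution and accumulated onto a BEV grid:

```python
import numpy as np

def lift_splat(image_feats, depth_probs, rays, depths, bev_shape, cell_size):
    """Toy 'lift, splat' step of an LSS-style pipeline.

    image_feats: (C, H, W) image features, e.g. from a camera CNN backbone
    depth_probs: (D, H, W) per-pixel categorical depth distribution
    rays:        (3, H, W) ray direction of each pixel in the vehicle frame
    depths:      list of D candidate depths (metres)
    Returns a (C, rows, cols) BEV feature grid.
    """
    C, H, W = image_feats.shape
    rows, cols = bev_shape
    bev = np.zeros((rows, cols, C))
    for d, z in enumerate(depths):
        pts = rays * z                       # lift: a 3-D point per pixel
        feat = image_feats * depth_probs[d]  # feature weighted by depth prob.
        xi = (pts[0] / cell_size).astype(int)              # forward -> row
        yi = (pts[1] / cell_size).astype(int) + cols // 2  # lateral -> column
        ok = (xi >= 0) & (xi < rows) & (yi >= 0) & (yi < cols)
        # splat: unbuffered accumulation of all points landing in each cell
        np.add.at(bev, (xi[ok], yi[ok]), feat[:, ok].T)
    return bev.transpose(2, 0, 1)
```

A downstream CNN would then consume the returned BEV grid; radar features could be splatted onto the same grid and concatenated before that step.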
Taking fig. 5 as an example, the occluding object 51 and the intersection or crosswalk 52 within the field of view 53 can be recognized from the detection result, so it can be determined that an occluding object is present in the view ahead of the host vehicle. The positions of the occluding object 51 and the intersection or crosswalk 52 can also be obtained, and from their relative positions it can be determined that the intersection or crosswalk 52 is behind the occluding object 51. Therefore, based on the detection result, it can be determined that the host vehicle is in the scene described in step S10.
In a scenario like that of fig. 5, if a moving object crosses the crosswalk or intersection 52 while the host vehicle 50 is traveling forward, the occluding object 51 blocks the view of the radar or image sensor, so the AEB function may detect the moving object and brake only when it is already too late, leading to a collision between the host vehicle and the moving object. In this embodiment, when it is recognized that the host vehicle is in such a scene, risk assessment may be performed as in step S12, and the host vehicle may be controlled based on the result of the risk assessment, thereby mitigating the safety problem caused by the obstructed view.
Step S12: evaluating the risk of collision with a moving object assumed to be hidden on the intersection or crosswalk.
In some embodiments, risk may be assessed based on occlusion rate, status of the vehicle, location of the occlusion, and location of the intersection or crosswalk, among others.
The occlusion rate may be obtained by projecting the field of view of the host vehicle onto the BEV plane and dividing the area of the occluded region by the total area of the field of view. For example, referring to fig. 5, the area of region 54 (i.e., the blocked portion of the field of view) divided by the area of the field of view 53 is the occlusion rate.
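Once the field of view and the occluded region are rasterized onto a BEV grid, the occlusion rate reduces to a ratio of cell counts. A minimal sketch (the mask names are illustrative):

```python
import numpy as np

def occlusion_rate(fov_mask, occluded_mask):
    """Occlusion rate on the BEV plane: area of the occluded part of the
    field of view divided by the total area of the field of view.
    Both masks are equal-shape boolean BEV grids (True = cell covered)."""
    fov_cells = int(fov_mask.sum())
    if fov_cells == 0:
        return 0.0
    # count only occluded cells that actually lie inside the field of view
    return float((occluded_mask & fov_mask).sum()) / fov_cells
```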
The state of the host vehicle can include its speed and its maximum braking acceleration; the speed can be measured by an on-board speed sensor, and the maximum braking acceleration can be pre-stored in the vehicle.
The position of the occluding object and the position of the intersection or crosswalk refer to the distances from the host vehicle to the occluding object and to the intersection or crosswalk, measured along the direction of travel of the host vehicle. Both can be obtained from the detection result.
In general, the greater the occlusion rate (indicating a larger blind area), the higher the speed of the host vehicle, the smaller the maximum braking acceleration, and the closer the intersection or crosswalk, the greater the risk.
In some embodiments, step S12 may quantify the risk based on the following equation (1):
wherein risk is the risk rate, and risk is 0 when vc² - 2a(pi - po) <= 0; k is the prior probability, ro is the occlusion rate, vc is the speed of the host vehicle, po is the position of the occluding object, pi is the position of the intersection or crosswalk, and a is the maximum braking acceleration of the host vehicle.
The risk rate formula comprehensively considers the occlusion rate, the speed of the host vehicle, the maximum braking acceleration of the host vehicle, the position of the occluding object, and the position of the intersection or crosswalk, so the risk can be estimated accurately.
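Only the zero-risk condition of equation (1) is reproduced in the text; the condition says the risk vanishes when the stopping distance vc²/(2a) fits within the gap (pi - po) between the occluding object and the intersection. A sketch of that branch follows, with a clearly hypothetical stand-in for the positive branch (the actual equation (1) may differ):

```python
def risk_rate(vc, a, po, pi, k, ro):
    """Risk rate per equation (1), as far as the text specifies it:
    risk = 0 when vc**2 - 2*a*(pi - po) <= 0.

    vc: speed of the host vehicle; a: maximum braking acceleration;
    po: position of the occluding object; pi: position of the
    intersection or crosswalk; k: prior probability; ro: occlusion rate.
    """
    excess = vc ** 2 - 2.0 * a * (pi - po)
    if excess <= 0.0:
        return 0.0
    # HYPOTHETICAL positive branch: not given in the text. It merely grows
    # monotonically with k, ro, and the excess squared speed, normalised
    # by vc**2 so the result stays in [0, k * ro].
    return k * ro * excess / (vc ** 2)
```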
In some embodiments, after the risk rate is obtained, the risk may be classified into different levels, such as high risk, medium risk, low risk, and the like, based on the relationship of the risk rate to the threshold.
Step S14: controlling the host vehicle based on the risk assessed in step S12.
In some embodiments, different risk levels may correspond to different vehicle control strategies. For example, at low risk, no additional operation is performed. At medium risk, the driver may be alerted, for example by sound, light, or vibration, to take care when passing the intersection or crosswalk. At high risk, both warning and active speed reduction may be performed, for example actively braking to reduce the vehicle speed if the driver takes no braking action after the warning.
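The level-to-action mapping described above can be sketched as follows (the level and action names are illustrative labels, not identifiers from the source):

```python
def control_action(risk_level, driver_is_braking=False):
    """Map an assessed risk level to a control action.

    risk_level: one of "low", "medium", "high" (illustrative labels).
    driver_is_braking: whether the driver has already reacted by braking.
    """
    if risk_level == "low":
        return "no_extra_action"
    if risk_level == "medium":
        return "warn_driver"            # sound / light / vibration prompt
    if risk_level == "high":
        # warn, and brake actively only if the driver does not react
        return "warn_driver" if driver_is_braking else "warn_and_brake"
    raise ValueError(f"unknown risk level: {risk_level}")
```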
In other embodiments, the optimal control (driving) strategy for the vehicle may be determined based on a deep learning network approach.
As shown in fig. 3, the inputs to the deep learning network include environmental signals, vehicle status signals, and the risk situation from the risk decision algorithm. The environmental signals may be provided by environmental sensors such as image sensors and radar, and may include obstacles, lane lines, pedestrians, other vehicles, and their speeds. Examples of vehicle status signals include wheel speed signals and steering wheel signals. The risk situation originates from the output of step S12 and may be, for example, indication information of different risk levels, an indication of whether a risk exists, or the risk rate itself. In some embodiments, when the risk situation indicates no risk or low risk, the deep learning network is not started, i.e., no optimal driving strategy is determined and the current driving scheme is maintained.
Based on these input signals, the deep learning network outputs a plurality of driving strategies (i.e., control signals for the vehicle), together with the future scene previewed under each driving strategy. In some implementations, the deep learning network determines the driving strategies and previewed future scenes based on the environmental data, the vehicle state data, and the risk rate. The previewed future scenes and driving strategies are then input into a scene evaluation module, which evaluates each previewed future scene, for example by scoring it with a scoring algorithm; the optimal driving strategy is determined based on the scoring results.
In particular, the scoring algorithm may evaluate a previewed future scene based on the safety of driving under the corresponding strategy, the positions and relative speeds of the host vehicle and other objects in the future scene, the speed and direction of the host vehicle itself, the complexity of the control signals, the impact on the driving experience, and so on. The complexity of the control signals can be measured by the number of actuated objects and the frequency and amplitude of their control signals; the impact on the driving experience can be measured by the magnitude of acceleration, the turning radius, and the like.
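One way to realize such a scoring step is a weighted combination of the factors above; the weights, the [0, 1] normalisation, and the tuple layout below are hypothetical choices for this sketch:

```python
def score_previewed_scene(safety, complexity, discomfort,
                          weights=(0.7, 0.15, 0.15)):
    """Illustrative score for one previewed future scene (higher is better).
    safety rewards; complexity (control-signal effort) and discomfort
    (impact on driving experience) penalise. All inputs assumed in [0, 1]."""
    ws, wx, wd = weights
    return ws * safety - wx * complexity - wd * discomfort

def best_strategy(candidates):
    """candidates: iterable of (strategy, safety, complexity, discomfort);
    returns the strategy whose previewed scene scores highest."""
    return max(candidates, key=lambda c: score_previewed_scene(*c[1:]))[0]
```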
According to the embodiments of the invention, when an occluding object exists in the view ahead of the host vehicle and an intersection or crosswalk exists behind it, the risk of collision with a moving object assumed to be hidden on the intersection or crosswalk is evaluated based on the occlusion rate, the state of the host vehicle, the position of the occluding object, and the position of the intersection or crosswalk. The host vehicle is controlled based on the evaluation result, for example by warning the driver or reducing the vehicle speed. In this way, the risk of collision can be reduced and safety improved in scenes with a limited field of view, such as a "ghost probe" or an occluded intersection.
In addition, the embodiments of the invention provide a specific risk assessment method, such as equation (1), based on parameters including the occlusion rate. Its parameters are comprehensive, which can improve the accuracy of risk assessment.
In addition, when a risk is assessed to exist, the embodiments of the invention can determine the optimal driving strategy by means of a deep learning network, thereby balancing risk control, driving experience, and other considerations.
A specific example of an embodiment of the present invention is described below with reference to fig. 4.
As shown in fig. 4, the surrounding environment is detected based on the data of the radar and/or the image sensor to obtain a detection result (step S40). Based on the detection result, it is determined whether an occluding object exists in the view ahead (step S41). If not, the risk of concern in this embodiment does not exist, and detection continues. If an occluding object exists, it is first determined whether the occlusion rate is greater than a preset threshold th1 (step S42); if so, a risk may be considered to exist directly, and a risk-related operation such as warning or slowing the vehicle is performed (step S45). If the occlusion rate is less than th1, it is then determined whether an intersection or crosswalk exists behind the occluding object (step S43); if not, the risk is considered low and no additional control is performed. If an intersection or crosswalk exists, it is determined whether the risk rate is greater than a preset threshold th2 (step S44). If the risk rate is less than th2, the risk is considered low, the current control is maintained, and no additional control operation is performed; if the risk rate is greater than th2, a risk-related operation such as warning, slowing the vehicle, or even stopping the vehicle may be performed (step S45).
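The decision flow of fig. 4 can be sketched as a chain of threshold checks (the parameter and return labels are illustrative, not from the source):

```python
def decide(occluder_present, occlusion_rate, intersection_behind,
           risk_rate, th1, th2):
    """Decision flow of fig. 4, steps S41-S45, as a pure function."""
    if not occluder_present:
        return "keep_monitoring"         # S41: no occluder, keep detecting
    if occlusion_rate > th1:
        return "risk_prevention"         # S42 -> S45: warn or slow down
    if not intersection_behind:
        return "no_extra_control"        # S43: nothing behind the occluder
    if risk_rate > th2:
        return "risk_prevention"         # S44 -> S45: warn, slow, or stop
    return "maintain_current_control"    # S44: risk rate below th2
```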
In this embodiment, with the occlusion rate as the trigger, the risk of the host vehicle passing an occluded intersection or crosswalk is evaluated, and the host vehicle is controlled based on the evaluation result to improve driving safety.
Fig. 6 is a schematic structural diagram of an embodiment of a computer device 6 of the present invention, which includes a memory 60, a processor 62, and computer programs/instructions stored in the memory 60; the processor 62 executes the computer programs/instructions to implement the methods of the embodiments of the invention.
In the present embodiment, the computer device 6 may be, for example, a controller responsible for automated driving functions, such as a central computing unit, a domain controller, a zone controller, or another ECU (electronic control unit).
The embodiment of the invention also provides a computer readable storage medium, on which a computer program/instruction is stored, which when executed by a processor, implements the method according to the embodiment of the invention.
The embodiments of the present invention also provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements the method according to the embodiments of the present invention.
The description of the apparatus, storage medium, and program product embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the apparatus, the storage medium and the program product of the present application, reference should be made to the description of the embodiments of the method of the present application.
The processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, and the like. It will be appreciated that the electronic device implementing the above processor function may be otherwise realized; the embodiments of the present application are not limited in this respect.
The computer storage medium/memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disk, a Compact Disc Read-Only Memory (CD-ROM), or any combination thereof, and may be included in any terminal comprising one or more of the above, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
It should be noted that the above description is illustrative only and not limiting of the invention. In other embodiments of the invention, the method may have more, fewer, or different steps, and the order, inclusion, functional relationship between steps may be different than that described and illustrated. For example, typically multiple steps may be combined into a single step, which may also be split into multiple steps. It is within the scope of the present invention for one of ordinary skill to vary the sequence of steps without undue burden.
The technical solution of the present invention may be embodied in essence or in a part contributing to the prior art or in whole or in part in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor or a microcontroller to perform all or part of the steps of the method according to the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of implementing the various method embodiments described above may be implemented by hardware associated with program instructions. The foregoing program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above.
While the invention has been described in terms of preferred embodiments, it is not limited thereto. Any modifications made by a person skilled in the art without departing from the spirit and scope of the present invention shall accordingly fall within the scope of the invention as defined by the appended claims.