CN116767281A - Auxiliary driving method, device, equipment, vehicle and medium - Google Patents
Auxiliary driving method, device, equipment, vehicle and medium
- Publication number
- CN116767281A (application number CN202310904974.1A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- driving
- data
- current vehicle
- digital twin
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0015—Planning or execution of driving tasks specially adapted for safety
- B60W60/0016—Planning or execution of driving tasks specially adapted for safety of the vehicle or its occupants
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/02—Control of vehicle driving stability
- B60W30/04—Control of vehicle driving stability related to roll-over prevention
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0956—Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/02—Control of vehicle driving stability
- B60W30/04—Control of vehicle driving stability related to roll-over prevention
- B60W2030/043—Control of vehicle driving stability related to roll-over prevention about the roll axis
Landscapes
- Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Traffic Control Systems (AREA)
Abstract
The application discloses a driving assistance method, device, equipment, vehicle and medium. The method specifically comprises the following steps: acquiring vehicle operation data and environment data of a current vehicle during driving; determining vehicle condition information of the current vehicle and obstacle information of the environment according to the vehicle operation data, the environment data and a pre-constructed digital twin model; predicting whether the current vehicle is dangerous or not through the digital twin model according to the vehicle condition information or the obstacle information; and determining driving advice according to the prediction result, and feeding the driving advice back to the vehicle-mounted terminal to assist the driving of the current vehicle. According to the technical scheme provided by the embodiments of the application, simulation and prediction are performed in the digital twin model, and the generated driving advice is fed back to the vehicle for execution, so that the safety and stability of the vehicle can be further improved.
Description
Technical Field
The application relates to the technical field of automatic driving, and in particular to a driving assistance method, device, equipment, vehicle and medium.
Background
With the development of society, more and more people choose to travel by private car. Vehicles that combine novel technologies such as automation and intelligence are increasingly favored by users, and user demand keeps pushing the vehicle manufacturing industry forward.
Currently, more and more manufacturers are developing automatic driving technology. In general, simulation experiments performed with automatic driving simulation software account for about 90% of testing, while experiments in closed scenes and real open-road tests account for only a very small part. Because the simulation experiments cannot reproduce real and complex road conditions, the corresponding automatic driving algorithms cope poorly with complex situations, and the safety and stability of automatic driving vehicles are low.
Disclosure of Invention
The application provides a driving assistance method, device, equipment, vehicle and medium, so as to improve the safety and stability of automatic driving vehicles.
According to a first aspect of the present application, there is provided a driving assistance method applied to a cloud, the method comprising:
acquiring vehicle operation data and environment data of a current vehicle during driving;
determining the vehicle condition information of the current vehicle and the obstacle information of the environment according to the vehicle operation data, the environment data and the pre-constructed digital twin model;
predicting whether the current vehicle is dangerous or not through a digital twin model according to the vehicle condition information or the obstacle information;
And determining driving advice according to the prediction result, and feeding back the driving advice to the vehicle-mounted terminal so as to assist the driving of the current vehicle.
According to a second aspect of the present application, there is provided a driving assistance method applied to a vehicle-mounted terminal, the method comprising:
collecting vehicle operation data and environment data of a current vehicle in the running process, and sending the vehicle operation data and the environment data to a cloud end so that the cloud end predicts whether the current vehicle is dangerous or not through a digital twin model;
and receiving the driving advice fed back by the cloud, and modifying driving parameters according to the driving advice so as to assist the driving of the current vehicle.
According to a third aspect of the present application, there is provided a driving assistance apparatus applied to a cloud, the apparatus comprising:
the data acquisition module is used for acquiring vehicle operation data and environment data of the current vehicle during driving;
the vehicle condition determining module is used for determining vehicle condition information of a current vehicle and obstacle information of an environment according to vehicle operation data, environment data and a pre-constructed digital twin model;
the danger prediction module is used for predicting whether the current vehicle is dangerous or not through a digital twin model according to the vehicle condition information or the obstacle information;
And the driving assisting module is used for determining driving advice according to the prediction result and feeding the driving advice back to the vehicle-mounted terminal so as to assist the driving of the current vehicle.
According to a fourth aspect of the present application, there is provided a driving assistance apparatus, characterized by being applied to a vehicle-mounted terminal, comprising:
the data acquisition module is used for acquiring vehicle operation data and environment data of the current vehicle in the running process and sending the vehicle operation data and the environment data to the cloud so as to enable the cloud to predict whether the current vehicle is dangerous or not through the digital twin model;
and the driving assisting module is used for receiving the driving advice fed back by the cloud and modifying driving parameters according to the driving advice so as to assist the driving of the current vehicle.
According to a fifth aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the driving assistance method according to the embodiment of the first aspect of the application and/or to perform the driving assistance method according to the embodiment of the second aspect of the application.
According to a sixth aspect of the present application, there is provided a vehicle for use in automatic driving or the like, the vehicle being provided with an electronic device according to an embodiment of the fifth aspect of the present application.
According to a seventh aspect of the present application, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement the driving assistance method according to the embodiment of the first aspect of the present application and/or implement the driving assistance method according to the embodiment of the second aspect of the present application when executed.
According to the technical scheme of the embodiments of the application, vehicle operation data and environment data are acquired, the vehicle condition information of the current vehicle and the obstacle information in the environment are determined in the digital twin model, and whether the vehicle is in danger is predicted, which helps the vehicle select an appropriate driving mode in time. Because simulation and prediction are carried out in the digital twin model and the generated driving advice is fed back to the vehicle for execution, the problem of insufficient computing power of the vehicle-mounted terminal can be alleviated, and the assisted driving (or automatic driving) function can be improved with the support of the large amount of data behind the cloud digital twin model, thereby further improving the safety and stability of the vehicle.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a driving assistance method according to a first embodiment of the present application;
fig. 2 is a flowchart of a driving assistance method according to a second embodiment of the present application;
FIG. 3A is a schematic illustration of assisted driving of a digital twin based autonomous vehicle to which the third embodiment of the present application is applied;
FIG. 3B is a schematic diagram of a digital twin frame for safety precautions applicable to the third embodiment of the present application;
FIG. 3C is a schematic diagram of collision prediction applicable to the third embodiment of the present application;
FIG. 3D is a schematic diagram of rollover prediction as applied to a third embodiment of the present application;
fig. 4 is a schematic structural view of a driving assistance device according to a fourth embodiment of the present application;
fig. 5 is a schematic structural view of a driving assistance device according to a fifth embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device implementing a driving assistance method according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a driving assistance method provided in the first embodiment of the present application. The embodiment is applicable to situations in which a vehicle is assisted in driving by means of a digital twin. The method may be applied to a cloud and may be performed by a driving assistance device, which may be implemented in the form of hardware and/or software and configured in an electronic device. As shown in fig. 1, the method includes:
s110, acquiring vehicle operation data and environment data of the current vehicle in the form process.
The current vehicle may be any vehicle running on an actual road and is provided with a large number of sensors, including but not limited to sensors that monitor the state of the vehicle itself and sensors that monitor the environment outside the vehicle, such as a vision camera, an infrared ranging sensor, a laser radar, a millimeter wave radar, and the like. These sensors detect various information about the current vehicle on the road in real time, including vehicle operation data, environment data outside the vehicle, and the like. The vehicle operation data may be the operating parameters of the current vehicle while travelling on a real road, which can be directly detected by the sensors of the current vehicle, and may include, for example, but not limited to, the vehicle speed, throttle parameters, steering wheel angle, and the like. The environment data may be information about road conditions in the actual scene in which the current vehicle is travelling, and may include, but is not limited to, weather conditions, lane information of the road surface, the distance between roadside guardrails or trees and the vehicle, information about oncoming and passing vehicles, and the like. It can be understood that the vehicle operation data and environment data of the current vehicle can be acquired by the vehicle-mounted terminal installed in the current vehicle through the various sensors and sent to the cloud. The communication mechanism between the vehicle-mounted terminal and the cloud may adopt a timed polling and/or slicing mechanism, which is not limited in the embodiments of the present application.
S120, determining the vehicle condition information of the current vehicle and the obstacle information of the environment according to the vehicle operation data, the environment data and the pre-constructed digital twin model.
The digital twin model may be a digital twin constructed in advance according to the actual situation of the current vehicle. Digital twinning is a multi-physical-quantity, multi-scale, multi-probability simulation process carried out in the cloud, making full use of data such as the physical model, sensor updates and operation history. The physical model of the current vehicle can be stored in the cloud in advance, and the digital twin model is updated with the vehicle operation data and environment data provided by the vehicle-mounted terminal, so that the real driving condition of the current vehicle is simulated by the digital twin in the cloud, which facilitates subsequent danger prediction and early warning for the current vehicle on the basis of the digital twin.
The vehicle condition information may be operating state parameters of the vehicle calculated from the vehicle operation data, and may include, for example, but not limited to, the yaw rate, roll angle, centroid lateral acceleration and centroid side-slip angle during driving, which are not exhaustively listed here. The obstacle information of the environment may be any object that exists in the environment of the current vehicle and may affect its normal driving; such an object may be stationary or moving relative to the ground, and may include, but is not limited to, vehicles travelling on the road and other objects. It will be appreciated that the digital twin model includes not only the model of the current vehicle but also the model of the operating environment in which the vehicle is located, and is updated continuously. The vehicle operation data and environment data obtained from the vehicle-mounted terminal are used to update the digital twin model, and at the same time the vehicle condition information of the current vehicle can be calculated and possible obstacles in the environment can be identified. Moreover, as the current vehicle moves along the actual road section, the detected environment data also change in real time; the digital twin model is updated in real time on the cloud platform, and the calculated vehicle condition information and obstacle information change in real time accordingly.
S130, predicting whether the current vehicle is dangerous or not according to the vehicle condition information or the obstacle information through a digital twin model.
The hazards that may occur include, but are not limited to, the current vehicle colliding with an obstacle in the environment and the current vehicle rolling over. The forces on the tires on both sides of the current vehicle, as well as parameters such as the turning speed and centrifugal force of the vehicle, can be calculated from the vehicle condition information obtained above. Of course, these calculations may be performed in a manner known in the related art, which is not limited in the embodiments of the present application. Whether rollover will occur can then be determined from the calculated forces on the vehicle. Similarly, whether the current vehicle will collide is predicted based on the vehicle condition information of the current vehicle (e.g., vehicle speed, turning state, etc.) in combination with the identified obstacle information.
In an alternative embodiment, the vehicle condition information includes the yaw rate, roll angle, centroid lateral acceleration and centroid side-slip angle of the current vehicle; predicting whether the current vehicle is dangerous according to the vehicle condition information through the digital twin model may comprise: determining the vertical load of each tire of the current vehicle according to the yaw rate, the roll angle, the centroid lateral acceleration and the centroid side-slip angle; and, based on the digital twin model, judging whether the vehicle will roll over according to each vertical load.
In the vehicle application scenario of the embodiments of the application, the vertical load of each tire of the current vehicle can be understood as the load borne by that tire in the vertical direction. It will be appreciated that uneven loading of the tires, or a very low or zero load on the tires of one side, may result in rollover of the vehicle. Therefore, based on the digital twin model, whether the vehicle will roll over is predicted from the vertical load of each tire.
Further, judging whether the vehicle will roll over according to each vertical load may include: determining the lateral load transfer rate of the current vehicle according to each vertical load; and judging whether the vehicle will roll over according to the lateral load transfer rate and a preset threshold value.
The lateral load transfer rate can be used to characterize the change in the forces on the two sides of the vehicle, and is generally defined as the ratio of the difference between the left and right wheel loads to their sum. Therefore, on the basis of the foregoing embodiment, the lateral load transfer rate of the current vehicle can be accurately calculated from the different vertical loads of the individual tires that have been acquired. The preset threshold value may be a limit value for the lateral load transfer rate; for example, the threshold may be set at (-1, 1). It will be appreciated that when the lateral load transfer rate reaches 1 or -1, it may be determined that the tires of one side are off the ground, i.e., rollover may occur. Calculating the lateral load transfer rate from the vertical loads effectively characterizes the state of the vehicle in the digital twin model, making it convenient to predict whether rollover will occur and to give a warning in advance.
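As an illustration of this rollover criterion, the following is a minimal Python sketch that computes the lateral load transfer rate from per-tire vertical loads and compares it with the ±1 limit. The function names and the example load values are assumptions made for the sketch, not part of the original disclosure.

```python
def lateral_load_transfer_rate(right_loads, left_loads):
    """LTR = (sum of right-side vertical loads - sum of left-side vertical
    loads) / (sum of all vertical loads)."""
    f_right = sum(right_loads)
    f_left = sum(left_loads)
    total = f_right + f_left
    if total <= 0:
        raise ValueError("total vertical load must be positive")
    return (f_right - f_left) / total


def rollover_warning(right_loads, left_loads, threshold=1.0):
    """Flag an impending rollover when |LTR| reaches the +/-1 limit,
    i.e. the tires of one side carry (almost) no load."""
    ltr = lateral_load_transfer_rate(right_loads, left_loads)
    return abs(ltr) >= threshold, ltr


# Hypothetical per-tire vertical loads (N): the left-side tires carry no load
at_risk, ltr = rollover_warning(right_loads=[5200.0, 4800.0],
                                left_loads=[0.0, 0.0])
print(f"LTR = {ltr:.2f}, rollover risk: {at_risk}")
```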
Alternatively, the obstacle information may be determined by: acquiring image data and millimeter wave data acquired by a current vehicle; information fusion is carried out on the image data and the millimeter wave data, and the coverage rate between a target identification frame in the image data and a projection point area in the millimeter wave data is determined; and determining obstacle information according to the coverage rate.
The image data may be images of the surrounding environment of the vehicle acquired by cameras at different positions and shooting angles on the current vehicle; the millimeter wave data may be distance information, acquired by millimeter wave radars at different positions and detection angles on the current vehicle, between objects in the surrounding environment and the current vehicle. Of course, the focus of shooting and detection may be placed on the current direction of travel of the vehicle in order to avoid collisions. A target recognition algorithm may be used on the acquired image data to mark target recognition frames for all image data. Similarly, a target recognition algorithm may be used on the acquired millimeter wave data, with the projection point area of the detected object information serving as the target detection frame in the millimeter wave data. By comparing the overlap ratio (i.e., the coverage rate) between the target recognition frame and the projection point area, it is determined whether the object detected in both ways is actually an obstacle. For example, if the coverage rate exceeds 60%, the object detected in the two ways is considered to be the same, actually existing object, and it is determined to be an obstacle. Detecting and recognizing obstacles from one kind of data alone easily leads to missed detections and false detections; detecting with both kinds of data together and confirming the actually existing object by comparison greatly improves the accuracy of obstacle detection, which helps the digital twin model of the current vehicle make a timely and accurate anti-collision response strategy. Of course, the target recognition in the embodiments of the application may use a target recognition algorithm in the related art, which is not limited here.
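The coverage check between the camera target frame and the millimeter wave projection area can be sketched as follows. This is a simplified Python illustration in which the axis-aligned box format, the helper names and the 60% threshold (taken from the example above) are assumptions.

```python
def box_iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def confirm_obstacles(camera_boxes, radar_boxes, coverage_threshold=0.6):
    """Keep a camera detection only if some radar projection region overlaps
    it by more than the coverage threshold (here 60%, per the description)."""
    obstacles = []
    for cam in camera_boxes:
        if any(box_iou(cam, rad) > coverage_threshold for rad in radar_boxes):
            obstacles.append(cam)
    return obstacles


# Example: one camera detection matched by a radar projection, one unmatched
camera = [(100, 120, 180, 200), (400, 90, 460, 150)]
radar = [(105, 118, 185, 205)]
print(confirm_obstacles(camera, radar))  # only the first box is confirmed
```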
In another embodiment, the predicting whether the current vehicle is dangerous according to the vehicle condition information or the obstacle information through the digital twin model may further include: determining collision time between the obstacle and the current vehicle according to the vehicle condition information, the obstacle information and the digital twin model; and determining whether the current vehicle collides according to the collision time.
The collision time may be the time, predicted in the digital twin model, before the current vehicle collides with the obstacle while driving; it can be understood that a collision can be avoided in time by changing the driving strategy within the collision time. In the digital twin model, the vehicle condition information characterizes the current driving state of the vehicle (including, but not limited to, the direction of travel and speed of the vehicle), and the obstacle information characterizes the relative position and distance of the obstacle with respect to the current vehicle. Therefore, the time at which the current vehicle would collide with the obstacle if it maintained its current driving state can be calculated from the vehicle condition information and the obstacle information. With the collision time, the digital twin model can calculate, from the braking distance or steering time of the vehicle, whether the driving strategy can be changed within the collision time, so as to judge whether the vehicle will collide. Of course, the collision time in the embodiments of the application may be determined using a time-to-collision method in the related art, which is not limited here.
In the digital twin model, the collision time is determined according to the vehicle condition information and the obstacle information, so that the collision condition is predicted, and executable driving strategy modification is simulated in the digital twin model, so that the current vehicle is helped to carry out anti-collision driving operation in an actual road section in advance, and the safety of the vehicle is ensured.
And S140, determining driving advice according to the prediction result, and feeding back the driving advice to the vehicle-mounted terminal so as to assist the driving of the current vehicle.
The prediction result may be the result of predicting in the digital twin model, as in the foregoing steps, whether a collision will occur. Based on the prediction, various driving strategy changes are simulated in the digital twin model to find the optimal rollover-prevention and anti-collision strategy, i.e., the driving advice (e.g., emergency braking, deceleration, or a steering detour). The driving advice is sent to the vehicle-mounted terminal, which executes the strategy so that the vehicle in the real environment changes its driving strategy before a collision occurs. Of course, the determination of the anti-collision strategy in the digital twin model may use methods in the related art, which is not limited in the embodiments of the present application.
According to the technical scheme of this embodiment, vehicle operation data and environment data are acquired, the vehicle condition information of the current vehicle and the obstacle information in the environment are determined in the digital twin model, and whether the vehicle is in danger is predicted, which helps the vehicle select an appropriate driving mode in time. Because simulation and prediction are carried out in the digital twin model and the generated driving advice is fed back to the vehicle for execution, the problem of insufficient computing power of the vehicle-mounted terminal can be alleviated, and the assisted driving (or automatic driving) function can be improved with the support of the large amount of data behind the cloud digital twin model, thereby further improving the safety and stability of the vehicle.
Example 2
Fig. 2 is a flowchart of a driving assistance method provided in the second embodiment of the present application. The embodiment is applicable to situations in which a vehicle is assisted in driving by means of a digital twin. The method may be applied to a vehicle-mounted terminal and may be performed by a driving assistance device, which may be implemented in the form of hardware and/or software and configured in an electronic device. As shown in fig. 2, the method includes:
s210, collecting vehicle operation data and environment data of the current vehicle in the running process, and sending the vehicle operation data and the environment data to the cloud end so that the cloud end predicts whether the current vehicle is dangerous or not through a digital twin model.
The vehicle operation data may be the operating parameters of the current vehicle while travelling on a real road, which can be directly detected by the sensors of the current vehicle, and may include, for example, but not limited to, the vehicle speed, throttle parameters, steering wheel angle, and the like. The environment data may be information about road conditions in the actual scene in which the current vehicle is travelling, and may include, but is not limited to, weather conditions, lane information of the road surface, the distance between roadside guardrails or trees and the vehicle, information about oncoming and passing vehicles, and the like. It can be understood that the vehicle operation data and environment data of the current vehicle can be acquired by the vehicle-mounted terminal installed in the current vehicle through various sensors and sent to the cloud.
After the cloud receives the vehicle operation data and the environment data sent by the vehicle-mounted terminal, the cloud performs analysis and prediction in the digital twin model as in the previous embodiment, and then determines whether the current vehicle is dangerous or not. Among the hazards that may occur include, but are not limited to, rollover and collisions.
S220, receiving the driving advice fed back by the cloud, and modifying driving parameters according to the driving advice so as to assist driving of the current vehicle.
The driving parameters may be the quantities that the current vehicle needs to control while driving, and may include, for example, but not limited to, the steering wheel angle and the braking acceleration.
It can be understood that, after the vehicle-mounted terminal obtains the driving advice fed back by the cloud, it modifies the steering wheel angle of the vehicle according to the driving advice to help the vehicle steer, or modifies the braking acceleration of the vehicle to help the vehicle decelerate or brake, and so on, thereby avoiding an imminent rollover or collision.
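A minimal sketch of how the vehicle-mounted terminal might apply such driving advice is given below; the advice fields and the Actuator interface are hypothetical, since the original text only states that the steering wheel angle or braking acceleration is modified.

```python
class Actuator:
    """Stand-in for the vehicle's control interface (an assumption)."""

    def set_steering_angle(self, angle_deg: float) -> None:
        print(f"steering angle -> {angle_deg} deg")

    def set_acceleration(self, accel_mps2: float) -> None:
        print(f"acceleration -> {accel_mps2} m/s^2")


def apply_driving_advice(advice: dict, actuator: Actuator) -> None:
    """Modify driving parameters according to the cloud's driving advice."""
    if "steering_angle_deg" in advice:
        actuator.set_steering_angle(advice["steering_angle_deg"])
    if "braking_acceleration_mps2" in advice:
        # a negative value decelerates or brakes the vehicle
        actuator.set_acceleration(advice["braking_acceleration_mps2"])


apply_driving_advice({"braking_acceleration_mps2": -3.0}, Actuator())
```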
According to the technical scheme of this embodiment, simulation and prediction are carried out in the digital twin model, and the generated driving advice is fed back to the vehicle-mounted terminal for execution to assist driving. This alleviates the problem of insufficient computing power of the vehicle-mounted terminal, and the assisted driving (or automatic driving) function can be improved with the support of the large amount of data behind the cloud digital twin model, thereby further improving the safety and stability of the vehicle.
Example 3
The third embodiment of the present application is a preferred embodiment provided in an application scenario for an automatic driving and/or assisted driving vehicle on the basis of the foregoing embodiments, and specifically includes the following steps:
A schematic diagram of the driving assistance method of this preferred embodiment is shown in fig. 3A; the system mainly comprises a vehicle-mounted terminal and a cloud. The vehicle-mounted device end includes various sensors, communication equipment and a controller. The data monitored by the sensors are sent to the cloud through the communication equipment, and the communication mechanism includes a timed polling mechanism and a slicing mechanism. In timed polling, the vehicle sends a GET request packet to the cloud at regular intervals; the cloud computing center parses the request packet and feeds a status code back to the vehicle, and the vehicle-mounted communication end determines from the status code what information to send next. The cloud first needs to identify which vehicle is communicating, i.e., the device identity. After the cloud identifies the polling request, it returns the corresponding information description message and the status code retcode to the vehicle-mounted device; the vehicle-mounted communication terminal parses these, determines the task to be executed, and, after the task is completed, sends a new GET request packet and uploads the result.
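A simplified Python sketch of this timed polling mechanism is given below, assuming plain HTTP GET requests. The endpoint, the mapping from retcode values to the four driving subtasks described in the next paragraph, and the polling interval are assumptions made for illustration.

```python
import time

import requests  # assumption: the polling uses plain HTTP, as the description suggests

# Hypothetical mapping from cloud status codes to the four driving subtasks
TASKS = {
    1: "upload_environment_info",
    2: "upload_vehicle_state",
    3: "fetch_control_command",
    4: "update_driving_state",
}


def poll_cloud(base_url: str, device_id: str, interval_s: float = 1.0) -> None:
    """Timed polling: send a GET request, read the returned retcode,
    and dispatch the corresponding subtask before polling again."""
    while True:
        resp = requests.get(f"{base_url}/poll", params={"device_id": device_id})
        retcode = resp.json().get("retcode")
        task = TASKS.get(retcode)
        if task:
            print(f"executing subtask: {task}")
        time.sleep(interval_s)
```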
The driving process can be divided into four tasks: uploading surrounding environment information, uploading the current vehicle state, acquiring a control command, and updating the driving state. This mechanism allows communication to resume from a subtask without restarting the entire flow. The slicing mechanism performs transmission according to a progress offset when the vehicle-mounted device end sends request information, so that files can be uploaded and downloaded in segments. This avoids having to re-transmit a whole file when an upload is interrupted, and improves communication speed and stability. The vehicle-mounted device divides a large file into slices, records the size and offset of each slice, and sends the MD5-encrypted HASH string and the file name to the cloud through an HTTP request; the cloud downloads all the slices, merges them, and verifies the correctness of the content through the HASH string.
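The slicing mechanism just described can be sketched as follows; this assumes the slices are posted over HTTP as stated above, while the URL, field names and slice size are illustrative.

```python
import hashlib

import requests  # assumption: slices are uploaded over HTTP as described


def upload_in_slices(path: str, url: str, slice_size: int = 1 << 20) -> None:
    """Split a file into fixed-size slices, record each slice's size and
    offset, and send the MD5 hash string and file name alongside the data
    so the cloud can reassemble the slices and verify the content."""
    with open(path, "rb") as f:
        data = f.read()
    file_hash = hashlib.md5(data).hexdigest()
    for offset in range(0, len(data), slice_size):
        chunk = data[offset:offset + slice_size]
        requests.post(
            url,
            files={"slice": chunk},
            data={"file_name": path, "hash": file_hash,
                  "offset": offset, "size": len(chunk)},
        )
```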
The cloud is the main body of the digital twin and is provided with a database and with early-warning and control algorithms. It mainly receives the information sent by the vehicle-mounted communication end and stores the vehicle state information and scene information in MySQL. In addition, the early-warning algorithm in the cloud judges whether to issue a warning according to the transmitted information, and the digital twin model in the cloud is controlled through the control algorithm, so that the state of the digital twin is regulated in real time. The early-warning algorithms mainly include a collision warning algorithm fusing visual perception with millimeter wave radar and a vehicle rollover warning algorithm based on AdaBoost (an iterative algorithm). The digital twin and the physical vehicle are mapped onto each other, and the cloud feeds the control results of the digital twin back to the physical vehicle to form closed-loop feedback. In addition, based on the large amount of stored data, the early-warning algorithms and the digital twin model can be continuously updated through big data analysis and other optimization algorithms.
The digital twin framework of the automatic driving safety early-warning system is shown in fig. 3B. The system mainly consists of a physical layer, a digital twin, a data layer and an application layer, and the digital twin and the physical layer are mapped onto each other in real time. After the physical layer collects data, the data layer stores it and performs early-warning judgment; after a judgment result is given, the virtual vehicle in the digital twin serves as the carrier and is reasonably controlled in the application layer so that it runs safely, and at the same time the result is fed back to the real physical vehicle.
The physical layer mainly includes the own vehicle and the surrounding environment. The self-vehicle comprises a vehicle body, a vehicle-mounted communication terminal, a camera and various radars and various other sensors; the surrounding environment includes road information, other vehicles and pedestrians, obstacles on the road, and the like. Static environment information in a driving scene and dynamic vehicle and pedestrian information are identified through a camera and a radar on the vehicle, and real-time states of the automobile, such as speed, acceleration, position, tire side inclination angle, steering wheel rotation angle and the like, are measured through various sensors on the vehicle. And finally, all the data are collected and sent to a central cloud processor through the vehicle-mounted communication terminal.
The digital twin comprises a virtual driving system, which includes a virtual environment, a virtual vehicle and the like, and is a projection of the physical layer. The virtual static environment can be built with rFPro, whose laser radar scanning is very accurate, with a scanning resolution of up to 1 cm. After laser radar scanning of roads and the objects around them, such as trees, buildings and obstacles, a simulation environment very close to the real environment can be generated. After data for the virtual vehicle are acquired by the vehicle-mounted camera, the SSD deep learning algorithm performs the convolution operations; target feature information extracted from the point cloud data of the millimeter wave radar is fused into the convolution layers of the image target, and the specific target is identified according to differences in radar reflection amplitude.
The data layer mainly combines the database and the early-warning algorithms. It receives the various data sent from the physical layer and stores them in the MySQL database. Then the state of the own vehicle and the states of other vehicles, pedestrians and obstacles are recognized through deep learning algorithms; the vehicle anti-collision algorithm judges whether a collision will occur, the vehicle rollover algorithm judges whether a rollover will occur, and early-warning information is issued.
The application layer mainly enables the early warning information to be applied in other control algorithms after receiving the early warning information, for example, adjusts the speed, acceleration, direction and the like of the vehicle, so that the vehicle can keep safe running.
In practical application, digital twinning plays an important role in anti-collision and anti-rollover of a vehicle in the running process, and anti-collision is described below:
collision early warning is carried out through vision perception and millimeter wave radar fusion, as shown in fig. 3C, data acquisition is carried out through a camera arranged on a vehicle body and the millimeter wave radar, and image data acquired by the camera and millimeter wave data acquired by the millimeter wave radar are unified through a series of coordinate transformation.
The image acquired by the camera, after being processed by the deep learning SSD (Single Shot MultiBox Detector) algorithm, yields the target region and target type, but false detections may occur. When the SSD algorithm detects a target, it first processes the image with specific convolution kernels to highlight the target features; to adapt to scale effects in the image, the algorithm stacks convolution layers and, after obtaining the upper-layer feature maps, extracts features with the neural network, finally determining the upper-left corner coordinates of the target and the score of the target category through classification and regression. Because of the combination with digital twinning, a large amount of data can be used to train the SSD algorithm and improve recognition accuracy. The millimeter wave radar outputs distance, speed, angle and the like, and can also generate false alarms. The intersection over union IoU is therefore calculated between the target region output by the SSD (denoted A) and the region formed by the projection points of the millimeter wave radar on the image (denoted B):
IoU = area(A ∩ B) / area(A ∪ B)
where A and B respectively represent the target recognition frame of the image data and the target frame corresponding to the projection points of the millimeter wave data.
When the intersection over union is greater than a certain threshold, the target is judged to be a valid target and is passed to the next processing step. Assuming that the millimeter wave radar target frame and the SSD algorithm target frame have the same size, the millimeter wave radar target point can be outside the SSD target frame, on the frame, or inside the frame; the range of IoU is calculated for these three cases in order to select an initial threshold. In addition, the amplitude reflected by the millimeter wave radar differs between targets, so pedestrians and vehicles can be further distinguished.
2D image information is generated and converted from the continuous discrete sampling of the camera, and a 4D vehicle detection algorithm is applied in the k-th frame to identify the behavior information of the vehicle. The information of the current frame is combined with a pre-established 3D model library to obtain a 3D hypothesis model of the detected vehicle; the motion parameters of the detected vehicle are established from the prediction based on the image processing result of the previous frame, and the 4D information of the detected vehicle is generated in the digital twin. When a target moves, a discretized state equation is generally used to model it:
X[(k+1)T]=A·X[kT]+v[kT]
t is the sampling period of the camera, and X is the state variable of the target; x [ kT ] represents the state of the target at the kT moment, X [ (k+1) T ] represents the state v [ kT ] of the target at the (k+1) T moment, and white noise.
Finally, collision warning is performed using the time-to-collision (TTC) method. D_AB represents the distance between the two vehicles, V_A the speed of the own vehicle, and V_B the speed of the preceding vehicle. TTC is calculated according to the following formula:
TTC = D_AB / (V_A - V_B)
The output data of the millimeter wave radar gives the target distance after coordinate transformation and provides the speed of each target relative to the own vehicle, so the TTC of each target can be calculated; the TTC is compared with a TTC threshold, and a warning is issued when it falls below the threshold. Determining the TTC threshold requires a large amount of statistical data, so the digital twin can make predictions on the virtual vehicle and then feed them back to the real vehicle, and the large amount of monitoring data is used to determine an appropriate threshold.
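A minimal sketch of the TTC-based warning is given below; the 2.5 s threshold is an illustrative placeholder, since the text states that the actual threshold is derived from a large amount of monitoring data.

```python
def time_to_collision(distance_m, v_own_mps, v_lead_mps):
    """TTC = D_AB / (V_A - V_B); only meaningful when the own vehicle
    is closing in on the preceding vehicle (V_A > V_B)."""
    closing_speed = v_own_mps - v_lead_mps
    if closing_speed <= 0:
        return float("inf")  # no collision course at the current speeds
    return distance_m / closing_speed


def collision_warning(distance_m, v_own_mps, v_lead_mps, ttc_threshold_s=2.5):
    """Warn when the computed TTC drops below the threshold."""
    return time_to_collision(distance_m, v_own_mps, v_lead_mps) < ttc_threshold_s


# Example: 20 m gap, closing at 10 m/s -> TTC = 2.0 s, below the assumed threshold
print(collision_warning(distance_m=20.0, v_own_mps=20.0, v_lead_mps=10.0))  # True
```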
Meanwhile, the lateral load transfer rate is calculated from the vertical loads to predict whether the vehicle will roll over, as shown in fig. 3D. AdaBoost (an iterative algorithm) can be used to construct a number of weak classifiers, which are trained with a large number of data samples: if a sample is classified incorrectly its weight is increased, and if it is classified correctly its weight is decreased; finally a strong classifier is obtained by weighted voting and superposition. In rollover prediction, the lateral load transfer rate (LTR) criterion is used as the weak classifier; the LTR takes an appropriate threshold value as the rollover limit (for example, -1 to +1), and ±1 are used as the class labels for distinguishing the data. The yaw rate, roll angle, centroid lateral acceleration, centroid side-slip angle and the like of the vehicle are recorded at the same time, and the data class labels are generated. The lateral load transfer rate (LTR) is an index for judging whether the vehicle is about to roll over, and is given by the following formula:
LTR = (Σ F_r,i - Σ F_l,i) / (Σ F_r,i + Σ F_l,i), with the sums taken over the tires i
F_r,i and F_l,i represent the vertical loads of the i-th tire on the right and left sides of the vehicle, respectively. When the LTR reaches the threshold ±1, the load on the wheels of one side is zero, i.e., the tires are off the ground and a rollover is about to occur. Conventional warning strategies usually set the threshold to a fixed value; here the threshold can be set to a dynamic value (set according to actual conditions), which can be obtained by an intelligent algorithm that continuously optimizes over the digital twin and the large amount of generated data.
It will be appreciated that the data collected by the sensors on the vehicle and the vehicle's own parameters are modeled in vehicle dynamics simulation software (e.g., TruckSim). The vehicle sensors monitor in real time, acquire the raw driving data, and generate data class labels such as the LTR, roll angle and centroid lateral acceleration. Single-layer decision-tree weak classifiers are established, and the minimum-error split values are obtained by cycling over the feature columns for the several weak classifiers. The probability of each classification result is also calculated. Real-time data are then imported, every single-layer decision-tree weak classifier gives its classification result and probability, all classifiers are accumulated to obtain the final result, and early-warning information is sent. The early-warning information is passed to the subsequent control algorithm to correct the state of the digital twin, and is fed back to the real vehicle in real time. In reality, the large amount of data generated by the vehicle can also be used to continuously train the AdaBoost algorithm and improve the accuracy of rollover prediction.
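For illustration, the AdaBoost rollover classifier described above can be sketched with scikit-learn, whose default base estimator is a depth-1 decision tree (a single-layer decision stump, matching the weak classifiers above). The synthetic features and labels below are assumptions made only to show the training and prediction flow, not data from the original disclosure.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Synthetic samples: [LTR, yaw rate, roll angle, lateral acceleration,
# side-slip angle], labelled +1 (rollover tendency) / -1 (safe).
X = rng.normal(size=(500, 5))
y = np.where(np.abs(X[:, 0]) > 0.8, 1, -1)  # label driven by |LTR| in this toy data

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

sample = np.array([[0.95, 0.3, 0.1, 2.5, 0.05]])
print(clf.predict(sample), clf.predict_proba(sample))
```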
Example 4
Fig. 4 is a schematic structural diagram of a driving assisting device according to a fourth embodiment of the present application. As shown in fig. 4, the apparatus 400 includes:
a data acquisition module 410, configured to acquire vehicle operation data and environment data of a current vehicle during driving;
the vehicle condition determining module 420 is configured to determine vehicle condition information of a current vehicle and obstacle information of an environment according to vehicle operation data, environment data and a digital twin model constructed in advance;
the danger prediction module 430 is configured to predict whether the current vehicle is dangerous or not according to the vehicle condition information or the obstacle information by using a digital twin model;
the driving assistance module 440 is configured to determine driving advice according to the prediction result, and feed back the driving advice to the vehicle-mounted terminal to assist driving of the current vehicle.
According to the technical scheme of this embodiment, vehicle operation data and environment data are acquired, the vehicle condition information of the current vehicle and the obstacle information in the environment are determined in the digital twin model, and whether the vehicle is in danger is predicted, which helps the vehicle select an appropriate driving mode in time. Because simulation and prediction are carried out in the digital twin model and the generated driving advice is fed back to the vehicle for execution, the problem of insufficient computing power of the vehicle-mounted terminal can be alleviated, and the assisted driving (or automatic driving) function can be improved with the support of the large amount of data behind the cloud digital twin model, thereby further improving the safety and stability of the vehicle.
In an alternative embodiment, the vehicle condition information includes the yaw rate, roll angle, centroid lateral acceleration and centroid side-slip angle of the current vehicle; the hazard prediction module 430 may include:
the vertical load determining unit is used for determining the vertical load of each tire of the current vehicle according to the yaw rate, the roll angle, the centroid lateral acceleration and the centroid side-slip angle;
and the rollover prediction unit is used for judging, based on the digital twin model, whether the vehicle will roll over according to the vertical loads.
Further, the rollover prediction unit may include:
a load transfer rate determination subunit, configured to determine a lateral load transfer rate of the current vehicle according to each vertical load;
and judging whether the vehicle will roll over according to the lateral load transfer rate and a preset threshold value.
Optionally, the apparatus 400 includes an obstacle information determining module, where the obstacle information determining module may include:
a data acquisition unit for acquiring image data and millimeter wave data acquired by a current vehicle;
the coverage rate determining unit is used for carrying out information fusion on the image data and the millimeter wave data and determining the coverage rate between a target identification frame in the image data and a projection point area in the millimeter wave data;
And the obstacle information determining unit is used for determining obstacle information according to the coverage rate.
In another alternative embodiment, the hazard prediction module 430 may further include:
a collision time determining unit for determining a collision time between the obstacle and the current vehicle according to the vehicle condition information, the obstacle information and the digital twin model;
and the collision prediction unit is used for determining whether the current vehicle collides according to the collision time.
The driving assisting device provided by the embodiment of the application can execute the driving assisting method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of executing the driving assisting methods.
Example five
Fig. 5 is a schematic structural diagram of a driving assisting device according to a fifth embodiment of the present application. As shown in fig. 5, the apparatus 500 includes:
the data acquisition module 510 is configured to acquire vehicle operation data and environment data of the current vehicle during the driving process, and send the vehicle operation data and the environment data to the cloud end, so that the cloud end predicts whether the current vehicle is dangerous or not through the digital twin model.
The driving assistance module 520 is configured to receive the driving advice fed back by the cloud, and modify driving parameters according to the driving advice, so as to assist driving of the current vehicle.
According to the technical scheme of this embodiment, the simulation and prediction are carried out in the digital twin model, and the generated driving advice is fed back to the vehicle-mounted terminal of the vehicle for execution so as to assist driving. This alleviates the problem of insufficient computing power of the vehicle-mounted terminal, and the assisted driving (or automatic driving) function can be improved with the support of the large amount of data behind the cloud digital twin model, thereby further improving the safety and stability of the vehicle.
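A hedged sketch of the vehicle-terminal side follows: the terminal periodically uploads operation and environment data, receives the cloud's driving advice, and adjusts the driving parameters accordingly. The sensor, cloud_link and controller interfaces, the JSON payload and the 0.1 s period are all illustrative assumptions.

```python
import json
import time


def run_terminal_loop(sensor, cloud_link, controller, period_s: float = 0.1):
    """sensor, cloud_link and controller are hypothetical on-vehicle interfaces."""
    while True:
        payload = {
            "operation": sensor.read_operation_data(),      # speed, yaw rate, roll angle, ...
            "environment": sensor.read_environment_data(),  # camera frames, millimeter-wave data, ...
        }
        cloud_link.send(json.dumps(payload))                # upload to the cloud digital twin

        advice = cloud_link.receive_advice()                # e.g. {"action": "decelerate", "target_speed": 12.0}
        if advice is not None:
            controller.apply(advice)                        # modify the driving parameters per the advice

        time.sleep(period_s)
```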
The driving assisting device provided by the embodiment of the application can execute the driving assisting method provided by any embodiment of the application, and has the functional modules and beneficial effects corresponding to the executed method.
Example six
Fig. 6 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12 and a Random Access Memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or the computer program loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 executes the respective methods and processes described above, such as the driving assistance method.
In some embodiments, the present disclosure further provides a vehicle, which may be an autopilot-related vehicle, and the vehicle may be provided with an electronic device as provided in the embodiments of the present disclosure.
In some embodiments, the driving assistance method may be implemented as a computer program, which is tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the driving assistance method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the driving assistance method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present application, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the problems of high management difficulty and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present application are achieved, and the present application is not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.
Claims (11)
1. A method of driving assistance applied to a cloud, the method comprising:
acquiring vehicle operation data and environment data of a current vehicle during a driving process;
determining the vehicle condition information of the current vehicle and the obstacle information of the environment according to the vehicle running data, the environment data and a pre-constructed digital twin model;
predicting whether the current vehicle is dangerous or not through the digital twin model according to the vehicle condition information or the obstacle information;
and determining driving advice according to the prediction result, and feeding the driving advice back to the vehicle-mounted terminal so as to assist the driving of the current vehicle.
2. The method of claim 1, wherein the vehicle condition information includes yaw rate, roll angle, centroid lateral acceleration, and centroid lateral deviation angle of the current vehicle; predicting whether the current vehicle is dangerous or not according to the vehicle condition information through the digital twin model comprises the following steps:
determining the vertical load of each tire of the current vehicle according to the yaw rate, the roll angle, the centroid lateral acceleration and the centroid lateral deviation angle;
and based on the digital twin model, determining whether the current vehicle rolls over according to each of the vertical loads.
3. The method of claim 2, wherein the determining whether the current vehicle rolls over according to each of the vertical loads comprises:
determining the lateral load transfer rate of the current vehicle according to each of the vertical loads;
and determining whether the current vehicle rolls over according to the lateral load transfer rate and a preset threshold value.
4. A method according to any one of claims 1-3, characterized in that the obstacle information is determined by:
acquiring image data and millimeter wave data acquired by the current vehicle;
performing information fusion on the image data and the millimeter wave data, and determining the coverage rate between a target identification frame in the image data and a projection point area in the millimeter wave data;
and determining the obstacle information according to the coverage rate.
5. A method according to any one of claims 1-3, wherein the predicting, through the digital twin model, whether the current vehicle is dangerous according to the vehicle condition information or the obstacle information further comprises:
determining collision time between an obstacle and a current vehicle according to the vehicle condition information, the obstacle information and the digital twin model;
and determining whether the current vehicle collides according to the collision time.
6. A driving assistance method, applied to a vehicle-mounted terminal, comprising:
collecting vehicle operation data and environment data of a current vehicle in a running process, and sending the vehicle operation data and the environment data to a cloud end so that the cloud end predicts whether the current vehicle is dangerous or not through a digital twin model;
and receiving the driving advice fed back by the cloud, and modifying driving parameters according to the driving advice so as to assist the driving of the current vehicle.
7. A driving assistance apparatus, applied to a cloud, the apparatus comprising:
the data acquisition module is used for acquiring vehicle running data and environment data of the current vehicle during the driving process;
the vehicle condition determining module is used for determining vehicle condition information of the current vehicle and obstacle information of the environment according to the vehicle running data, the environment data and a pre-constructed digital twin model;
the danger prediction module is used for predicting whether the current vehicle is dangerous or not through the digital twin model according to the vehicle condition information or the obstacle information;
and the driving assisting module is used for determining driving advice according to the prediction result and feeding the driving advice back to the vehicle-mounted terminal so as to assist the driving of the current vehicle.
8. An auxiliary driving device, characterized by being applied to a vehicle-mounted terminal, comprising:
the data acquisition module is used for acquiring vehicle operation data and environment data of the current vehicle in the running process and sending the vehicle operation data and the environment data to the cloud so as to enable the cloud to predict whether the current vehicle is dangerous or not through the digital twin model;
and the driving assistance module is used for receiving the driving advice fed back by the cloud and modifying driving parameters according to the driving advice so as to assist the driving of the current vehicle.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the driving assistance method of any one of claims 1-5 and/or to perform the driving assistance method of claim 6.
10. A vehicle, characterized in that it is provided with an electronic device as claimed in claim 9.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer instructions for causing a processor to implement the driving assistance method according to any one of claims 1 to 5 and/or to implement the driving assistance method according to claim 6 when executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310904974.1A CN116767281A (en) | 2023-07-21 | 2023-07-21 | Auxiliary driving method, device, equipment, vehicle and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310904974.1A CN116767281A (en) | 2023-07-21 | 2023-07-21 | Auxiliary driving method, device, equipment, vehicle and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116767281A true CN116767281A (en) | 2023-09-19 |
Family
ID=87991415
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310904974.1A Pending CN116767281A (en) | 2023-07-21 | 2023-07-21 | Auxiliary driving method, device, equipment, vehicle and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116767281A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117727183A (en) * | 2024-02-18 | 2024-03-19 | 南京淼瀛科技有限公司 | Automatic driving safety early warning method and system combining vehicle-road cooperation |
JP7473730B1 (en) | 2023-09-22 | 2024-04-23 | Kddi株式会社 | Information processing device, information processing method, and program |
CN117922538A (en) * | 2024-03-25 | 2024-04-26 | 杭州迪为科技有限公司 | Measurement and control method and system for hybrid electric vehicle based on digital twin technology |
CN118015598A (en) * | 2024-04-08 | 2024-05-10 | 广汽埃安新能源汽车股份有限公司 | Target detection model construction method, device and target detection system |
CN118289038A (en) * | 2024-06-06 | 2024-07-05 | 中国第一汽车股份有限公司 | Vehicle operation control method, device, computer equipment and storage medium |
CN118764518A (en) * | 2024-05-31 | 2024-10-11 | 湖北中交航胜科技有限公司 | Cloud-based assisted driving control system and method |
CN119169821A (en) * | 2024-09-23 | 2024-12-20 | 中山大学 | A vehicle driving assistance system and control method based on digital twin |
CN119611517A (en) * | 2025-02-12 | 2025-03-14 | 厦门理工学院 | Distributed electric vehicle steering stability warning method, device, equipment and medium |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7473730B1 (en) | 2023-09-22 | 2024-04-23 | Kddi株式会社 | Information processing device, information processing method, and program |
CN117727183A (en) * | 2024-02-18 | 2024-03-19 | 南京淼瀛科技有限公司 | Automatic driving safety early warning method and system combining vehicle-road cooperation |
CN117727183B (en) * | 2024-02-18 | 2024-05-17 | 南京淼瀛科技有限公司 | Automatic driving safety early warning method and system combining vehicle-road cooperation |
CN117922538A (en) * | 2024-03-25 | 2024-04-26 | 杭州迪为科技有限公司 | Measurement and control method and system for hybrid electric vehicle based on digital twin technology |
CN117922538B (en) * | 2024-03-25 | 2024-06-11 | 杭州迪为科技有限公司 | Measurement and control method and system for hybrid electric vehicle based on digital twin technology |
CN118015598A (en) * | 2024-04-08 | 2024-05-10 | 广汽埃安新能源汽车股份有限公司 | Target detection model construction method, device and target detection system |
CN118764518A (en) * | 2024-05-31 | 2024-10-11 | 湖北中交航胜科技有限公司 | Cloud-based assisted driving control system and method |
CN118289038A (en) * | 2024-06-06 | 2024-07-05 | 中国第一汽车股份有限公司 | Vehicle operation control method, device, computer equipment and storage medium |
CN118289038B (en) * | 2024-06-06 | 2024-10-18 | 中国第一汽车股份有限公司 | Vehicle operation control method, device, computer equipment and storage medium |
CN119169821A (en) * | 2024-09-23 | 2024-12-20 | 中山大学 | A vehicle driving assistance system and control method based on digital twin |
CN119169821B (en) * | 2024-09-23 | 2025-06-03 | 中山大学 | Vehicle driving auxiliary system based on digital twin and control method |
CN119611517A (en) * | 2025-02-12 | 2025-03-14 | 厦门理工学院 | Distributed electric vehicle steering stability warning method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116767281A (en) | Auxiliary driving method, device, equipment, vehicle and medium | |
US20220035733A1 (en) | Method and apparatus for checking automatic driving algorithm, related device and storage medium | |
CN109927719B (en) | Auxiliary driving method and system based on obstacle trajectory prediction | |
CN113165646B (en) | Electronic device for detecting risk factors around a vehicle and control method thereof | |
US20190155291A1 (en) | Methods and systems for automated driving system simulation, validation, and implementation | |
JP2022160538A (en) | Collision detection method, device, electronic apparatus, storage medium, automatic driving vehicle, and computer program | |
CN113741485A (en) | Control method and device for cooperative automatic driving of vehicle and road, electronic equipment and vehicle | |
CN110632610A (en) | Autonomous vehicle localization using gaussian mixture model | |
US12254700B2 (en) | Method and systems for detection accuracy ranking and vehicle instructions | |
CN112446466A (en) | Measuring confidence in deep neural networks | |
CN119190018B (en) | A V2X-based blind spot assisted driving method and system for intersections | |
CN112287801A (en) | Vehicle-mounted data processing method and device, server and readable storage medium | |
CN116776151A (en) | Autonomous driving models and training methods that can autonomously interact with people outside the vehicle | |
CN117818659A (en) | Vehicle safety decision method and device, electronic equipment, storage medium and vehicle | |
CN116358584A (en) | Automatic driving vehicle path planning method, device, equipment and medium | |
CN117774914A (en) | Target screening method and device for automatic emergency braking | |
CN115092130B (en) | Vehicle collision prediction method, device, electronic device, medium and vehicle | |
CN116279589A (en) | Training method of automatic driving decision model, vehicle control method and device | |
CN113353083B (en) | Vehicle behavior recognition method | |
CN114212108A (en) | Automatic driving method, device, vehicle, storage medium and product | |
CN117612140B (en) | Road scene identification method and device, storage medium and electronic equipment | |
CN114954534B (en) | Method and device for detecting abnormal operating state of an autonomous driving vehicle | |
CN113971788A (en) | Vehicle travel control method, device, electronic device, and storage medium | |
CN115909813B (en) | Vehicle collision early warning method, device, equipment and storage medium | |
CN116853295A (en) | Obstacle track prediction method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |