CN115468778A - Vehicle testing method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN115468778A (application CN202211113822.1A)
- Authority
- CN
- China
- Prior art keywords
- traffic
- simulated
- images
- information
- static element
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01M—TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
- G01M17/00—Testing of vehicles
- G01M17/007—Wheeled or endless-tracked vehicles
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Traffic Control Systems (AREA)
Abstract
The disclosure provides a vehicle testing method and apparatus, an electronic device, and a storage medium, and relates to the field of artificial intelligence, in particular to automatic driving, computer vision, and deep learning. The scheme is as follows: driving simulation is performed on a plurality of vehicles according to set historical traffic flow information to obtain simulated traffic flow information corresponding to the vehicles; simulation parameter information of an on-board sensor of a target vehicle among the plurality of vehicles is determined according to parameter information of the on-board sensor; image fusion is then performed on a plurality of traffic static element images and a plurality of simulated traffic dynamic element images, determined according to the simulated traffic flow information and/or the simulation parameter information, to obtain a plurality of target fusion images; and the target vehicle is tested according to the plurality of target fusion images to obtain a test result. Interaction between the simulated traffic dynamic elements and the traffic static elements is thereby realized, and the accuracy of the vehicle test is improved.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the fields of automatic driving, computer vision, and deep learning, and more particularly to a vehicle testing method and apparatus, an electronic device, and a storage medium.
Background
As vehicle technology matures, the vehicle industry continues to develop. Before vehicles are put on the market, they need to be tested to ensure their performance and improve driving safety; how to test vehicles is therefore very important.
Disclosure of Invention
The disclosure provides a vehicle testing method, a vehicle testing device, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a vehicle testing method including: according to set historical traffic flow information, driving simulation is carried out on a plurality of vehicles so as to obtain simulated traffic flow information corresponding to the vehicles; determining simulation parameter information of an on-board sensor of a target vehicle in the plurality of vehicles according to the parameter information of the on-board sensor; determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to simulated traffic flow information and/or the simulated parameter information; performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images; and testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
According to another aspect of the present disclosure, there is provided a vehicle testing apparatus including: the simulation module is used for carrying out driving simulation on a plurality of vehicles according to set historical traffic flow information so as to obtain simulated traffic flow information corresponding to the vehicles; the first determination module is used for determining simulation parameter information of an on-board sensor of a target vehicle in the plurality of vehicles according to the parameter information of the on-board sensor; the second determination module is used for determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to simulated traffic flow information and/or the simulated parameter information; the fusion module is used for carrying out image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images; and the test module is used for testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle testing method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to execute the vehicle testing method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the vehicle testing method of the embodiments of the first aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic illustration according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a sixth embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating a manner of obtaining data for vehicle testing according to an embodiment of the present disclosure;
FIG. 8 is a schematic flow chart diagram of a vehicle testing method provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram according to a seventh embodiment of the present disclosure;
FIG. 10 is a block diagram of an electronic device for implementing a vehicle testing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the related art, vehicle testing is performed by a simulation system based on a graphics rendering engine or a game engine, in which the environment is rendered based on 3D modeling; however, 3D modeling generally suffers from problems such as high texture repetitiveness and poor realism. Neural rendering technologies based on Neural Radiance Fields (NeRF) and the like can solve the texture-realism problem well, but NeRF is currently applied mainly to static scenes and is not suited to expressing dynamic scene elements such as vehicles, pedestrians, and traffic lights, or to handling physical collisions.
Therefore, in view of the above problems, the present disclosure provides a vehicle testing method, apparatus, electronic device, and storage medium.
A vehicle testing method, a device, an electronic apparatus, and a storage medium according to embodiments of the present disclosure are described below with reference to the drawings.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. It should be noted that the vehicle testing method of the present disclosure is described, by way of example, as being configured in a vehicle testing apparatus; the apparatus may be applied to any electronic device, so that the electronic device can perform the vehicle testing function.
The electronic device may be any device having computing capability, for example a personal computer (PC) or a mobile terminal; the mobile terminal may be a hardware device having an operating system, a touch screen, and/or a display screen, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
As shown in fig. 1, the vehicle testing method may include the steps of:
And step 101, performing driving simulation on a plurality of vehicles according to the set historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles.
As a possible implementation of the embodiment of the present disclosure, the historical traffic flow information may be historical traffic flow information of an actual road; for example, it may include position information, speed information, driving direction information, and driving lane information of a plurality of vehicles, traffic signal information on the road on which the vehicles travel, and the like. Driving simulation may then be performed on vehicle models corresponding to the plurality of vehicles according to the historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles.
As another possible implementation, a historical traffic video may be played back to collect historical traffic flow information corresponding to a plurality of vehicles on an actual road; driving simulation is then performed on vehicle models corresponding to the plurality of vehicles according to the historical traffic flow information to obtain the simulated traffic flow information. The historical traffic flow information may include position information, speed information, driving direction information, and driving lane information of the vehicles, traffic signal information on the roads on which the vehicles travel, and the like.
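As a concrete illustration of this replay step, the sketch below advances each vehicle's historical state with a simple constant-velocity motion model to produce simulated traffic-flow snapshots. All names (`VehicleState`, `simulate_traffic`) and the motion model itself are illustrative assumptions; the disclosure does not specify the simulation dynamics.

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    """One vehicle's entry in the (historical or simulated) traffic flow."""
    x: float        # longitudinal position, metres
    y: float        # lateral position, metres
    speed: float    # metres per second
    heading: float  # radians; 0.0 means driving along +x

def simulate_traffic(history: list, dt: float, steps: int) -> list:
    """Advance every vehicle from its historical state with a constant-velocity
    model, producing one simulated traffic-flow snapshot per time step."""
    frames = []
    current = list(history)
    for _ in range(steps):
        current = [
            VehicleState(
                x=v.x + v.speed * math.cos(v.heading) * dt,
                y=v.y + v.speed * math.sin(v.heading) * dt,
                speed=v.speed,
                heading=v.heading,
            )
            for v in current
        ]
        frames.append(current)
    return frames

# Two vehicles replayed for 3 steps of 0.1 s each.
history = [VehicleState(0.0, 0.0, 10.0, 0.0), VehicleState(5.0, 3.5, 8.0, 0.0)]
frames = simulate_traffic(history, dt=0.1, steps=3)
```

A real implementation would of course replay the recorded speeds and lane changes rather than extrapolate them, but the shape of the data flow, historical states in, per-step simulated states out, is the same.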
And 102, determining simulation parameter information of the vehicle-mounted sensor of the target vehicle in the plurality of vehicles according to the parameter information of the vehicle-mounted sensor.
In the embodiment of the present disclosure, the simulation parameter information of the on-board sensor of the target vehicle among the plurality of vehicles may be set according to the parameter information of the on-board sensor of an actual vehicle. The simulation parameter information may include the intrinsic parameters of the on-board sensor and a plurality of pieces of simulated pose information (extrinsic parameters). The on-board sensor may include an on-board camera, a millimeter-wave radar, an ultrasonic radar, and the like, and the target vehicle may be an autonomous vehicle among the plurality of vehicles or any vehicle that needs to be tested, which is not specifically limited in this disclosure.
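The simulation parameter information described here, fixed intrinsics plus a sequence of simulated poses (extrinsics), can be captured in a small container type. This is a hypothetical sketch: the field names, the pinhole-style intrinsics, and the `(x, y, z, yaw)` pose format are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class CameraIntrinsics:
    """Pinhole-camera internal parameters of the simulated on-board sensor."""
    fx: float  # focal length in pixels, x
    fy: float  # focal length in pixels, y
    cx: float  # principal point, x
    cy: float  # principal point, y

@dataclass
class SensorSimParams:
    """Simulation parameter information for one on-board sensor:
    fixed intrinsics plus one simulated pose (extrinsics) per time step."""
    intrinsics: CameraIntrinsics
    poses: list  # each pose: (x, y, z, yaw) of the sensor in the world frame

# Parameters copied from a real sensor, poses generated by the simulation.
params = SensorSimParams(
    intrinsics=CameraIntrinsics(fx=1000.0, fy=1000.0, cx=640.0, cy=360.0),
    poses=[(0.0, 0.0, 1.5, 0.0), (1.0, 0.0, 1.5, 0.0)],
)
```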
And 103, determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and/or the simulated parameter information.
As a possible implementation manner of the embodiment of the present disclosure, a plurality of traffic static element images may be generated according to a plurality of simulation pose information in the simulation parameter information, and a plurality of simulated traffic dynamic element images may be generated according to the simulated traffic flow information and the plurality of simulation parameter information.
And 104, carrying out image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images.
In order to improve the authenticity and accuracy of the vehicle test, the traffic static element and the traffic dynamic element can be fused to realize the vehicle test with high simulation degree.
And 105, testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
Furthermore, the target vehicle can be tested using the plurality of target fusion images to obtain a test result. For example, when the target vehicle is an autonomous vehicle, a vehicle perception test and a trajectory planning test may be performed on it to obtain a perception test result and a trajectory planning test result.
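One way such a perception test could be scored, an assumed sketch, since the disclosure does not name a metric, is to match the perception stack's detections against the labels of the rendered obstacles by intersection-over-union (IoU) and report recall:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def perception_test(detections, ground_truth, iou_threshold=0.5):
    """Fraction of labelled obstacles in the fused test images that the
    perception stack found (greedy one-to-one matching by best IoU)."""
    remaining = list(detections)
    hits = 0
    for gt in ground_truth:
        best = max(remaining, key=lambda d: iou(d, gt), default=None)
        if best is not None and iou(best, gt) >= iou_threshold:
            hits += 1
            remaining.remove(best)
    return hits / len(ground_truth)

# One obstacle found accurately, one missed entirely -> recall 0.5.
gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
det = [(1, 1, 10, 10), (50, 50, 60, 60)]
recall = perception_test(det, gt)
```

Since the fused images are synthesized, the ground-truth boxes come for free from the labels of the rendered dynamic elements, which is one practical attraction of this kind of test.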
In conclusion, driving simulation is performed on a plurality of vehicles according to the set historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles; simulation parameter information of an on-board sensor of a target vehicle among the plurality of vehicles is determined according to the parameter information of the on-board sensor; a plurality of traffic static element images and a plurality of simulated traffic dynamic element images are determined according to the simulated traffic flow information and/or the simulation parameter information; image fusion is performed on these images to obtain a plurality of target fusion images; and the target vehicle is tested according to the plurality of target fusion images to obtain a test result. By fusing the traffic static element images with the simulated traffic dynamic element images and testing the target vehicle on the resulting fusion images, interaction between the simulated traffic dynamic elements and the traffic static elements is realized, and the accuracy of the vehicle test is improved.
In order to clearly illustrate how the above-described embodiments determine the plurality of traffic static element images and the plurality of simulated traffic dynamic element images based on the simulated traffic flow information and the simulated parameter information, the present disclosure proposes another vehicle testing method.
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure.
As shown in fig. 2, the vehicle testing method may include the steps of:
In order to perform driving simulation on a plurality of vehicles, in the disclosed embodiment, driving parameter information of the plurality of vehicles may be extracted from the historical traffic flow information; driving simulation is then performed on the plurality of vehicles according to the driving parameter information to obtain simulated traffic flow information corresponding to the plurality of vehicles.
In order to improve the accuracy of the vehicle driving simulation, the driving parameter information may include: position information, direction information, speed information, acceleration information, travel lane information, and the like.
And step 202, determining simulation parameter information of the vehicle-mounted sensor of the target vehicle in the plurality of vehicles according to the parameter information of the vehicle-mounted sensor.
And step 203, determining a traffic static element image matched with any simulated pose information according to any simulated pose information in the plurality of simulated pose information.
In order to improve the authenticity of the traffic static element in the vehicle test and thus improve the confidence of the vehicle test, as a possible implementation manner of the embodiment of the present disclosure, a plurality of pieces of simulated pose information may be respectively input into the trained traffic static element image generation model to obtain the traffic static element image output by the traffic static element image generation model. The trained traffic static element image generation model learns the corresponding relation between the pose information and the traffic static element image, and the traffic static element image generation model can be a NeRF model.
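A full NeRF implementation is beyond the scope of this description, but its characteristic ingredient, the sinusoidal positional encoding applied to each input coordinate (here, the components of a simulated pose) before they enter the network, can be sketched as follows. The `(x, y, z, yaw)` pose format and the number of frequency bands are assumptions for illustration:

```python
import math

def positional_encoding(p: float, num_freqs: int = 4) -> list:
    """NeRF-style encoding of one scalar coordinate:
    gamma(p) = (sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^(L-1) pi p), cos(2^(L-1) pi p)).
    The high-frequency terms let an MLP represent fine texture detail."""
    out = []
    for k in range(num_freqs):
        freq = (2.0 ** k) * math.pi
        out.extend([math.sin(freq * p), math.cos(freq * p)])
    return out

def encode_pose(pose, num_freqs: int = 4) -> list:
    """Encode every component of an assumed (x, y, z, yaw) sensor pose,
    producing the feature vector a pose-conditioned generator would consume."""
    feats = []
    for component in pose:
        feats.extend(positional_encoding(component, num_freqs))
    return feats

features = encode_pose((0.5, 0.0, 1.5, 0.0))
```

In a trained model, `features` (together with per-ray sample positions) would be fed to the MLP that outputs density and color, and volume rendering would turn those into the traffic static element image for that pose.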
And 204, rendering the image by adopting the simulated traffic flow information and the plurality of simulated parameter information to obtain a plurality of simulated traffic dynamic element images.
In the embodiment of the disclosure, in order to enable the simulated traffic dynamic element image to include a plurality of traffic dynamic elements, three-dimensional rendering may be performed by using the simulated traffic flow information and a plurality of simulation parameter information to render a plurality of simulated traffic dynamic element images including a plurality of related traffic dynamic elements such as vehicles, pedestrians, traffic lights, and the like.
As an example, the simulated traffic flow information and the plurality of pieces of simulated parameter information are input into a three-dimensional rendering model, so that the three-dimensional rendering model three-dimensionally renders the simulated traffic flow information based on a plurality of pieces of simulated pose information in the plurality of simulated parameters, so as to obtain a plurality of simulated traffic dynamic element images output by the three-dimensional rendering model and matched with the plurality of pieces of simulated pose information.
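The rendering step ultimately places each simulated dynamic element into the image plane of the simulated sensor. A minimal pinhole-projection sketch is shown below; the camera convention (world z up, yaw about the vertical axis, image center at the principal point) and all parameter values are assumptions, not taken from the disclosure:

```python
import math

def project_point(point_w, cam_pose, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Project a world-frame 3D point (x, y, z; z up) into pixel coordinates.
    cam_pose = (x, y, z, yaw): sensor position plus heading about the z axis.
    Returns (u, v), or None when the point is behind the camera."""
    px, py, pz, yaw = cam_pose
    dx, dy, dz = point_w[0] - px, point_w[1] - py, point_w[2] - pz
    depth = dx * math.cos(yaw) + dy * math.sin(yaw)    # distance along viewing axis
    if depth <= 0.0:
        return None
    lateral = -dx * math.sin(yaw) + dy * math.cos(yaw)  # left of the axis is positive
    u = cx - fx * lateral / depth
    v = cy - fy * dz / depth
    return (u, v)

# A vehicle 10 m straight ahead at sensor height lands at the principal point.
pixel = project_point((10.0, 0.0, 1.5), (0.0, 0.0, 1.5, 0.0))
```

A renderer repeats this projection for every vertex of every dynamic element at every simulated pose, which is how the same simulated traffic flow yields a different dynamic element image per pose.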
And step 205, performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images.
And step 206, testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
It should be noted that the execution processes of steps 201 to 202 and steps 205 to 206 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In summary, a traffic static element image matched with each piece of simulated pose information is determined, and image rendering is performed using the simulated traffic flow information and the plurality of pieces of simulation parameter information to obtain a plurality of simulated traffic dynamic element images. In this way, a plurality of simulated traffic dynamic element images containing multiple traffic dynamic elements, together with a plurality of traffic static element images of higher realism, can be generated according to the simulated traffic flow information and/or the simulation parameter information.
In order to clearly illustrate how the above embodiment trains the traffic static element image generation model so that the traffic static element image generation model learns the corresponding relationship between the pose information and the traffic static element image, the present disclosure proposes another vehicle testing method.
Fig. 3 is a schematic diagram according to a third embodiment of the present disclosure.
As shown in fig. 3, the vehicle testing method may include the steps of:
And step 302, determining simulation parameter information of the vehicle-mounted sensor of the target vehicle in the plurality of vehicles according to the parameter information of the vehicle-mounted sensor.
And step 303, obtaining a sample traffic static element image, and labeling the sample traffic static element image with the pose information of the corresponding vehicle-mounted sensor.
In this disclosure, the sample traffic static element image may be acquired on-line, for example, an image including a plurality of static elements on a real road may be acquired on-line through a web crawler technology, and is used as the sample traffic static element image, or the sample traffic static element image may also be an image including a plurality of static elements on a real road acquired by a vehicle-mounted sensor, and the like, which is not limited in this disclosure.
It should be noted that, in order to make the sample traffic static element image carry the pose information, the pose information of the corresponding vehicle-mounted sensor may be labeled on the sample traffic static element image.
And 304, inputting the pose information of the vehicle-mounted sensor carried on the sample traffic static element image into the initial static element image generation model to obtain a traffic static element prediction image output by the initial static element image generation model.
In order to enable the static element image generation model to learn the corresponding relationship between the pose information and the traffic static element image, as an example, the image information corresponding to the sample traffic static element image and the labeled pose information of the vehicle-mounted sensor may be input into the initial static element image generation model to obtain the traffic static element prediction image output by the initial static element image generation model.
As another example, a plurality of pieces of sample traffic static element image information may be preset in the initial static element image generation model; the pose information of the vehicle-mounted sensor labeled on the sample traffic static element image is then input into the initial static element image generation model to obtain the traffic static element prediction image output by the model.
And 305, training an initial traffic static element image generation model according to the difference between the traffic static element prediction image and the sample traffic static element image.
Further, the coefficients of the initial traffic static element image generation model are adjusted according to the difference between the traffic static element prediction image and the sample traffic static element image, so as to minimize that difference.
It should be noted that the above example takes minimization of the difference between the traffic static element prediction image and the sample traffic static element image as the termination condition of the model training. In practical applications, other termination conditions may also be set; for example, training may terminate when the number of training iterations reaches a set number, or when the training duration reaches a set duration, which is not limited by the present disclosure.
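The alternative termination conditions just described (difference minimized below a tolerance, iteration budget, time budget) can be combined in a single training driver. The sketch below uses a toy quadratic loss standing in for the real image generation model; every name is hypothetical:

```python
import time

def train(model_step, max_epochs=1000, loss_tol=1e-4, max_seconds=60.0):
    """Generic training driver: model_step() performs one coefficient update
    and returns the current loss; training stops when ANY of the termination
    conditions is met (loss small enough, epoch budget, or time budget)."""
    start = time.monotonic()
    loss = float("inf")
    epochs = 0
    while (epochs < max_epochs and loss > loss_tol
           and time.monotonic() - start < max_seconds):
        loss = model_step()
        epochs += 1
    return epochs, loss

# Toy stand-in for the real model: gradient descent on f(w) = (w - 3)^2,
# where the returned "prediction difference" is the squared error itself.
w = 0.0
def step():
    global w
    grad = 2.0 * (w - 3.0)
    w -= 0.1 * grad
    return (w - 3.0) ** 2

epochs, final_loss = train(step, max_epochs=500, loss_tol=1e-6)
```

The toy loss converges long before the epoch budget, so here the loss-tolerance condition is the one that fires; with a harder objective the epoch or time budget would take over.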
And 307, rendering the image by adopting the simulated traffic flow information and the plurality of simulated parameter information to obtain a plurality of simulated traffic dynamic element images.
And 308, carrying out image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images.
It should be noted that the execution processes of steps 301 to 302 and steps 307 to 309 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In conclusion, a sample traffic static element image is obtained; the pose information of the vehicle-mounted sensor labeled on the sample traffic static element image is input into the initial static element image generation model to obtain a traffic static element prediction image output by the model; and the initial traffic static element image generation model is trained according to the difference between the traffic static element prediction image and the sample traffic static element image. In this way, the traffic static element image generation model can be trained to learn the corresponding relationship between the pose information and the traffic static element image.
In order to clearly illustrate how the above embodiments perform image fusion on the multiple traffic static element images and the multiple simulated traffic dynamic element images to obtain multiple target fusion images, the present disclosure proposes another vehicle testing method.
Fig. 4 is a schematic diagram according to a fourth embodiment of the present disclosure.
As shown in fig. 4, the vehicle testing method may include the steps of:
In the embodiment of the disclosure, each of the plurality of traffic static element images corresponds to one piece of simulated pose information, and each simulated traffic dynamic element image likewise corresponds to one piece of simulated pose information. Therefore, for any traffic static element image, the simulated traffic dynamic element image corresponding to the same simulated pose information can be determined and used as the image matched with that traffic static element image.
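The matching described above amounts to a lookup keyed by the shared simulated pose. A minimal sketch, with poses reduced to integer identifiers and image payloads to strings purely for illustration:

```python
def match_by_pose(static_images, dynamic_images):
    """Pair each static element image with the dynamic element image rendered
    for the same simulated pose.  Images are (pose_id, payload) tuples here;
    in practice the key would be the full simulated pose of the sensor."""
    dynamic_by_pose = {pose: img for pose, img in dynamic_images}
    return [
        (static_img, dynamic_by_pose[pose])
        for pose, static_img in static_images
        if pose in dynamic_by_pose   # poses with no dynamic render are skipped
    ]

pairs = match_by_pose(
    static_images=[(0, "bg_0"), (1, "bg_1"), (2, "bg_2")],
    dynamic_images=[(1, "fg_1"), (0, "fg_0")],
)
```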
In order to improve the sense of reality of the vehicle driving environment in the vehicle test, the virtual traffic dynamic elements and the real traffic static elements can be fused, so as to realize interaction between the simulated traffic dynamic elements and the real traffic static elements and improve the accuracy of the vehicle test.
Further, the plurality of synthesized images are set as a plurality of target fusion images.
It should be noted that the execution processes of steps 401 to 403 and step 407 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In summary, for any one of the plurality of traffic static element images, a simulated traffic dynamic element image matched with the traffic static element image is determined according to the simulated pose information corresponding to the traffic static element image; augmented reality synthesis is performed on the traffic static element image and the matched simulated traffic dynamic element image to obtain a synthesized image; and a plurality of target fusion images are determined according to the synthesized images. In this way, by synthesizing the virtual traffic dynamic elements and the real traffic static elements, interaction between the simulated traffic dynamic elements and the real traffic static elements is realized, the sense of reality of the vehicle running environment in the vehicle test is improved, and the accuracy of the vehicle test is improved.
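The augmented reality synthesis step can be sketched as a simple mask-based composition: pixels where a dynamic element was rendered overwrite the static background. The non-zero-pixel mask rule is an illustrative assumption, not the disclosure's actual compositing method.

```python
import numpy as np

# Minimal AR composition sketch: the rendered dynamic-element image supplies
# a mask (non-zero where an obstacle vehicle was rendered); those pixels
# overwrite the rendered static background to form the target fusion image.
static_img = np.full((4, 4, 3), 0.5)           # rendered static environment
dynamic_img = np.zeros((4, 4, 3))
dynamic_img[1:3, 1:3] = 0.9                    # rendered obstacle vehicle
mask = dynamic_img.sum(axis=-1, keepdims=True) > 0

fused = np.where(mask, dynamic_img, static_img)  # target fusion image
```

In practice the compositing would also account for occlusion and lighting, but the key point illustrated is that static and dynamic pixels coexist in one fused frame.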
In order to clearly illustrate how the above embodiment tests the target vehicle according to the multiple target fusion images to obtain the test result of the target vehicle, the present disclosure proposes another vehicle testing method.
Fig. 5 is a schematic diagram according to a fifth embodiment of the present disclosure.
As shown in fig. 5, the vehicle testing method may include the steps of:
And step 504, carrying out image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images.
And step 505, performing an obstacle perception test on the target vehicle according to the plurality of target fusion images to obtain a perception test result of the target vehicle.
In order to improve the driving safety of the vehicle, the perception of the vehicle for obstacles may be tested. As an example, the target vehicle may be an autonomous vehicle, and a vehicle perception algorithm (e.g., a target detection algorithm) may be applied to the plurality of target fusion images to perform the obstacle perception test, so as to obtain a perception test result of the autonomous vehicle.
Meanwhile, a Planning and Control (PNC) algorithm can be adopted to perform a trajectory planning test on the plurality of target fusion images, so as to obtain a trajectory planning test result of the autonomous vehicle.
It should be noted that, in the present disclosure, the execution sequence of step 505 and step 506 is not specifically limited, and step 505 and step 506 may be executed in parallel or sequentially.
And step 507, generating the test result according to the perception test result and the trajectory planning test result.
Further, the perception test result and the trajectory planning test result are spliced to obtain the test result of the target vehicle.
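The splicing of the two sub-test results can be sketched as assembling one combined result structure. The perception and planning outputs below are placeholders, not the actual perception or PNC algorithms named in the disclosure.

```python
# Sketch of assembling the overall test result from the two sub-tests run
# on the target fusion images. Field names are illustrative.
def run_test(fusion_images):
    perception_result = {"detected_obstacles": len(fusion_images)}  # stand-in
    planning_result = {"planned_trajectories": len(fusion_images)}  # stand-in
    # "Splicing": both sub-results are carried in one combined test result.
    return {"perception": perception_result, "planning": planning_result}

result = run_test(["frame_0", "frame_1", "frame_2"])
```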
It should be noted that the execution processes of steps 501 to 504 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In conclusion, an obstacle perception test is performed on the target vehicle according to the plurality of target fusion images to obtain a perception test result of the target vehicle; a trajectory planning test is performed on the target vehicle according to the plurality of target fusion images to obtain a trajectory planning test result of the target vehicle; and the test result is generated according to the perception test result and the trajectory planning test result.
In order to further improve the driving safety of the vehicle, the present disclosure proposes another vehicle testing method.
Fig. 6 is a schematic diagram according to a sixth embodiment of the present disclosure. In the embodiment of the present disclosure, the test result may be evaluated to generate a test evaluation index, and a test report may be generated according to the test evaluation index, so that relevant personnel may improve the vehicle according to the test report, and the embodiment shown in fig. 6 may include the following steps:
And step 602, determining simulation parameter information of the vehicle-mounted sensor of the target vehicle in the plurality of vehicles according to the parameter information of the vehicle-mounted sensor.
And step 604, carrying out image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images.
And step 606, comparing the test result with the labeling result to obtain a first test evaluation index and a second test evaluation index corresponding to the test result.
The first test evaluation index is used for representing the perception accuracy of the target vehicle for obstacles, and the second test evaluation index is used for representing the trajectory planning accuracy of the target vehicle.
In the embodiment of the disclosure, the perception test result in the test result may be compared with the perception labeling result in the labeling result to determine the difference between the two, and the first test evaluation index may be determined according to that difference. The first test evaluation index is used for representing the perception accuracy of the target vehicle for obstacles, and the difference between the perception test result and the perception labeling result is in a negative correlation with the first test evaluation index; that is, the smaller the difference between the perception test result and the perception labeling result, the higher the first test evaluation index.
Similarly, the trajectory planning test result in the test result may be compared with the trajectory planning labeling result in the labeling result to determine the difference between the two, and the second test evaluation index may be determined according to that difference. The second test evaluation index is used for representing the trajectory planning accuracy of the target vehicle, and the difference between the trajectory planning test result and the trajectory planning labeling result is in a negative correlation with the second test evaluation index; that is, the smaller the difference between the trajectory planning test result and the trajectory planning labeling result, the higher the second test evaluation index.
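The stated negative correlation between the result/labeling difference and the evaluation index can be sketched with one illustrative mapping; the 1/(1+diff) form is an assumption chosen only to satisfy "smaller difference, higher index", not a formula from the disclosure.

```python
# Illustrative evaluation index: strictly decreasing in the absolute
# difference between a test result and its labeling result, with a maximum
# of 1.0 when the two agree exactly.
def evaluation_index(test_value, labeled_value):
    diff = abs(test_value - labeled_value)
    return 1.0 / (1.0 + diff)
```

Any monotonically decreasing mapping of the difference would realize the same negative correlation; this one is merely easy to read.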
Furthermore, a test report can be generated according to the first test evaluation index and the second test evaluation index. Related personnel can determine, from the test report, the obstacle perception accuracy and the trajectory planning accuracy of the vehicle, and improve the vehicle accordingly, thereby improving the driving safety of the vehicle.
It should be noted that the execution processes of steps 601 to 605 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In conclusion, a first test evaluation index and a second test evaluation index corresponding to the test result are obtained by comparing the test result with the labeling result; according to the first test evaluation index and the second test evaluation index, the test report is generated, so that the test evaluation index can be generated by evaluating the test result, and the test report can be generated according to the test evaluation index, so that related personnel can improve the vehicle according to the test report, and the driving safety of the vehicle can be further improved.
In order to clearly illustrate the above embodiments, the description will now be made by way of example.
For example, a vehicle testing method of an embodiment of the present disclosure may include the steps of:
1. As shown in fig. 7, on the basis of the NeRF technology, road environment image information in a certain area (historical image information carrying pose information accumulated in actual road tests) is taken as input, and a NeRF model corresponding to the scene is trained;
2. Based on historical traffic flow data of large-scale actual roads, high-fidelity traffic flow information such as the positions and motions of the host vehicle and obstacle vehicles is generated, and the host vehicle and obstacle vehicles are driven to move in the scene to simulate a real traffic scene;
3. The parameter information of the virtual vehicle-mounted sensor in the vehicle is set according to the intrinsic and extrinsic parameters of the real vehicle-mounted sensor, combined with the physical characteristics of the vehicle-mounted sensor;
4. As shown in fig. 8, the pose data of the sensor is taken as the input of the NeRF model, and a highly photorealistic environment rendering image of the corresponding view angle is generated;
5. 3D rendering is performed according to the traffic flow information generated in step 2 and the pose information of the sensor, so as to render sensor data containing relevant dynamic elements such as vehicles, pedestrians, and traffic lights;
6. AR synthesis is performed on the results generated in step 4 and step 5 to obtain an image containing both static environment elements and dynamic vehicle elements;
7. The result obtained in step 6 is taken as the input of vehicle control algorithms such as automatic driving perception and PNC, and the obstacle perception and trajectory planning of the vehicle are tested;
8. The test result is evaluated to generate a test report.
According to the vehicle testing method, driving simulation is carried out on a plurality of vehicles according to set historical traffic flow information, so that simulated traffic flow information corresponding to the plurality of vehicles is obtained; determining simulation parameter information of an on-board sensor of a target vehicle in the plurality of vehicles according to the parameter information of the on-board sensor; determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and the simulated parameter information; carrying out image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images; the target vehicle is tested according to the plurality of target fusion images to obtain a test result of the target vehicle, so that the plurality of traffic static element images and the plurality of simulated traffic dynamic element images are subjected to image fusion to obtain the plurality of target fusion images, and the target vehicle is tested according to the plurality of target fusion images, so that interaction between the simulated traffic dynamic elements and the real traffic static elements is realized, the sense of reality of the vehicle running environment in the vehicle test is improved, and meanwhile, the simulation degree and the accuracy of the vehicle test are improved.
In order to implement the above embodiments, the present disclosure proposes a vehicle testing device.
Fig. 9 is a schematic diagram according to a seventh embodiment of the present disclosure. As shown in fig. 9, the vehicle testing apparatus 900 includes: a simulation module 910, a first determination module 920, a second determination module 930, a fusion module 940, and a test module 950.
The simulation module 910 is configured to perform driving simulation on a plurality of vehicles according to set historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles; a first determining module 920, configured to determine, according to parameter information of an on-board sensor, simulation parameter information of the on-board sensor of a target vehicle in the plurality of vehicles; a second determining module 930, configured to determine a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and/or the simulated parameter information; a fusion module 940, configured to perform image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images; and the testing module 950, configured to test the target vehicle according to the plurality of target fusion images to obtain a testing result of the target vehicle.
As a possible implementation manner of the embodiment of the present disclosure, the second determining module 930 is configured to: determining a traffic static element image matched with any simulated pose information according to any simulated pose information in the plurality of simulated pose information; and rendering the image by adopting the simulated traffic flow information and the plurality of simulated parameter information to obtain a plurality of simulated traffic dynamic element images.
As a possible implementation manner of the embodiment of the present disclosure, the second determining module 930 is further configured to: and inputting any simulated pose information into the trained traffic static element image generation model to obtain a traffic static element image output by the trained traffic static element image generation model.
As a possible implementation manner of the embodiment of the present disclosure, the traffic static element image generation model is obtained through the following module training: the device comprises an acquisition module, an input module and a training module.
The system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a sample traffic static element image, and the sample traffic static element image is marked with corresponding pose information of a vehicle-mounted sensor; the input module is used for inputting the pose information of the vehicle-mounted sensor marked on the sample traffic static element image into an initial static element image generation model so as to obtain a traffic static element prediction image output by the initial static element image generation model; and the training module is used for training the initial traffic static element image generation model according to the difference between the traffic static element prediction image and the sample traffic static element image.
As a possible implementation manner of the embodiment of the present disclosure, the second determining module 930 is further configured to: and inputting the simulated traffic flow information and the plurality of simulated parameter information into a three-dimensional rendering model so that the three-dimensional rendering model performs three-dimensional rendering on the simulated traffic flow information based on the plurality of simulated pose information in the plurality of simulated parameters to obtain a plurality of simulated traffic dynamic element images which are output by the three-dimensional rendering model and matched with the plurality of simulated pose information.
As a possible implementation manner of the embodiment of the present disclosure, the fusion module 940 is configured to: for any one of the plurality of traffic static element images, determining a simulated traffic dynamic element image matched with the traffic static element image according to the simulated pose information corresponding to the traffic static element image; performing augmented reality synthesis on the traffic static element image and the matched simulated traffic dynamic element image to obtain a synthesized image; and determining the plurality of target fusion images according to the synthesized images.
As a possible implementation manner of the embodiment of the present disclosure, the simulation module 910 is configured to: extracting the driving parameter information of the plurality of vehicles from the historical traffic flow information; and performing driving simulation on the plurality of vehicles according to the driving parameter information to obtain the simulated traffic flow information corresponding to the plurality of vehicles.
As a possible implementation manner of the embodiment of the present disclosure, the driving parameter information includes at least one of the following parameter information: position information, direction information, speed information, acceleration information, and lane information.
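The enumerated driving parameters can be sketched as one per-vehicle record; the field names, types, and units below are illustrative assumptions, since the disclosure only names the information categories.

```python
from dataclasses import dataclass

# Hedged sketch of a per-vehicle driving-parameter record covering the five
# categories the claim enumerates. Field names and units are illustrative.
@dataclass
class DrivingParameters:
    position: tuple       # position information, e.g. (x, y) in meters
    heading: float        # direction information, e.g. heading in radians
    speed: float          # speed information, m/s
    acceleration: float   # acceleration information, m/s^2
    lane_id: int          # lane information

ego = DrivingParameters(position=(0.0, 0.0), heading=0.0,
                        speed=10.0, acceleration=0.5, lane_id=2)
```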
As a possible implementation manner of the embodiment of the present disclosure, the testing module 950 is configured to: performing an obstacle perception test on the target vehicle according to the plurality of target fusion images to obtain a perception test result of the target vehicle; performing a trajectory planning test on the target vehicle according to the plurality of target fusion images to obtain a trajectory planning test result of the target vehicle; and generating the test result according to the perception test result and the trajectory planning test result.
As a possible implementation manner of the embodiment of the present disclosure, the vehicle testing apparatus 900 further includes: the device comprises a comparison module and a generation module.
The comparison module is used for comparing the test result with the labeling result to obtain a first test evaluation index and a second test evaluation index corresponding to the test result, wherein the first test evaluation index is used for representing the perception accuracy of the target vehicle for obstacles, and the second test evaluation index is used for representing the trajectory planning accuracy of the target vehicle; and the generating module is used for generating a test report according to the first test evaluation index and the second test evaluation index.
The vehicle testing device of the embodiment of the disclosure simulates the running of a plurality of vehicles according to the set historical traffic flow information to obtain the simulated traffic flow information corresponding to the plurality of vehicles; determining simulation parameter information of an on-board sensor of a target vehicle in the plurality of vehicles according to the parameter information of the on-board sensor; determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and/or the simulated parameter information; carrying out image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images; according to the target fusion images, the target vehicle is tested to obtain the test result of the target vehicle, therefore, the device can achieve image fusion of the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain the plurality of target fusion images, and further, the target vehicle is tested according to the plurality of target fusion images, interaction between the simulated traffic dynamic elements and the real traffic static elements is achieved, the sense of reality of the vehicle running environment in vehicle testing is improved, and meanwhile, the simulation degree and the accuracy of the vehicle testing are improved.
In order to implement the above embodiments, the present disclosure also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle testing method of the above embodiments.
In order to achieve the above embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the vehicle testing method of the above embodiments.
In order to implement the above embodiments, the present disclosure also proposes a computer program product comprising a computer program which, when executed by a processor, implements the vehicle testing method of the above embodiments.
In the technical scheme of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the personal information of related users are all performed with the consent of the users, comply with the relevant laws and regulations, and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 10 shows a schematic block diagram of an example electronic device 1000 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the apparatus 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the device 1000 can be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
A number of components in device 1000 are connected to I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and a communication unit 1009 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1009 allows the device 1000 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), the Internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that artificial intelligence is the discipline that studies enabling a computer to simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it involves both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning technology, big data processing technology, and knowledge graph technology.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (23)
1. A vehicle testing method, comprising:
according to set historical traffic flow information, driving simulation is carried out on a plurality of vehicles so as to obtain simulated traffic flow information corresponding to the vehicles;
determining simulation parameter information of an on-board sensor of a target vehicle in the plurality of vehicles according to the parameter information of the on-board sensor;
determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to simulated traffic flow information and/or the simulated parameter information;
performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images;
and testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
2. The method according to claim 1, wherein the simulation parameter information includes a plurality of simulation pose information of the vehicle-mounted sensor, and the determining a plurality of traffic static element images and a plurality of simulation traffic dynamic element images according to simulation traffic flow information and/or the simulation parameter information includes:
determining a traffic static element image matched with any simulated pose information according to any simulated pose information in the plurality of simulated pose information;
and rendering images by adopting the simulated traffic flow information and the plurality of simulated parameter information to obtain the plurality of simulated traffic dynamic element images.
3. The method of claim 2, wherein the determining, from any of the plurality of simulated pose information, a traffic static element image that matches the any of the simulated pose information comprises:
and inputting any simulation pose information into a trained traffic static element image generation model to obtain a traffic static element image output by the trained traffic static element image generation model.
4. The method of claim 3, wherein the traffic static element image generation model is trained by:
acquiring a sample traffic static element image, wherein the sample traffic static element image is labelled with pose information of a vehicle-mounted sensor;
inputting the pose information of the vehicle-mounted sensor labelled on the sample traffic static element image into an initial traffic static element image generation model to obtain a traffic static element prediction image output by the initial traffic static element image generation model;
and training the initial traffic static element image generation model according to the difference between the traffic static element prediction image and the sample traffic static element image.
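The claim-4 training loop can be mimicked with a toy generator: a linear map from a labelled pose vector to a flat "image", updated by gradient descent on the prediction-versus-sample difference (here, mean squared error). The linear model, image size, and learning rate are all placeholders; the patent does not specify the model architecture or the loss:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 16))   # toy generator: pose (4-dim) -> 16-pixel image
pose = rng.normal(size=(1, 4))            # pose information labelled on the sample image
sample_img = rng.normal(size=(1, 16))     # sample traffic static element image

def train_step(W, lr=0.1):
    pred = pose @ W                                    # predicted static-element image
    grad = pose.T @ (pred - sample_img) / pred.size    # gradient of the MSE difference
    return W - lr * grad, float(((pred - sample_img) ** 2).mean())

losses = []
for _ in range(50):
    W, loss = train_step(W)
    losses.append(loss)
```

The loop exercises exactly the claimed sequence: feed the labelled pose in, compare the predicted image to the sample image, and update the model from the difference.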
5. The method of claim 2, wherein the performing image rendering using the simulated traffic flow information and the plurality of pieces of simulation parameter information to obtain the plurality of simulated traffic dynamic element images comprises:
inputting the simulated traffic flow information and the plurality of pieces of simulation parameter information into a three-dimensional rendering model, so that the three-dimensional rendering model performs three-dimensional rendering of the simulated traffic flow information based on the plurality of pieces of simulated pose information in the simulation parameter information, and obtaining the plurality of simulated traffic dynamic element images output by the three-dimensional rendering model and matching the plurality of pieces of simulated pose information.
6. The method of claim 2, wherein the performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images comprises:
for any traffic static element image among the plurality of traffic static element images, determining, according to the simulated pose information corresponding to that traffic static element image, a simulated traffic dynamic element image matching that traffic static element image;
performing augmented reality synthesis on that traffic static element image and its matching simulated traffic dynamic element image to obtain a synthesized image;
and determining the plurality of target fusion images according to the synthesized images.
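Claim 6's "augmented reality synthesis" can be illustrated as mask-based compositing: wherever the pose-matched dynamic-element layer has coverage, it replaces the static background. The per-pixel alpha mask and the blending formula are assumptions for illustration, not the patent's stated mechanism:

```python
import numpy as np

def compose(static_img, dynamic_img, alpha):
    """Per-pixel blend: alpha = 1 keeps the dynamic element, alpha = 0 the background."""
    return alpha * dynamic_img + (1.0 - alpha) * static_img

static = np.full((2, 2), 10.0)              # background scene: road, buildings, ...
dynamic = np.full((2, 2), 200.0)            # rendered traffic participants
mask = np.array([[1.0, 0.0], [0.0, 0.0]])   # only the top-left pixel holds a vehicle

fused = compose(static, dynamic, mask)      # one "target fusion image"
```

Because both layers were generated for the same simulated sensor pose, the overlay is geometrically consistent by construction, which is what the pose-matching step in the claim guarantees.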
7. The method according to claim 1, wherein the performing driving simulation on a plurality of vehicles according to the set historical traffic flow information to obtain the simulated traffic flow information corresponding to the plurality of vehicles comprises:
extracting driving parameter information of the plurality of vehicles from the historical traffic flow information;
and performing driving simulation on the plurality of vehicles according to the driving parameter information to obtain the simulated traffic flow information corresponding to the plurality of vehicles.
8. The method of claim 7, wherein the driving parameter information comprises at least one of:
position information, direction information, speed information, acceleration information, and lane information.
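The claim-8 parameter set maps naturally onto a small record type; the field names and units below are illustrative choices, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class DrivingParams:
    """One vehicle's driving parameter information extracted from the traffic flow."""
    position: tuple[float, float]   # position information (x, y in metres)
    heading_deg: float              # direction information
    speed_mps: float                # speed information
    accel_mps2: float               # acceleration information
    lane_id: str                    # lane information

p = DrivingParams(position=(12.5, -3.0), heading_deg=90.0,
                  speed_mps=8.3, accel_mps2=0.5, lane_id="L2")
```

One such record per vehicle per time step is enough input for the claim-7 driving simulation to replay or extrapolate the historical flow.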
9. The method of claim 1, wherein the testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle comprises:
performing an obstacle perception test on the target vehicle according to the plurality of target fusion images to obtain a perception test result of the target vehicle;
performing a trajectory planning test on the target vehicle according to the plurality of target fusion images to obtain a trajectory planning test result of the target vehicle;
and generating the test result according to the perception test result and the trajectory planning test result.
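Claim 9's two-track test can be sketched as a perception check and a planning check run per fused frame, then merged into one result. The pass criteria used here (a detection flag, a 0.5 m lateral-error threshold) are invented for illustration:

```python
def perception_test(frames):
    # Fraction of fused frames in which the obstacle was perceived.
    return sum(1 for f in frames if f["obstacle_detected"]) / len(frames)

def planning_test(frames):
    # Fraction of frames whose planned path stayed within a lateral-error bound.
    return sum(1 for f in frames if abs(f["lateral_error_m"]) < 0.5) / len(frames)

def vehicle_test(frames):
    # The combined test result is generated from both sub-results, as in claim 9.
    return {"perception_rate": perception_test(frames),
            "planning_rate": planning_test(frames)}

frames = [{"obstacle_detected": True,  "lateral_error_m": 0.1},
          {"obstacle_detected": True,  "lateral_error_m": 0.8},
          {"obstacle_detected": False, "lateral_error_m": 0.2},
          {"obstacle_detected": True,  "lateral_error_m": 0.3}]
result = vehicle_test(frames)
```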
10. The method according to any one of claims 1-9, wherein the method further comprises:
comparing the test result with a labelling result to obtain a first test evaluation index and a second test evaluation index corresponding to the test result, wherein the first test evaluation index is used to represent the accuracy of the target vehicle's perception of obstacles, and the second test evaluation index is used to represent the accuracy of the target vehicle's trajectory planning;
and generating a test report according to the first test evaluation index and the second test evaluation index.
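For claim 10, a concrete (and purely illustrative) choice of the two evaluation indices would be detection accuracy against the labelled flags for perception, and average displacement error against the labelled path for trajectory planning; the patent names the indices but not the metrics:

```python
def perception_index(predicted, labelled):
    # First index: fraction of frames where detection matches the labelling result.
    hits = sum(p == l for p, l in zip(predicted, labelled))
    return hits / len(labelled)

def trajectory_index(planned, labelled):
    # Second index: average displacement error between planned and labelled points
    # (lower means more accurate trajectory planning).
    dists = [((px - lx) ** 2 + (py - ly) ** 2) ** 0.5
             for (px, py), (lx, ly) in zip(planned, labelled)]
    return sum(dists) / len(dists)

report = {
    "perception_accuracy": perception_index([1, 1, 0, 1], [1, 1, 1, 1]),
    "trajectory_ade_m": trajectory_index([(0, 0), (1, 1)], [(0, 0), (1, 0)]),
}
```

Both values comparing a test result against a labelled ground truth is the whole content of the claim; the specific metrics are swappable.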
11. A vehicle testing apparatus comprising:
a simulation module configured to perform driving simulation on a plurality of vehicles according to set historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles;
a first determining module configured to determine, according to parameter information of an on-board sensor of a target vehicle among the plurality of vehicles, simulation parameter information of the on-board sensor;
a second determining module configured to determine a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and/or the simulation parameter information;
a fusion module configured to perform image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images;
and a testing module configured to test the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
12. The apparatus of claim 11, wherein the second determining module is configured to:
determine, for any piece of simulated pose information among the plurality of pieces of simulated pose information, a traffic static element image matching that piece of simulated pose information;
and perform image rendering using the simulated traffic flow information and the plurality of pieces of simulation parameter information to obtain the plurality of simulated traffic dynamic element images.
13. The apparatus of claim 12, wherein the second determining module is further configured to:
input the piece of simulated pose information into a trained traffic static element image generation model to obtain the traffic static element image output by the trained traffic static element image generation model.
14. The apparatus of claim 13, wherein the traffic static element image generation model is trained by the following modules:
an acquisition module configured to acquire a sample traffic static element image, wherein the sample traffic static element image carries pose information of a vehicle-mounted sensor;
an input module configured to input the pose information of the vehicle-mounted sensor carried on the sample traffic static element image into an initial traffic static element image generation model to obtain a traffic static element prediction image output by the initial traffic static element image generation model;
and a training module configured to train the initial traffic static element image generation model according to the difference between the traffic static element prediction image and the sample traffic static element image.
15. The apparatus of claim 12, wherein the second determining module is further configured to:
input the simulated traffic flow information and the plurality of pieces of simulation parameter information into a three-dimensional rendering model, so that the three-dimensional rendering model performs three-dimensional rendering of the simulated traffic flow information based on the plurality of pieces of simulated pose information in the simulation parameter information, and obtain the plurality of simulated traffic dynamic element images output by the three-dimensional rendering model and matching the plurality of pieces of simulated pose information.
16. The apparatus of claim 12, wherein the fusion module is configured to:
for any traffic static element image among the plurality of traffic static element images, determine, according to the simulated pose information corresponding to that traffic static element image, a simulated traffic dynamic element image matching that traffic static element image;
perform augmented reality synthesis on that traffic static element image and its matching simulated traffic dynamic element image to obtain a synthesized image;
and determine the plurality of target fusion images according to the synthesized images.
17. The apparatus of claim 11, wherein the simulation module is configured to:
extract driving parameter information of the plurality of vehicles from the historical traffic flow information;
and perform driving simulation on the plurality of vehicles according to the driving parameter information to obtain the simulated traffic flow information corresponding to the plurality of vehicles.
18. The apparatus of claim 17, wherein the driving parameter information includes at least one of:
position information, direction information, speed information, acceleration information, and lane information.
19. The apparatus of claim 11, wherein the testing module is configured to:
perform an obstacle perception test on the target vehicle according to the plurality of target fusion images to obtain a perception test result of the target vehicle;
perform a trajectory planning test on the target vehicle according to the plurality of target fusion images to obtain a trajectory planning test result of the target vehicle;
and generate the test result according to the perception test result and the trajectory planning test result.
20. The apparatus of any of claims 11-19, wherein the apparatus further comprises:
a comparison module configured to compare the test result with a labelling result to obtain a first test evaluation index and a second test evaluation index corresponding to the test result, wherein the first test evaluation index is used to represent the accuracy of the target vehicle's perception of obstacles, and the second test evaluation index is used to represent the accuracy of the target vehicle's trajectory planning;
and a generating module configured to generate a test report according to the first test evaluation index and the second test evaluation index.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211113822.1A CN115468778B (en) | 2022-09-14 | 2022-09-14 | Vehicle testing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115468778A true CN115468778A (en) | 2022-12-13 |
CN115468778B CN115468778B (en) | 2023-08-15 |
Family
ID=84333890
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211113822.1A Active CN115468778B (en) | 2022-09-14 | 2022-09-14 | Vehicle testing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115468778B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116298088A (en) * | 2022-12-29 | 2023-06-23 | 华世德电子科技(昆山)有限公司 | Test method and system for vehicle nitrogen and oxygen sensor |
WO2024243270A1 (en) * | 2023-05-22 | 2024-11-28 | The Regents Of The University Of Michigan | Automatic annotation and sensor-realistic data generation |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109781431A (en) * | 2018-12-07 | 2019-05-21 | 山东省科学院自动化研究所 | Autonomous driving test method and system based on mixed reality |
CN110160804A (en) * | 2019-05-31 | 2019-08-23 | 中国科学院深圳先进技术研究院 | A kind of test method of automatic driving vehicle, apparatus and system |
CN110263381A (en) * | 2019-05-27 | 2019-09-20 | 南京航空航天大学 | A kind of automatic driving vehicle test emulation scene generating method |
CN112198859A (en) * | 2020-09-07 | 2021-01-08 | 西安交通大学 | Method, system and device for testing automatic driving vehicle in vehicle ring under mixed scene |
WO2022033810A1 (en) * | 2020-08-14 | 2022-02-17 | Zf Friedrichshafen Ag | Computer-implemented method and computer programme product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning an environment scene prediction for an automated driving system, and control device for an automated driving system |
WO2022095023A1 (en) * | 2020-11-09 | 2022-05-12 | 驭势(上海)汽车科技有限公司 | Traffic stream information determination method and apparatus, electronic device and storage medium |
CN114817072A (en) * | 2022-05-31 | 2022-07-29 | 国汽智控(北京)科技有限公司 | Vehicle testing method, device, equipment and storage medium based on virtual scene |
Also Published As
Publication number | Publication date |
---|---|
CN115468778B (en) | 2023-08-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11783590B2 (en) | Method, apparatus, device and medium for classifying driving scenario data | |
CN112965466B (en) | Reduction test method, device, equipment and program product of automatic driving system | |
CN111079619B (en) | Method and apparatus for detecting target object in image | |
JP2023027777A (en) | Method and apparatus for predicting motion track of obstacle, and autonomous vehicle | |
JP2023055697A (en) | Automatic driving test method and apparatus, electronic apparatus and storage medium | |
CN115468778B (en) | Vehicle testing method and device, electronic equipment and storage medium | |
CN112699765B (en) | Method, device, electronic device and storage medium for evaluating visual positioning algorithm | |
CN114186007A (en) | High-precision map generation method and device, electronic equipment and storage medium | |
CN113467875A (en) | Training method, prediction method, device, electronic equipment and automatic driving vehicle | |
US20240262385A1 (en) | Spatio-temporal pose/object database | |
CN115575931A (en) | Calibration method, calibration device, electronic equipment and storage medium | |
CN115082690B (en) | Target recognition method, target recognition model training method and device | |
CN114111813B (en) | High-precision map element updating method and device, electronic equipment and storage medium | |
CN116449807B (en) | Simulation test method and system for automobile control system of Internet of things | |
CN115357500A (en) | Test method, device, equipment and medium for automatic driving system | |
CN114663879B (en) | Target detection method, device, electronic equipment and storage medium | |
CN115657494A (en) | Virtual object simulation method, device, equipment and storage medium | |
CN114596552A (en) | Information processing method, training method, device, equipment, vehicle and medium | |
CN116663329B (en) | Automatic driving simulation test scene generation method, device, equipment and storage medium | |
CN113361379B (en) | Method and device for generating target detection system and detecting target | |
CN116168366B (en) | Point cloud data generation method, model training method, target detection method and device | |
CN117668761A (en) | Training method, device, equipment and storage medium for automatic driving model | |
CN117710456A (en) | Training method and device for positioning and mapping model, electronic equipment and storage medium | |
CN119249894A (en) | Laser radar simulation method, device, equipment and storage medium | |
CN117826631A (en) | Automatic driving simulation test scene data generation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||