CN116543047A - Position estimation self-diagnosis method, device and storage medium for multi-camera system - Google Patents
- Publication number
- CN116543047A (application CN202310478990.9A)
- Authority
- CN
- China
- Prior art keywords
- diagnosis
- position estimation
- information
- static target
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G01C11/02—Picture taking arrangements specially adapted for photogrammetry or photographic surveying
- G01C21/005—Navigation with correlation of navigation data from several sources, e.g. map or contour matching
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/931—Lidar systems specially adapted for anti-collision purposes of land vehicles
- G06V10/25—Determination of region of interest [ROI] or volume of interest [VOI]
- G06V10/26—Segmentation of patterns in the image field; clustering-based techniques; detection of occlusion
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/764—Recognition using classification, e.g. of video objects
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V2201/07—Target detection
- Y02T10/40—Engine management systems
Abstract
The invention belongs to the technical field of vision systems for automatic driving, and in particular relates to a position estimation self-diagnosis method and device for a multi-camera system, and a storage medium. The method comprises: S1, acquiring emergency-action information of the automatic driving system; S2, acquiring the geographic position, navigation information and state information of the host vehicle; S3, starting each diagnosis-related sub-functional module of the multi-camera system; S4, processing the acquired image data and calling models to detect, classify and segment static targets; S5, accumulating overrun results, locating the fault position and disabling the related functions. The purpose of the invention is to stably diagnose the ranging and position estimation performance of an automatic-driving multi-camera system, judge whether that performance exceeds its tolerance, improve the accuracy of fault diagnosis for degraded ranging and position estimation in the multi-camera vision system, and thereby improve the safety of the automatic driving vehicle.
Description
Technical Field
The invention belongs to the technical field of visual perception systems for automatic driving, and in particular relates to a position estimation self-diagnosis method and device for a multi-camera system, and a storage medium.
Background
The safety of an automatic driving system depends on the correct output of each perception sensor. The ranging and target position estimation performance of the cameras in the visual perception system is determined by the cameras' intrinsic and extrinsic parameters. The extrinsic parameters are easily affected by the mounting position, and vehicle vibration during driving can change them, which in turn affects camera ranging. The lens of a camera is bonded to its base with glue or fixed with a structural member; aging of the glue or loosening of the structural member can change the intrinsic parameters and likewise degrade ranging performance. Degraded ranging in the vision system further affects target position estimation and therefore automatic driving safety.
In addition to the vision system, automatic driving vehicles usually carry multiple sensors that back each other up. All of these sensors are mass-produced with the vehicle and undergo gradual aging and parameter drift, resulting in reduced ranging and position estimation performance.
Meeting the safety targets of an automatic driving system depends heavily on accurate output from every sensor, and a vision measurement system composed of cameras, used as a primary sensor, is especially susceptible to parameter changes that lead to inaccurate target position estimates. Vehicles with high-speed automatic driving functions are prone to serious safety accidents when changes in camera intrinsic or extrinsic parameters cause inaccurate position estimation. Automatic driving systems operating in urban areas are likewise prone to inaccurate position and speed estimates for static targets, pedestrians and the like, causing scraping collisions.
Chinese patent CN111007722A discloses a camera self-calibration method based on the appearance and motion information of moving objects: it performs foreground detection on video containing moving objects, coarsely classifies the moving objects, and then estimates three orthogonal vanishing points and the camera parameters. However, estimating camera pose parameters from three orthogonal vanishing points alone easily introduces large errors when feature extraction is inaccurate. Moreover, the camera height in a surveillance scene is fixed, whereas a vehicle camera's height may change as the vehicle moves, producing large errors in the estimated camera parameters. Used as a self-diagnosis system for automatic driving, such a method is prone to false alarms or missed alarms.
Disclosure of Invention
The purpose of the invention is to provide a position estimation self-diagnosis method, device and storage medium for a multi-camera system that can stably diagnose the ranging and position estimation performance of an automatic-driving multi-camera system, judge whether that performance exceeds its tolerance, improve the accuracy of fault diagnosis for degraded ranging and position estimation in the multi-camera vision system, and improve the safety of the automatic driving vehicle.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
In a first aspect, the present application discloses a position estimation self-diagnosis method for a multi-camera system, comprising the following steps:
S1, acquiring emergency-action information of the automatic driving system, calculating and storing a history of emergency-action counts from this information, storing long-term history data of preset static-target positions updated by the position estimation system, and loading the stored static-target history data when needed for measurement and verification;
S2, acquiring the vehicle's geographic position information, navigation information and host-vehicle state information, and judging whether the scene ahead and the host-vehicle state meet the preset multi-camera visual perception diagnosis trigger conditions;
S3, starting each diagnosis-related sub-functional module of the multi-camera system, acquiring the image data, navigation map and positioning information of the multi-camera system, and judging from the navigation map and positioning information whether the diagnosis environment at the current moment meets the requirement, namely a mapped and localized low-speed driving road;
S4, preprocessing the acquired image data, calling models to detect, classify and segment static targets, estimating the position of a preset static target with the position estimation system based on these results, and judging from the multiple estimation results whether the ranging and position estimation errors of the multi-camera system exceed their limits;
and S5, accumulating the overrun results, judging the fault position from them, and disabling the corresponding automatic driving functions according to the fault position.
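The S1 to S5 flow above can be condensed into a minimal control loop. The sketch below is illustrative only: the state and threshold names are assumptions, not taken from the patent, and the real system gates on many more signals.

```python
def run_diagnosis_cycle(state, thresholds):
    """One pass of the S1-S5 self-diagnosis loop (illustrative sketch).

    `state` bundles the inputs named in S1-S3; `thresholds` the limits in S4-S5.
    Returns the accumulated overrun count and whether functions were disabled.
    """
    # S2/S3: scene and ego-state gating before any diagnosis runs
    if not (state["speed"] < thresholds["max_speed"]
            and state["target_distance"] < thresholds["max_target_dist"]
            and state["vehicle_ok"]):
        return state["overruns"], False          # trigger conditions not met
    # S4: compare the position-estimation error against its tolerance
    if state["position_error"] > thresholds["max_error"]:
        state["overruns"] += 1                   # S5: accumulate overruns
    # S5: disable automated functions once overruns pass the fault limit
    disabled = state["overruns"] >= thresholds["fault_count"]
    return state["overruns"], disabled
```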
With reference to the first aspect, as an optional implementation manner, the method further includes,
loading the stored history of emergency-action counts, summing the acquired emergency-action counts, and comparing the sum against a count threshold within a preset duration;
when the sum of emergency actions is greater than the threshold, starting the visual ranging diagnosis system in advance; when the sum is less than the threshold, continuing to acquire the emergency-action information of the automatic driving system;
and loading the stored history of estimation results for the preset static target in the selected coordinate system to verify the next estimation result.
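The emergency-action gating described above can be sketched as follows. Windowing over the "preset duration" is approximated here by keeping only the most recent `window` counts, which is an assumption; the patent does not specify how the duration is enforced.

```python
def should_start_diagnosis(stored_counts, new_count, window, threshold):
    """Decide whether to start visual-ranging diagnosis early.

    Sums emergency-action counts over the most recent `window` entries
    and compares against the preset count threshold.
    """
    counts = (stored_counts + [new_count])[-window:]
    return sum(counts) > threshold
```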
With reference to the first aspect, as an optional implementation manner, the method further includes,
when the sum of emergency-action counts is less than the threshold, judging whether the visual ranging diagnosis system has received a shutdown request from the vehicle; when the shutdown request is received, storing the sum of emergency-action counts and the position estimation result of the preset static target; when no shutdown request is received, continuing to acquire the emergency-action information of the automatic driving system.
With reference to the first aspect, as an optional implementation manner, the method further includes,
when judging whether the scene ahead and the host-vehicle state meet the preset visual perception diagnosis trigger conditions, judging from the map, the positioning, and the vehicle's navigation path or driving direction whether an easily diagnosable static target exists ahead;
when an easily diagnosable static target exists, its distance to the vehicle is less than a threshold, the vehicle speed is less than a threshold, and the vehicle state is normal, the preset visual perception diagnosis condition is triggered; when no such target exists, or the target distance exceeds the threshold, or the vehicle speed exceeds the threshold, or the vehicle state is abnormal, the geographic position information, navigation information and host-vehicle state information continue to be acquired until the visual perception diagnosis conditions are met.
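A hedged sketch of the trigger condition in this paragraph; the threshold values are placeholders, since the patent does not specify concrete numbers.

```python
def diagnosis_triggered(target_ahead, target_distance, ego_speed, vehicle_ok,
                        max_distance=80.0, max_speed=8.0):
    """Preset visual-perception diagnosis trigger: an easily diagnosable
    static target ahead, within range, at low speed, with a healthy vehicle.
    `max_distance` (m) and `max_speed` (m/s) are illustrative defaults."""
    return (target_ahead
            and target_distance < max_distance
            and ego_speed < max_speed
            and vehicle_ok)
```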
With reference to the first aspect, as an optional implementation manner, the method further includes,
when judging whether the current-moment diagnosis environment meets the requirements, judging from the vehicle's geographic position and navigation information whether the vehicle has passed or missed the preset static target, whether the vehicle speed is too high, and whether the yaw rate is too high; when the vehicle has neither passed nor missed the preset static target, the vehicle speed is below its threshold, and the yaw rate is below its threshold, the current-moment diagnosis environment is judged to meet the requirements.
With reference to the first aspect, as an optional implementation manner,
the multi-camera system includes, but is not limited to, a single-camera position estimation module, a dual-camera overlap-region position estimation module, and a multi-modal high-precision sensor position estimation module.
With reference to the first aspect, as an optional implementation manner, the method further includes,
when preprocessing the acquired image data, applying noise reduction, image conversion and white balance to the original image to obtain image data of good quality; then calling a target detection network to detect static targets and obtain static-target detection boxes, and running a classification network to judge the category and occlusion state; and calling a semantic or instance segmentation network to segment the preprocessed image, separating the static targets from the background.
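A minimal stand-in for the preprocessing step, using a 3x3 box filter for noise reduction and gray-world scaling for white balance. The patent does not name concrete algorithms, so both choices are assumptions.

```python
import numpy as np

def preprocess(image):
    """Preprocess an HxWx3 uint8 image: box-filter denoising followed by
    gray-world white balance (illustrative stand-ins for the unspecified
    noise-reduction, conversion and white-balance steps)."""
    img = image.astype(np.float64)
    # 3x3 box blur via edge padding and neighbourhood averaging
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= 9.0
    # gray-world white balance: scale each channel toward the global mean
    channel_means = blurred.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    balanced = np.clip(blurred * gains, 0, 255)
    return balanced.astype(np.uint8)
```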
With reference to the first aspect, as an optional implementation manner, the method further includes,
when the target detection network is called to detect static targets: when a preset static target is detected and valid, sending the valid static-target detection-box data, classification data and semantic segmentation feature data to the position estimation system, together with the state information indicating whether the current-moment environment meets the requirements; when no preset static target is detected, or the target is invalid, continuing to acquire the image data, navigation map and positioning information of the multi-camera system.
With reference to the first aspect, as an optional implementation manner, the method further includes,
when estimating the preset static-target position, the diagnosis system estimates it with the single-camera position estimation module, the dual-camera overlap-region position estimation module and the multi-modal high-precision sensor position estimation module, obtaining a corresponding position-error-limit ellipsoid and position estimation state from each;
when every position estimation module reports a completed estimation state, calculating whether the position-error-limit ellipsoids of the three modules intersect, and calculating the size of the intersection;
and when any position estimation module's estimation state is not complete, continuing to acquire the image data, navigation map and positioning information of the multi-camera system.
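Exact intersection of two ellipsoids has no simple closed form, so a practical implementation might use a conservative proxy. The sketch below treats each position-error-limit ellipsoid as axis-aligned and approximates the intersection by the overlap of their bounding boxes; this decision rule is an illustration, not the patent's method.

```python
import numpy as np

def _box_of(center, semi_axes):
    c, a = np.asarray(center, float), np.asarray(semi_axes, float)
    return c - a, c + a

def overlap_volume(e1, e2):
    """Approximate the intersection of two axis-aligned error-limit
    ellipsoids (each given as (center, semi_axes)) by the volume of their
    bounding-box overlap.  A positive volume is necessary but not
    sufficient for the ellipsoids themselves to intersect."""
    lo1, hi1 = _box_of(*e1)
    lo2, hi2 = _box_of(*e2)
    extent = np.minimum(hi1, hi2) - np.maximum(lo1, lo2)
    if np.any(extent <= 0):
        return 0.0                      # disjoint boxes: no intersection
    return float(np.prod(extent))

def error_within_limits(estimates):
    """Errors are judged in-limit when every pair of module ellipsoids
    overlaps (illustrative decision rule for the three-module check)."""
    return all(overlap_volume(estimates[i], estimates[j]) > 0
               for i in range(len(estimates))
               for j in range(i + 1, len(estimates)))
```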
With reference to the first aspect, as an optional implementation manner, the method further includes,
judging whether the ranging and position estimation errors of the multi-camera system exceed their limits according to whether the position-error-limit ellipsoids of the single-camera, dual-camera overlap-region and multi-modal high-precision sensor position estimation modules intersect;
when the error exceeds the limit, accumulating and storing the overrun count; when it does not, continuing to acquire the emergency-action information of the automatic driving system.
With reference to the first aspect, as an optional implementation manner, the method further includes,
when estimating with the single-camera position estimation module, the diagnosis system starts and initializes the module, then judges whether the diagnosis environment meets the conditions according to the validated static-target detection-box data, classification data and segmentation feature data together with the received diagnosis-environment state information;
when the diagnosis environment meets the conditions, computing the preset target position in the measurement coordinate system at the current moment from the camera parameters by a monocular ranging method, removing outliers to obtain a low-dispersion estimate of the static-target position, and storing it as long-term history data; when the diagnosis environment does not meet the conditions, discarding the calculation result and clearing the diagnosis cache data.
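One standard monocular ranging model that fits this step is the flat-ground pinhole formula, where the distance to a target's ground contact point follows from the camera's focal length, principal point and mounting height. The patent does not specify its formula, so this is an assumption.

```python
def monocular_ground_distance(v_bottom, f_pixels, cy, camera_height):
    """Flat-ground pinhole range to a static target whose ground contact
    point appears at image row `v_bottom`:

        Z = f * H / (v_bottom - cy)

    Assumes a level road, a forward-facing camera at known height H (m),
    focal length f (px) and principal-point row cy (px)."""
    dv = v_bottom - cy
    if dv <= 0:
        raise ValueError("target bottom must lie below the principal point")
    return f_pixels * camera_height / dv
```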
With reference to the first aspect, as an optional implementation manner, the method further includes,
and verifying the low-dispersion static-target position estimated by the monocular method against the long-term history data: computing the offset distances between the preset static-target position obtained by the position estimation module for the current frame and the long-term historical positions in the same coordinate system from the last n detections, yielding n offset distances; when at least m of these distances do not exceed a set constraint distance, the static-target position obtained by the single-camera position estimation module at the current moment is judged to pass the long-term history verification;
when the verification passes, computing the static-target position-error-limit ellipsoid of the single-camera position estimation module, setting the module's estimation state to complete, and sending the error-limit ellipsoid information and an estimation-state-complete signal; when the verification fails, discarding the calculation result.
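The n-offset / m-within-bound verification can be sketched directly; n, m and the constraint distance are calibration parameters the patent leaves open.

```python
import math

def passes_history_check(current_pos, history, max_offset, m_required):
    """Verify a new static-target position against the last n stored
    positions: it passes when at least `m_required` of the n offset
    distances stay within `max_offset` (all units as stored)."""
    offsets = [math.dist(current_pos, past) for past in history]
    return sum(d <= max_offset for d in offsets) >= m_required
```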
With reference to the first aspect, as an optional implementation manner, the method further includes,
when estimating with the dual-camera overlap-region position estimation module, the diagnosis system starts and initializes the module, and judges whether the diagnosis environment meets the conditions according to the validated static-target detection-box data, classification data and segmentation feature data together with the received diagnosis-environment state information;
when the diagnosis environment meets the conditions, obtaining the static target's depth, category, edges and texture from the detection and classification networks, defining a region of interest centered on the center point of the preset static-target detection box, and using this region to match the static targets between the two cameras; after the same static target in the dual-camera overlap region is successfully matched, performing ranging and position estimation by the binocular ranging method to obtain the current-frame position in the coordinate system, removing outliers to obtain a low-dispersion estimate of the static-target position, and storing it as long-term history data; when the diagnosis environment does not meet the conditions, discarding the calculation result and clearing the diagnosis cache data.
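Once the same static target is matched in both cameras, the classic rectified-stereo relation gives its depth from the disparity. The helper below assumes rectified cameras and upstream matching (the ROI, depth, class, edge and texture cues described above); the variable names are illustrative.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Binocular depth for a matched static target in the dual-camera
    overlap region:  Z = f * B / d  (rectified cameras assumed)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px
```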
With reference to the first aspect, as an optional implementation manner, the method further includes,
verifying the long-term history data of the low-dispersion static-target position estimated by the dual-camera overlap-region binocular ranging method; when the verification passes, computing the static-target position-error-limit ellipsoid of the dual-camera overlap-region position estimation module, setting the module's estimation state to complete, and sending the error-limit ellipsoid information and an estimation-state-complete signal; when the verification fails, discarding the calculation result.
With reference to the first aspect, as an optional implementation manner, the method further includes,
when estimating with the multi-modal high-precision sensor position estimation module, the diagnosis system starts and initializes the module, and judges whether the diagnosis environment meets the conditions according to the validated static-target detection-box data, classification data and segmentation feature data together with the received diagnosis-environment state information;
when the diagnosis environment meets the conditions, judging whether the received multi-modal high-precision sensor perception information is valid, computing from the valid perception information the preset static-target position in the measurement coordinate system at the current moment, removing outliers to obtain the module's low-dispersion estimate of the static-target position, and storing it as long-term history data; when the diagnosis environment does not meet the conditions, discarding the calculation result and clearing the diagnosis cache data.
With reference to the first aspect, as an optional implementation manner, the method further includes,
verifying the long-term history data of the low-dispersion static-target position estimated by the multi-modal high-precision sensor position estimation module; when the verification passes, computing the module's static-target position-error-limit ellipsoid, setting the module's estimation state to complete, and sending the error-limit ellipsoid information and an estimation-state-complete signal; when the verification fails, discarding the calculation result.
In a second aspect, the present application also discloses a position estimation self-diagnosis device for a multi-camera system, the self-diagnosis device comprising:
an information reading and storage module, used to store and retrieve history data, acquire the emergency-action information of the automatic driving system, and store the sum of emergency-action counts derived from it; the long-term history data of the preset static target updated by the position estimation system is stored in a map data storage area, and the history data is read for verification during estimation;
the judging module is used for acquiring the geographic position information, the navigation information and the state information of the vehicle and judging whether the front scene and the state of the vehicle meet the preset visual perception diagnosis triggering conditions or not;
the diagnosis association sub-function starting module is used for starting each sub-function module of the diagnosis system, acquiring image data information, a navigation map and positioning information of the multi-camera system, acquiring a static target which is easy to diagnose according to the navigation map and the positioning information, judging whether the current time diagnosis environment meets the requirement according to the static target, and starting each position estimation module when the current time diagnosis environment meets the requirement, wherein the current time diagnosis environment has a map and a positioned low-speed driving road;
The estimated data processing module is used for preprocessing the acquired image data information; invoking detection, classification and segmentation functions to obtain target detection, classification and segmentation information, and sending the target information to each position estimation module; each module in the position estimation system adopts different methods to estimate the preset static target position; judging whether the distance measurement and position estimation errors of the multi-camera system exceed the limits according to the estimation results;
and the execution module is used for accumulating the overrun result, judging the fault position according to the overrun result, and performing function prohibition on automatic driving according to the fault position.
With reference to the second aspect, as an optional implementation manner, the apparatus further includes,
the single-camera position estimation module is used for calling a target detection network to detect a static target, acquiring effective static target detection frame data, classification data, semantic segmentation feature data and receiving state information of a diagnosis environment when a preset static target is detected and the target is effective, and judging whether the state of the diagnosis environment meets the condition;
when the diagnosis environment state meets the conditions, calculating, by a monocular ranging method using the camera parameters, the position of the preset target in the coordinate system for the current frame, removing abnormal points to obtain the low dispersion position of the static target estimated by the monocular method, storing the long-term historical data and checking the result; when the check passes, transmitting the estimated position, the error information and the estimation completion state information to the multi-camera diagnosis system, and otherwise clearing the calculation cache data and discarding the calculation result; and when the diagnosis environment state does not meet the condition, likewise clearing the calculation cache data and discarding the calculation result.
With reference to the second aspect, as an optional implementation manner, the apparatus further includes,
the double-camera overlapping region position estimation module is used for calling a target detection network to detect a static target, acquiring effective static target detection frame data, classification data, semantic segmentation feature data and receiving state information of a diagnosis environment when a preset static target is detected and the target is effective, and judging whether the state of the diagnosis environment meets the condition;
when the state of the diagnosis environment meets the conditions, the depth of the static target, the target type and the target edges and textures are obtained using the detection and classification networks; a region of interest is defined centered on the center point of the preset static target detection frame, and the static targets in the two cameras are matched using the region of interest; after the same static target in the double-camera overlapping area is successfully matched, ranging and position estimation are performed by a binocular ranging method to obtain the position of the static target in the coordinate system for the current frame; abnormal points are screened out and removed to obtain the low dispersion position of the static target estimated for the double-camera overlapping area; the estimation result is checked against the long-term historical data of the preset static target, and when the check passes, the estimated position, the error information and the estimation completion state information are sent to the multi-camera diagnosis system, otherwise the calculation cache data is cleared and the calculation result is discarded; and when the diagnosis environment state does not meet the condition, the calculation cache data is likewise cleared and the calculation result is discarded.
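The binocular ranging step for a target matched in the overlap region can be sketched as follows; the function name is illustrative, and rectified camera images with a known baseline are assumed (the patent does not state the exact ranging formula):

```python
def stereo_depth_m(focal_px, baseline_m, x_left_px, x_right_px):
    """Rectified binocular ranging: depth Z = f * B / d, where the disparity
    d is the horizontal pixel offset of the matched detection-frame center
    between the two overlapping cameras."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("matched target must have positive disparity")
    return focal_px * baseline_m / disparity
```

For example, with a 1000-pixel focal length, a 0.5 m baseline and a 25-pixel disparity, the estimated depth is 20 m.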
With reference to the second aspect, as an optional implementation manner, the apparatus further includes,
the multi-mode high-precision sensor position estimation module is used for receiving detection and measurement information of the rest high-precision sensors in the current whole vehicle system on a preset static target, receiving state information of a diagnosis environment and judging whether the state of the diagnosis environment meets the condition;
when the diagnosis environment state meets the condition, judging whether the received perception information of the multi-mode high-precision sensor is valid; calculating, from the valid perception information of the multi-mode high-precision sensor, the position of the preset static target in the coordinate system for the current frame, removing abnormal points to obtain the low dispersion position of the static target estimated by the multi-mode high-precision sensor position estimation module, storing the long-term historical data, and checking the estimation result against the previous long-term historical data; when the check passes, transmitting the estimated position, the error information and the estimation completion state information to the diagnosis system, and otherwise clearing the calculation cache data and discarding the calculation result; when the diagnosis environment state does not meet the condition, the calculation cache data is likewise cleared and the calculation result is discarded.
In a third aspect, the present application also discloses a computer readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to perform the method described above.
The invention adopting the technical scheme has the following advantages:
1. through the acquired emergency action information of automatic driving, the vehicle geographic position information, the navigation information and the vehicle state information, it can be judged whether the diagnosis environment has a mapped and positioned low-speed driving road, so that the ranging performance and position estimation performance of the automatic driving multi-camera system can be stably diagnosed in that environment and it can be judged whether they exceed the tolerance; this makes it possible to accurately determine whether the ranging and position estimation performance of the multi-camera system has degraded, improves the accuracy of fault diagnosis of ranging performance reduction and position estimation performance reduction of the multi-camera vision system, and improves the safety of the automatic driving vehicle;
2. static targets of known size and classification are ranged by binocular vision and monocular vision methods and in the field-of-view overlapping area, so that when the vehicle passes through the diagnosis area, whether the ranging performance of the multi-camera system has degraded can be determined quickly and effectively;
3. By adopting the multi-mode high-precision sensor and combining with binocular vision, monocular vision and measurement of the overlapped area of the visual fields, the fault diagnosis accuracy of the multi-camera vision system with reduced ranging performance and reduced position estimation performance can be further improved;
4. the position estimation self-diagnosis of the multi-camera system can remind the driver to maintain and calibrate the vision system and the other sensing systems in time, thereby protecting the driver and the automatic driving vehicle and improving the safety of the automatic driving vehicle.
Drawings
The present application may be further illustrated by the non-limiting examples given in the accompanying drawings. It is to be understood that the following drawings illustrate only certain embodiments of the present application and are therefore not to be considered limiting of its scope, since a person of ordinary skill in the art may derive other relevant drawings from these drawings without inventive effort;
FIG. 1 is a schematic flow chart of a self-diagnosis method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a single camera position estimation module in the self-diagnosis method according to the embodiment of the present application;
FIG. 3 is one of schematic distributions of the overlapping regions of the fields of view in the 3D space used by the two-camera overlapping region position estimation module in the self-diagnosis method according to the embodiment of the present application;
FIG. 4 is a second schematic diagram of a distribution of a field of view overlapping region in a 3D space used by a dual camera overlapping region position estimation module in the self-diagnosis method according to the embodiment of the present application;
FIG. 5 is a schematic flow chart of a dual-camera overlay region position estimation module in a self-diagnosis method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of feature matching in a dual camera overlay region position estimation module in a self-diagnosis method according to an embodiment of the present disclosure;
FIG. 7 is a schematic flow chart of a multi-mode high-precision sensor position estimation module in the self-diagnosis method according to the embodiment of the present application;
FIG. 8 is a second flow chart of a self-diagnosis method according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a field of view overlap region of a self-diagnostic method provided in an embodiment of the present application;
fig. 10 is a block diagram of a self-diagnosis apparatus provided in an embodiment of the present application;
the main reference numerals are as follows:
the front view camera 10, the side rear camera 20, the side front camera 30, the rear view camera 40, the front view and side front double-shot overlapping region 50, the side front and side rear double-shot overlapping region 60, the rear view and side rear double-shot overlapping region 70, the self-diagnosis device 200, the information reading and storing module 210, the judging module 220, the diagnosis association sub-function starting module 230, the estimation processing module 240, and the executing module 250.
Detailed Description
The present application will be described in detail below with reference to the drawings and the specific embodiments, and it should be noted that in the drawings or the description of the specification, similar or identical parts use the same reference numerals, and implementations not shown or described in the drawings are in a form known to those of ordinary skill in the art. In the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, an embodiment of the present application discloses a method for self-diagnosis of position estimation of a multi-camera system, comprising the following steps,
s1, emergency action information of automatic driving is obtained, history information of emergency action times is calculated and stored according to the emergency action information, long-term history data of a preset static target position updated by a position estimation system is stored, and the stored history data of the static target is loaded timely for measurement and verification;
s2, acquiring geographic position information, navigation information and host vehicle state information of the vehicle, and judging whether a front scene and the host vehicle state meet preset multi-camera visual perception diagnosis triggering conditions or not;
S3, starting each sub-functional module related to the diagnosis of the multi-camera system, acquiring image data information, a navigation map and positioning information of the multi-camera system, and judging whether the diagnosis environment at the current moment meets the requirement or not according to the navigation map and the positioning information, wherein the diagnosis environment at the current moment has a map and a positioned low-speed driving road;
s4, preprocessing the acquired image data information, calling a model to detect, classify and divide a static target, estimating the position of the preset static target by using a position estimation system according to the result, and judging whether the range finding and position estimation errors of the multi-camera system are out of limit or not by using a plurality of estimation results;
and S5, accumulating the overrun results, judging the fault position according to the overrun results, and performing function prohibition on automatic driving according to the fault position.
Based on the above embodiment, by acquiring the emergency action information, the geographic position information and the navigation information of the vehicle and the state information of the vehicle, whether the diagnosis environment has a map and a positioned low-speed driving road can be judged, so that the ranging performance and the position estimation performance of the multi-camera system for automatic driving can be stably diagnosed according to the diagnosis environment, whether the ranging performance and the position estimation performance exceed the tolerance can be judged, whether the ranging performance and the position estimation performance of the multi-camera system are reduced can be judged, the effective autonomous diagnosis of the multi-camera system is facilitated, the automatic driving function can be forbidden according to the result of the self-diagnosis, and the safety of the automatic driving is facilitated to be improved.
In this embodiment, the camera is not directly parameter-estimated during the diagnosis process, but the known static target is used as a measuring scale, and by measuring and estimating the position of the preset static target, it is indirectly determined whether an important change affecting the safety of automatic driving occurs in the visual ranging and position estimating system (including the camera parameters).
In this embodiment, the sub-functional modules include, but are not limited to, preprocessing, detection, classification, segmentation, ranging/position estimation modules.
In this embodiment, the self-diagnosis method is applied to a low-speed driving scene on a specific road with a map and positioning, so as to obtain a more accurate diagnosis result; in scenes other than the set scene, the self-diagnosis method is in a closed state.
As an alternative embodiment, the method may further comprise,
in step 110, the history data of the emergency action times is loaded and stored, wherein the history data of the emergency action and the history data of the preset static target position estimation are included.
According to the history data of emergency actions and the acquired recently added emergency actions, the sum of emergency actions newly added within the preset duration is calculated and compared with the count threshold T1;
when the sum of emergency actions is greater than the threshold T1, the distance measurement diagnosis system is started in advance; when the sum of emergency actions is smaller than the threshold T1, emergency action information of automatic driving continues to be acquired.
And loading stored historical data of the estimation result of the preset static target under the selected coordinate system, and verifying the next estimation result.
In this embodiment, the emergency action number information includes emergency obstacle avoidance and emergency braking information of the automatic driving system and active emergency takeover information of the automatic driving user (driver or safety officer). The sum of emergency actions is calculated from this information and compared with the count threshold T1 set for the preset duration. When the sum of emergency actions within the preset duration is greater than the threshold T1, each functional module in the distance measurement diagnosis system is started in advance, so that the diagnosis system starts quickly; when the sum of emergency actions within the preset duration is smaller than the threshold, emergency action information of automatic driving continues to be acquired and the emergency action count is recalculated.
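The emergency-action gating of step 110 can be sketched as follows; the event names, call signature and all numeric values are illustrative assumptions, not values taken from the patent:

```python
# Hypothetical event categories corresponding to the emergency action
# information named in the text (obstacle avoidance, braking, user takeover).
EMERGENCY_EVENTS = {"emergency_braking", "emergency_avoidance", "driver_takeover"}

def should_prestart_diagnosis(events, window_s, t1, now):
    """Sum the emergency actions falling inside the preset duration and
    compare the sum with the count threshold T1: the ranging diagnosis
    system is pre-started only when the sum exceeds T1.
    events is a list of (name, timestamp_s) pairs."""
    recent = [name for name, t in events
              if name in EMERGENCY_EVENTS and now - t <= window_s]
    return len(recent) > t1
```

When the sum stays at or below T1, the caller simply keeps accumulating emergency action information, as the text describes.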
As an alternative embodiment, the method may further comprise,
In step 110, when the sum of emergency actions is smaller than the threshold, it is judged whether the whole vehicle has sent a closing request to the visual ranging diagnosis system; when the closing request is received, the sum of emergency actions and the position estimation result of the preset static target are stored; when the closing request is not received, emergency action information of automatic driving continues to be acquired.
It can be understood that when a closing request sent by the whole vehicle is received, the sum of emergency braking information, emergency obstacle avoidance information and active taking over time information of a user of the automatic driving system is stored; and writing long-term history data of the preset static target into a map data storage area, and saving the long-term history data in a power-down mode for updating the position data of the static target of the map. The stored historical data of the preset static target position estimation is loaded after being started by the diagnosis system, and the historical data are used for checking the preset static target position estimation result by each estimation module.
It can be understood that the long-term history data of the static target is read by powering on, and the long-term history data of the static target is saved by powering off.
As an alternative embodiment, the method may further comprise,
in step 120, when judging whether the front scene and the vehicle state meet the preset visual perception diagnosis triggering conditions, judging whether a static target easy to diagnose exists in front according to the map and the positioning and the navigation path or the driving direction of the vehicle;
When an easily diagnosed static target exists, the distance between the static target and the vehicle is smaller than the threshold, the vehicle speed is smaller than the threshold, and the vehicle state is normal, the preset visual perception diagnosis condition is triggered; when no easily diagnosed static target exists, or the distance to the static target is greater than the threshold, or the vehicle speed is greater than the threshold, or the vehicle state is abnormal, the geographic position information, navigation information and state information of the vehicle continue to be acquired until the visual perception diagnosis condition is met.
In this embodiment, the easily diagnosed static targets include, but are not limited to, traffic lights or traffic signs of known categories and sizes. According to stationary static target diagnosis, on one hand, stability of a diagnosis result is guaranteed conveniently, and on the other hand, determination of geographic position of a vehicle, navigation information, distance between the vehicle and a static target and vehicle speed is facilitated, so that whether a scene meets preset visual perception diagnosis triggering conditions is judged.
As an alternative embodiment, the method may further comprise,
in step 120, when judging whether the current diagnosis environment meets the requirement, it is judged according to the geographic position information and the navigation information of the vehicle whether the vehicle has exceeded or missed the preset static target, whether the vehicle speed is too fast and whether the yaw rate is too high; when the vehicle has neither exceeded nor missed the preset static target, the vehicle speed is smaller than the threshold, and the yaw rate is smaller than the threshold, it is judged that the current-moment diagnosis environment meets the requirements.
In this embodiment, by sending a start instruction to the multiple camera system, the multiple camera system receives the image data information, the navigation information and the vehicle geographic position information, and performs comparison calculation according to the navigation information, the vehicle geographic position information and the preset static target, so as to determine whether the vehicle exceeds or misses the preset static target, whether the vehicle speed is too fast, and whether the yaw rate is too high. When the vehicle exceeds or misses a preset static target or the vehicle speed is larger than a threshold value or the yaw rate is too large, judging that the diagnosis environment at the current moment is not in accordance with the requirement; when the vehicle does not exceed or miss the preset static target, the vehicle speed is smaller than the threshold value, and the yaw rate is smaller than the threshold value, judging that the current diagnosis environment meets the requirement, namely, the vehicle runs in the diagnosable environment.
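The diagnosis-environment check just described reduces to a conjunction of conditions; a minimal sketch, in which the function name and the default threshold values are assumptions (the patent does not specify them):

```python
def environment_ok(passed_target, missed_target, speed_mps, yaw_rate_rps,
                   speed_max_mps=8.0, yaw_rate_max_rps=0.1):
    """Current-moment diagnosis-environment check: the vehicle must have
    neither exceeded nor missed the preset static target, and both the
    vehicle speed and the yaw rate must stay below their thresholds."""
    return (not passed_target and not missed_target
            and speed_mps < speed_max_mps
            and abs(yaw_rate_rps) < yaw_rate_max_rps)
```

If any condition fails, the diagnosis environment at the current moment is judged not to meet the requirement, matching the branch structure in the text.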
As an alternative embodiment, the method may further comprise a multi-camera system,
the multi-camera system includes, but is not limited to, a single camera position estimation module, a dual camera crossover region position estimation module, and a multi-modality high precision sensor position estimation module.
In this embodiment, by using the single-camera position estimation module and the double-camera overlapping area position estimation module, parameters of the cameras can be diagnosed, that is, as long as more than two cameras exist in the system and a field of view overlapping area exists between the two cameras, the position estimation of the multi-camera system can be diagnosed based on a static target, and when a multi-mode high-precision sensor exists at the same time, the accuracy of diagnosis can be further improved.
It will be appreciated that multiple estimation results can be obtained from the estimation of the single camera position estimation module, the dual camera crossover region position estimation module, and the multi-modal high accuracy sensor position estimation module.
As an alternative embodiment, the method may further comprise,
in step 130, when preprocessing the acquired image data information, the image data information is preprocessed by, but not limited to, using noise reduction, image conversion and white balance on the original image, so as to obtain good quality image data information; then, a target detection network is called to carry out static target detection, a static target detection frame is obtained, and a classification network is operated to judge the category and the shielding state; and calling a semantic segmentation network or an instance segmentation network to carry out semantic segmentation on the preprocessed image, and segmenting the static target and the background.
It can be understood that the image data information with better effect can be obtained by preprocessing means such as noise reduction, image transformation, white balance and the like, and the detection and classification are convenient. Among them, image transformations include, but are not limited to, scaling, de-distortion, translation, and cropping. In some embodiments, an image processing chip is present within the camera to enable the camera to automatically process the acquired image data information. In some embodiments, no image processing chip is present within the camera, and an image processing chip is present within the multi-camera system, capable of processing the acquired image data information. And calling the target detection network, the classification network, the semantic segmentation network and the instance segmentation network through the preprocessed image data information, so that more stable and accurate static target detection, classification and segmentation results can be obtained.
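As one concrete instance of the preprocessing steps named above, white balance can be sketched with the gray-world assumption; this particular algorithm is an illustrative choice, not the method specified by the patent:

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance: scale each color channel so that its mean
    matches the global mean intensity.  img is an HxWx3 float array."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    return img * (gray / channel_means)
```

After this correction the three channel means are equal, removing a global color cast before the detection, classification and segmentation networks are called.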
As an alternative embodiment, the method may further comprise,
when a target detection network is called to detect a static target, and when the preset static target is detected and the target is effective, effective static target detection frame data, classification data and semantic segmentation feature data are sent to a position estimation system, and state information of judging whether the environment at the current moment meets the requirement is judged; and continuously acquiring image data information, a navigation map and positioning information of the multi-camera system when the preset static target is not detected or the target is invalid.
It can be understood that the confidence level is provided in the invoked target detection and classification network, the confidence level is compared with a set threshold value, and when the confidence level is greater than the threshold value, the preset target can be judged to be detected and is valid; when the confidence coefficient is smaller than the threshold value, namely, when the preset static target is not detected or the target is invalid, continuously acquiring the image data information, the navigation map and the positioning information of the multi-camera system, and continuously judging whether the preset target is detected or not and whether the preset target is valid or not according to the comparison of the confidence coefficient and the threshold value.
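The confidence gate above can be sketched as a simple filter; the dictionary keys and the default threshold value are assumptions for illustration:

```python
def valid_detections(detections, conf_threshold=0.5):
    """Validity gate: a preset static target counts as detected and valid
    only when the network confidence exceeds the set threshold."""
    return [d for d in detections if d["confidence"] > conf_threshold]
```

Detections below the threshold are treated as "target not detected or invalid", and the system keeps acquiring image data, map and positioning information, as the text describes.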
As an alternative embodiment, the method may further comprise,
In step 140, when estimating the preset static target position, the diagnostic system estimates the preset static target position through a single-camera position estimation module, a double-camera overlapping area position estimation module and a multi-mode high-precision sensor position estimation module to obtain a corresponding position error limit ellipsoid and a position estimation state;
when the estimation state of each position estimation module is completed, calculating whether intersection exists in position error limit ellipsoids corresponding to the single-camera position estimation module, the double-camera overlapping area position estimation module and the multi-mode high-precision sensor position estimation module or not, and calculating the size of the intersection;
and continuously acquiring the image data information, the navigation map and the positioning information of the multi-camera system when the estimation state of the position estimation module is not completed.
In this embodiment, the center of an ellipsoid is the position estimated by each position estimation module, and three radii of the ellipsoid are determined by the error limit of the corresponding estimation method, and the calculation mode of the position error limit ellipsoid is as follows: according to the characteristics of the sensor ranging method, the position average value is used as a circle center, the error average value of the sensor ranging method on a horizontal transverse axis, a longitudinal axis and a third axis perpendicular to the ground is used as three axes of a position error limit ellipsoid, and the obtained ellipsoid area is calculated. By acquiring the position error limit ellipsoids, whether an intersection exists between the position error limit ellipsoids of each estimation module in the multi-camera system can be calculated, so that the range finding and position estimation errors of the multi-camera system can be judged to exceed the allowable limit.
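The ellipsoid construction and intersection check can be sketched as follows. The construction follows the text (center at the mean position, semi-axes from the per-axis error limits); the intersection test is a conservative approximation that compares the center distance with the two support radii along the center line, since the patent does not state its exact intersection computation:

```python
import numpy as np

def error_ellipsoid(positions, axis_error_limits):
    """Position error limit ellipsoid: center at the mean estimated
    position, semi-axes equal to the per-axis mean error limits."""
    center = np.mean(np.asarray(positions, dtype=float), axis=0)
    return center, np.asarray(axis_error_limits, dtype=float)

def ellipsoids_overlap(c1, a1, c2, a2):
    """Approximate axis-aligned intersection test: overlap is reported when
    the center distance does not exceed the sum of the two ellipsoid radii
    measured along the line joining the centers."""
    d = np.asarray(c2, dtype=float) - np.asarray(c1, dtype=float)
    dist = float(np.linalg.norm(d))
    if dist == 0.0:
        return True
    u = d / dist
    r1 = 1.0 / np.sqrt(np.sum((u / np.asarray(a1, dtype=float)) ** 2))
    r2 = 1.0 / np.sqrt(np.sum((u / np.asarray(a2, dtype=float)) ** 2))
    return bool(dist <= r1 + r2)
```

An empty pairwise intersection between the modules' ellipsoids then indicates that the ranging and position estimation errors exceed the allowable limit.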
As an alternative embodiment, the method may further comprise,
in step 140, determining whether the ranging and position estimation errors of the multi-camera system are out of limit by using whether the intersection exists between the position error limit ellipsoids corresponding to the single-camera position estimation module, the double-camera overlapping area position estimation module and the multi-mode high-precision sensor position estimation module;
when the error exceeds the limit, accumulating and storing the overrun times; and when the error is not exceeded, continuously acquiring emergency action information of automatic driving.
It can be appreciated that when the position error limit ellipsoids have an intersection, the ranging and position estimation errors of the multi-camera system are within the allowable limits, i.e. there is no overrun, and emergency action information of automatic driving continues to be acquired; when the position error limit ellipsoids have no intersection, the ranging and position estimation errors of the camera system exceed the allowable limit, i.e. the errors are out of limit. In that case the overrun count is accumulated and stored and compared with a set threshold: when the accumulated overrun count of the single-camera position estimation module exceeds the threshold, the ranging and position estimation corresponding to that camera are judged to be in a fault state; when the accumulated overrun count of the double-camera overlapping area position estimation module exceeds the threshold, the pose between the two cameras is abnormal, and the two corresponding cameras are judged to be in a fault state.
When any fault exists in the multi-camera system, an alarm is raised for the fault position, the user is reminded that the automatic driving sensors have degraded performance and should concentrate on observing the road and the traffic participants, and advanced (Level 3 and above) automatic driving is disabled. According to the national standard for automatic driving classification, Level 3 and above automatic driving can complete driving operations, surrounding-environment monitoring and other actions in a specific environment without driver operation; however, during automatic driving the driver still needs to stay attentive and be ready to take over the vehicle at any time, to handle situations the automatic driving system cannot process.
It will be appreciated that when the fault class contains forward-looking camera-perceived faults, all autopilot and auxiliary functions are disabled; when the fault category comprises a left side camera to sense faults, disabling urban intersection auxiliary driving, confluence auxiliary driving and left lane changing functions; the fault category comprises that a right side camera senses a fault, and then urban intersection auxiliary driving, confluence auxiliary driving and right lane changing functions are forbidden; the fault category includes a rearward camera sensing a fault, and urban auxiliary driving, confluence auxiliary driving and lane changing functions are disabled.
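The fault-category-to-disabled-function mapping above can be sketched as a lookup table; the string identifiers are paraphrases of the text, not names used by the patent:

```python
# Illustrative mapping from perceived camera fault to disabled functions.
DISABLED_BY_FAULT = {
    "front_camera": {"all_autopilot_and_assist"},
    "left_camera":  {"urban_intersection_assist", "merge_assist", "left_lane_change"},
    "right_camera": {"urban_intersection_assist", "merge_assist", "right_lane_change"},
    "rear_camera":  {"urban_assist", "merge_assist", "lane_change"},
}

def disabled_functions(fault_categories):
    """Union of the functions to disable for the reported fault categories."""
    out = set()
    for fault in fault_categories:
        out |= DISABLED_BY_FAULT.get(fault, set())
    return out
```

Combining fault categories simply unions their disable sets, so a vehicle with both side cameras faulted loses merging assistance and lane changes in both directions.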
As an alternative embodiment, the method may further comprise,
when the single-camera position estimation module is used for estimation, the diagnosis system starts the single-camera position estimation module and initializes the single-camera position estimation module; judging whether the state of the diagnosis environment meets the condition according to the static target detection frame data, the classification data and the segmentation feature data which are judged to be effective and the state information of the received diagnosis environment;
when the diagnostic environment state meets the condition, calculating a preset target position under a coordinate system measured at the current moment by using a camera parameter through a monocular distance measuring method, removing abnormal points, obtaining a low dispersion position of a static target estimated by the monocular method, and storing long-term historical data; and when the diagnosis environment state does not meet the conditions, discarding the calculation result, and clearing the diagnosis cache data.
It will be appreciated that the single camera position estimation module includes two or more cameras.
Referring to fig. 2, in this embodiment, when the single-camera position estimation module receives a start command, the single-camera position estimation module initializes, receives valid static target detection frame data, classification data, segmentation feature data, and status information of a diagnostic environment, and determines whether the diagnostic environment status satisfies a condition;
When the vehicle overruns or misses the preset static target, the vehicle speed is greater than a threshold, or the yaw rate is greater than a threshold, i.e. the conditions are not met, the diagnosis cache data is cleared. When the vehicle has not overrun or missed the preset static target, the vehicle speed is smaller than the threshold and the yaw rate is smaller than the threshold, i.e. the conditions are met, the position of the preset target of the current frame in the coordinate system is calculated by a monocular ranging method using the camera parameters (for example, distance/position estimation based on a target of known type, the traditional method based on intrinsic and extrinsic parameters, or end-to-end deep-learning position estimation may be used), and whether the number of frames N_s of static target position information in the coordinate system calculated by the monocular ranging method is greater than a set frame number threshold T_s is judged;
When the estimated position information frame count N_s is less than the set frame number threshold T_s, the static target detection frame data, classification data and segmentation feature data judged to be valid, together with the state information of the received diagnosis environment, continue to be acquired;
when the estimated position information frame count N_s is greater than or equal to the set frame number threshold T_s, the next step is performed: the mean and standard deviation of the T_s frame positions of the multi-frame static target obtained by the monocular ranging method in the coordinate system are calculated, abnormal points are eliminated, and the mean and standard deviation are recalculated; when abnormal points are eliminated, the point farthest from the mean point is removed;
Whether the number of repeated calculation-and-optimization iterations N_so of the monocular ranging method for eliminating abnormal points exceeds the set upper limit T_so is judged. If N_so exceeds T_so, the method has exceeded the upper limit of calculation-and-optimization iterations while the standard deviation still cannot fall below the threshold; the calculation result is discarded, the diagnosis cache data is cleared, and the position estimation completion state is set to incomplete. If N_so does not exceed T_so, whether the standard deviation of the multi-frame positions calculated by the monocular ranging method after abnormal points are eliminated is smaller than the threshold is judged; if it is smaller than the threshold, the low-dispersion position of the static target estimated by the monocular ranging method is stored into the long-term historical data storage area. If the standard deviation is greater than the threshold, the mean and standard deviation of the N_s frame position coordinates of the static target obtained by the monocular ranging method in the coordinate system are repeatedly calculated, abnormal points are eliminated, N_s is updated to the number of non-abnormal points, the mean and standard deviation are recalculated, the calculation-and-optimization iteration count N_so is incremented, and the process returns to continue the abnormal-point elimination step.
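The iterative averaging and abnormal-point elimination loop above can be sketched as follows. This is a minimal illustration rather than the patented implementation; the function name and the 3-D tuple representation of positions are assumptions:

```python
import math

def low_dispersion_position(positions, std_threshold, max_iters):
    """Iteratively average 3-D positions, dropping the point farthest from
    the mean until the standard deviation falls below std_threshold.
    Returns the averaged position, or None if max_iters is exhausted."""
    pts = list(positions)
    for _ in range(max_iters):
        n = len(pts)
        mean = [sum(p[i] for p in pts) / n for i in range(3)]
        # Standard deviation of the distances from the mean point.
        dists = [math.dist(p, mean) for p in pts]
        std = math.sqrt(sum(d * d for d in dists) / n)
        if std < std_threshold:
            return mean
        if n <= 2:
            break
        # Eliminate the abnormal point farthest from the mean and retry.
        pts.pop(dists.index(max(dists)))
    return None  # dispersion never converged: discard and clear the cache
```

A `None` return corresponds to the case where the iteration limit T_so is exceeded and the diagnosis cache data must be cleared.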
As an alternative embodiment, the method may further comprise,
and the low-dispersion position result of the static target estimated by the monocular method is verified against long-term historical data: the offset distances between the position of the preset static target in the coordinate system obtained by the position estimation module for the current frame and the last n long-term historical positions in the coordinate system (n is a set value) are calculated, giving n offset distances; when m of these distances (m = δ×n, δ ∈ (0, 1), m > 1) do not exceed the set constraint distance, the static target position information obtained by the method adopted by the single-camera position estimation module at the current moment is judged to have passed the long-term historical data verification.
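The long-term historical data check above admits a short sketch; the function name and the default δ are illustrative assumptions, not values fixed by the patent:

```python
import math

def passes_history_check(current_pos, history, max_offset, delta=0.6):
    """Verify the current static-target position against the last n stored
    positions: it passes when at least m = delta * n of the n offset
    distances stay within max_offset (delta in (0, 1), m > 1)."""
    n = len(history)
    m = delta * n
    within = sum(1 for h in history if math.dist(current_pos, h) <= max_offset)
    return m > 1 and within >= m
```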
When the verification is passed, calculating a static target position error limit ellipsoid obtained by the single-camera position estimation module, setting the estimation state of the single-camera position estimation module to be completed after completion, and sending completion error limit ellipsoid information and an estimation state completion signal; and discarding the calculation result when the verification is not passed.
It can be understood that when the static target position estimated by the monocular ranging method passes the verification when the long-term historical data is verified, calculating a static target position error limit ellipsoid obtained by the monocular ranging method, setting the state of the single-camera position estimation module to be completed after completion, and sending position error limit ellipsoid information estimated by the monocular method and an estimation state completion signal.
As an alternative embodiment, the method may further comprise,
when the double-camera overlapping region position estimation module is used for estimation, the diagnosis system starts the double-camera overlapping region position estimation module and initializes the double-camera overlapping region position estimation module, and judges whether the state of the diagnosis environment meets the condition according to the static target detection frame data, the classification data and the segmentation characteristic data which are judged to be effective and the state information of the received diagnosis environment;
when the diagnostic environment state meets the conditions, the detection and classification network is utilized to obtain the depth of a static target, the type of the target, the edge and the texture of the target, a region of interest is defined by taking the central point of a preset static target detection frame as the center, and the region of interest is utilized to match the static targets in the two cameras; after the same static target in the double-camera overlapping area is successfully matched, ranging and position estimation are carried out by using a binocular range method to obtain the position of the current frame under a coordinate system, abnormal points are removed, the low-dispersion position of the static target estimated by the double-camera overlapping area method is obtained, and long-term historical data are stored; and when the diagnosis environment state does not meet the conditions, discarding the calculation result, and clearing the diagnosis cache data.
It will be appreciated that the dual camera overlap region location estimation module includes at least one set of field of view overlap regions, i.e., there is field of view overlap for at least two cameras, as shown in fig. 3 and 4.
Referring to fig. 5 and 6, in this embodiment, when the dual-camera overlapping region position estimation module receives a start command, the dual-camera overlapping region position estimation module initializes, receives valid static target detection frame data, classification data, segmentation feature data and status information of a diagnostic environment, and determines whether the diagnostic environment status satisfies a condition;
when the vehicle overruns or misses the preset static target, the vehicle speed is greater than a threshold, or the yaw rate is greater than a threshold, i.e. the conditions are not met, the diagnosis cache data is cleared. When the conditions that the vehicle has not overrun or missed the preset static target, the vehicle speed is smaller than the threshold and the yaw rate is smaller than the threshold are met, if the preset static target is not detected by both cameras, has not entered the field-of-view overlapping region of the two cameras, is not classified, or is occluded, the static target detection frame data, classification data and segmentation feature data judged to be valid, together with the state information of the received diagnosis environment, continue to be acquired;
If the preset static target is detected by the double cameras, the field of view overlapping area of the two cameras is entered, the static target is classified and is not shielded, the classification information is utilized to obtain rough static target depth, the center point of a preset static target detection frame is taken as the center to define an interested area, rough space constraint conditions between the double cameras are utilized to roughly match the interested areas of the static target in the two cameras, and the time stamp interval of two pieces of image data information which are subjected to rough matching is smaller than an allowable threshold;
when rough matching is performed, the rough depth of the static target is obtained using the category information, the center of the detection rectangle is taken as the ellipse center, the rectangle's length and width enlarged by a factor λ are taken as the major-axis and minor-axis lengths of the elliptical region of interest to define the region of interest, and the region of interest is projected to the other camera of the two cameras in the overlapping region using the rough depth of the static target and the rough constraint relation between the two cameras for rough matching;
when the improved intersection over union of the regions of interest in the two cameras is greater than a threshold, the rough matching of the regions of interest of the predetermined static targets in the two cameras is successful; wherein the formula of the improved intersection over union (CIOU) is:

CIOU = IOU − ρ²(c1, c2) / c² − α·v

wherein IOU is the intersection over union, c1 and c2 are the center points of the two regions of interest, ρ is the Euclidean distance between the two center points, c is the diameter of the minimum enclosing circle of the two regions of interest, α is a weight, and v measures the similarity of the ellipse shapes.

The calculation formula of α is: α = v / ((1 − IOU) + v). The calculation formula of v is: v = (4/π²)·(arctan(a1/b1) − arctan(a2/b2))², wherein a1, b1 and a2, b2 are the major-axis and minor-axis lengths of the two (elliptical) regions of interest.
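The CIOU formula can be evaluated as below once the plain IOU of the two elliptical regions has been obtained elsewhere (e.g., by rasterizing the ellipses); the function and parameter names are assumptions for illustration:

```python
import math

def ciou(iou, c1, c2, enclosure_diameter, axes1, axes2):
    """Improved intersection over union (CIOU) for two elliptical ROIs.
    iou: plain IoU of the two ellipses (computed elsewhere);
    c1, c2: ROI center points; enclosure_diameter: diameter of the minimum
    circle enclosing both ROIs; axes1/axes2: (major, minor) axis lengths."""
    rho = math.dist(c1, c2)                      # center-point distance
    a1, b1 = axes1
    a2, b2 = axes2
    # Aspect-similarity term v and its weight alpha.
    v = (4 / math.pi ** 2) * (math.atan(a1 / b1) - math.atan(a2 / b2)) ** 2
    alpha = v / ((1 - iou) + v) if (1 - iou) + v > 0 else 0.0
    return iou - rho ** 2 / enclosure_diameter ** 2 - alpha * v
```

For two identical, concentric ellipses the penalty terms vanish and CIOU reduces to the plain IOU.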
When the improved intersection over union is greater than the threshold, the rough matching is successful; when the improved intersection over union is smaller than the threshold, the static target detection frame data, classification data and segmentation feature data judged to be valid, together with the state information of the received diagnosis environment, continue to be acquired.
After the rough matching is successful, carrying out fine matching on the static targets in the interested area which is successfully subjected to the rough matching, wherein the time stamp interval of two pieces of static target information subjected to the fine matching is smaller than an allowable threshold;
when fine matching is carried out, judging whether the static target has rich edges and texture features or not by utilizing target class information obtained by a target detection network and a classification network;
if the static target has rich edge and texture features, type matching and template matching are performed on the classified target, the type matching score and template matching score are calculated, the similarity is calculated from the feature vectors output by the target detection network, and a similarity score is calculated from the similarity; if the static target does not have rich edge and texture features, feature points are extracted from the region of interest where the static target is located and screened, the screening method being to remove the feature points belonging to the background in the region of interest using the semantic segmentation result and keep only the feature points on the static target; the feature information of the screened feature points on the static target is extracted, binocular feature point matching is performed, the number of matched feature points is calculated, and the feature point matching score is calculated using the number of matched feature-point pairs and the crossing information of the lines connecting the feature-point pairs;
A matching composite score is calculated from the binocular feature point matching score, the type matching score, the template matching score and the similarity score; when the matching composite score is greater than a threshold, binocular matching is judged to be successful; when the matching composite score is smaller than the threshold, the calculation result is discarded.
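The patent does not fix how the individual scores are combined into the composite score; a simple weighted-sum sketch, where the score keys, weights and threshold are all assumed values, might look like:

```python
def binocular_match_score(scores, weights, threshold):
    """Combine per-cue matching scores (feature-point, type, template,
    similarity) into a composite score and compare it with a threshold.
    Missing cues contribute zero."""
    total = sum(weights[k] * scores.get(k, 0.0) for k in weights)
    return total, total > threshold
```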
Visual ranging is performed on the precisely matched static target using the binocular method, and the position of the current frame in the coordinate system is obtained by calculation; whether the number of frames N_d of static target position information in the coordinate system calculated by the binocular ranging method is greater than a set frame number threshold T_d is judged;
If the number of frames N_d of position information in the coordinate system acquired by the current frame is smaller than the set frame number threshold T_d, the static target detection frame data, classification data and segmentation feature data judged to be valid, together with the state information of the received diagnosis environment, continue to be acquired;
if the number of frames N_d of position information in the coordinate system acquired by the current frame is greater than or equal to the set frame number threshold T_d, the mean and standard deviation of the T_d frame positions of the static target obtained by the binocular ranging method in the coordinate system are calculated, abnormal points are eliminated, and the mean and standard deviation are recalculated; when abnormal points are eliminated, the point farthest from the mean point is removed;
Whether the number of repeated calculation-and-optimization iterations N_do of the binocular ranging method for eliminating abnormal points exceeds the set upper limit T_do is judged. If N_do exceeds T_do, the method has exceeded the upper limit of calculation-and-optimization iterations while the standard deviation still cannot fall below the threshold; the calculation result is discarded, the diagnosis cache data is cleared, and the position estimation completion state is set to incomplete. If N_do does not exceed the set upper limit, whether the standard deviation of the multi-frame positions calculated by the binocular ranging method after abnormal points are eliminated is smaller than the threshold is judged; if the standard deviation is smaller than the threshold, the low-dispersion position of the static target estimated by the binocular ranging method is stored into the long-term historical data storage area. If the standard deviation is greater than the threshold, the mean and standard deviation of the N_d frame positions of the multi-frame static target obtained by the binocular ranging method in the coordinate system are repeatedly calculated, abnormal points are eliminated, N_d is updated to the number of non-abnormal points, the mean and standard deviation are recalculated, the calculation-and-optimization iteration count N_do is incremented, and the process returns to continue the abnormal-point elimination step.
As an alternative embodiment, the method may further comprise,
Checking long-term historical data of the low dispersion position of the static target estimated by the double-camera overlapping area and the binocular range method, calculating a static target position error limit ellipsoid obtained by a double-camera overlapping area position estimation module when the long-term historical data passes the checking, setting the estimation state of the double-camera overlapping area position estimation module to be finished after the completion, and sending finishing error limit ellipsoid information and an estimation state finishing signal; and discarding the calculation result when the verification is not passed.
In this embodiment, when it is determined whether the calculated static target position can pass the verification of the long-term history data, and when the double-camera overlapping area and the static target position estimated by the binocular range method pass the verification, a static target position error limit ellipsoid obtained by the binocular range method is calculated, and after completion, the state of the binocular position estimation module is set to be completed, and position error limit ellipsoid information estimated by the binocular method and an estimation state completion signal are sent.
And when the static target position estimated by the double-camera overlapping area and the binocular distance measuring method does not pass the verification of the long-term history data, discarding the calculation result, clearing the diagnosis cache data, and setting the position estimation completion state to be incomplete.
As an alternative embodiment, the method may further comprise,
when the multi-mode high-precision sensor position estimation module is used for estimation, the diagnosis system starts the multi-mode high-precision sensor position estimation module and initializes the multi-mode high-precision sensor position estimation module, and judges whether the state of the diagnosis environment meets the condition according to the static target detection frame data, the classification data and the segmentation feature data which are judged to be effective and the state information of the received diagnosis environment;
when the diagnosis environment state meets the condition, judging whether the received perception information of the multi-mode high-precision sensor is effective, calculating the position of a preset static target under the coordinate system measured at the current moment through the effective perception information of the multi-mode high-precision sensor, removing abnormal points, obtaining the low dispersion position of the static target estimated by the multi-mode high-precision sensor position estimation module, and storing long-term historical data; and when the diagnosis environment state does not meet the conditions, discarding the calculation result, and clearing the diagnosis cache data.
It is to be appreciated that the multi-modal high accuracy sensor position estimation module includes, but is not limited to, high accuracy maps, combined positioning systems, lidar, and the like.
Referring to fig. 7, in this embodiment, when the multi-mode high-precision sensor position estimation module receives a start command, the multi-mode high-precision sensor position estimation module is started, initialized, and whether the diagnostic environment state meets the condition is determined according to the static target detection frame data, the classification data, the segmentation feature data and the state information of the received diagnostic environment which are determined to be valid;
When the vehicle overruns or misses the preset static target, the vehicle speed is greater than a threshold, or the yaw rate is greater than a threshold, i.e. the conditions are not met, the diagnosis cache data is cleared.
When the conditions that the vehicle has not overrun or missed the preset static target, the vehicle speed is smaller than the threshold and the yaw rate is smaller than the threshold are met, whether the received perception information of the multi-mode high-precision sensor position estimation module is valid is judged; that is, the multi-mode sensor position estimation module diagnoses the perception information it outputs and gives a confidence. Illustratively, when the confidence is greater than 95%, the perception result is judged to be valid; when the confidence is less than 95%, the static target detection frame data, classification data and segmentation feature data judged to be valid, together with the state information of the received diagnosis environment, continue to be acquired;
when the perception information of the multi-mode high-precision sensor position estimation module is valid, the position of the static target in the coordinate system for the current frame is obtained by calculation; when the number of frames of position information obtained in the coordinate system is smaller than the set frame number threshold, the static target detection frame data, classification data and segmentation feature data judged to be valid, together with the state information of the received diagnosis environment, continue to be acquired;
when the number of frames of position information obtained in the coordinate system is greater than the set frame number threshold, the mean and standard deviation of the N_m frame positions of the multi-frame static target obtained by the multi-mode high-precision sensor position estimation module in the coordinate system are calculated, abnormal points are eliminated, and the mean and standard deviation are recalculated; when abnormal points are eliminated, the point farthest from the mean point is removed;
Whether the number of repeated calculation-and-optimization iterations of the multi-mode high-precision sensor position estimation module exceeds the set upper limit is judged. If it exceeds the set upper limit, the calculation result is discarded, the diagnosis cache data is cleared, and the position estimation completion state is set to incomplete. If it does not exceed the set upper limit, whether the standard deviation of the multi-frame positions calculated by the multi-mode high-precision sensor position estimation module after abnormal points are eliminated is smaller than the threshold is judged; if the standard deviation is smaller than the threshold, the low-dispersion position of the static target estimated by the multi-mode high-precision sensor position estimation module is stored into the long-term historical data storage area. If the standard deviation is greater than the threshold, the mean and standard deviation of the N_m frame positions of the multi-frame static target obtained by the module in the coordinate system are repeatedly calculated, abnormal points are eliminated, N_m is updated to the number of non-abnormal points, the mean and standard deviation are recalculated, the calculation-and-optimization iteration count N_mo is incremented, and the process returns to continue the abnormal-point elimination step.
As an alternative embodiment, the method may further comprise,
checking long-term historical data of the low dispersion position of the static target estimated by the multi-mode high-precision sensor position estimation module, calculating a static target position error limit ellipsoid obtained by the multi-mode high-precision sensor position estimation module when the low dispersion position passes the checking, setting the estimation state of the multi-mode high-precision sensor position estimation module to be completed after the completion, and sending completion error limit ellipsoid information and an estimation state completion signal; and discarding the calculation result when the verification is not passed.
In this embodiment, when it is determined whether the calculated static target position can pass the verification of the long-term history data, and when the static target position estimated by the multi-mode high-precision sensor position estimation module passes the verification, the static target position error limit ellipsoid obtained by the multi-mode high-precision sensor position estimation module is calculated, and after completion, the state of the multi-mode high-precision sensor position estimation module is set to be complete, and the estimated position error limit ellipsoid information and the estimated state completion signal of the multi-mode high-precision sensor position estimation module are sent to the multi-camera visual diagnosis system.
And when the static target position estimated by the multi-mode high-precision sensor position estimation module does not pass the verification, discarding the calculation result, clearing the diagnosis cache data, and setting the position estimation completion state as incomplete.
It can be understood that the stored historical data of the static target is loaded timely for measurement and verification, namely, the static target positions estimated by the single-camera position estimation module, the double-camera overlapping area position estimation module and the multi-mode high-precision sensor position estimation module are verified respectively.
Referring to fig. 8, a method for estimating and self-diagnosing a position of a multi-camera system is specifically described as follows:
s0.1, initializing and loading stored historical data.
S0.2, receiving emergency obstacle avoidance information and emergency braking information of the automatic driving system, and active emergency takeover information of the automatic driving user (a driver or a safety officer).
S0.3, calculating the sum of emergency action times of automatic driving.
S0.4, judging whether the sum of emergency action counts of automatic driving within the set time is greater than a set threshold; if so, proceeding to step S0.5, and if not, proceeding to step S0.6.
S0.5, the ranging and position estimation diagnosis system is pre-started, and the diagnosis function is initialized and operated.
S0.6, judging whether a closing request sent by the whole vehicle is received; when no closing request sent by the whole vehicle is received, returning to S0.2 and continuing execution.
And S0.7, when a closing request is received, storing the sum of emergency action counts (including emergency braking/obstacle avoidance information and user takeover count information), writing the long-term historical data of the preset static target into the map data storage area, and powering down and saving it for updating the map static target position data.
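The pre-start logic of steps S0.2–S0.4 amounts to counting emergency actions inside a time window; a minimal sketch follows, where the class name and the sliding-window semantics are assumptions (the patent only specifies a sum within a set time compared against a threshold):

```python
from collections import deque

class EmergencyActionMonitor:
    """Count automatic-driving emergency actions (braking, obstacle
    avoidance, user takeover) inside a sliding time window and decide
    whether to pre-start the ranging/position-estimation diagnosis."""

    def __init__(self, window_s, threshold):
        self.window_s = window_s
        self.threshold = threshold
        self.events = deque()

    def record(self, timestamp):
        self.events.append(timestamp)

    def should_start_diagnosis(self, now):
        # Drop events that fell out of the window, then compare the sum
        # of remaining emergency actions with the set threshold (S0.4).
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.threshold
```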
S1, acquiring geographic position information and navigation information of a vehicle from a map and positioning system, and acquiring state information of the vehicle from a whole vehicle system.
S2, judging whether the front scene and the state of the vehicle meet preset visual perception diagnosis triggering conditions or not:
judging whether a static target easy to diagnose exists in front of the vehicle from a map, a positioning and a vehicle navigation path or a driving direction; when a static target easy to diagnose exists, the distance between the static targets is smaller than a threshold value, the speed of the vehicle is smaller than the threshold value, the state of the vehicle is normal, and S3 is carried out; otherwise, returning to the step S1, and continuously acquiring the geographic position information and the navigation information of the vehicle and the state information of the vehicle.
And S3, sending a starting instruction to each diagnosis association sub-function module of the multi-camera system.
And S4, receiving camera image data information, a navigation map and positioning information of the multi-camera system.
And S5, judging whether the current diagnosis environment meets the requirements through the navigation map and positioning information; if so, proceeding to S6, otherwise returning to S4 and continuing to receive camera image data information of the multi-camera system.
S6, preprocessing the received image data information of the multi-camera system.
S7, the preprocessed image data information is called to detect and classify the target detection network and the classification network, and the segmentation network is called to segment the foreground and the background.
S8, judging whether a preset static target is detected and the target is valid; when a valid target is detected, proceeding to step S9; otherwise, returning to S4 and continuing to receive camera image data information of the multi-camera system.
S9, effective static target detection frame data, classification data and semantic segmentation feature data and state information of whether the diagnosis environment meets requirements or not are sent to a position estimation module of the multi-camera system.
The position estimation modules in the multi-camera system include, but are not limited to, the single-camera position estimation module, the dual-camera overlapping region position estimation module and the multi-mode high-precision sensor position estimation module.
S10, receiving the state of the position estimation of the preset static target by each position estimation module and the corresponding position error limit ellipsoid.
S11, judging whether the estimation states of at least three position estimation modules are completed; if so, proceeding to step S12, otherwise returning to step S4 and continuing to receive camera image data information of the multi-camera system.
S12, calculating the static target position error limit ellipsoids obtained by the estimation modules using the monocular ranging method, the binocular ranging method and the multi-modal sensors respectively, judging whether the error limit ellipsoids intersect, and calculating the size of the intersection.
S13, judging whether the ranging and position estimation errors are over limit according to whether the error limit ellipsoids intersect, and judging the danger level of the overrun from the size of the intersection; if over limit, entering step S14, otherwise returning to the step of receiving the automatic driving emergency action information.
S14, accumulating and storing the overrun times.
S15, judging whether the accumulated overrun count is larger than a set threshold value; if so, entering step S16, otherwise returning to the step of receiving the automatic driving emergency action information.
S16, judging the fault position according to whether the overrun accumulation times are larger than a set threshold value.
And S17, issuing an alarm for the fault position, degrading automatic driving accordingly, and disabling some or all driving functions.
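The overrun logic of steps S12 to S16 can be sketched as follows. This is a minimal illustration rather than the embodiment's implementation: each module's error limit ellipsoid is approximated here by a bounding sphere (center plus a single radius), any disjoint pair counts as one out-of-limit event, and the names (`OverrunMonitor`, `overlap_depth`) are hypothetical.

```python
import math

def overlap_depth(c1, r1, c2, r2):
    """How deeply two bounding spheres overlap; 0.0 means disjoint (S12)."""
    return max(0.0, (r1 + r2) - math.dist(c1, c2))

class OverrunMonitor:
    """Accumulates out-of-limit events and flags a fault (S13-S16 sketch)."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def update(self, estimates):
        # estimates: list of (center, radius) error bounds, one per module
        depths = [overlap_depth(c1, r1, c2, r2)
                  for i, (c1, r1) in enumerate(estimates)
                  for (c2, r2) in estimates[i + 1:]]
        if min(depths) == 0.0:        # some pair has no intersection: over limit
            self.count += 1           # S14: accumulate and store
        return self.count > self.threshold  # S15/S16: fault when over threshold
```

In practice an exact ellipsoid-ellipsoid intersection test, with the intersection size graded into danger levels, would replace the bounding-sphere shortcut.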
Based on this design, the obtained emergency action information, the geographic position and navigation information of the vehicle, and the vehicle state information can be used to determine whether the diagnosis environment contains a mapped, positioned low-speed driving road. The ranging and position estimation performance of the automatic-driving multi-camera system can then be diagnosed reliably under that environment, and whether the errors exceed the tolerance indicates whether the performance of the multi-camera system has degraded. The driver can thus be reminded promptly to maintain and calibrate the multi-camera system, protecting the driver and the automatic-driving vehicle and improving its safety.
It will be appreciated that when the multi-camera system travels with the vehicle in a compliant diagnosis environment, both a single-camera position estimation module and a dual-camera overlapping-region position estimation module may be present, so that there are multiple field-of-view overlapping regions, as shown in FIG. 9. A multi-modal high-precision sensor position estimation module may also be present, making the position estimation self-diagnosis of the multi-camera system more accurate.
Referring to fig. 10, the embodiment of the present application further discloses a position estimation self-diagnosis apparatus for a multi-camera system. The self-diagnosis apparatus 200 includes at least one software function module that may be stored in a storage module in the form of software or firmware, or solidified in an operating system (OS), such as the software function modules and computer programs included in the self-diagnosis apparatus 200.
The self-diagnosis apparatus 200 may include an information reading and storing module 210, a judging module 220, a diagnosis association sub-function starting module 230, an estimation data processing module 240, and an executing module 250, each of which may have the following functions:
an information reading and storing module 210, configured to store and acquire history data, acquire emergency action information of automatic driving, and store a sum of emergency action times according to the emergency action information; the long-term history data of the preset static target updated by the position estimation system is stored in a map data storage area;
the judging module 220 is configured to obtain geographic location information and navigation information of the vehicle, and status information of the vehicle, and judge whether the front scene and the status of the vehicle meet preset visual perception diagnosis triggering conditions;
The diagnosis association sub-function starting module 230 is used for starting each sub-function module of the diagnosis system, acquiring image data information, a navigation map and positioning information of the multi-camera system, acquiring a static target which is easy to diagnose according to the navigation map and the positioning information, judging whether the current time diagnosis environment meets the requirement according to the static target, and starting each position estimation module when the current time diagnosis environment meets the requirement, wherein the current time diagnosis environment has a map and a positioned low-speed driving road;
an estimated data processing module 240 for preprocessing the acquired image data information; invoking detection, classification and segmentation functions to obtain target detection, classification and segmentation information, and sending the target information to each position estimation module; each module in the position estimation system adopts different methods to estimate the preset static target position; judging whether the distance measurement and position estimation errors of the multi-camera system exceed the limits according to the estimation results;
and the execution module 250 is used for accumulating the overrun result, judging the fault position according to the overrun result, and performing function prohibition on automatic driving according to the fault position.
Based on the emergency action information acquired by the information reading and storing module 210, and on whether the front scene and vehicle state acquired by the judging module 220 meet the preset visual perception diagnosis triggering conditions, the apparatus can determine whether the current environment allows self-diagnosis of the multi-camera system parameters from the static target data. It can thereby judge whether the ranging and position estimation performance of the multi-camera system has degraded, and partially or completely disable the automatic driving function according to the ranging and position estimation of the multi-camera system.
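The emergency-action pre-trigger handled by the information reading and storing module 210 can be illustrated with a sliding-window counter. The window length, threshold and names below are assumptions for illustration, not taken from the embodiment:

```python
from collections import deque

class EmergencyActionLog:
    """Sliding-window count of automatic-driving emergency actions.

    When the count within the window exceeds the limit, the visual
    ranging diagnosis should be started ahead of schedule.
    """
    def __init__(self, window_s, limit):
        self.window_s = window_s  # assumed window length in seconds
        self.limit = limit        # assumed count threshold
        self.events = deque()

    def record(self, timestamp_s):
        self.events.append(timestamp_s)
        # drop events that have aged out of the window
        while self.events and timestamp_s - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.limit  # True -> pre-trigger diagnosis
```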
Optionally, the self-diagnosis apparatus 200 may further include:
the single-camera position estimation module is used for calling a target detection network to detect a static target, acquiring effective static target detection frame data, classification data, semantic segmentation feature data and receiving state information of a diagnosis environment when a preset static target is detected and the target is effective, and judging whether the state of the diagnosis environment meets the condition;
when the diagnosis environment state meets the condition, calculating the position of the preset target of the current frame in the coordinate system by the monocular ranging method using the camera parameters, removing abnormal points to obtain the low dispersion position of the static target estimated by the monocular method, storing the long-term historical data, verifying the result against it, and transmitting completion information when the verification passes; and when the diagnosis environment state does not meet the condition, clearing the diagnosis cache data.
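The monocular ranging and outlier removal performed by the single-camera module can be sketched as below, assuming a pinhole camera and a known real-world height for the preset static target class (e.g. a standardized sign). The MAD-based rejection is merely one plausible way to obtain a "low dispersion position" and is not prescribed by the embodiment; the function names are ours.

```python
import statistics

def monocular_distance(focal_px, real_height_m, pixel_height_px):
    """Pinhole similar triangles: Z = f * H / h (assumed-known target height H)."""
    return focal_px * real_height_m / pixel_height_px

def low_dispersion_value(samples, k=3.0):
    """Reject samples farther than k*MAD from the median, then average the rest."""
    med = statistics.median(samples)
    mad = statistics.median(abs(s - med) for s in samples) or 1e-9
    kept = [s for s in samples if abs(s - med) <= k * mad]
    return sum(kept) / len(kept)
```

For example, a 2 m target imaged at 100 px by a 1000 px focal-length camera is placed 20 m away, and a gross mismeasurement in a set of repeated distances is discarded before averaging.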
Optionally, the self-diagnosis apparatus 200 may further include:
the double-camera overlapping region position estimation module is used for calling a target detection network to detect a static target, acquiring effective static target detection frame data, classification data, semantic segmentation feature data and receiving state information of a diagnosis environment when a preset static target is detected and the target is effective, and judging whether the state of the diagnosis environment meets the condition;
When the state of the diagnosis environment meets the conditions, the depth of the static target, the target type, and the target edges and textures are obtained using the detection and classification networks; a region of interest is defined centered on the central point of the preset static target detection frame, and the regions of interest are used to match the static targets in the two cameras. After the same static target in the dual-camera overlapping region is matched successfully, ranging and position estimation are performed by the binocular ranging method to obtain the position of the current frame of the static target in the coordinate system; abnormal points are screened out to obtain the low dispersion position of the static target estimated from the dual-camera overlapping region. The estimation result is verified against the long-term historical data of the preset static target; when the verification passes, the estimated position, error information and estimation state information are sent to the multi-camera diagnosis system, otherwise the calculation cache data is cleared and the calculation result is discarded. When the diagnosis environment state does not meet the conditions, the calculation cache data is likewise cleared and the calculation result discarded.
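The cross-camera matching and binocular ranging above might look like the following toy sketch. It assumes rectified images, so the same target appears on nearly the same image row in both cameras, and uses the standard rectified-stereo relation; a real matcher would also compare target class, edges and texture as the embodiment describes, and all names here are ours.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Rectified stereo: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def match_rois(boxes_left, boxes_right, max_dy_px=10.0):
    """Greedily pair detection boxes (cx, cy, w, h) across two cameras.

    With rectified cameras the same static target lies on (nearly) the
    same row, so vertical center distance is a cheap matching cue.
    """
    pairs, used = [], set()
    for i, (_, ay, _, _) in enumerate(boxes_left):
        best, best_dy = None, max_dy_px
        for j, (_, by, _, _) in enumerate(boxes_right):
            if j in used:
                continue
            dy = abs(ay - by)
            if dy <= best_dy:
                best, best_dy = j, dy
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs
```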
Optionally, the self-diagnosis apparatus 200 may further include:
the multi-mode high-precision sensor position estimation module is used for receiving detection and measurement information of the rest high-precision sensors in the current whole vehicle system on a preset static target, receiving state information of a diagnosis environment and judging whether the state of the diagnosis environment meets the condition;
When the diagnosis environment state meets the condition, judging whether the received perception information of the multi-modal high-precision sensors is valid; calculating the position of the current frame of the preset static target in the coordinate system from the valid perception information, removing abnormal points to obtain the low dispersion position of the static target estimated by the multi-modal high-precision sensor position estimation module, and storing the long-term historical data; then verifying the estimation result against the previous long-term historical data, and when the verification passes, transmitting the estimated position, error information and estimation-completed state information to the diagnosis system, otherwise clearing the calculation cache data and discarding the calculation result. When the diagnosis environment state does not meet the condition, the calculation cache data is likewise cleared and the calculation result discarded.
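The long-term-history verification shared by the three modules (compare the current estimate with the last n stored positions and pass when at least m offset distances stay within a set constraint distance, as later formalized in claim 12) can be sketched as follows; the function name and parameter names are ours.

```python
import math

def verify_with_history(current_pos, history, constraint_m, m_required):
    """Claim-12-style check: pass when at least m of the offsets to the
    last n stored positions are within the set constraint distance."""
    offsets = [math.dist(current_pos, past) for past in history]
    within = sum(1 for d in offsets if d <= constraint_m)
    return within >= m_required
```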
In this embodiment, the storage module may be, but is not limited to, a random access memory, a read-only memory, a programmable read-only memory, an erasable programmable read-only memory, an electrically erasable programmable read-only memory, etc. In this embodiment, the storage module may be used to store the operating states, work logs, etc. of the information reading and storing module 210, the judging module 220, the diagnosis association sub-function starting module 230, the estimation data processing module 240, and the execution module 250. Of course, the storage module may also be used to store a program, and the processing module executes the program after receiving an execution instruction.
Embodiments of the present application also provide a computer-readable storage medium. The computer-readable storage medium has stored therein a computer program which, when run on a computer, causes the computer to execute the self-diagnosis method as described in the above embodiments.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented in hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a U-disk, a mobile hard disk, etc.) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the various implementation scenarios of the present application.
In summary, the embodiments of the present application provide a method, an apparatus and a storage medium for position estimation self-diagnosis of a multi-camera system. In this scheme, based on the obtained emergency action information of automatic driving, the geographic position and navigation information of the vehicle, and the vehicle state information, it can be determined whether the diagnosis environment contains a mapped, positioned low-speed driving road. The ranging and position estimation performance of the automatic-driving multi-camera system is then diagnosed reliably under that environment, and whether the errors exceed the tolerance indicates whether the performance of the multi-camera system has degraded. The driver can thus be reminded promptly to maintain and calibrate the multi-camera system, protecting the driver and the automatic-driving vehicle and improving the safety of the automatic-driving vehicle.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, system, and method may be implemented in other manners as well. The above-described apparatus, systems, and method embodiments are merely illustrative, for example, flow charts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application, and various modifications and variations may be suggested to one skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.
Claims (21)
1. A method for self-diagnosis of position estimation in a multi-camera system, characterized by: comprises the steps of,
s1, emergency action information of automatic driving is obtained, history information of emergency action times is calculated and stored according to the emergency action information, long-term history data of a preset static target position updated by a position estimation system is stored, and the stored history data of the static target is loaded timely for measurement and verification;
s2, acquiring geographic position information, navigation information and host vehicle state information of the vehicle, and judging whether a front scene and the host vehicle state meet preset multi-camera visual perception diagnosis triggering conditions or not;
s3, starting each sub-functional module related to the diagnosis of the multi-camera system, acquiring image data information, a navigation map and positioning information of the multi-camera system, and judging whether the diagnosis environment at the current moment meets the requirement or not according to the navigation map and the positioning information, wherein the diagnosis environment at the current moment has a map and a positioned low-speed driving road;
S4, preprocessing the acquired image data information, calling a model to detect, classify and divide a static target, estimating the position of the preset static target by using a position estimation system according to the result, and judging whether the range finding and position estimation errors of the multi-camera system are out of limit or not by using a plurality of estimation results;
and S5, accumulating the overrun results, judging the fault position according to the overrun results, and performing function prohibition on automatic driving according to the fault position.
2. The method for self-diagnosis of position estimation for a multiple camera system according to claim 1, wherein: the method may further comprise the steps of,
loading the historical data of the stored emergency action times, calculating the sum of the emergency action times according to the acquired emergency action times, and comparing the sum with a time threshold value in a preset duration;
when the sum of the emergency actions is larger than a threshold value, the visual range finding diagnosis system is started in advance; continuously acquiring emergency action information of automatic driving when the sum of the emergency action times is smaller than a threshold value;
and loading stored historical data of the estimation result of the preset static target under the selected coordinate system, and verifying the next estimation result.
3. The method for self-diagnosis of position estimation for a multiple camera system according to claim 2, wherein: the method may further comprise the steps of,
when the sum of the emergency action times is smaller than the threshold value, judging whether the visual ranging diagnosis system has received a closing request sent by the whole vehicle; when the closing request is received, storing the sum of the emergency action times and the position estimation result of the preset static target; when the closing request is not received, continuing to acquire the emergency action information of automatic driving.
4. The method for self-diagnosis of position estimation for a multiple camera system according to claim 1, wherein: the method may further comprise the steps of,
judging whether a static target easy to diagnose exists in front according to a map and positioning and a vehicle navigation path or a driving direction when judging whether the front scene and the state of the vehicle meet preset visual perception diagnosis triggering conditions or not;
when a static target which is easy to diagnose exists, the distance between the static target and the vehicle is smaller than a threshold value, the speed of the vehicle is smaller than the threshold value, and the state of the vehicle is normal, a preset visual perception diagnosis condition is triggered; when the static target which is easy to diagnose does not exist or the distance between the static targets is larger than the threshold value or the speed of the vehicle is larger than the threshold value or the state of the vehicle is abnormal, continuously acquiring the geographic position information, the navigation information and the state information of the vehicle until the visual perception diagnosis condition is met.
5. The method for self-diagnosis of position estimation for a multiple camera system according to claim 4, wherein: the method may further comprise the steps of,
judging whether the vehicle has exceeded or missed the preset static target, whether the vehicle speed is too fast and whether the yaw rate is too high according to the geographic position information and the navigation information of the vehicle when judging whether the current moment diagnosis environment meets the requirements; when the vehicle has neither exceeded nor missed the preset static target, the vehicle speed is smaller than the threshold value, and the yaw rate is smaller than the threshold value, judging that the current moment diagnosis environment meets the requirements.
6. The method for self-diagnosis of position estimation for a multiple camera system according to claim 1, wherein: the method further comprises a multi-camera system,
the multi-camera system includes, but is not limited to, a single-camera position estimation module, a dual-camera overlapping-region position estimation module, and a multi-modal high-precision sensor position estimation module.
7. The method of self-diagnosis of position estimation for a multiple camera system according to claim 6, wherein: the method may further comprise the steps of,
when preprocessing the acquired image data information, applying noise reduction, image conversion and white balance to the original image to obtain image data of good quality; then invoking the target detection network to perform static target detection to obtain static target detection frames, and running the classification network to judge the category and the occlusion state; and invoking a semantic segmentation network or an instance segmentation network to perform semantic segmentation on the preprocessed image, segmenting the static target from the background.
8. The method for self-diagnosis of position estimation for a multiple camera system according to claim 7, wherein: the method may further comprise the steps of,
when the target detection network is called to detect a static target, and when the preset static target is detected and the target is effective, effective static target detection frame data, classification data and semantic segmentation feature data are sent to a position estimation system, and state information of judging whether the environment at the current moment meets the requirement is judged; and continuously acquiring image data information, a navigation map and positioning information of the multi-camera system when the preset static target is not detected or the target is invalid.
9. The method of self-diagnosis of position estimation for a multiple camera system according to claim 6, wherein: the method may further comprise the steps of,
when the preset static target position is estimated, the diagnosis system estimates the preset static target position through a single-camera position estimation module, a double-camera overlapping area position estimation module and a multi-mode high-precision sensor position estimation module to obtain a corresponding position error limit ellipsoid and a position estimation state;
when the estimation state of each position estimation module is completed, calculating whether intersection exists in position error limit ellipsoids corresponding to the single-camera position estimation module, the double-camera overlapping area position estimation module and the multi-mode high-precision sensor position estimation module or not, and calculating the size of the intersection;
And continuously acquiring the image data information, the navigation map and the positioning information of the multi-camera system when the estimation state of the position estimation module is not completed.
10. The method for self-diagnosis of position estimation for a multiple camera system according to claim 9, wherein: the method may further comprise the steps of,
judging whether the range finding and position estimating errors of the multi-camera system are out of limit or not by utilizing whether intersection exists between position error limit ellipsoids corresponding to the single-camera position estimating module, the double-camera overlapping area position estimating module and the multi-mode high-precision sensor position estimating module;
when the error exceeds the limit, accumulating and storing the overrun times; and when the error is not exceeded, continuously acquiring emergency action information of automatic driving.
11. The method of self-diagnosis of position estimation for a multiple camera system according to claim 8, wherein: the method may further comprise the steps of,
when the single-camera position estimation module is used for estimation, the diagnosis system starts the single-camera position estimation module and initializes the single-camera position estimation module; judging whether the state of the diagnosis environment meets the condition according to the static target detection frame data, the classification data and the segmentation feature data which are judged to be effective and the state information of the received diagnosis environment;
When the diagnostic environment state meets the condition, calculating a preset target position under a coordinate system measured at the current moment by using a camera parameter through a monocular distance measuring method, removing abnormal points, obtaining a low dispersion position of a static target estimated by the monocular method, and storing long-term historical data; and when the diagnosis environment state does not meet the conditions, discarding the calculation result, and clearing the diagnosis cache data.
12. The method for self-diagnosis of position estimation for a multiple camera system according to claim 11, wherein: the method may further comprise the steps of,
and verifying the low-variance position result of the static target estimated by the monocular method by using the long-term historical data: calculating the offset distances between the position of the preset static target in the coordinate system, which is obtained by the current frame of the position estimation module, and the long-term historical position in the coordinate system, which is obtained by the last n times of detection, to obtain n offset distances, wherein when m distances are not more than a set constraint distance, the static target position information obtained by the method adopted by the single-camera position estimation module at the current moment is judged to pass through long-term historical data verification;
when the verification is passed, calculating a static target position error limit ellipsoid obtained by the single-camera position estimation module, setting the estimation state of the single-camera position estimation module to be completed after completion, and sending completion error limit ellipsoid information and an estimation state completion signal; and discarding the calculation result when the verification is not passed.
13. The method of self-diagnosis of position estimation for a multiple camera system according to claim 8, wherein: the method may further comprise the steps of,
when the double-camera overlapping region position estimation module is used for estimation, the diagnosis system starts the double-camera overlapping region position estimation module and initializes the double-camera overlapping region position estimation module, and judges whether the state of the diagnosis environment meets the condition according to the static target detection frame data, the classification data and the segmentation characteristic data which are judged to be effective and the state information of the received diagnosis environment;
when the diagnostic environment state meets the conditions, the detection and classification network is utilized to obtain the depth of a static target, the type of the target, the edge and the texture of the target, a region of interest is defined by taking the central point of a preset static target detection frame as the center, and the region of interest is utilized to match the static targets in the two cameras; after the same static target in the double-camera overlapping area is successfully matched, ranging and position estimation are carried out by using a binocular range method to obtain the position of the current frame under a coordinate system, abnormal points are removed, the low-dispersion position of the static target estimated by the double-camera overlapping area method is obtained, and long-term historical data are stored; and when the diagnosis environment state does not meet the conditions, discarding the calculation result, and clearing the diagnosis cache data.
14. The method for self-diagnosis of position estimation for a multiple camera system according to claim 13, wherein: the method may further comprise the steps of,
checking long-term history data of the low dispersion position of the static target estimated by the double-camera overlapping area and binocular distance measuring method, calculating a static target position error limit ellipsoid obtained by a double-camera overlapping area position estimation module when the long-term history data passes the checking, setting the estimation state of the double-camera overlapping area position estimation module to be completed after the completion, and sending completion error limit ellipsoid information and an estimation state completion signal; and discarding the calculation result when the verification is not passed.
15. The method of self-diagnosis of position estimation for a multiple camera system according to claim 8, wherein: the method may further comprise the steps of,
when the multi-mode high-precision sensor position estimation module is used for estimation, the diagnosis system starts the multi-mode high-precision sensor position estimation module and is initialized, and whether the state of the diagnosis environment meets the condition is judged according to the static target detection frame data, the classification data and the segmentation feature data which are judged to be effective and the state information of the received diagnosis environment;
when the diagnosis environment state meets the condition, judging whether the received perception information of the multi-mode high-precision sensor is effective, calculating the position of a preset static target under the coordinate system measured at the current moment through the effective perception information of the multi-mode high-precision sensor, removing abnormal points, obtaining the low dispersion position of the static target estimated by the multi-mode high-precision sensor position estimation module, and storing long-term historical data; and when the diagnosis environment state does not meet the conditions, discarding the calculation result, and clearing the diagnosis cache data.
16. The method of self-diagnosis of position estimation for a multiple camera system according to claim 15, wherein: the method may further comprise the steps of,
checking long-term history data of the low dispersion position of the static target estimated by the multi-mode high-precision sensor position estimation module, calculating a static target position error limit ellipsoid obtained by the multi-mode high-precision sensor position estimation module when the low dispersion position passes the checking, setting an estimation state of the multi-mode high-precision sensor position estimation module to be completed after the completion, and sending completion error limit ellipsoid information and an estimation state completion signal; and discarding the calculation result when the verification is not passed.
17. A position estimation self-diagnostic apparatus of a multi-camera system, the self-diagnostic apparatus comprising:
the information reading and storing module is used for storing and acquiring historical data, acquiring emergency action information of automatic driving and storing the sum of emergency action times according to the emergency action information; the long-term historical data of a preset static target updated by the position estimation system is stored in a map data storage area, and the historical data is read for verification during estimation;
the judging module is used for acquiring the geographic position information, navigation information and state information of the vehicle, and judging whether the scene ahead and the state of the vehicle meet the preset visual perception diagnosis trigger conditions;
the diagnosis-associated sub-function starting module is used for starting each sub-function module of the diagnosis system; acquiring the image data, navigation map and positioning information of the multi-camera system; selecting a static target that is easy to diagnose according to the navigation map and positioning information; judging from the static target whether the diagnosis environment at the current moment meets the requirement, the diagnosis environment being a mapped and localized low-speed driving road; and starting each position estimation module when the requirement is met;
the estimated data processing module is used for preprocessing the acquired image data; invoking the detection, classification and segmentation functions to obtain target detection, classification and segmentation information, and sending the target information to each position estimation module; each module in the position estimation system estimates the preset static target position by a different method; and judging from the estimation results whether the distance measurement and position estimation errors of the multi-camera system exceed the limits;
and the execution module is used for accumulating the overrun results, locating the fault according to the accumulated overrun results, and prohibiting automatic driving functions according to the fault location.
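The execution module's accumulate-localize-inhibit logic could be sketched as follows; the class name, threshold, and per-module identifiers are illustrative assumptions, not part of the claims:

```python
from collections import Counter

class ExecutionModule:
    """Accumulates per-module overrun counts and prohibits autonomous-driving
    functions once any estimation module exceeds a fault threshold.
    Illustrative sketch of the claim's execution module."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.overruns = Counter()
        self.inhibited = False

    def report(self, module_id, error, limit):
        """Record one position-estimation result; count it as an overrun if
        the error exceeds the limit. Returns the list of modules currently
        judged faulty."""
        if error > limit:
            self.overruns[module_id] += 1
        faulty = [m for m, n in self.overruns.items() if n >= self.threshold]
        if faulty:
            self.inhibited = True  # prohibit autonomous-driving functions
        return faulty
```

Accumulating over several cycles before inhibiting avoids tripping on a single noisy measurement.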
18. The position estimation self-diagnostic apparatus of a multiple camera system according to claim 17, wherein the apparatus further comprises:
the single-camera position estimation module, which is used for calling a target detection network to detect the static target; when the preset static target is detected and valid, acquiring valid static target detection frame data, classification data and semantic segmentation feature data, receiving the state information of the diagnosis environment, and judging whether the diagnosis environment state meets the condition;

when the diagnosis environment state meets the condition, calculating the position of the preset target in the coordinate system for the current frame by a monocular ranging method using the camera parameters; removing abnormal points to obtain the low-dispersion position of the static target estimated by the monocular method; storing long-term historical data and checking the estimate against it; when the check passes, sending the estimated position and error information and the estimation-completed state information to the multi-camera diagnosis system, and otherwise clearing the calculation cache data and discarding the calculation result; and when the diagnosis environment state does not meet the condition, likewise clearing the calculation cache data and discarding the calculation result.
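A minimal pinhole-model sketch of monocular ranging from a detection box, assuming the physical height of the preset static target is known (e.g., a standardized road sign); the function names and the known-height assumption are illustrative, not taken from the patent:

```python
def monocular_distance(box_height_px, target_height_m, focal_px):
    """Pinhole model: Z = f * H / h, where h is the detection-box height in
    pixels, H the known physical target height, f the focal length in
    pixels."""
    if box_height_px <= 0:
        raise ValueError("invalid detection box")
    return focal_px * target_height_m / box_height_px

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) at a given depth into the camera
    coordinate system using intrinsics (fx, fy, cx, cy)."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)
```

In practice the result would then be transformed from camera to vehicle coordinates with the camera's extrinsic parameters.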
19. The position estimation self-diagnostic apparatus of a multiple camera system according to claim 17, wherein the apparatus further comprises:
the double-camera overlapping region position estimation module, which is used for calling a target detection network to detect the static target; when the preset static target is detected and valid, acquiring valid static target detection frame data, classification data and semantic segmentation feature data, receiving the state information of the diagnosis environment, and judging whether the diagnosis environment state meets the condition;

when the diagnosis environment state meets the condition, obtaining the depth of the static target, the target type, and the target edges and textures with the detection and classification networks; defining a region of interest centered on the center point of the preset static target detection frame and using it to match the static targets in the two cameras; after the same static target in the double-camera overlapping region is successfully matched, performing ranging and position estimation by a binocular ranging method to obtain the position of the static target in the coordinate system for the current frame; screening and removing abnormal points to obtain the low-dispersion position of the static target estimated from the double-camera overlapping region; checking the estimation result against the long-term historical data of the preset static target; when the check passes, sending the estimated position and error information and the estimation-completed state information to the multi-camera diagnosis system, and otherwise clearing the calculation cache data and discarding the calculation result; and when the diagnosis environment state does not meet the condition, likewise clearing the calculation cache data and discarding the calculation result.
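Once the same static target is matched across the two views, binocular ranging for rectified cameras reduces to Z = f·B/d. A sketch with the region-of-interest matching simplified to a nearest-detection-center search — a deliberate simplification of the claim's edge/texture-based matching, with illustrative names and thresholds:

```python
def match_roi(center_left, candidates_right, max_px=50.0):
    """Match a left-camera detection to the right-camera detection whose box
    center is nearest within max_px. Simplified stand-in for the patent's
    ROI matching on target edges and textures. Returns an index or None."""
    best, best_d = None, max_px
    for i, (u, v) in enumerate(candidates_right):
        d = ((u - center_left[0]) ** 2 + (v - center_left[1]) ** 2) ** 0.5
        if d < best_d:
            best, best_d = i, d
    return best

def stereo_depth(u_left, u_right, focal_px, baseline_m):
    """Depth from horizontal disparity for rectified cameras: Z = f * B / d."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity")
    return focal_px * baseline_m / disparity
```

For cameras with a wide, non-parallel overlap region, matching would instead triangulate rays from both calibrated cameras rather than use the rectified-disparity formula.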
20. The position estimation self-diagnostic apparatus of a multiple camera system according to claim 17, wherein the apparatus further comprises:
the multi-mode high-precision sensor position estimation module, which is used for receiving the detection and measurement information on the preset static target from the other high-precision sensors in the vehicle system, receiving the state information of the diagnosis environment, and judging whether the diagnosis environment state meets the condition;

when the diagnosis environment state meets the condition, judging whether the received perception information of the multi-mode high-precision sensors is valid; calculating the position of the preset static target in the coordinate system for the current frame from the valid perception information; removing abnormal points to obtain the low-dispersion position of the static target estimated by the multi-mode high-precision sensor position estimation module; storing long-term historical data and checking the estimation result against the previous long-term historical data; when the check passes, sending the estimated position and error information and the estimation-completed state information to the diagnosis system, and otherwise clearing the calculation cache data and discarding the calculation result; and when the diagnosis environment state does not meet the condition, likewise clearing the calculation cache data and discarding the calculation result.
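The check against previous long-term historical data can be read as a consistency gate: accept a new low-dispersion estimate only if it lies within k standard deviations of the stored history on each axis. A sketch under that assumption (the gate form, `k`, and `min_samples` are illustrative):

```python
import statistics

def verify_against_history(estimate, history, k=3.0, min_samples=5):
    """Accept `estimate` (x, y) only when each axis lies within k standard
    deviations of the long-term history of (x, y) positions; with too little
    history, accept provisionally so the record can grow."""
    if len(history) < min_samples:
        return True
    for axis in (0, 1):
        vals = [p[axis] for p in history]
        mu = statistics.fmean(vals)
        sigma = statistics.stdev(vals) or 1e-9
        if abs(estimate[axis] - mu) > k * sigma:
            return False
    return True
```

An estimate that fails the gate is discarded rather than written back, which keeps a transient sensor fault from polluting the long-term record.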
21. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when run on a computer, causes the computer to perform the method according to any of claims 1-16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310478990.9A CN116543047A (en) | 2023-04-28 | 2023-04-28 | Position estimation self-diagnosis method, device and storage medium for multi-camera system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310478990.9A CN116543047A (en) | 2023-04-28 | 2023-04-28 | Position estimation self-diagnosis method, device and storage medium for multi-camera system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116543047A true CN116543047A (en) | 2023-08-04 |
Family
ID=87453648
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310478990.9A Pending CN116543047A (en) | 2023-04-28 | 2023-04-28 | Position estimation self-diagnosis method, device and storage medium for multi-camera system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116543047A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118013465A (en) * | 2024-04-09 | 2024-05-10 | 微网优联科技(成都)有限公司 | Non-motor vehicle identification method and system based on multi-sensor cooperation |
CN118068307A (en) * | 2024-04-18 | 2024-05-24 | 上海禾赛科技有限公司 | Detection method and device, optical detection device and carrier |
CN118068307B (en) * | 2024-04-18 | 2025-03-18 | 上海禾赛科技有限公司 | Detection method and device, optical detection device and carrier |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6458734B2 (en) | Passenger number measuring device, passenger number measuring method, and passenger number measuring program | |
RU2597066C2 (en) | Method and device for identification of road signs | |
CN116543047A (en) | Position estimation self-diagnosis method, device and storage medium for multi-camera system | |
KR101103526B1 (en) | Collision Avoidance Using Stereo Camera | |
EP2928178B1 (en) | On-board control device | |
CN110298307B (en) | A real-time detection method for abnormal parking based on deep learning | |
CN111213153A (en) | Target object motion state detection method, device and storage medium | |
CN110341621B (en) | Obstacle detection method and device | |
JP2013057992A (en) | Inter-vehicle distance calculation device and vehicle control system using the same | |
KR102635090B1 (en) | Method and device for calibrating the camera pitch of a car, and method of continuously learning a vanishing point estimation model for this | |
CN114296095A (en) | Method, device, vehicle and medium for extracting effective target of automatic driving vehicle | |
CN111104824B (en) | Lane departure detection method, electronic device and computer readable storage medium | |
CN114120266A (en) | Vehicle lane change detection method, device, electronic device and storage medium | |
KR102283053B1 (en) | Real-Time Multi-Class Multi-Object Tracking Method Using Image Based Object Detection Information | |
EP4024330B1 (en) | Object recognition method and object recognition device | |
KR102003387B1 (en) | Method for detecting and locating traffic participants using bird's-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program | |
CN115249407B (en) | Indicator light state identification method and device, electronic equipment, storage medium and product | |
CN113942503A (en) | Lane keeping method and device | |
CN111539279A (en) | Road height limit height detection method, device, equipment and storage medium | |
CN112990117B (en) | Installation data processing method and device based on intelligent driving system | |
CN115965636A (en) | Vehicle side view generating method and device and terminal equipment | |
CN117037122A (en) | Information processing apparatus, system, method, and program | |
US11816903B2 (en) | Method for determining a type of parking space | |
JPH079880A (en) | Abnormality warning device for driver | |
KR102161905B1 (en) | Backward vehicle detection apparatus and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||