
CN118859232B - Laser radar camera co-location perception method, system and medium - Google Patents

Laser radar camera co-location perception method, system and medium

Info

Publication number
CN118859232B
Authority
CN
China
Prior art keywords
data
light
distortion
real
light reflection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411337083.3A
Other languages
Chinese (zh)
Other versions
CN118859232A (en)
Inventor
陆洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yongtai Photoelectric Co., Ltd.
Original Assignee
Shenzhen Yongtai Photoelectric Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yongtai Photoelectric Co., Ltd.
Priority to CN202411337083.3A
Publication of CN118859232A
Application granted
Publication of CN118859232B
Legal status: Active (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/092Reinforcement learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/06Ray-tracing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/506Illumination models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/766Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computer Graphics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Geometry (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract


The present invention relates to the field of co-location perception technology, and in particular to a laser radar camera co-location perception method, system and medium. The method comprises the following steps: extracting real-time ambient light data through an onboard camera lens on a sweeping robot and constructing a mobile collaborative viewing angle three-dimensional model; calculating environment space distortion behavior probability data based on the mobile collaborative viewing angle three-dimensional model; performing spatial distortion structured identification on the environment space distortion behavior probability data to obtain spatial distortion structured data; performing light distortion behavior correction processing according to the spatial distortion structured data to obtain light distortion behavior correction data; and performing policy gradient learning on the light distortion behavior correction data to construct a light co-location perception model. By optimizing the co-location perception processing, the present invention makes co-location perception technology more complete.

Description

Laser radar camera co-located sensing method, system and medium
Technical Field
The invention relates to the technical field of co-located sensing, in particular to a laser radar camera co-located sensing method, a laser radar camera co-located sensing system and a laser radar camera co-located sensing medium.
Background
LiDAR and cameras are two important sensors on sweeping robots, responsible for distance measurement and visual recognition respectively. However, they face many challenges during data fusion. Lidar provides highly accurate range information and operates stably under a variety of lighting conditions, but offers little insight into object detail. The camera captures rich image information and detail, but its performance is significantly affected by lighting changes, reflections and occlusion; in low-light or complex environments in particular, positioning and object recognition accuracy become unstable. How to effectively fuse lidar and camera data to improve perception precision and reliability has therefore become a focus of current research. The co-located sensing method for the lidar and the camera aims to solve the core problem of maintaining high positioning and perception accuracy during sensor data fusion under interference factors such as uneven illumination, object occlusion and environmental reflection. Through a refined processing flow, from sensor data preprocessing to deep fusion, the method gradually improves the stability and accuracy of the fusion result. In low-light or complex environments in particular, the robot's camera data cannot accurately reflect the real scene, and the high-precision distance data provided by the lidar helps correct the shortcomings of the camera information, so that the system can perform environment sensing and object recognition stably and reliably. However, the conventional lidar camera co-located sensing method analyzes ambient light changes inaccurately, which results in low correction precision for environmental distortion during reflection.
Disclosure of Invention
Based on the foregoing, it is necessary to provide a method, a system and a medium for co-located sensing of a laser radar camera, so as to solve at least one of the above technical problems.
In order to achieve the above purpose, the present invention provides a laser radar camera co-located sensing method, system and medium, the method comprising the following steps:
Step S1, extracting real-time environment light data through an onboard camera lens to obtain real-time environment light data, and constructing a mobile collaborative viewing angle three-dimensional model according to the real-time environment light data to obtain the mobile collaborative viewing angle three-dimensional model;
Step S2, carrying out light reflection intensity evaluation on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light reflection intensity evaluation data, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data, and carrying out environment space distortion behavior probability calculation according to the light reflection structure interference effect data to obtain environment space distortion behavior probability data;
Step S3, performing spatial distortion structured identification on the environment space distortion behavior probability data to obtain spatial distortion structured data, and performing light distortion behavior correction processing according to the spatial distortion structured data to obtain light distortion behavior correction data;
Step S4, performing policy gradient learning on the light distortion behavior correction data based on a policy gradient algorithm to obtain light correction policy gradient data, constructing a light co-located perception model according to the light correction policy gradient data to obtain the light co-located perception model, and transmitting the light co-located perception model to a data cloud platform to execute the laser radar camera co-located sensing method.
The invention captures light information in the environment in real time using the onboard camera lens, ensuring the timeliness and accuracy of the data. A three-dimensional model is constructed from the real-time data; this model reflects the actual state of the environment, including the distribution and influence of light. Based on the mobile collaborative viewing angle three-dimensional model, the real-time environment light data is evaluated and the intensity and characteristics of light reflection are quantified. From this evaluation data, the structural interference effect of light in the environment is analyzed, that is, the complex interaction of reflection, refraction and diffraction of light at object surfaces. Based on the light reflection structure interference effect data, the probability of spatial distortion behavior occurring in the environment is calculated. These data help in understanding and predicting the behavior of light in a particular environment, including its impact on visual perception and the perception of object shape. By analyzing the environment space distortion behavior probability data, specific spatial distortion structural features, such as bending, twisting or deformation of light, are identified; based on the identified distortion features, light distortion behavior correction is performed to improve the accuracy of light transmission and of environment perception. The light distortion behavior correction data is then learned and optimized using a policy gradient algorithm to find an optimal correction strategy, yielding optimized light correction policy gradient data that reflects how light transmission should be adjusted, based on real-time environmental data, to reduce distortion effects. A light co-located perception model is constructed from the result of the policy gradient learning. In a real-time or near-real-time environment, this model can adjust the positions and angles of the lidar and the camera according to the actual light conditions to achieve more accurate co-located sensing. The optimized light co-located perception model is sent to a data cloud platform so that the lidar camera co-located sensing method can be executed across a wide range of application scenarios; through the correction of light distortion behavior and the application of the light co-located perception model, the accuracy of environment and space perception is improved while errors and uncertainty are reduced. The invention is therefore an improvement on the traditional lidar camera co-located sensing method: it solves the problem that the traditional method analyzes ambient light changes inaccurately, which leads to low correction precision for environmental distortion during reflection, improving both the accuracy of the ambient light change analysis and the correction precision for environmental distortion during reflection.
Preferably, step S1 comprises the steps of:
Step S11, acquiring real-time environment data through an onboard camera lens to obtain real-time environment data;
Step S12, extracting real-time environment light data from the real-time environment data to obtain real-time environment light data;
Step S13, performing real-time distance measurement on the obstacle based on real-time environment data by using a laser radar to obtain real-time distance measurement data of the obstacle;
Step S14, constructing a mobile collaborative viewing angle three-dimensional model of the real-time environment data according to the real-time environment light data and the obstacle real-time ranging data to obtain the mobile collaborative viewing angle three-dimensional model.
The present invention uses an on-board camera lens to acquire real-time environmental data, including images, video, or other sensory data. This step ensures that comprehensive information of the current environment is obtained. Light information including light intensity, illumination distribution, etc. is extracted from the collected environmental data. These data are the basis for subsequent three-dimensional model construction and ray reflection evaluation. Real-time ranging of obstacles is performed based on real-time environmental data by using a sensor such as a laser radar. These data not only contribute to environmental awareness, but also provide an important basis for subsequent modeling and path planning. And constructing an accurate movement collaborative visual angle three-dimensional model by combining the real-time ambient light data and the obstacle real-time ranging data. The model integrates the physical characteristics of light and the spatial position of the obstacle in the environment, and can provide accurate visual information of the environment. By integrating different sources (vision, laser ranging) of real-time environmental data, step S1 improves the overall perceptibility of the environment. The construction of the mobile collaborative viewing angle three-dimensional model is not only based on light data, but also combines the spatial information of the obstacle, so that more accurate environment simulation and scene analysis capability can be provided. The real-time data is collected and processed, so that the system can quickly respond to environmental changes, and the method has important significance for instant decision and safety.
Preferably, step S2 comprises the steps of:
Step S21, carrying out light change fluctuation analysis on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light change fluctuation data;
Step S22, evaluating the light reflection intensity of the light change fluctuation data to obtain light reflection intensity evaluation data;
Step S23, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data;
Step S24, performing environment space distortion behavior probability calculation according to the light reflection structure interference effect data and the light reflection intensity evaluation data to obtain environment space distortion behavior probability data.
According to the invention, the fluctuation of the real-time environment light data is analyzed by means of the mobile collaborative viewing angle three-dimensional model. These fluctuations may be caused by factors such as different light sources, obstructions and terrain in the environment. Accurate capture of the light change fluctuation data is ensured, which aids understanding of the dynamic characteristics and trends of light in the environment. Based on the light change fluctuation data, the reflection intensity of light at different areas and surfaces is evaluated; these data provide a quantitative analysis of the light distribution in the environment. The light reflection intensity evaluation data directly affects the realism and accuracy of visual simulation, image rendering and augmented reality techniques. Further analysis of the evaluation data explores the interference effects of light on different surfaces and structures, including optical phenomena such as reflection, refraction and diffraction. The structural interference effect data helps in understanding the complex propagation of light in a specific environment and provides a basis for more accurate light and environment simulation. Based on the light reflection structure interference effect data and the light reflection intensity evaluation data, the probability of spatial distortion behavior occurring in the environment is calculated. These data are critical for improving visual accuracy in virtual reality, augmented reality and simulated environments, helping to reduce visual misdirection and inconsistencies in user perception.
Preferably, step S23 comprises the steps of:
Step S231, performing light reflection path marking on the light reflection intensity evaluation data to obtain light reflection path marking data;
Step S232, performing light reflection staggered structure analysis according to the light reflection path marking data to obtain light reflection staggered structure data;
Step S233, evaluating the light color difference among different reflected light rays according to the light reflection intensity evaluation data to obtain reflected light color difference data;
Step S234, performing light wave phase difference calculation between different reflected light rays on the light reflection staggered structure data according to the reflected light color difference data and the light reflection intensity evaluation data to obtain reflection staggered structure light wave phase difference data;
Step S235, carrying out structural interference effect analysis based on the reflection staggered structure light wave phase difference data to obtain light reflection structure interference effect data.
By marking paths in the light reflection intensity evaluation data, the invention can accurately track the reflection paths of light in the environment. These data are important for analyzing the specific path and distance traveled by the light, and give a detailed description of reflection behavior that serves as the base data for the subsequent analysis of the light staggered structure. Based on the light reflection path marking data, the staggered reflection of light on different surfaces and structures is analyzed; these data reveal reflection patterns and path crossings of light in complex environments, including factors such as multiple reflections and changes in reflection angle. The light color difference between different reflected rays is then evaluated from the light reflection intensity evaluation data, taking into account the color change and attenuation that occur during reflection; for virtual reality, augmented reality and image rendering applications, this light color difference data is critical for ensuring the authenticity and color fidelity of a visual scene. The light wave phase difference between different reflected rays is calculated from the reflected light color difference data and the light reflection intensity evaluation data; these data reflect the phase change of the light wave during spatial propagation, allowing optical interference to be quantified and providing a foundation for accurate simulation of the environment's optical characteristics. Finally, based on the reflection staggered structure light wave phase difference data, the interference effect of light on the structure is analyzed further; these data reveal the interference phenomena that occur when light is reflected and propagates in the environment, and provide key information for accurately modeling optical phenomena in complex environments, such as design optimization of optical devices and prediction of visual performance under environmental conditions.
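As an illustrative, non-limiting sketch of the phase-difference step, the snippet below applies the standard two-beam relations: a path difference ΔL between two reflected rays gives a phase difference Δφ = 2πΔL/λ, and the combined intensity follows I = I₁ + I₂ + 2√(I₁I₂)·cos Δφ. The function names and numeric values are illustrative; the patent does not prescribe a specific formulation.

```python
import numpy as np

def phase_difference(path_len_1, path_len_2, wavelength):
    """Phase difference (radians) between two reflected rays,
    from their optical path difference: delta_phi = 2*pi*dL/lambda."""
    return 2.0 * np.pi * (path_len_2 - path_len_1) / wavelength

def two_beam_intensity(i1, i2, delta_phi):
    """Classic two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*cos(dphi)."""
    return i1 + i2 + 2.0 * np.sqrt(i1 * i2) * np.cos(delta_phi)

# Example: 550 nm (green) light, 0.1 um extra path on the second ray.
dphi = phase_difference(0.0, 0.1e-6, 550e-9)
print(two_beam_intensity(1.0, 0.8, dphi))
```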
Preferably, step S24 comprises the steps of:
Step S241, performing optical path difference calculation of different light reflections on the light reflection structure interference effect data to obtain light reflection optical path difference data;
Step S242, performing medium reflection interference simulation on the light reflection structure interference effect data according to the light reflection optical path difference data to obtain medium reflection interference simulation data;
Step S243, carrying out polarized light interaction analysis on the light reflection structure interference effect data according to the medium reflection interference simulation data to obtain reflecting structure polarized light interaction data;
Step S244, performing light distortion critical state analysis based on the reflecting structure polarized light interaction data, the medium reflection interference simulation data and the light reflection optical path difference data to obtain light distortion critical state data;
Step S245, carrying out regression interval estimation on the light distortion critical state data according to the light reflection intensity evaluation data to obtain light reflection distortion interval estimation data;
Step S246, performing environment space distortion behavior probability calculation based on the light reflection distortion interval estimation data to obtain environment space distortion behavior probability data.
By calculating the optical path difference of different light reflection paths, the invention can quantify the path length differences of light during spatial propagation. These data are important for understanding the time delay and spatial distribution of light reflection; the optical path difference data gives an accurate measurement of path length and underpins the subsequent optical simulation and interference effect analysis. Based on the light reflection optical path difference data, the optical interference at the medium surface is simulated; the simulation data reflects the phase changes and interference effects that occur when light is reflected at the surface of a medium, helps evaluate the reflection characteristics of optical materials, and informs material selection and optical design. The change in polarization state and the interaction of light on the structured surface are then analyzed from the medium reflection interference simulation data; this is critical for understanding the polarization behavior of light on complex surfaces, and the polarized light interaction data helps optimize the design of polarizing filters, coatings and optical elements, improving the efficiency and performance of the optical system. Based on the reflecting structure polarized light interaction data, the medium reflection interference simulation data and the light reflection optical path difference data, the distortion of light under specific conditions and its critical state are analyzed; these data reflect the propagation limits and distortion of light in complex environments, and the light distortion critical state data provides key information for predicting the performance of vision systems, sensors and optical devices, helping to optimize device design and the choice of application environment. Finally, based on the light reflection distortion interval estimation data, the probability of light distortion occurring in the environment is calculated; these data help assess and predict the impact of optical distortion on vision systems and image processing, and the environment space distortion behavior probability data provides important visual environment predictions for applications such as virtual reality, augmented reality and autonomous driving, supporting real-time decision making and system optimization.
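The medium-reflection and polarized-light analysis could, for example, build on the Fresnel equations, which give the reflectance of s- and p-polarized light at a dielectric interface. The sketch below is a minimal illustration of that standard physics, not the patent's specific model; the refractive indices and angle are example values.

```python
import numpy as np

def fresnel_reflectance(n1, n2, theta_i):
    """Fresnel power reflectance for s- and p-polarised light at a
    dielectric interface (n1 -> n2), angle of incidence in radians."""
    cos_i = np.cos(theta_i)
    sin_t = n1 / n2 * np.sin(theta_i)   # Snell's law
    sin_t = np.clip(sin_t, -1.0, 1.0)   # guard against total internal reflection
    cos_t = np.sqrt(1.0 - sin_t**2)
    rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    rp = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return rs**2, rp**2

# Example: air -> glass at 45 degrees.
print(fresnel_reflectance(1.0, 1.5, np.radians(45)))
```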
Preferably, the regression interval estimation of the light distortion critical state data according to the light reflection intensity evaluation data comprises the following steps:
Performing light distortion incremental rule recognition on the light distortion critical state data according to the light reflection intensity evaluation data to obtain light distortion incremental rule data;
Performing incremental stepwise regression analysis on the light distortion incremental rule data to obtain distortion incremental stepwise regression data;
Performing multicollinearity constraint processing on the light distortion incremental rule data according to the distortion incremental stepwise regression data to obtain light distortion incremental rule collinearity constraint data;
Performing regression analysis on the light distortion incremental rule data according to the light distortion incremental rule collinearity constraint data to obtain light distortion incremental collinear regression data;
Performing regression interval estimation according to the light distortion incremental collinear regression data to obtain light reflection distortion interval estimation data.
According to the light reflection intensity evaluation data, the incremental law of light distortion is identified; this step focuses on determining how the optical distortion gradually increases or decreases under different conditions. The light distortion incremental rule data is then modeled using stepwise regression analysis, which adds the most relevant variables one at a time to optimize the predictive power of the model. Multicollinearity in the distortion incremental rule data is handled next, ensuring the robustness and reliability of the regression model; this step reduces redundant information between variables and improves prediction accuracy. Regression analysis is performed on the data after the collinearity constraint processing, further refining the prediction model of light distortion growth so that it describes the trend and law of the optical distortion more accurately. Finally, regression interval estimation is performed using the light distortion incremental collinear regression data; this provides a range within which optical distortion occurs, helping to evaluate the performance and stability of the optical system under different environmental conditions.
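One possible realization of the collinearity screening and regression interval estimation is sketched below with statsmodels: features with a high variance inflation factor (VIF) are dropped, an ordinary least squares model is fitted, and a 95% prediction interval is produced. The synthetic data, the VIF threshold of 10 and the feature layout are illustrative assumptions, not values from the patent.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical feature matrix X (distortion-increment descriptors) and
# response y (observed distortion magnitude); purely illustrative data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Multicollinearity screening: keep features whose VIF stays under 10.
Xc = sm.add_constant(X)
vifs = [variance_inflation_factor(Xc, i) for i in range(1, Xc.shape[1])]
keep = [i for i, v in enumerate(vifs) if v < 10.0]
Xk = sm.add_constant(X[:, keep])

# Fit OLS and compute a 95% prediction interval for new observations.
model = sm.OLS(y, Xk).fit()
new = sm.add_constant(rng.normal(size=(5, len(keep))), has_constant='add')
pred = model.get_prediction(new)
print(pred.conf_int(obs=True, alpha=0.05))  # per-row [lower, upper] bounds
```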
Preferably, step S3 comprises the steps of:
Step S31, performing spatial distortion structured identification on the environment space distortion behavior probability data to obtain spatial distortion structured data;
Step S32, extracting features of the spatial distortion structured data to obtain spatial distortion structure feature data;
Step S33, performing distortion structure multidimensional analysis on the spatial distortion structured data according to the spatial distortion structure feature data to obtain distortion structure multidimensional feature data;
Step S34, performing light distortion behavior correction processing according to the distortion structure multidimensional feature data to obtain light distortion behavior correction data.
The invention analyzes and identifies the probability data of light distortion behavior in the environment and organizes it into structured data. This step transforms complex environmental distortion behavior into an operable data form that can be analyzed further, providing a detailed description and statistics of the distortion phenomena as base data for subsequent analysis. Key features, such as the intensity, frequency and spatial distribution of the distortion, are extracted from the structured data; these features reflect important aspects of the distortion phenomenon, help in understanding the nature and behavior of light distortion, and provide a detailed, quantitative description on which subsequent processing and correction can be based. Based on the spatial distortion structure feature data, multidimensional statistics and analysis such as cluster analysis and principal component analysis are carried out; these analyses reveal the relationships and differences between different distortion modes, helping to identify and understand different types of light distortion pattern and providing in-depth data support for formulating correction strategies (see the sketch below). Based on the distortion structure multidimensional feature data, a correction strategy for the light distortion behavior is formulated, which may include adjusting the position of the optical device, optimizing the environmental settings, or employing digital correction techniques. Through effective corrective measures, the influence of environmental distortion on the optical system is reduced or eliminated, and the performance and stability of the system are improved.
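A minimal sketch of the multidimensional analysis named above (principal component analysis followed by cluster analysis), using scikit-learn on hypothetical distortion-feature rows; the feature layout, component count and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical rows = distorted regions; columns = features such as
# distortion intensity, frequency and spatial extent (illustrative only).
features = np.random.default_rng(1).normal(size=(120, 6))

scaled = StandardScaler().fit_transform(features)
components = PCA(n_components=2).fit_transform(scaled)   # principal components
labels = KMeans(n_clusters=3, n_init=10).fit_predict(components)

# Each cluster can then be mapped to its own correction strategy.
print(np.bincount(labels))
```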
Preferably, step S34 includes the steps of:
Step S341, performing distortion curvature calculation on the distortion structure multidimensional feature data to obtain distortion structure curvature data;
Step S342, carrying out layered feature processing on the distortion structure multidimensional feature data according to the distortion structure curvature data to obtain distortion structure curvature layered data;
Step S343, performing layered light distortion index calculation according to the distortion structure curvature layered data to obtain layered light distortion index data;
Step S344, performing light dynamic parameter correction according to the layered light distortion index data to obtain layered light dynamic parameter correction data;
Step S345, performing light distortion behavior correction processing according to the layered light dynamic parameter correction data to obtain light distortion behavior correction data.
According to the method, curvature data for each distorted structure is obtained by analyzing the distortion structure multidimensional feature data. These data reflect the specific shape and degree of bending of the light distortion, giving a more detailed description of the behavior and helping to identify the type and extent of distortion that needs correcting. Layered feature processing is then carried out based on the distortion structure curvature data, classifying the multidimensional feature data by level of curvature; this distinguishes the degree to which different bending phenomena affect the optical system. Based on the curvature layered data, light distortion indices of each layer, such as distortion intensity and range of influence, are calculated, providing a deeper quantitative analysis and concrete guidance for the subsequent correction. A dynamic parameter correction strategy for the optical system is formulated from the layered light distortion index data, involving operations such as lens adjustment and changes to the light source position; by adjusting the parameters of the optical system in real time, light distortion caused by the twisting is reduced or eliminated, improving the accuracy and stability of the optical system. Finally, the layered light dynamic parameter correction data is applied to execute the actual light distortion behavior correction, ensuring that the system output matches the expected optical performance and achieving effective management and optimization of light distortion behavior in the optical system.
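For the curvature calculation and layering, one simple possibility is the plane-curve formula κ = |y''| / (1 + y'²)^(3/2) evaluated with finite differences, followed by bucketing samples into curvature layers. The sketch below illustrates this under that assumption; the thresholds and the sampled path are invented for the example.

```python
import numpy as np

def curve_curvature(x, y):
    """Curvature kappa = |y''| / (1 + y'^2)^(3/2) of a sampled ray path,
    computed with central finite differences."""
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    return np.abs(d2y) / (1.0 + dy**2) ** 1.5

# Layer the samples by curvature magnitude (thresholds are illustrative).
x = np.linspace(0.0, 1.0, 200)
y = 0.05 * np.sin(4.0 * np.pi * x)            # a mildly bent ray path
kappa = curve_curvature(x, y)
layers = np.digitize(kappa, bins=[0.5, 2.0])  # 0 = mild, 1 = moderate, 2 = severe
print(np.bincount(layers))
```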
Preferably, the present invention also provides a laser radar camera co-location sensing system for performing the laser radar camera co-location sensing method as described above, the laser radar camera co-location sensing system comprising:
The viewing angle three-dimensional model construction module is used for extracting real-time environment light data through an onboard camera lens to obtain real-time environment light data, and constructing a mobile collaborative viewing angle three-dimensional model according to the real-time environment light data to obtain the mobile collaborative viewing angle three-dimensional model;
The environment space distortion behavior probability calculation module is used for carrying out light reflection intensity evaluation on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light reflection intensity evaluation data, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data, and carrying out environment space distortion behavior probability calculation according to the light reflection structure interference effect data to obtain environment space distortion behavior probability data;
The correction processing module is used for performing spatial distortion structured identification on the environment space distortion behavior probability data to obtain spatial distortion structured data, and performing light distortion behavior correction processing according to the spatial distortion structured data to obtain light distortion behavior correction data;
The co-located perception model construction module is used for carrying out policy gradient learning on the light distortion behavior correction data based on a policy gradient algorithm to obtain light correction policy gradient data, constructing a light co-located perception model according to the light correction policy gradient data to obtain the light co-located perception model, and sending the light co-located perception model to the data cloud platform to execute the laser radar camera co-located sensing method.
The invention also provides a medium storing a computer program which, when executed, implements the laser radar camera co-located sensing method according to any one of the above.
The invention has the beneficial effects that light information in the environment is captured in real time by the onboard camera lens, ensuring the timeliness and accuracy of the data. A three-dimensional model is constructed from the real-time data; this model reflects the actual state of the environment, including the distribution and influence of light. Based on the mobile collaborative viewing angle three-dimensional model, the real-time environment light data is evaluated and the intensity and characteristics of light reflection are quantified. From this evaluation data, the structural interference effect of light in the environment is analyzed, that is, the complex interaction of reflection, refraction and diffraction of light at object surfaces. Based on the light reflection structure interference effect data, the probability of spatial distortion behavior occurring in the environment is calculated; these data help in understanding and predicting the behavior of light in a particular environment, including its impact on visual perception and the perception of object shape. By analyzing the environment space distortion behavior probability data, specific spatial distortion structural features, such as bending, twisting or deformation of light, are identified, and based on the identified distortion features, light distortion behavior correction is performed to improve the accuracy of light transmission and of environment perception. The light distortion behavior correction data is learned and optimized using a policy gradient algorithm to find an optimal correction strategy, yielding optimized light correction policy gradient data that reflects how light transmission should be adjusted, based on real-time environmental data, to reduce distortion effects. A light co-located perception model is constructed from the result of the policy gradient learning; in a real-time or near-real-time environment, this model can adjust the positions and angles of the lidar and the camera according to the actual light conditions to achieve more accurate co-located sensing. The optimized light co-located perception model is sent to a data cloud platform so that the lidar camera co-located sensing method can be executed across a wide range of application scenarios; through the correction of light distortion behavior and the application of the light co-located perception model, the accuracy of environment and space perception is improved while errors and uncertainty are reduced. The invention is therefore an improvement on the traditional lidar camera co-located sensing method: it solves the problem that the traditional method analyzes ambient light changes inaccurately, which leads to low correction precision for environmental distortion during reflection, improving both the accuracy of the ambient light change analysis and the correction precision for environmental distortion during reflection.
Drawings
FIG. 1 is a schematic flow chart of a method for co-located sensing of a laser radar camera;
FIG. 2 is a flowchart illustrating the detailed implementation of step S2 in FIG. 1;
FIG. 3 is a flowchart illustrating the detailed implementation of step S3 in FIG. 1.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following is a clear and complete description of the technical method of the present invention, taken in conjunction with the accompanying drawings, and it is evident that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and repeated description of them is omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities; such functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different network and/or processor devices and/or microcontroller devices.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
To achieve the above objective, and with reference to FIGS. 1 to 3, a laser radar camera co-located sensing method comprises the following steps:
Step S1, extracting real-time environment light data through an onboard camera lens to obtain real-time environment light data, and constructing a mobile collaborative viewing angle three-dimensional model according to the real-time environment light data to obtain the mobile collaborative viewing angle three-dimensional model;
Step S2, carrying out light reflection intensity evaluation on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light reflection intensity evaluation data, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data, and carrying out environment space distortion behavior probability calculation according to the light reflection structure interference effect data to obtain environment space distortion behavior probability data;
Step S3, performing spatial distortion structured identification on the environment space distortion behavior probability data to obtain spatial distortion structured data, and performing light distortion behavior correction processing according to the spatial distortion structured data to obtain light distortion behavior correction data;
Step S4, performing policy gradient learning on the light distortion behavior correction data based on a policy gradient algorithm to obtain light correction policy gradient data, constructing a light co-located perception model according to the light correction policy gradient data to obtain the light co-located perception model, and transmitting the light co-located perception model to a data cloud platform to execute the laser radar camera co-located sensing method.
In the embodiment of the present invention, reference is made to FIG. 1, a schematic flow chart of the laser radar camera co-located sensing method of the present invention; in this example, the method comprises the following steps:
Step S1, extracting real-time environment light data through an onboard camera lens to obtain real-time environment light data, and constructing a mobile collaborative viewing angle three-dimensional model according to the real-time environment light data to obtain the mobile collaborative viewing angle three-dimensional model;
In the embodiment of the invention, ambient light data is captured in real time through the onboard camera lens on the sweeping robot. The onboard camera lens is arranged on the unmanned aerial vehicle and provided with a high dynamic range imaging sensor, so that ambient light can be captured accurately under different illumination conditions. The images captured in real time are noise-filtered and enhanced by a preprocessing algorithm to reduce the influence of ambient light changes on the data. Next, a mobile collaborative viewing angle three-dimensional model is constructed from these image data using multi-view stereo technology: image data obtained from different angles undergoes feature point matching, a depth map is generated by disparity calculation, and a three-dimensional model of the environment is synthesized through a three-dimensional reconstruction algorithm. The resulting three-dimensional model dynamically reflects the actual form and illumination changes of the environment and provides a precise spatial data base for subsequent analysis.
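As a minimal illustration of the disparity-based depth step in the multi-view pipeline described above, the sketch below applies the classic stereo relation depth = f·B/d; the focal length, baseline and disparity values are example numbers, and the patent's actual reconstruction pipeline is not limited to this relation.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Classic stereo relation: depth = f * B / d. Zero-disparity pixels
    (no match) are mapped to an invalid depth of +inf."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: 700 px focal length, 10 cm baseline between two viewpoints.
print(disparity_to_depth([[35.0, 0.0], [70.0, 14.0]], 700.0, 0.10))
```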
Step S2, carrying out light reflection intensity evaluation on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light reflection intensity evaluation data, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data, and carrying out environment space distortion behavior probability calculation according to the light reflection structure interference effect data to obtain environment space distortion behavior probability data;
In the embodiment of the invention, after the mobile collaborative viewing angle three-dimensional model is obtained, light reflection intensity evaluation is carried out on the real-time environment light data through ray tracing. The reflection intensity of each surface point of the three-dimensional model is calculated with a high-precision ray tracing algorithm; this process takes the light source position, the surface material properties and the ambient light scattering effect into account. The evaluation result is processed further to analyze its trend under different illumination conditions, producing the light reflection intensity evaluation data. Based on this data, the interference effect analysis of the light reflecting structure is performed: a structural interference effect model processes the reflection data and identifies and quantifies interference patterns such as light spots and fringe effects in the reflected image, yielding the light reflection structure interference effect data. Finally, using these interference effect data, an environment space distortion behavior probability calculation model is applied. The model considers both the ambient light reflection intensity and the interference effect, and through probabilistic statistical analysis calculates the probability of spatial distortion behavior caused by structural interference as light propagates in the environment, producing the environment space distortion behavior probability data. These data can be used to analyze light propagation characteristics in complex environments and support co-located perception by the lidar and the camera.
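A toy sketch of per-point reflection intensity evaluation that accounts for light position, surface normal and material, using a Phong-style model (one common choice; the patent does not name a specific shading model). All coefficients are illustrative.

```python
import numpy as np

def reflection_intensity(point, normal, light_pos, view_dir,
                         k_diffuse, k_specular, shininess):
    """Phong-style reflection at one surface point: a diffuse term that
    depends on the light direction plus a specular lobe around the mirror
    direction. Material coefficients stand in for surface properties."""
    n = normal / np.linalg.norm(normal)
    l = light_pos - point
    l = l / np.linalg.norm(l)
    diffuse = k_diffuse * max(0.0, float(n @ l))
    r = 2.0 * (n @ l) * n - l                      # mirror direction
    v = view_dir / np.linalg.norm(view_dir)
    specular = k_specular * max(0.0, float(r @ v)) ** shininess
    return diffuse + specular

p = np.array([0.0, 0.0, 0.0])
print(reflection_intensity(p, np.array([0.0, 0.0, 1.0]),
                           np.array([1.0, 1.0, 2.0]),
                           np.array([0.0, 0.0, 1.0]), 0.7, 0.3, 32))
```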
Step S3, performing spatial distortion structured identification on the environment space distortion behavior probability data to obtain spatial distortion structured data, and performing light distortion behavior correction processing according to the spatial distortion structured data to obtain light distortion behavior correction data;
In the embodiment of the invention, spatial distortion structured identification is carried out on the environment space distortion behavior probability data. Using a spatial distortion recognition algorithm, the probability data is first mapped into three-dimensional space and gridded with a high-resolution grid generation algorithm, precisely delimiting the distorted regions in space. Next, a spatial segmentation analysis method is applied to identify the specific shape and distribution characteristics of the distorted regions; the structure of each distorted region is described in detail by spatial analysis tools, and a structured dataset is generated containing the geometric features of the region, its distribution law and its position in three-dimensional space. Based on this structured data, the light distortion behavior correction is performed: the distorted regions are analyzed by reverse modeling with a light propagation model, and the propagation paths of light through these regions are calculated and corrected. The correction algorithm includes steps such as refractive index adjustment and path correction, yielding the light distortion behavior correction data, which effectively compensates for the changes light undergoes in the distorted regions and ensures the accuracy of the subsequent model.
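One way to grid the probability data and extract structured distortion regions is thresholding followed by 3D connected-component labeling, as sketched below with scipy.ndimage; the probability volume and the 0.95 threshold are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

# Hypothetical 3D grid of distortion-behaviour probabilities in [0, 1].
prob = np.random.default_rng(2).random((40, 40, 40))

# Threshold, then label connected high-probability regions in 3D.
mask = prob > 0.95                      # threshold is illustrative
labels, n_regions = ndimage.label(mask)

# Per-region geometric summaries: voxel count and centroid.
idx = list(range(1, n_regions + 1))
sizes = ndimage.sum_labels(mask, labels, index=idx)
centroids = ndimage.center_of_mass(mask, labels, idx)
print(n_regions, sizes[:3], centroids[:3])
```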
Step S4, performing policy gradient learning on the light distortion behavior correction data based on a policy gradient algorithm to obtain light correction policy gradient data, constructing a light co-located perception model according to the light correction policy gradient data to obtain the light co-located perception model, and transmitting the light co-located perception model to a data cloud platform to execute the laser radar camera co-located sensing method.
In the embodiment of the invention, based on the light distortion behavior correction data, a policy gradient algorithm is applied for policy gradient learning. A policy network is first established that receives the light distortion behavior correction data as input and processes it through a multi-layer neural network. The network parameters are optimized with the policy gradient method: the gradient is computed from the difference between the actual correction effect and the expected effect, and the network weights are adjusted accordingly. Over multiple iterations this gradually improves the policy, so that the light correction effect is continually optimized. The resulting light correction policy gradient data contains the optimal correction policy and adjustment parameters. A light co-located perception model is then constructed from these data: using the correction policy gradient data, the model integrates the measurements of the lidar and the camera and performs real-time data fusion under the correction policy. The constructed light co-located perception model is uploaded to a data cloud platform, which executes the laser radar camera co-located sensing method to realize real-time environment perception and analysis.
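The policy gradient learning could follow the REINFORCE pattern: sample a correction action from the policy network, score it against a reward that compares the actual and expected correction effect, and ascend the log-probability-weighted gradient. The PyTorch sketch below illustrates this; the network sizes, the discrete action space and the reward function are illustrative stand-ins, not the patent's specification.

```python
import torch
import torch.nn as nn

# Policy network: maps correction-data features to logits over
# a small set of discrete correction actions (sizes are illustrative).
policy = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward_fn(features, action):
    # Stand-in for "actual vs expected correction effect"; illustrative only.
    return -torch.abs(features.sum() - float(action))

for _ in range(100):
    feats = torch.randn(8)                       # correction-data features
    dist = torch.distributions.Categorical(logits=policy(feats))
    action = dist.sample()
    # REINFORCE: minimize -log pi(a|s) * reward.
    loss = -dist.log_prob(action) * reward_fn(feats, action)
    optim.zero_grad()
    loss.backward()
    optim.step()
```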
Preferably, step S1 comprises the steps of:
Step S11, acquiring real-time environment data through an onboard camera lens to obtain real-time environment data;
Step S12, extracting real-time environment light data from the real-time environment data to obtain real-time environment light data;
Step S13, performing real-time distance measurement on the obstacle based on real-time environment data by using a laser radar to obtain real-time distance measurement data of the obstacle;
Step S14, constructing a mobile collaborative viewing angle three-dimensional model of the real-time environment data according to the real-time environment light data and the obstacle real-time ranging data to obtain the mobile collaborative viewing angle three-dimensional model.
In the embodiment of the invention, the onboard camera lens on the sweeping robot captures ambient light data in real time; it is configured with a high-resolution image sensor and a real-time image processing module, and the lens is stabilized by an automatic control system to ensure the stability and clarity of image capture. The image sensor periodically collects environmental image data, including information about objects, obstacles and light conditions in the environment, and the acquired images are processed in real time for noise removal and color correction to improve data quality, yielding real-time environment data comprising a series of high-resolution images. The real-time environment light data is then extracted from this data: image frames are taken from the real-time environment data obtained in step S11, the light information in each image is analyzed in detail with high dynamic range image processing to identify and measure light intensity and direction, a ray tracing algorithm extracts the light reflection information of each pixel, and a spectral analysis method resolves the light data into per-band intensities. The resulting real-time environment light data contains the light intensity and distribution of each spectral band and reflects the current illumination conditions and their changes. The lidar emits laser pulses and receives the signals reflected from obstacles, from which the distance between each obstacle and the lidar is calculated; to improve ranging accuracy, the lidar measures repeatedly at different angles and positions, and the measurements are combined by a weighted average using sensor fusion (see the sketch below). Combined with the image information in the real-time environment data, the obstacle real-time ranging data provides the precise position and distance of each obstacle. Finally, the mobile collaborative viewing angle three-dimensional model is constructed from the real-time environment light data and the obstacle real-time ranging data: the two data sources are fused, depth information is processed with a stereoscopic vision algorithm, the multi-view images captured by the camera lens are registered with the lidar ranging data by image matching to generate a preliminary three-dimensional point cloud, and a three-dimensional reconstruction algorithm turns the point cloud into a complete model of the environment. Ray tracing is then used to simulate the lighting of the model, and the lighting is adjusted against the light data so that the model reflects the real light conditions and obstacle positions. The resulting mobile collaborative viewing angle three-dimensional model accurately represents the three-dimensional structure of the environment and its illumination characteristics, providing base data for subsequent analysis.
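The weighted averaging of repeated lidar measurements mentioned above can be realized, for instance, as an inverse-variance weighted fusion; the sketch below shows this standard estimator with example readings (the patent does not specify the weighting scheme).

```python
import numpy as np

def fuse_ranges(ranges, variances):
    """Inverse-variance weighted average of repeated lidar range
    measurements of the same obstacle, with the fused variance."""
    ranges = np.asarray(ranges, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * ranges) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# Example: three measurements taken from different angles/positions.
print(fuse_ranges([2.31, 2.29, 2.34], [0.01, 0.02, 0.04]))
```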
Preferably, step S2 comprises the steps of:
Step S21, carrying out light change fluctuation analysis on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light change fluctuation data;
Step S22, evaluating the light reflection intensity of the light change fluctuation data to obtain light reflection intensity evaluation data;
Step S23, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data;
Step S24, performing environment space distortion behavior probability calculation according to the light reflection structure interference effect data and the light reflection intensity evaluation data to obtain environment space distortion behavior probability data.
As an example of the present invention, referring to FIG. 2, step S2 in this example includes:
Step S21: performing light change fluctuation analysis on the real-time ambient light data based on the mobile collaborative viewing-angle three-dimensional model to obtain light change fluctuation data;
In the embodiment of the invention, light change fluctuation analysis is performed on the real-time ambient light data based on the mobile collaborative viewing-angle three-dimensional model. First, the three-dimensional model and the real-time ambient light data obtained in step S14 are aligned so that the model and the light data are spatially consistent. A light propagation simulation algorithm then analyzes the light fluctuation at every surface point of the model, measuring how the light intensity varies across different time points and positions. During the analysis, a local light fluctuation calculation is applied: the light-intensity time series is Fourier transformed, and the frequency components and amplitude changes of the fluctuation are extracted. The result is light change fluctuation data, which contain the fluctuation characteristics of the light intensity in space and time and serve as the basis for the subsequent evaluation and analysis.
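The Fourier-transform step can be sketched as follows. The sampling rate `fs`, the DC-removal choice, and the function name are assumptions; the text only states that frequency components and amplitudes of the fluctuation are extracted from the intensity time series.

```python
import numpy as np

def light_fluctuation_spectrum(intensity, fs):
    """FFT of a light-intensity time series at one surface point.

    intensity: (T,) samples of light intensity at a surface point
    fs: sampling rate in Hz
    Returns the positive frequencies and their amplitudes, i.e. the
    frequency components and amplitude changes of the fluctuation.
    """
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()                      # remove DC so only fluctuation remains
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    amps = np.abs(spectrum) * 2.0 / x.size  # single-sided amplitude spectrum
    return freqs, amps
```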
Step S22: evaluating the light reflection intensity of the light change fluctuation data to obtain light reflection intensity evaluation data;
In the embodiment of the invention, light reflection intensity evaluation is performed on the light change fluctuation data. First, the intensity fluctuation characteristics of each ray are extracted from the light change fluctuation data obtained in step S21. A reflection intensity evaluation algorithm statistically analyzes the fluctuation data and computes the reflection intensity of light at different angles and on different surfaces; specifically, the local mean and standard deviation of the fluctuation data are calculated to assess the stability of the reflected intensity. The reflection intensity on each surface is then computed in combination with the material properties of the surfaces in the three-dimensional model, generating the corresponding reflection intensity data. The result is light reflection intensity evaluation data, which reflect the reflection characteristics of light and how they change under different environmental conditions.
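A minimal sketch of the local mean and standard deviation calculation might look like this. The window length and the stability score formula are illustrative assumptions, not specified by the text.

```python
import numpy as np

def reflection_stability(fluctuation, window=16):
    """Sliding local mean and standard deviation of light fluctuation.

    fluctuation: (T,) fluctuation samples for one reflecting surface
    window: block length (an assumed tuning parameter)
    Returns per-block mean, std, and a heuristic stability score in (0, 1],
    where 1 means perfectly steady reflected intensity.
    """
    x = np.asarray(fluctuation, dtype=float)
    n = x.size // window
    blocks = x[: n * window].reshape(n, window)
    mean = blocks.mean(axis=1)
    std = blocks.std(axis=1)
    stability = 1.0 / (1.0 + std / (np.abs(mean) + 1e-9))  # heuristic score
    return mean, std, stability
```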
Step S23: performing structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data;
In the embodiment of the invention, structural interference effect analysis is performed on the light reflection intensity evaluation data. First, based on the data obtained in step S22, an interference effect analysis algorithm evaluates the light reflection pattern: an interference fringe detection method identifies fringe patterns by analyzing periodic variations in the reflected intensity data. Spatial frequency analysis then examines the spatial distribution of the fringes in detail and computes the intensity and phase of the interference effects. The data are further modeled with an interference effect model to quantify the interference present in the reflections, including structural effects such as light spots and light bands. Finally, light reflection structure interference effect data are generated, describing the characteristics and distribution of the interference effects in the reflection process.
Step S24: calculating the probability of environmental spatial distortion behavior according to the light reflection structure interference effect data and the light reflection intensity evaluation data to obtain environmental spatial distortion behavior probability data.
In the embodiment of the invention, the probability of environmental spatial distortion behavior is calculated from the light reflection structure interference effect data and the light reflection intensity evaluation data. First, the interference effect data obtained in step S23 are fused with the reflection intensity evaluation data from step S22. An environmental spatial distortion calculation model takes the reflection intensity and the interference effects as input and simulates the spatial distortion that interference causes as light propagates through the environment; concretely, a light propagation model of the environment is built from the reflection data, and a spatial distortion probability algorithm quantifies the probability of distortion along each propagation path. The algorithm combines statistical methods to jointly analyze the interference effects and the intensity changes and computes the probability that light is spatially distorted in the environment. Finally, environmental spatial distortion behavior probability data are generated, giving the likelihood and distribution of light distortion in the environment.
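As a hedged sketch of how interference strength and intensity statistics could be combined into a distortion probability, a logistic mapping is one plausible statistical choice. The weights `a`, `b`, and `bias` are hypothetical and would be fitted on calibration data; the text only states that interference effects and intensity changes are analyzed jointly with statistical methods.

```python
import numpy as np

def distortion_probability(interference_strength, intensity_variance,
                           a=1.0, b=1.0, bias=-3.0):
    """Map interference strength and intensity variance to a probability.

    interference_strength: per-path interference effect measure
    intensity_variance: per-path variance of reflected intensity
    a, b, bias: assumed weights of the logistic combination
    Returns the probability of spatial distortion along each path.
    """
    z = (a * np.asarray(interference_strength, dtype=float)
         + b * np.asarray(intensity_variance, dtype=float) + bias)
    return 1.0 / (1.0 + np.exp(-z))   # logistic link to (0, 1)
```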
Preferably, step S23 comprises the steps of:
Step S231: marking the light reflection paths of the light reflection intensity evaluation data to obtain light reflection path marker data;
Step S232: performing light reflection interlaced structure analysis according to the light reflection path marker data to obtain light reflection interlaced structure data;
Step S233: evaluating the light color difference between different reflected rays according to the light reflection intensity evaluation data to obtain reflected light color difference data;
Step S234: calculating the light wave phase difference between different reflected rays on the light reflection interlaced structure data according to the reflected light color difference data and the light reflection intensity evaluation data to obtain reflection interlaced structure light wave phase difference data;
Step S235: performing structural interference effect analysis based on the reflection interlaced structure light wave phase difference data to obtain light reflection structure interference effect data.
In the embodiment of the invention, light reflection path marking is performed on the light reflection intensity evaluation data. First, the path information of each ray is extracted from the evaluation data obtained in step S22. A ray tracing algorithm marks the propagation path of each ray in the three-dimensional environment model, recording its incidence point, reflection point, and propagation path; path tracking is applied to every ray to guarantee the accuracy of the marking. The marker data are stored in a data structure containing the reflection points, path lengths, and reflection angles of the rays. Finally, light reflection path marker data are generated, detailing each ray's propagation path and its interaction with environmental surfaces.
Light reflection interlaced structure analysis is then performed according to the path marker data. A light interlacing pattern analysis method examines the ray paths in the marker data: first, a ray intersection detection algorithm identifies the intersection areas in the path data. The interlaced structure analysis includes high-resolution imaging of the points where reflection paths cross and extraction of the spatial features of the interlaced areas with image processing techniques, and the interaction pattern of the rays within the interlaced structure is analyzed from their reflection angles and path lengths. The result is light reflection interlaced structure data describing the structural features and distribution of the rays in the interlaced areas.
Next, the light color difference between different reflected rays is evaluated according to the reflection intensity evaluation data. A chromatic aberration calculation algorithm analyzes the reflection intensity data of each ray in the interlaced structure data; a color space conversion converts the intensity data into color difference values, and spectral resolution analysis evaluates the color difference between rays, computing both absolute and relative values. The resulting reflected light color difference data give the color differences of the rays in the interlaced region.
Finally, the light wave phase difference between different reflected rays is calculated from the color difference data and the reflection intensity evaluation data. Light wave phase measurement processes the reflection intensity data to obtain the phase of each ray; interferometry analyzes the color difference data to compute the phase difference between the reflected rays, and a phase calculation algorithm quantifies the phase differences within the interlaced structure and integrates the phase difference data of the different rays. The generated reflection interlaced structure light wave phase difference data reflect the phase relations of the rays in the interlaced region.
Structural interference effect analysis is then performed on the reflection interlaced structure light wave phase difference data. First, the phase difference information is extracted from the data obtained in step S234. An interference effect model performs fringe analysis on the phase differences to determine the interference pattern of the rays in the interlaced area, and a light wave interference algorithm computes the intensity and distribution of the interference fringes. An interference effect quantification method then analyzes the strength and spatial distribution of the interference, producing the light reflection structure interference effect data. The final data describe the interference that arises in the interlaced structure from the phase differences of the light waves.
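The intensity computation here can rest on the standard two-beam interference relation I = I1 + I2 + 2·sqrt(I1·I2)·cos(Δφ). The sketch below applies it to per-ray intensities and the phase differences from step S234; only the physics formula is standard, while the function interface is an illustrative assumption.

```python
import numpy as np

def interference_intensity(i1, i2, phase_diff):
    """Two-beam interference intensity from a phase difference.

    i1, i2: intensities of the two interfering reflected rays
    phase_diff: their phase difference in radians
    Returns the combined intensity and the fringe visibility
    V = 2*sqrt(I1*I2) / (I1 + I2), which quantifies how strong
    the interference pattern is (1 = fully modulated fringes).
    """
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    dphi = np.asarray(phase_diff, dtype=float)
    total = i1 + i2 + 2.0 * np.sqrt(i1 * i2) * np.cos(dphi)
    visibility = 2.0 * np.sqrt(i1 * i2) / (i1 + i2)
    return total, visibility
```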
Preferably, step S24 comprises the steps of:
Step S241: calculating the optical path difference of different light reflections on the light reflection structure interference effect data to obtain light reflection optical path difference data;
Step S242: performing medium reflection interference simulation on the light reflection structure interference effect data according to the light reflection optical path difference data to obtain medium reflection interference simulation data;
Step S243: performing polarized light interaction analysis on the light reflection structure interference effect data according to the medium reflection interference simulation data to obtain reflection structure polarized light interaction data;
Step S244: performing light distortion critical state analysis based on the reflection structure polarized light interaction data, the medium reflection interference simulation data, and the light reflection optical path difference data to obtain light distortion critical state data;
Step S245: performing regression interval estimation on the light distortion critical state data according to the light reflection intensity evaluation data to obtain light reflection distortion interval estimation data;
Step S246: calculating the probability of environmental spatial distortion behavior based on the light reflection distortion interval estimation data to obtain environmental spatial distortion behavior probability data.
In the embodiment of the invention, the optical path difference of different light reflections is calculated from the light reflection structure interference effect data. First, the interference fringe information of the reflected rays is extracted from the data obtained in step S235. An optical path difference algorithm compares the reflection paths of the different rays: for each ray, the optical path from its incidence point to its reflection point is computed from the points and paths recorded in the light reflection path marker data. Comparing the path differences of the reflected rays yields the light reflection optical path difference data, which describe the differences that arise between rays during interference because of their differing paths.
Medium reflection interference simulation is then performed on the interference effect data according to the optical path difference data. The optical path difference information extracted from the data of step S241 is fed into a medium reflection interference simulation model, which simulates the interference produced as light passes through different media. An interference simulation tool computes the reflection and transmission of light at each medium interface and simulates the interference pattern and intensity distribution. The result is medium reflection interference simulation data describing the reflection interference of light at the various medium interfaces.
Polarized light interaction analysis is next performed on the interference effect data according to the simulation data. The polarized light interaction information is extracted from the simulation data obtained in step S242, and a polarized light analysis tool examines how the polarization state of the light changes at each medium interface: the degree and angle of polarization are measured, and a polarization interferometer records the polarization characteristics of the light under the interaction of the different media. These data are analyzed together with the interference effect data to generate the reflection structure polarized light interaction data, which show the polarization interactions of light in different media.
Finally, light distortion critical state analysis is carried out on the basis of the polarized light interaction data, the medium reflection interference simulation data, and the optical path difference data. The data from steps S243, S242, and S241 are integrated, and a light distortion analysis model jointly analyzes them to compute the critical distortion state of light under the combined influence of interference and polarization effects.
Specifically, an optical simulation tool analyzes the critical conditions of the distorted state from the reflection and distortion characteristics of the light, yielding light distortion critical state data that describe the distortion of light under different interference and polarization conditions.
Regression interval estimation is then performed on the critical state data according to the light reflection intensity evaluation data. Light intensity information is extracted from the evaluation data obtained in step S22 and, together with the critical state data from step S244, a regression analysis estimates intervals for the distortion state: the critical state data are modeled by regression, the effect of light intensity on the distortion state is computed, and the range of light distortion is estimated. The resulting light reflection distortion interval estimation data describe the degree of distortion under different light intensity conditions.
Finally, the probability of environmental spatial distortion behavior is calculated from the interval estimation data. The distortion interval information is extracted from the data obtained in step S245 and combined with the light reflection intensity evaluation data and the light distortion critical state data, and a probability model computes the probability of light distortion in the environment: a probability distribution analysis of the distortion intervals, combined with the light propagation characteristics of the environment model, yields the distribution of light distortion probabilities. The final environmental spatial distortion behavior probability data provide the probability and distribution of light distortion in the environment.
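The optical path difference feeding the simulation of step S242 relates to phase through the standard relation Δφ = 2π·n·ΔL/λ. A minimal sketch, assuming a 905 nm wavelength typical of lidar and a uniform medium (both assumptions for illustration):

```python
import numpy as np

def optical_path_phase(path_a, path_b, wavelength, n_medium=1.0):
    """Phase difference produced by an optical path difference.

    path_a, path_b: geometric path lengths (metres) of two reflected rays
    wavelength: vacuum wavelength (metres)
    n_medium: refractive index of the propagation medium
    Uses delta_phi = 2*pi * n * (L_a - L_b) / lambda.
    """
    opd = n_medium * (np.asarray(path_a, dtype=float)
                      - np.asarray(path_b, dtype=float))
    return 2.0 * np.pi * opd / wavelength

# Two rays differing by 150 nm of path at 905 nm -> about 1.04 rad
dphi = optical_path_phase(1.000000150, 1.0, 905e-9)
```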
Preferably, performing regression interval estimation on the light distortion critical state data according to the light reflection intensity evaluation data includes the following steps:
Performing light distortion incremental rule recognition on the light distortion critical state data according to the light reflection intensity evaluation data to obtain light distortion incremental rule data;
Performing incremental stepwise regression analysis on the light distortion incremental rule data to obtain distortion incremental stepwise regression data;
Performing multicollinearity constraint processing on the light distortion incremental rule data according to the distortion incremental stepwise regression data to obtain light distortion incremental rule collinearity constraint data;
Performing regression analysis on the light distortion incremental rule data according to the light distortion incremental rule collinearity constraint data to obtain light distortion incremental collinear regression data;
Performing regression interval estimation according to the light distortion incremental collinear regression data to obtain light reflection distortion interval estimation data.
In the embodiment of the invention, light distortion incremental rule recognition is performed on the light distortion critical state data according to the light reflection intensity evaluation data. First, the records of light intensity changes and distortion degree are extracted from the evaluation data. Curve fitting models how the degree of distortion trends with the light intensity: with the light intensity as the independent variable and the degree of distortion as the dependent variable, an incremental trend analysis algorithm identifies the incremental rule, yielding light distortion incremental rule data that detail the growth of the distortion degree under different light intensities.
Incremental stepwise regression analysis is then performed on the incremental rule data. The data are imported into a regression analysis tool for stepwise modeling: starting from the simplest model, new explanatory variables are added one at a time, and the contribution of each step to the model fit is evaluated. Adjusting and optimizing the regression model determines how strongly each variable influences the incremental rule, producing distortion incremental stepwise regression data that describe the specific effect of light intensity on the growth of distortion.
Multicollinearity constraint processing is next applied to the incremental rule data according to the stepwise regression data. First, a collinearity diagnostic tool evaluates the correlation between the explanatory variables in the regression model and computes the variance inflation factor (VIF). Variables with severe collinearity are screened and adjusted, and a multicollinearity constraint algorithm rebalances the variable combination in the model to reduce the correlation between variables. The result is light distortion incremental rule collinearity constraint data, which reflect the incremental rule after the collinearity constraint.
Regression analysis is then performed on the incremental rule data according to the collinearity constraint data: a linear regression tool builds a new regression model from the constrained data and evaluates the relation between light intensity and degree of distortion, generating light distortion incremental collinear regression data that describe the regression results after collinearity processing.
Finally, regression interval estimation is performed on the collinear regression data. The regression results from the preceding step are used in an interval estimation model to compute confidence intervals for the degree of light distortion.
Specifically, a confidence interval is computed for each regression coefficient from the coefficients and standard errors of the regression model, and an interval estimation tool consolidates the results into the light reflection distortion interval estimation data. These data provide confidence intervals for the degree of light distortion at different light intensities, showing the expected range of distortion.
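One way to realize the VIF screening and confidence-interval estimation described above is sketched below with ordinary least squares in NumPy. The VIF > 10 rule of thumb and the 1.96 multiplier for an approximate 95% interval are conventional statistical choices, not taken from the patent.

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of design matrix X."""
    X = np.asarray(X, dtype=float)
    factors = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        factors.append(1.0 / max(1.0 - r2, 1e-12))  # VIF_j = 1/(1-R_j^2)
    return np.array(factors)

def regression_interval(X, y, t_mult=1.96):
    """OLS fit plus approximate confidence intervals for the coefficients.

    Columns with VIF > 10 (a common rule of thumb) would be dropped or
    combined before calling this; t_mult=1.96 approximates a 95% CI
    for reasonably large samples.
    """
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    dof = len(y) - A.shape[1]
    sigma2 = resid @ resid / dof            # residual variance estimate
    cov = sigma2 * np.linalg.inv(A.T @ A)   # coefficient covariance
    se = np.sqrt(np.diag(cov))
    return beta, np.column_stack([beta - t_mult * se, beta + t_mult * se])
```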
Preferably, step S3 comprises the steps of:
Step S31: performing spatial distortion structured recognition on the environmental spatial distortion behavior probability data to obtain spatial distortion structured data;
Step S32: extracting features from the spatial distortion structured data to obtain spatial distortion structure feature data;
Step S33: performing distortion structure multidimensional analysis on the spatial distortion structured data according to the spatial distortion structure feature data to obtain distortion structure multidimensional feature data;
Step S34: performing light distortion behavior correction processing according to the distortion structure multidimensional feature data to obtain light distortion behavior correction data.
As an example of the present invention, referring to FIG. 3, step S3 in this example includes:
Step S31: performing spatial distortion structured recognition on the environmental spatial distortion behavior probability data to obtain spatial distortion structured data;
In the embodiment of the invention, spatial distortion structured recognition is performed on the environmental spatial distortion behavior probability data. First, the probability data are partitioned on a preset spatial grid, each cell representing one spatial unit. A spatial data analysis tool identifies and classifies the distortion behavior of each unit, marking and structuring the distortion patterns in the data through thresholds and classification rules. A spatial mapping algorithm then identifies regions with similar distortion characteristics, grouping and labeling them to generate the spatial distortion structured data, which describe the regional distribution and structural characteristics of the distortion behavior in space.
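A minimal sketch of the grid partitioning and region labeling, assuming a 2D probability grid and a 0.5 threshold (both illustrative):

```python
import numpy as np
from scipy import ndimage

def structure_distortion_field(prob_grid, threshold=0.5):
    """Label connected regions of high distortion probability.

    prob_grid: 2D array of per-cell distortion probabilities, i.e. the
    probability data partitioned on a preset spatial grid
    threshold: assumed cut-off separating 'distorted' cells
    Returns an integer label map (0 = background) and per-region stats.
    """
    grid = np.asarray(prob_grid, dtype=float)
    mask = grid >= threshold
    labels, n_regions = ndimage.label(mask)   # group adjacent distorted cells
    regions = []
    for k in range(1, n_regions + 1):
        cells = labels == k
        regions.append({
            "label": k,
            "area": int(cells.sum()),          # number of grid cells
            "mean_prob": float(grid[cells].mean()),
        })
    return labels, regions
```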
Step S32: extracting features from the spatial distortion structured data to obtain spatial distortion structure feature data;
In the embodiment of the invention, features are extracted from the spatial distortion structured data. A feature extraction algorithm derives geometric features for each marked spatial unit, including degree of distortion, shape features, and boundary information. First, the geometric attributes of each unit are computed, such as distortion angle, area change, and edge curvature; these attributes are then quantized into feature vectors, forming the spatial distortion structure feature data. Produced with feature engineering tools, these data describe the specific geometric characteristics of the spatial distortion and support the subsequent analysis.
Step S33: performing distortion structure multidimensional analysis on the spatial distortion structured data according to the spatial distortion structure feature data to obtain distortion structure multidimensional feature data;
In the embodiment of the invention, distortion structure multidimensional analysis is performed on the spatial distortion structured data according to the structure feature data. A multidimensional data analysis method examines the extracted features in depth: principal component analysis (PCA) or factor analysis reduces the dimensionality of the feature data, converting the multidimensional features into principal components or factors. Building a multidimensional spatial model then reveals the main influencing factors of the spatial distortion and their correlations, yielding the distortion structure multidimensional feature data, which expose the complexity and the principal driving factors of the distortion behavior in multidimensional space.
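The PCA reduction can be sketched with a singular value decomposition; the number of retained components `k` is an assumed tuning choice, typically set from the explained-variance ratio.

```python
import numpy as np

def pca_reduce(features, k=2):
    """Project distortion feature vectors onto their top-k principal axes.

    features: (n_samples, n_features) spatial-distortion feature matrix
    k: number of principal components to keep (assumed tuning parameter)
    Returns the component scores and the explained-variance ratios.
    """
    X = np.asarray(features, dtype=float)
    Xc = X - X.mean(axis=0)                   # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                    # principal-component scores
    explained = (S ** 2) / np.sum(S ** 2)     # variance ratio per component
    return scores, explained[:k]
```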
Step S34: performing light distortion behavior correction processing according to the distortion structure multidimensional feature data to obtain light distortion behavior correction data.
In the embodiment of the invention, light distortion behavior correction is performed according to the distortion structure multidimensional feature data. First, a light distortion behavior model is built from the multidimensional feature data obtained in step S33 to describe how light changes in a distorted environment. A correction algorithm then adjusts the light distortion behavior: the distortion parameters in the multidimensional feature data are applied to the light model, and the light is corrected for the distorted environment through calculation and adjustment. Finally, light distortion behavior correction data are generated; applied to the original light in the actual environment, they counteract the distortion effects and improve the co-location perception accuracy of the laser radar and the camera.
Preferably, step S34 includes the steps of:
Step S341: performing distortion curvature calculation on the distortion structure multidimensional feature data to obtain distortion structure curvature data;
Step S342: performing layered feature processing on the distortion structure multidimensional feature data according to the distortion structure curvature data to obtain distortion structure curvature layered data;
Step S343: performing layered light distortion index calculation according to the distortion structure curvature layered data to obtain layered light distortion index data;
Step S344: performing light dynamic parameter correction according to the layered light distortion index data to obtain layered light dynamic parameter correction data;
Step S345: performing light distortion behavior correction processing according to the layered light dynamic parameter correction data to obtain light distortion behavior correction data.
In the embodiment of the invention, distortion curvature calculation is performed on the distortion structure multidimensional feature data. First, the multidimensional feature data are fed into a curvature analysis tool, which obtains the degree of bending of each light distortion region by computing the local and global curvature at every data point. The calculation uses the curvature formulas of differential geometry, which involve the second derivatives at each spatial point to determine its curvature in multidimensional space. The results are consolidated into the distortion structure curvature data, a specific quantitative measure of how light bends in the distorted environment.
Layered feature processing is then applied to the multidimensional feature data according to the curvature data. A layering algorithm stratifies the curvature data: the data are first divided into layers such as light, medium, and heavy distortion according to the magnitude of the curvature values, and statistical features such as the mean curvature and standard deviation are then computed for each layer. These layer features reflect the distribution of the different degrees of distortion, and the resulting distortion structure curvature layered data support the subsequent analysis.
Next, the layered light distortion index is calculated from the curvature layered data. For each layer, a light distortion model computes a distortion index: the light of each layer is simulated to determine its degree of distortion within that curvature layer, including the refraction, reflection, and scattering behavior of light at that distortion level. This yields the layered light distortion index data, which describe how light changes at each distortion level.
The light dynamic parameters are then corrected according to the layered distortion index data. The index data are input to a light dynamic correction algorithm, which adjusts the dynamic parameters of the light, including intensity, wavelength, and phase, by comparing the actually measured light data with the model predictions. Optimization, with both linear and nonlinear steps, tunes the parameters so that the behavior of the light in the actual environment matches the expected behavior, and the resulting layered light dynamic parameter correction data are used for the final correction.
Finally, light distortion behavior correction is performed according to the layered dynamic parameter correction data. The corrected parameters are applied to the light distortion behavior model, and the distortion effects of the light in the real environment are corrected by inverse simulation and adjustment of the correction data. This process applies the layered light dynamic parameters to the actual laser radar and camera data and fine-tunes the light distortion behavior with the correction algorithm.
The generated light distortion behavior correction data improve the perception accuracy of the laser radar and camera system and ensure the accuracy and reliability of the measurement results.
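The plane-curve curvature formula from differential geometry, κ = |y''| / (1 + y'²)^{3/2}, and the three-layer stratification can be sketched as follows; the layer thresholds are assumed tuning values, not given in the text.

```python
import numpy as np

def curvature_profile(y, dx=1.0):
    """Curvature kappa = |y''| / (1 + y'^2)^1.5 along one grid line.

    y: sampled displacement of a distortion feature along a grid line
    dx: grid spacing; derivatives are taken numerically
    """
    y = np.asarray(y, dtype=float)
    dy = np.gradient(y, dx)        # first derivative y'
    d2y = np.gradient(dy, dx)      # second derivative y''
    return np.abs(d2y) / (1.0 + dy ** 2) ** 1.5

def layer_by_curvature(kappa, light=0.1, heavy=0.5):
    """Split curvature values into light / medium / heavy distortion layers.

    The two thresholds are assumed tuning values. Returns an integer
    layer id per sample: 0 = light, 1 = medium, 2 = heavy.
    """
    return np.digitize(np.asarray(kappa, dtype=float), [light, heavy])
```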
Preferably, the present invention also provides a laser radar camera co-location perception system for performing the laser radar camera co-location perception method described above, the laser radar camera co-location perception system comprising:
a viewing-angle three-dimensional model construction module, used for extracting real-time ambient light data through the on-board camera lens to obtain real-time ambient light data, and for constructing a mobile collaborative viewing-angle three-dimensional model from the real-time ambient light data to obtain the mobile collaborative viewing-angle three-dimensional model;
an environmental spatial distortion behavior probability calculation module, used for evaluating the light reflection intensity of the real-time ambient light data based on the mobile collaborative viewing-angle three-dimensional model to obtain light reflection intensity evaluation data, for performing structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data, and for calculating the probability of environmental spatial distortion behavior from the light reflection structure interference effect data to obtain environmental spatial distortion behavior probability data;
a correction processing module, used for performing spatial distortion structured recognition on the environmental spatial distortion behavior probability data to obtain spatial distortion structured data, and for performing light distortion behavior correction processing according to the spatial distortion structured data to obtain light distortion behavior correction data;
a co-location perception model construction module, used for performing policy gradient learning on the light distortion behavior correction data based on a policy gradient algorithm to obtain light correction policy gradient data, and for constructing a light co-location perception model from the light correction policy gradient data; the light co-location perception model is sent to the data cloud platform to execute the laser radar camera co-location perception method.
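For the policy gradient learning performed by this module, a minimal REINFORCE-style sketch with a Gaussian policy is shown below. The linear policy, the reward definition (negative residual distortion), and the hyperparameters are illustrative assumptions standing in for the multi-layer policy network described in the text.

```python
import numpy as np

class CorrectionPolicy:
    """Minimal REINFORCE-style policy-gradient sketch.

    State: light-distortion correction features; action: a scalar
    correction gain sampled from a Gaussian whose mean the policy
    outputs; reward: e.g. negative residual distortion after correction.
    """
    def __init__(self, n_features, lr=1e-2, sigma=0.1):
        self.w = np.zeros(n_features)   # single linear layer (assumed)
        self.lr = lr
        self.sigma = sigma

    def act(self, state):
        mu = state @ self.w
        return np.random.normal(mu, self.sigma)

    def update(self, states, actions, rewards):
        # REINFORCE: w += lr * (r - baseline) * grad log pi(a|s)
        # for a Gaussian policy, grad log pi = (a - mu)/sigma^2 * s
        baseline = float(np.mean(rewards))  # variance-reducing baseline
        for s, a, r in zip(states, actions, rewards):
            mu = s @ self.w
            grad_logp = (a - mu) / self.sigma ** 2 * s
            self.w += self.lr * (r - baseline) * grad_logp
```

Over repeated iterations the update shifts the policy mean toward actions that earned above-baseline rewards, mirroring the iterative improvement of the correction strategy described in the text.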
The invention also provides a computer program which, when executed, implements the laser radar camera co-location perception method according to any one of the above.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (4)

1. A laser radar camera co-location perception method, characterized in that it comprises the following steps:
Step S1: extracting real-time ambient light data through an on-board camera lens to obtain real-time ambient light data; constructing a mobile collaborative viewing-angle three-dimensional model from the real-time ambient light data to obtain the mobile collaborative viewing-angle three-dimensional model; wherein step S1 comprises:
Step S11: collecting real-time environment data through the on-board camera lens to obtain real-time environment data;
Step S12: extracting real-time ambient light data from the real-time environment data to obtain the real-time ambient light data;
Step S13: performing real-time obstacle ranging by laser radar based on the real-time environment data to obtain real-time obstacle ranging data;
Step S14: constructing the mobile collaborative viewing-angle three-dimensional model from the real-time environment data according to the real-time ambient light data and the real-time obstacle ranging data;
Step S2: evaluating the light reflection intensity of the real-time ambient light data based on the mobile collaborative viewing-angle three-dimensional model to obtain light reflection intensity evaluation data; performing structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data; calculating the probability of environmental spatial distortion behavior from the light reflection structure interference effect data to obtain environmental spatial distortion behavior probability data; wherein step S2 comprises:
Step S21: performing light change fluctuation analysis on the real-time ambient light data based on the mobile collaborative viewing-angle three-dimensional model to obtain light change fluctuation data;
Step S22: evaluating the light reflection intensity of the light change fluctuation data to obtain the light reflection intensity evaluation data;
Step S23: performing structural interference effect analysis on the light reflection intensity evaluation data to obtain the light reflection structure interference effect data; wherein step S23 comprises:
Step S231: marking the light reflection paths of the light reflection intensity evaluation data to obtain light reflection path marker data;
Step S232: performing light reflection interlaced structure analysis according to the light reflection path marker data to obtain light reflection interlaced structure data;
Step S233: evaluating the light color difference between different reflected rays on the light reflection interlaced structure data according to the light reflection intensity evaluation data to obtain reflected light color difference data;
Step S234: calculating the light wave phase difference between different reflected rays on the light reflection interlaced structure data according to the reflected light color difference data and the light reflection intensity evaluation data to obtain reflection interlaced structure light wave phase difference data;
Step S235: performing structural interference effect analysis based on the reflection interlaced structure light wave phase difference data to obtain the light reflection structure interference effect data;
Step S24: calculating the probability of environmental spatial distortion behavior according to the light reflection structure interference effect data and the light reflection intensity evaluation data to obtain the environmental spatial distortion behavior probability data; wherein step S24 comprises:
Step S241: calculating the optical path difference of different light reflections on the light reflection structure interference effect data to obtain light reflection optical path difference data;
Step S242: performing medium reflection interference simulation on the light reflection structure interference effect data according to the light reflection optical path difference data to obtain medium reflection interference simulation data;
Step S243: performing polarized light interaction analysis on the light reflection structure interference effect data according to the medium reflection interference simulation data to obtain reflection structure polarized light interaction data;
Step S244: performing light distortion critical state analysis based on the reflection structure polarized light interaction data, the medium reflection interference simulation data, and the light reflection optical path difference data to obtain light distortion critical state data;
Step S245: performing regression interval estimation on the light distortion critical state data according to the light reflection intensity evaluation data to obtain light reflection distortion interval estimation data;
Step S246: calculating the probability of environmental spatial distortion behavior based on the light reflection distortion interval estimation data to obtain the environmental spatial distortion behavior probability data;
Step S3: performing spatial distortion structured recognition on the environmental spatial distortion behavior probability data to obtain spatial distortion structured data; performing light distortion behavior correction processing according to the spatial distortion structured data to obtain light distortion behavior correction data; wherein step S3 comprises:
Step S31: performing spatial distortion structured recognition on the environmental spatial distortion behavior probability data to obtain the spatial distortion structured data;
Step S32: extracting features from the spatial distortion structured data to obtain spatial distortion structure feature data;
Step S33: performing distortion structure multidimensional analysis on the spatial distortion structured data according to the spatial distortion structure feature data to obtain distortion structure multidimensional feature data;
Step S34: performing light distortion behavior correction processing according to the distortion structure multidimensional feature data to obtain the light distortion behavior correction data; wherein step S34 comprises:
Step S341: performing distortion curvature calculation on the distortion structure multidimensional feature data to obtain distortion structure curvature data;
Step S342: performing layered feature processing on the distortion structure multidimensional feature data according to the distortion structure curvature data to obtain distortion structure curvature layered data;
Step S343: performing layered light distortion index calculation according to the distortion structure curvature layered data to obtain layered light distortion index data;
Step S344: performing light dynamic parameter correction according to the layered light distortion index data to obtain layered light dynamic parameter correction data;
Step S345: performing light distortion behavior correction processing according to the layered light dynamic parameter correction data to obtain the light distortion behavior correction data;
Step S4: performing policy gradient learning on the light distortion behavior correction data based on a policy gradient algorithm to obtain light correction policy gradient data; constructing a light co-location perception model from the light correction policy gradient data to obtain the light co-location perception model, and sending the light co-location perception model to a data cloud platform to execute the laser radar camera co-location perception method; wherein constructing the light co-location perception model comprises: applying the policy gradient algorithm to the light distortion behavior correction data for policy gradient learning, including: first building a policy network that takes the light distortion behavior correction data as input and processes it through a multi-layer neural network; optimizing the network parameters with the policy gradient algorithm by comparing the difference between the actual correction effect and the expected effect, computing gradients, and adjusting the network weights; this process gradually improves the policy over many iterations so that the light correction effect is continuously optimized; the resulting light correction policy gradient data contain the optimal correction policy and adjustment parameters, from which the light co-location perception model is constructed.
2. The laser radar camera co-location perception method according to claim 1, characterized in that performing regression interval estimation on the light distortion critical state data according to the light reflection intensity evaluation data comprises the following steps:
performing light distortion incremental rule recognition on the light distortion critical state data according to the light reflection intensity evaluation data to obtain light distortion incremental rule data;
performing incremental stepwise regression analysis on the light distortion incremental rule data to obtain distortion incremental stepwise regression data;
performing multicollinearity constraint processing on the light distortion incremental rule data according to the distortion incremental stepwise regression data to obtain light distortion incremental rule collinearity constraint data;
performing regression analysis on the light distortion incremental rule data according to the light distortion incremental rule collinearity constraint data to obtain light distortion incremental collinear regression data;
performing regression interval estimation according to the light distortion incremental collinear regression data to obtain the light reflection distortion interval estimation data.
3. A laser radar camera co-location perception system, characterized in that it is used to execute the laser radar camera co-location perception method according to claim 1, the laser radar camera co-location perception system comprising:
a viewing-angle three-dimensional model construction module, used for extracting real-time ambient light data through an on-board camera lens to obtain real-time ambient light data, and for constructing a mobile collaborative viewing-angle three-dimensional model from the real-time ambient light data to obtain the mobile collaborative viewing-angle three-dimensional model;
an environmental spatial distortion behavior probability calculation module, used for evaluating the light reflection intensity of the real-time ambient light data based on the mobile collaborative viewing-angle three-dimensional model to obtain light reflection intensity evaluation data, for performing structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data, and for calculating the probability of environmental spatial distortion behavior from the light reflection structure interference effect data to obtain environmental spatial distortion behavior probability data;
a correction processing module, used for performing spatial distortion structured recognition on the environmental spatial distortion behavior probability data to obtain spatial distortion structured data, and for performing light distortion behavior correction processing according to the spatial distortion structured data to obtain light distortion behavior correction data;
a co-location perception model construction module, used for performing policy gradient learning on the light distortion behavior correction data based on a policy gradient algorithm to obtain light correction policy gradient data, and for constructing a light co-location perception model from the light correction policy gradient data; the light co-location perception model is sent to a data cloud platform to execute the laser radar camera co-location perception method.
4. A computer-readable storage medium, characterized in that a computer program is stored thereon, and when the computer program is executed, the laser radar camera co-location perception method according to any one of claims 1 to 2 is implemented.
CN202411337083.3A 2024-09-25 2024-09-25 Laser radar camera co-location perception method, system and medium Active CN118859232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411337083.3A CN118859232B (en) 2024-09-25 2024-09-25 Laser radar camera co-location perception method, system and medium

Publications (2)

Publication Number Publication Date
CN118859232A (en) 2024-10-29
CN118859232B (en) 2025-01-17


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115034324A (en) * 2022-06-21 2022-09-09 同济大学 Multi-sensor fusion perception efficiency enhancement method
CN116817891A (en) * 2023-07-03 2023-09-29 四川吉利学院 Real-time multi-mode sensing high-precision map construction method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3134819A1 (en) * 2019-03-23 2020-10-01 Uatc, Llc Systems and methods for generating synthetic sensor data via machine learning
CN118644686A (en) * 2024-05-29 2024-09-13 遵义师范学院 An image acquisition system for pattern recognition analysis
CN118574002B (en) * 2024-08-01 2024-11-12 深圳市臻呈科技有限公司 Dual-lens motion camera switching method, device and computer equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB02 Change of applicant information

Country or region after: China
Address after: Unit 1506, Fucheng Times Square, No. 221 Baomin 1st Road, Wenhui Community, Xin'an Street, Bao'an District, Shenzhen City, Guangdong Province 518000
Applicant after: SHENZHEN YONGTAI PHOTOELECTRIC Co.,Ltd.
Address before: Room 318, Block D, Hongtaifu Building, Fashion Times, District 34, Xin'an Street, Bao'an District, Shenzhen, Guangdong, 518000
Applicant before: SHENZHEN YONGTAI PHOTOELECTRIC Co.,Ltd.
Country or region before: China