Disclosure of Invention
Based on the foregoing, it is necessary to provide a method, a system and a medium for co-located sensing of a laser radar camera, so as to solve at least one of the above technical problems.
In order to achieve the above purpose, the present invention provides a method, a system and a medium for co-located sensing of a laser radar camera, wherein the method comprises the following steps:
Step S1, acquiring real-time environment light data through an onboard camera lens, and constructing a mobile collaborative viewing angle three-dimensional model according to the real-time environment light data to obtain the mobile collaborative viewing angle three-dimensional model;
Step S2, carrying out light reflection intensity evaluation on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light reflection intensity evaluation data, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data, and carrying out environment space distortion behavior probability calculation according to the light reflection structure interference effect data to obtain environment space distortion behavior probability data;
Step S3, performing spatial distortion structural recognition on the environment space distortion behavior probability data to obtain spatial distortion structural data, and performing light distortion behavior correction processing according to the spatial distortion structural data to obtain light distortion behavior correction data;
Step S4, performing strategy gradient learning on the light distortion behavior correction data based on a strategy gradient algorithm to obtain light correction strategy gradient data, constructing a light co-located perception model according to the light correction strategy gradient data to obtain the light co-located perception model, and transmitting the light co-located perception model to a data cloud platform to execute the laser radar camera co-located sensing method.
The invention captures light information in the environment in real time by using the onboard camera lens, which ensures the timeliness and accuracy of the data. A three-dimensional model is constructed from the real-time data and reflects the actual state of the environment, including the distribution and influence of light. Based on the mobile collaborative viewing angle three-dimensional model, the real-time environment light data is evaluated, and the intensity and characteristics of light reflection are quantified. From the evaluation data, the structural interference effect of light in the environment, i.e. the complex interaction of reflection, refraction and diffraction of light at object surfaces, is analyzed. Based on the light reflection structure interference effect data, the probability of spatial distortion behavior occurring in the environment is calculated. These data help to understand and predict the behavior of light in a particular environment, including its impact on visual perception and the perception of object shape. By analyzing the environment space distortion behavior probability data, specific spatial distortion structural features, such as bending, torsion or deformation of light, are identified. Based on the identified distortion features, correction of the light distortion behavior is performed so as to improve the accuracy of light transmission and of environment perception. The light distortion behavior correction data is learned and optimized using a strategy gradient algorithm to find an optimal correction strategy, yielding optimized light correction strategy gradient data that reflects how light transmission should be adjusted, on the basis of real-time environmental data, to reduce distortion effects. A light co-located perception model is then constructed from the result of the strategy gradient learning. The model can adjust the positions and angles of the laser radar and the camera according to the actual light conditions in real time or near real time, so as to achieve more accurate co-located sensing. The optimized light co-located perception model is sent to a data cloud platform so that the co-located sensing method of the laser radar and the camera can be executed in a wide range of application scenarios; through the correction of light distortion behavior and the application of the light co-located perception model, the accuracy of environment and spatial perception is improved and errors and uncertainty are reduced. Therefore, the invention improves on the traditional laser radar camera co-located sensing method: it solves the problem that the traditional method analyzes changes in ambient light inaccurately, which leads to low correction precision for environmental distortion during reflection, and it improves both the accuracy of the ambient light change analysis and the precision of the distortion correction.
Preferably, step S1 comprises the steps of:
Step S11, acquiring real-time environment data through the onboard camera lens;
Step S12, extracting real-time environment light data from the real-time environment data;
Step S13, performing real-time distance measurement on obstacles based on the real-time environment data by using a laser radar to obtain obstacle real-time ranging data;
Step S14, constructing the mobile collaborative viewing angle three-dimensional model from the real-time environment data according to the real-time environment light data and the obstacle real-time ranging data.
The present invention uses an on-board camera lens to acquire real-time environmental data, including images, video, or other sensory data. This step ensures that comprehensive information of the current environment is obtained. Light information including light intensity, illumination distribution, etc. is extracted from the collected environmental data. These data are the basis for subsequent three-dimensional model construction and ray reflection evaluation. Real-time ranging of obstacles is performed based on real-time environmental data by using a sensor such as a laser radar. These data not only contribute to environmental awareness, but also provide an important basis for subsequent modeling and path planning. And constructing an accurate movement collaborative visual angle three-dimensional model by combining the real-time ambient light data and the obstacle real-time ranging data. The model integrates the physical characteristics of light and the spatial position of the obstacle in the environment, and can provide accurate visual information of the environment. By integrating different sources (vision, laser ranging) of real-time environmental data, step S1 improves the overall perceptibility of the environment. The construction of the mobile collaborative viewing angle three-dimensional model is not only based on light data, but also combines the spatial information of the obstacle, so that more accurate environment simulation and scene analysis capability can be provided. The real-time data is collected and processed, so that the system can quickly respond to environmental changes, and the method has important significance for instant decision and safety.
Preferably, step S2 comprises the steps of:
Step S21, carrying out light change fluctuation analysis on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light change fluctuation data;
Step S22, evaluating the light reflection intensity of the light change fluctuation data to obtain light reflection intensity evaluation data;
Step S23, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data;
Step S24, performing environment space distortion behavior probability calculation according to the light reflection structure interference effect data and the light reflection intensity evaluation data to obtain environment space distortion behavior probability data.
According to the invention, the change fluctuation of the real-time environment light data is analyzed by moving the collaborative visual angle three-dimensional model. These fluctuations may be caused by factors such as different light sources, obstructions, and topography in the environment. Accurate capture of light change fluctuation data is ensured, and understanding of dynamic characteristics and change trend of light in the environment is facilitated. Based on the light variation fluctuation data, the reflection intensity of the light at different areas and surfaces is evaluated. These data provide a quantitative analysis of the light distribution in the environment. The light reflection intensity evaluation data directly affects the realism and accuracy of visual simulation, image rendering and augmented reality techniques. Further analysis of the light reflection intensity assessment data explored the interference effects of light on different surfaces and structures. This includes optical phenomena such as reflection, refraction and diffraction. The structural interference effect data helps to understand the complex propagation mode of light in a specific environment, and provides basis for more accurate light simulation and environment simulation. Based on the light reflection structure interference effect data and the light reflection intensity evaluation data, the probability of spatial warping behavior occurring in the environment is calculated. These data are critical to improving visual accuracy in virtual reality, augmented reality, and simulated environments, helping to reduce visual misdirection and user perceived inconsistencies.
Preferably, step S23 comprises the steps of:
Step S231, performing light reflection path marking on the light reflection intensity evaluation data to obtain light reflection path marking data;
Step S232, performing light reflection staggered structure analysis according to the light reflection path marking data to obtain light reflection staggered structure data;
Step S233, evaluating the light color difference among different reflected light rays according to the light reflection intensity evaluation data to obtain reflected light color difference data;
Step S234, performing light wave phase difference calculation between different reflected light rays on the light reflection staggered structure data according to the reflected light color difference data and the light reflection intensity evaluation data to obtain reflection staggered structure light wave phase difference data;
Step S235, carrying out structural interference effect analysis based on the reflection staggered structure light wave phase difference data to obtain light reflection structure interference effect data.
The invention can accurately track the reflection path of the light in the environment by carrying out path marking on the light reflection intensity evaluation data. This data is important for analyzing the specific path and distance traveled by the light. A detailed description of the behavior of light reflection is provided, providing the underlying data for subsequent analysis of the structure of the light interlacing. Based on the light reflection path marking data, the staggered reflection condition of the light on different surfaces and structures is analyzed. These data reveal reflection patterns and path crossings of light in complex environments. By analyzing the light staggered structure, the complexity of light reflection in the environment can be deeply understood, including factors such as multiple reflection, change of reflection angle and the like. And evaluating the light color difference between different reflected lights according to the light reflection intensity evaluation data. These data take into account the color change and attenuation of the light that occurs during reflection. For virtual reality, augmented reality, and image rendering applications, photochromic difference data is critical to ensuring the authenticity and color fidelity of a visual scene. And calculating the light wave phase difference between different reflected lights based on the light color difference data of the reflected lights and the light reflection intensity evaluation data. These data reflect the phase change of the light wave during spatial propagation. The optical interference phenomenon can be quantified through the optical wave phase difference data, and a foundation is provided for the accurate simulation and emulation of the environmental optical characteristics. Based on the reflection staggered structure light wave phase difference data, the interference effect of light on the structure is further analyzed. These data reveal interference phenomena that occur when light is reflected and propagates in the environment. The structural interference effect data provides critical information for accurately modeling optical phenomena in complex environments, such as design optimization of optical devices and visual performance predictions under environmental conditions.
Preferably, step S24 comprises the steps of:
Step S241, performing light path difference calculation for different light reflections on the light reflection structure interference effect data to obtain light reflection light path difference data;
Step S242, performing medium reflection interference simulation on the light reflection structure interference effect data according to the light reflection light path difference data to obtain medium reflection interference simulation data;
Step S243, carrying out polarized light interaction analysis on the light reflection structure interference effect data according to the medium reflection interference simulation data to obtain reflection structure polarized light interaction data;
Step S244, performing light distortion critical state analysis based on the reflection structure polarized light interaction data, the medium reflection interference simulation data and the light reflection light path difference data to obtain light distortion critical state data;
Step S245, carrying out regression interval estimation on the light distortion critical state data according to the light reflection intensity evaluation data to obtain light reflection distortion interval estimation data;
Step S246, performing environment space distortion behavior probability calculation based on the light reflection distortion interval estimation data to obtain environment space distortion behavior probability data.
The invention can quantify the path length difference of the light rays during spatial propagation by calculating the light path difference of the different light reflection paths. These data are important for understanding the time delay and spatial distribution of light reflection. The optical path difference data provides an accurate measurement of the optical path length and a basis for subsequent optical simulation and interference effect analysis. Based on the light reflection light path difference data, the optical interference phenomenon on the medium surface is simulated. These simulation data reflect the phase changes and interference effects that occur when light is reflected at the surface of the medium. The medium reflection interference simulation data is helpful for evaluating the reflection characteristics of the optical material, and provides an important basis for material selection and optical design. The change in polarization state and the interaction of light on the structured surface are analyzed based on the medium reflection interference simulation data. These data are critical to understanding the polarization behavior of light on complex surfaces. The polarized light interaction data helps to optimize the design of polarizing filters, coatings and optical elements, improving the efficiency and performance of the optical system. The distortion and critical distortion state of the light under specific conditions are then analyzed based on the reflection structure polarized light interaction data and the light reflection light path difference data. These data reflect the propagation limitations and distortions of light rays in complex environments. The light distortion critical state data provides critical information for predicting the performance of vision systems, sensors and optical devices in complex environments, helping to optimize device design and application environment selection. Based on the light reflection distortion interval estimation data, the probability of an optical distortion phenomenon occurring in the environment is calculated. These data help assess and predict the impact of optical distortion on the vision system and image processing. The environment space distortion behavior probability data provides important visual environment prediction for applications such as virtual reality, augmented reality and automatic driving, and supports real-time decision making and system optimization.
Preferably, the regression interval estimation on the light distortion critical state data according to the light reflection intensity evaluation data includes the following steps:
performing light distortion incremental rule recognition on the light distortion critical state data according to the light reflection intensity evaluation data to obtain light distortion incremental rule data;
carrying out incremental stepwise regression analysis on the light distortion incremental rule data to obtain distortion incremental stepwise regression data;
performing multicollinearity constraint processing on the light distortion incremental rule data according to the distortion incremental stepwise regression data to obtain light distortion incremental rule collinearity constraint data;
performing regression analysis on the light distortion incremental rule data according to the light distortion incremental rule collinearity constraint data to obtain light distortion incremental collinear regression data;
carrying out regression interval estimation according to the light distortion incremental collinear regression data to obtain light reflection distortion interval estimation data.
According to the light reflection intensity evaluation data, the rule by which the light distortion increases is identified. This step focuses on determining how the optical distortion gradually increases or decreases under different conditions. The light distortion incremental rule data is then modeled as a mathematical equation using stepwise regression, which gradually adds the most relevant variables to optimize the predictive power of the model. Multicollinearity present in the distortion incremental rule data is handled next, ensuring the robustness and reliability of the regression model; this reduces redundant information between variables and improves prediction accuracy. Regression analysis is then carried out on the collinearity-constrained data to further optimize the prediction model of the increasing light distortion, so that the model describes the trend and law of optical distortion more accurately. Finally, regression interval estimation is performed using the light distortion incremental collinear regression data. This step provides the range within which optical distortion is expected to occur, helping to evaluate the performance and stability of the optical system under different environmental conditions.
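As an illustrative, non-limiting sketch of this interval estimation, the following Python fragment filters strongly collinear feature columns with a simple correlation threshold (a stand-in for the multicollinearity constraint step) and fits an ordinary least-squares model with a normal-approximation prediction interval; the feature data, the 0.95 correlation threshold and the 1.96 interval factor are assumptions for demonstration only.

```python
import numpy as np

def collinearity_filter(X, threshold=0.95):
    """Drop columns whose pairwise correlation with a kept column exceeds the threshold."""
    corr = np.corrcoef(X, rowvar=False)
    keep = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

def regression_interval(X, y, x_new, z=1.96):
    """Ordinary least squares plus a normal-approximation prediction interval."""
    A = np.column_stack([np.ones(len(X)), X])            # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)          # fit coefficients
    resid = y - A @ beta
    sigma = np.sqrt(resid @ resid / (len(y) - A.shape[1]))
    y_hat = np.concatenate([[1.0], x_new]) @ beta
    return y_hat - z * sigma, y_hat + z * sigma

# Synthetic distortion-increment data with one nearly duplicated feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 3] = 0.99 * X[:, 0] + rng.normal(0, 0.01, 200)
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.1, 200)

X_f, kept = collinearity_filter(X)
low, high = regression_interval(X_f, y, X_f[0])
print(f"kept columns {kept}, distortion interval estimate [{low:.3f}, {high:.3f}]")
```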
Preferably, step S3 comprises the steps of:
Step S31, performing spatial distortion structural identification on the environment space distortion behavior probability data to obtain spatial distortion structural data;
Step S32, extracting features of the spatial distortion structural data to obtain spatial distortion structural feature data;
Step S33, performing distortion structure multidimensional analysis on the spatial distortion structural data according to the spatial distortion structural feature data to obtain distortion structure multidimensional feature data;
Step S34, performing light distortion behavior correction processing according to the distortion structure multidimensional feature data to obtain light distortion behavior correction data.
The invention analyzes and identifies probability data of light distortion behavior in the environment and sorts the probability data into structured data. This step helps to transform the complex environmental distortion behavior into an operable data form that can be further analyzed. Detailed descriptions and statistics of environmental distortion phenomena are provided, providing underlying data for subsequent analysis. Key features are extracted from the structured warp data, such as the intensity, frequency, spatial distribution, etc. characteristics of the warp. These features reflect important aspects of the distortion phenomenon, helping to further understand the nature and behavior of light distortion. A detailed description and quantitative analysis of the light distortion behaviour is provided, providing basis for subsequent processing and correction. Based on the space distortion structural feature data, multidimensional statistics and analysis such as cluster analysis, principal component analysis and the like are carried out. These analyses can reveal relationships and differences between different warp modes. Helping to identify and understand different types of light distortion patterns provides in-depth data support for formulating correction strategies. And (3) based on the multidimensional characteristic data of the distortion structure, formulating a correction strategy aiming at the light distortion behavior. This includes adjusting the position of the optical device, optimizing the environmental settings, or employing digital correction techniques, among other means. By effective corrective measures, the influence caused by environmental distortion in the optical system is reduced or eliminated, and the performance and stability of the system are improved.
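A minimal sketch of the multidimensional analysis mentioned above is given below: principal component analysis followed by k-means clustering on per-region distortion features. The three feature columns (intensity, frequency, spatial extent), the synthetic data and the choice of three clusters are illustrative assumptions, not parameters prescribed by the method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Rows: candidate distortion regions; columns: intensity, frequency, spatial extent.
features = rng.normal(size=(150, 3)) + np.repeat(np.eye(3) * 4, 50, axis=0)

pca = PCA(n_components=2)                     # compress correlated feature dimensions
reduced = pca.fit_transform(features)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)
for k in range(3):
    print(f"distortion pattern {k}: {np.sum(labels == k)} regions, "
          f"mean PC1 {reduced[labels == k, 0].mean():.2f}")
```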
Preferably, step S34 includes the steps of:
Step S341, performing distortion curvature calculation on the distortion structure multidimensional feature data to obtain distortion structure curvature data;
Step S342, carrying out layered feature processing on the distortion structure multidimensional feature data according to the distortion structure curvature data to obtain distortion structure curvature layered data;
Step S343, performing layered light distortion index calculation according to the distortion structure curvature layered data to obtain layered light distortion index data;
Step S344, performing light dynamic parameter correction according to the layered light distortion index data to obtain layered light dynamic parameter correction data;
Step S345, performing light distortion behavior correction processing according to the layered light dynamic parameter correction data to obtain light distortion behavior correction data.
According to the method, the curvature data of each distortion structure is obtained by analyzing and calculating the distortion structure multidimensional feature data. These data reflect the specific morphology and degree of curvature of the light distortion, providing a more detailed description of the light distortion behavior and helping to identify the type and extent of distortion that needs to be corrected. Layered feature processing is then carried out based on the distortion structure curvature data, classifying and analyzing the multidimensional feature data according to different levels of curvature. This provides an analysis specific to each curvature level and helps to distinguish and understand the extent to which the various distortion phenomena affect the optical system. Based on the distortion structure curvature layered data, light distortion indexes of each layer, such as distortion intensity and influence range, are calculated, giving a deeper quantitative analysis of the light distortion behavior and specific guidance for subsequent correction. A dynamic parameter correction strategy of the optical system is then formulated according to the layered light distortion index data, involving operations such as lens adjustment and light source position changes. By adjusting the parameters of the optical system in real time, light distortion caused by the identified curvature is reduced or eliminated, and the accuracy and stability of the optical system are improved. Finally, the layered light dynamic parameter correction data is applied to execute the actual light distortion behavior correction processing, ensuring that the system output meets the expected optical performance and realizing effective management and optimization of light distortion behavior in the optical system.
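The following sketch illustrates, under stated assumptions, one way to compute discrete curvature along a traced ray path and to bin the samples into curvature layers with a simple per-layer distortion index; the synthetic path, the layer thresholds and the index definition are not taken from the patent text.

```python
import numpy as np

def path_curvature(points):
    """Discrete curvature of a polyline given as an (N, 3) array of points."""
    d1 = np.gradient(points, axis=0)                  # first derivative along the path
    d2 = np.gradient(d1, axis=0)                      # second derivative
    cross = np.cross(d1, d2)
    return np.linalg.norm(cross, axis=1) / np.maximum(np.linalg.norm(d1, axis=1) ** 3, 1e-9)

# Synthetic bent ray path.
t = np.linspace(0, 1, 100)
path = np.stack([t, 0.2 * t ** 2, 0.05 * np.sin(6 * t)], axis=1)

kappa = path_curvature(path)
layers = np.digitize(kappa, bins=[0.1, 0.5, 1.5])     # layer index 0..3 by curvature
for layer in np.unique(layers):
    idx = layers == layer
    distortion_index = kappa[idx].mean() * idx.sum() / len(kappa)   # assumed index
    print(f"layer {layer}: {idx.sum()} samples, distortion index {distortion_index:.4f}")
```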
Preferably, the present invention also provides a laser radar camera co-location sensing system for performing the laser radar camera co-location sensing method as described above, the laser radar camera co-location sensing system comprising:
the system comprises a visual angle three-dimensional model construction module, a mobile collaborative visual angle three-dimensional model construction module, a visual angle three-dimensional model generation module and a visual angle three-dimensional model generation module, wherein the visual angle three-dimensional model construction module is used for extracting real-time environment light data through an airborne camera lens to obtain real-time environment light data;
The system comprises an environment space distortion behavior probability calculation module, a structural interference effect analysis module, an environment space distortion behavior probability calculation module and a display module, wherein the environment space distortion behavior probability calculation module is used for carrying out light reflection intensity evaluation on real-time environment light data based on a mobile collaborative visual angle three-dimensional model to obtain light reflection intensity evaluation data;
The correction processing module is used for carrying out space distortion structural identification on the environment space distortion behavior probability data to obtain space distortion structural data;
The co-located perception model construction module is used for carrying out strategy gradient learning on the light distortion behavior correction data based on a strategy gradient algorithm to obtain light correction strategy gradient data; and constructing a light co-located sensing model according to the gradient data of the light correction strategy to obtain the light co-located sensing model, and sending the light co-located sensing model to the data cloud platform to execute the laser radar camera co-located sensing method.
The invention also provides a computer-readable medium storing a computer program which, when executed, implements the laser radar camera co-located sensing method according to any one of the above.
The invention has the beneficial effects that light information in the environment is captured in real time by the onboard camera lens, ensuring the timeliness and accuracy of the data. A three-dimensional model is constructed from the real-time data and reflects the actual state of the environment, including the distribution and influence of light. Based on the mobile collaborative viewing angle three-dimensional model, the real-time environment light data is evaluated, and the intensity and characteristics of light reflection are quantified. From the evaluation data, the structural interference effect of light in the environment, i.e. the complex interaction of reflection, refraction and diffraction of light at object surfaces, is analyzed. Based on the light reflection structure interference effect data, the probability of spatial distortion behavior occurring in the environment is calculated. These data help to understand and predict the behavior of light in a particular environment, including its impact on visual perception and the perception of object shape. By analyzing the environment space distortion behavior probability data, specific spatial distortion structural features, such as bending, torsion or deformation of light, are identified. Based on the identified distortion features, correction of the light distortion behavior is performed so as to improve the accuracy of light transmission and of environment perception. The light distortion behavior correction data is learned and optimized using a strategy gradient algorithm to find an optimal correction strategy, yielding optimized light correction strategy gradient data that reflects how light transmission should be adjusted, on the basis of real-time environmental data, to reduce distortion effects. A light co-located perception model is then constructed from the result of the strategy gradient learning. The model can adjust the positions and angles of the laser radar and the camera according to the actual light conditions in real time or near real time, so as to achieve more accurate co-located sensing. The optimized light co-located perception model is sent to a data cloud platform so that the co-located sensing method of the laser radar and the camera can be executed in a wide range of application scenarios; through the correction of light distortion behavior and the application of the light co-located perception model, the accuracy of environment and spatial perception is improved and errors and uncertainty are reduced. Therefore, the invention improves on the traditional laser radar camera co-located sensing method: it solves the problem that the traditional method analyzes changes in ambient light inaccurately, which leads to low correction precision for environmental distortion during reflection, and it improves both the accuracy of the ambient light change analysis and the precision of the distortion correction.
Detailed Description
The following is a clear and complete description of the technical solutions of the present invention, taken in conjunction with the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without making any inventive effort fall within the scope of the present invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and repeated description thereof is omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. The functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
To achieve the above objective, referring to fig. 1 to 3, a method for co-located sensing of a laser radar camera comprises the following steps:
Step S1, acquiring real-time environment light data through an onboard camera lens, and constructing a mobile collaborative viewing angle three-dimensional model according to the real-time environment light data to obtain the mobile collaborative viewing angle three-dimensional model;
Step S2, carrying out light reflection intensity evaluation on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light reflection intensity evaluation data, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data, and carrying out environment space distortion behavior probability calculation according to the light reflection structure interference effect data to obtain environment space distortion behavior probability data;
Step S3, performing spatial distortion structural recognition on the environment space distortion behavior probability data to obtain spatial distortion structural data, and performing light distortion behavior correction processing according to the spatial distortion structural data to obtain light distortion behavior correction data;
Step S4, performing strategy gradient learning on the light distortion behavior correction data based on a strategy gradient algorithm to obtain light correction strategy gradient data, constructing a light co-located perception model according to the light correction strategy gradient data to obtain the light co-located perception model, and transmitting the light co-located perception model to a data cloud platform to execute the laser radar camera co-located sensing method.
In the embodiment of the present invention, referring to fig. 1, which shows a flow diagram of the steps of the method for co-located sensing of a laser radar camera of the present invention, the method in this example includes the following steps:
Step S1, acquiring real-time environment light data through an onboard camera lens, and constructing a mobile collaborative viewing angle three-dimensional model according to the real-time environment light data to obtain the mobile collaborative viewing angle three-dimensional model;
In the embodiment of the invention, the ambient light data is captured in real time through the onboard camera lens on the sweeping robot. The airborne camera lens is arranged on the unmanned aerial vehicle and is provided with a high dynamic range imaging sensor, so that the ambient light can be accurately captured under different illumination conditions. The real-time captured image is subjected to noise filtering and image enhancement by a preprocessing algorithm so as to reduce the influence of ambient light change on data. Next, from these image data, a moving collaborative perspective three-dimensional model is constructed using multi-perspective stereoscopic technology. The method comprises the steps of carrying out feature point matching on image data obtained from different angles, generating a depth map by utilizing a parallax calculation method, and synthesizing a three-dimensional model of the environment through a three-dimensional reconstruction algorithm. Finally, the obtained three-dimensional model can dynamically reflect the actual form and illumination change of the environment, and provides a precise spatial data base for subsequent analysis.
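As a hedged illustration of the disparity-based depth step in the multi-view reconstruction described above, the following Python fragment applies the pinhole stereo relation depth = f * B / disparity; the focal length, baseline and the synthetic disparity map are assumptions, and a real pipeline would obtain disparities from the matched feature points mentioned in the embodiment.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Pinhole stereo relation: depth = f * B / d (valid where disparity d > 0)."""
    d = np.where(disparity > 0, disparity, np.nan)
    return focal_px * baseline_m / d

# Assumed disparity map in pixels for a 640x480 view pair.
disparity = np.clip(np.random.default_rng(2).normal(32, 4, size=(480, 640)), 1, None)
depth_map = depth_from_disparity(disparity, focal_px=700.0, baseline_m=0.12)
print(f"median scene depth: {np.nanmedian(depth_map):.2f} m")
```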
Step S2, carrying out light reflection intensity evaluation on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light reflection intensity evaluation data, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data, and carrying out environment space distortion behavior probability calculation according to the light reflection structure interference effect data to obtain environment space distortion behavior probability data;
In the embodiment of the invention, after the three-dimensional model of the mobile collaborative viewing angle is obtained, the light reflection intensity evaluation is carried out on the real-time environment light data through the light ray tracing technology. The reflection intensity of each three-dimensional model surface point is calculated by using a high-precision ray tracing algorithm, and the process considers the light source position, the surface material property and the environmental light scattering effect. And further processing the light reflection intensity evaluation result, analyzing the change trend of the light reflection intensity evaluation result under different illumination conditions, and generating light reflection intensity evaluation data. Based on this data, an interference effect analysis of the light reflecting structure is performed. And processing the reflected data by using the structural interference effect model, and identifying and quantifying interference modes such as light spots and fringe effects in the reflected image so as to obtain light reflection structural interference effect data. Further, using these interference effect data, an environmental spatial warping behavior probability calculation model is applied. The model comprehensively considers the reflection intensity and interference effect of the ambient light, calculates the space distortion behavior probability caused by structural interference when the light propagates in the environment through probability statistical analysis, and finally obtains the ambient space distortion behavior probability data. The data can be used for analyzing light propagation characteristics in a complex environment and providing support for co-located perception of the laser radar and the camera.
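A minimal sketch of the per-point reflection-intensity evaluation is given below, using a Lambertian diffuse term, a Blinn-Phong specular term and a constant ambient-scattering term as a stand-in for the ray-tracing evaluation described above; the material coefficients, light position and viewpoint are illustrative assumptions.

```python
import numpy as np

def reflection_intensity(point, normal, light_pos, view_pos,
                         k_ambient=0.1, k_diffuse=0.7, k_specular=0.2, shininess=16):
    """Estimate reflected intensity at one surface point of the 3-D model."""
    n = normal / np.linalg.norm(normal)
    l = light_pos - point; l = l / np.linalg.norm(l)      # direction to light source
    v = view_pos - point;  v = v / np.linalg.norm(v)      # direction to camera
    h = (l + v) / np.linalg.norm(l + v)                   # half vector
    diffuse = max(float(n @ l), 0.0)
    specular = max(float(n @ h), 0.0) ** shininess
    return k_ambient + k_diffuse * diffuse + k_specular * specular

intensity = reflection_intensity(point=np.array([0.0, 0.0, 0.0]),
                                 normal=np.array([0.0, 0.0, 1.0]),
                                 light_pos=np.array([1.0, 1.0, 2.0]),
                                 view_pos=np.array([0.0, 0.0, 3.0]))
print(f"estimated reflection intensity: {intensity:.3f}")
```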
Step S3, performing spatial distortion structural recognition on the environment space distortion behavior probability data to obtain spatial distortion structural data, and performing light distortion behavior correction processing according to the spatial distortion structural data to obtain light distortion behavior correction data;
In the embodiment of the invention, the spatial distortion structural identification is carried out on the environmental spatial distortion behavior probability data. The method comprises the steps of using a spatial distortion recognition algorithm, firstly mapping probability data into a three-dimensional space, performing gridding processing on the data through a high-resolution grid generation algorithm, and precisely dividing a distortion region in the space. Next, a spatial segmentation analysis method is applied to identify specific shape and distribution characteristics of the warped region. The structure of the warp field is described in detail by a spatial analysis tool and a structured dataset is generated. These datasets contain the geometric features of the warped region, the distribution law, and its position in three-dimensional space. Based on the structured data, a light ray distortion behavior correction process is performed. The twisted areas are analyzed by a reverse modeling method by using a light propagation model, and propagation paths of light in the areas are calculated and corrected. The correction algorithm comprises the steps of light refractive index adjustment, path correction and the like, light distortion behavior correction data are obtained, and the data can effectively compensate the change of light in a distortion area, so that the accuracy of a follow-up model is ensured.
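The gridding-and-segmentation step above can be sketched as follows: the probability field is voxelized, thresholded and split into connected distortion regions. The grid size, the 0.85 threshold and the synthetic high-probability pocket are assumptions used only to make the example self-contained.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
prob_grid = rng.random((32, 32, 32))                 # voxelized distortion probabilities
prob_grid[10:14, 5:9, 20:24] = 0.95                  # synthetic high-probability pocket

mask = prob_grid > 0.85                              # candidate distortion voxels
labels, n_regions = ndimage.label(mask)              # connected 3-D regions
sizes = ndimage.sum(mask, labels, index=range(1, n_regions + 1))
largest = int(np.argmax(sizes)) + 1
centroid = ndimage.center_of_mass(mask, labels, largest)
print(f"{n_regions} distortion regions; largest has {int(sizes[largest - 1])} voxels "
      f"centred at grid index {tuple(round(c, 1) for c in centroid)}")
```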
Step S4, performing strategy gradient learning on the light distortion behavior correction data based on a strategy gradient algorithm to obtain light correction strategy gradient data, constructing a light co-located perception model according to the light correction strategy gradient data to obtain the light co-located perception model, and transmitting the light co-located perception model to a data cloud platform to execute the laser radar camera co-located sensing method.
In the embodiment of the invention, based on the light distortion behavior correction data, a strategy gradient algorithm is applied to carry out strategy gradient learning. The method comprises the steps of firstly establishing a strategy network which receives light distortion behavior correction data as input and processes the data through a multi-layer neural network. And optimizing network parameters by using a strategy gradient method, calculating gradients and adjusting network weights by comparing differences between actual correction effects and expected effects. This process will gradually improve the strategy over multiple iterations so that the light correction effect is constantly optimized. The obtained gradient data of the light correction strategy contains the optimal correction strategy and adjustment parameters. And constructing a light ray co-located perception model according to the data. And constructing a comprehensive light co-located perception model by using the correction strategy gradient data, integrating the measurement data of the laser radar and the camera by using the model, and carrying out real-time data fusion by using the correction strategy. The constructed light co-located sensing model is uploaded to a data cloud platform, and the cloud platform is used for executing a laser radar camera co-located sensing method to realize real-time environment sensing and analysis.
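As a simplified stand-in for the policy network described above, the following REINFORCE-style loop learns a single correction parameter under a Gaussian policy; the toy reward (negative residual distortion) and the assumed optimum of 0.7 are illustrative, not part of the patented method.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, sigma, lr = 0.0, 0.2, 0.05     # policy mean, exploration noise, learning rate
TRUE_CORRECTION = 0.7                 # hypothetical ideal correction parameter

def reward(action):
    """Negative residual distortion left after applying the correction."""
    return -(action - TRUE_CORRECTION) ** 2

history = []
for episode in range(3000):
    action = rng.normal(theta, sigma)               # sample a correction from the policy
    grad_log_pi = (action - theta) / sigma ** 2     # d/dtheta of log N(action; theta, sigma)
    theta += lr * reward(action) * grad_log_pi      # REINFORCE update
    history.append(theta)

print(f"learned correction parameter ≈ {np.mean(history[-500:]):.3f} "
      f"(assumed optimum {TRUE_CORRECTION})")
```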
Preferably, step S1 comprises the steps of:
Step S11, acquiring real-time environment data through the onboard camera lens;
Step S12, extracting real-time environment light data from the real-time environment data;
Step S13, performing real-time distance measurement on obstacles based on the real-time environment data by using a laser radar to obtain obstacle real-time ranging data;
Step S14, constructing the mobile collaborative viewing angle three-dimensional model from the real-time environment data according to the real-time environment light data and the obstacle real-time ranging data.
In the embodiment of the invention, the on-board camera lens on the sweeping robot captures the ambient light data in real time, the high-resolution image sensor and the real-time image processing module are configured, and the lens is stabilized through the automatic control system, so that the stability and the definition of image capturing are ensured. The image sensor periodically collects environmental image data including information about objects, obstructions, and light conditions in the environment. The acquired image data is processed in real time for noise removal and color correction to improve data quality. Finally, the obtained real-time environment data comprises a plurality of high-resolution image sequences, and the real-time environment light data is extracted from the real-time environment data. First, an image frame is extracted from the real-time environment data obtained in step S11. And (3) carrying out detailed analysis on the light information in the image by applying a high dynamic range image processing technology, and identifying and measuring the intensity and direction of the light. And analyzing the image by using a ray tracing algorithm, and extracting ray reflection information of each pixel point. And then, carrying out spectrum resolution processing on the light ray data by a spectrum analysis method, and extracting light ray intensity data of each wave band in the environment. Finally, the obtained real-time ambient light data comprise light intensity and distribution information of each spectrum band, and the data can reflect illumination conditions and changes of the current environment. The laser radar emits laser pulses and receives laser signals reflected from the obstacle, thereby calculating the distance from the obstacle to the laser radar. In order to improve the ranging accuracy, the laser radar performs multiple measurements at different angles and positions, and the multiple measurement results are weighted and averaged by using a sensor fusion technology. In combination with image information captured in the real-time environmental data, the obstacle real-time ranging data provides accurate position and distance information of the obstacle in the environment. And constructing the three-dimensional model of the mobile collaborative viewing angle for the real-time environment data according to the real-time environment light data and the real-time range finding data of the obstacle. Firstly, fusing real-time environment light data and obstacle real-time ranging data, and processing depth information in the environment through a stereoscopic vision algorithm. And integrating the multi-view image captured by the camera lens with laser radar ranging data by using an image matching technology to generate a preliminary three-dimensional point cloud model. And then, processing the point cloud data by applying a three-dimensional reconstruction algorithm to form a complete three-dimensional environment model. Further, the light tracking technology is utilized to simulate the light of the three-dimensional model, and the light effect of the model is adjusted by combining the light data, so that the model is ensured to reflect the light condition and the obstacle position in the real environment. Finally, the obtained three-dimensional model of the mobile collaborative viewing angle can accurately represent the three-dimensional structure of the environment and the illumination characteristics thereof, and basic data is provided for subsequent analysis.
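The weighted averaging of repeated lidar measurements mentioned above can be sketched as inverse-variance fusion; the range values and per-measurement noise levels below are illustrative assumptions.

```python
import numpy as np

def fuse_ranges(ranges, noise_std):
    """Inverse-variance weighted average of repeated lidar range measurements."""
    ranges = np.asarray(ranges, dtype=float)
    weights = 1.0 / np.asarray(noise_std, dtype=float) ** 2
    fused = np.sum(weights * ranges) / np.sum(weights)
    fused_std = np.sqrt(1.0 / np.sum(weights))
    return fused, fused_std

# Three measurements of the same obstacle taken from slightly different angles.
ranges = [2.41, 2.38, 2.44]          # metres
noise_std = [0.03, 0.02, 0.05]       # assumed per-measurement standard deviation
distance, uncertainty = fuse_ranges(ranges, noise_std)
print(f"fused obstacle distance: {distance:.3f} m ± {uncertainty:.3f} m")
```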
Preferably, step S2 comprises the steps of:
Step S21, carrying out light change fluctuation analysis on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light change fluctuation data;
Step S22, evaluating the light reflection intensity of the light change fluctuation data to obtain light reflection intensity evaluation data;
Step S23, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data;
Step S24, performing environment space distortion behavior probability calculation according to the light reflection structure interference effect data and the light reflection intensity evaluation data to obtain environment space distortion behavior probability data.
As an example of the present invention, referring to fig. 2, the step S2 in this example includes:
Step S21, carrying out light change fluctuation analysis on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light change fluctuation data;
In the embodiment of the invention, the light variation fluctuation analysis is performed on the real-time environment light data based on the mobile collaborative visual angle three-dimensional model. First, the movement collaborative viewing angle three-dimensional model and the real-time ambient light data obtained from step S14 are data aligned, so that spatial consistency of the three-dimensional model and the light data is ensured. And (3) carrying out light variation fluctuation analysis on each surface point in the model by using a light propagation simulation algorithm, and measuring the intensity variation of light at different time points and different positions. And in the analysis process, a local light fluctuation calculation method is adopted, the time series data of the light intensity is subjected to Fourier transformation, and the frequency components and the amplitude changes of the light fluctuation are extracted. Finally, light variation fluctuation data are obtained, wherein the fluctuation data comprise fluctuation characteristic information of light intensity in space and time, and the fluctuation characteristic information is used for subsequent evaluation and analysis.
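The Fourier step above can be illustrated on a single surface point: the light-intensity time series is transformed to expose its dominant fluctuation frequency and amplitude. The sampling rate and the 12 Hz flicker component are assumptions chosen only for the demonstration.

```python
import numpy as np

fs = 120.0                                       # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
intensity = (0.8 + 0.15 * np.sin(2 * np.pi * 12 * t)
             + 0.02 * np.random.default_rng(5).normal(size=t.size))

spectrum = np.fft.rfft(intensity - intensity.mean())   # remove DC before transforming
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
amplitude = 2.0 * np.abs(spectrum) / t.size

peak = np.argmax(amplitude)
print(f"dominant fluctuation: {freqs[peak]:.1f} Hz, amplitude {amplitude[peak]:.3f}")
```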
Step S22, evaluating the light reflection intensity of the light change fluctuation data to obtain light reflection intensity evaluation data;
In the embodiment of the invention, the light reflection intensity evaluation is carried out on the light variation fluctuation data. First, the intensity fluctuation feature of each light is extracted from the light variation fluctuation data obtained in step S21. The reflection intensity of the light on different angles and surfaces is calculated by statistical analysis of the light fluctuation data using a reflection intensity evaluation algorithm. The method comprises the steps of carrying out local mean value and standard deviation calculation on the light fluctuation data, and evaluating the intensity stability of the reflected light. And then, calculating the reflection intensity of the light on each surface by combining the material properties of the surfaces in the three-dimensional model, and generating corresponding reflection intensity data. Finally, light reflection intensity evaluation data are obtained, and the data reflect reflection characteristics and change rules of light under different environmental conditions.
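A brief sketch of the local mean and standard deviation computation is given below, using a sliding window over the fluctuation series as a simple stability measure of the reflected intensity; the 16-sample window length and the synthetic series are assumptions.

```python
import numpy as np

def sliding_stats(x, window=16):
    """Per-window mean and standard deviation over a 1-D fluctuation series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - window + 1
    idx = np.arange(window)[None, :] + np.arange(n)[:, None]
    return x[idx].mean(axis=1), x[idx].std(axis=1)

fluctuation = np.random.default_rng(6).normal(1.0, 0.05, size=256)
means, stds = sliding_stats(fluctuation)
print(f"least stable window: index {int(np.argmax(stds))}, std {stds.max():.4f}")
```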
Step S23, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data;
In the embodiment of the invention, structural interference effect analysis is performed on the light reflection intensity evaluation data. First, based on the light reflection intensity evaluation data obtained in step S22, an interference effect analysis algorithm is applied to evaluate the light reflection pattern. The specific steps include identifying patterns of light interference fringes by analyzing periodic variations in reflected intensity data using an interference fringe detection method. And then, combining a spatial frequency analysis technology, carrying out detailed analysis on the spatial distribution of interference fringes, and calculating the intensity and phase information of interference effects. Further, modeling is performed on the data through an interference effect model, and interference effects in light reflection are quantified, including structural effects such as light spots and light bands. Finally, interference effect data of the light reflection structure are generated, and the characteristics and distribution of the interference effect in the light reflection process are described.
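The spatial frequency analysis of the fringes described above can be sketched with a 2-D Fourier transform of a reflected-intensity image, which yields the fringe spacing and orientation; the synthetic fringe period below is an illustrative assumption.

```python
import numpy as np

y, x = np.mgrid[0:256, 0:256]
fringes = 0.5 + 0.5 * np.cos(2 * np.pi * (0.08 * x + 0.02 * y))   # synthetic fringe image

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(fringes - fringes.mean())))
cy, cx = np.array(spectrum.shape) // 2
py, px = np.unravel_index(np.argmax(spectrum), spectrum.shape)
fy, fx = abs(py - cy) / 256.0, abs(px - cx) / 256.0                # cycles per pixel
print(f"fringe spatial frequency ≈ ({fx:.3f}, {fy:.3f}) cycles/px, "
      f"spacing ≈ {1.0 / np.hypot(fx, fy):.1f} px")
```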
Step S24, performing environment space distortion behavior probability calculation according to the light reflection structure interference effect data and the light reflection intensity evaluation data to obtain environment space distortion behavior probability data.
In the embodiment of the invention, the calculation of the probability of the environmental space distortion behavior is performed according to the interference effect data of the light reflection structure and the light reflection intensity evaluation data. First, the light reflection structure interference effect data obtained in step S23 is fused with the light reflection intensity evaluation data in step S22. The environmental spatial distortion calculation model is used, the reflection intensity of the light and the interference effect are taken as input, and the spatial distortion caused by the interference effect when the light propagates in the environment is simulated. The method comprises the steps of constructing a light propagation model of an environment space based on light reflection data, and quantifying the probability of distortion in a light propagation path by applying a space distortion probability calculation algorithm. The algorithm combines a statistical method to comprehensively analyze interference effect and light intensity change, and calculates the probability of spatial distortion of light in the environment. Finally, ambient spatial warping behavior probability data is generated, providing the likelihood and distribution of light warping in the environment.
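The patent does not give the probability model explicitly; as one hedged illustration of how interference strength and reflection-intensity variability could be combined into a distortion probability, the logistic form and weights below are assumptions for demonstration only.

```python
import numpy as np

def distortion_probability(interference_contrast, intensity_cv, w1=3.0, w2=2.0, bias=-2.5):
    """Map fringe contrast and intensity coefficient-of-variation to a [0, 1] probability."""
    score = w1 * interference_contrast + w2 * intensity_cv + bias
    return 1.0 / (1.0 + np.exp(-score))

contrast = np.array([0.1, 0.4, 0.8])        # per-region fringe contrast (assumed)
intensity_cv = np.array([0.05, 0.2, 0.5])   # per-region intensity variability (assumed)
print("distortion probabilities:", np.round(distortion_probability(contrast, intensity_cv), 3))
```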
Preferably, step S23 comprises the steps of:
Step S231, performing light reflection path marking on the light reflection intensity evaluation data to obtain light reflection path marking data;
Step S232, performing light reflection staggered structure analysis according to the light reflection path marking data to obtain light reflection staggered structure data;
Step S233, evaluating the light color difference among different reflected light rays according to the light reflection intensity evaluation data to obtain reflected light color difference data;
Step S234, performing light wave phase difference calculation between different reflected light rays on the light reflection staggered structure data according to the reflected light color difference data and the light reflection intensity evaluation data to obtain reflection staggered structure light wave phase difference data;
Step S235, carrying out structural interference effect analysis based on the reflection staggered structure light wave phase difference data to obtain light reflection structure interference effect data.
In the embodiment of the invention, light reflection path marking is performed on the light reflection intensity evaluation data. First, the path information of the light is extracted from the light reflection intensity evaluation data obtained in step S22. The propagation path of the light rays in the three-dimensional environment model is marked by a ray tracing algorithm, and the incidence point, reflection point and propagation path of each light ray are recorded. The path tracking technique is applied to each ray to ensure the accuracy of the path marking. The marking data is stored in a data structure comprising information such as the reflection points, path lengths and reflection angles of the light rays. Finally, light reflection path marking data is generated, which details the propagation path of each ray and its interaction with the environmental surfaces.
Light reflection staggered structure analysis is then carried out according to the light reflection path marking data. Staggered structure analysis is performed on the light paths in the marking data by using a light staggered mode analysis method. First, the intersection areas in the path marking data are identified by a ray intersection detection algorithm. The staggered structure analysis includes high-resolution imaging of the reflection path intersection points and extraction of the spatial features of the intersection areas using image processing techniques. The interaction mode of the light rays in the staggered structure is analyzed in combination with the reflection angles and path lengths of the light rays. Light reflection staggered structure data is generated, describing the structural features and distribution of light in the intersection areas.
The light color difference between different reflected light rays is evaluated according to the light reflection intensity evaluation data. First, the reflection intensity data of each ray in the light reflection staggered structure data is analyzed using a color difference calculation algorithm. The intensity data of the light is converted into color difference values in a color space by applying a color space conversion technique. The light color difference between different rays is evaluated using a spectral resolution analysis method, and the absolute and relative values of the color difference are calculated. Finally, the resulting reflected light color difference data provides the color differences of the different rays in the intersection region.
The light wave phase difference between different reflected light rays is calculated according to the reflected light color difference data and the light reflection intensity evaluation data. First, the reflection intensity data of the rays is processed using a light wave phase measurement technique, and the light wave phase information of each ray is calculated. The phase difference between the reflected rays is obtained by analyzing the color difference data with an interferometry technique. The phase difference of the rays in the staggered structure is quantitatively analyzed with a phase calculation algorithm, and the phase difference data of the different reflected rays is integrated. Finally, the generated reflection staggered structure light wave phase difference data reflects the phase differences of the rays in the intersection region.
Structural interference effect analysis is then carried out based on the reflection staggered structure light wave phase difference data. First, phase difference information is extracted from the light wave phase difference data obtained in step S234. An interference effect model is used to perform interference fringe analysis on the phase difference data and determine the interference pattern of the rays in the staggered areas. The intensity and distribution of the interference fringes are calculated with a light wave interference algorithm. The interference effect intensity and spatial distribution of the light are analyzed with an interference effect quantification method to generate light reflection structure interference effect data. The final data describe the characteristics of the interference effect produced in the staggered structure by the phase differences of the light waves.
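As a non-limiting illustration of the phase-difference and interference-fringe calculations described above, the following Python sketch applies the standard two-beam interference relation; the wavelength, path lengths and intensities are hypothetical placeholders rather than values produced by the method.

import numpy as np

def phase_difference(path_len_a, path_len_b, wavelength):
    # Optical-wave phase difference (rad) between two reflected rays,
    # from their optical path lengths: delta_phi = 2*pi*delta_L / lambda.
    return 2.0 * np.pi * (path_len_a - path_len_b) / wavelength

def interference_intensity(i_a, i_b, delta_phi):
    # Two-beam interference intensity for rays of intensities i_a, i_b
    # with phase difference delta_phi.
    return i_a + i_b + 2.0 * np.sqrt(i_a * i_b) * np.cos(delta_phi)

# Hypothetical example: two rays crossing in a staggered area.
wavelength = 532e-9                      # assumed laser line, metres
path_a = np.array([1.0002, 1.0010])      # optical path lengths, metres
path_b = np.array([1.0000, 1.0007])
i_a, i_b = 0.8, 0.6                      # relative reflection intensities

dphi = phase_difference(path_a, path_b, wavelength)
fringes = interference_intensity(i_a, i_b, dphi)
print(dphi, fringes)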
Preferably, step S24 comprises the steps of:
Step S241, performing light path difference calculation of different light reflections on the light reflection structure interference effect data to obtain light reflection light path difference data;
Step S242, performing medium reflection interference simulation on the light reflection structure interference effect data according to the light reflection light path difference data to obtain medium reflection interference simulation data;
Step S243, carrying out polarized light interaction analysis on the light reflection structure interference effect data according to the medium reflection interference simulation data to obtain reflection structure polarized light interaction data;
Step S244, performing light distortion critical state analysis based on the reflection structure polarized light interaction data, the medium reflection interference simulation data and the light reflection light path difference data to obtain light distortion critical state data;
Step S245, carrying out regression interval estimation on the light distortion critical state data according to the light reflection intensity evaluation data to obtain light reflection distortion interval estimation data;
And step S246, performing environmental space distortion behavior probability calculation based on the light reflection distortion interval estimation data to obtain environmental space distortion behavior probability data.
In the embodiment of the invention, the light path difference calculation of different light reflections is carried out on the light reflection structure interference effect data. First, interference fringe information of the reflected light is extracted from the light reflection structure interference effect data obtained in step S235. The reflection paths of the different rays are compared using an optical path difference calculation algorithm. The specific operation includes calculating the optical path of each ray from the incidence point to the reflection point according to the reflection points and paths recorded in the light reflection path marking data, and taking the differences between them. By comparing the optical paths of the different reflected rays, light reflection light path difference data are generated, describing the optical path differences between the rays that determine their interference. Medium reflection interference simulation is then performed on the light reflection structure interference effect data according to the light reflection light path difference data. First, the optical path difference information is extracted from the light reflection light path difference data obtained in step S241. The light reflection light path difference data are input into a medium reflection interference simulation model to simulate the interference effects when light passes through different media. An interference simulation tool is used to calculate the reflection and transmission of light at different medium interfaces and to simulate the interference pattern and intensity distribution. Finally, medium reflection interference simulation data are generated, describing the reflection interference conditions of light at various medium interfaces. Polarized light interaction analysis is then carried out on the light reflection structure interference effect data according to the medium reflection interference simulation data. First, the interaction information of polarized light is extracted from the medium reflection interference simulation data obtained in step S242. The change of the polarization state of the light at each medium interface is analyzed with a polarized light analysis tool. The specific operation includes measuring the degree and angle of polarization of the light, and recording the polarization characteristics of the light under the interaction of different media with a polarization interferometer. These data are analyzed together with the light reflection structure interference effect data to generate reflection structure polarized light interaction data, which show the polarization interaction characteristics of light in different media. Light distortion critical state analysis is then carried out based on the reflection structure polarized light interaction data, the medium reflection interference simulation data and the light reflection light path difference data. First, the reflection structure polarized light interaction data of step S243, the medium reflection interference simulation data of step S242 and the light reflection light path difference data of step S241 are integrated. A light distortion analysis model is used to analyze these data comprehensively and to calculate the critical distortion state of the light under the influence of interference and polarization effects. Specific operations include analyzing the critical conditions of the distorted state with an optical simulation tool based on the reflection and distortion characteristics of the light. Finally, light distortion critical state data are obtained, describing the distortion of light under different interference and polarization conditions. Regression interval estimation is then carried out on the light distortion critical state data according to the light reflection intensity evaluation data. First, light intensity information is extracted from the light reflection intensity evaluation data obtained in step S22. In combination with the light distortion critical state data obtained in step S244, a regression analysis method is used to perform interval estimation of the distortion state. The specific operation includes regression modeling of the distortion critical state data, calculating the effect of light intensity on the distortion state, and estimating the range of light distortion. Light reflection distortion interval estimation data are generated, describing the degree of distortion under different light intensity conditions. Finally, the environmental space distortion behavior probability is calculated based on the light reflection distortion interval estimation data. First, the distortion interval information is extracted from the light reflection distortion interval estimation data obtained in step S245. In combination with the light reflection intensity evaluation data and the light distortion critical state data obtained in step S244, a probability calculation model is used to calculate the probability of light distortion occurring in the environment. This includes performing probability distribution analysis on the distortion intervals of the light and calculating the probability distribution of light distortion in combination with the light propagation characteristics in the environment model. Finally, environmental space distortion behavior probability data are generated, providing probability information on the distortion of light in the environment.
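As a non-limiting sketch of the medium reflection interference simulation and the polarized light interaction analysis, the following Python example evaluates the Fresnel power reflectance of s- and p-polarised light at a planar interface; the refractive indices and incidence angles are assumed values, and the function name is hypothetical.

import numpy as np

def fresnel_reflectance(n1, n2, theta_i):
    # Fresnel power reflectance for s- and p-polarised light at a planar
    # interface between media of refractive indices n1 and n2.
    sin_t = np.clip(n1 / n2 * np.sin(theta_i), -1.0, 1.0)
    theta_t = np.arcsin(sin_t)                     # Snell's law
    r_s = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / \
          (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
    r_p = (n2 * np.cos(theta_i) - n1 * np.cos(theta_t)) / \
          (n2 * np.cos(theta_i) + n1 * np.cos(theta_t))
    return r_s**2, r_p**2

# Hypothetical air-to-glass interface at several incidence angles.
angles = np.deg2rad([0.0, 30.0, 60.0])
R_s, R_p = fresnel_reflectance(1.0, 1.5, angles)
print(R_s, R_p)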
Preferably, the regression interval estimation of the light ray distortion critical state data according to the light ray reflection intensity estimation data includes the steps of:
Performing light distortion incremental rule recognition on the light distortion critical state data according to the light reflection intensity evaluation data to obtain light distortion incremental rule data;
Performing incremental stepwise regression analysis on the light distortion incremental rule data to obtain distortion incremental stepwise regression data;
Performing multiple collinearity constraint processing on the light distortion incremental rule data according to the distortion incremental stepwise regression data to obtain light distortion incremental rule collinearity constraint data;
Performing regression analysis on the light distortion incremental rule data according to the light distortion incremental rule collinearity constraint data to obtain light distortion incremental collinear regression data;
And performing regression interval estimation according to the light distortion incremental collinear regression data to obtain light reflection distortion interval estimation data.
In the embodiment of the invention, light distortion incremental rule recognition is performed on the light distortion critical state data according to the light reflection intensity evaluation data. First, records of the changes in light intensity and degree of distortion are extracted from the light reflection intensity evaluation data. A curve fitting technique is used to model the trend of the light distortion degree as the light intensity changes. In a specific implementation, the light intensity is taken as the independent variable and the degree of light distortion as the dependent variable, and an incremental trend analysis algorithm is applied to identify the distortion incremental rule, obtaining light distortion incremental rule data. These data detail the increasing trend of the degree of distortion at different light intensities. Incremental stepwise regression analysis is then performed on the light distortion incremental rule data. The obtained light distortion incremental rule data are imported into a regression analysis tool for stepwise regression modeling. In the stepwise regression process, starting from the simplest model, new independent variables are added one by one, and the contribution of each step to the model fit is evaluated. Through adjustment and optimization of the regression model, the degree of influence of each independent variable on the light distortion incremental rule is determined, and distortion incremental stepwise regression data are obtained, describing the specific influence of light intensity on the distortion increment. Multiple collinearity constraint processing is then performed on the light distortion incremental rule data according to the distortion incremental stepwise regression data. First, the correlation between the independent variables in the regression model is evaluated with a collinearity diagnostic tool, and the variance inflation factor (VIF) is calculated. Independent variables with severe collinearity are screened and adjusted, and the combination of independent variables in the model is adjusted through a multiple collinearity constraint algorithm to reduce the correlation between them. After this processing, light distortion incremental rule collinearity constraint data are obtained, reflecting the distortion incremental rule after the collinearity constraint. Regression analysis is then carried out on the light distortion incremental rule data according to the light distortion incremental rule collinearity constraint data. The obtained collinearity constraint data are applied to the regression model for standard regression analysis. A linear regression tool is used to establish a new regression model from the collinearity-constrained data and to evaluate the relation between light intensity and degree of distortion. Light distortion incremental collinear regression data are generated, describing the specific regression results of light intensity versus distortion increment in the collinearity-processed model. Finally, regression interval estimation is carried out according to the light distortion incremental collinear regression data. The result of the preceding collinear regression analysis is used in an interval estimation model to calculate the confidence interval of the degree of light distortion. The specific operation includes calculating the confidence interval of each regression coefficient from the coefficients and standard errors in the regression model. An interval estimation tool is used to consolidate the estimation results into light reflection distortion interval estimation data. These data provide confidence intervals for the degree of light distortion at different light intensities, showing the range of distortion.
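As a non-limiting sketch of the collinearity screening and regression interval estimation described above, the following Python example uses statsmodels to compute variance inflation factors, drop heavily collinear regressors, and report coefficient and prediction confidence intervals; the synthetic intensity/distortion data and the VIF threshold of 10 are assumptions for illustration only.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical observations: light intensity features vs. distortion degree.
rng = np.random.default_rng(0)
intensity = rng.uniform(0.2, 1.0, 200)
intensity_sq = intensity**2                     # deliberately collinear term
distortion = 0.05 + 0.4 * intensity + rng.normal(0, 0.02, 200)

X = sm.add_constant(np.column_stack([intensity, intensity_sq]))

# Collinearity diagnostics: drop regressors whose VIF exceeds a threshold.
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
keep = [i + 1 for i, v in enumerate(vifs) if v < 10.0]
X_constrained = X[:, [0] + keep] if keep else X[:, [0, 1]]

# Regression on the collinearity-constrained design, then interval estimation.
model = sm.OLS(distortion, X_constrained).fit()
print(model.params)                       # fitted regression coefficients
print(model.conf_int(alpha=0.05))         # 95% confidence interval per coefficient
pred = model.get_prediction(X_constrained)
print(pred.conf_int(alpha=0.05)[:5])      # interval estimate of distortion degree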
Preferably, step S3 comprises the steps of:
Step S31, performing spatial distortion structural identification on the environment space distortion behavior probability data to obtain spatial distortion structural data;
Step S32, extracting features of the spatial distortion structural data to obtain spatial distortion structural feature data;
Step S33, performing distortion structure multidimensional analysis on the spatial distortion structural data according to the spatial distortion structural feature data to obtain distortion structure multidimensional feature data;
And step S34, performing light distortion behavior correction processing according to the distortion structure multidimensional feature data to obtain light distortion behavior correction data.
As an example of the present invention, referring to fig. 3, the step S3 in this example includes:
Step S31, performing spatial distortion structural identification on the environment space distortion behavior probability data to obtain spatial distortion structural data;
In the embodiment of the invention, the spatial distortion structural identification is carried out on the environmental spatial distortion behavior probability data. Firstly, the environmental space distortion behavior probability data are partitioned according to preset space grids, and each grid represents a space unit. And identifying and classifying the distortion behaviors of each space unit by using a space data analysis tool, and marking and structuring the distortion patterns in the data by setting threshold values and classification rules. Regions with similar distortion characteristics are identified through a spatial mapping algorithm, and are grouped and labeled to generate spatial distortion structured data. This data describes the regional distribution and structural characteristics of the warp behavior in space.
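As a non-limiting sketch of the gridding, thresholding and region labelling described above, the following Python example partitions a hypothetical probability field into grid cells, flags cells above an assumed threshold, and groups connected cells into labelled distortion regions using scipy.ndimage.

import numpy as np
from scipy import ndimage

# Hypothetical probability field: warp probability per preset spatial grid cell.
rng = np.random.default_rng(1)
prob_grid = rng.random((64, 64))           # environment partitioned into 64x64 cells

THRESHOLD = 0.8                            # assumed classification rule
warp_mask = prob_grid > THRESHOLD          # cells flagged as distortion behaviour

# Group connected cells with similar distortion behaviour and label each region.
labels, num_regions = ndimage.label(warp_mask)
sizes = ndimage.sum(warp_mask, labels, index=range(1, num_regions + 1))

# "Structured" output: one record per labelled distortion region.
structured = [
    {"region_id": i + 1, "cells": int(sizes[i]),
     "mean_prob": float(prob_grid[labels == i + 1].mean())}
    for i in range(num_regions)
]
print(num_regions, structured[:3])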
Step S32, extracting features of the spatial distortion structural data to obtain spatial distortion structural feature data;
In the embodiment of the invention, the spatial distortion structural data is subjected to characteristic extraction. Geometric features, including degree of distortion, shape features, boundary information, etc., are extracted for each marked spatial cell using a feature extraction algorithm. First, geometric properties of each space cell, such as twist angle, area variation, edge curvature, etc., are calculated. These geometric attributes are then quantized into feature vectors, forming spatially warped structural feature data. By using feature engineering tools, the extracted data will describe specific geometric features of the spatial warping, facilitating subsequent analysis.
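As a non-limiting sketch of the feature extraction step, the following Python example quantises each labelled distortion region into a simple geometric feature vector (area, mean and peak probability, centroid, bounding-box extent); the feature set and the synthetic grid are illustrative assumptions, not the exhaustive feature engineering of the method.

import numpy as np
from scipy import ndimage

def region_features(prob_grid, labels, region_id):
    # Feature vector for one labelled distortion region: area, mean/max
    # distortion probability, centroid and bounding-box extent (stand-ins
    # for degree of distortion, shape and boundary information).
    mask = labels == region_id
    cy, cx = ndimage.center_of_mass(mask)
    ys, xs = np.nonzero(mask)
    return np.array([mask.sum(),
                     prob_grid[mask].mean(),
                     prob_grid[mask].max(),
                     cy, cx,
                     ys.max() - ys.min() + 1,
                     xs.max() - xs.min() + 1], dtype=float)

# Hypothetical labelled grid (re-using the labelling idea from the previous sketch).
rng = np.random.default_rng(2)
prob_grid = rng.random((32, 32))
labels, num_regions = ndimage.label(prob_grid > 0.85)

feature_matrix = np.vstack([region_features(prob_grid, labels, i + 1)
                            for i in range(num_regions)])
print(feature_matrix.shape)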
Step S33, performing distortion structure multidimensional analysis on the spatial distortion structural data according to the spatial distortion structural feature data to obtain distortion structure multidimensional feature data;
In the embodiment of the invention, the spatial distortion structural data is subjected to distortion structure multidimensional analysis according to the spatial distortion structural characteristic data. And deep analysis is carried out on the extracted characteristic data by adopting a multidimensional data analysis method. The feature data is subjected to dimension reduction processing by using a Principal Component Analysis (PCA) or factor analysis method, and the multi-dimensional features are converted into main components or factors. And analyzing main influencing factors and correlations of space distortion by constructing a multidimensional space model to obtain multidimensional feature data of the distortion structure. This data reveals the complexity and major driving factors of the warp behavior in multidimensional space.
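As a non-limiting sketch of the multidimensional analysis, the following Python example standardises a hypothetical feature matrix and applies Principal Component Analysis with scikit-learn to obtain a reduced multidimensional feature representation; the matrix shape and component count are assumed.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per distortion region, one column per
# geometric feature (as produced in the previous sketch).
rng = np.random.default_rng(3)
feature_matrix = rng.normal(size=(120, 7))

# Standardise, then reduce the multidimensional features to principal components.
scaled = StandardScaler().fit_transform(feature_matrix)
pca = PCA(n_components=3)
components = pca.fit_transform(scaled)

print(pca.explained_variance_ratio_)   # share of variability captured per component
print(components[:5])                  # reduced multidimensional feature data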
And step S34, performing light distortion behavior correction processing according to the multidimensional characteristic data of the distortion structure to obtain light distortion behavior correction data.
In the embodiment of the invention, the light distortion behavior correction processing is performed according to the multidimensional characteristic data of the distortion structure. Firstly, a light distortion behavior model is constructed by utilizing the multidimensional feature data of the distortion structure obtained in the step S33, and the change rule of light in a distortion environment is described. The light distortion behaviour is then adjusted by a correction algorithm. The correction algorithm applies the distortion parameters in the multidimensional feature data to the light model, and the light is corrected in the distortion environment through calculation and adjustment. And finally, generating light distortion behavior correction data, wherein the data are used for correcting the distortion effect of original light in an actual environment, so that the co-located sensing precision of the laser radar and the camera is improved.
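As a non-limiting sketch of how a light distortion behaviour model could be fitted from the multidimensional feature data and then used for correction, the following Python example regresses a hypothetical angular distortion offset on the principal-component features and subtracts the predicted offset from a measured ray angle; all data and names are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: principal-component features of the distortion
# structure and the angular offsets (rad) they induce on observed rays.
rng = np.random.default_rng(4)
components = rng.normal(size=(200, 3))
observed_offset = 0.01 * components[:, 0] - 0.005 * components[:, 2] \
                  + rng.normal(0, 1e-4, 200)

# Light distortion behaviour model: predict the offset from the features,
# then correct a measurement by removing the predicted distortion.
model = LinearRegression().fit(components, observed_offset)

def correct_ray_angle(measured_angle, feature_vector):
    # Subtract the model-predicted distortion offset from a measured angle.
    return measured_angle - model.predict(feature_vector.reshape(1, -1))[0]

print(correct_ray_angle(0.523, components[0]))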
Preferably, step S34 includes the steps of:
Step S341, performing distortion curvature calculation on the distortion structure multidimensional feature data to obtain distortion structure curvature data;
Step S342, carrying out layered feature processing on the distortion structure multidimensional feature data according to the distortion structure curvature data to obtain distortion structure curvature layered data;
Step S343, performing layered light distortion index calculation according to the distortion structure curvature layered data to obtain layered light distortion index data;
Step S344, performing light dynamic parameter correction according to the layered light distortion index data to obtain layered light dynamic parameter correction data;
Step S345, performing light distortion behavior correction processing according to the layered light dynamic parameter correction data to obtain light distortion behavior correction data.
In the embodiment of the invention, distortion curvature calculation is performed on the distortion structure multidimensional feature data. First, the distortion structure multidimensional feature data are input into a curvature analysis tool, and the degree of bending of the light distortion region is obtained by calculating the local and global curvature at each data point. The curvature calculation uses the curvature formulas of differential geometry, which involve solving the second derivatives at each spatial point to determine its curvature in the multidimensional space. The results of these calculations are consolidated into distortion structure curvature data, providing a specific quantitative indicator of the bending of light in the distorted environment. Layered feature processing is then carried out on the distortion structure multidimensional feature data according to the distortion structure curvature data. A layering algorithm is adopted to stratify the curvature data. First, the data are divided into multiple layers, such as light, medium and heavy distortion, according to the magnitude of the curvature value. Statistical features of each layer, such as mean curvature and standard deviation, are then calculated. These layer features reflect the distribution of different degrees of distortion, and the resulting distortion structure curvature layered data are used for subsequent analysis. Layered light distortion index calculation is then carried out according to the distortion structure curvature layered data. For the data of each layer, a light distortion model is used to calculate its distortion index. The specific method includes performing model simulation on the light of each layer to determine the degree of distortion of light in the different curvature layers. The calculation includes analysis of optical behavior such as refraction, reflection and scattering of light within a particular distortion layer, yielding layered light distortion index data that describe the variation of light at different distortion levels. Light dynamic parameter correction is then performed according to the layered light distortion index data. First, the layered light distortion index data are input into a light dynamic correction algorithm. The algorithm adjusts the dynamic parameters of the light, including intensity, wavelength and phase, by comparing the actually measured light data with the light data predicted by the model. The parameters are adjusted through optimization techniques so that the behavior of the light in the actual environment is consistent with the expected behavior; the correction process includes linear and nonlinear optimization steps, and the obtained layered light dynamic parameter correction data are used for further light correction. Finally, light distortion behavior correction processing is performed according to the layered light dynamic parameter correction data. The corrected light dynamic parameters are applied to the light distortion behavior model, and the distortion effect of light in the real environment is corrected by reverse simulation and adjustment of the correction data. The process involves applying the layered light dynamic parameters to actual lidar and camera data and fine-tuning the light distortion behavior with a correction algorithm. The generated light distortion behavior correction data are used to improve the perception precision of the lidar and camera system and to ensure the accuracy and reliability of the measurement results.
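As a non-limiting sketch of the curvature calculation and layered feature processing described above, the following Python example evaluates the differential-geometry curvature of a sampled distortion profile from numerical first and second derivatives and then stratifies it into light, medium and heavy layers; the profile and the layer thresholds are assumed.

import numpy as np

# Hypothetical sampled distortion profile: lateral displacement of a ray
# (metres) along its propagation distance (metres).
x = np.linspace(0.0, 10.0, 500)
displacement = 0.02 * np.sin(0.8 * x) + 0.005 * x**2

# Curvature from differential geometry, using numerical first and second
# derivatives: kappa = |y''| / (1 + y'^2)^(3/2).
dy = np.gradient(displacement, x)
d2y = np.gradient(dy, x)
curvature = np.abs(d2y) / (1.0 + dy**2) ** 1.5

# Layering: light / medium / heavy distortion by curvature magnitude
# (assumed thresholds), then per-layer statistics.
edges = [0.005, 0.015]                      # hypothetical layer boundaries
layer = np.digitize(curvature, edges)       # 0 = light, 1 = medium, 2 = heavy
for k, name in enumerate(["light", "medium", "heavy"]):
    members = curvature[layer == k]
    if members.size:
        print(name, members.mean(), members.std())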
Preferably, the present invention also provides a laser radar camera co-located sensing system for performing the laser radar camera co-located sensing method as described above, the laser radar camera co-located sensing system comprising:
The viewing angle three-dimensional model construction module is used for extracting real-time environment light data through an airborne camera lens to obtain real-time environment light data, and constructing a mobile collaborative viewing angle three-dimensional model according to the real-time environment light data to obtain the mobile collaborative viewing angle three-dimensional model;
The environment space distortion behavior probability calculation module is used for carrying out light reflection intensity evaluation on the real-time environment light data based on the mobile collaborative viewing angle three-dimensional model to obtain light reflection intensity evaluation data, carrying out structural interference effect analysis on the light reflection intensity evaluation data to obtain light reflection structure interference effect data, and carrying out environment space distortion behavior probability calculation according to the light reflection structure interference effect data to obtain environment space distortion behavior probability data;
The correction processing module is used for carrying out spatial distortion structural identification on the environment space distortion behavior probability data to obtain spatial distortion structural data, extracting features of the spatial distortion structural data to obtain spatial distortion structural feature data, performing distortion structure multidimensional analysis according to the spatial distortion structural feature data to obtain distortion structure multidimensional feature data, and performing light distortion behavior correction processing according to the distortion structure multidimensional feature data to obtain light distortion behavior correction data;
The co-located perception model construction module is used for carrying out strategy gradient learning on the light distortion behavior correction data based on a strategy gradient algorithm to obtain light correction strategy gradient data; and constructing a light co-located sensing model according to the gradient data of the light correction strategy to obtain the light co-located sensing model, and sending the light co-located sensing model to the data cloud platform to execute the laser radar camera co-located sensing method.
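As a non-limiting sketch of the strategy (policy) gradient learning performed by the co-located perception model construction module, the following Python example uses a REINFORCE-style update of a Gaussian policy over a single correction parameter; the reward function, learning rate and optimum are hypothetical stand-ins for the light correction objective.

import numpy as np

rng = np.random.default_rng(5)
TRUE_BEST = 0.12                                  # assumed optimal correction value

def reward_fn(action):
    # Hypothetical reward: negative residual distortion after applying the correction.
    return -(action - TRUE_BEST) ** 2

mu, sigma, lr = 0.0, 0.05, 0.1                    # Gaussian policy mean/std, step size
for _ in range(1000):
    actions = mu + sigma * rng.standard_normal(16)      # sample candidate corrections
    rewards = reward_fn(actions)
    baseline = rewards.mean()                            # baseline for variance reduction
    # Gradient of log N(a; mu, sigma^2) with respect to mu is (a - mu) / sigma^2.
    grad_mu = np.mean((rewards - baseline) * (actions - mu) / sigma**2)
    mu += lr * grad_mu                                    # policy-gradient ascent step
print(mu)   # ends near TRUE_BEST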
The invention also provides a computer program which, when executed, implements the laser radar camera co-located sensing method according to any one of the above.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.