Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a visual evaluation method for urban road intersections based on point cloud data, which takes into account parameters such as the height, range, and viewing-angle position of the driver's line of sight.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
In a first aspect, an embodiment of the present invention provides a visual evaluation method for an urban road intersection based on point cloud data, wherein the three-dimensional sight-distance calculation method for the road intersection comprises the following steps:
S1, collecting point cloud data of a road intersection;
S2, performing road surface recognition and segmentation on the point cloud data, and distinguishing the road surface from other discrete plane points to obtain complete road surface point cloud data;
S3, establishing a plane grid lattice, taking the identified road surface points as basic interpolation references, acquiring the height information of the plane grid points by utilizing an adjacent point interpolation algorithm, and constructing a digital elevation model;
S4, quantizing the field-of-view concept as a sector formed by innumerable rays emitted from the driver's line-of-sight origin, the sector angle corresponding to the driver's viewing-angle range;
Constructing a coordinate system with the driver's line-of-sight origin as the origin, converting the three-dimensional coordinate system into a two-dimensional coordinate system, converting the two-dimensional rectangular coordinate system into a polar coordinate system, and determining the positions of obstacles blocking the driver's line of sight;
S5, analyzing the variable factors that may influence the field of view at the urban road intersection.
Further, in step S1, the process of collecting the point cloud data of the road intersection includes the following steps:
S11, fixing a small lidar on a rotary pan-tilt head, placing it at the center of the urban road intersection, and keeping the lidar mounting bracket fixed;
And S12, performing data splicing on the obtained point cloud file through the corresponding characteristic points to obtain complete urban road intersection point cloud data.
Further, in step S2, the process of performing the road surface recognition and segmentation processing on the point cloud data includes the following steps:
S21, rasterizing the original three-dimensional point cloud data, wherein each rasterized point of the point cloud data has a corresponding linear index value; data points at the same position of the same matrix share the same linear index value, and whenever a data point crosses into the next matrix column, its linear index value ζ steps;
S22, screening out the points with lower elevation in each columnar unit using a height threshold h_t, and analyzing the remaining data points by principal component analysis to obtain the principal directions of the columnar units;
S23, sorting the detected road points using the height histogram method and performing feature extraction on the obtained plane point cloud data, the feature variables comprising the number of point clouds per unit area and the standard deviation and median of adjacent-term differences along the x and y directions, which describe the distribution characteristics of the point cloud on the x–y plane, so as to obtain an array of observation values consistent with the number of columnar units; dividing the columnar units into three classes, namely uniform plane points, non-uniform plane points and pseudo plane points, using the unsupervised K-Means clustering algorithm, all variables being standardized to eliminate their variability so that their intervals fall within [0, 1];
S24, distinguishing the road surface from other discrete plane points using the super-voxel clustering method to obtain complete road surface point cloud data.
Further, in step S21, the process of rasterizing the original three-dimensional point cloud data includes the following steps:
S211, assuming that the spatial coordinates of the original three-dimensional point cloud data are (X, Y, Z)^T, creating a temporary array (x, y, z)^T calculated as x = ⌊X/ε_x⌋, y = ⌊Y/ε_y⌋, z = ⌊Z/ε_z⌋, where ε_x, ε_y, ε_z are the grid intervals along the X, Y and Z axes;
S212, assuming that (min x, max y)^T and (max x, min y)^T are the start point and the end point of the linear index calculation; after the point cloud data are rasterized, the horizontal distance D_x and the vertical distance D_y between the start point and the end point are calculated as D_x = max(x) − min(x) and D_y = max(y) − min(y), where D_x and D_y are both positive integers;
S213, establishing a zero-value matrix Ψ and an empty cell matrix Ω, each of size (1+D_y)×(1+D_x); calculating the horizontal distance d_x and the vertical distance d_y from any grid point to the start point of the data index; representing the elements in the zero-value matrix Ψ and the empty cell matrix Ω by row number 1+d_y and column number 1+d_x; and converting the row and column numbers into a linear index ζ through the formula ζ = d_x·(1+D_y) + d_y + 1;
When the point cloud data are partitioned, the points are arranged in ascending order of the linear index ζ, and whether a point is a step point is judged by calculating the difference Δζ between adjacent data points: if Δζ < 1 the point is not a step point; otherwise, the point cloud data between the step point ζ_j and the previous step point ζ_{j−1} are stored in the ζ_{j−1} cell of the cell matrix Ω, and the ζ_{j−1} element of the Ψ matrix is simultaneously assigned the value 1.
Further, in step S23, the processing procedure of the height histogram sorting method includes the following steps:
S231, setting the height δh of a single bin in the height histogram to 1–4 m, establishing a frequency histogram along the Z direction, and simultaneously establishing an empty cell array Φ;
S232, sorting the bins in descending order of frequency, and letting F = {F_1, F_2, …, F_i, …, F_n | 1 ≤ i ≤ n; i, n ∈ N+} be the height value corresponding to each bin after sorting;
S233, establishing a loop with i = 2 and k = 1, and storing the point cloud data corresponding to F_{i−1} into Φ{k};
S234, if F_i − F_{i−1} ≤ δh, merging the point cloud data corresponding to F_i and F_{i−1} into Φ{k}; otherwise, letting k = k + 1 and storing the point cloud data corresponding to F_i into Φ{k};
S235, letting i = i + 1; if i ≤ N_h, returning to step S234; otherwise, ending the loop.
Further, in step S24, the processing procedure of the super voxel clustering method is as follows:
The three-dimensional point cloud data in the corresponding cell matrix are obtained by searching with the index position information and spliced to obtain the final road surface super-voxel clustering result, thereby obtaining complete road surface point cloud data and distinguishing the road surface from other discrete plane points.
Further, in step S4, the process of converting a spatial three-dimensional coordinate point into a two-dimensional coordinate point through the cylindrical perspective projection model, thereby converting the three-dimensional problem into a two-dimensional problem, is as follows:
The line-of-sight origin is set as the origin with coordinates (x_m, y_m, z_m)^T; the vehicle advancing direction is set as the Y′ axis, the X′ axis is parallel to the XOY plane and perpendicular to the Y′ axis, and the Z′ axis is perpendicular to the X′OY′ plane, thereby constructing a three-dimensional spatial coordinate system;
A corresponding two-dimensional coordinate system u–v is constructed, whose origin is set on the Y′ axis at a distance R from the line-of-sight origin; the v axis is set parallel to the Z′ axis, and the u axis is set as a clockwise arc segment perpendicular to both the v axis and the Y′ axis;
In the two spatial coordinate systems, the coordinates of a two-dimensional point on the cylindrical surface are calculated as follows:
(x′, y′, z′)^T = λ·θ·((x, y, z)^T − (x_m, y_m, z_m)^T), θ = θ_x·θ_y·θ_z
u = R·arctan(x′/y′), v = R·z′/√(x′² + y′²)
Wherein:
(x, y, z)^T represents the coordinates of a point around the driver in geodetic space;
(x_m, y_m, z_m)^T represents the line-of-sight origin coordinates;
(x′, y′, z′)^T represents the coordinates converted into the X′Y′Z′ coordinate system;
λ represents the scale factor between the two reference frames, taking the value 1.0;
θ represents the rotation matrix of the line-of-sight rigid transformation;
θ_x, θ_y, θ_z represent the rotation matrices around the X, Y and Z axes;
α_m represents the rotation angle around the Z axis;
γ_m represents the rotation angle around the X axis;
R represents the radius of the cylindrical surface, taking the value 3.0;
(u, v)^T represents the coordinates on the cylindrical surface;
α_m and γ_m respectively represent the azimuth angle and the vertical angle of the line-of-sight origin, the direction of the Y′ axis coinciding with the advancing direction of the vehicle;
In the locally constructed three-dimensional space, the coordinate calculation formulas of all the sight-line end points are as follows:
η = θ/θ_res + 1
[ρ] = {ρ_1, ρ_2, …, ρ_j, …, ρ_{η−1}, ρ_η | ρ_j = D, 1 ≤ j ≤ η}
[y′_e, x′_e] = PolarToCartesian([θ, ρ])
[z′_e] = {z_1, z_2, …, z_j, …, z_{η−1}, z_η | z_j = D·tan θ_v, 1 ≤ j ≤ η}
Wherein:
[x′_e, y′_e, z′_e]^T represents the three-dimensional coordinates of a sight-line end point in the local three-dimensional coordinate system;
θ represents the driver's viewing angle;
θ_res represents the angular spacing between sight lines;
j represents the sight-line index, with 1 ≤ j ≤ η, where η is the number of sight lines;
The area where the projection point coordinates are located is regarded as a square area of side length W_i centered on the projection point. A set is established to store the two-dimensional coordinates of all sight-line end points; the coordinates of the target point are set as Ob_i = (x_i, y_i, z_i)^T, and the KD-tree algorithm with the Chebyshev distance and parameter W_i/2 is used to search for the points within the neighborhood;
A set Ψ_i is established to store the two-dimensional coordinates of all points in the square area where the projection point is located, every two-dimensional point of the set Ψ_i having corresponding three-dimensional coordinates; κ_i is set to represent the three-dimensional spatial distance between the line-of-sight origin and the target point, and any point whose three-dimensional spatial distance from the line-of-sight origin exceeds κ_i is excluded from the set Ψ_i.
The beneficial effects of the invention are as follows:
1. According to the invention, a small lidar is fixed on a pan-tilt head and the point cloud data are collected over multiple rotations and spliced, which overcomes the small viewing angle and the incomplete and inaccurate data collection of a small lidar.
2. The linear-index-based method for dividing point cloud data into subunits improves the efficiency of data blocking and indexing.
3. The road surface points in the point cloud data are segmented and extracted rapidly and efficiently by the preliminary filtering method, the height histogram sorting method and the super-voxel clustering method.
4. According to the invention, a digital elevation model is adopted, the spatial coordinate system and the local plane coordinate system are converted according to the cylindrical perspective projection model, a field-of-view model is constructed, the positions of obstacles affecting the driver's visibility are analyzed and determined, and the visibility of the intersection is analyzed.
5. The three-dimensional sight-distance calculation method improves the efficiency of point cloud data processing and calculation and overcomes the difficulty of processing massive point cloud data.
6. According to the invention, the transformation from a spatial three-dimensional coordinate point to a two-dimensional coordinate point is realized through the cylindrical perspective projection model, and the three-dimensional problem is converted into a two-dimensional problem.
Detailed Description
The invention will now be described in further detail with reference to the accompanying drawings.
It should be noted that terms such as "upper", "lower", "left", "right", "front" and "rear" are used for descriptive purposes only and are not intended to limit the scope in which the invention may be practiced; the relative relationships they denote may be altered or adjusted without materially changing the technical content of the invention.
Example 1
Fig. 1 is a schematic structural diagram of the urban road intersection visual evaluation method based on point cloud data. Fig. 2 is a schematic diagram of the digital elevation model construction method of the present invention. This embodiment is suitable for a three-dimensional sight-distance calculation method for an urban road intersection based on point cloud data, the method comprising the following steps:
S1, collecting point cloud data of a road intersection.
S2, performing road surface recognition and segmentation on the point cloud data, and distinguishing the road surface from other discrete plane points to obtain complete road surface point cloud data.
And S3, establishing a plane grid lattice, taking the identified road surface points as basic interpolation references, acquiring the height information of the plane grid points by utilizing an adjacent point interpolation algorithm, and constructing a digital elevation model.
S4, quantizing the field-of-view concept as a sector formed by innumerable rays emitted from the driver's line-of-sight origin, the sector angle corresponding to the driver's viewing-angle range; converting the spatial three-dimensional coordinate points into two-dimensional coordinate points through the cylindrical perspective projection model, thereby converting the three-dimensional problem into a two-dimensional problem.
A coordinate system is constructed with the driver's line-of-sight origin as the origin; the three-dimensional coordinate system is converted into a two-dimensional coordinate system, the two-dimensional rectangular coordinate system is converted into a polar coordinate system, and the positions of obstacles blocking the driver's line of sight are determined.
S5, analyzing the variable factors that may influence the field of view at the urban road intersection.
1. Collecting point cloud data of road intersections
In the step S1, the process of collecting the point cloud data of the road intersection includes the following sub-steps:
S11, fixing the small lidar on a rotary pan-tilt head, placing it at the center of the urban road intersection, and keeping the lidar mounting bracket fixed; subfiles of the overall point cloud data of the intersection are acquired through multiple data acquisitions with the rotary pan-tilt head, so that the overlap ratio between two adjacent point cloud files is 1/3.
And S12, performing data splicing on the obtained point cloud file through the corresponding characteristic points to obtain complete urban road intersection point cloud data.
2. Road surface recognition and segmentation processing for point cloud data
In step S2, the process of performing the road surface recognition segmentation processing on the point cloud data includes the following sub-steps:
S21, rasterizing the original three-dimensional point cloud data. Each rasterized point of the point cloud data has a corresponding linear index value; data points at the same position of the same matrix share the same linear index value, and whenever a data point crosses into the next matrix column, its linear index value ζ steps. The specific rasterization steps are as follows:
First, the original three-dimensional point cloud data are rasterized. Assuming the spatial coordinates of the original three-dimensional point cloud data are (X, Y, Z)^T, a temporary array (x, y, z)^T is created and calculated by the following formula:
x = ⌊X/ε_x⌋, y = ⌊Y/ε_y⌋, z = ⌊Z/ε_z⌋
Wherein ⌊ ⌋ represents the rounding (floor) symbol, and ε_x, ε_y, ε_z represent the grid intervals along the X, Y and Z axes respectively; the smaller the interval, the smaller the block size, and vice versa. The partitioning process is similar in different dimensions and is therefore explained here by taking the division of two-dimensional columnar cells as an example.
Assuming that (min x, max y)^T and (max x, min y)^T are respectively the start point and the end point of the linear index calculation, after the point cloud data are rasterized, the horizontal distance D_x and the vertical distance D_y between the start point and the end point are obtained as D_x = max(x) − min(x) and D_y = max(y) − min(y); both D_x and D_y are positive integers. A zero-value matrix Ψ and an empty cell matrix Ω, each of size (1+D_y)×(1+D_x), are established simultaneously. The horizontal distance d_x and the vertical distance d_y from any grid point to the start point of the data index are calculated in the same way. Thus, in computer memory, the elements of the zero-value matrix Ψ and the empty cell matrix Ω can be addressed by row number 1+d_y and column number 1+d_x (0 ≤ d_x ≤ D_x, 0 ≤ d_y ≤ D_y). To facilitate data indexing, the row and column numbers are converted into a linear index ζ through the formula ζ = d_x·(1+D_y) + d_y + 1. Each grid point of the rasterized point cloud data has a corresponding linear index value; data points at the same position of the same matrix share the same linear index value, and whenever a data point crosses into the next matrix column, its ζ steps; such a point is called a step point and is denoted ζ_j.
When the point cloud data are partitioned, the points are first arranged in ascending order of the linear index ζ, and then whether a point is a step point is judged by calculating the difference Δζ between adjacent data points: if Δζ < 1 the point is not a step point; otherwise it is. All step points ζ_j are processed in the same way: the point cloud data between the step point ζ_j and the previous step point ζ_{j−1} are stored in the ζ_{j−1} cell of the cell matrix Ω, and the ζ_{j−1} element of the Ψ matrix is simultaneously assigned the value 1, thereby realizing the blocking of the point cloud data.
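The rasterization, linear-index computation and step-point blocking described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the grid interval `eps`, the demonstration points, and the use of a dictionary in place of the cell matrix Ω are assumptions.

```python
import numpy as np

def block_by_linear_index(points, eps=0.5):
    """Partition (N, 3) points into grid cells via zeta = d_x*(1+D_y) + d_y + 1."""
    xy = np.floor(points[:, :2] / eps).astype(int)     # rasterize X and Y
    d = xy - xy.min(axis=0)                            # distances to the index start point
    Dy = d[:, 1].max()                                 # vertical extent of the grid
    zeta = d[:, 0] * (1 + Dy) + d[:, 1] + 1            # linear index of each point
    order = np.argsort(zeta, kind="stable")            # ascending order of zeta
    zeta, pts_sorted = zeta[order], points[order]
    steps = np.flatnonzero(np.diff(zeta) >= 1) + 1     # step points: delta zeta >= 1
    cells = {}                                         # sparse stand-in for the cell matrix
    for seg in np.split(np.arange(len(zeta)), steps):
        cells[int(zeta[seg[0]])] = pts_sorted[seg]     # store each block under its index
    return cells

pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.3, 0.1], [0.9, 0.1, 0.0], [0.1, 0.9, 2.0]])
cells = block_by_linear_index(pts)
print(len(cells))  # → 3 occupied grid cells
```

Sorting once and splitting at the step points avoids a per-point cell lookup, which is the efficiency gain the linear index is meant to provide.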
S22, screening out the points with lower elevation in each columnar unit using a height threshold h_t, and analyzing the remaining data points by principal component analysis to obtain the principal directions of the columnar units.
FIG. 3 is a schematic diagram showing the effect of the preliminary filtering method of the present invention. In this embodiment the unit grid is divided at 0.5 m × 0.5 m. Within each columnar unit the height range of the road point cloud data is extremely limited and approximately equal, so the height threshold h_t can first be used to screen out the points with lower elevation in the columnar unit. The remaining data points are then analyzed by principal component analysis to obtain their principal directions. Principal component analysis is a common statistical method: a group of possibly correlated variables is converted, through an orthogonal matrix transformation, into linearly uncorrelated variables, namely the principal components. Specifically, in this embodiment, (x, y, z)^T is set as the coordinates of any point in the columnar unit and (λ, u, v)^T as its converted variables. For a point cloud with good planar characteristics, the principal direction v coincides with the normal vector of the plane formed by the point cloud, i.e., it is perpendicular to the plane, so compared with non-feature points such a point cloud has a small mean absolute error along the v direction. The mean absolute error is simply the mean of the absolute deviations of all individual data from their arithmetic mean; for a set of m data it is calculated as MAE = (1/m)·Σ|x_i − x̄|. Since the elevation of ground points lies at a lower position in all the point cloud data, the elevation threshold is set to 0.2 m in this embodiment, i.e., each columnar unit has size 0.5 m × 0.5 m × 0.2 m.
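The per-cell screening and principal-direction check of step S22 can be sketched as below. The eigen-decomposition route, the synthetic near-planar points, and the threshold values are illustrative assumptions.

```python
import numpy as np

def principal_direction_mae(points, h_t=0.2):
    """Keep low points, then return (principal direction v, MAE of projections on v)."""
    low = points[points[:, 2] <= points[:, 2].min() + h_t]  # height-threshold screening
    centered = low - low.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(centered.T))     # PCA via the covariance matrix
    v = eigvec[:, 0]                   # smallest-variance axis ~ plane normal direction
    proj = centered @ v                # coordinates of the points along v
    return v, np.mean(np.abs(proj - proj.mean()))           # mean absolute error along v

# Synthetic 0.5 m x 0.5 m cell of near-planar "road" points (an assumption)
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(0, 0.5, (200, 2)),
                         0.01 * rng.standard_normal(200)])
v, mae = principal_direction_mae(plane)
```

For a well-formed road patch the recovered v is nearly vertical and the MAE along v is small, which is exactly the planarity cue the text describes.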
S23, sorting and detecting the road surface points using the height histogram method, and performing feature extraction on the obtained plane point cloud data. The feature variables comprise the number of point clouds per unit area and the standard deviation and median of adjacent-term differences along the x and y directions, which describe the distribution characteristics of the point cloud on the x–y plane; an array of observation values consistent with the number of columnar units is thereby obtained. The columnar units are divided into three classes, namely uniform plane points, non-uniform plane points and pseudo plane points, using the unsupervised K-Means clustering algorithm; all variables are standardized to eliminate their variability so that their intervals fall within [0, 1]; the non-uniform plane points and pseudo plane points are eliminated, and the uniform plane points are merged into the columnar units after the preliminary filtering to obtain the optimized plane point cloud data.
Fig. 4 is a schematic diagram of the structure of the height histogram method of the present invention. In step S23 of this embodiment, the processing procedure of the height histogram sorting method includes the following steps:
S231, setting the height δh of a single bin in the height histogram to 1–4 m, establishing a frequency histogram along the Z direction, and simultaneously establishing an empty cell array Φ.
S232, sorting the bins in descending order of frequency, and letting F = {F_1, F_2, …, F_i, …, F_n | 1 ≤ i ≤ n; i, n ∈ N+} be the height value corresponding to each bin after sorting.
S233, establishing a loop with i = 2 and k = 1, and storing the point cloud data corresponding to F_{i−1} into Φ{k}.
S234, if F_i − F_{i−1} ≤ δh, merging the point cloud data corresponding to F_i and F_{i−1} into Φ{k}; otherwise, letting k = k + 1 and storing the point cloud data corresponding to F_i into Φ{k}.
S235, letting i = i + 1; if i ≤ N_h, returning to step S234; otherwise, ending the loop.
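The bin-merging loop of steps S231–S235 can be sketched as follows. The bin height `delta_h`, the sample elevations, the dropping of empty bins, and the use of an absolute height difference in the merge test are illustrative assumptions.

```python
import numpy as np

def histogram_sort(z, delta_h=1.0):
    """Group point indices into Phi cells by descending-frequency height bins."""
    edges = np.arange(z.min(), z.max() + delta_h, delta_h)
    bins = np.clip(np.digitize(z, edges) - 1, 0, len(edges) - 2)
    counts = np.bincount(bins, minlength=len(edges) - 1)
    order = np.argsort(-counts)            # sort bins by descending frequency (S232)
    order = order[counts[order] > 0]       # keep only occupied bins (an assumption)
    F = edges[order]                       # height value of each sorted bin
    phi = {1: list(np.flatnonzero(bins == order[0]))}
    k = 1
    for i in range(1, len(F)):             # loop of steps S233-S235
        idx = list(np.flatnonzero(bins == order[i]))
        if abs(F[i] - F[i - 1]) <= delta_h:   # merge bins of nearby height (S234)
            phi[k] += idx
        else:
            k += 1
            phi[k] = idx
    return phi

phi = histogram_sort(np.array([0.0, 0.1, 0.2, 5.0, 5.1]))
```

With the sample elevations, the three low points form the dominant cell and the two high points land in a second cell, mirroring how the ground plane dominates the histogram.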
Considering that the algorithm processes in the columnar units are similar and mutually independent, this embodiment makes full use of the parallel computing advantage of a multithreaded processor to improve the efficiency of automatic detection of plane points in each columnar unit. Considering the differences between data points in different columnar units, this embodiment also performs feature extraction on the plane point cloud data obtained by the algorithm. The extracted feature variables mainly comprise the standard deviation and median of adjacent-term differences along the x and y directions (the values are first arranged in ascending order during calculation), which mainly describe the distribution characteristics of the point cloud on the x–y plane, since road points are distributed continuously and regularly rather than unevenly. In addition, considering that the road points in some columnar units are not completely uniformly distributed, the point cloud density, i.e., the number of point clouds per unit area, is also taken as one of the feature variables. The number of point clouds per unit area and the point cloud density are calculated by partitioning and averaging: each columnar unit is divided equally into 10 × 10 = 100 subregions, and the point cloud density is taken as the ratio of the number of points to the area of the subregions that contain points.
After the feature vectors are extracted, an array of observation values equal in number to the columnar units is obtained, and the columnar units can then be divided into three classes, namely uniform plane points, non-uniform plane points and pseudo plane points, using the unsupervised K-Means clustering algorithm. Pseudo plane points are points misclassified as plane points because of their small height interval in the Z direction; they are in fact linear structures. First, all variables must be standardized to eliminate their variability, so that their intervals all fall within [0, 1]. The non-uniform plane points and pseudo plane points are then eliminated, and the uniform plane points are merged into the columnar units after the preliminary filtering to obtain the optimized plane point cloud data.
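The normalization and three-class grouping described above can be sketched as follows. The two synthetic feature columns (point count, adjacent-difference spread) and the simple evenly spaced center initialization are assumptions; a library K-Means would normally be used.

```python
import numpy as np

def normalize(features):
    """Min-max scale each feature column into [0, 1], as required before clustering."""
    fmin, fmax = features.min(axis=0), features.max(axis=0)
    return (features - fmin) / np.where(fmax > fmin, fmax - fmin, 1)

def kmeans(X, k=3, iters=50):
    """Plain Lloyd's algorithm with evenly spaced sample initialization (an assumption)."""
    centers = X[:: max(1, len(X) // k)][:k]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Synthetic per-cell features: [point count, spread of adjacent differences]
feats = np.array([[100, 0.010], [110, 0.012],   # uniform plane cells
                  [10, 0.50], [12, 0.55],       # pseudo plane cells
                  [50, 0.20], [55, 0.22]])      # non-uniform plane cells
X = normalize(feats)
labels = kmeans(X)
```

Only the class containing the dense, low-spread cells would be kept as uniform plane points; the other two classes are discarded as described in the text.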
S24, distinguishing the road surface from other discrete plane points using the super-voxel clustering method to obtain complete road surface point cloud data: the three-dimensional point cloud data in the corresponding cell matrix are obtained by searching with the index position information and spliced to obtain the final road surface super-voxel clustering result. The super-voxel-based clustering method can effectively distinguish the road surface from other discrete plane points and obtain complete road surface point cloud data.
3. Construction of digital elevation model
In this embodiment, the digital elevation model construction includes the sub-steps of:
A planar lattice is established on the plane with (min x, max y)^T and (max x, min y)^T as corner points and 0.2 m as the spacing;
The identified road surface points are taken as the basic interpolation reference, and the height information of the planar lattice points is obtained using the neighboring-point interpolation algorithm. When interpolating under the digital elevation model, the adjacent lattice points can be located from the plane position of an observation point, and the height information at that position can be obtained rapidly, which facilitates the subsequent visibility analysis.
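The lattice construction and nearest-neighbor height assignment can be sketched as follows. The brute-force search and the four sample road points are assumptions; a KD tree would scale better for real intersections.

```python
import numpy as np

def build_dem(road_pts, spacing=0.2):
    """Build a planar lattice and take each node's height from the nearest road point."""
    xmin, ymin = road_pts[:, :2].min(axis=0)
    xmax, ymax = road_pts[:, :2].max(axis=0)
    gx, gy = np.meshgrid(np.arange(xmin, xmax + spacing, spacing),
                         np.arange(ymin, ymax + spacing, spacing))
    nodes = np.column_stack([gx.ravel(), gy.ravel()])
    # index of the nearest road point for every lattice node (O(N*M) brute force)
    nearest = np.argmin(((nodes[:, None, :] - road_pts[None, :, :2]) ** 2).sum(-1), axis=1)
    return np.column_stack([nodes, road_pts[nearest, 2]])   # (x, y, interpolated z)

road = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.2],
                 [0.0, 1.0, 1.1], [1.0, 1.0, 1.3]])
dem = build_dem(road)
```

Each lattice node then answers height queries in constant time during the later visibility analysis, which is the point of precomputing the DEM.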
4. Converting a three-dimensional problem to a two-dimensional problem
Fig. 5 is a schematic structural diagram of the cylindrical perspective projection calculation of the present invention. Fig. 6 is a schematic diagram of the sight blind zone of the present invention. In step S4 of this embodiment, the process of converting a spatial three-dimensional coordinate point into a two-dimensional coordinate point through the cylindrical perspective projection model, thereby converting the three-dimensional problem into a two-dimensional problem, is as follows:
In a two-dimensional plane, the field-of-view concept is quantized into a sector formed by innumerable rays emitted from the driver's line-of-sight origin. The sight distance from the line-of-sight origin is the sector radius D and the driver's viewing angle is θ; both D and θ are set as variable parameters for a more comprehensive assessment. Within this sector, an obstacle is detected when a line of sight is obstructed; θ is set to 120°.
The transformation from a spatial three-dimensional coordinate point to a two-dimensional coordinate point is realized through the cylindrical perspective projection model, converting the three-dimensional problem into a two-dimensional problem. Two local coordinate systems are first established to obtain the coordinate data of the cylindrical surface points. The line-of-sight origin is set as the origin with coordinates (x_m, y_m, z_m)^T; the vehicle advancing direction is set as the Y′ axis, the X′ axis is parallel to the XOY plane and perpendicular to the Y′ axis, and the Z′ axis is perpendicular to the X′OY′ plane. On the basis of this three-dimensional space, another two-dimensional coordinate system u–v can be constructed, whose origin is set on the Y′ axis at a distance R from the line-of-sight origin; the v axis is parallel to the Z′ axis, and the u axis is set as a clockwise arc perpendicular to both the v axis and the Y′ axis. In the two spatial coordinate systems, the coordinates of a two-dimensional point on the cylindrical surface can be calculated by:
(x′, y′, z′)^T = λ·θ·((x, y, z)^T − (x_m, y_m, z_m)^T), θ = θ_x·θ_y·θ_z
u = R·arctan(x′/y′), v = R·z′/√(x′² + y′²)
Wherein (x, y, z)^T represents the coordinates of a point around the driver in geodetic space; (x_m, y_m, z_m)^T represents the line-of-sight origin coordinates; (x′, y′, z′)^T represents the coordinates converted into the X′Y′Z′ coordinate system; λ represents the scale factor between the two reference frames, taking the value 1.0; θ represents the rotation matrix of the line-of-sight rigid transformation; θ_x, θ_y, θ_z represent the rotation matrices around the X, Y and Z axes; α_m represents the rotation angle around the Z axis and γ_m the rotation angle around the X axis; R represents the radius of the cylindrical surface, taking the value 3.0; (u, v)^T represents the coordinates on the cylindrical surface; α_m and γ_m respectively represent the azimuth angle and the vertical angle of the line-of-sight origin, the direction of the Y′ axis coinciding with the advancing direction of the vehicle.
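Since the matrix equation itself is not reproduced in this text, the following sketch uses a standard-form rigid transformation plus cylindrical mapping consistent with the listed parameters (λ = 1.0, R = 3.0, α_m about Z, γ_m about X); the composition order of the rotations is an assumption.

```python
import numpy as np

def to_cylinder(p, origin, alpha_m=0.0, gamma_m=0.0, lam=1.0, R=3.0):
    """Map a geodetic point to (u, v) on the cylinder around the line-of-sight origin."""
    ca, sa = np.cos(alpha_m), np.sin(alpha_m)
    cg, sg = np.cos(gamma_m), np.sin(gamma_m)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])  # rotation about Z
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])  # rotation about X
    # rigid transformation into the X'Y'Z' frame (scale lam assumed 1.0)
    x, y, z = lam * (Rx @ Rz @ (np.asarray(p, float) - np.asarray(origin, float)))
    u = R * np.arctan2(x, y)          # arc coordinate: azimuth scaled by the radius
    v = R * z / np.hypot(x, y)        # height coordinate on the cylindrical surface
    return u, v

u, v = to_cylinder([0, 10, 0], [0, 0, 0])   # point straight ahead maps to (0, 0)
```

A point straight ahead on the Y′ axis projects to the u–v origin, points to the right get positive u, and higher points get positive v, matching the geometry of Fig. 5.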
In the locally constructed three-dimensional space, the coordinates of all the sight-line end points are calculated as follows:
η = θ/θ_res + 1
[ρ] = {ρ_1, ρ_2, …, ρ_j, …, ρ_{η−1}, ρ_η | ρ_j = D, 1 ≤ j ≤ η}
[y′_e, x′_e] = PolarToCartesian([θ, ρ])
[z′_e] = {z_1, z_2, …, z_j, …, z_{η−1}, z_η | z_j = D·tan θ_v, 1 ≤ j ≤ η}
Wherein, each parameter in the formula has the following meaning:
[x′_e, y′_e, z′_e]^T represents the three-dimensional coordinates of a sight-line end point in the local three-dimensional coordinate system; θ represents the driver's viewing angle; θ_res represents the angular spacing between sight lines; j represents the sight-line index, with 1 ≤ j ≤ η, where η is the number of sight lines.
By the above calculation, two-dimensional coordinates of points representing the road environment and all line-of-sight end points can be obtained.
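The endpoint formulas above can be sketched as follows. The symmetric spread of the rays about the advancing direction and the parameter values (θ = 120°, θ_res = 1°, D = 50 m, θ_v = 0°) are illustrative assumptions.

```python
import numpy as np

def sight_endpoints(theta_deg=120.0, theta_res_deg=1.0, D=50.0, theta_v_deg=0.0):
    """Generate eta = theta/theta_res + 1 sight-line end points of radius D."""
    eta = int(theta_deg / theta_res_deg) + 1                   # number of sight lines
    ang = np.radians(np.linspace(-theta_deg / 2, theta_deg / 2, eta))
    x_e = D * np.sin(ang)                                      # polar -> Cartesian
    y_e = D * np.cos(ang)                                      # Y' is the advancing direction
    z_e = np.full(eta, D * np.tan(np.radians(theta_v_deg)))    # z = D * tan(theta_v)
    return np.column_stack([x_e, y_e, z_e])

ends = sight_endpoints()
print(ends.shape)  # → (121, 3)
```

All endpoints lie on the sector arc at distance D, and the middle ray points straight along the vehicle advancing direction.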
The next step is to obtain the coordinates of the projection point shown in Fig. 5, namely the projection of the target point onto the cylindrical surface. The area where the projection point coordinates are located can be regarded as a square area of side length W_i centered on the projection point. A set is established to store the two-dimensional coordinates of all sight-line end points; the coordinates of the target point are set as Ob_i = (x_i, y_i, z_i)^T, and the KD-tree algorithm with the Chebyshev distance and parameter W_i/2 is used to search for the points within the neighborhood. A set Ψ_i is established to store the two-dimensional coordinates of all points in the square area where the projection point is located; every two-dimensional point of the set Ψ_i has corresponding three-dimensional coordinates. Let κ_i denote the three-dimensional spatial distance between the line-of-sight origin and the target point; any point whose three-dimensional spatial distance from the line-of-sight origin exceeds κ_i is excluded from the set Ψ_i, which can greatly improve the efficiency of the search along the line of sight.
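The square-neighborhood query and κ_i distance cut can be sketched with a KD tree under the Chebyshev metric (`p = inf` gives a square search window of half-width W_i/2). All sample coordinates are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def occluded_endpoints(end_uv, end_xyz, proj_uv, origin, target, W_i):
    """Return indices of sight-line end points inside the square window around the
    projection point, keeping only points no farther in 3-D than the target."""
    tree = cKDTree(end_uv)
    # Chebyshev (p=inf) ball of radius W_i/2 is exactly the square neighborhood
    idx = tree.query_ball_point(proj_uv, W_i / 2, p=np.inf)
    kappa = np.linalg.norm(np.asarray(target, float) - np.asarray(origin, float))
    return [i for i in idx
            if np.linalg.norm(end_xyz[i] - np.asarray(origin, float)) <= kappa]

end_uv = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])     # cylinder coordinates
end_xyz = np.array([[0.0, 5.0, 0.0], [0.5, 5.0, 0.0], [5.0, 5.0, 0.0]])
keep = occluded_endpoints(end_uv, end_xyz, [0.0, 0.0], [0, 0, 0], [0, 6, 0], W_i=0.5)
```

Only the two endpoints whose cylinder coordinates fall inside the square and whose 3-D distance does not exceed κ_i survive, which is what makes the subsequent per-ray occlusion test cheap.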
In this embodiment, the field-of-view modeling process mainly comprises constructing a coordinate system with the driver's line-of-sight origin as the origin, converting the three-dimensional coordinate system into a two-dimensional coordinate system, and converting the two-dimensional rectangular coordinate system into a polar coordinate system. The positions of obstacles blocking the driver's line of sight are determined through these two coordinate conversions; when the coordinate systems are converted on the basis of the specific matrix equations, the coordinates in the polar coordinate system can be mapped back into the real three-dimensional spatial coordinate system, so that the three-dimensional field-of-view model in that coordinate system can be constructed.
Fig. 7 is a schematic diagram showing the correspondence between the field of view and the sight-distance curve according to the present invention. With the field-of-view modeling and the corresponding sight-distance curve obtained in this embodiment, the method can rapidly locate obstacles and quantitatively calculate the sight distance, thereby obtaining the visibility evaluation of the urban road intersection.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above examples; all technical solutions falling within the concept of the present invention belong to its protection scope. It should be noted that modifications and adaptations that do not depart from the principles of the invention are also to be regarded as within the protection scope of the invention.