Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions
Technical Field
The invention belongs to the technical field of intelligent driving, and particularly relates to an intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions.
Background
With the improvement of living standards, driving has become the travel mode chosen by most people, but this has also led to frequent traffic accidents, among which rear-end collisions account for a high proportion.
To handle rear-end collision events, vehicle recorders (dashboard cameras), road condition monitoring and the like have become necessary devices for traffic management and control. In the prior art, rear-end collision prevention depends too heavily on calibration against road boundaries or other large vehicles and is therefore greatly limited by road type, so an intelligent regulation and control method for preventing rear-end collisions under multiple road conditions is needed.
Disclosure of Invention
The invention provides an intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions, which aims to prevent rear-end collision events in advance under multiple road conditions by intelligently regulating the vehicle speed, thereby greatly reducing the occurrence rate of such events.
The intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions according to the invention comprises the following steps:
S1, monitoring the speed of the preceding vehicle and its relative distance in real time, based on optical principles and using the imaging device of a vehicle recorder, to obtain the target vehicle coordinates and relative speed;
S2, calculating the friction coefficient of the vehicle under the environmental parameters;
S3, establishing a coordinate-type neural network model that takes as inputs the transverse coordinate value i and longitudinal coordinate value j of the preceding vehicle in the two-dimensional plane, the real-time speed V of the preceding vehicle relative to the own vehicle, and the rolling friction coefficient μ of the vehicle, and outputs the probability of a rear-end collision event;
and S4, setting thresholds on the outputs of the coordinate-type neural network to issue early warnings and to intelligently regulate and control the vehicle.
Further, the step S1 includes:
S11, setting up the three-dimensional coordinate system of the camera and the two-dimensional image plane coordinate system formed after shooting:
The position of the camera is taken as the coordinate origin. The x-axis and z-axis of the three-dimensional coordinate system lie in the plane of the road on which the vehicle runs, with the x-axis perpendicular to the direction of travel, the y-axis perpendicular to the road surface, and the z-axis parallel to the direction of travel. The camera lies on an axis of this three-dimensional coordinate system. The optical axis of the camera lies in the coordinate plane formed by the y-axis and z-axis; the angle between the camera and the road plane is θ, and the distance from the camera along the optical axis direction to the road plane is ε, where θ and ε are adjustable variables that can be set according to the actual condition of the vehicle.
Let h* be the installation height of the camera above the ground, a known quantity, and let O(x*, h*, z*) represent the position coordinates of any point on the road surface that the camera can photograph.
A two-dimensional image coordinate system formed after shooting is set for the camera, with the optical center G of the camera as the coordinate origin and with a transverse coordinate axis i* and a longitudinal coordinate axis j*, where the i* axis is parallel to the x-axis and the j* axis is perpendicular to both the i* axis and the optical axis. The coordinates of a point in the two-dimensional image plane formed after imaging are denoted O'(i, j).
The mapping from O(x*, h*, z*) to O'(i, j) can be expressed by the following formula:
In the above formula, d represents the focal length of the camera.
O'(i, j) can also be expressed in terms of O(x*, h*, z*), by the following formula:
S12, estimating in real time, from the image captured by the camera, the distance between the own vehicle and the preceding vehicle and the speed of the preceding vehicle relative to the own vehicle:
Treating the preceding vehicle as a point, its position on the road surface can be represented by O(x*, h*, z*). The center point of the bottom edge of the vehicle's silhouette in the image is calibrated and marked as the point representing the position of the preceding vehicle, so the two-dimensional image plane coordinates of this point after shooting can likewise be represented by O'(i, j).
The distance between the own vehicle and the preceding vehicle is denoted d; from the imaging relation of the camera we obtain:
α represents the acute angle between the straight line GO and the road surface, i.e., the z-axis.
The distance d to the preceding vehicle and the corresponding speed V of the preceding vehicle relative to the own vehicle can also be measured in real time by laser ranging.
From the above, the following can then be obtained:
S13, calculating the transverse and longitudinal distances between the own vehicle and the preceding vehicle, i.e., the values of x* and z*, from the calculated α, giving the following formulas:
From the above formulas, the two-dimensional image plane coordinates O'(i, j) of the preceding vehicle captured by the vehicle recorder can be obtained and expressed as follows:
The transverse coordinate value i and longitudinal coordinate value j of the preceding vehicle in the two-dimensional plane, together with the real-time speed V of the preceding vehicle relative to the own vehicle, are taken as a group of parameters and used to construct the neural network model in step S3. A sketch of the underlying geometry is given below.
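Because the projection formulas referenced above are not reproduced in this text, the following Python sketch illustrates the standard flat-road pinhole-camera geometry that such a mapping typically relies on. The function names, the symbol f for the focal length (used here to avoid clashing with the inter-vehicle distance d), and the simplified relations are assumptions for illustration, not the invention's exact formulas.

```python
import math

def image_to_ground(i, j, h_star, f, theta):
    """Recover road-plane coordinates (x*, z*) from image coordinates
    O'(i, j), under a flat-road assumption, for a camera at height h*
    tilted down by theta with focal length f."""
    # alpha: acute angle between the line GO and the road plane (z-axis).
    alpha = theta + math.atan2(j, f)
    z_star = h_star / math.tan(alpha)            # longitudinal distance
    x_star = i * math.hypot(h_star, z_star) / f  # lateral offset
    return x_star, z_star

def ground_to_image(x_star, z_star, h_star, f, theta):
    """Inverse mapping: project a road point O(x*, h*, z*) to O'(i, j)."""
    alpha = math.atan2(h_star, z_star)
    j = f * math.tan(alpha - theta)
    i = f * x_star / math.hypot(h_star, z_star)
    return i, j
```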
Further, the step S2 includes:
Let γ denote the friction factor generated between the vehicle and the ground in the current environment, with γ_car_1, γ_car_2, γ_car_3, γ_car_4 denoting the friction factors of the four tires of the vehicle; the tires are indexed by n ∈ {1, 2, 3, 4}, so the friction factor of any one tire can be represented by γ_car_n and the normal pressure between any tire and the ground by F_n. γ_road denotes the friction factor of the road, and μ denotes the total rolling friction coefficient between the vehicle's four tires and the road.
The expression for the total rolling friction coefficient of the vehicle and the road is:
Here γ_car_n, γ_road and F_n are acquired in real time by wireless sensors and transmitted to the vehicle's on-board computer for calculation; σ_F represents the variance of the tire-ground normal pressure values F_n, and σ_car represents the variance of the friction factors of the four tires. An illustrative computation is sketched below.
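The expression itself does not survive in this text. As a stand-in, the following Python sketch computes a pressure-weighted estimate of μ from the same inputs the text names (γ_car_n, γ_road, F_n, and the variances σ_F, σ_car); the weighting scheme and the damping constant k are assumptions, not the patent's formula.

```python
import statistics

def rolling_friction_coefficient(gamma_car, gamma_road, F, k=1e-6):
    """Illustrative estimate of the total rolling friction coefficient mu.
    gamma_car: friction factors of the four tires (wireless sensor data);
    gamma_road: road friction factor; F: tire-ground normal pressures."""
    sigma_F = statistics.pvariance(F)            # variance of pressures F_n
    sigma_car = statistics.pvariance(gamma_car)  # variance of tire factors
    # Pressure-weighted mean of the tire friction factors, scaled by the
    # road factor and damped as the sensor readings grow more scattered.
    weighted = sum(g * f for g, f in zip(gamma_car, F)) / sum(F)
    return gamma_road * weighted / (1.0 + k * (sigma_F + sigma_car))

mu = rolling_friction_coefficient(
    gamma_car=[0.012, 0.013, 0.011, 0.012],  # example sensor values
    gamma_road=1.1,
    F=[3900.0, 4100.0, 3950.0, 4050.0])
```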
Further, the step S3 includes:
S31, forming the variable X = [i, j, V, μ] from the transverse coordinate value i and longitudinal coordinate value j of the preceding vehicle in the two-dimensional plane, the real-time speed V of the preceding vehicle relative to the own vehicle, and the rolling friction coefficient μ of the vehicle.
Data normalization preprocessing is performed on X = [i, j, V, μ]:
where t is a parameter and t → ∞.
The normalized data X' = [i', j', V', μ'] is obtained and fed as the input variable to the coordinate-type neural network established by the invention.
S32, the coordinate-type neural network model created by the invention has 5 layers: layer 1 is the data input layer C, with input variable X' = [i', j', V', μ']; layer 2 is the rule selection layer, which selects the processing rule for the input data; layer 3 is the first hidden layer; layer 4 is the data fusion layer; layer 5 is the output layer, where Y_1 outputs the probability of a rear-end collision event.
S321, layer 1: has 4 neurons, i.e., C = 4 with c ∈ {1, 2, 3, 4}; any one neuron can be represented by c.
The input of the input layer is X' = [i', j', V', μ'], and the output equals the input.
S322, layer 2 has M neurons, with m ∈ {1, 2, 3, ..., M}; any one neuron is represented by m.
The rule function is generated as follows:
where u ∈ {1, 2, 3, 4} indexes the dimensions of the input, v ∈ {1, 2, 3, ..., C_u} indexes the C_u precision levels, g_uv represents the center of the rule function, θ_uv represents the width of the rule function, and a_1, a_2 are constants with a_1 < a_2.
The output of layer 2 is
where w_cm and b_cm are the weights and biases from layer 1 to layer 2.
S323, layer 3 is a hidden layer with L neurons; any one neuron can be represented by l.
The output of any neuron in layer 3 is
where w_ml and b_ml are the connection weight and bias between the m-th neuron of layer 2 and the l-th neuron of layer 3, and the excitation function, together with its set of parameters, is as given above.
The output of any one neuron of layer 3 can thus be expressed as:
S324, layer 4 is the data fusion layer with Q neurons, q ∈ {1, 2, 3, ..., Q}; any one neuron can be represented by q.
The data entering the data fusion layer is first normalized (the processing is prior art), and the normalized data is denoted as:
The mean and variance of the normalized data are then computed and denoted ξ_q and σ_q², respectively; the calculation is prior art and is not described here.
The output of layer 4 after data fusion is denoted as:
where the excitation function is as defined above, w_lq and b_lq are the connection weight and bias between the l-th neuron of layer 3 and the q-th neuron of layer 4, respectively, and k is a constant.
The output of layer 4 can be derived from the above as:
S325, layer 5 has 4 neurons, with r ∈ {1, 2, 3, 4}; any one neuron is denoted by r.
Here Y_1 outputs the probability of a rear-end collision, Y_2 the anti-rear-end-collision speed adjustment value, Y_3 the own vehicle's anti-rear-end-collision coordinate adjustment value, and Y_4 the braking capability value. The specific calculation is as follows:
Y_r = f_1(Q_q) × w_qr + b_qr
where w_qr and b_qr are the connection weight and bias between the q-th neuron of layer 4 and the r-th neuron of layer 5, respectively, and t is a parameter;
From the above, the expression can be derived as:
Further, the step S4 includes:
S4, setting thresholds on the outputs of the coordinate-type neural network to issue early warnings and to intelligently regulate and control the vehicle.
Through the coordinate-type neural network, Y_1 is obtained as the probability of a rear-end collision.
An anti-rear-end-collision early-warning threshold τ_1 is set; when Y_1 ≥ τ_1, an early warning is issued to alert the driver, and the vehicle is intelligently adjusted.
The vehicle is then intelligently regulated and controlled according to a preset anti-rear-end-collision safety regulation scheme, specifically as follows: the speed of the vehicle is adjusted according to the set standard speed value, the heading of the vehicle according to the set anti-rear-end-collision direction adjustment value, and the braking capability of the vehicle according to the set braking-capability safety value.
The invention has at least the following beneficial effects:
1. Compared with existing formulas, the formula used by the invention to express the relation between the coordinates of a point on the road surface and the coordinates of the same point on the two-dimensional image plane after imaging is more accurate and finer, so the subsequent detection and calculation of vehicle distance and speed are more accurate, and rear-end collision events are better prevented.
2. The invention uses wireless sensors to collect and transmit the friction factors and pressure values, which avoids data deviations caused by tire wear and variable road surface conditions. The data is collected and fed back in real time, effectively preventing rear-end collisions while the vehicle runs under multiple road conditions and making the regulation of vehicle speed more intelligent and accurate.
3. The excitation function used for data fusion in layer 4 of the invention processes the data using its mean and variance combined with constant parameters, which reduces the computational complexity of the neural network, accelerates its convergence, and effectively prevents vanishing or exploding gradients.
4. Compared with the prior art, the invention regulates and controls the vehicle more intelligently and in all respects, so rear-end collision events are avoided more comprehensively and with higher efficiency.
Drawings
FIG. 1 is a flow chart of the intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions;
FIG. 2 is a diagram of the coordinate-type neural network according to the invention.
Detailed Description
For a clearer description, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to FIG. 1, the invention provides an intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions, comprising the following steps:
S1, monitoring the speed of the preceding vehicle and its relative distance in real time, based on optical principles and using the imaging device of a vehicle recorder, to obtain the target vehicle coordinates and relative speed.
S11, setting up the three-dimensional coordinate system of the camera and the two-dimensional image plane coordinate system formed after shooting:
The position of the camera is taken as the coordinate origin. The x-axis and z-axis of the three-dimensional coordinate system lie in the plane of the road on which the vehicle runs, with the x-axis perpendicular to the direction of travel, the y-axis perpendicular to the road surface, and the z-axis parallel to the direction of travel. The camera lies on an axis of this three-dimensional coordinate system. The optical axis of the camera lies in the coordinate plane formed by the y-axis and z-axis; the angle between the camera and the road plane is θ, and the distance from the camera along the optical axis direction to the road plane is ε, where θ and ε are adjustable variables that can be set according to the actual condition of the vehicle.
Let h* be the installation height of the camera above the ground, a known quantity, and let O(x*, h*, z*) represent the position coordinates of any point on the road surface that the camera can photograph.
A two-dimensional image coordinate system formed after shooting is set for the camera, with the optical center G of the camera as the coordinate origin and with a transverse coordinate axis i* and a longitudinal coordinate axis j*, where the i* axis is parallel to the x-axis and the j* axis is perpendicular to both the i* axis and the optical axis. The coordinates of a point in the two-dimensional image plane formed after imaging are denoted O'(i, j).
The mapping from O(x*, h*, z*) to O'(i, j) can be expressed by the following formula:
In the above formula, d represents the focal length of the camera.
O'(i, j) can also be expressed in terms of O(x*, h*, z*), by the following formula:
Compared with existing formulas, the formula used by the invention to express the relation between the coordinates of a point on the road surface and the coordinates of the same point on the two-dimensional image plane after imaging is more accurate and finer, so the subsequent detection and calculation of vehicle distance and speed are more accurate, and rear-end collision events are better prevented.
S12, estimating in real time, from the image captured by the camera, the distance between the own vehicle and the preceding vehicle and the speed of the preceding vehicle relative to the own vehicle.
Treating the preceding vehicle as a point, its position on the road surface can be represented by O(x*, h*, z*). The center point of the bottom edge of the vehicle's silhouette in the image is calibrated and marked as the point representing the position of the preceding vehicle, so the two-dimensional image plane coordinates of this point after shooting can likewise be represented by O'(i, j).
The distance between the own vehicle and the preceding vehicle is denoted d; from the imaging relation of the camera we obtain:
α represents the acute angle between the straight line GO and the road surface, i.e., the z-axis.
The distance d to the preceding vehicle and the corresponding speed V of the preceding vehicle relative to the own vehicle can also be measured in real time by laser ranging, which is prior art and is not described here.
From the above, the following can then be obtained:
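The relations referenced above do not survive in this text. Under the flat-road assumption already used in S11, the angle α and the relative speed V could be recovered as in the following sketch; the relation tan α = h*/d and the finite-difference speed estimate are illustrative assumptions.

```python
import math

def alpha_from_distance(d, h_star):
    """Acute angle between the line GO and the road plane, assuming the
    preceding vehicle's ground point lies at ground distance d from a
    camera mounted at height h* (flat-road assumption)."""
    return math.atan2(h_star, d)

def relative_speed(d_prev, d_curr, dt):
    """Speed of the preceding vehicle relative to the own vehicle from two
    successive range readings; positive means the gap is closing.
    A minimal sketch -- a real system would filter the laser readings."""
    return (d_prev - d_curr) / dt

alpha = alpha_from_distance(d=25.0, h_star=1.3)            # example readings
V = relative_speed(d_prev=25.6, d_curr=25.0, dt=0.1)
```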
S13, calculating the transverse and longitudinal distances between the own vehicle and the preceding vehicle, i.e., the values of x* and z*, from the calculated α, giving the following formulas:
From the above formulas, the two-dimensional image plane coordinates O'(i, j) of the preceding vehicle captured by the vehicle recorder can be obtained and expressed as follows:
The transverse coordinate value i and longitudinal coordinate value j of the preceding vehicle in the two-dimensional plane, together with the real-time speed V of the preceding vehicle relative to the own vehicle, are taken as a group of parameters and used to construct the neural network model in step S3.
S2, calculating the friction coefficient of the vehicle under the environmental parameters.
Let γ denote the friction factor generated between the vehicle and the ground in the current environment, with γ_car_1, γ_car_2, γ_car_3, γ_car_4 denoting the friction factors of the four tires of the vehicle; the tires are indexed by n ∈ {1, 2, 3, 4}, so the friction factor of any one tire can be represented by γ_car_n and the normal pressure between any tire and the ground by F_n. γ_road denotes the friction factor of the road, and μ denotes the total rolling friction coefficient between the vehicle's four tires and the road.
The expression for the total rolling friction coefficient of the vehicle and the road is:
Here γ_car_n, γ_road and F_n are acquired in real time by wireless sensors and transmitted to the vehicle's on-board computer for calculation; σ_F represents the variance of the tire-ground normal pressure values F_n, and σ_car represents the variance of the friction factors of the four tires.
The invention uses wireless sensors to collect and transmit the friction factors and pressure values, which avoids data deviations caused by tire wear and variable road surface conditions. The data is collected and fed back in real time, effectively preventing rear-end collisions while the vehicle runs under multiple road conditions and making the regulation of vehicle speed more intelligent and accurate.
S3, establishing a coordinate-type neural network model: the transverse coordinate value i and longitudinal coordinate value j of the preceding vehicle in the two-dimensional plane, the real-time speed V of the preceding vehicle relative to the own vehicle, and the rolling friction coefficient μ of the vehicle are taken as inputs, and the probability of a rear-end collision event is output.
Referring to FIG. 2, in S31 the transverse coordinate value i and longitudinal coordinate value j of the preceding vehicle in the two-dimensional plane, the real-time speed V of the preceding vehicle relative to the own vehicle, and the rolling friction coefficient μ of the vehicle are obtained from steps S1 and S2, forming the variable X = [i, j, V, μ].
Data normalization preprocessing is performed on X = [i, j, V, μ]:
where t is a parameter and t → ∞.
Compared with the prior art, the data standardization adopted by the invention abandons the drawbacks of using the mean and variance of the data; instead, the data is standardized using a limit-based processing scheme, which makes the calculation simpler and more convenient.
The normalized data X' = [i', j', V', μ'] is obtained and fed as the input variable to the coordinate-type neural network established by the invention, as sketched below.
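The normalization formula is not reproduced above. A common limit-based scheme that matches the description (a parameter t with t → ∞ and no use of mean or variance) is t-norm scaling, which tends to max-normalization in the limit; the following sketch assumes this form rather than reproducing the patent's exact formula.

```python
def limit_normalize(X, t=64):
    """Scale X = [i, j, V, mu] by its t-norm; as t -> infinity this tends
    to dividing by max(|x|), a 'limit' scheme needing no mean or variance.
    The choice of this form is an assumption based on the description."""
    m = max(abs(x) for x in X) or 1.0  # guard against an all-zero input
    norm = m * sum((abs(x) / m) ** t for x in X) ** (1.0 / t)
    return [x / norm for x in X]

X_prime = limit_normalize([120.0, 85.0, 12.5, 0.015])  # example X
```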
S32, the coordinate-type neural network model created by the invention has 5 layers: layer 1 is the data input layer C, with input variable X' = [i', j', V', μ']; layer 2 is the rule selection layer, which selects the processing rule for the input data; layer 3 is the first hidden layer; layer 4 is the data fusion layer; layer 5 is the output layer, where Y_1 outputs the probability of a rear-end collision event.
S321, layer 1: has 4 neurons, i.e., C = 4 with c ∈ {1, 2, 3, 4}; any one neuron can be represented by c.
The input of the input layer is X' = [i', j', V', μ'], and the output equals the input.
S322, layer 2 has M neurons, with m ∈ {1, 2, 3, ..., M}; any one neuron is represented by m.
The rule function is generated as follows:
where u ∈ {1, 2, 3, 4} indexes the dimensions of the input, v ∈ {1, 2, 3, ..., C_u} indexes the C_u precision levels, g_uv represents the center of the rule function, θ_uv represents the width of the rule function, and a_1, a_2 are constants with a_1 < a_2.
The output of layer 2 is
where w_cm and b_cm are the weights and biases from layer 1 to layer 2.
The rule function adopted in constructing the coordinate-type neural network processes the input data accurately and efficiently, thereby improving the convergence rate of the network. A sketch of such a rule layer follows.
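The rule function's closed form is not reproduced above. The sketch below assumes a Gaussian-style membership function built from the symbols the text defines (center g_uv, width θ_uv, output clipped into the constant band [a_1, a_2]); the Gaussian choice and all numeric values are assumptions.

```python
import numpy as np

def rule_layer(x_prime, g, theta, a1=0.05, a2=0.95):
    """Layer-2 rule selection: for input dimension u and precision level v,
    evaluate a Gaussian-style rule with center g[u, v] and width theta[u, v],
    clipped into [a1, a2].  Returns one activation per rule (M = 4 * C_u)."""
    x = np.asarray(x_prime)[:, None]             # shape (4, 1)
    act = np.exp(-((x - g) ** 2) / theta ** 2)   # shape (4, C_u)
    return np.clip(act, a1, a2).ravel()          # flattened to M rules

# Assumed layout: C_u = 5 precision levels per input dimension.
g = np.tile(np.linspace(0.0, 1.0, 5), (4, 1))
theta = np.full((4, 5), 0.25)
m_out = rule_layer([0.9, 0.4, 0.6, 0.1], g, theta)  # M = 20 activations
```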
S323, layer 3 is a hidden layer with L neurons; any one neuron can be represented by l.
The output of any neuron in layer 3 is
where w_ml and b_ml are the connection weight and bias between the m-th neuron of layer 2 and the l-th neuron of layer 3, and the excitation function, together with its set of parameters, is as given above.
The output of any one neuron of layer 3 can thus be expressed as:
The excitation function used in layer 3 of the invention makes the calculation simpler and more convenient, and more effectively prevents over-convergence of the neural network.
S324, layer 4 is the data fusion layer with Q neurons, q ∈ {1, 2, 3, ..., Q}; any one neuron can be represented by q.
The data entering the data fusion layer is first normalized (the processing is prior art), and the normalized data is denoted as:
The mean and variance of the normalized data are then computed and denoted ξ_q and σ_q², respectively; the calculation is prior art and is not described here.
The output of layer 4 after data fusion is denoted as:
where the excitation function is as defined above, w_lq and b_lq are the connection weight and bias between the l-th neuron of layer 3 and the q-th neuron of layer 4, respectively, and k is a constant.
The output of layer 4 can be derived from the above as:
The excitation function used for data fusion in layer 4 of the invention processes the data using its mean and variance combined with constant parameters, which reduces the computational complexity of the neural network, accelerates its convergence, and effectively prevents vanishing or exploding gradients. A sketch of this fusion step follows.
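Since the fusion formula is not reproduced above, the sketch below assumes a batch-normalization-style form consistent with the description: the layer-3 outputs are centered by their mean ξ_q, scaled by their variance plus the constant k, and then affinely mapped by w_lq and b_lq. The exact excitation function remains an assumption.

```python
import numpy as np

def fusion_layer(l_out, w_lq, b_lq, k=1e-3):
    """Layer-4 data fusion over the layer-3 outputs l_out (shape (L,)).
    Normalizing by mean and variance keeps activations well-scaled, and
    the constant k keeps the denominator away from zero, which is what
    tempers vanishing/exploding gradients in this sketch."""
    xi = l_out.mean()                      # mean, xi_q in the text
    var = l_out.var()                      # variance of the layer-3 outputs
    l_hat = (l_out - xi) / np.sqrt(var + k)
    return l_hat @ w_lq + b_lq             # shape (Q,)
```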
S325, layer 5 has 4 neurons, with r ∈ {1, 2, 3, 4}; any one neuron is denoted by r.
Here Y_1 outputs the probability of a rear-end collision, Y_2 the anti-rear-end-collision speed adjustment value, Y_3 the own vehicle's anti-rear-end-collision coordinate adjustment value, and Y_4 the braking capability value. The specific calculation is as follows:
Y_r = f_1(Q_q) × w_qr + b_qr
where w_qr and b_qr are the connection weight and bias between the q-th neuron of layer 4 and the r-th neuron of layer 5, respectively, and t is a parameter.
From the above, the expression can be derived as follows; an end-to-end sketch of the whole network is also given below:
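Putting steps S321 to S325 together, the following end-to-end sketch runs the whole coordinate-type network on one normalized input. Every layer form the text leaves unspecified (Gaussian rules, tanh as the hidden excitation and as f_1, the batch-norm-style fusion, and a sigmoid so that Y_1 reads as a probability) is an assumption, as are all dimensions and the random parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, Q = 20, 16, 8  # assumed layer sizes (M = 4 dims x 5 precision levels)

params = {
    "g": np.tile(np.linspace(0, 1, 5), (4, 1)), "theta": np.full((4, 5), 0.25),
    "w_ml": rng.normal(0, 0.3, (M, L)), "b_ml": np.zeros(L),
    "w_lq": rng.normal(0, 0.3, (L, Q)), "b_lq": np.zeros(Q),
    "w_qr": rng.normal(0, 0.3, (Q, 4)), "b_qr": np.zeros(4),
}

def forward(x_prime, p):
    """Input (4) -> rule layer (M) -> hidden layer (L) -> fusion layer (Q)
    -> outputs [Y1 collision probability, Y2 speed adjustment,
    Y3 coordinate adjustment, Y4 braking value]."""
    c = np.asarray(x_prime)                                   # layer 1
    m = np.exp(-((c[:, None] - p["g"]) ** 2) / p["theta"] ** 2).ravel()
    m = np.clip(m, 0.05, 0.95)                                # layer 2
    l = np.tanh(m @ p["w_ml"] + p["b_ml"])                    # layer 3
    l_hat = (l - l.mean()) / np.sqrt(l.var() + 1e-3)          # layer 4
    q = l_hat @ p["w_lq"] + p["b_lq"]
    y = np.tanh(q) @ p["w_qr"] + p["b_qr"]                    # layer 5: f_1(Q_q)
    y[0] = 1.0 / (1.0 + np.exp(-y[0]))                        # Y1 in (0, 1)
    return y

Y = forward([0.9, 0.4, 0.6, 0.1], params)
```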
S4, setting thresholds on the outputs of the coordinate-type neural network to issue early warnings and to intelligently regulate and control the vehicle.
Through the coordinate-type neural network established by the invention, Y_1 is obtained as the probability of a rear-end collision.
An anti-rear-end-collision early-warning threshold τ_1 is set; when Y_1 ≥ τ_1, an early warning is issued to alert the driver, and the vehicle is intelligently adjusted.
The vehicle is then intelligently regulated and controlled according to a preset anti-rear-end-collision safety regulation scheme, specifically as follows: the speed of the vehicle is adjusted according to the set standard speed value, the heading of the vehicle according to the set anti-rear-end-collision direction adjustment value, and the braking capability of the vehicle according to the set braking-capability safety value. A sketch of this threshold logic follows.
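A minimal sketch of the S4 threshold logic, assuming example values for the warning threshold τ_1 and the standard speed; how the three adjustment commands are actuated is left to the vehicle platform and is not specified in the text.

```python
def regulate(Y, tau1=0.5, standard_speed=60.0):
    """Early warning and regulation on the network outputs Y = (Y1..Y4).
    tau1 and standard_speed are assumed example settings."""
    Y1, Y2, Y3, Y4 = Y
    if Y1 < tau1:
        return None                      # risk below threshold: no action
    print(f"EARLY WARNING: rear-end risk {Y1:.2f} >= threshold {tau1}")
    return {
        "target_speed": standard_speed + Y2,  # speed adjustment value
        "heading_adjust": Y3,                 # direction adjustment value
        "braking_level": Y4,                  # braking-capability safety value
    }
```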
Compared with the prior art, the invention regulates and controls the vehicle more intelligently and in all respects, so rear-end collision events are avoided more comprehensively and with higher efficiency.
In conclusion, the intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions is thus realized.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.