
CN114822036B - Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions - Google Patents


Info

Publication number
CN114822036B
CN114822036B (application CN202210531874.4A)
Authority
CN
China
Prior art keywords
vehicle
layer
end collision
coordinate
axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210531874.4A
Other languages
Chinese (zh)
Other versions
CN114822036A (en)
Inventor
文强
江晓
王聿隽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong All Things Machinery Technology Co ltd
Original Assignee
Shandong All Things Machinery Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong All Things Machinery Technology Co ltd filed Critical Shandong All Things Machinery Technology Co ltd
Priority to CN202210531874.4A priority Critical patent/CN114822036B/en
Publication of CN114822036A publication Critical patent/CN114822036A/en
Application granted granted Critical
Publication of CN114822036B publication Critical patent/CN114822036B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G08 — SIGNALLING
    • G08G — TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 — Traffic control systems for road vehicles
    • G08G 1/01 — Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 — Measuring and analysing of parameters relative to traffic conditions
    • G08G 1/0125 — Traffic data processing
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions. Based on optical principles, the camera of the dashboard recorder monitors the speed of and relative distance to the front vehicle in real time to obtain the target vehicle's coordinates and relative speed; the friction coefficient of the vehicle under the current environmental parameters is calculated; a coordinate-type neural network model is established, taking as input the front vehicle's coordinates, its relative speed and the rolling friction coefficient, and outputting the probability of a rear-end collision, a rear-end-collision-prevention speed adjustment value, an own-vehicle rear-end-collision-prevention adjustment coordinate value and a braking capability value; thresholds are then set on the outputs of the coordinate-type neural network to issue early warnings and intelligently regulate the vehicle. By intelligently regulating vehicle speed, the invention prevents rear-end collisions in advance under various road conditions and greatly reduces their occurrence rate.

Description

Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions
Technical Field
The invention belongs to the technical field of intelligent driving, and particularly relates to an intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions.
Background
With rising living standards, driving has become the travel mode chosen by most people, but it has also brought frequent traffic incidents, among which rear-end collisions account for a high proportion.
To handle rear-end incidents, dashboard recorders, road-condition monitoring and the like have become essential devices for traffic management and control. In the prior art, rear-end collision prevention depends too heavily on calibration against road boundaries or other large vehicles, which greatly limits the road types it can handle; an intelligent regulation and control method for preventing rear-end collisions under multiple road conditions is therefore needed.
Disclosure of Invention
The invention provides an intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions, which aims to prevent rear-end collision events in advance by intelligently regulating the speed of the vehicle, greatly reducing their occurrence rate.
The intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions comprises the following steps:
S1, monitoring the speed of and relative distance to the front vehicle in real time, based on optical principles, using the camera of the dashboard recorder, to obtain the target vehicle's coordinates and relative speed;
S2, calculating the friction coefficient of the vehicle under the current environmental parameters;
S3, establishing a coordinate-type neural network model: inputting the transverse coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional image plane, the real-time speed V of the front vehicle relative to the own vehicle, and the rolling friction coefficient μ of the vehicle, and outputting the probability of a rear-end collision event;
S4, setting thresholds on the outputs of the coordinate-type neural network to issue early warnings and intelligently regulate the vehicle.
Further, the step S1 includes:
S11, set up the three-dimensional coordinate system of the camera and the two-dimensional image-plane coordinate system formed after shooting:
The position of the camera is taken as the coordinate origin. The x-axis and z-axis of the three-dimensional coordinate system lie in the plane of the road on which the vehicle travels, with the x-axis perpendicular to the direction of travel, the y-axis perpendicular to the road surface, and the z-axis parallel to the direction of travel. The camera itself lies on an axis of this three-dimensional coordinate system. The optical axis of the camera lies in the coordinate plane formed by the y-axis and the z-axis; the angle between the optical axis and the road plane is θ, and the distance from the camera along the optical axis to the road plane is ε. Both θ and ε are adjustable variables that can be set according to the actual condition of the vehicle.
Let h be the known mounting height of the camera above the ground, and let O(x*, h*, z*) denote the position coordinates of any point on the road surface visible to the camera.
A two-dimensional image coordinate system is set for the image formed after shooting, with the optical center G of the camera as the origin and with transverse axis i* and longitudinal axis j*. The i* axis is parallel to the x-axis, and the j* axis is perpendicular to both the i* axis and the optical axis. The coordinates of a point in the image plane formed after imaging are denoted O′(i, j).
The mapping from O(x*, h*, z*) to O′(i, j) can be expressed by the following formula:
where d in the above formula denotes the focal length of the camera.
O′(i, j) can also be expressed in terms of O(x*, h*, z*) as follows:
S12, estimate in real time, from the image captured by the camera, the distance between the own vehicle and the front vehicle and the speed of the front vehicle relative to the own vehicle:
Treating the front vehicle as a point, its position on the road surface can be represented by O(x*, h*, z*). The center point of the bottom edge of the vehicle's silhouette in the image is calibrated and marked as the point O′, representing the position of the front vehicle; the two-dimensional image-plane coordinates of this point after shooting can then likewise be written O′(i, j).
Denoting the distance between the own vehicle and the front vehicle as d, the imaging relation of the camera gives:
where α denotes the acute angle between the straight line GO and the road surface, i.e. the z-axis.
The distance d to the front vehicle and the corresponding speed V of the front vehicle relative to the own vehicle can also be measured in real time by laser ranging.
From the above we can obtain:
S13, from the computed α, calculate the transverse and longitudinal distances between the own vehicle and the front vehicle, i.e. the values of x* and z*, obtaining:
From these formulas, the two-dimensional image coordinates O′(i, j) of the front vehicle captured by the dashboard recorder can be obtained and expressed as:
The transverse coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional plane, together with the real-time speed V of the front vehicle relative to the own vehicle, are taken as one group of parameters for the construction of the neural network model in step S3.
Further, the step S2 includes:
the magnitude of the friction factor generated by the vehicle in the environment and the ground is denoted by γ, γ car 1、γcar 2、γcar 3、γcar 4 denotes the friction factors of four tires of the vehicle, n= {1,2,3,4} denotes the tire, and any tire is denoted by n. The friction factor of any one tire can be represented by gamma car n, and the forward pressure value of any tire and the ground can be represented by F n. Let e= {1,2,3,4}, F e. Gamma road denotes the friction factor of the road, and μ denotes the total rolling friction coefficient of the four tires of the vehicle and the road.
The expression for the total rolling friction coefficient of the vehicle and the road is:
Wherein gamma car n、γroad、Fn is acquired in real time by a wireless sensor and transmitted to a computer of the vehicle for calculation, sigma F represents the variance of the forward pressure value F n of the tire and the ground, and sigma car represents the variance of the friction factors of four tires of the vehicle.
Further, the step S3 includes:
S31, from the transverse coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional plane, the real-time speed V of the front vehicle relative to the own vehicle, and the rolling friction coefficient μ of the vehicle, form the variable X = [i, j, V, μ].
Data normalization preprocessing is performed on X = [i, j, V, μ]:
where t is a parameter and t → ∞.
The normalized data X′ = [i′, j′, V′, μ′] is obtained and fed as the input variable to the coordinate-type neural network established by the invention.
S32, the coordinate-type neural network model created by the invention has 5 layers: layer 1 is the data input layer C, with input variable X′ = [i′, j′, V′, μ′]; layer 2 is the rule-selection layer, which selects the processing rule applied to the input data; layer 3 is the first hidden layer; layer 4 is the data-fusion layer; layer 5 is the output layer, in which Y_1 outputs the probability that a rear-end collision event occurs.
S321, layer 1: has 4 neurons, i.e. C = 4, c = {1, 2, 3, 4}, any one neuron being denoted c.
The input of the input layer is X′ = [i′, j′, V′, μ′], and its output equals its input.
S322, layer 2 has M neurons, m = {1, 2, 3, …, M}, any one neuron being denoted m.
The generating rule function is as follows:
where u = {1, 2, 3, 4} indexes the dimensions of the input, v = {1, 2, 3, …, C_u} indexes the C_u precision levels, g_uv denotes the center of the rule function, θ_uv denotes the width of the rule function, and a_1, a_2 are constants with a_1 < a_2.
The output of layer 2 is:
where w_cm and b_cm are the weights and biases from layer 1 to layer 2.
S323, layer 3 is a hidden layer with L neurons, any one of which is denoted l.
The output of any neuron in layer 3 is:
where w_ml and b_ml are the connection weight and bias between the m-th neuron of layer 2 and the l-th neuron of layer 3, and the excitation function of layer 3 is parameterized by a set of constants.
The output of any one neuron of layer 3 can therefore be expressed as:
S324, layer 4 is the data-fusion layer with Q neurons, q = {1, 2, 3, …, Q}, any one neuron being denoted q.
The data input to the data-fusion layer is normalized; the processing method is prior art. The mean and variance of the normalized data are computed and denoted ξ_q and σ_q respectively; the calculation method is prior art and is not repeated here.
The fused output of layer 4 is written as:
where w_lq and b_lq are respectively the connection weight and bias between the l-th neuron of layer 3 and the q-th neuron of layer 4, the excitation function combines these statistics, and k is a constant.
From the above, the output of layer 4 can be obtained as:
S325, layer 5 has 4 neurons, r = {1, 2, 3, 4}, any one neuron being denoted r.
Y_1 outputs the probability of a rear-end collision, Y_2 outputs the rear-end-collision-prevention speed adjustment value, Y_3 the own-vehicle rear-end-collision-prevention adjustment coordinate value, and Y_4 the braking capability value. The specific calculation is:
Y_r = f_1(Q_q) × w_qr + b_qr
where w_qr and b_qr are respectively the connection weight and bias between the q-th neuron of layer 4 and the r-th neuron of layer 5, and t is a parameter;
from the above expression we can obtain:
further, the step S4 includes:
S4, thresholds are set on the outputs of the coordinate-type neural network to issue early warnings and intelligently regulate the vehicle.
Through the coordinate-type neural network, Y_1 is obtained as the probability of a rear-end collision.
A rear-end-collision early-warning threshold τ_1 is set; when Y_1 ≥ τ_1, an early warning is issued to prompt the driver so that the vehicle can be intelligently adjusted.
The vehicle is then intelligently regulated according to a preset rear-end-collision-prevention safety adjustment scheme, as follows:
the speed of the vehicle is adjusted to the set standard speed value, the heading of the vehicle is adjusted by the set rear-end-collision-prevention direction adjustment value, and the braking capability of the vehicle is adjusted to the set braking-capability safety value.
The invention has at least the following beneficial effects:
1. Compared with existing formulas, the formula used by the invention to relate the coordinates of a point on the road surface to the coordinates of the same point in the two-dimensional image plane after imaging is more accurate and finer, so the subsequent detection and calculation of vehicle distance and speed are more accurate and rear-end collision events are better prevented.
2. The invention uses wireless sensors to collect and transmit the friction factors and pressure values, accommodating the data drift caused by tire wear and changing road-surface conditions; the data is collected and fed back in real time, which effectively prevents rear-end collisions while driving over multiple road conditions and makes the regulation of vehicle speed more intelligent and accurate.
3. The data-fusion excitation function in layer 4 of the invention uses the mean and variance of the data, combined with constant parameters, to compute the output, which reduces the complexity of the neural-network computation, accelerates network convergence, and effectively prevents vanishing or exploding gradients.
4. Compared with the prior art, the invention regulates the vehicle more intelligently and in all respects, so rear-end collision events are avoided comprehensively and the method is more efficient in use.
Drawings
FIG. 1 is a flow chart of intelligent regulation and control of a vehicle for preventing rear-end collision under multiple conditions;
fig. 2 is a diagram of a coordinate-type neural network according to the present invention.
Detailed Description
For a clearer description of the invention, reference will be made to the following detailed description of the invention taken in conjunction with the accompanying drawings and specific embodiments.
Referring to fig. 1, the invention provides an intelligent regulation and control method for a vehicle for preventing rear-end collision under multiple conditions, which comprises the following steps:
S1, monitoring the speed and the relative distance of a front vehicle in real time based on an optical principle by utilizing an imaging device of a vehicle recorder, and obtaining the coordinates and the relative speed of a target vehicle.
S11, set up the three-dimensional coordinate system of the camera and the two-dimensional image-plane coordinate system formed after shooting:
The position of the camera is taken as the coordinate origin. The x-axis and z-axis of the three-dimensional coordinate system lie in the plane of the road on which the vehicle travels, with the x-axis perpendicular to the direction of travel, the y-axis perpendicular to the road surface, and the z-axis parallel to the direction of travel. The camera itself lies on an axis of this three-dimensional coordinate system. The optical axis of the camera lies in the coordinate plane formed by the y-axis and the z-axis; the angle between the optical axis and the road plane is θ, and the distance from the camera along the optical axis to the road plane is ε. Both θ and ε are adjustable variables that can be set according to the actual condition of the vehicle.
Let h be the known mounting height of the camera above the ground, and let O(x*, h*, z*) denote the position coordinates of any point on the road surface visible to the camera.
A two-dimensional image coordinate system is set for the image formed after shooting, with the optical center G of the camera as the origin and with transverse axis i* and longitudinal axis j*. The i* axis is parallel to the x-axis, and the j* axis is perpendicular to both the i* axis and the optical axis. The coordinates of a point in the image plane formed after imaging are denoted O′(i, j).
The mapping from O(x*, h*, z*) to O′(i, j) can be expressed by the following formula:
where d in the above formula denotes the focal length of the camera.
O′(i, j) can also be expressed in terms of O(x*, h*, z*) as follows:
Compared with existing formulas, the formula used by the invention to relate the coordinates of a point on the road surface to the coordinates of the same point in the two-dimensional image plane after imaging is more accurate and finer, so the subsequent detection and calculation of vehicle distance and speed are more accurate and rear-end collision events are better prevented.
S12, estimate in real time, from the image captured by the camera, the distance between the own vehicle and the front vehicle and the speed of the front vehicle relative to the own vehicle.
Treating the front vehicle as a point, its position on the road surface can be represented by O(x*, h*, z*). The center point of the bottom edge of the vehicle's silhouette in the image is calibrated and marked as the point O′, representing the position of the front vehicle; the two-dimensional image-plane coordinates of this point after shooting can then likewise be written O′(i, j).
Denoting the distance between the own vehicle and the front vehicle as d, the imaging relation of the camera gives:
where α denotes the acute angle between the straight line GO and the road surface, i.e. the z-axis.
The distance d to the front vehicle and the corresponding speed V of the front vehicle relative to the own vehicle can also be measured in real time by laser ranging, which is prior art and is not described further here.
From the above we can obtain:
S13, from the computed α, calculate the transverse and longitudinal distances between the own vehicle and the front vehicle, i.e. the values of x* and z*, obtaining:
From these formulas, the two-dimensional image coordinates O′(i, j) of the front vehicle captured by the dashboard recorder can be obtained and expressed as:
The transverse coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional plane, together with the real-time speed V of the front vehicle relative to the own vehicle, are taken as one group of parameters for the construction of the neural network model in step S3.
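The projection and back-projection formulas referenced above are not reproduced in this text. The following sketch shows a standard tilted-pinhole ground-plane geometry consistent with the coordinate setup described (camera height h above the road, optical axis pitched down by θ, focal length d); the exact expressions are an assumption, not the patent's own formulas.

```python
import math

# Assumed tilted-pinhole geometry (not the patent's exact formulas):
# camera at height h above the road, optical axis pitched down by theta,
# focal length d; image axes i* (right) and j* (down); road plane at y = -h.

def project_to_image(x_star, z_star, h, theta, d):
    """Project a road-surface point O(x*, -h, z*) to image coordinates O'(i, j)."""
    depth = z_star * math.cos(theta) + h * math.sin(theta)  # distance along the optical axis
    down = h * math.cos(theta) - z_star * math.sin(theta)   # component along the image-down axis
    i = d * x_star / depth
    j = d * down / depth
    return i, j

def back_project_to_road(i, j, h, theta, d):
    """Recover the road-surface point (x*, z*) from image coordinates (i, j)."""
    denom = j * math.cos(theta) + d * math.sin(theta)  # positive for points on the road
    x_star = h * i / denom
    z_star = h * (d * math.cos(theta) - j * math.sin(theta)) / denom
    return x_star, z_star
```

The back-projection inverts the projection exactly for any point on the road plane, which is what allows the transverse and longitudinal distances x* and z* in step S13 to be recovered from the image coordinates alone.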
S2, calculating the friction coefficient of the vehicle under the environmental parameters.
Let γ denote the friction factor between the vehicle and the ground in the current environment. γ_car1, γ_car2, γ_car3, γ_car4 denote the friction factors of the four tires of the vehicle; n = {1, 2, 3, 4} indexes the tires, so the friction factor of any one tire is denoted γ_carn and the normal pressure between that tire and the ground is denoted F_n. γ_road denotes the friction factor of the road, and μ denotes the total rolling friction coefficient between the four tires of the vehicle and the road.
The expression for the total rolling friction coefficient of the vehicle and the road is:
where γ_carn, γ_road and F_n are acquired in real time by wireless sensors and transmitted to the vehicle's on-board computer for calculation, σ_F denotes the variance of the tire–ground normal pressure values F_n, and σ_car denotes the variance of the friction factors of the four tires.
The invention uses wireless sensors to collect and transmit the friction factors and pressure values, accommodating the data drift caused by tire wear and changing road-surface conditions; the data is collected and fed back in real time, which effectively prevents rear-end collisions while driving over multiple road conditions and makes the regulation of vehicle speed more intelligent and accurate.
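The expression for μ itself is not reproduced above. As an illustrative stand-in (not the patent's formula), a load-weighted combination of the per-tire friction factors with the road factor can be computed from the same sensor inputs, alongside the variance statistics σ_F and σ_car that the text names:

```python
from statistics import pvariance

def rolling_friction_coefficient(gamma_car, F, gamma_road):
    """Illustrative stand-in for mu: load-weighted mean of the four tire
    friction factors, scaled by the road friction factor.
    gamma_car: friction factors of the four tires; F: normal pressures."""
    weighted = sum(g * f for g, f in zip(gamma_car, F)) / sum(F)
    mu = gamma_road * weighted
    # Variance statistics named in the text, available for a refined formula.
    sigma_F = pvariance(F)
    sigma_car = pvariance(gamma_car)
    return mu, sigma_F, sigma_car
```

With identical tires under equal load, the stand-in reduces to the product of the common tire factor and the road factor, and both variances vanish.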
S3, establishing the coordinate-type neural network model: the transverse coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional plane, the real-time speed V of the front vehicle relative to the own vehicle, and the rolling friction coefficient μ of the vehicle are input, and the probability of a rear-end collision event is output.
Referring to fig. 2, S31 obtains from steps S1 and S2 the transverse coordinate i and longitudinal coordinate j of the front vehicle in the two-dimensional plane, the real-time speed V of the front vehicle relative to the own vehicle, and the rolling friction coefficient μ of the vehicle, forming the variable X = [i, j, V, μ].
Data normalization preprocessing is performed on X = [i, j, V, μ]:
where t is a parameter and t → ∞.
Compared with the prior art, the data standardization adopted by the invention discards the drawbacks of using the mean and variance of the data; the data is standardized by a limit-based processing method, making the calculation simpler and more convenient.
The normalized data X′ = [i′, j′, V′, μ′] is obtained and fed as the input variable to the coordinate-type neural network established by the invention.
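The limit-based normalization formula (with parameter t → ∞) is not reproduced above. A minimal sketch in the same spirit, bounding each component without using the sample mean or variance, might look as follows (the exact form is an assumption):

```python
def normalize(X):
    """Mean/variance-free normalization sketch (assumed form; the patent's
    exact limit-based formula is not reproduced): squash each component
    of X = [i, j, V, mu] into the open interval (-1, 1)."""
    return [x / (1.0 + abs(x)) for x in X]
```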
S32, the coordinate-type neural network model created by the invention has 5 layers: layer 1 is the data input layer C, with input variable X′ = [i′, j′, V′, μ′]; layer 2 is the rule-selection layer, which selects the processing rule applied to the input data; layer 3 is the first hidden layer; layer 4 is the data-fusion layer; layer 5 is the output layer, in which Y_1 outputs the probability that a rear-end collision event occurs.
S321, layer 1: has 4 neurons, i.e. C = 4, c = {1, 2, 3, 4}, any one neuron being denoted c.
The input of the input layer is X′ = [i′, j′, V′, μ′], and its output equals its input.
S322, layer 2 has M neurons, m = {1, 2, 3, …, M}, any one neuron being denoted m.
The generating rule function is as follows:
where u = {1, 2, 3, 4} indexes the dimensions of the input, v = {1, 2, 3, …, C_u} indexes the C_u precision levels, g_uv denotes the center of the rule function, θ_uv denotes the width of the rule function, and a_1, a_2 are constants with a_1 < a_2.
The output of layer 2 is:
where w_cm and b_cm are the weights and biases from layer 1 to layer 2.
The rule function adopted in the construction of the coordinate-type neural network processes the input data efficiently and accurately, which improves the convergence rate of the neural network.
S323, layer 3 is a hidden layer with L neurons, any one of which is denoted l.
The output of any neuron in layer 3 is:
where w_ml and b_ml are the connection weight and bias between the m-th neuron of layer 2 and the l-th neuron of layer 3, and the excitation function of layer 3 is parameterized by a set of constants.
The output of any one neuron of layer 3 can therefore be expressed as:
The excitation function used in layer 3 of the invention makes the calculation simpler and more convenient, and more effectively prevents the problem of excessive convergence of the neural network.
S324, layer 4 is the data-fusion layer with Q neurons, q = {1, 2, 3, …, Q}, any one neuron being denoted q.
The data input to the data-fusion layer is normalized; the processing method is prior art. The mean and variance of the normalized data are computed and denoted ξ_q and σ_q respectively; the calculation method is prior art and is not repeated here.
The fused output of layer 4 is written as:
where w_lq and b_lq are respectively the connection weight and bias between the l-th neuron of layer 3 and the q-th neuron of layer 4, the excitation function combines these statistics, and k is a constant.
From the above, the output of layer 4 can be obtained as:
The data-fusion excitation function in layer 4 of the invention uses the mean and variance of the data, combined with constant parameters, to compute the output, which reduces the complexity of the neural-network computation, accelerates network convergence, and effectively prevents vanishing or exploding gradients.
S325, layer 5 has 4 neurons, r = {1, 2, 3, 4}, any one neuron being denoted r.
Y_1 outputs the probability of a rear-end collision, Y_2 outputs the rear-end-collision-prevention speed adjustment value, Y_3 the own-vehicle rear-end-collision-prevention adjustment coordinate value, and Y_4 the braking capability value. The specific calculation is:
Y_r = f_1(Q_q) × w_qr + b_qr
where w_qr and b_qr are respectively the connection weight and bias between the q-th neuron of layer 4 and the r-th neuron of layer 5, and t is a parameter.
From the above expression we can obtain:
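The layer-by-layer description can be assembled into a single forward pass. The sketch below follows the described 5-layer structure (input, rule selection, hidden, data fusion, output), but the Gaussian rule functions, tanh and sigmoid excitations, the statistics-based fusion activation, and all weight values are placeholder assumptions, since the patent's formulas and trained parameters are not reproduced here:

```python
import math
import random

random.seed(0)

M, L, Q, R = 8, 6, 5, 4  # assumed layer widths: rule, hidden, fusion, output

# Placeholder parameters (the patent's trained weights are not given).
g = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(M)]     # rule centers g_uv
th = [[random.uniform(0.5, 1.5) for _ in range(4)] for _ in range(M)] # rule widths theta_uv
W2 = [[random.uniform(-1, 1) for _ in range(M)] for _ in range(L)]
W3 = [[random.uniform(-1, 1) for _ in range(L)] for _ in range(Q)]
W4 = [[random.uniform(-1, 1) for _ in range(Q)] for _ in range(R)]

def forward(Xp):
    """Forward pass of the 5-layer coordinate-type network (assumed functions).
    Xp is the normalized input X' = [i', j', V', mu']."""
    # Layer 1: input layer; output equals input.
    # Layer 2: rule-selection layer -- Gaussian rule functions over each input dim.
    C2 = [math.exp(-sum((Xp[u] - g[m][u]) ** 2 / (2 * th[m][u] ** 2)
                        for u in range(4))) for m in range(M)]
    # Layer 3: first hidden layer with tanh excitation.
    H = [math.tanh(sum(W2[l][m] * C2[m] for m in range(M))) for l in range(L)]
    # Layer 4: data-fusion layer -- activation centered by the layer's own
    # mean and scaled by its variance plus a constant k, as the text describes.
    pre = [sum(W3[q][l] * H[l] for l in range(L)) for q in range(Q)]
    mean = sum(pre) / Q
    var = sum((p - mean) ** 2 for p in pre) / Q
    k = 1.0  # constant named in the text
    F4 = [math.tanh((p - mean) / math.sqrt(var + k)) for p in pre]
    # Layer 5: output layer Y_r; Y_1 squashed to a probability.
    Y = [sum(W4[r][q] * F4[q] for q in range(Q)) for r in range(R)]
    Y[0] = 1.0 / (1.0 + math.exp(-Y[0]))  # probability of a rear-end collision
    return Y
```

The fusion step normalizes by the layer's own statistics before the final linear readout, which is one plausible reading of the claimed gradient-stabilizing effect.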
S4, thresholds are set on the outputs of the coordinate-type neural network to issue early warnings and intelligently regulate the vehicle.
Through the coordinate-type neural network established by the invention, Y_1 is obtained as the probability of a rear-end collision.
A rear-end-collision early-warning threshold τ_1 is set; when Y_1 ≥ τ_1, an early warning is issued to prompt the driver so that the vehicle can be intelligently adjusted.
The vehicle is then intelligently regulated according to a preset rear-end-collision-prevention safety adjustment scheme, as follows: the speed of the vehicle is adjusted to the set standard speed value, the heading of the vehicle is adjusted by the set rear-end-collision-prevention direction adjustment value, and the braking capability of the vehicle is adjusted to the set braking-capability safety value.
Compared with the prior art, the invention regulates the vehicle more intelligently and in all respects, so rear-end collision events are avoided comprehensively and the method is more efficient in use.
In conclusion, the intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions is achieved.
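The threshold scheme of step S4 can be sketched as follows; the actuation interface (standard speed value, direction adjustment value, braking safety value) is a hypothetical illustration of the preset safety adjustment scheme described above, not the patent's concrete implementation:

```python
def regulate(Y, tau1, standard_speed, direction_adjust, brake_safety):
    """Threshold-based early warning and regulation sketch (assumed interface).
    Y = [collision probability, speed adjustment value,
         coordinate adjustment value, braking capability value]."""
    if Y[0] < tau1:
        # Below the early-warning threshold: no intervention.
        return {"warning": False}
    # Early warning plus the preset safety adjustment scheme.
    return {
        "warning": True,
        "target_speed": standard_speed,      # adjust speed to the standard value
        "heading_adjust": direction_adjust,  # anti-rear-end direction adjustment
        "brake_level": brake_safety,         # braking-capability safety value
    }
```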
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (3)

1. An intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions, characterized by comprising the following steps:
S1, monitoring the speed of the vehicle ahead and the relative distance in real time, based on optical principles, using the imaging device of a driving recorder, to obtain the coordinates and relative speed of the target vehicle;
S2, calculating the rolling friction coefficient of the vehicle under the environmental parameters;
S3, establishing a coordinate-type neural network model: the inputs are the coordinates of the vehicle ahead, the speed of the vehicle ahead relative to the vehicle, and the rolling friction coefficient; the outputs are the probability value of the vehicle suffering a rear-end collision, the anti-rear-end-collision speed adjustment value, the anti-rear-end-collision coordinate adjustment value of the vehicle, and the braking capability value;
S31, obtaining in real time the variables X = [i, j, V, μ] from the transverse coordinate value i and longitudinal coordinate value j of the vehicle ahead in the two-dimensional plane, the speed V of the vehicle ahead relative to the vehicle, and the rolling friction coefficient μ of the vehicle;
Data normalization preprocessing is performed on X = [i, j, V, μ]:
wherein t is a parameter and t → ∞;
the standardized data X' = [i', j', V', μ'] are obtained and input into the coordinate-type neural network as the input variable;
S32, the created coordinate-type neural network model has a 5-layer structure: layer 1 is the data input layer C, with input variable X' = [i', j', V', μ']; layer 2 is the rule selection layer, which selects the processing rule for the input data; layer 3 is the first hidden layer; layer 4 is the data fusion layer; layer 5 is the output layer, where Y 1 outputs the probability value of a rear-end collision event;
S321, layer 1: there are 4 neurons, i.e. C = 4 and c = {1, 2, 3, 4}; any one neuron is denoted by c;
the input of the input layer is X' = [i', j', V', μ'], and the output is equal to the input;
S322, layer 2: there are M neurons, with m = {1, 2, 3, ..., M}; any one neuron is denoted by m:
the generation rule function is as follows:
where u = {1, 2, 3, 4}, u representing the dimension of the input quantity; v = {1, 2, 3, ..., C u}, v representing the precision of the input quantity, with C u denoting the C u-th precision; g uv represents the center of the rule function, θ uv represents the width of the rule function, and a 1, a 2 are constants with a 1 < a 2;
The output of layer 2 is
where w cm and b cm are the connection weights and biases from layer 1 to layer 2;
S323, layer 3 is a hidden layer with L neurons; any one neuron is denoted by l;
the output of any neuron in layer 3 is
where w ml and b ml are the connection weight and bias between the m-th neuron of layer 2 and the l-th neuron of layer 3, with the excitation function and its parameter set as defined above;
the output of any one neuron of layer 3 is:
S324, layer 4 is the data fusion layer with Q neurons, where q = {1, 2, 3, ..., Q}; any one neuron is denoted by q;
The data input to the data fusion layer are normalized to obtain the normalized data, recorded as
the corresponding quantities of the normalized data are found and denoted ξ q, respectively;
the output of layer 4 after data fusion is recorded as:
where the excitation function is as defined above; w lq and b lq are the connection weight and bias between the l-th neuron of layer 3 and the q-th neuron of layer 4, respectively, and k is a constant;
The output of layer 4 is derived from the above:
S325, layer 5 has 4 neurons, where r = {1, 2, 3, 4}; any one neuron is denoted by r;
wherein Y 1 outputs the probability value of a rear-end collision, Y 2 outputs the anti-rear-end-collision speed adjustment value, Y 3 is the anti-rear-end-collision coordinate adjustment value of the vehicle, and Y 4 is the braking capability value; the specific calculation is:
Y r = f 1(Q q) × w qr + b qr
where w qr and b qr are the connection weight and bias between the q-th neuron of layer 4 and the r-th neuron of layer 5, respectively, and t is a parameter;
From the above expression:
S4, setting thresholds for the outputs of the coordinate-type neural network to issue early warnings and regulate the vehicle intelligently; Y 1 is obtained through the coordinate-type neural network as the probability value of a rear-end collision;
a rear-end collision early-warning threshold τ 1 is set; when Y 1 ≥ τ 1, an early warning is issued to alert the driver so that the vehicle is adjusted intelligently;
the vehicle is regulated intelligently according to a preset anti-rear-end-collision safety regulation scheme of the vehicle, specifically as follows:
the speed of the vehicle is adjusted by the set standard speed value, the heading of the vehicle is adjusted by the set anti-rear-end-collision direction adjustment value, and the braking capability of the vehicle is adjusted by the set braking-capability safety value.
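The five-layer forward pass of claim 1 can be sketched numerically as below. The patent's exact rule and excitation functions are reproduced only as images in this text, so Gaussian rule functions, tanh activations, and an L1-style fusion normalization are assumed here, and every weight is a random placeholder rather than a trained parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x):
    """x = [i, j, V, mu]; returns the 4 outputs Y1..Y4 of the sketched network."""
    x = np.asarray(x, float)
    xp = (x - x.mean()) / (x.std() + 1e-9)        # layer 1: standardized input X'
    g = rng.normal(size=(8, 4))                   # layer 2: rule-function centers (placeholder)
    theta = 1.0                                   # rule-function width (placeholder)
    m = np.exp(-((xp - g) ** 2).sum(axis=1) / (2 * theta ** 2))  # rule-selection outputs
    w3, b3 = rng.normal(size=(6, 8)), rng.normal(size=6)
    h = np.tanh(w3 @ m + b3)                      # layer 3: hidden layer
    q = h / (np.abs(h).sum() + 1e-9)              # layer 4: fusion via normalization
    w5, b5 = rng.normal(size=(4, 6)), rng.normal(size=4)
    y = w5 @ q + b5                               # layer 5: Y_r = f(Q_q) * w_qr + b_qr
    y[0] = 1.0 / (1.0 + np.exp(-y[0]))            # squash Y1 into (0, 1) as a probability
    return y

y = forward([3.2, 14.5, 2.1, 0.015])              # [i, j, V, mu]
print(y)
```

With trained weights in place of the random placeholders, Y 1 would be compared against τ 1 exactly as step S4 describes.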
2. The intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions according to claim 1, wherein step S1 comprises:
S11, setting the three-dimensional coordinate system of the camera and the two-dimensional image-plane coordinate system formed after shooting, respectively:
taking the position of the camera as the coordinate origin, the x axis and z axis of the three-dimensional coordinate system are set in the plane of the road on which the vehicle travels, with the x axis perpendicular to the direction of travel, the y axis perpendicular to the road surface, and the z axis parallel to the direction of travel; the position of the camera lies on an axis of the set three-dimensional coordinate system; the optical axis of the camera lies in the coordinate plane formed by the y axis and the z axis, the included angle between the optical axis and the road plane is θ, and the distance from the camera to the road plane along the optical-axis direction is ε, where θ and ε are adjustable variables, adjusted according to the actual condition of the vehicle;
let h be the installed height of the camera above the ground, a known height, and let O(x *, h *, z *) represent the position coordinates of any road-surface point the camera can capture;
the two-dimensional image coordinate system formed after shooting is set with the optical center G of the camera as the coordinate origin and with transverse coordinate axis i * and longitudinal coordinate axis j *; the i * axis is parallel to the x axis, the j * axis is perpendicular to both the i * axis and the optical axis, and O'(i, j) represents the coordinates of a point in the two-dimensional image plane formed after the camera shoots;
the mapping of O(x *, h *, z *) to O'(i, j) is expressed by the following formula:
d in the above formula represents the focal length of the camera;
Or O' (i, j) is expressed by O (x *,h*,z*), and the specific formula is as follows:
S12, estimating in real time, from the image captured by the camera, the distance between the vehicle and the vehicle ahead and the speed of the vehicle ahead relative to the vehicle:
the vehicle ahead is regarded as a point whose position coordinates on the road surface are denoted O(x *, h *, z *); the bottom-center point of the vehicle's shadow in the image is calibrated and recorded as point O' to represent the position of the vehicle ahead, and the plane coordinates of this point in the two-dimensional image formed after the camera shoots are represented by O'(i, j);
The distance between the vehicle and the front vehicle is denoted as d, and the imaging relation of the camera is obtained:
α represents the acute angle between the straight line GO and the road surface, i.e., the z axis;
Then from the above
S13, calculating from the computed α the transverse and longitudinal distances between the vehicle and the vehicle ahead, i.e., the values of x * and z *, giving:
the two-dimensional plane-image coordinates O'(i, j) of the vehicle ahead captured by the driving recorder are obtained from the above formula and expressed as:
and the transverse coordinate value i and longitudinal coordinate value j of the vehicle ahead in the two-dimensional plane, together with the real-time speed V of the vehicle ahead relative to the vehicle, are taken as a group of parameters for constructing the coordinate-type neural network model in step S3.
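Since the projection formulas of claim 2 appear only as images in this text, the following sketch assumes the standard flat-road pinhole relations: a camera of focal length f mounted at height h with its optical axis tilted down by θ, recovering the road-plane point (x *, z *) from the image point (i, j). The function name and the small-angle lateral approximation are illustrative assumptions, not the patented formula.

```python
import math

def ground_point_from_pixel(i, j, f, h, theta):
    """Recover the road-plane coordinates (x*, z*) of the image point (i, j)
    marking the bottom-center of the lead vehicle, assuming a flat road."""
    alpha = theta + math.atan2(j, f)   # acute angle of the line GO' below the horizontal
    z_star = h / math.tan(alpha)       # longitudinal distance along the road
    slant = h / math.sin(alpha)        # slant range along GO
    x_star = i * slant / f             # lateral offset (small-angle approximation)
    return x_star, z_star

# The relative speed V follows from two successive frames: V = (z2 - z1) / dt.
x1, z1 = ground_point_from_pixel(0.002, 0.004, f=0.006, h=1.3, theta=0.05)
```

Note that a point lower in the image (larger j) yields a steeper ray and thus a shorter recovered distance z *, consistent with the geometry described in S12 and S13.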
3. The intelligent vehicle regulation and control method for preventing rear-end collisions under multiple road conditions according to claim 2, wherein step S2 comprises:
the friction factor between the vehicle and the ground in the current environment is denoted γ; γ car 1, γ car 2, γ car 3, γ car 4 denote the friction factors of the four tires of the vehicle, n = {1, 2, 3, 4} indexes the tires, and any one tire is denoted by n; the friction factor of any one tire is denoted γ car n, and the normal pressure between any tire and the ground is denoted F n; let e = {1, 2, 3, 4} index F e; γ road denotes the friction factor of the road, and μ denotes the total rolling friction coefficient between the four tires of the vehicle and the road;
the expression for the total rolling friction coefficient of the vehicle and the road is:
where γ car n, γ road, and F n are acquired in real time by wireless sensors and transmitted to the vehicle's computer for calculation; σ F represents the variance of the tire-ground normal pressure values F n, and σ car represents the variance of the friction factors of the four tires of the vehicle.
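The closed-form expression for μ in claim 3 is likewise reproduced only as an image, so the sketch below assumes one plausible reading: a normal-force-weighted combination of the tire friction factors scaled by the road factor, with the variances σ F and σ car computed as the claim describes. Both the weighting and the combination with γ road are assumptions, not the patented formula.

```python
from statistics import pvariance

def rolling_friction(gamma_car, gamma_road, forces):
    """gamma_car: friction factors of the four tires; forces: their normal
    pressures F_n; returns (mu, sigma_F, sigma_car)."""
    sigma_f = pvariance(forces)        # variance of the normal pressures F_n
    sigma_car = pvariance(gamma_car)   # variance of the four tire friction factors
    # Hypothetical weighting: load-weighted tire factor scaled by the road factor.
    weighted = sum(g * f for g, f in zip(gamma_car, forces)) / sum(forces)
    return weighted * gamma_road, sigma_f, sigma_car

mu, s_f, s_car = rolling_friction([0.012, 0.013, 0.011, 0.012], 1.1,
                                  [3500.0, 3600.0, 3400.0, 3500.0])
```

The resulting μ would be fed into the coordinate-type neural network of step S3 alongside the coordinates and relative speed of the vehicle ahead.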
CN202210531874.4A 2022-05-16 2022-05-16 Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions Active CN114822036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210531874.4A CN114822036B (en) 2022-05-16 2022-05-16 Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210531874.4A CN114822036B (en) 2022-05-16 2022-05-16 Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions

Publications (2)

Publication Number Publication Date
CN114822036A CN114822036A (en) 2022-07-29
CN114822036B true CN114822036B (en) 2024-06-14

Family

ID=82515516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210531874.4A Active CN114822036B (en) 2022-05-16 2022-05-16 Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions

Country Status (1)

Country Link
CN (1) CN114822036B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110091868A (en) * 2019-05-20 2019-08-06 合肥工业大学 A kind of longitudinal collision avoidance method and its system, intelligent automobile of man-machine coordination control
CN111994068A (en) * 2020-10-29 2020-11-27 北京航空航天大学 Intelligent driving automobile control system based on intelligent tire touch perception

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105235681B (en) * 2015-11-11 2018-06-08 吉林大学 A vehicle rear-end collision avoidance system and method based on road conditions
CN105938660B (en) * 2016-06-07 2018-10-12 长安大学 A kind of automobile anti-rear end collision method for early warning and system
CN110682907B (en) * 2019-10-17 2021-06-01 四川大学 Automobile rear-end collision prevention control system and method
JP2021088230A (en) * 2019-12-02 2021-06-10 Toyo Tire株式会社 Vehicle safety assist system and vehicle safety assist method
CN112037159B (en) * 2020-07-29 2023-06-23 中天智控科技控股股份有限公司 Cross-camera road space fusion and vehicle target detection tracking method and system
KR20220027327A (en) * 2020-08-26 2022-03-08 현대모비스 주식회사 Method And Apparatus for Controlling Terrain Mode Using Road Condition Judgment Model Based on Deep Learning
CN112248986B (en) * 2020-10-23 2021-11-05 厦门理工学院 Automatic braking method, device, equipment and storage medium for vehicle
CN114148322B (en) * 2022-01-04 2023-11-17 吉林大学 A road adhesion adaptive air pressure automatic emergency braking control method for commercial vehicles

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110091868A (en) * 2019-05-20 2019-08-06 合肥工业大学 A kind of longitudinal collision avoidance method and its system, intelligent automobile of man-machine coordination control
CN111994068A (en) * 2020-10-29 2020-11-27 北京航空航天大学 Intelligent driving automobile control system based on intelligent tire touch perception

Also Published As

Publication number Publication date
CN114822036A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN110745140B (en) Vehicle lane change early warning method based on continuous image constraint pose estimation
US9390568B2 (en) Driver identification based on driving maneuver signature
DE102013209575B4 (en) METHOD OF CONTROLLING A VEHICLE
US9802599B2 (en) Vehicle lane placement
DE102005009814B4 (en) Vehicle condition detection system and method
CN109074069A (en) Autonomous vehicle with improved vision-based detection ability
DE102023104789A1 (en) TRACKING OF MULTIPLE OBJECTS
DE102021132853A1 (en) CAMERA CALIBRATION BASED ON DEEP LEARNING
CN111142091A (en) Automatic driving system laser radar online calibration method fusing vehicle-mounted information
DE102010005290A1 (en) Vehicle controlling method for vehicle operator i.e. driver, involves associating tracked objects based on dissimilarity measure, and utilizing associated objects in collision preparation system to control operation of vehicle
CN111950483A (en) A Vision-Based Method for Predicting Vehicle Front Collision
CN109299656B (en) Scene depth determination method for vehicle-mounted vision system
DE112019001078T5 (en) METHOD AND DEVICE FOR VEHICLE CONTROL
EP4148599A1 (en) Systems and methods for providing and using confidence estimations for semantic labeling
DE102022121602A1 (en) OBJECT MOTION PATH PREDICTION
DE102020101837A1 (en) Method and device for determining the condition of a vehicle
DE102022104054A1 (en) THE VEHICLE CONDITION ESTIMATION IMPROVES SENSOR DATA FOR VEHICLE CONTROL AND AUTONOMOUS DRIVING
CN114964445A (en) Multi-module dynamic weighing method based on vehicle identification
DE102022132847A1 (en) THREE-DIMENSIONAL OBJECT DETECTION
CN110888441B (en) Gyroscope-based wheelchair control system
CN114822036B (en) Intelligent vehicle regulation and control method for preventing rear-end collision under multiple conditions
DE102019006935A1 (en) Technology for dead time compensation for transverse and longitudinal guidance of a motor vehicle
CN116691626B (en) Vehicle braking system and method based on artificial intelligence
EP4145352A1 (en) Systems and methods for training and using machine learning models and algorithms
CN117841985A (en) Monitoring regulation and control system applied to closed-loop vehicle gesture contour sensing module

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240513

Address after: Room 214, Yijing Building, No.1 Hengshan Road, Yantai Economic and Technological Development Zone, Shandong Province, 264000

Applicant after: Shandong all things Machinery Technology Co.,Ltd.

Country or region after: China

Address before: No. 3203, block C, Range Rover mansion, No. 588, Gangcheng East Street, Laishan District, Yantai City, Shandong Province, 264003

Applicant before: SHANDONG HENGHAO INFORMATION TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant