CN118447485B - Vehicle target recognition system based on edge calculation - Google Patents
- Publication number
- CN118447485B (application CN202410905665.0A)
- Authority
- CN
- China
- Prior art keywords
- obstacle
- image
- vehicle
- module
- quality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60T—VEHICLE BRAKE CONTROL SYSTEMS OR PARTS THEREOF; BRAKE CONTROL SYSTEMS OR PARTS THEREOF, IN GENERAL; ARRANGEMENT OF BRAKING ELEMENTS ON VEHICLES IN GENERAL; PORTABLE DEVICES FOR PREVENTING UNWANTED MOVEMENT OF VEHICLES; VEHICLE MODIFICATIONS TO FACILITATE COOLING OF BRAKES
- B60T7/00—Brake-action initiating means
- B60T7/12—Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger
- B60T7/22—Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger initiated by contact of vehicle, e.g. bumper, with an external object, e.g. another vehicle, or by means of contactless obstacle detectors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/16—Image acquisition using multiple overlapping images; Image stitching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/502—Proximity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Abstract
The invention discloses a vehicle target recognition system based on edge computing, relating to the technical field of target recognition. An obstacle recognition module recognizes the appearance of an obstacle in an area image acquired by the left camera or the right camera; an image acquisition module continuously acquires several left-eye images and right-eye images through the left camera and right camera; a quality analysis module generates a quality score for each left-eye image and right-eye image; an assignment module assigns a weight to each left-eye image and right-eye image according to its quality score; a coefficient calculation module performs, on an edge computing device, a weighted average calculation over all left-eye and right-eye image weight assignments to obtain the overall quality coefficient of the recognition system; and a strategy generation module generates a vehicle control strategy based on fuzzy rules in combination with the vehicle state, the obstacle state and the overall quality coefficient. The recognition system can effectively combine multiple factors to optimize the braking state of the vehicle when an obstacle is encountered and ensure the safe running of the vehicle.
Description
Technical Field
The invention relates to the technical field of target recognition, and in particular to a vehicle target recognition system based on edge computing.
Background
Binocular vision is a technology that imitates the human binocular visual system. It acquires image information from different angles through two cameras or sensors and realizes depth perception and target recognition by computing and analyzing this information. The technique imitates the way human eyes observe the world, computing the distance and depth of an object from the parallax between the two viewpoints.
In a binocular vision recognition system, the images captured by the two cameras or sensors are referred to as the left-eye and right-eye images. The difference between these two images provides information about the distance and depth of objects in the scene; by comparing them against pre-stored models or algorithms, the system can recognize objects and determine their position and pose in three-dimensional space.
The prior art has the following defects:
A binocular vision recognition system applied to active protection of vehicles and pedestrians generally generates a braking command whenever an obstacle is captured within the monitoring range while the vehicle is running, and the vehicle control system slows or stops the vehicle according to that command. In practice, however, the obstacle may be static or dynamic, the vehicle speed is not the same each time an obstacle is detected, and other factors (such as unclear images) also interfere. A uniform braking command may therefore cause the vehicle control system to brake excessively or insufficiently: excessive braking at high speed may injure the passengers in the vehicle, while insufficient braking may lead to safety accidents.
Based on this, the invention provides a vehicle target recognition system based on edge computing, which can effectively combine multiple factors to optimize the vehicle braking state when an obstacle is encountered and ensure the safe running of the vehicle.
Disclosure of Invention
The invention aims to provide a vehicle target recognition system based on edge computing, so as to overcome the defects described in the background art.
In order to achieve the above object, the invention provides the following technical solution: a vehicle target recognition system based on edge computing, comprising an image acquisition module (for area images), an obstacle recognition module, an image acquisition module (for left-eye and right-eye images), a quality analysis module, an assignment module, a coefficient calculation module and a strategy generation module;
The image acquisition module: acquires area images of the vehicle's travelling direction through the left camera and right camera arranged on the vehicle;
Obstacle recognition module: when an obstacle appears in the area image acquired by the left camera or the right camera, analyzes the obstacle state and wakes up the image acquisition module;
Image acquisition module: continuously acquires several left-eye images and right-eye images through the left camera and the right camera;
Quality analysis module: after analyzing the quality of the left-eye images and right-eye images, generates a quality score for each left-eye image and right-eye image;
Assignment module: assigns a weight to each left-eye image and right-eye image according to its quality score;
Coefficient calculation module: the edge computing device performs a weighted average calculation over all left-eye and right-eye image weight assignments to obtain the overall quality coefficient of the recognition system;
Strategy generation module: generates a vehicle control strategy based on fuzzy rules in combination with the vehicle state, the obstacle state and the overall quality coefficient, and sends the vehicle control strategy to the vehicle control system.
In a preferred embodiment, the obstacle recognition module analyzes the obstacle state, which includes determining whether the obstacle is a static obstacle or a dynamic obstacle, and, when the obstacle is determined to be dynamic, acquiring the moving direction and moving speed of the dynamic obstacle.
In a preferred embodiment, the quality analysis module obtains the gray value variances and signal-to-noise ratios of the several left-eye images and right-eye images;
after the gray value variance and the signal-to-noise ratio are normalized, a gray value variance normalized value and a signal-to-noise-ratio normalized value are obtained;
the gray value variance normalized value and the signal-to-noise-ratio normalized value are added to obtain the quality score.
In a preferred embodiment, the assignment module obtains the quality scores of the 6 travelling-direction images, sorts the 6 images from the largest to the smallest quality score, generates a TTL index for each image according to the sorting result, and assigns a weight to each left-eye image and right-eye image from its TTL index based on an order-graph method;
after obtaining the weight assignments of the 6 images, the coefficient calculation module performs a weighted average calculation over the weight assignments of the 3 left-eye images and 3 right-eye images to obtain the overall quality coefficient of the recognition system.
In a preferred embodiment, the strategy generation module defines the vehicle state, the overall quality coefficient of the binocular vision system and the obstacle state as input variables and divides them into different fuzzy sets;
defines the vehicle control strategy as the output variable and divides it into fuzzy sets;
formulates fuzzy rules describing the influence of the different input variables on the output variable;
and takes the vehicle state, the overall quality coefficient of the binocular vision system and the obstacle state as input variables, feeds them into the fuzzy rules, performs fuzzy inference, and outputs the corresponding vehicle control strategy.
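As an illustration of the fuzzy inference described above, the following is a minimal sketch of a Mamdani-style controller that maps vehicle speed, the overall quality coefficient and the obstacle state to a braking intensity. The membership functions, fuzzy sets and rule base here are illustrative assumptions; the patent does not disclose concrete numeric definitions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def brake_strategy(speed_kmh, quality_coeff, obstacle_dynamic):
    """Return a braking intensity in [0, 1] from fuzzified inputs.

    All set boundaries and rule consequents are hypothetical examples."""
    # Fuzzify vehicle speed into "slow" / "fast"
    slow = tri(speed_kmh, -1, 0, 60)
    fast = tri(speed_kmh, 30, 120, 200)
    # Fuzzify the overall quality coefficient: low quality = less trust in detection
    q_low = tri(quality_coeff, -0.1, 0.0, 1.0)
    q_high = tri(quality_coeff, 0.0, 1.0, 1.1)
    danger = 1.0 if obstacle_dynamic else 0.5  # crisp obstacle-state factor
    # Rule base (min for AND, weighted-centroid defuzzification):
    #   R1: fast AND high quality -> strong braking (0.9)
    #   R2: slow AND high quality -> gentle braking (0.4)
    #   R3: low quality           -> moderate braking (0.6), hedging misdetection
    rules = [(min(fast, q_high) * danger, 0.9),
             (min(slow, q_high) * danger, 0.4),
             (q_low, 0.6)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

A faster vehicle facing a dynamic obstacle yields a stronger braking command than a slow one, which matches the motivation in the background section.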
In a preferred embodiment, the gray value variance is calculated as: GVV = (1/n) * Σ_{i=1..n} (I_i − μ)^2, where GVV is the gray value variance, n is the number of image pixels, I_i is the gray value of the i-th pixel, and μ is the average gray value;
the signal-to-noise ratio is calculated as: SNR = 10 · log10(SignalPower / NoisePower), where SNR is the signal-to-noise ratio, SignalPower is the signal energy in the image, and NoisePower is the noise energy in the image.
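The two quality metrics can be sketched as follows. The gray value variance follows the formula above directly; for the signal-to-noise ratio, the patent does not specify how SignalPower and NoisePower are estimated, so this sketch assumes (hypothetically) that a 3x3-smoothed image is the signal and the residual is the noise.

```python
import numpy as np

def gray_value_variance(img):
    """GVV = (1/n) * sum_i (I_i - mu)^2 over the n pixel gray values."""
    img = np.asarray(img, dtype=np.float64)
    return float(np.mean((img - img.mean()) ** 2))

def _box3(img):
    """3x3 mean filter with edge padding (used to split signal from noise)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def snr_db(img):
    """SNR = 10 * log10(SignalPower / NoisePower).

    Assumption: the smoothed image is 'signal', the residual is 'noise';
    the patent does not define these powers concretely."""
    img = np.asarray(img, dtype=np.float64)
    signal = _box3(img)
    noise = img - signal
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
```

Under this assumption a smooth gradient image scores a much higher SNR than a pixel-level checkerboard, matching the intuition that noisy images get low scores.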
In a preferred embodiment, the image acquisition module continuously acquires several left-eye images and right-eye images through the left camera and the right camera; the numbers of left-eye and right-eye images acquired are the same, and each left-eye image is captured at the same moment as its corresponding right-eye image.
A binocular vision-based target recognition method, the recognition method comprising the steps of:
the recognition system acquires area images of the vehicle's travelling direction through the left camera and right camera arranged on the vehicle; when an obstacle appears in the area image acquired by the left camera or the right camera, it analyzes the obstacle state and continuously acquires several left-eye images and right-eye images through the left camera and the right camera;
after analyzing the quality of the left-eye images and right-eye images, it generates a quality score for each left-eye image and right-eye image and assigns a weight to each left-eye image and right-eye image according to the quality score;
the edge computing device then performs a weighted average calculation over all left-eye and right-eye image weight assignments to obtain the overall quality coefficient of the recognition system, and a vehicle control strategy is generated based on fuzzy rules in combination with the vehicle state, the obstacle state and the overall quality coefficient.
The technical effects and advantages of the technical scheme of the invention are as follows:
When an obstacle appears in an area image acquired by the left camera or the right camera, the obstacle recognition module analyzes the obstacle state and wakes up the image acquisition module; the obstacle state includes whether the obstacle is static or dynamic, and, for a dynamic obstacle, its moving direction and moving speed. The image acquisition module continuously acquires several left-eye images and right-eye images through the left camera and the right camera; the quality analysis module analyzes the quality of these images and generates a quality score for each left-eye image and right-eye image; the assignment module assigns a weight to each left-eye image and right-eye image according to its quality score; the coefficient calculation module performs a weighted average calculation over all left-eye and right-eye image weight assignments on an edge computing device to obtain the overall quality coefficient of the recognition system; and the strategy generation module generates a vehicle control strategy based on fuzzy rules in combination with the vehicle state, the obstacle state and the overall quality coefficient. The recognition system can effectively combine multiple factors to optimize the braking state of the vehicle when an obstacle is encountered and ensure the safe running of the vehicle.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the embodiments are briefly described below. It is apparent that the drawings in the following description are only some of the embodiments described in the present application, and a person of ordinary skill in the art may obtain other drawings from these drawings.
FIG. 1 is a block diagram of a system according to the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Examples: in the application, the binocular vision-based target recognition system is applied to the driving active protection auxiliary use of the vehicle;
Referring to FIG. 1, the vehicle target recognition system based on edge computing according to the present embodiment includes an image acquisition module (for area images), an obstacle recognition module, an image acquisition module (for left-eye and right-eye images), a quality analysis module, an assignment module, a coefficient calculation module and a strategy generation module;
The image acquisition module: acquires area images of the vehicle's travelling direction through the left camera and right camera arranged on the vehicle, where the left camera and right camera are symmetrically arranged on the two sides of the vehicle, and sends the area images to the obstacle recognition module and the image acquisition module;
Obstacle recognition module: when an obstacle appears in the area image acquired by the left camera or the right camera, analyzes the obstacle state and wakes up the image acquisition module; the obstacle state includes whether the obstacle is a static obstacle or a dynamic obstacle, and, when the obstacle is determined to be dynamic, its moving direction and moving speed are acquired; the obstacle state is sent to the strategy generation module;
Image acquisition module: continuously acquires several left-eye images and right-eye images through the left camera and the right camera, where the numbers of left-eye and right-eye images acquired are the same and each left-eye image is captured at the same moment as its corresponding right-eye image, and sends the left-eye images and right-eye images to the quality analysis module;
Quality analysis module: after analyzing the quality of the left-eye images and right-eye images, generates a quality score for each left-eye image and right-eye image and sends the quality scores to the assignment module;
Assignment module: assigns a weight to each left-eye image and right-eye image according to its quality score, and sends the weight assignment information to the coefficient calculation module;
Coefficient calculation module: performs a weighted average calculation over all left-eye and right-eye image weight assignments on the edge computing device to obtain the overall quality coefficient of the recognition system, and sends the overall quality coefficient to the strategy generation module;
Strategy generation module: generates a vehicle control strategy based on fuzzy rules in combination with the vehicle state, the obstacle state and the overall quality coefficient, and sends the vehicle control strategy to the vehicle control system.
The specific flow is as follows:
The recognition system acquires area images of the vehicle's travelling direction through the left camera and right camera arranged on the vehicle, the two cameras being symmetrically arranged on the two sides of the vehicle. When an obstacle appears in the area image acquired by the left camera or the right camera, the obstacle state is analyzed and the image acquisition module is awakened; the obstacle state includes whether the obstacle is a static obstacle or a dynamic obstacle, and, for a dynamic obstacle, its moving direction and moving speed are acquired. The recognition system then continuously acquires several left-eye images and right-eye images through the left camera and the right camera; the numbers of left-eye and right-eye images are the same and each pair is captured at the same moment. A quality score is generated for each left-eye image and right-eye image, a weight is assigned to each image according to its quality score, and the edge computing device performs a weighted average calculation over all the weight assignments to obtain the overall quality coefficient of the recognition system. Finally, a vehicle control strategy is generated based on fuzzy rules in combination with the vehicle state, the obstacle state and the overall quality coefficient.
The image acquisition module: acquires area images of the vehicle's travelling direction through the left camera and right camera arranged on the vehicle, where the two cameras are symmetrically arranged on the two sides of the vehicle;
Image capture: acquire image data from the left camera and the right camera in real time, through a camera driver, an image acquisition card or an embedded image processing unit.
Image synchronization: ensure that the images captured by the left camera and the right camera are synchronized, i.e. that the two cameras capture their images at the same time, so that stereoscopic vision processing can be performed accurately.
Image preprocessing: preprocess the captured images, including white balance, exposure compensation, denoising and other operations, to improve image quality and stability and to facilitate subsequent image analysis and processing.
Stereoscopic vision processing: perform stereoscopic vision processing, such as stereo matching and depth estimation, on the image data acquired by the left and right cameras to obtain three-dimensional information of the region in the vehicle's travelling direction.
Obstacle recognition module: when an obstacle appears in the area image acquired by the left camera or the right camera, analyzes the obstacle state and wakes up the image acquisition module; the obstacle state includes whether the obstacle is a static obstacle or a dynamic obstacle, and, when the obstacle is determined to be dynamic, its moving direction and moving speed are acquired;
Image monitoring: monitor the area images captured by the left camera and the right camera and detect in real time whether an obstacle appears.
Obstacle detection: process the captured images using an object detection or object recognition algorithm to identify and locate obstacles present in the images. Target detection may use a deep learning method, such as a convolutional neural network (CNN), or a conventional image processing method; obstacle detection methods belong to the prior art and are not described here.
Obstacle state analysis: perform state analysis on the identified obstacle, judging whether it is a static or dynamic obstacle from its position, movement track and other information. The method of judging whether an obstacle is static or dynamic is not limited here.
Dynamic obstacle analysis: if the identified obstacle is dynamic, its moving direction and moving speed are further analyzed using the optical flow method; the details of the optical flow analysis are not repeated here.
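The dynamic-obstacle motion analysis can be illustrated with a much simpler stand-in for optical flow: exhaustive block matching that finds the dominant pixel displacement between two consecutive gray frames. A real system would typically use a dense optical flow routine (e.g. OpenCV's Farneback method); this sketch only conveys the idea of recovering a moving direction from two frames.

```python
import numpy as np

def estimate_shift(prev, curr, max_disp=5):
    """Estimate the dominant (dy, dx) motion between two gray frames by
    exhaustive block matching - a crude stand-in for dense optical flow."""
    prev = np.asarray(prev, dtype=np.float64)
    curr = np.asarray(curr, dtype=np.float64)
    h, w = prev.shape
    best, best_err = (0, 0), np.inf
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            # Compare the overlapping regions under the candidate shift:
            # curr[y, x] should equal prev[y - dy, x - dx] for a (dy, dx) move.
            ys = slice(max(dy, 0), h + min(dy, 0))
            xs = slice(max(dx, 0), w + min(dx, 0))
            ys0 = slice(max(-dy, 0), h + min(-dy, 0))
            xs0 = slice(max(-dx, 0), w + min(-dx, 0))
            err = np.mean((curr[ys, xs] - prev[ys0, xs0]) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

Dividing the recovered displacement by the 10-millisecond frame interval would give a pixel-space moving speed.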
Waking up an image acquisition module: if an obstacle appears, the image acquisition module is awakened to ensure that the image is acquired in real time and the obstacle identification and processing are performed.
Image acquisition module: continuously acquires several left-eye images and right-eye images through the left camera and the right camera; the numbers of left-eye and right-eye images acquired are the same, and each left-eye image is captured at the same moment as its corresponding right-eye image;
Synchronous capture: the left camera and the right camera are triggered at the same instant to ensure that the left-eye image and the right-eye image are acquired simultaneously.
Continuous capture: several left-eye images and right-eye images are acquired in succession; a loop may be used to keep acquiring images from the cameras until the desired number is reached.
Image storage: the acquired left-eye images and right-eye images are saved in memory for subsequent image processing and analysis.
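The synchronous and continuous capture steps above can be sketched as a timed loop. The camera objects and their read() method are a hypothetical interface, not a real driver API; the 10-millisecond spacing matches the interval given later in this description.

```python
import time

def capture_pairs(left_cam, right_cam, count=3, interval_s=0.010):
    """Capture `count` left/right image pairs at a fixed interval.

    `left_cam` / `right_cam` are assumed (hypothetically) to expose a
    read() method returning one frame; both reads are triggered
    back-to-back to approximate synchronized capture."""
    lefts, rights = [], []
    for i in range(count):
        t0 = time.monotonic()
        lefts.append(left_cam.read())
        rights.append(right_cam.read())
        if i < count - 1:
            # Sleep off the remainder of the capture interval.
            time.sleep(max(0.0, interval_s - (time.monotonic() - t0)))
    return lefts, rights
```

Equal list lengths guarantee the same number of left-eye and right-eye images, as the embodiment requires.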
Quality analysis module: after analyzing the quality of the left-eye images and right-eye images, generates a quality score for each left-eye image and right-eye image;
For active protection of a vehicle, the quality of the left-eye and right-eye images is mainly affected by image sharpness and image noise;
Image sharpness: blurring or defocusing makes the contours of objects in the image unclear, which hinders accurate recognition and distance measurement and increases judgment errors;
Image noise: noise in the image interferes with the edges and contours of objects, blurring object boundaries, increasing the difficulty of recognition and positioning, and likewise increasing judgment errors.
Based on this, the recognition system obtains the gray value variances and signal-to-noise ratios of the left-eye images and right-eye images;
The gray value variance is calculated as: GVV = (1/n) * Σ_{i=1..n} (I_i − μ)^2, where GVV is the gray value variance, n is the number of image pixels, I_i is the gray value of the i-th pixel, and μ is the average gray value. The larger the variance, the larger the variation of the pixel gray values in the image and the higher the sharpness of the image, i.e. the better the image quality;
The signal-to-noise ratio is calculated as: SNR = 10 · log10(SignalPower / NoisePower), where SNR is the signal-to-noise ratio, SignalPower is the signal energy in the image, and NoisePower is the noise energy in the image. A higher signal-to-noise ratio means the signal in the image is stronger relative to the noise, so the image is clearer and richer in detail, and its contrast improves, making objects in the image easier to distinguish. A low signal-to-noise ratio may blur the image, lose detail, and reduce contrast, lowering the distinguishability of objects from the background.
After the gray value variance and the signal-to-noise ratio are normalized, a gray value variance normalized value and a signal-to-noise-ratio normalized value are obtained;
The gray value variance normalized value and the signal-to-noise-ratio normalized value are added to obtain the quality score, with the expression: quality(x) = GVV_g + SNR_g, where quality(x) is the quality score of the image, GVV_g is the gray value variance normalized value, and SNR_g is the signal-to-noise-ratio normalized value.
The normalization of the gray value variance and the signal-to-noise ratio uses the general normalization formula:
Normalized(Value) = (Value − Min(Value)) / (Max(Value) − Min(Value)),
where Normalized(Value) is the normalized value, Value is the real-time data, Max(Value) is the maximum of the real-time data, and Min(Value) is the minimum of the real-time data;
It should be noted that the general normalization formula above does not itself refer to the gray value variance or the signal-to-noise ratio; the gray value variance normalized value and the signal-to-noise-ratio normalized value are obtained by substituting those quantities into the general formula.
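Putting the normalization formula and the quality-score expression together gives a short sketch over a batch of images. The behaviour for an all-equal batch (where Max = Min and the formula divides by zero) is an assumption the patent does not cover; zeros are returned in that case.

```python
import numpy as np

def min_max(values):
    """Normalized(Value) = (Value - Min(Value)) / (Max(Value) - Min(Value))."""
    values = np.asarray(values, dtype=np.float64)
    span = values.max() - values.min()
    # Degenerate all-equal batch: return zeros (an assumption; the patent
    # does not address this case).
    return np.zeros_like(values) if span == 0 else (values - values.min()) / span

def quality_scores(gvv_list, snr_list):
    """quality(x) = GVV_g + SNR_g, normalizing each metric over the batch."""
    return min_max(gvv_list) + min_max(snr_list)
```

Each score lies in [0, 2]; an image that is best in both metrics scores 2, worst in both scores 0.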
Assignment module: performing weight assignment on each left eye image and each right eye image according to the quality scores;
From the calculation expression of the quality score, it can be seen that the higher the quality score of an image, the better the quality of that image;
In addition, when an obstacle is detected in the travelling direction region, the binocular vision system must process images quickly because the vehicle is in a moving state; in the present application, the left camera and the right camera therefore capture the left eye images and right eye images continuously at a time interval of 10 milliseconds. Three images are captured continuously on each side, so when the left camera or the right camera detects an obstacle, the binocular vision system acquires 6 images of the travelling direction area within 30 milliseconds;
The quality scores of the 6 travelling direction area images are obtained, and the images are ranked from the highest quality score to the lowest; a TTL index is generated for each image according to the ranking result, and weight assignment is then carried out for each left eye image and right eye image through the TTL indexes based on the order graph method;
The weight assignment carried out for each left eye image and right eye image through the TTL indexes based on the order graph method is specifically shown in Table 1:
TABLE 1
In Table 1, images T1, T2, T3, T4, T5, and T6 correspond one-to-one to the 6 images in this embodiment: after the 6 images are sorted from the highest quality score to the lowest, they correspond, in that order, to images T1, T2, T3, T4, T5, and T6.
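The rank-based weighting can be sketched as follows. The actual weight values come from Table 1 and the order graph method, which are not reproduced here, so the rank-sum weights used below are hypothetical stand-ins:

```python
def rank_weights(scores):
    # Sort image indices by quality score (descending) and assign
    # hypothetical rank-sum weights: rank k of n gets (n - k + 1) / (1+2+...+n).
    # The patent's actual per-rank weights are given in Table 1, not here.
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    total = n * (n + 1) / 2
    weights = [0.0] * n
    for rank, idx in enumerate(order, start=1):
        weights[idx] = (n - rank + 1) / total
    return weights
```

With 6 images the best-scoring image receives weight 6/21 and the worst 1/21, and the weights sum to 1, preserving the "higher quality score, larger weight" ordering the embodiment describes.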
And a coefficient calculating module: the edge computing device carries out weighted average calculation on all left eye image and right eye image weight assignments to obtain the overall quality coefficient of the recognition system;
After the weight assignments of the 6 images are obtained, a weighted average calculation is carried out on the weight assignments of the 3 left eye images and the 3 right eye images to obtain the overall quality coefficient of the identification system;
The invention determines the relative weight of each image by evaluating its quality state and carries out the weighted average according to those relative weights, so that the calculated overall quality coefficient is more representative and the usage state of the binocular vision system can be grasped;
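The weighted average itself can be sketched in a few lines, assuming the per-image quality scores and their assigned weights are already available:

```python
def overall_quality_coefficient(scores, weights):
    # Weighted average of the per-image quality scores, using the weight
    # assignments of the 3 left eye and 3 right eye images.
    if len(scores) != len(weights):
        raise ValueError("scores and weights must align one-to-one")
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)
```

If all six images share the same quality score, the overall quality coefficient equals that score regardless of the weights, which is the expected behavior of a weighted average.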
The strategy generation module: generating a vehicle control strategy based on the fuzzy rule in combination with the vehicle state, the obstacle state and the overall quality coefficient, and sending the vehicle control strategy to a vehicle control system;
After the overall quality coefficient is obtained, it is compared with a preset first quality threshold and a preset second quality threshold; the second quality threshold is used for judging the quality of the images acquired by the binocular vision system, the first quality threshold is used for judging the severity of low image quality, and the first quality threshold is smaller than the second quality threshold;
if the overall quality coefficient is greater than or equal to the second quality threshold, judging that the quality of the image acquired by the binocular vision system is high;
If the overall quality coefficient is smaller than the second quality threshold and is larger than or equal to the first quality threshold, judging that the quality of the image acquired by the binocular vision system is medium;
And if the overall quality coefficient is smaller than the first quality threshold, judging that the quality of the image acquired by the binocular vision system is poor.
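These threshold comparisons can be sketched as follows; the two threshold values are preset by the implementer and are not specified in the patent:

```python
def image_quality_level(q, first_threshold, second_threshold):
    # Per the description, first_threshold < second_threshold.
    if q >= second_threshold:
        return "High quality"
    if q >= first_threshold:
        return "Medium quality"
    return "Low quality"
```

The returned labels match the fuzzy sets used by the strategy generation module below.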
The strategy generation module acquires the vehicle state, the overall quality coefficient of the binocular vision system, and the obstacle state, defines them as input variables, and divides each of them into different fuzzy sets;
For example:
Dividing the overall quality coefficient of the binocular vision system into 'High quality', 'Medium quality', and 'Low quality': an overall quality coefficient greater than or equal to the second quality threshold is 'High quality'; an overall quality coefficient smaller than the second quality threshold but greater than or equal to the first quality threshold is 'Medium quality'; and an overall quality coefficient smaller than the first quality threshold is 'Low quality';
dividing the obstacle state into 'Dangerous', 'Uncertain', and 'No danger': a static obstacle lying on the vehicle's travel path, or a dynamic obstacle moving toward the vehicle at a certain speed, is 'Dangerous'; a static obstacle not on the vehicle's travel path, or a dynamic obstacle not moving toward the vehicle, is 'No danger'; and a dynamic obstacle whose direction is changeable and which is at a certain distance from the vehicle is 'Uncertain';
dividing the vehicle state into 'Fast speed', 'Medium speed', and 'Low speed': a vehicle speed above 61 km/h is 'Fast speed', a vehicle speed between 31 km/h and 60 km/h is 'Medium speed', and a vehicle speed below 30 km/h is 'Low speed';
defining a vehicle control strategy as an output variable, and dividing the vehicle control strategy into fuzzy sets;
For example: the vehicle control strategy is divided into: "Moderate deceleration", "EMERGENCY DECELERATION", "Brake stop", wherein the vehicle decelerates to "Moderate deceleration" every 5km/h at the current speed, the vehicle decelerates to "EMERGENCY DECELERATION" every 15km/h at the current speed, and the vehicle stops to "Brake stop" at the shortest time at the current speed;
Fuzzy rules are formulated to describe the influence of the different input variables on the output variable; the definition of the rules can be based on expert knowledge or obtained through data analysis and experiments. For example:
Marking the overall quality coefficient as Z, the obstacle state as A, the vehicle state as C, and the vehicle control strategy as C_strategy, the rules can be defined as:
Rule 1: IF (Z is Low quality) AND (A is Dangerous) AND (C is Fast speed) THEN (C_strategy is Brake stop);
Rule 2: IF (Z is Low quality) AND (A is No danger) AND (C is Low speed) THEN (C_strategy is Moderate deceleration);
Rule 3: IF (Z is Medium quality) AND (A is Uncertain) AND (C is Medium speed) THEN (C_strategy is Emergency deceleration);
......;
It should be noted that the division of the fuzzy sets may be adjusted according to the actual situation; for example, although this embodiment uses three fuzzy sets per variable as an example, the obstacle state, the vehicle state, and the vehicle control strategy may in practice each be divided into more than three sets so as to control the vehicle more precisely.
The vehicle state, the overall quality coefficient of the binocular vision system, and the obstacle state are taken as input variables and input into the fuzzy rules; fuzzy reasoning is then performed, and the corresponding vehicle control strategy is output.
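Rules 1-3 above can be sketched as a crisp lookup table. A full fuzzy-inference engine with membership degrees and defuzzification is omitted here, so this only illustrates the rule structure, and the default strategy is an assumption:

```python
RULES = {
    # (quality Z, obstacle state A, vehicle state C) -> control strategy,
    # mirroring Rules 1-3; a real system would cover every combination.
    ("Low quality", "Dangerous", "Fast speed"): "Brake stop",
    ("Low quality", "No danger", "Low speed"): "Moderate deceleration",
    ("Medium quality", "Uncertain", "Medium speed"): "Emergency deceleration",
}

def control_strategy(z, a, c, default="Moderate deceleration"):
    # Crisp rule lookup standing in for fuzzy reasoning; the default
    # returned for uncovered combinations is a hypothetical choice.
    return RULES.get((z, a, c), default)
```

In a genuine fuzzy controller each input would activate several rules to a degree, and the output would be defuzzified; the table form simply makes the IF-AND-THEN structure of the rules concrete.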
The above formulas are all dimensionless formulas for numerical calculation; each formula approximates the latest real situation through software simulation over a large amount of collected data, and the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
It should be understood that the term "and/or" is merely an association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean: there are three cases, a alone, a and B together, and B alone, wherein a, B may be singular or plural. In addition, the character "/" herein generally indicates that the associated object is an "or" relationship, but may also indicate an "and/or" relationship, and may be understood by referring to the context.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (4)
1. A vehicle target recognition system based on edge calculation, characterized in that: the system comprises an image acquisition module, an obstacle recognition module, an image acquisition module, a quality analysis module, an assignment module, a coefficient calculation module and a strategy generation module;
The image acquisition module is used for: acquiring an area image of the vehicle in the traveling direction through a left camera and a right camera which are arranged on the vehicle;
Obstacle recognition module: when an obstacle appears in the region image acquired by the left camera or the right camera, analyzing the state of the obstacle, and waking up the image acquisition module;
An image acquisition module: continuously acquiring a plurality of left-eye images and right-eye images through a left camera and a right camera;
And a mass analysis module: after analyzing the quality of the left eye images and the right eye images, generating quality scores for each left eye image and each right eye image;
assignment module: performing weight assignment on each left eye image and each right eye image according to the quality scores;
And a coefficient calculating module: the edge computing device carries out weighted average calculation on all left eye image and right eye image weight assignments to obtain the overall quality coefficient of the recognition system;
The strategy generation module: generating a vehicle control strategy based on the fuzzy rule in combination with the vehicle state, the obstacle state and the overall quality coefficient, and sending the vehicle control strategy to a vehicle control system;
The obstacle recognition module analyzes the obstacle state, wherein the obstacle state comprises that the obstacle is a static obstacle or a dynamic obstacle, and when the obstacle is a dynamic obstacle, the moving direction and the moving speed of the dynamic obstacle are obtained;
the quality analysis module acquires gray value variances of a plurality of left eye images and a plurality of right eye images and signal to noise ratios;
after the gray value variance and the signal-to-noise ratio are normalized, a gray value variance normalization value and a signal-to-noise ratio normalization value are obtained;
adding the gray value variance normalization value and the signal to noise ratio normalization value to obtain a quality score;
The assignment module acquires quality scores of the 6 travelling direction area images, sorts the 6 travelling direction area images from the highest quality score to the lowest, generates a TTL index for each image according to the sorting result, and carries out weight assignment for each left eye image and right eye image through the TTL indexes based on a priority diagram method;
The coefficient calculation module obtains weight assignment of 6 images, and then carries out weighted average calculation on the weight assignment of 3 left eye images and 3 right eye images to obtain the overall quality coefficient of the identification system;
The strategy generation module acquires a vehicle state, an overall quality coefficient of the binocular vision system and an obstacle state definition as input variables and divides the input variables into different fuzzy sets;
defining a vehicle control strategy as an output variable, and dividing the vehicle control strategy into fuzzy sets;
formulating a fuzzy rule, and describing the influence of different input variables on output variables;
and (3) taking the vehicle state, the overall quality coefficient of the binocular vision system and the obstacle state as input variables, inputting fuzzy rules, performing fuzzy reasoning, and outputting corresponding vehicle control strategies.
2. An edge computing-based vehicle object recognition system according to claim 1, wherein: the calculation expression of the gray value variance is: GVV = (1/n)·Σᵢ₌₁ⁿ (Iᵢ − μ)², wherein GVV is the gray value variance, n is the number of image pixels, Iᵢ is the gray value of the i-th pixel, and μ is the average gray value;
the signal-to-noise ratio calculation expression is: SNR = 10·log₁₀(Signal Power / Noise Power), where SNR is the signal-to-noise ratio, Signal Power is the signal energy in the image, and Noise Power is the noise energy in the image.
3. An edge computing-based vehicle object recognition system according to claim 2, wherein: the image acquisition module continuously acquires a plurality of left-eye images and right-eye images through the left camera and the right camera, the acquisition quantity of the left-eye images and the right-eye images is the same, and the left-eye images and the right-eye images are simultaneously acquired at the same time.
4. A binocular vision-based object recognition method, implemented by the recognition system of any one of claims 1-3, characterized in that: the identification method comprises the following steps:
the recognition system acquires area images of the vehicle in the traveling direction through a left camera and a right camera which are arranged on the vehicle, analyzes the state of an obstacle when the obstacle appears in the area images acquired by the left camera or the right camera, and continuously acquires a plurality of left-eye images and right-eye images through the left camera and the right camera;
After the quality of the left eye images and the right eye images is analyzed, generating quality scores for each left eye image and each right eye image, and carrying out weight assignment for each left eye image and each right eye image according to the quality scores;
and carrying out weighted average calculation on all left-eye image and right-eye image weight assignment through edge calculation equipment to obtain the overall quality coefficient of the identification system, and generating a vehicle control strategy based on the combination of the fuzzy rule, the vehicle state, the obstacle state and the overall quality coefficient.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410905665.0A CN118447485B (en) | 2024-07-08 | 2024-07-08 | Vehicle target recognition system based on edge calculation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118447485A CN118447485A (en) | 2024-08-06 |
CN118447485B true CN118447485B (en) | 2024-10-15 |
Family
ID=92333701
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410905665.0A Active CN118447485B (en) | 2024-07-08 | 2024-07-08 | Vehicle target recognition system based on edge calculation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118447485B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107121979A (en) * | 2016-02-25 | 2017-09-01 | 福特全球技术公司 | Autonomous confidence control |
CN115616557A (en) * | 2021-07-12 | 2023-01-17 | 一汽-大众汽车有限公司 | Vehicle visibility detection method and system |
CN116437052A (en) * | 2023-04-26 | 2023-07-14 | 图为信息科技(深圳)有限公司 | Transmission monitoring method, device and equipment for remotely collecting expressway communication image |
CN118262128A (en) * | 2024-04-11 | 2024-06-28 | 湖南上容信息技术有限公司 | Knowledge graph-based image recognition optimization method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SE541180C2 (en) * | 2017-04-03 | 2019-04-23 | Cargotec Patenter Ab | Driver assistance system for a vehicle provided with a crane using 3D representations |
CN109920246B (en) * | 2019-02-22 | 2022-02-11 | 重庆邮电大学 | Collaborative local path planning method based on V2X communication and binocular vision |
CN113954826B (en) * | 2021-12-16 | 2022-04-05 | 深圳佑驾创新科技有限公司 | Vehicle control method and system for vehicle blind area and vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110765922B (en) | An AGV detection obstacle system with binocular vision objects | |
EP3259734B1 (en) | Glare reduction | |
CN110942449A (en) | Vehicle detection method based on laser and vision fusion | |
CN111563446A (en) | A security early warning and control method for human-computer interaction based on digital twin | |
CN112193252B (en) | Driving risk warning method, device, computing equipment and storage medium | |
EP2960858B1 (en) | Sensor system for determining distance information based on stereoscopic images | |
CN104598915A (en) | Gesture recognition method and gesture recognition device | |
CN110231013A (en) | A kind of Chinese herbaceous peony pedestrian detection based on binocular vision and people's vehicle are apart from acquisition methods | |
CN107886043A (en) | The vehicle front-viewing vehicle and pedestrian anti-collision early warning system and method for visually-perceptible | |
CN114419547B (en) | Vehicle detection method and system based on monocular vision and deep learning | |
CN119323777B (en) | Automatic obstacle avoidance system of automobile based on real-time 3D target detection | |
CN117367438A (en) | Intelligent driving method and system based on binocular vision | |
CN113895439A (en) | A lane-changing decision-making method for autonomous driving based on probabilistic fusion of vehicle-mounted multi-source sensors | |
CN111814667B (en) | Intelligent road condition identification method | |
CN110888441B (en) | Gyroscope-based wheelchair control system | |
CN118447485B (en) | Vehicle target recognition system based on edge calculation | |
JPH11142168A (en) | Environment-recognizing apparatus | |
Llorca et al. | Stereo-based pedestrian detection in crosswalks for pedestrian behavioural modelling assessment | |
Jianguo et al. | Stereo depth estimation based on adaptive stacks from event cameras | |
CN118545081A (en) | Lane departure warning method and system | |
JP7511822B2 (en) | One-way traffic control system | |
CN119503632B (en) | Overhead crane operation control method and overhead crane equipment | |
Kędziora et al. | Active speed and cruise control of the mobile robot based on the analysis of the position of the preceding vehicle | |
WO2021024905A1 (en) | Image processing device, monitoring device, control system, image processing method, computer program, and recording medium | |
RU2746631C1 (en) | Road lane detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||