
CN117935559B - Traffic accident decision system based on multi-mode fusion perception technology - Google Patents

Traffic accident decision system based on multi-mode fusion perception technology

Info

Publication number
CN117935559B
Authority
CN
China
Prior art keywords
pixel point
frame
image
obtaining
traffic accident
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410258229.9A
Other languages
Chinese (zh)
Other versions
CN117935559A (en)
Inventor
周长军
黄慧华
黄刚
王安国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN XINCHUANG ZHONGTIAN INFORMATION TECHNOLOGY DEVELOPMENT CO LTD
Original Assignee
SHENZHEN XINCHUANG ZHONGTIAN INFORMATION TECHNOLOGY DEVELOPMENT CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN XINCHUANG ZHONGTIAN INFORMATION TECHNOLOGY DEVELOPMENT CO LTD filed Critical SHENZHEN XINCHUANG ZHONGTIAN INFORMATION TECHNOLOGY DEVELOPMENT CO LTD
Priority to CN202410258229.9A priority Critical patent/CN117935559B/en
Publication of CN117935559A publication Critical patent/CN117935559A/en
Application granted granted Critical
Publication of CN117935559B publication Critical patent/CN117935559B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/64Analysis of geometric attributes of convexity or concavity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Analytical Chemistry (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of vehicle deformation detection, and in particular to a traffic accident decision system based on a multi-mode fusion perception technology. According to the characteristic value distribution characteristics of the real symmetric matrix of each pixel point in each frame image of the monitoring video, local time variation parameters of each pixel point over all frame images and the average time variation parameter of all frame images are obtained. A spatial variation parameter of each pixel point on each frame image is obtained from the variation trend of the near-circle degree of the other pixel points in the neighborhood ranges of each pixel point in different preset directions, and from these parameters a weight coefficient of each pixel point on each frame image is obtained. Corner detection is performed on each frame image to obtain a monitoring video recognition accident result, which is combined with the audio recognition accident result to decide on the vehicle traffic accident. By obtaining an accurate weight coefficient for each pixel point during corner detection, the invention improves the effect of identifying abnormal deformation.

Description

Traffic accident decision system based on multi-mode fusion perception technology
Technical Field
The invention relates to the technical field of abnormal deformation detection, in particular to a traffic accident decision system based on a multi-mode fusion perception technology.
Background
In the context of a traffic accident decision system, multiple modes may include a surveillance video including image data, an alarm sound of an emergency vehicle, data from an on-board sensor, and the like, and by performing fusion analysis on different types of data, information of different dimensions can be known, so that the nature of a problem can be better understood, the reliability and accuracy of the system can be improved, and a more intelligent decision can be made.
In the prior art, considering that a vehicle may be deformed when a traffic accident occurs, effective information is extracted from the monitoring video by using the Harris corner detection algorithm to obtain corners in different frame images; the corners are input into a neural network to determine the traffic accident judgment result of the vehicle in the video dimension, and this is combined with the recognition result of the audio information to judge whether the vehicle is deformed. However, during corner detection, the edges of parts of the vehicle are deformed by the traffic accident, and for parts whose bending is not severe the detection result is an edge rather than a corner. Because an accurate weight coefficient is not obtained for each pixel point, the corner detection result input into the neural network cannot accurately judge whether the vehicle is deformed, and whether a traffic accident has occurred cannot be effectively identified.
Disclosure of Invention
In order to solve the technical problem that the accident identification effect is poor due to the fact that accurate weight coefficients of each pixel point are not obtained when the angular point detection is carried out, the invention aims to provide a traffic accident decision system based on a multi-mode fusion perception technology, and the adopted technical scheme is as follows:
the invention provides a traffic accident decision system based on a multi-mode fusion perception technology, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the following steps when executing the computer program:
Acquiring multi-mode information of a vehicle traffic accident, wherein the multi-mode information comprises monitoring video and audio;
According to the gradient characteristics of each pixel point on each frame image in the monitoring video, obtaining the characteristic value of the real symmetrical matrix corresponding to each pixel point on each frame image; obtaining local time variation parameters of each pixel point on all frame images and average time variation parameters of all frame images according to the characteristic value distribution characteristics of the real symmetrical matrix of each pixel point in each frame image in the monitoring video;
Obtaining the degree of the near circle of the ellipse corresponding to each pixel point according to the characteristic value of each pixel point; obtaining a spatial variation parameter of each pixel point on each frame image according to the variation trend of the degree of the near circle of each other pixel point in the neighborhood range of different preset directions of each pixel point on each frame image; obtaining a weight coefficient of each pixel point on each frame image according to the spatial variation parameter, the corresponding local time variation parameter and the average time variation parameter of each pixel point on each frame image;
Performing corner detection on each frame of image according to the weight coefficient of each pixel point on each frame of image, and obtaining a monitoring video recognition accident result according to a corner detection result;
according to the sound change characteristics of the vehicle traffic accident audio, an audio recognition accident result is obtained; and deciding the vehicle traffic accident according to the monitoring video recognition accident result and the audio recognition accident result.
Further, the method for acquiring the characteristic value comprises the following steps:
in the process of adopting a Harris corner detection algorithm for each frame of image, acquiring a gradient matrix of each pixel point on each frame of image according to gradient characteristics of each pixel point on each frame of image in different directions; solving a second derivative of each element in a gradient matrix to obtain a second gradient matrix, wherein the second gradient matrix is used as a real symmetrical matrix corresponding to each pixel point on each frame of image; and obtaining a plurality of eigenvalues of the real symmetric matrix.
Further, the method for acquiring the time variation parameter comprises the following steps:
calculating the square of the difference value between the characteristic values of each pixel point on each frame of image to be used as a first change value of each pixel point; and averaging the first variation values of the pixel points at the same position in all the frame images to obtain local time variation parameters of each pixel point on all the frame images.
Further, the method for obtaining the average time variation parameter includes:
On each frame of image, calculating the difference between the characteristic values of each pixel point as a first difference value;
Calculating the sum of the first difference values of all the pixel points to be used as a first accumulated value; normalizing the first accumulated value, and calculating the product of the normalized result and the first variation value to be used as a second variation value of each pixel point on each frame of image; averaging the second variation values of the pixel points at the same position in all the frame images to obtain a second variation average value of each pixel point on all the frame images;
And solving the average value of the second variation average value of all pixel points on all frame images to obtain the average time variation parameters of all frame images.
Further, the method for obtaining the degree of near circle comprises the following steps:
taking the maximum characteristic value of each pixel point as the major axis of the ellipse, and taking the minimum characteristic value of each pixel point as the minor axis of the ellipse; and calculating the ratio of the minor axis to the major axis of the ellipse to obtain the degree of the near circle.
Further, the method for acquiring the spatial variation parameter comprises the following steps:
the preset direction comprises a horizontal direction and a vertical direction;
the spatial variation parameter is obtained according to the following formula:

$$Q_{i,j} = \mathrm{norm}\!\left(a\cdot\frac{1}{n_1}\sum_{v=1}^{n_1} r^{V}_{i,j,v} + (1-a)\cdot\frac{1}{n_2}\sum_{h=1}^{n_2} r^{H}_{i,j,h}\right)$$

wherein $Q_{i,j}$ represents the spatial variation parameter of the $j$-th pixel point on the $i$-th frame image; $a$ represents the preset coefficient; $r^{V}_{i,j,v}$ represents the near-circle degree of the ellipse corresponding to the $v$-th other pixel point in the neighborhood range of the $j$-th pixel point on the $i$-th frame image in the vertical direction; $r^{H}_{i,j,h}$ represents the near-circle degree of the ellipse corresponding to the $h$-th other pixel point in the neighborhood range in the horizontal direction; $n_1$ represents the number of other pixel points in the neighborhood range in the vertical direction; $n_2$ represents the number of other pixel points in the neighborhood range in the horizontal direction; $j$ represents the position sequence number of a pixel point on each frame image; $\mathrm{norm}(\cdot)$ represents the normalization function.
Further, the method for obtaining the weight coefficient comprises the following steps:
Calculating the average value of spatial variation parameters of each pixel point in all frame images to obtain a first weight;
Carrying out negative correlation mapping on local time variation parameters of each pixel point on all frame images, and normalizing a negative correlation mapping result based on average time variation parameters of all frame images to obtain a second weight;
and calculating the product of the first weight and the second weight, and normalizing to obtain the weight coefficient of each pixel point on each frame of image.
Further, the method for acquiring the monitoring video recognition accident result comprises the following steps:
Carrying out Harris corner detection on each frame of image according to the weight coefficient of each pixel point on each frame of image to obtain corner detection results of all frames of images in the monitoring video;
And carrying out neural network identification on the corner detection result to obtain a monitoring video identification accident result.
Further, the method for acquiring the audio recognition accident result comprises the following steps:
And recognizing the audio by adopting a voice recognition algorithm according to the sound change characteristics of the vehicle traffic accident audio to obtain an audio recognition accident result.
Further, the preset coefficient is 0.5.
The invention has the following beneficial effects:
In order to improve accurate judgment and analysis of accidents, the invention acquires multi-mode information of vehicle traffic accidents, wherein the multi-mode information comprises monitoring video and audio; according to the gradient characteristics of each pixel point on each frame image in the monitoring video, the characteristic value of the real symmetrical matrix corresponding to each pixel point on each frame image is obtained, so that the change condition of the pixel points can be better understood, and the local characteristics of the image can be better understood and analyzed; according to the characteristic value distribution characteristics of the real symmetric matrix of each pixel point on each frame image in the monitoring video, obtaining local time variation parameters of each pixel point on all frame images, obtaining average time variation parameters of all frame images, quantifying the time variation condition of each pixel point, and knowing the dynamic variation of the pixel point; according to the variation trend of the degree of the near circle of each pixel point in the neighborhood range of different preset directions on each frame image, the spatial variation parameter of each pixel point on each frame image is obtained, the information of the spatial movement of each pixel point is provided, and the position, the direction and the possible movement mode of the pixel point in the image are known; comprehensively considering time and space factors, further, obtaining a weight coefficient of each pixel point on each frame of image, and more comprehensively evaluating the weight and importance of the pixel points; according to the weight coefficient of each pixel point on each frame of image, carrying out corner detection on each frame of image, more accurately identifying the corner in the image, and improving the accuracy of identifying traffic accidents; obtaining a monitoring video recognition accident result according to the corner detection result; in order to analyze traffic accident situation more comprehensively and accurately, according to the sound change characteristics of the vehicle traffic accident audio, an audio recognition accident result is obtained; and the vehicle traffic accidents are decided, the nature and the severity of the accidents are more accurately estimated, and the accuracy of traffic accident decision is improved. According to the invention, the effect of identifying abnormal deformation is improved by obtaining the accurate weight coefficient of each pixel point during corner detection.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an implementation method of a traffic accident decision system based on a multi-modal fusion awareness technology according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of a traffic accident decision system based on the multi-mode fusion perception technology according to the invention, and the detailed description of the specific implementation, structure, characteristics and effects thereof is as follows. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the traffic accident decision system based on the multi-mode fusion perception technology provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of an implementation method of a traffic accident decision system based on a multi-mode fusion awareness technology according to an embodiment of the present invention is shown, where the specific method includes:
step S1: and acquiring multi-mode information of the vehicle traffic accident, wherein the multi-mode information comprises monitoring video and audio.
In the embodiment of the invention, considering that single data cannot comprehensively and accurately represent the information data of the target object or scene, in order to accurately decide and judge the traffic accident of the vehicle, various perception modes and data types are combined, and richer and more comprehensive information is provided, so that the accuracy of detection and identification is improved. Firstly, multi-mode information of vehicle traffic accidents is acquired, wherein the multi-mode information comprises monitoring video and audio.
The monitoring video can provide multi-angle and omnibearing traffic accident scene images, is beneficial to comprehensively knowing the accident situation and is important for judging accident responsibility, knowing the accident occurrence process and subsequent treatment; the audio information may provide a sound cue of the accident scene, may supplement the video information, and may provide more dimensional information. It should be noted that, in one embodiment of the present invention, a monitoring video of a vehicle traffic accident may be collected by a high-definition camera disposed on a traffic road; the audio of the vehicle traffic accident is collected in real time by installing the audio collecting equipment.
In one embodiment of the invention, because the definition of the different frame images in the monitoring video is inconsistent, preprocessing operation is performed on each frame image in the acquired monitoring video to facilitate the subsequent image processing process, the quality of the image is enhanced, and then the processed image is analyzed. It should be noted that the image preprocessing operation is a technical means well known to those skilled in the art, and may be specifically set according to a specific implementation scenario. In one embodiment of the invention, an equalization algorithm is adopted to process the image, the contrast of the image contents of different frames is amplified, the noise is effectively suppressed, and the image quality is improved. The specific equalization algorithm is a technical means well known to those skilled in the art, and will not be described herein.
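For illustration, a minimal Python/OpenCV sketch of this preprocessing step is given below; the video path, function name and the choice of equalizing only the luminance channel are assumptions for the example, not requirements of the invention:

```python
import cv2

def preprocess_frames(video_path="surveillance.mp4"):
    """Read the monitoring video and equalize each frame's histogram.

    Equalization is applied on the luminance (Y) channel only, so the
    contrast of the frame content is amplified without distorting color.
    """
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
        ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
        frames.append(cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR))
    cap.release()
    return frames
```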
Step S2: according to the gradient characteristics of each pixel point on each frame image in the monitoring video, obtaining the characteristic value of the real symmetrical matrix corresponding to each pixel point on each frame image; and obtaining local time variation parameters of each pixel point on all frame images and average time variation parameters of all frame images according to the characteristic value distribution characteristics of the real symmetrical matrix of each pixel point in each frame image in the monitoring video.
Since the vehicle is deformed after the traffic accident of the vehicle, the edge part is possibly bent, the gradient of the pixel points in the image can be analyzed to effectively express the edge information and the texture characteristics of the image, and whether the edge of the vehicle is deformed or not is detected; the gradient change of the vehicle edge is represented by calculating the obtained characteristic value, so that the data dimension can be reduced, the calculation complexity is reduced, and the local characteristics of the image can be better understood and analyzed. And obtaining the characteristic value of the real symmetrical matrix corresponding to each pixel point on each frame of image according to the gradient characteristic of each pixel point on each frame of image in the monitoring video.
Preferably, in one embodiment of the present invention, the method for acquiring the feature value includes:
In the process of adopting a Harris corner detection algorithm for each frame of image, obtaining a gradient matrix of each pixel point on each frame of image according to gradient characteristics of each pixel point on each frame of image in different directions; solving a second derivative of each element in a gradient matrix to obtain a second gradient matrix, and obtaining a real symmetric matrix corresponding to each pixel point on each frame of image; and obtaining a plurality of eigenvalues of the real symmetric matrix.
It should be noted that, in the embodiment of the present invention, a Sobel operator may be used to calculate a gradient matrix of each pixel point; and the corresponding characteristic value can be obtained by adopting a characteristic polynomial method or a Jacobi iteration method on the real symmetric matrix. The specific Sobel operator and Harris corner detection algorithm are technical means well known to those skilled in the art, and will not be described in detail herein.
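As a sketch of this step, the following Python/NumPy code computes Sobel gradients and the two characteristic values of a Harris-style real symmetric 2×2 matrix at every pixel; treating the windowed gradient-product matrix as the real symmetric matrix is one plausible reading of the description above, and all names are illustrative:

```python
import cv2
import numpy as np

def per_pixel_eigenvalues(gray, ksize=3, window=5):
    """Return two maps (lam1 >= lam2) holding the characteristic values
    of the real symmetric 2x2 matrix [[A, C], [C, B]] at every pixel."""
    gray = np.float64(gray)
    ix = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=ksize)  # horizontal gradient
    iy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=ksize)  # vertical gradient
    a = cv2.boxFilter(ix * ix, -1, (window, window))
    b = cv2.boxFilter(iy * iy, -1, (window, window))
    c = cv2.boxFilter(ix * iy, -1, (window, window))
    # Closed-form eigenvalues of a symmetric 2x2 matrix.
    half_trace = (a + b) / 2.0
    root = np.sqrt(((a - b) / 2.0) ** 2 + c ** 2)
    return half_trace + root, half_trace - root
```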
The characteristic value can be used for analyzing the stability of the region where the pixel point is located, and is helpful for judging whether the pixel point is in deformation or edge condition. The closer the characteristic values change, the smaller the difference between the characteristic values, the smaller the gradient characteristic difference of the pixel points in different directions, the similar gray level change is generated when the image pixel points move, and the smaller the time change parameter is, the more deformation is likely to happen; the larger the difference between the characteristic values is, the larger the difference between the pixel points is, the larger the time variation parameter is, and the more likely the pixel points are close to the edge; the content and the change condition of the monitoring video are better understood by quantifying the change condition of each pixel point along with time and obtaining the average time change parameter of all frame images. And obtaining local time variation parameters of each pixel point in all frame images according to the difference characteristics among the characteristic values of each pixel point in each frame image in the monitoring video, and obtaining average time variation parameters of all frame images.
Preferably, in one embodiment of the present invention, the method for acquiring the time-varying parameter is:
Calculating the square of the difference value between the characteristic values of each pixel point on each frame of image to be used as a first change value of each pixel point; and averaging the first variation values of the pixel points at the same position in all the frame images to obtain local time variation parameters of each pixel point on all the frame images. In one embodiment of the invention, the time-varying parameter is formulated as:
$$S_j = \frac{1}{F}\sum_{i=1}^{F}\left(\lambda^{(1)}_{i,j} - \lambda^{(2)}_{i,j}\right)^2$$

wherein $S_j$ represents the local time variation parameter of the $j$-th pixel point over all frame images; $\lambda^{(1)}_{i,j}$ represents the first characteristic value of the $j$-th pixel point in the $i$-th frame image; $\lambda^{(2)}_{i,j}$ represents the second characteristic value of the $j$-th pixel point in the $i$-th frame image; $F$ represents the number of frame images in the monitoring video; $j$ represents the position sequence number of the pixel point on each frame image.

In this formula, $\left(\lambda^{(1)}_{i,j} - \lambda^{(2)}_{i,j}\right)^2$ represents the first change value of each pixel point; the larger the first change value, the larger the difference between the characteristic values of the pixel point, the larger the gradient difference around the pixel point, and the larger the time variation parameter.
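A short NumPy sketch of this computation, assuming the characteristic values of all frames have been stacked into arrays of shape (F, H, W):

```python
import numpy as np

def local_time_variation(lam1_stack, lam2_stack):
    """First change value (lam1 - lam2)^2 per pixel and frame,
    averaged over all frames at each pixel position."""
    first_change = (lam1_stack - lam2_stack) ** 2
    return first_change.mean(axis=0)  # shape (H, W): one S_j per pixel position
```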
Preferably, in one embodiment of the present invention, the method for acquiring the average time variation parameter includes:
On each frame of image, calculating the difference between the characteristic values of each pixel point as a first difference value; calculating the sum of first difference values of all pixel points to be used as a first accumulated value; normalizing the first accumulated value, and calculating the product of the normalized result and the first change value to be used as a second change value of each pixel point on each frame of image; averaging the second variation values of the pixel points at the same position in all the frame images to obtain a second variation average value of each pixel point on all the frame images; and (3) averaging the second variation average value of all pixel points on all frame images to obtain the average time variation parameters of all frame images. In one embodiment of the invention, the mean time-varying parameter is formulated as:
$$\bar{S} = \frac{1}{N}\sum_{j=1}^{N}\frac{1}{F}\sum_{i=1}^{F}\frac{D_i}{\sum_{f=1}^{F}D_f}\left(\lambda^{(1)}_{i,j} - \lambda^{(2)}_{i,j}\right)^2,\qquad D_i = \sum_{j=1}^{N}\left(\lambda^{(1)}_{i,j} - \lambda^{(2)}_{i,j}\right)$$

wherein $\bar{S}$ represents the average time variation parameter of all frame images in the monitoring video; $D_i$ represents the first accumulated value of the $i$-th frame image, i.e. the sum of the first difference values of all pixel points on that frame; $\lambda^{(1)}_{i,j}$ and $\lambda^{(2)}_{i,j}$ represent the first and second characteristic values of the $j$-th pixel point in the $i$-th frame image; $F$ represents the number of frame images in the monitoring video; $N$ represents the number of pixel points on each frame image; $j$ represents the position sequence number of the pixel point on each frame image.

In this formula, $D_i/\sum_{f}D_f$ normalizes the first accumulated value of the $i$-th frame image against the sum of the first accumulated values of all frame images: the larger the first difference values of the pixel points on a frame image, the larger its first accumulated value, and the larger its contribution to the time variation parameter. When the first change value is larger, the first accumulated value and the average time variation parameter are larger, the difference between the characteristic values is larger, the gradient difference of the pixel point in different directions is larger, the pixel point is more likely to be close to an edge, and the possibility of deformation is smaller.
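The same stacked representation gives a direct sketch of the average time variation parameter (names are illustrative):

```python
import numpy as np

def average_time_variation(lam1_stack, lam2_stack):
    """Average time variation parameter of all frames, following the
    reconstruction above: frame-wise first accumulated values D_i
    weight the squared first change values before the two averages."""
    first_diff = lam1_stack - lam2_stack            # per pixel and frame
    d = first_diff.sum(axis=(1, 2))                 # first accumulated value D_i
    weights = d / d.sum()                           # normalized over all frames
    second_change = weights[:, None, None] * first_diff ** 2
    second_mean = second_change.mean(axis=0)        # per-pixel mean over frames
    return float(second_mean.mean())                # mean over all pixel positions
```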
It should be noted that, in other embodiments of the present invention, the positive-negative correlation and normalization method may be constructed by other basic mathematical operations, and specific means are technical means well known to those skilled in the art, and will not be described herein.
Step S3: obtaining the degree of the near circle of the ellipse corresponding to each pixel point according to the characteristic value of each pixel point; obtaining a spatial variation parameter of each pixel point on each frame image according to the variation trend of the degree of the near circle of each other pixel point in the neighborhood range of different preset directions of each pixel point on each frame image; and obtaining the weight coefficient of each pixel point on each frame image according to the spatial variation parameter, the corresponding local time variation parameter and the average time variation parameter of each pixel point on each frame image.
The characteristic value can reflect the characteristic change condition around the pixel points in the image, and the difference degree of the ellipse in different directions can be obtained by analyzing the difference of the characteristic value, so that the flatness degree of the ellipse is reflected, and the shape and the structure of the object can be accurately evaluated; because the curved object or structure may present a more circular ellipse, the forces and the deformation of the pixel points can be reflected by analyzing the degree of the near circle of the ellipse corresponding to the pixel points, and the possibility of traffic accidents of the vehicle is estimated, so the degree of the near circle of the ellipse corresponding to each pixel point is obtained according to the characteristic value of each pixel point.
Preferably, in one embodiment of the present invention, the method for obtaining the degree of near circle includes:
taking the maximum characteristic value of each pixel point as the major axis of the ellipse, and taking the minimum characteristic value of each pixel point as the minor axis of the ellipse; and calculating the ratio of the shorter axis to the longer axis of the ellipse to obtain the degree of the near circle of the ellipse corresponding to each pixel point. In one embodiment of the invention, the near-circular degree is formulated as:
$$r = \frac{\lambda_{\min}}{\lambda_{\max}}$$

wherein $r$ represents the near-circle degree of the ellipse corresponding to the pixel point; $\lambda_{\min}$ represents the minimum characteristic value of the pixel point; $\lambda_{\max}$ represents the maximum characteristic value of the pixel point.
In the formula of the degree of the near circle, the closer the value of the degree of the near circle is to 100%, the closer the major axis and the minor axis of the ellipse are, the closer the characteristic values of the corresponding pixel points are, the closer the gradient change of the direction of the pixel points is, and the closer the pixel points are to the bending region, the more deformation is likely to happen.
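In code the near-circle degree is a one-line ratio per pixel; the small epsilon below is an assumption added to guard against a zero maximum characteristic value:

```python
import numpy as np

def near_circle_degree(lam1, lam2, eps=1e-12):
    """Ratio of the smaller to the larger characteristic value per pixel;
    values near 1 (100%) indicate a near-circular ellipse."""
    return np.minimum(lam1, lam2) / (np.maximum(lam1, lam2) + eps)
```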
The degree of the near circles of the pixel points can help to know the position, the direction and the possible motion mode of the pixel points in the image, and the spatial variation of the pixel points can be known; since deformation may cause an edge direction change, if the degree of rounding of a pixel point changes between consecutive frames, this means that the pixel point is moving or the surrounding pixel structure is changing, i.e. deformation may occur, and the spatial variation parameter is larger. The video content and detection of moving objects is better understood by analyzing the spatial variation of each pixel point on each frame of image. And therefore, according to the variation trend of the degree of the near circle of each pixel point on each frame image in the neighborhood range of different preset directions, the spatial variation parameter of each pixel point on each frame image is obtained.
Preferably, in one embodiment of the present invention, the method for acquiring the spatial variation parameter includes:
in order to describe the change condition of the pixel points in space, the preset direction comprises a horizontal direction and a vertical direction;
the spatial variation parameter is obtained according to the following formula:

$$Q_{i,j} = \mathrm{norm}\!\left(a\cdot\frac{1}{n_1}\sum_{v=1}^{n_1} r^{V}_{i,j,v} + (1-a)\cdot\frac{1}{n_2}\sum_{h=1}^{n_2} r^{H}_{i,j,h}\right)$$

wherein $Q_{i,j}$ represents the spatial variation parameter of the $j$-th pixel point on the $i$-th frame image; $a$ represents the preset coefficient; $r^{V}_{i,j,v}$ represents the near-circle degree of the ellipse corresponding to the $v$-th other pixel point in the neighborhood range of the $j$-th pixel point on the $i$-th frame image in the vertical direction; $r^{H}_{i,j,h}$ represents the near-circle degree of the ellipse corresponding to the $h$-th other pixel point in the neighborhood range in the horizontal direction; $n_1$ and $n_2$ represent the numbers of other pixel points in the neighborhood ranges in the vertical and horizontal directions; $j$ represents the position sequence number of a pixel point on each frame image; $\mathrm{norm}(\cdot)$ represents the normalization function.

In this formula, $\frac{1}{n_1}\sum_{v=1}^{n_1} r^{V}_{i,j,v}$ is the mean near-circle degree of the ellipses corresponding to all other pixel points in the vertical-direction neighborhood range of the $j$-th pixel point on the $i$-th frame image, and $\frac{1}{n_2}\sum_{h=1}^{n_2} r^{H}_{i,j,h}$ is the corresponding mean in the horizontal direction. The larger these means, the closer the spatial variation parameter is to 1, indicating that the pixel point changes consistently in different directions and that the major and minor axes of its ellipse, i.e. its characteristic values, are close, so the pixel point is more likely to belong to a deformed part of the vehicle.
It should be noted that, in one embodiment of the present invention, the preset coefficient is 0.5; the neighborhood range in each preset direction is formed by a number of other pixel points centered on each pixel point along that direction, and the number of other pixel points is 20. In other embodiments of the present invention, the preset coefficient and the size of the neighborhood range may be set according to the specific situation, which is not limited or described in detail herein.
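A sketch of the spatial variation parameter under the above settings (20 neighbors per direction, preset coefficient a = 0.5); min-max normalization is used here as an assumed realization of the normalization function:

```python
import numpy as np

def spatial_variation(r, n=20, a=0.5):
    """r: (H, W) near-circle-degree map of one frame. For each pixel,
    average r over n neighbors along the vertical and the horizontal
    direction, combine the two means with the preset coefficient a,
    and normalize the result to [0, 1]."""
    h, w = r.shape
    pad = n // 2
    padded = np.pad(r, pad, mode="edge")
    vert = np.zeros_like(r)
    horiz = np.zeros_like(r)
    for k in range(-pad, pad + 1):
        if k == 0:
            continue  # skip the center pixel itself
        vert += padded[pad + k:pad + k + h, pad:pad + w]    # rows above/below
        horiz += padded[pad:pad + h, pad + k:pad + k + w]   # columns left/right
    vert /= 2 * pad
    horiz /= 2 * pad
    q = a * vert + (1.0 - a) * horiz
    return (q - q.min()) / (q.max() - q.min() + 1e-12)
```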
When a traffic accident occurs, the edges of parts of the vehicle are deformed, but parts whose bending is less severe still mainly exhibit edge characteristics, which makes the judgment of a vehicle traffic accident inaccurate; to increase the sensitivity to such slightly bent edges, different degrees of enhancement are applied by selecting appropriate weight coefficients. Combining the time variation parameter and the spatial variation parameter yields a more accurate and reliable pixel point weight coefficient: the larger the time variation parameter, the larger the variation trend of the pixel point across frames and the larger the difference between its characteristic values, the more likely the pixel point is an edge, and the smaller its weight coefficient; the smaller the time variation parameter, the smaller the variation trend of the pixel point's characteristic values across frame images, the more likely the pixel point is close to a bent part, and the larger its weight coefficient; the larger the spatial variation parameter, the larger the near-circle degree and the more similar the characteristic values, the more likely the pixel point belongs to a bent part, and the larger its weight coefficient. The weight coefficient of each pixel point on each frame image is obtained from the spatial variation parameter, the corresponding local time variation parameter and the average time variation parameter of each pixel point on each frame image.
Preferably, in one embodiment of the present invention, the method for acquiring the weight coefficient includes:
calculating the average value of spatial variation parameters of each pixel point in all frame images to obtain a first weight; carrying out negative correlation mapping on local time variation parameters of each pixel point on all frame images, and normalizing a negative correlation mapping result based on average time variation parameters of all frame images to obtain a second weight; and calculating the product of the first weight and the second weight, and normalizing to obtain the weight coefficient of each pixel point on each frame of image.
In one embodiment of the invention, the formula for the weight coefficients is:
$$W_j = \mathrm{norm}\!\left(\left(\frac{1}{F}\sum_{i=1}^{F} Q_{i,j}\right)\cdot \exp\!\left(-\frac{S_j}{\bar{S}}\right)\right)$$

wherein $W_j$ represents the weight coefficient of the $j$-th pixel point on each frame image; $Q_{i,j}$ represents the spatial variation parameter of the $j$-th pixel point on the $i$-th frame image; $\bar{S}$ represents the average time variation parameter of all frame images in the monitoring video; $S_j$ represents the local time variation parameter of the $j$-th pixel point over all frame images; $F$ represents the number of frame images in the monitoring video; $j$ represents the position sequence number of a pixel point on each frame image; $\mathrm{norm}(\cdot)$ represents the normalization function.

In this formula, $\frac{1}{F}\sum_{i=1}^{F} Q_{i,j}$ represents the first weight: the larger the spatial variation parameters of a pixel point across the frame images, the larger the first weight and the larger the weight coefficient. $\exp\!\left(-S_j/\bar{S}\right)$ represents the second weight, i.e. the negative correlation mapping of the local time variation parameter normalized by the average time variation parameter: the larger the time variation parameter of a pixel point across the frame images, the smaller the second weight, the larger the change of the characteristic values over time, the closer the pixel point is to an edge, and the smaller the weight coefficient.
It should be noted that, in other embodiments of the present invention, the positive-negative correlation and normalization method may be constructed by other basic mathematical operations, and specific means are technical means well known to those skilled in the art, and will not be described herein.
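Combining the pieces, a sketch of the weight coefficient computation (with the negative-exponential mapping assumed above):

```python
import numpy as np

def weight_coefficients(q_stack, s_local, s_avg):
    """q_stack: (F, H, W) spatial variation parameters; s_local: (H, W)
    local time variation parameters; s_avg: scalar average time
    variation parameter. Returns the normalized (H, W) weight map."""
    w1 = q_stack.mean(axis=0)                  # first weight
    w2 = np.exp(-s_local / (s_avg + 1e-12))    # second weight (negative mapping)
    w = w1 * w2
    return (w - w.min()) / (w.max() - w.min() + 1e-12)
```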
Step S4: and carrying out corner detection on each frame of image according to the weight coefficient of each pixel point on each frame of image, and obtaining a monitoring video recognition accident result according to the corner detection result.
Because vehicles with traffic accidents have more deformation areas, the pixels of the bending part show more corner features, corners are easy to generate, and the distribution of the corners is more dense; the angular point distribution of the vehicles without traffic accidents is the same as that of the normal vehicles, and the angular points in the images are analyzed; the proper weight coefficient is obtained, so that inaccurate or missing corner detection results caused by lower bending degree can be avoided, the subsequent judgment effect is improved, and whether traffic accidents occur or not can be effectively identified. And therefore, the corner detection is carried out on each frame of image according to the weight coefficient of each pixel point on each frame of image, and the monitoring video recognition accident result is obtained according to the corner detection result.
Preferably, in one embodiment of the present invention, the method for acquiring the surveillance video recognition accident result includes:
Carrying out Harris corner detection on each frame of image according to the weight coefficient of each pixel point on each frame of image to obtain corner detection results of all frames of images in the monitoring video; and carrying out neural network identification on the corner detection result to obtain a monitoring video identification accident result.
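The patent does not spell out exactly how the weight coefficients enter the Harris detector; one plausible sketch modulates the standard Harris response map with the weights before thresholding:

```python
import cv2
import numpy as np

def weighted_harris_corners(gray, weights, k=0.04, thresh_ratio=0.01):
    """Weight-modulated Harris corner detection (illustrative reading:
    the per-pixel weight coefficients scale the Harris response)."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=5, ksize=3, k=k)
    response = response * weights
    return np.argwhere(response > thresh_ratio * response.max())  # (row, col) pairs
```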
It should be noted that, in one embodiment of the present invention, the neural network identification process is:
After converting the data obtained from Harris corner detection into a format suitable for processing by a neural network, the neural network is trained with a labeled data set, i.e. images of both normal vehicles and traffic accident vehicles, so that the network learns the corner features that distinguish normal vehicles from traffic accident vehicles; the network learns to identify corner patterns related to traffic accidents, such as the corner points of damaged parts of a vehicle, judges the vehicle traffic accident, and the training of the neural network is completed. The trained neural network is then applied to new image data to identify whether corner features related to a vehicle traffic accident exist; if so, the network recognition accident result is that an accident has occurred. The specific neural network and Harris corner detection algorithm are technical means well known to those skilled in the art, and are not described herein.
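As an assumption-level sketch of such a classifier, corner detections can be summarized per frame (for example as a grid of corner densities) and fed to a small network; the feature choice and the scikit-learn model below are illustrative, not the claimed design:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def corner_density_features(corners, shape, grid=8):
    """Summarize one frame's (row, col) corner detections as a
    normalized grid-of-densities feature vector."""
    hist = np.zeros((grid, grid))
    for r, c in corners:
        hist[int(r * grid / shape[0]), int(c * grid / shape[1])] += 1
    return hist.ravel() / max(len(corners), 1)

# Illustrative training on labeled frames (0 = normal, 1 = accident):
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
# clf.fit(np.stack(feature_vectors), labels)
# accident = clf.predict([corner_density_features(corners, frame.shape[:2])])[0]
```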
Step S5: according to the sound change characteristics of the vehicle traffic accident audio, an audio recognition accident result is obtained; and deciding the vehicle traffic accident according to the monitoring video recognition accident result and the audio recognition accident result.
In order to analyze traffic accident situation more comprehensively and accurately, the audio of the vehicle accident can be analyzed, and for the accidents such as vehicle collision and the like, the audio characteristics related to the structural deformation of the vehicle, such as impact sound and structural vibration sound, can be focused to reflect the dynamic behavior and possible deformation of the vehicle in the accident; and obtaining an audio recognition accident result according to the sound change characteristics of the vehicle traffic accident audio.
Preferably, in one embodiment of the present invention, the method for acquiring the audio recognition accident result includes:
And recognizing the audio by adopting a voice recognition algorithm according to the sound change characteristics of the vehicle traffic accident audio to obtain an audio recognition accident result.
It should be noted that, in one embodiment of the present invention, the voice recognition algorithm may use a dynamic time warping (DTW) algorithm to recognize the audio information: the audio of a normal vehicle is selected as a reference, the relative distance between the vehicle traffic accident audio and the normal vehicle audio is calculated by dynamic time warping to judge whether the vehicle is deformed, and if the calculated relative distance is significantly higher than the normal range, implementing personnel combine the actual situation and experience to obtain the audio recognition accident result and judge whether an accident has occurred. The specific dynamic time warping algorithm is a technical means well known to those skilled in the art, and is not described herein.
The audio and monitoring video information is combined, so that a more comprehensive traffic accident situation can be provided, the nature and the severity of the accident can be estimated more accurately, and the accuracy of traffic accident decision can be improved; and deciding the vehicle traffic accident according to the monitoring video recognition accident result and the audio recognition accident result.
In the embodiment of the invention, if both the audio recognition accident result and the network recognition accident result indicate an accident, the accident decision reports that an accident has occurred. After the accident decision report is obtained, whether an alarm operation is needed can be further judged: if the accident decision report is that an accident has occurred, alarm processing is carried out; otherwise, no alarm is required.
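The decision and alarm logic above reduces to an AND-fusion of the two recognition results, sketched here for completeness:

```python
def decide_and_alarm(video_accident: bool, audio_accident: bool) -> str:
    """Report an accident (and trigger the alarm) only when the video
    and audio recognition results both indicate an accident."""
    if video_accident and audio_accident:
        return "accident occurred: alarm"
    return "no accident: no alarm"
```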
In summary, the invention obtains the eigenvalue of the real symmetric matrix corresponding to each pixel point; according to the characteristic value distribution characteristics of the real symmetric matrix of each pixel point on each frame image in the monitoring video, obtaining local time variation parameters of each pixel point in all frame images, and obtaining average time variation parameters of all frame images; obtaining a spatial variation parameter of each pixel point on each frame image according to the variation trend of the degree of the near circle of each other pixel point in the neighborhood range of different preset directions of each pixel point on each frame image; further obtaining a weight coefficient of each pixel point on each frame of image; performing corner detection on each frame of image to obtain a monitoring video recognition accident result; combining the audio frequency to identify accident results; and making a decision on the traffic accident of the vehicle. According to the invention, the effect of identifying abnormal deformation is improved by obtaining the accurate weight coefficient of each pixel point during corner detection.
An embodiment of a traffic accident video recognition method comprises the following steps:
In the prior art, considering that a vehicle may be deformed when a traffic accident occurs, effective information is extracted from the monitoring video by using the Harris corner detection algorithm to obtain corners in different frame images, and the corners are input into a neural network to determine the traffic accident judgment result of the vehicle in the video dimension. However, during corner detection, the edges of parts of the vehicle are deformed by the traffic accident, and for parts whose bending is not severe the detection result is an edge rather than a corner; because an accurate weight coefficient is not obtained for each pixel point, the corner detection result input into the neural network cannot accurately judge whether the vehicle is deformed, and whether a traffic accident has occurred cannot be effectively identified. In order to solve this technical problem, the embodiment provides a traffic accident video recognition method, which comprises the following steps:
Step S1: and acquiring a monitoring video of the vehicle traffic accident.
In the embodiment of the invention, the monitoring video can provide multi-angle and all-directional traffic accident scene images, is beneficial to comprehensively knowing the accident situation and is important to judging the accident responsibility, knowing the accident occurrence process and subsequent treatment. Therefore, the monitoring video of the vehicle traffic accident is collected through the high-definition camera arranged on the traffic road.
In one embodiment of the invention, because the definition of the different frame images in the monitoring video is inconsistent, preprocessing operation is performed on each frame image in the acquired monitoring video to facilitate the subsequent image processing process, the quality of the image is enhanced, and then the processed image is analyzed. It should be noted that the image preprocessing operation is a technical means well known to those skilled in the art, and may be specifically set according to a specific implementation scenario. In one embodiment of the invention, an equalization algorithm is adopted to process the image, the contrast of the image contents of different frames is amplified, the noise is effectively suppressed, and the image quality is improved. The specific equalization algorithm is a technical means well known to those skilled in the art, and will not be described herein.
Step S2: according to the gradient characteristics of each pixel point on each frame image in the monitoring video, obtaining the characteristic value of the real symmetrical matrix corresponding to each pixel point on each frame image; and obtaining local time variation parameters of each pixel point on all frame images and average time variation parameters of all frame images according to the characteristic value distribution characteristics of the real symmetrical matrix of each pixel point in each frame image in the monitoring video.
Step S3: obtaining the degree of the near circle of the ellipse corresponding to each pixel point according to the characteristic value of each pixel point; obtaining a spatial variation parameter of each pixel point on each frame image according to the variation trend of the degree of the near circle of each other pixel point in the neighborhood range of different preset directions of each pixel point on each frame image; and obtaining the weight coefficient of each pixel point on each frame image according to the spatial variation parameter, the corresponding local time variation parameter and the average time variation parameter of each pixel point on each frame image.
Step S4: and carrying out corner detection on each frame of image according to the weight coefficient of each pixel point on each frame of image, and obtaining a monitoring video recognition accident result according to the corner detection result.
Because the specific implementation process of steps S2-S4 is already described in detail in the traffic accident decision system based on the multi-mode fusion perception technology, the detailed description is omitted.
The technical effects of this embodiment are:
The method obtains the characteristic value of the real symmetrical matrix corresponding to each pixel point; according to the characteristic value distribution characteristics of the real symmetric matrix of each pixel point on each frame image in the monitoring video, obtaining local time variation parameters of each pixel point in all frame images, and obtaining average time variation parameters of all frame images; obtaining a spatial variation parameter of each pixel point on each frame image according to the variation trend of the degree of the near circle of each other pixel point in the neighborhood range of different preset directions of each pixel point on each frame image; further obtaining a weight coefficient of each pixel point on each frame of image; and (5) carrying out corner detection on each frame of image to obtain the monitoring video recognition accident result. According to the invention, the accurate weight coefficient of each pixel point in the corner detection is obtained, so that the efficiency and accuracy of monitoring video identification of the vehicle traffic accident are improved.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (9)

1. A traffic accident decision system based on a multi-mode fusion perception technology, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the following steps:
Acquiring multi-mode information of a vehicle traffic accident, wherein the multi-mode information comprises monitoring video and audio;
according to the gradient characteristics of each pixel point on each frame image in the monitoring video, obtaining the eigenvalues of the real symmetric matrix corresponding to each pixel point on each frame image; according to the eigenvalue distribution characteristics of the real symmetric matrix of each pixel point in each frame image of the monitoring video, obtaining the local time variation parameter of each pixel point over all frame images and the average time variation parameter of all frame images;
obtaining the near-circle degree of the ellipse corresponding to each pixel point according to the pixel point's eigenvalues; obtaining the spatial variation parameter of each pixel point on each frame image according to the variation trend of the near-circle degrees of the other pixel points within the neighborhood ranges of different preset directions of that pixel point; obtaining the weight coefficient of each pixel point on each frame image according to the pixel point's spatial variation parameter, the corresponding local time variation parameter, and the average time variation parameter;
performing corner detection on each frame image according to the weight coefficient of each pixel point on the frame image, and obtaining a monitoring video recognition accident result according to the corner detection results;
obtaining an audio recognition accident result according to the sound change characteristics of the vehicle traffic accident audio; and deciding the vehicle traffic accident according to the monitoring video recognition accident result and the audio recognition accident result;
the weight coefficient acquisition method comprises the following steps:
Calculating the average value of spatial variation parameters of each pixel point in all frame images to obtain a first weight;
performing negative correlation mapping on the local time variation parameters of each pixel point over all frame images, and normalizing the negative correlation mapping result based on the average time variation parameter of all frame images to obtain a second weight;
and calculating the product of the first weight and the second weight and normalizing it to obtain the weight coefficient of each pixel point on each frame image.
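Purely as an illustration of the three-step weighting above, the following Python sketch computes the weight coefficients under stated assumptions: exp(-x) is used as the negative correlation mapping and division by the average time variation parameter as its normalization, neither of which is fixed by the claim; array shapes and names are hypothetical.

```python
import numpy as np

def weight_coefficients(S, T_local, T_avg):
    """S: (F, H, W) spatial variation parameters for F frames;
    T_local: (H, W) local time variation per pixel; T_avg: scalar."""
    w1 = S.mean(axis=0)                      # first weight: mean spatial variation over all frames
    w2 = np.exp(-T_local) / (T_avg + 1e-8)   # negative correlation mapping, normalized by T_avg (assumed forms)
    w = w1 * w2                              # product of the two weights
    return w / (w.max() + 1e-8)              # final normalization to obtain the weight coefficients
```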
2. The traffic accident decision system based on the multi-mode fusion perception technology according to claim 1, wherein the method for obtaining the eigenvalues comprises:
in the process of applying a Harris corner detection algorithm to each frame image, acquiring a gradient matrix of each pixel point on each frame image according to the gradient characteristics of the pixel point in different directions; taking the second derivative of each element in the gradient matrix to obtain a second-order gradient matrix, which serves as the real symmetric matrix corresponding to each pixel point on each frame image; and obtaining a plurality of eigenvalues of the real symmetric matrix.
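A hedged Python sketch of this eigenvalue computation follows. Reading "second derivative of each element" as a Laplacian applied to each entry of the Harris-style gradient matrix is an interpretation, not a statement of the patented method; names and kernel sizes are illustrative.

```python
import cv2
import numpy as np

def pixel_eigenvalues(gray):
    """Eigenvalues of a per-pixel real symmetric matrix built from gradients."""
    gray = gray.astype(np.float64)
    Ix = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient
    Iy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient
    A, B, C = Ix * Ix, Iy * Iy, Ix * Iy               # Harris gradient-matrix entries
    # "second derivative of each element" read as a per-entry Laplacian (assumption)
    A2, B2, C2 = (cv2.Laplacian(m, cv2.CV_64F) for m in (A, B, C))
    M = np.stack([np.stack([A2, C2], -1),
                  np.stack([C2, B2], -1)], -2)        # (H, W, 2, 2) symmetric matrices
    return np.linalg.eigvalsh(M)                      # (H, W, 2) eigenvalues, ascending
```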
3. The traffic accident decision system based on the multi-mode fusion perception technology according to claim 1, wherein the method for obtaining the local time variation parameter comprises:
calculating the square of the difference between the eigenvalues of each pixel point on each frame image as the first variation value of the pixel point; and averaging the first variation values of the pixel points at the same position in all frame images to obtain the local time variation parameter of each pixel point over all frame images.
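This computation maps directly to a few NumPy lines; the sketch below assumes the eigenvalue pairs of claim 2 stacked over frames, with hypothetical array names.

```python
import numpy as np

def local_time_variation(eigvals):
    """eigvals: (F, H, W, 2) eigenvalue pairs for F frames.

    First variation value = squared difference of the two eigenvalues;
    the per-pixel mean over frames is the local time variation parameter.
    """
    first_variation = (eigvals[..., 1] - eigvals[..., 0]) ** 2  # (F, H, W)
    return first_variation.mean(axis=0)                         # (H, W)
```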
4. The traffic accident decision system based on the multi-mode fusion perception technology according to claim 3, wherein the method for obtaining the average time variation parameter comprises:
on each frame image, calculating the difference between the eigenvalues of each pixel point as a first difference value;
calculating the sum of the first difference values of all pixel points as a first accumulated value; normalizing the first accumulated value, and calculating the product of the normalized result and the first variation value as the second variation value of each pixel point on each frame image; averaging the second variation values of the pixel points at the same position in all frame images to obtain the second variation average value of each pixel point over all frame images;
and averaging the second variation average values of all pixel points over all frame images to obtain the average time variation parameter of all frame images.
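A sketch of claim 4 under one reading: the normalization of the first accumulated value is taken as division by its maximum absolute value over frames, which the claim does not specify; inputs reuse the hypothetical arrays above.

```python
import numpy as np

def average_time_variation(eigvals, first_variation):
    """eigvals: (F, H, W, 2); first_variation: (F, H, W) from claim 3."""
    first_diff = eigvals[..., 1] - eigvals[..., 0]            # per-pixel eigenvalue difference
    accum = first_diff.sum(axis=(1, 2), keepdims=True)        # first accumulated value, per frame
    norm_accum = accum / (np.abs(accum).max() + 1e-8)         # normalization (assumed form)
    second_variation = norm_accum * first_variation           # second variation value, per pixel and frame
    second_mean = second_variation.mean(axis=0)               # second variation average, per pixel
    return float(second_mean.mean())                          # average time variation parameter (scalar)
```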
5. The traffic accident decision system based on the multi-mode fusion perception technology according to claim 1, wherein the method for obtaining the near-circle degree comprises:
taking the maximum eigenvalue of each pixel point as the major axis of an ellipse and the minimum eigenvalue as the minor axis; and calculating the ratio of the major axis to the minor axis of the ellipse to obtain the near-circle degree.
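The ratio in claim 5 is a one-liner; the sketch assumes ascending eigenvalue pairs as returned by NumPy's eigvalsh, with a small constant guarding against division by zero.

```python
import numpy as np

def near_circle_degree(eigvals):
    """eigvals: (..., 2) ascending eigenvalue pairs per pixel."""
    minor, major = eigvals[..., 0], eigvals[..., 1]  # minor and major ellipse axes
    return major / (np.abs(minor) + 1e-8)            # ratio of major to minor axis
```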
6. The traffic accident decision system based on the multi-mode fusion perception technology according to claim 1, wherein the method for obtaining the spatial variation parameter comprises:
the preset directions comprise a horizontal direction and a vertical direction;
obtaining the spatial variation parameter according to the following formula:

$$S_{i,j} = \mathrm{Norm}\!\left( \alpha \sum_{b=1}^{n_v - 1} \left| R^{v}_{i,j,b+1} - R^{v}_{i,j,b} \right| + (1 - \alpha) \sum_{a=1}^{n_h - 1} \left| R^{h}_{i,j,a+1} - R^{h}_{i,j,a} \right| \right)$$

wherein $S_{i,j}$ represents the spatial variation parameter of the $j$-th pixel point on the $i$-th frame image; $\alpha$ represents a preset coefficient; $R^{v}_{i,j,b}$ represents the near-circle degree of the ellipse corresponding to the $b$-th other pixel point within the neighborhood range of the $j$-th pixel point in the vertical direction; $R^{h}_{i,j,a}$ represents the near-circle degree of the ellipse corresponding to the $a$-th other pixel point within the neighborhood range of the $j$-th pixel point in the horizontal direction; $n_v$ and $n_h$ represent the numbers of other pixel points within the neighborhood ranges in the vertical and horizontal directions, respectively; $j$ represents the position sequence number of a pixel point on each frame image; and $\mathrm{Norm}(\cdot)$ represents the normalization function.
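The sketch below follows the formula as reconstructed above, so its structure (absolute differences of consecutive near-circle degrees along each direction, blended by the preset coefficient) should be read as one plausible interpretation; the neighborhood sizes, tanh as the normalization function, and all names are assumptions.

```python
import numpy as np

def spatial_variation(R, row, col, n_v=5, n_h=5, alpha=0.5):
    """R: (H, W) near-circle degrees of one frame; (row, col) indexes pixel j."""
    vertical = R[row + 1 : row + 1 + n_v, col]    # other pixels below, vertical direction
    horizontal = R[row, col + 1 : col + 1 + n_h]  # other pixels to the right, horizontal direction
    v_trend = np.abs(np.diff(vertical)).sum()     # variation trend, vertical
    h_trend = np.abs(np.diff(horizontal)).sum()   # variation trend, horizontal
    return float(np.tanh(alpha * v_trend + (1 - alpha) * h_trend))  # Norm(.) stand-in
```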
7. The traffic accident decision system based on the multi-mode fusion perception technology according to claim 1, wherein the method for acquiring the monitoring video recognition accident result comprises the following steps:
carrying out Harris corner detection on each frame image according to the weight coefficient of each pixel point on the frame image to obtain corner detection results for all frame images in the monitoring video;
and performing neural network recognition on the corner detection results to obtain the monitoring video recognition accident result.
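One simple way to realize "corner detection according to the weight coefficient" is to modulate the Harris response map by the weights before thresholding, as in the sketch below; this weighting scheme and the threshold are illustrative choices, not the claimed method itself.

```python
import cv2
import numpy as np

def weighted_harris_corners(gray, weights, k=0.04, thresh_ratio=0.01):
    """gray: (H, W) frame; weights: (H, W) per-pixel weight coefficients."""
    resp = cv2.cornerHarris(gray.astype(np.float32), blockSize=2, ksize=3, k=k)
    resp = resp * weights                        # modulate response by the weight coefficients
    return resp > thresh_ratio * resp.max()      # boolean corner mask for later recognition
```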
8. The traffic accident decision system based on the multi-mode fusion perception technology according to claim 1, wherein the method for obtaining the audio recognition accident result comprises:
recognizing the audio by adopting a voice recognition algorithm according to the sound change characteristics of the vehicle traffic accident audio to obtain the audio recognition accident result.
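The claim does not disclose the recognizer, so the following is only a crude, hypothetical proxy for "sound change characteristics": frame-to-frame jumps in short-time energy, which tend to be large for crash-like sounds.

```python
import numpy as np

def sound_change_score(signal, sample_rate, win_ms=25):
    """signal: 1-D audio samples; returns the largest energy jump between windows."""
    win = max(1, int(sample_rate * win_ms / 1000))
    n = len(signal) // win
    frames = np.asarray(signal[: n * win], dtype=np.float64).reshape(n, win)
    energy = (frames ** 2).mean(axis=1)          # short-time energy per window
    return float(np.abs(np.diff(energy)).max())  # abrupt change suggests an accident sound
```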
9. The traffic accident decision system based on the multi-mode fusion perception technology according to claim 6, wherein the preset coefficient is 0.5.
CN202410258229.9A 2024-03-07 2024-03-07 Traffic accident decision system based on multi-mode fusion perception technology Active CN117935559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410258229.9A CN117935559B (en) 2024-03-07 2024-03-07 Traffic accident decision system based on multi-mode fusion perception technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410258229.9A CN117935559B (en) 2024-03-07 2024-03-07 Traffic accident decision system based on multi-mode fusion perception technology

Publications (2)

Publication Number Publication Date
CN117935559A CN117935559A (en) 2024-04-26
CN117935559B true CN117935559B (en) 2024-05-24

Family

ID=90750887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410258229.9A Active CN117935559B (en) 2024-03-07 2024-03-07 Traffic accident decision system based on multi-mode fusion perception technology

Country Status (1)

Country Link
CN (1) CN117935559B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118657645B (en) * 2024-06-17 2025-03-18 中国人民解放军军事科学院系统工程研究院 A method and device for evaluating a security system based on a standard model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5774569A (en) * 1994-07-25 1998-06-30 Waldenmaier; H. Eugene W. Surveillance system
JP2008294740A (en) * 2007-05-24 2008-12-04 Denso Corp Roadside machine for vehicle communication system
CN101458871A (en) * 2008-12-25 2009-06-17 北京中星微电子有限公司 Intelligent traffic analysis system and application system thereof
CN101751782A (en) * 2009-12-30 2010-06-23 北京大学深圳研究生院 Crossroad traffic event automatic detection system based on multi-source information fusion
CN105761547A (en) * 2016-03-28 2016-07-13 安徽云森物联网科技有限公司 Traffic collision pre-warning technique and system based on images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9076045B2 (en) * 2009-10-07 2015-07-07 Alon Atsmon Automatic content analysis method and system
US20180096595A1 (en) * 2016-10-04 2018-04-05 Street Simplified, LLC Traffic Control Systems and Methods

Also Published As

Publication number Publication date
CN117935559A (en) 2024-04-26

Similar Documents

Publication Publication Date Title
US6961466B2 (en) Method and apparatus for object recognition
KR101903127B1 (en) Gaze estimation method and apparatus
US12131485B2 (en) Object tracking device and object tracking method
CN108596087B (en) Driving fatigue degree detection regression model based on double-network result
CN117935559B (en) Traffic accident decision system based on multi-mode fusion perception technology
CN112261390B (en) Vehicle-mounted camera equipment and image optimization device and method thereof
CN104182983B (en) Highway monitoring video definition detection method based on corner features
CN117853484B (en) Intelligent bridge damage monitoring method and system based on vision
CN116883763B (en) Deep learning-based automobile part defect detection method and system
CN117115926B (en) Human body action standard judging method and device based on real-time image processing
CN115171218A (en) Material sample feeding abnormal behavior recognition system based on image recognition technology
CN116823673B (en) Visual perception method of passenger status in high-speed elevator car based on image processing
CN118736511A (en) Truck loading and unloading anomaly detection method and system based on image processing technology
CN101320477B (en) Human body tracing method and equipment thereof
CN118096579A (en) 3D printing lattice structure defect detection method
CN117745709A (en) Railway foreign matter intrusion detection method, system, equipment and medium
CN112927223A (en) Glass curtain wall detection method based on infrared thermal imager
CN116152758A (en) Intelligent real-time accident detection and vehicle tracking method
CN110348307B (en) Path edge identification method and system for crane metal structure climbing robot
CN118571493B (en) Patient limb function rehabilitation evaluation method
JP4994955B2 (en) Mobile object identification device and mobile object identification program
CN115345821A (en) Steel coil binding belt loosening abnormity detection and quantification method based on active visual imaging
CN119091193A (en) A real-time image recognition scratch detection system and method based on deformation network
CN117523428B (en) Ground target detection method and device based on aircraft platform
CN116612390B (en) Information management system for constructional engineering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant