
CN117133122A - Traffic situation awareness prediction method and system based on multi-mode traffic big model - Google Patents


Info

Publication number
CN117133122A
CN117133122A
Authority
CN
China
Prior art keywords
traffic, model, data, trained, vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311083594.2A
Other languages
Chinese (zh)
Inventor
闫军 (Yan Jun)
丁丽珠 (Ding Lizhu)
王艳清 (Wang Yanqing)
Current Assignee
Smart Intercommunication Technology Co ltd
Original Assignee
Smart Intercommunication Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Smart Intercommunication Technology Co ltd
Priority application: CN202311083594.2A
Publication: CN117133122A
Legal status: Pending

Classifications

    • G08G1/0104 Measuring and analysing of parameters relative to traffic conditions
    • G06V10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/811 Fusion of classification results from classifiers operating on different input data, e.g. multi-modal recognition
    • G08G1/0125 Traffic data processing
    • G08G1/0175 Identifying vehicles by photographing them, e.g. when violating traffic rules
    • G08G1/04 Detecting movement of traffic using optical or ultrasonic detectors
    • G06V2201/07 Target detection
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Chemical & Material Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a traffic situation awareness prediction method and system based on a multi-mode traffic big model, relating to the technical field of artificial intelligence. The method comprises the following steps: acquiring multi-mode fusion data; performing model training optimization on the traffic big model according to the multi-mode fusion data to obtain a trained traffic big model; performing traffic situation awareness on vehicle information in different traffic environments according to the trained traffic big model to obtain traffic situation awareness information; predicting the traffic situation according to the traffic situation awareness information to obtain traffic prediction information; and managing urban traffic according to the traffic prediction information. The method fuses traffic data from multiple different modes, forms a corresponding traffic big model, acquires traffic situation awareness information, and characterizes traffic scenes from multiple different dimensions, thereby solving the problem that the traditional method, which relies on a single data source, cannot analyse complex traffic scenes.

Description

Traffic situation awareness prediction method and system based on multi-mode traffic big model
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a traffic situation awareness prediction method and system based on a multi-mode traffic big model.
Background
In recent years, with urban development, the population has grown continuously, the number of automobiles in use has kept rising, traffic flow has increased, and traffic complexity has grown accordingly. Perception and prediction of traffic situation mainly rely on visible light cameras installed at road sides, intersections and the like to acquire video image data, which is then processed with image algorithms. However, in the conventional method, data from a single visible light camera can fail at night, in special weather such as rain and fog, or in occluded environments, so understanding and analysis of traffic scenes cannot be achieved.
Disclosure of Invention
The invention aims to solve the technical problem that, in the conventional method, traffic scene understanding and analysis cannot be achieved because data from a single visible light camera fails in extreme environments. To this end, the invention provides a traffic situation awareness prediction method and a traffic situation awareness prediction system based on a multi-mode traffic big model.
The invention provides a traffic situation awareness prediction method based on a multi-mode traffic big model, which comprises the following steps:
acquiring multi-mode fusion data;
model training optimization is carried out on the traffic big model according to the multi-mode fusion data, and a trained traffic big model is obtained;
carrying out traffic situation awareness on vehicle information in different traffic environments according to the trained traffic big model to obtain traffic situation awareness information;
predicting the traffic situation according to the traffic situation awareness information to obtain traffic prediction information;
and managing urban traffic according to the traffic prediction information.
In one embodiment, the performing model training optimization on the traffic big model according to the multimodal fusion data to obtain a trained traffic big model includes:
according to the visible light mode data and the infrared light mode data in the multi-mode fusion data, training and optimizing the two-dimensional target detection model to obtain a trained two-dimensional target detection model;
according to the laser point cloud modal data in the multi-modal fusion data, training and optimizing the three-dimensional target detection model to obtain a trained three-dimensional target detection model;
according to the visible light mode data and the infrared light mode data in the multi-mode fusion data, training and optimizing the vehicle tracking model to obtain a trained vehicle tracking model;
according to the visible light mode data and the infrared light mode data in the multi-mode fusion data, training and optimizing the license plate recognition model to obtain a trained license plate recognition model;
training and optimizing the character interaction relation model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained character interaction relation model;
according to the visible light mode data or the laser point cloud mode data in the multi-mode fusion data, performing model training optimization on the semantic segmentation model to obtain a trained semantic segmentation model;
the trained target detection model, the trained three-dimensional target detection model, the trained license plate recognition model, the trained character interaction relation model, the trained semantic segmentation model and the trained vehicle tracking model form the trained traffic big model.
In one embodiment, the performing traffic situation awareness on the vehicle information in different traffic environments according to the trained traffic big model to obtain traffic situation awareness information includes:
according to the vehicle two-dimensional target detection result and the vehicle three-dimensional target detection result, obtaining vehicle feature perception under different traffic environments;
obtaining traffic flow characteristic perception under different traffic environments according to the number of vehicles and the speed of the vehicles in the road section;
according to the vehicle tracking results of different driving directions at the intersection, obtaining the feature perception of the confluence and diversion of the vehicles at the intersection;
obtaining accident-prone traffic location perception according to the character interaction relation detection result, the multi-category two-dimensional target detection result and the multi-category three-dimensional target detection result;
and obtaining berth feature perception according to the semantic segmentation result and the license plate recognition result.
In one embodiment, the predicting the traffic situation according to the traffic situation awareness information to obtain traffic prediction information includes:
predicting traffic congestion road sections according to the perception of vehicle converging and diverging features at intersections to obtain traffic congestion road section prediction information;
and predicting accident-prone traffic locations according to the accident-prone location perception to obtain accident-prone location prediction information.
In one embodiment, the predicting the traffic situation according to the traffic situation awareness information, to obtain traffic prediction information, further includes:
predicting the number of parked vehicles according to the berth feature perception, the vehicle feature perception and the traffic flow feature perception to obtain parking vehicle prediction information;
the managing urban traffic according to the traffic prediction information includes:
and increasing or decreasing the number of parking spaces according to the parking vehicle prediction information.
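As an illustrative sketch of this parking prediction and management step, the forecasting rule, the blending factor and the headroom margin below are invented for illustration and are not taken from the patent:

```python
# Hypothetical sketch: estimate parked vehicles from berth occupancy, vehicle
# perception and traffic flow, then compute the parking-space adjustment.
# The linear blend and the margin are illustrative assumptions.

def predict_parked_vehicles(occupied_berths, inbound_vehicles, flow_factor):
    """Crude forecast: current occupancy plus a flow-scaled share of inbound traffic."""
    return occupied_berths + round(inbound_vehicles * flow_factor)

def adjust_parking_spaces(current_spaces, predicted_vehicles, margin=10):
    """Return the change in parking spaces needed to keep `margin` headroom.

    Positive result: add spaces; negative: spaces can be removed.
    """
    target = predicted_vehicles + margin
    return target - current_spaces

pred = predict_parked_vehicles(occupied_berths=80, inbound_vehicles=40, flow_factor=0.25)
delta = adjust_parking_spaces(current_spaces=100, predicted_vehicles=pred)
```

In this toy configuration the predicted demand exactly matches capacity, so no spaces are added or removed.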
The invention provides a traffic situation awareness prediction system based on a multi-mode traffic big model, which comprises the following steps:
the data acquisition module is used for acquiring multi-mode fusion data;
the large model generation module is used for carrying out model training optimization on the traffic large model according to the multi-mode fusion data to obtain a trained traffic large model;
the situation awareness module is used for carrying out traffic situation awareness on the vehicle information in different traffic environments according to the trained traffic big model to obtain traffic situation awareness information;
the traffic prediction module is used for predicting traffic situation according to the traffic situation awareness information to obtain traffic prediction information;
and the management module is used for managing urban traffic according to the traffic prediction information.
In one embodiment, the large model generation module includes:
the two-dimensional target detection module is used for training and optimizing the two-dimensional target detection model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained two-dimensional target detection model;
the three-dimensional target detection module is used for training and optimizing the three-dimensional target detection model according to the laser point cloud modal data in the multi-modal fusion data to obtain a trained three-dimensional target detection model;
the vehicle tracking module is used for training and optimizing the vehicle tracking model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained vehicle tracking model;
the license plate recognition module is used for training and optimizing the license plate recognition model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained license plate recognition model;
the character interaction module is used for training and optimizing the character interaction relation model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained character interaction relation model;
the semantic segmentation module is used for carrying out model training optimization on the semantic segmentation model according to the visible light mode data or the laser point cloud mode data in the multi-mode fusion data to obtain a trained semantic segmentation model;
the trained target detection model, the trained three-dimensional target detection model, the trained license plate recognition model, the trained character interaction relation model, the trained semantic segmentation model and the trained vehicle tracking model form the trained traffic big model.
In one embodiment, the situational awareness module includes:
the vehicle sensing module is used for obtaining vehicle feature sensing under different traffic environments according to the vehicle two-dimensional target detection result and the vehicle three-dimensional target detection result;
the traffic flow sensing module is used for acquiring the vehicle speed according to the millimeter wave radar modal data in the multi-modal fusion data and acquiring traffic flow characteristic sensing under different traffic environments according to the number of vehicles in the road section and the vehicle speed;
the converging and diverging sensing module is used for obtaining the converging and diverging characteristic sensing of the vehicles at the intersection according to the vehicle tracking results of different driving directions at the intersection;
the accident sensing module is used for obtaining accident-prone location perception according to the character interaction relation detection result, the multi-category two-dimensional target detection result and the multi-category three-dimensional target detection result;
and the berth feature perception module is used for obtaining berth feature perception according to the semantic segmentation result and the license plate recognition result.
In one embodiment, the traffic prediction module comprises:
the congestion road section prediction module is used for predicting the traffic congestion road section according to the perception of the converging and diverging characteristics of the vehicles at the intersection to obtain the traffic congestion road section prediction information;
and the accident prediction module is used for predicting accident-prone traffic locations according to the accident-prone location perception to obtain accident-prone location prediction information.
In one embodiment, the traffic prediction module further comprises:
the parking vehicle prediction module is used for predicting the number of the parking vehicles according to the berth feature perception, the vehicle feature perception and the traffic flow feature perception to obtain parking vehicle prediction information;
the management module comprises:
and the parking space management module is used for increasing or decreasing the number of the parking spaces according to the parking vehicle prediction information.
In the traffic situation awareness prediction method and system based on the multi-mode traffic big model, multi-mode fusion data are used to construct a multi-mode big model for the traffic field. Algorithmic analysis of the acquired multi-mode fusion data enables scene-level understanding of traffic states over time and space, which plays an important role in the awareness and prediction of traffic situations: it supports judgment and early warning of traffic jams, feature extraction and analysis of vehicle flows, and detection and early warning of abnormal traffic conditions, all of which matter for the convenience and safety of people's travel. The method therefore fuses traffic data from multiple different modes, forms a corresponding traffic big model, acquires traffic situation awareness information, and characterizes traffic scenes from multiple different dimensions, solving the problem that complex traffic scenes cannot be analysed with the single data source of the traditional method.
Drawings
Fig. 1 is a schematic flow chart of steps of a traffic situation awareness prediction method based on a multi-mode traffic big model.
Fig. 2 is a schematic structural diagram of a traffic situation awareness prediction system based on a multi-mode traffic big model.
Detailed Description
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Referring to fig. 1, the invention provides a traffic situation awareness prediction method based on a multi-mode traffic big model, which comprises the following steps:
S10, acquiring multi-mode fusion data;
S20, carrying out model training optimization on the traffic big model according to the multi-mode fusion data to obtain a trained traffic big model;
S30, carrying out traffic situation awareness on the vehicle information in different traffic environments according to the trained traffic big model to obtain traffic situation awareness information;
S40, predicting the traffic situation according to the traffic situation awareness information to obtain traffic prediction information;
and S50, managing urban traffic according to the traffic prediction information.
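The five steps S10 to S50 can be sketched as a pipeline of placeholder functions. All function names, data shapes and return values here are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical end-to-end sketch of steps S10-S50; every name and value is
# a made-up placeholder standing in for the real acquisition/training code.

def acquire_multimodal_fusion_data():
    # S10: placeholder returning aligned multi-sensor records
    return [{"modality": "visible", "frame": 0}, {"modality": "lidar", "frame": 0}]

def train_traffic_model(data):
    # S20: stand-in for model training/optimization on the fused data
    return {"trained": True, "num_samples": len(data)}

def perceive_situation(model, data):
    # S30: derive situation-awareness information (toy vehicle count)
    return {"vehicle_count": sum(1 for d in data if d["modality"] == "visible")}

def predict_situation(awareness):
    # S40: turn awareness into a prediction (illustrative threshold)
    return {"congestion_risk": "high" if awareness["vehicle_count"] > 100 else "low"}

def manage_traffic(prediction):
    # S50: choose a management action from the prediction
    return "dispatch" if prediction["congestion_risk"] == "high" else "monitor"

data = acquire_multimodal_fusion_data()
model = train_traffic_model(data)
awareness = perceive_situation(model, data)
prediction = predict_situation(awareness)
action = manage_traffic(prediction)
```

The point of the sketch is only the data flow: each step consumes the previous step's output, matching the S10-S50 chain.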
In this embodiment, multi-mode data acquisition devices are installed at different traffic locations to obtain initial multi-mode traffic data, which is then aligned and annotated to form the multi-mode fusion data. The traffic locations include major urban roads, intersections, roadside parking areas, expressways, and key areas such as the gates of schools and hospitals. The multi-mode fusion data comprise dynamic change data of targets such as vehicles and pedestrians in different scenes, vehicle flow change data over different time periods, event data recording abnormal conditions such as congestion in different scenes, roadside parking event statistics, expressway vehicle flow data, and multidimensional data such as vehicle driving directions at intersections. Because the multi-mode fusion data draw on both dynamic and static traffic data of different dimensions, they facilitate the traffic big model's analysis and understanding of different scenes.
The acquisition devices for the initial multi-mode traffic data include devices for different data modes, such as a radar-video fusion unit, an infrared camera and a laser radar. The radar-video fusion unit is a traffic sensor that combines a monocular or multi-view visible light camera with a millimeter wave radar in a single body. Millimeter waves behave much like light and are strongly directional; with suitable antennas they can form highly directional radiation, giving good direction and distance recognition of targets, strong anti-interference capability and accurate target recognition, so combining a visible light camera with a millimeter wave radar can cope with more complex traffic scenes. The infrared camera uses infrared thermal imaging to produce clear night images. In night scenes, when the perception capability of the visible light camera declines, visual perception with infrared images improves accuracy. In the daytime, infrared cameras can also provide important sensing information through their thermal sensing capability when some pedestrians or vehicles are occluded.
Compared with a camera, the laser radar has high perception precision and strong anti-interference capability, and can directly acquire perception information such as the real position, distance, angle and speed of targets like vehicles and pedestrians in a roadside parking scene. Three-dimensional bounding boxes of targets such as vehicles and pedestrians can be generated from the laser radar's perception data and fused with the two-dimensional image features obtained by the visible light and infrared cameras for analysis and judgment, achieving multidimensional perception and judgment of different traffic scenes that is more accurate, more stable and more interference-resistant.
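Fusing laser radar detections with two-dimensional image features rests on projecting 3D points into the camera image. A minimal pinhole-projection sketch follows; the intrinsics (fx, fy, cx, cy) are made-up values, and the patent does not specify this step's implementation:

```python
# Illustrative pinhole projection of a lidar point (already transformed into
# camera coordinates) onto the image plane - the basic geometric step when
# matching 3D boxes against 2D image detections.

def project_point(x, y, z, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0):
    """Project a 3D point in camera coordinates to pixel coordinates (u, v)."""
    if z <= 0:
        return None  # point is behind the camera, not visible
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

# A point 20 m ahead, 2 m to the right, 1 m below the optical axis:
px = project_point(2.0, 1.0, 20.0)
```

In a real deployment an extrinsic calibration (lidar-to-camera rotation and translation) would be applied before this projection.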
The alignment and labeling of the initial multi-mode traffic data covers a temporal dimension and a spatial dimension. Spatial alignment unifies the observations of the different-mode data into the same coordinate system. Temporal alignment aligns the timestamps of the sensing devices of different modes, ensuring that the data acquired by the multi-mode devices record the same scene at the same moment.
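Temporal alignment can be sketched as nearest-neighbour timestamp matching within a tolerance. This is one common strategy; the patent does not specify the exact method, so the function below is an assumption:

```python
def align_by_timestamp(primary, secondary, tol=0.05):
    """For each primary-sensor timestamp, pick the nearest secondary-sensor
    timestamp within `tol` seconds; return the matched index pairs.
    (Illustrative nearest-neighbour matching, not the patent's algorithm.)"""
    pairs = []
    for i, tp in enumerate(primary):
        j, ts = min(enumerate(secondary), key=lambda it: abs(it[1] - tp))
        if abs(ts - tp) <= tol:
            pairs.append((i, j))
    return pairs

cam = [0.00, 0.10, 0.20, 0.30]    # camera frame times (s)
lidar = [0.02, 0.11, 0.24, 0.31]  # lidar sweep times (s)
matches = align_by_timestamp(cam, lidar)
```

Frames more than `tol` apart are simply dropped rather than force-matched, which keeps the fused samples consistent with "same scene, same moment".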
The multi-mode fusion data are annotated in a spatio-temporal, multi-target, multi-type manner. Spatio-temporal annotation labels the data in both the time and space dimensions: spatial annotation is multi-category, multi-target labeling of single-frame multi-mode data, while temporal annotation labels continuous time periods of the multi-mode data for tasks such as target tracking. Multi-target means that multiple categories of targets are supported, including but not limited to pedestrians, vehicles, non-motor vehicles, lane lines, traffic signs and green plants; vehicles specifically include multiple categories such as cars, buses and taxis. Multi-type means different annotation types for the same target; taking a vehicle target as an example, it can be annotated with a two-dimensional rectangular box, a three-dimensional box, semantic-level classification labels, instance-level classification labels, point cloud labels and more, and the annotation type can be switched freely across scene tasks to suit the algorithm used by each scene analysis task.
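A possible record structure for such multi-type annotations might look like the following. The field names and types are assumptions for illustration, not the patent's schema:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical annotation record supporting the label types described above:
# 2D boxes, 3D boxes, track identities and semantic labels, per frame time.

@dataclass
class Annotation:
    target_class: str                                        # e.g. "car", "bus", "pedestrian"
    frame_time: float                                        # temporal dimension (seconds)
    box2d: Optional[Tuple[float, float, float, float]] = None  # (x, y, w, h) rectangle
    box3d: Optional[Tuple[float, ...]] = None                # (x, y, z, l, w, h, yaw) cuboid
    track_id: Optional[int] = None                           # identity across frames
    semantic_label: Optional[str] = None                     # semantic-level class label

ann = Annotation("car", 12.5, box2d=(100, 80, 60, 40), track_id=7)
```

Leaving every label type optional mirrors the idea that the annotation type can be switched per scene task without changing the record format.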
The multi-mode fusion data cover data of different modes from various sensor devices such as visible light cameras, infrared cameras, laser radar and millimeter wave radar. Different data modes offer different advantages in different environments and weather conditions, so fusing multiple modes provides more data support for analysing complex scenes and plays an important role in improving the accuracy of traffic scene analysis.
The multi-mode fusion data comprise data of several different modes together with the data annotation information. Training and optimizing the traffic big model on the annotated multi-mode fusion data allows different visual tasks to be executed with different visual algorithms based on the characteristics of each data mode, achieving more comprehensive perception and understanding of the traffic situation and, in turn, more accurate traffic situation prediction. The traffic big model may include one or more of a target detection model, a three-dimensional target detection model, a license plate recognition model, a character interaction relation model, a vehicle speed model, a semantic segmentation model and a vehicle tracking model. In one embodiment, two-dimensional target detection and semantic segmentation are performed based on features of the visible light images in the multi-mode fusion data. In one embodiment, three-dimensional target detection, depth estimation and the like are performed based on features of the laser point cloud data in the multi-mode fusion data. In one embodiment, obstacle detection and the like are performed based on the millimeter wave radar data in the multi-mode fusion data. By constructing the traffic big model, the advantages of the different data modes can be brought into full play for perceiving and predicting the traffic situation.
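The mapping from data modes to visual tasks described in these embodiments could be sketched as a simple dispatch table. The task names and the exact mode-to-task assignments below are illustrative, loosely following the embodiments rather than reproducing them exactly:

```python
# Hypothetical dispatch table: which visual tasks run on which data mode.

MODALITY_TASKS = {
    "visible": ["2d_detection", "semantic_segmentation", "tracking", "plate_recognition"],
    "infrared": ["2d_detection", "tracking", "plate_recognition"],
    "lidar": ["3d_detection", "depth_estimation", "semantic_segmentation"],
    "mmwave_radar": ["obstacle_detection", "speed_measurement"],
}

def tasks_for(modality):
    """Return the visual tasks to run for a given data mode (empty if unknown)."""
    return MODALITY_TASKS.get(modality, [])
```

Keeping the routing in data rather than code makes it easy to add a mode or retarget a task without touching the processing pipeline.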
The traffic big model trained on the multi-mode fusion data can perform traffic situation awareness on vehicle information in different traffic environments. From the trained traffic big model's detection outputs, such as two-dimensional target detection results, three-dimensional target detection results, the number of vehicles, vehicle speeds, vehicle tracking results, character interaction relation detection results, multi-category two-dimensional and three-dimensional target detection results, semantic segmentation results and license plate recognition results, the corresponding traffic situation awareness information can be obtained, such as vehicle feature perception, traffic flow feature perception, vehicle converging and diverging feature perception, accident-prone location perception and berth feature perception. This information is used to predict traffic in the subsequent steps, obtaining the traffic conditions of a given road section, intersection or area, so that urban traffic can then be managed according to the traffic prediction information.
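As one hypothetical example of turning model outputs into a perception feature, traffic-flow feature perception from the vehicle count and vehicle speeds might be computed as follows. The thresholds and the classification rule are invented for the sketch, not specified by the patent:

```python
# Illustrative computation of traffic-flow feature perception from the
# per-segment vehicle speeds (e.g. from radar) and the vehicle count
# (e.g. from detection). Thresholds are made-up values.

def traffic_flow_feature(vehicle_speeds_kmh, segment_length_km=1.0):
    count = len(vehicle_speeds_kmh)
    avg_speed = sum(vehicle_speeds_kmh) / count if count else 0.0
    density = count / segment_length_km  # vehicles per km
    if count and avg_speed < 20 and density > 50:
        state = "congested"
    elif count and avg_speed < 40:
        state = "slow"
    else:
        state = "free_flow"
    return {"count": count, "avg_speed_kmh": avg_speed,
            "density": density, "state": state}

feat = traffic_flow_feature([62.0, 55.0, 70.0])
```

The same pattern (raw detection outputs in, a small named feature dictionary out) applies to the other perceptions listed above.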
The traffic situation awareness prediction method based on the multi-mode traffic big model uses multi-mode fusion data to construct a multi-mode big model for the traffic field. Algorithmic analysis of the acquired multi-mode fusion data enables scene-level understanding of traffic states over time and space, which plays an important role in the awareness and prediction of traffic situations: it supports judgment and early warning of traffic jams, feature extraction and analysis of vehicle flows, and detection and early warning of abnormal traffic conditions, all of which matter for the convenience and safety of people's travel. The method therefore fuses traffic data from multiple different modes, forms a corresponding traffic big model, acquires traffic situation awareness information, and characterizes traffic scenes from multiple different dimensions, solving the problem that complex traffic scenes cannot be analysed with the single data source of the traditional method.
In one embodiment, S20, performing model training optimization on the traffic big model according to the multi-mode fusion data to obtain a trained traffic big model, including:
S210, training and optimizing a two-dimensional target detection model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained two-dimensional target detection model;
S220, training and optimizing the three-dimensional target detection model according to the laser point cloud modal data in the multi-modal fusion data to obtain a trained three-dimensional target detection model;
S230, training and optimizing the vehicle tracking model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained vehicle tracking model;
S240, training and optimizing the license plate recognition model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained license plate recognition model;
S250, training and optimizing the character interaction relation model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained character interaction relation model;
S260, performing model training optimization on the semantic segmentation model according to the visible light mode data or the laser point cloud mode data in the multi-mode fusion data to obtain a trained semantic segmentation model.
The trained two-dimensional target detection model, the trained three-dimensional target detection model, the trained license plate recognition model, the trained character interaction relation model, the trained semantic segmentation model, and the trained vehicle tracking model together form the trained traffic big model.
In this embodiment, the task of the two-dimensional target detection model is to detect the position coordinates and the category information of each target's detection frame from the visible light image and the infrared light image, using algorithms including but not limited to deep-learning-based target detection algorithms or machine-learning-based target detection algorithms.
The three-dimensional target detection model detects the three-dimensional characteristics of targets such as vehicles and pedestrians from the laser radar point cloud data, including each target's length, width, and height, its three-dimensional coordinates, and its heading angle. The three-dimensional coordinate position of a target such as a vehicle in the real scene helps determine the specific position of the vehicle, so that the traffic situation can be judged more accurately even under occlusion and similar conditions. The task of the vehicle tracking model is to track vehicles at roadsides, intersections, and the like with a target tracking algorithm, achieving vehicle tracking across cameras. Vehicle re-identification means recognizing the same vehicle under different cameras, and can be applied in scenarios such as tracking and investigating vehicles involved in violations.
The task of the license plate recognition model is to obtain a final license plate recognition result by combining the weighted results of multiple license plate recognition algorithms, including but not limited to character detection based on target detection and character recognition algorithms based on recurrent neural networks. Vehicle model recognition means classifying the model of the vehicle. The task of the character interaction relation model is to recognize human postures; using a key point detection method, it identifies the interaction relations between the actions of pedestrians and other targets in the traffic scene. Semantic segmentation classifies the targets in an image or point cloud pixel by pixel, obtaining each target's mask information and category information. The category information includes categories such as green plants, berth lines, lane lines, manhole covers, zebra crossings, and iron grates; with a semantic segmentation algorithm, the positions of targets such as green plants and berth lines can be obtained more accurately for subsequent traffic scene analysis tasks.
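The patent describes combining the weighted results of multiple license plate recognizers but does not give an implementation. A minimal sketch of such weighted fusion is shown below; the function name, weights, confidences, and plate strings are all hypothetical illustrations, not part of the disclosed method.

```python
from collections import defaultdict

def fuse_plate_results(candidates):
    """Fuse (plate_string, confidence, recognizer_weight) candidates by
    weighted voting and return the highest-scoring plate string."""
    scores = defaultdict(float)
    for plate, confidence, weight in candidates:
        scores[plate] += confidence * weight
    return max(scores, key=scores.get)

# Two recognizers agree on one plate; a third misreads one character.
candidates = [
    ("JA12345", 0.90, 0.5),  # RNN-based character recognizer
    ("JA12345", 0.80, 0.3),  # detection-based character recognizer
    ("JA12845", 0.95, 0.2),  # third recognizer, one wrong character
]
print(fuse_plate_results(candidates))  # JA12345
```

Here the agreement of two recognizers (combined score 0.69) outweighs the single confident but erroneous reading (0.19), which is the intended effect of a weighted combination.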
In one embodiment, S20, performing model training optimization on the traffic big model according to the multi-mode fusion data to obtain a trained traffic big model, and further includes:
Training and optimizing a multi-class instance segmentation model, a multi-target classification model, a human-vehicle interaction motion detection model, a vehicle track prediction model, an abnormal event detection model, and the like; these models jointly form the traffic big model and are of important value for its construction.
In one embodiment, S30, performing traffic situation awareness on vehicle information in different traffic environments according to the trained traffic big model, to obtain traffic situation awareness information, including:
S310, obtaining vehicle feature perception in different traffic environments according to the vehicle two-dimensional target detection result and the vehicle three-dimensional target detection result;
S320, acquiring the vehicle speed according to the millimeter wave radar mode data in the multi-mode fusion data, and obtaining traffic flow feature perception in different traffic environments according to the number of vehicles on a road section and the vehicle speed;
S330, obtaining perception of the merging and diverging features of vehicles at the intersection according to the vehicle tracking results for different driving directions at the intersection;
S340, obtaining traffic accident-prone location perception according to the character interaction relation detection result, the multi-category two-dimensional target detection result, and the multi-category three-dimensional target detection result;
S350, obtaining berth feature perception according to the semantic segmentation result and the license plate recognition result.
In this embodiment, the vehicle two-dimensional target detection result and the vehicle three-dimensional target detection result detect the target vehicle from the two-dimensional and three-dimensional perspectives respectively, so that vehicle information such as the color, model, and license plate number of the vehicle can be obtained, forming the perception of vehicle features. Different road sections carry different traffic flows; the vehicle flow features can be understood as traffic volume features, and the traffic flow of a road section can be calculated from the number of vehicles on the road section and the vehicle speed, forming traffic flow feature perception.
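The patent states that a road section's traffic flow can be calculated from the number of vehicles and their speed, without fixing a formula. A minimal sketch, assuming the standard fundamental relation q = k · v (flow = density × speed), could be:

```python
def traffic_flow(vehicle_count, avg_speed_kmh, segment_length_km):
    """Estimate flow q (vehicles/hour) from the fundamental relation
    q = k * v, where density k is vehicles per km of road segment."""
    density = vehicle_count / segment_length_km  # vehicles per km
    return density * avg_speed_kmh               # vehicles per hour

# 40 vehicles on a 2 km segment moving at 30 km/h -> 600 veh/h
print(traffic_flow(40, 30.0, 2.0))  # 600.0
```

The vehicle count would come from the detection results and the average speed from the millimeter wave radar mode data; both inputs here are hypothetical example values.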
The different driving directions at the intersection may include turning directions (for example, left turn and right turn) and the straight-through direction. Vehicles in different driving directions merge or diverge under different circumstances, and whether the vehicles at the intersection are merging or diverging can be learned from the vehicle tracking results for the different driving directions. The vehicle tracking result reflects the running condition of a vehicle across different areas or different road sections; tracking may be performed between areas or between road sections to obtain the running track of the vehicle. From these tracks it can be accurately determined whether vehicles at the intersection are merging or diverging, yielding the perception of vehicle merging and diverging features at the intersection.
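The patent does not specify how merging or diverging is decided from the tracks. One minimal sketch, assuming each tracked vehicle is reduced to a hypothetical (entry direction, exit direction) pair, flags merging when several entries feed one exit and diverging when one entry fans out to several exits:

```python
from collections import Counter

def classify_interaction(tracks):
    """tracks: list of (entry_direction, exit_direction) per tracked vehicle.
    Merging: vehicles from different entries share one exit.
    Diverging: vehicles from one entry leave via several exits."""
    exits = Counter(x for _, x in tracks)
    entries = Counter(e for e, _ in tracks)
    merging = any(
        len({e for e, x in tracks if x == exit_dir}) > 1 for exit_dir in exits
    )
    diverging = any(
        len({x for e, x in tracks if e == entry}) > 1 for entry in entries
    )
    return merging, diverging

tracks = [("north", "east"), ("south", "east"), ("south", "west")]
print(classify_interaction(tracks))  # (True, True)
```

In the example, northbound and southbound vehicles both exit east (merging), while southbound vehicles split between east and west exits (diverging).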
The character interaction relation detection results cover interactions such as pedestrian and automobile, pedestrian and electric bicycle, and pedestrian and bicycle. The multi-category two-dimensional target detection result reflects the two-dimensional features of the detected targets, and the multi-category three-dimensional target detection result reflects their three-dimensional features. Whether an accident between a pedestrian and an object has occurred, for example whether a pedestrian has collided with an automobile or has fallen, is reflected from multiple angles by the character interaction relation detection result, the multi-category two-dimensional target detection result, and the multi-category three-dimensional target detection result, yielding the traffic accident-prone location perception information. Traffic accident-prone location perception includes perception of pedestrian-vehicle accidents, pedestrian fall accidents, vehicle collision accidents, and the like. The semantic segmentation result includes an analysis of the scene and can identify category information such as berth lines and lane lines. The license plate recognition result provides the license plate information of the vehicle. From the semantic segmentation result and the license plate recognition result it can be determined whether a berth is occupied, and the occupying vehicle's license plate information can be obtained. Berth feature perception includes whether a berth is idle or occupied and, when occupied, the occupying vehicle's license plate information. Through the perception functions of the multi-modal traffic big model in this embodiment, traffic situation awareness based on different features can be realized.
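The combination of segmentation and plate recognition into berth feature perception can be sketched as follows. This is an illustrative simplification, assuming each berth from the segmentation result is reduced to a hypothetical bounding box and each recognized plate to its center point; the names and coordinates are invented for the example.

```python
def berth_status(berths, plate_detections):
    """berths: {berth_id: (x1, y1, x2, y2)} boxes derived from segmentation.
    plate_detections: list of (plate_string, cx, cy) plate center points.
    Returns {berth_id: plate or None}: occupied berths with their plate."""
    def inside(box, cx, cy):
        x1, y1, x2, y2 = box
        return x1 <= cx <= x2 and y1 <= cy <= y2

    status = {bid: None for bid in berths}
    for plate, cx, cy in plate_detections:
        for bid, box in berths.items():
            if inside(box, cx, cy):
                status[bid] = plate
    return status

berths = {"P1": (0, 0, 10, 5), "P2": (12, 0, 22, 5)}
detections = [("JB98765", 4.0, 2.5)]
print(berth_status(berths, detections))  # {'P1': 'JB98765', 'P2': None}
```

Berth P1 is reported occupied with its plate and P2 idle, which is exactly the berth feature perception (occupancy plus license plate of the occupying vehicle) described above.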
In one embodiment, S40, predicting the traffic situation according to the traffic situation awareness information to obtain traffic prediction information, includes:
S410, predicting traffic congestion road sections according to the perception of vehicle merging and diverging features at the intersection to obtain traffic congestion road section prediction information;
S420, predicting traffic accident-prone locations according to the traffic accident-prone location perception to obtain traffic accident-prone location prediction information.
In this embodiment, based on the perception of vehicle merging and diverging features on different road sections, the vehicle data expected on each road section in a future period, such as the number of vehicles, is predicted. Knowing how many vehicles a road section of unit length can accommodate while traffic remains smooth, it can then be determined whether traffic congestion will occur in that period, giving the traffic congestion road section prediction information. Through traffic accident-prone location perception, whether a traffic accident has occurred can be judged using algorithms such as multi-category target detection, visual relation detection, and pedestrian posture detection. By counting the positions where accidents occur frequently, accident-prone locations can be predicted, giving the traffic accident-prone location prediction information. Further, by combining the current vehicle and traffic flow features in the corresponding area, an accident-location early warning is sent to the user to remind the user to drive safely.
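The congestion test described here, comparing a forecast vehicle count against the number of vehicles a road section can hold while traffic remains smooth, can be sketched as below. The capacity figure, segment names, and counts are hypothetical; the patent does not fix numeric thresholds.

```python
def predict_congestion(predicted_counts, capacity_per_km, length_km):
    """predicted_counts: {segment_id: forecast vehicle count}.
    A segment is flagged as congested when its forecast count exceeds the
    number of vehicles it can hold under free-flowing conditions."""
    return [
        seg
        for seg, n in predicted_counts.items()
        if n > capacity_per_km * length_km[seg]
    ]

counts = {"A": 120, "B": 35}           # forecast vehicles per segment
lengths = {"A": 1.0, "B": 1.5}         # segment lengths in km
print(predict_congestion(counts, capacity_per_km=80, length_km=lengths))
# ['A']
```

Segment A (120 vehicles against a capacity of 80) is predicted to congest, while segment B (35 against 120) is not; in the method, the forecast counts would come from the merging and diverging feature perception.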
In one embodiment, S40, predicting a traffic situation according to the traffic situation awareness information to obtain traffic prediction information, further includes:
S430, predicting the number of parked vehicles according to the berth feature perception, the vehicle feature perception, and the traffic flow feature perception to obtain parked vehicle prediction information;
S50, managing urban traffic according to the traffic prediction information, including:
S510, increasing or decreasing the number of parking spaces according to the parked vehicle prediction information.
In this embodiment, vehicle feature perception characterizes the feature information of a vehicle, and berth feature perception characterizes whether a berth is idle or occupied together with the occupying vehicle's license plate information. Traffic flow feature perception characterizes the merging or diverging of vehicles in different driving directions under different circumstances. For a given road section, the number of parked vehicles in a future period is predicted by taking historical data, such as how often and how long roadside berths are occupied in different time periods, as the reference basis, combined with the berth feature perception, vehicle feature perception, and traffic flow feature perception near the berth area, to obtain the parked vehicle prediction information. When berths are insufficient, an early warning of insufficient parking spaces in the nearby area can be sent to the vehicle owner. In areas where parking is frequent, the construction of roadside berths can be increased for the convenience of users; in areas where parking is rare, berth locations can be appropriately reduced in favor of other public infrastructure, which also lowers the operation and maintenance cost of the equipment.
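The steps above, forecasting parking demand from historical occupancy and then deciding whether to add or remove berths, can be sketched as follows. The averaging forecast, the 0.9/0.3 utilisation thresholds, and the sample numbers are hypothetical simplifications; the patent specifies only that historical occupancy data and the three perceptions serve as the basis.

```python
def forecast_parking_demand(hourly_occupancy, horizon_hour):
    """hourly_occupancy: {hour: [occupied-berth counts observed on past
    days at that hour]}. Forecast = historical mean for the target hour."""
    samples = hourly_occupancy[horizon_hour]
    return sum(samples) / len(samples)

def adjust_spaces(current_spaces, forecast, high=0.9, low=0.3):
    """Suggest adding berths when forecast utilisation is high and
    removing some when it stays low."""
    utilisation = forecast / current_spaces
    if utilisation > high:
        return "increase"
    if utilisation < low:
        return "decrease"
    return "keep"

history = {18: [46, 50, 48]}                     # evening observations
forecast = forecast_parking_demand(history, 18)  # 48.0
print(adjust_spaces(current_spaces=50, forecast=forecast))  # increase
```

With 48 of 50 berths expected to be occupied (96% utilisation), the sketch recommends increasing berths, matching the management rule of S510.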
In one embodiment, the traffic situation awareness results of the above embodiments are combined to regulate traffic signal lights intelligently. According to the vehicle feature perception, the traffic flow feature perception, and the traffic congestion road section prediction information, signal lights can be regulated dynamically, reducing the waiting time of vehicles at the intersection, lowering the probability of traffic congestion, and improving traffic conditions.
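The dynamic signal regulation is not detailed in the patent; one common and minimal scheme, used here purely as an illustrative sketch, allocates green time within a fixed cycle in proportion to the perceived flow on each approach (the flows, cycle length, and minimum green are invented example values):

```python
def green_split(flows, cycle_s, min_green_s=10.0):
    """Allocate green time within one signal cycle proportionally to the
    approach flows, subject to a minimum green per approach."""
    total = sum(flows.values())
    return {
        approach: max(min_green_s, cycle_s * q / total)
        for approach, q in flows.items()
    }

flows = {"north-south": 600, "east-west": 200}  # perceived veh/h
print(green_split(flows, cycle_s=80))
# {'north-south': 60.0, 'east-west': 20.0}
```

The busier north-south approach receives three times the green time of the east-west approach, which is the intended effect of regulating signals from traffic flow feature perception.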
Referring to fig. 2, the present invention provides a traffic situation awareness prediction system 100 based on a multi-modal traffic big model. The traffic situation awareness prediction system 100 based on the multi-mode traffic big model comprises a data acquisition module 10, a big model generation module 20, a situation awareness module 30, a traffic prediction module 40 and a management module 50. The data acquisition module 10 is used for acquiring multi-modal fusion data. The large model generating module 20 is configured to perform model training optimization on the traffic large model according to the multimodal fusion data, so as to obtain a trained traffic large model. The situation awareness module 30 is configured to perform traffic situation awareness on vehicle information in different traffic environments according to the trained traffic big model, so as to obtain traffic situation awareness information. The traffic prediction module 40 is configured to predict traffic situation according to the traffic situation awareness information, and obtain traffic prediction information. The management module 50 is used for managing urban traffic according to traffic prediction information.
In this embodiment, the description of the data acquisition module 10 may refer to the description of S10 in the above embodiment. The description of the large model generation module 20 may refer to the description of S20 in the above embodiment. The relevant description of the situation awareness module 30 may refer to the relevant description of S30 in the above embodiment. The description of the traffic prediction module 40 may refer to the description of S40 in the above embodiment. The relevant description of the management module 50 may refer to the relevant description of S50 in the above-described embodiment.
In one embodiment, the large model generation module 20 includes a two-dimensional target detection module, a three-dimensional target detection module, a vehicle tracking module, a license plate recognition module, a character interaction module, a vehicle speed module, and a semantic segmentation module. The two-dimensional target detection module is used for training and optimizing the two-dimensional target detection model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained two-dimensional target detection model. The three-dimensional target detection module is used for training and optimizing the three-dimensional target detection model according to the laser point cloud modal data in the multi-modal fusion data to obtain a trained three-dimensional target detection model.
The vehicle tracking module is used for training and optimizing the vehicle tracking model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain the trained vehicle tracking model. The license plate recognition module is used for training and optimizing the license plate recognition model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain the trained license plate recognition model. The character interaction module is used for training and optimizing the character interaction relation model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain the trained character interaction relation model. The semantic segmentation module is used for carrying out model training optimization on the semantic segmentation model according to the visible light mode data or the laser point cloud mode data in the multi-mode fusion data to obtain a trained semantic segmentation model.
The trained two-dimensional target detection model, the trained three-dimensional target detection model, the trained license plate recognition model, the trained character interaction relation model, the trained semantic segmentation model, and the trained vehicle tracking model together form the trained traffic big model.
In this embodiment, the description of the two-dimensional object detection module may refer to the description of S210 in the above embodiment. The related description of the three-dimensional object detection module may refer to the related description of S220 in the above embodiment. The description of the vehicle tracking module may refer to the description of S230 in the above embodiment. The relevant description of the license plate recognition module may refer to the relevant description of S240 in the above embodiment. The description of the character interaction module may refer to the description of S250 in the above embodiment. The related description of the semantic segmentation module may refer to the related description of S260 in the above embodiment.
In one embodiment, the situation awareness module 30 includes a vehicle awareness module, a traffic flow awareness module, a merging and diverging awareness module, an accident awareness module, and a berth feature awareness module. The vehicle awareness module is used for obtaining vehicle feature perception in different traffic environments according to the vehicle two-dimensional target detection result and the vehicle three-dimensional target detection result. The traffic flow awareness module is used for acquiring the vehicle speed according to the millimeter wave radar mode data in the multi-mode fusion data and obtaining traffic flow feature perception in different traffic environments according to the number of vehicles on the road section and the vehicle speed. The merging and diverging awareness module is used for obtaining the perception of vehicle merging and diverging features at the intersection according to the vehicle tracking results for different driving directions at the intersection. The accident awareness module is used for obtaining traffic accident-prone location perception according to the character interaction relation detection result, the multi-category two-dimensional target detection result, and the multi-category three-dimensional target detection result. The berth feature awareness module is used for obtaining berth feature perception according to the semantic segmentation result and the license plate recognition result.
In this embodiment, the description of the vehicle awareness module may refer to the description of S310 in the above embodiment. The description of the traffic flow awareness module may refer to the description of S320 in the above embodiment. The description of the merging and diverging awareness module may refer to the description of S330 in the above embodiment. The description of the accident awareness module may refer to the description of S340 in the above embodiment. The description of the berth feature awareness module may refer to the description of S350 in the above embodiment.
In one embodiment, the traffic prediction module 40 includes a congested road segment prediction module and an accident prediction module. The congestion road section prediction module is used for predicting the traffic congestion road section according to the perception of the converging and diverging characteristics of the vehicles at the intersection, and obtaining the traffic congestion road section prediction information. The accident prediction module is used for predicting the multiple traffic accident places according to the multiple traffic accident place sensing and obtaining the multiple traffic accident place prediction information.
In this embodiment, the description of the congestion section prediction module may refer to the description of S410 in the above embodiment. The related description of the accident prediction module may refer to the related description of S420 in the above embodiment.
In one embodiment, the traffic prediction module 40 further includes a parked vehicle prediction module. The parking vehicle prediction module is used for predicting the number of the parking vehicles according to the parking position feature perception, the vehicle feature perception and the vehicle flow feature perception to obtain parking vehicle prediction information.
In the present embodiment, the description of the parked vehicle prediction module may refer to the description of S430 in the above embodiment.
The management module 50 includes a parking space management module. The parking space management module is used for increasing or decreasing the number of the parking spaces according to the prediction information of the parked vehicles.
In this embodiment, the description of the parking space management module may refer to the description of S510 in the above embodiment.
In the various embodiments described above, the particular order or hierarchy of steps in the processes disclosed are examples of exemplary approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy.
Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, and steps described in connection with the present invention may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design requirements of the overall system. Those skilled in the art may implement the described functionality in varying ways for each particular application, but such implementations should not be understood as going beyond the scope of the embodiments of the present invention.
The various illustrative logical blocks or modules described in connection with the embodiments of the present invention may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the general purpose processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. In an example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may reside in a user terminal. In the alternative, the processor and the storage medium may reside as distinct components in a user terminal.
The foregoing description of the embodiments is provided to illustrate the general principles of the invention and is not intended to limit the invention to the particular embodiments disclosed or otherwise restrict its scope; any modifications, equivalents, improvements, and the like made within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A traffic situation awareness prediction method based on a multi-mode traffic big model is characterized by comprising the following steps:
acquiring multi-mode fusion data;
model training optimization is carried out on the traffic big model according to the multi-mode fusion data, and a trained traffic big model is obtained;
carrying out traffic situation awareness on vehicle information in different traffic environments according to the trained traffic big model to obtain traffic situation awareness information;
predicting the traffic situation according to the traffic situation awareness information to obtain traffic prediction information;
and managing urban traffic according to the traffic prediction information.
2. The traffic situation awareness prediction method based on the multi-modal traffic big model according to claim 1, wherein the performing model training optimization on the traffic big model according to the multi-modal fusion data to obtain a trained traffic big model comprises:
According to the visible light mode data and the infrared light mode data in the multi-mode fusion data, training and optimizing the two-dimensional target detection model to obtain a target detection model after training;
according to the laser point cloud modal data in the multi-modal fusion data, training and optimizing the three-dimensional target detection model to obtain a trained three-dimensional target detection model;
according to the visible light mode data and the infrared light mode data in the multi-mode fusion data, training and optimizing the vehicle tracking model to obtain a trained vehicle tracking model;
according to the visible light mode data and the infrared light mode data in the multi-mode fusion data, training and optimizing the license plate recognition model to obtain a trained license plate recognition model;
training and optimizing the character interaction relation model according to the visible light mode data and the infrared light mode data in the multi-mode fusion data to obtain a trained character interaction relation model;
according to the visible light mode data or the laser point cloud mode data in the multi-mode fusion data, performing model training optimization on the semantic segmentation model to obtain a trained semantic segmentation model;
The trained target detection model, the trained three-dimensional target detection model, the trained license plate recognition model, the trained character interaction relation model, the trained semantic segmentation model and the trained vehicle tracking model form the trained traffic big model.
3. The traffic situation awareness prediction method based on the multi-mode traffic big model according to claim 2, wherein the traffic situation awareness of the vehicle information in different traffic environments according to the trained traffic big model, to obtain traffic situation awareness information, includes:
according to the vehicle two-dimensional target detection result and the vehicle three-dimensional target detection result, obtaining vehicle feature perception under different traffic environments;
acquiring vehicle speed according to millimeter wave radar mode data in the multi-mode fusion data, and acquiring traffic flow characteristic perception under different traffic environments according to the number of vehicles in a road section and the vehicle speed;
according to the vehicle tracking results of different driving directions at the intersection, obtaining the feature perception of the confluence and diversion of the vehicles at the intersection;
obtaining multiple perception of traffic accidents according to the character interaction relation detection result, the multi-category two-dimensional target detection result and the multi-category three-dimensional target detection result;
And obtaining berth feature perception according to the semantic segmentation result and the license plate recognition result.
4. The traffic situation awareness prediction method based on the multi-modal traffic big model according to claim 3, wherein predicting traffic situation according to the traffic situation awareness information to obtain traffic prediction information comprises:
predicting the traffic congestion road section according to the perception of the converging and diverging characteristics of the vehicles at the intersection to obtain the prediction information of the traffic congestion road section;
and predicting the multiple traffic accident places according to the multiple traffic accident place sensing to obtain the multiple traffic accident place prediction information.
5. The traffic situation awareness prediction method based on the multi-modal traffic big model according to claim 4, wherein predicting traffic situation according to the traffic situation awareness information obtains traffic prediction information, further comprising:
predicting the number of parked vehicles according to the berth feature perception, the vehicle feature perception and the traffic flow feature perception to obtain parking vehicle prediction information;
the managing urban traffic according to the traffic prediction information includes:
and increasing or decreasing the number of parking spaces according to the parking vehicle prediction information.
6. A traffic situation awareness prediction system based on a multi-modal traffic big model, characterized by comprising:
the data acquisition module is used for acquiring multi-modal fusion data;
the big model generation module is used for performing model training and optimization on the traffic big model according to the multi-modal fusion data to obtain a trained traffic big model;
the situation awareness module is used for performing traffic situation awareness on vehicle information in different traffic environments according to the trained traffic big model to obtain traffic situation awareness information;
the traffic prediction module is used for predicting the traffic situation according to the traffic situation awareness information to obtain traffic prediction information;
and the management module is used for managing urban traffic according to the traffic prediction information.
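The five modules of claim 6 form a linear pipeline: data acquisition, model training, perception, prediction, management. A minimal structural sketch follows; the class and callable names are assumptions, not taken from the patent.

```python
class TrafficSituationSystem:
    """Illustrative wiring of the five modules of claim 6 as callables."""

    def __init__(self, acquisition, model_builder, perceiver, predictor, manager):
        self.acquisition = acquisition
        self.model_builder = model_builder
        self.perceiver = perceiver
        self.predictor = predictor
        self.manager = manager

    def run(self):
        data = self.acquisition()                # multi-modal fusion data
        model = self.model_builder(data)         # trained traffic big model
        awareness = self.perceiver(model, data)  # traffic situation awareness
        prediction = self.predictor(awareness)   # traffic prediction information
        return self.manager(prediction)          # urban traffic management action
```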
7. The traffic situation awareness prediction system based on the multi-modal traffic big model according to claim 6, wherein the big model generation module comprises:
the two-dimensional target detection module is used for training and optimizing the two-dimensional target detection model according to the visible light modal data and the infrared light modal data in the multi-modal fusion data to obtain a trained two-dimensional target detection model;
the three-dimensional target detection module is used for training and optimizing the three-dimensional target detection model according to the laser point cloud modal data in the multi-modal fusion data to obtain a trained three-dimensional target detection model;
the vehicle tracking module is used for training and optimizing the vehicle tracking model according to the visible light modal data and the infrared light modal data in the multi-modal fusion data to obtain a trained vehicle tracking model;
the license plate recognition module is used for training and optimizing the license plate recognition model according to the visible light modal data and the infrared light modal data in the multi-modal fusion data to obtain a trained license plate recognition model;
the human-object interaction module is used for training and optimizing the human-object interaction relation model according to the visible light modal data and the infrared light modal data in the multi-modal fusion data to obtain a trained human-object interaction relation model;
the semantic segmentation module is used for performing model training and optimization on the semantic segmentation model according to the visible light modal data or the laser point cloud modal data in the multi-modal fusion data to obtain a trained semantic segmentation model;
the trained two-dimensional target detection model, the trained three-dimensional target detection model, the trained license plate recognition model, the trained human-object interaction relation model, the trained semantic segmentation model and the trained vehicle tracking model form the trained traffic big model.
8. The traffic situation awareness prediction system based on the multi-modal traffic big model according to claim 7, wherein the situation awareness module comprises:
the vehicle perception module is used for obtaining vehicle feature perception in different traffic environments according to the vehicle two-dimensional target detection result and the vehicle three-dimensional target detection result;
the traffic flow perception module is used for acquiring vehicle speeds according to the millimeter-wave radar modal data in the multi-modal fusion data and obtaining traffic flow feature perception in different traffic environments according to the number of vehicles on the road section and the vehicle speeds;
the converging and diverging perception module is used for obtaining converging and diverging feature perception of vehicles at the intersection according to the vehicle tracking results for different driving directions at the intersection;
the accident perception module is used for obtaining traffic accident-prone location perception according to the human-object interaction relation detection result, the multi-category two-dimensional target detection result and the multi-category three-dimensional target detection result;
and the berth feature perception module is used for obtaining berth feature perception according to the semantic segmentation result and the license plate recognition result.
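The traffic flow feature of claim 8 (road-section vehicle count combined with radar-measured speed) could, for instance, be computed via the fundamental relation q = k·v. The sketch below is an assumption-laden illustration, not the claimed implementation; the units and the 20 km/h and 60 veh/lane-km thresholds are made-up example values.

```python
def traffic_flow_feature(vehicle_count, speeds_kmh, section_km, lanes):
    """Combine a road-section vehicle count with radar-measured speeds.

    Returns density (vehicles per lane-km), mean speed (km/h), flow
    (vehicles/hour via q = k * v summed over lanes) and a coarse
    congestion label based on illustrative thresholds.
    """
    density = vehicle_count / (section_km * lanes)
    mean_speed = sum(speeds_kmh) / len(speeds_kmh) if speeds_kmh else 0.0
    flow = density * mean_speed * lanes
    level = "congested" if mean_speed < 20 or density > 60 else "free-flow"
    return {"density": density, "mean_speed": mean_speed, "flow": flow, "level": level}
```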
9. The traffic situation awareness prediction system based on the multi-modal traffic big model according to claim 8, wherein the traffic prediction module comprises:
the congested road section prediction module is used for predicting traffic-congested road sections according to the converging and diverging feature perception of vehicles at the intersection to obtain traffic congestion road section prediction information;
and the accident prediction module is used for predicting traffic accident-prone locations according to the traffic accident-prone location perception to obtain traffic accident-prone location prediction information.
10. The traffic situation awareness prediction system based on the multi-modal traffic big model according to claim 9, wherein the traffic prediction module further comprises:
the parked-vehicle prediction module is used for predicting the number of parked vehicles according to the berth feature perception, the vehicle feature perception and the traffic flow feature perception to obtain parked-vehicle prediction information;
and the management module comprises:
the parking space management module is used for increasing or decreasing the number of parking spaces according to the parked-vehicle prediction information.
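The increase or decrease of parking spaces in claim 10 could be driven by a target-occupancy rule. A toy sketch follows; the 85% target occupancy, the function name and the sign convention are all assumptions for illustration only.

```python
import math

def berth_adjustment(current_berths, predicted_parked, target_occupancy=0.85):
    """Return how many berths to add (positive) or remove (negative) so the
    predicted parked-vehicle count sits at the target occupancy."""
    needed = math.ceil(predicted_parked / target_occupancy)
    return needed - current_berths
```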
CN202311083594.2A 2023-08-25 2023-08-25 Traffic situation awareness prediction method and system based on multi-mode traffic big model Pending CN117133122A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311083594.2A CN117133122A (en) 2023-08-25 2023-08-25 Traffic situation awareness prediction method and system based on multi-mode traffic big model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311083594.2A CN117133122A (en) 2023-08-25 2023-08-25 Traffic situation awareness prediction method and system based on multi-mode traffic big model

Publications (1)

Publication Number Publication Date
CN117133122A true CN117133122A (en) 2023-11-28

Family

ID=88854023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311083594.2A Pending CN117133122A (en) 2023-08-25 2023-08-25 Traffic situation awareness prediction method and system based on multi-mode traffic big model

Country Status (1)

Country Link
CN (1) CN117133122A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118298633A * 2024-04-11 2024-07-05 Southeast University Real-time human-vehicle conflict sensing and early warning system in campus scenes based on radar-vision fusion
CN118840493A (en) * 2024-09-20 2024-10-25 南京元时空地理信息技术有限公司 Holographic intersection three-dimensional generation method and system based on laser point cloud technology
CN118840493B (en) * 2024-09-20 2025-01-24 南京元时空地理信息技术有限公司 A method and system for generating three-dimensional holographic intersection based on laser point cloud technology
CN119152450A (en) * 2024-11-11 2024-12-17 深圳市城市交通规划设计研究中心股份有限公司 Complex scene traffic network congestion identification method based on unmanned aerial vehicle and data fusion

Similar Documents

Publication Publication Date Title
Gandhi et al. Pedestrian protection systems: Issues, survey, and challenges
CN117133122A (en) Traffic situation awareness prediction method and system based on multi-mode traffic big model
US20170011625A1 (en) Roadway sensing systems
CN113345237A (en) Lane-changing identification and prediction method, system, equipment and storage medium for extracting vehicle track by using roadside laser radar data
Abdel-Aty et al. Using closed-circuit television cameras to analyze traffic safety at intersections based on vehicle key points detection
US11914041B2 (en) Detection device and detection system
CN114781479A (en) Traffic incident detection method and device
KR102122850B1 (en) Solution for analysis road and recognition vehicle license plate employing deep-learning
Wang et al. Ips300+: a challenging multi-modal data sets for intersection perception system
CN115618932A (en) Traffic incident prediction method, device and electronic equipment based on networked automatic driving
Zheng Developing a traffic safety diagnostics system for unmanned aerial vehicles using deep learning algorithms
Liang et al. Traffic incident detection based on a global trajectory spatiotemporal map
Gupta et al. Dynamic object detection using sparse LiDAR data for autonomous machine driving and road safety applications
Llorca et al. Traffic data collection for floating car data enhancement in V2I networks
RU2770145C1 (en) Device and system for registration of objects adjacent to highways
Hnoohom et al. The video-based safety methodology for pedestrian crosswalk safety measured: The case of Thammasat University, Thailand
Trivedi et al. A vision-based real-time adaptive traffic light control system using vehicular density value and statistical block matching approach
CN117058512A (en) Multi-mode data fusion method and system based on traffic big model
CN112927514B (en) Prediction method and system for motor vehicle yellow light running behavior based on 3D lidar
CN116129675A (en) Method, device and equipment for early warning of collision between people and vehicles
CN114037962A (en) Vehicle collision prediction method, device, electronic device and storage medium
Kolcheck et al. Visual counting of traffic flow from a car via vehicle detection and motion analysis
Singh et al. Detection of vacant parking spaces through the use of convolutional neural network
Liu et al. Ubiquitous sensing for smart cities with autonomous vehicles
Manikandan et al. Energy-aware vehicle/pedestrian detection and close movement alert at nighttime in dense slow traffic on Indian urban roads using a depth camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination