CN115171371A - Cooperative type road intersection passing method and device - Google Patents
- Publication number
- CN115171371A (application number CN202210684180.4A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- intelligent networked
- information
- intelligent
- vehicles
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0108—Measuring and analyzing of parameters relative to traffic conditions based on the source of data
- G08G1/0112—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
- G08G1/0116—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
- G08G1/0137—Measuring and analyzing of parameters relative to traffic conditions for specific applications
- G08G1/0145—Measuring and analyzing of parameters relative to traffic conditions for specific applications for active traffic flow control
Abstract
The application provides a cooperative road intersection passing method and device. The method includes: obtaining first driving information of m vehicles located in each vehicle sensing area through road side sensing devices arranged at a non-signal-controlled road intersection; and obtaining second driving information of each intelligent networked vehicle through the vehicle-mounted sensing device of each intelligent networked vehicle located in each vehicle sensing area. From this information, the first driving state information and first driving intention information of each non-intelligent networked vehicle, as well as the second driving state information and second driving intention information of each intelligent networked vehicle, can be determined. Traffic scheduling information for each intelligent networked vehicle is then generated based on the first driving state information, first driving intention information, second driving state information and second driving intention information, so that each intelligent networked vehicle can be guided and scheduled more accurately through the traffic scheduling information, thereby effectively ensuring the passing safety of the non-signal-controlled road intersection.
Description
Technical Field
The application relates to the technical field of vehicle-road cooperation, and in particular to a cooperative road intersection passing method and device.
Background
With the rapid development of the social economy, the number of automobiles in use is increasing year by year, and the intelligentization and networking of automobiles have become development trends of the automobile industry; meanwhile, the problem of road traffic safety is increasingly prominent. Road intersections serve as the nodes connecting roads. Because a road intersection carries multiple traffic flow directions and a large number of converging vehicles, its traffic conditions are relatively complex, so road intersections, especially non-signal-controlled road intersections (that is, road intersections without signal lamps), are places where road safety requires particular attention.
At the present stage, when a vehicle passes through a non-signal-controlled road intersection, the driver needs to make a timely and accurate judgment of the traffic conditions at the intersection to ensure that the vehicle passes through safely. However, this mode relies mainly on vehicle-mounted devices (such as a vehicle-mounted radar and a vehicle-mounted camera) and on the driver's vision and hearing, and the state information of each vehicle within the range of the non-signal-controlled road intersection cannot be acquired in a timely and accurate manner. As a result, vehicle passing efficiency at the non-signal-controlled road intersection is low, and traffic accidents may even be caused to a certain degree, so the passing safety of vehicles at the non-signal-controlled road intersection is low.
In summary, a cooperative road intersection passing method is needed to effectively ensure the passing safety of non-signal-controlled road intersections.
Disclosure of Invention
The application provides a cooperative road intersection passing method and device, which are used to effectively ensure the passing safety of a non-signal-controlled road intersection.
In a first aspect, an exemplary embodiment of the present application provides a cooperative intersection passing method, including:
for any non-signal-controlled road intersection, acquiring first driving information of m vehicles located in each vehicle sensing area through road side sensing devices arranged at the non-signal-controlled road intersection, and acquiring second driving information of each intelligent networked vehicle through the vehicle-mounted sensing device of each intelligent networked vehicle located in each vehicle sensing area;
determining first driving state information and first driving intention information of the m vehicles based on the first driving information of the m vehicles, and determining second driving state information and second driving intention information of each intelligent networked vehicle based on the second driving information of each intelligent networked vehicle;
determining first driving state information and first driving intention information of each non-intelligent networked vehicle among the m vehicles;
generating traffic scheduling information for each intelligent networked vehicle located in each vehicle sensing area based on first driving state information and first driving intention information of each non-intelligent networked vehicle and second driving state information and second driving intention information of each intelligent networked vehicle;
and scheduling the intelligent networked vehicles according to the traffic scheduling information.
In the above technical solution, in order to provide more accurate traffic scheduling information for each intelligent networked vehicle located in each vehicle sensing area of the non-signal-controlled road intersection, and thereby guide each such vehicle to pass through the intersection efficiently and safely, the driving state information and driving intention information of each intelligent networked vehicle in each vehicle sensing area need to be sensed accurately and in time, and so do the driving state information and driving intention information of each non-intelligent networked vehicle located in each vehicle sensing area. By fusing the driving state information and driving intention information of the intelligent networked vehicles with those of the non-intelligent networked vehicles, effective support can be provided for more accurately guiding and scheduling each intelligent networked vehicle located in each vehicle sensing area to pass through the non-signal-controlled road intersection efficiently and safely.
Specifically, for any non-signal-controlled road intersection, the first driving information of the m vehicles located in each vehicle sensing area is acquired through the road side sensing devices arranged at the intersection, and the second driving information of each intelligent networked vehicle is acquired through the vehicle-mounted sensing device of each intelligent networked vehicle located in each vehicle sensing area. On this basis, the first driving state information and first driving intention information of each non-intelligent networked vehicle among the m vehicles can be accurately determined, so that the driving state information and driving intention information of every vehicle located in each vehicle sensing area of the intersection can be obtained more comprehensively. Traffic scheduling information for each intelligent networked vehicle in each vehicle sensing area can then be generated based on the first driving state information and first driving intention information of each non-intelligent networked vehicle and the second driving state information and second driving intention information of each intelligent networked vehicle, so that the generated traffic scheduling information better conforms to the actual traffic conditions of the non-signal-controlled road intersection and better meets its requirement for efficient and safe passage.
Then, through the traffic scheduling information, the intelligent networked vehicles in each vehicle sensing area can be guided and scheduled more accurately, so that the vehicle passing efficiency of the non-signal-controlled road intersection can be effectively improved and its passing safety effectively ensured.
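As a hedged illustration of the flow just described, the sketch below fuses the sensed information of intelligent networked vehicles and non-intelligent networked vehicles and produces a scheduling order. The `VehicleInfo` structure, the distance-to-stop-line ordering rule, and every identifier are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class VehicleInfo:
    vehicle_id: str
    is_icv: bool      # True for an intelligent networked vehicle
    state: dict       # driving state information (positions, speeds, ...)
    intention: str    # driving intention information ("left", "straight", ...)

def generate_schedule(vehicles):
    """Fuse the state/intention of all sensed vehicles and return the IDs
    of the intelligent networked vehicles in their scheduled passing order.
    Ordering by distance to the stop line is a placeholder for the preset
    passing rule the patent leaves unspecified."""
    ordered = sorted(vehicles, key=lambda v: v.state["dist_to_stop_line"])
    return [v.vehicle_id for v in ordered if v.is_icv]

vehicles = [
    VehicleInfo("A", True,  {"dist_to_stop_line": 30.0}, "straight"),
    VehicleInfo("B", False, {"dist_to_stop_line": 12.0}, "left"),
    VehicleInfo("C", True,  {"dist_to_stop_line": 20.0}, "right"),
]
print(generate_schedule(vehicles))  # → ['C', 'A']
```

Only intelligent networked vehicles appear in the returned order, because only they can receive scheduling instructions; the sensed non-intelligent networked vehicle "B" still influences where the others fall in the sequence.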
In some exemplary embodiments, determining the first driving state information and the first driving intention information of the m vehicles based on the first driving information of the m vehicles includes:
for any vehicle sensing area, acquiring a vehicle sensing area image of the vehicle sensing area through a video image acquisition device arranged at the non-signal-controlled road intersection for the vehicle sensing area, identifying the vehicle sensing area image, and determining first driving intention information of at least one vehicle included in the vehicle sensing area image, so as to determine the first driving intention information of the m vehicles in each vehicle sensing area;
and acquiring, through a radar detection device arranged at the non-signal-controlled road intersection for the vehicle sensing area, driving state data of at least one vehicle located in the vehicle sensing area, and determining first driving state information of the at least one vehicle according to the driving state data of the at least one vehicle, so as to determine the first driving state information of the m vehicles in each vehicle sensing area.
In this technical scheme, for each vehicle sensing area of the non-signal-controlled road intersection, all targets contained in the vehicle sensing area can be photographed by the video image acquisition device arranged for that area, so as to obtain a vehicle sensing area image; by identifying the vehicle sensing area image, the driving intention of at least one vehicle contained in it can be accurately recognized. Meanwhile, the radar detection device arranged for the vehicle sensing area can detect at least one vehicle contained in the area and thereby acquire that vehicle's driving state data; by processing these data, the driving state information of the at least one vehicle (such as vehicle speed, vehicle acceleration, driving direction, heading angle, and longitude and latitude coordinates) can be accurately determined. The scheme thus provides effective data support for the subsequent identification of the driving state information and driving intention information of the non-intelligent networked vehicles.
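As a rough sketch of how driving state information might be derived from roadside radar data, the helper below estimates speed and heading angle from two successive track positions in a local east-north frame. The patent does not specify this computation; the function and frame convention are illustrative assumptions:

```python
import math

def state_from_radar_track(p0, p1, dt):
    """Estimate speed (m/s) and heading angle (degrees, clockwise from
    north) from two successive radar positions (east, north) in metres,
    observed dt seconds apart. Hypothetical helper; the patent only says
    that state information such as speed and heading angle is derived
    from the radar's driving state data."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    speed = math.hypot(dx, dy) / dt
    heading = math.degrees(math.atan2(dx, dy)) % 360.0
    return speed, heading

speed, heading = state_from_radar_track((0.0, 0.0), (0.0, 10.0), 1.0)
# 10 m/s heading due north (0 degrees)
```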
In some exemplary embodiments, identifying the vehicle sensing area image and determining the first driving intention information of at least one vehicle included in the vehicle sensing area image includes:
performing target detection on the vehicle sensing area image, and determining the image center position coordinates and image area size of at least one vehicle in the vehicle sensing area image;
for each vehicle of the at least one vehicle, taking the image center position coordinates of the vehicle as a cutting reference point, and cutting the image area where the vehicle is located out of the vehicle sensing area image according to the image area size of the vehicle in the vehicle sensing area image;
and performing intention detection on the image area where the vehicle is located, and determining the first driving intention information of the vehicle, so as to determine the first driving intention information of the at least one vehicle.
In the above technical solution, for the vehicle sensing area image captured in each vehicle sensing area, performing target detection on the image yields, for at least one vehicle included in it, the vehicle type, the image center position coordinates, and the image area size (that is, the area formed by the pixel length and width of the vehicle in the vehicle sensing area image). Then, for each vehicle, by taking its image center position coordinates as the cutting reference point, the image area where the vehicle is located can be accurately cut out of the vehicle sensing area image according to the vehicle's image area size. Intention detection on that image area then accurately identifies the driving intention of the vehicle, so that the driving intention of at least one vehicle included in the vehicle sensing area image can be detected, providing effective data support for the subsequent identification of the driving intention of non-intelligent networked vehicles.
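The cutting step described above can be illustrated with a small sketch: the vehicle's image center position coordinates serve as the cutting reference point, and the image area size gives the width and height of the cut. The list-of-lists image and all names are illustrative assumptions:

```python
def crop_vehicle_region(image, center, size):
    """Cut the image area of one vehicle out of the vehicle sensing area
    image, using its image-center coordinates as the cutting reference
    point and its image area size as the patch dimensions.
    `image` is a row-major 2-D list of pixel values."""
    (cx, cy), (w, h) = center, size
    x0, y0 = cx - w // 2, cy - h // 2
    return [row[x0:x0 + w] for row in image[y0:y0 + h]]

# A 10x10 toy "image" whose pixel value encodes its position.
img = [[10 * r + c for c in range(10)] for r in range(10)]
patch = crop_vehicle_region(img, center=(5, 5), size=(4, 4))
# patch is the 4x4 region centred on pixel (5, 5)
```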
In some exemplary embodiments, identifying the vehicle sensing area image and determining the first driving intention information of at least one vehicle included in the vehicle sensing area image includes:
performing intention detection on the vehicle sensing area image, and determining the driving intention information and image center position coordinates of at least one vehicle included in the vehicle sensing area image, so as to determine the first driving intention information of the at least one vehicle.
In this technical scheme, for the vehicle sensing area image captured in each vehicle sensing area, performing intention detection directly on the vehicle sensing area image accurately identifies the driving intention information and image center position coordinates of at least one vehicle in the image, providing effective data support for the subsequent identification of the driving intention of non-intelligent networked vehicles.
In some exemplary embodiments, determining the second driving state information and the second driving intention information of each intelligent networked vehicle based on the second driving information of each intelligent networked vehicle includes:
for any vehicle sensing area, acquiring driving state data and driving intention data of each intelligent networked vehicle located in the vehicle sensing area through the vehicle-mounted sensing device of each such intelligent networked vehicle;
and determining second driving state information of each intelligent networked vehicle according to its driving state data, and determining second driving intention information of each intelligent networked vehicle according to its driving intention data, so as to determine the second driving state information and second driving intention information of each intelligent networked vehicle.
In this technical scheme, for a given vehicle sensing area, the driving intention data and driving state data of the intelligent networked vehicles can be acquired timely and accurately through the vehicle-mounted sensing device of each intelligent networked vehicle in the area. The driving intention of each intelligent networked vehicle can then be obtained by analyzing its driving intention data, and its driving state by processing its driving state data, which provides effective support for accurately determining the traffic scheduling order of the intelligent networked vehicles and data support for effectively ensuring the passing safety of the non-signal-controlled road intersection.
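A minimal sketch of how the on-board data might be turned into second driving state information and second driving intention information is shown below. The field names and the turn-signal rule are assumptions; the patent only states that the vehicle-mounted sensing device supplies driving state data and driving intention data:

```python
def second_info_from_onboard(gnss, can, turn_signal):
    """Assemble second driving state information from on-board GNSS/CAN
    data and derive second driving intention information from the
    turn-signal state. All field names are illustrative."""
    state = {
        "lat": gnss["lat"], "lon": gnss["lon"],
        "heading": gnss["heading"], "speed": can["speed"],
    }
    intention = turn_signal if turn_signal in ("left", "right") else "straight"
    return state, intention

state, intent = second_info_from_onboard(
    {"lat": 31.23, "lon": 121.47, "heading": 90.0},
    {"speed": 8.5},
    turn_signal="left")
# intent == "left"
```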
In some exemplary embodiments, determining the first driving intention information of each non-intelligent networked vehicle among the m vehicles includes:
for any vehicle sensing area, acquiring the license plate number of at least one intelligent networked vehicle through the vehicle-mounted device of the at least one intelligent networked vehicle located in the vehicle sensing area, and determining the license plate number of at least one vehicle in the vehicle sensing area image by performing vehicle attribute identification on the vehicle sensing area image of the vehicle sensing area;
for the license plate number of any vehicle of the at least one vehicle, if the license plate number of the vehicle does not exist among the license plate numbers of the at least one intelligent networked vehicle, determining that the vehicle is a non-intelligent networked vehicle;
and determining the first driving intention information of the non-intelligent networked vehicle, from the first driving intention information of the at least one vehicle included in the vehicle sensing area image, according to the image center position coordinates of the non-intelligent networked vehicle in the vehicle sensing area image, so as to determine the first driving intention information of each non-intelligent networked vehicle;
and determining the first driving state information of each non-intelligent networked vehicle among the m vehicles includes:
for any non-intelligent networked vehicle in the vehicle sensing area, determining the longitude and latitude coordinates of the non-intelligent networked vehicle in a radar coordinate system, according to a coordinate conversion rule between the radar coordinate system and a video image coordinate system, based on the image center position coordinates of the non-intelligent networked vehicle in the vehicle sensing area image;
and determining the first driving state information of the non-intelligent networked vehicle, from the first driving state information of the at least one vehicle in the vehicle sensing area, according to the longitude and latitude coordinates of the non-intelligent networked vehicle in the radar coordinate system, so as to determine the first driving state information of each non-intelligent networked vehicle.
In this technical scheme, for a given vehicle sensing area, information such as the license plate number, longitude and latitude coordinates, vehicle type and vehicle color of at least one intelligent networked vehicle can be acquired timely and accurately through the vehicle-mounted device of the at least one intelligent networked vehicle located in the area, while vehicle attribute identification performed on the vehicle sensing area image identifies information such as the license plate number, vehicle type and vehicle color of at least one vehicle included in the image. For any vehicle of the at least one vehicle, if its license plate number does not exist among the license plate numbers of the at least one intelligent networked vehicle, the vehicle can be determined to be a non-intelligent networked vehicle; its first driving intention information can then be accurately determined, via its image center position coordinates in the vehicle sensing area image, from the first driving intention information of the at least one vehicle in the vehicle sensing area.
Meanwhile, for any non-intelligent networked vehicle in the vehicle sensing area, its longitude and latitude coordinates in the radar coordinate system can be determined according to the coordinate conversion rule between the radar coordinate system and the video image coordinate system, based on its image center position coordinates in the vehicle sensing area image; its first driving state information can then be accurately determined from the first driving state information of the at least one vehicle in the vehicle sensing area according to those longitude and latitude coordinates. The scheme can therefore accurately obtain the first driving state information and first driving intention information of each non-intelligent networked vehicle, providing effective data support for subsequently determining the traffic scheduling information for each intelligent networked vehicle more accurately.
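The plate-matching and center-matching steps above can be sketched as follows. The dictionary shapes, plate strings, and the nearest-center rule for associating an intention with a vehicle are illustrative assumptions:

```python
def identify_non_icvs(detected, icv_plates):
    """Treat every camera-detected vehicle whose plate was not reported by
    any on-board unit as a non-intelligent networked vehicle (the
    plate-matching rule above). `detected` maps plate -> image-center
    coordinates."""
    return {plate: c for plate, c in detected.items() if plate not in icv_plates}

def intention_at(center, intent_list):
    """Look up the detected intention whose image center is nearest to the
    given image-center coordinates."""
    cx, cy = center
    return min(intent_list,
               key=lambda item: (item[0][0] - cx) ** 2 + (item[0][1] - cy) ** 2)[1]

detected = {"A123": (120, 80), "B456": (300, 95), "C789": (510, 60)}
non_icvs = identify_non_icvs(detected, icv_plates={"A123", "C789"})
intents = [((118, 82), "straight"), ((302, 93), "left"), ((508, 61), "right")]
# non_icvs == {"B456": (300, 95)}; its nearest intention entry is "left"
```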
In some exemplary embodiments, determining the first driving state information of each non-intelligent networked vehicle of the m vehicles comprises:
for any vehicle sensing area, acquiring the longitude and latitude coordinates of at least one intelligent networked vehicle through the vehicle-mounted sensing device of the at least one intelligent networked vehicle located in the vehicle sensing area;
determining the longitude and latitude coordinates of the radar detection device arranged at the non-signal-controlled road intersection for the vehicle sensing area, and acquiring, through the radar detection device, the relative distance between at least one vehicle located in the vehicle sensing area and the radar detection device;
determining the longitude and latitude coordinates of the at least one vehicle according to the longitude and latitude coordinates of the radar detection device and the relative distance between the at least one vehicle and the radar detection device;
and, for the longitude and latitude coordinates of each vehicle of the at least one vehicle, if the differences between the longitude and latitude coordinates of the vehicle and those of each of the at least one intelligent networked vehicle do not meet a set threshold, determining that the vehicle is a non-intelligent networked vehicle, and determining the first driving state information of the non-intelligent networked vehicle from the first driving state information of the at least one vehicle according to the longitude and latitude coordinates of the non-intelligent networked vehicle, so as to determine the first driving state information of each non-intelligent networked vehicle;
and determining the first driving intention information of each non-intelligent networked vehicle among the m vehicles includes:
for any non-intelligent networked vehicle in the vehicle sensing area, determining the image center position coordinates of the non-intelligent networked vehicle in a video image coordinate system, according to a coordinate conversion rule between a radar coordinate system and the video image coordinate system, based on the longitude and latitude coordinates of the non-intelligent networked vehicle in the radar coordinate system;
and determining the first driving intention information of the non-intelligent networked vehicle, from the first driving intention information of the at least one vehicle in the vehicle sensing area, according to the image center position coordinates of the non-intelligent networked vehicle in the video image coordinate system, so as to determine the first driving intention information of each non-intelligent networked vehicle in each vehicle sensing area.
In this technical scheme, for a given vehicle sensing area, information such as the license plate number, longitude and latitude coordinates, vehicle type and vehicle color of at least one intelligent networked vehicle can be acquired timely and accurately through the vehicle-mounted sensing device of the at least one intelligent networked vehicle located in the area. After the longitude and latitude coordinates of the radar detection device arranged for the vehicle sensing area are determined, the relative distance between at least one vehicle located in the area and the radar detection device can be acquired through the radar detection device, and the longitude and latitude coordinates of the at least one vehicle can be calculated. Then, for each vehicle of the at least one vehicle, the differences between its longitude and latitude coordinates and those of each intelligent networked vehicle are calculated: if none of the differences meets the set threshold, the vehicle is determined to be a non-intelligent networked vehicle; if the difference for some intelligent networked vehicle meets the set threshold, the vehicle is determined to be an intelligent networked vehicle.
Meanwhile, for any non-intelligent networked vehicle in the vehicle sensing area, its image center position coordinates in the vehicle sensing area image can be determined according to the coordinate conversion rule between the radar coordinate system and the video image coordinate system, based on its longitude and latitude coordinates in the radar coordinate system; its first driving intention information can then be accurately determined from the first driving intention information of the at least one vehicle in the vehicle sensing area according to those image center position coordinates. The scheme can therefore accurately obtain the first driving state information and first driving intention information of each non-intelligent networked vehicle, providing effective data support for subsequently determining the traffic scheduling information for each intelligent networked vehicle more accurately.
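The coordinate steps above can be sketched with a flat-earth approximation: the radar's longitude and latitude plus a metric offset give a vehicle's longitude and latitude, and a vehicle is classified as non-intelligent networked when no intelligent networked vehicle reports a position within the set threshold. The 2 m threshold and the approximation itself are assumptions for illustration:

```python
import math

EARTH_R = 6_371_000.0  # mean Earth radius in metres

def latlon_from_radar(radar_lat, radar_lon, east_m, north_m):
    """Convert a radar-relative east/north offset (metres) into
    latitude/longitude via a local flat-earth approximation; a simplified
    stand-in for the coordinate conversion the patent assumes."""
    lat = radar_lat + math.degrees(north_m / EARTH_R)
    lon = radar_lon + math.degrees(east_m / (EARTH_R * math.cos(math.radians(radar_lat))))
    return lat, lon

def is_non_icv(vehicle_ll, icv_lls, thresh_m=2.0):
    """A vehicle is a non-intelligent networked vehicle if no reported
    intelligent networked vehicle position lies within the set threshold
    (here a hypothetical 2 m)."""
    def dist(a, b):
        dlat = math.radians(b[0] - a[0]) * EARTH_R
        dlon = math.radians(b[1] - a[1]) * EARTH_R * math.cos(math.radians(a[0]))
        return math.hypot(dlat, dlon)
    return all(dist(vehicle_ll, p) > thresh_m for p in icv_lls)

v = latlon_from_radar(31.0, 121.0, 10.0, 0.0)
# 0.5 m from a reported ICV position -> matched, so not a non-ICV
matched = not is_non_icv(v, [latlon_from_radar(31.0, 121.0, 10.5, 0.0)])
```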
In some exemplary embodiments, generating the traffic scheduling information for each intelligent networked vehicle located in each vehicle sensing area based on the first driving state information and first driving intention information of each non-intelligent networked vehicle and the second driving state information and second driving intention information of each intelligent networked vehicle includes:
generating, according to a preset vehicle passing rule, the passing priority of each intelligent networked vehicle and each non-intelligent networked vehicle in each vehicle sensing area at the non-signal-controlled road intersection, based on the first driving state information and first driving intention information of each non-intelligent networked vehicle and the second driving state information and second driving intention information of each intelligent networked vehicle;
and generating a traffic scheduling instruction for each intelligent networked vehicle according to the passing priority of each intelligent networked vehicle and each non-intelligent networked vehicle at the non-signal-controlled road intersection, the traffic scheduling instruction being used to instruct each intelligent networked vehicle to pass through the non-signal-controlled road intersection in turn according to the passing order.
In this technical scheme, according to the preset vehicle passing rule, the passing priorities of the non-intelligent networked vehicles and the intelligent networked vehicles can be accurately generated based on the first driving state information and first driving intention information of each non-intelligent networked vehicle and the second driving state information and second driving intention information of each intelligent networked vehicle. Through these passing priorities, the traffic scheduling instructions for the intelligent networked vehicles can be generated more accurately, so that each intelligent networked vehicle can be guided to pass through the non-signal-controlled road intersection according to the passing order, thereby effectively improving the vehicle passing efficiency of the non-signal-controlled road intersection.
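One plausible sketch of priority generation and instruction emission is given below; the patent leaves the preset vehicle passing rule unspecified, so the straight-before-turning rule and all names here are assumptions:

```python
def passing_order(vehicles):
    """Order all sensed vehicles by a stand-in passing rule: straight-going
    traffic before right turns before left turns, ties broken by distance
    to the stop line."""
    rank = {"straight": 0, "right": 1, "left": 2}
    return sorted(vehicles, key=lambda v: (rank[v["intention"]], v["dist"]))

def scheduling_instructions(vehicles):
    """Emit one pass instruction per intelligent networked vehicle, in
    priority order; non-intelligent networked vehicles hold a slot in the
    sequence but receive no instruction."""
    return [f"pass:{v['id']}" for v in passing_order(vehicles) if v["icv"]]

fleet = [
    {"id": "V1", "icv": True,  "intention": "left",     "dist": 5.0},
    {"id": "V2", "icv": False, "intention": "straight", "dist": 9.0},
    {"id": "V3", "icv": True,  "intention": "straight", "dist": 14.0},
]
print(scheduling_instructions(fleet))  # → ['pass:V3', 'pass:V1']
```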
In some exemplary embodiments, after generating the traffic scheduling instruction for each intelligent networked vehicle, the method further includes:
if it is detected that at least one non-intelligent networked vehicle has passed the stop waiting line of the lane where it is located, sending prompt information to at least one intelligent networked vehicle currently passing through the non-signal-controlled road intersection, and sending waiting-to-pass information to the other intelligent networked vehicles that have not yet passed through the non-signal-controlled road intersection; the prompt information includes the first driving state information and first driving intention information of the at least one non-intelligent networked vehicle, and is used to prompt the at least one intelligent networked vehicle to decelerate and avoid the at least one non-intelligent networked vehicle according to that information; the waiting-to-pass information is used to instruct the other intelligent networked vehicles that have not yet passed through the non-signal-controlled road intersection to stay in their lanes and wait to pass.
In this technical scheme, in the process of guiding each intelligent networked vehicle to pass through the non-signal controlled road intersection in the passing sequence, the motion state of each non-intelligent networked vehicle can be detected in real time. If it is detected that at least one non-intelligent networked vehicle has passed the stop waiting line of the lane where it is located, prompt information is sent to at least one intelligent networked vehicle passing through the non-signal controlled road intersection. The prompt information comprises the running state information and running intention information of the at least one non-intelligent networked vehicle, so that the at least one intelligent networked vehicle passing through the intersection can be prompted to decelerate and yield to the at least one non-intelligent networked vehicle according to that vehicle's running state information and running intention information. Meanwhile, waiting passing information is sent to the other intelligent networked vehicles that have not passed through the non-signal controlled road intersection, to instruct them to remain in their respective lanes and wait to pass.
In this way, according to the scheme, accurate guidance is performed on each intelligent networked vehicle according to the first running state information and the first running intention information of each non-intelligent networked vehicle and the second running state information and the second running intention information of each intelligent networked vehicle, so that the vehicle passing efficiency of the non-signal control road intersection can be effectively improved, and the passing safety of the non-signal control road intersection can be effectively ensured.
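The message dispatch described in this embodiment can be sketched minimally as follows; the message shapes ("prompt"/"wait") and the function name are illustrative assumptions, not fixed by the scheme:

```python
def dispatch_messages(non_conn_crossing, conn_in_intersection, conn_waiting):
    """Sketch of the dispatch rule above.

    non_conn_crossing:    ids of non-intelligent networked vehicles detected
                          past their stop waiting lines.
    conn_in_intersection: intelligent networked vehicles already passing
                          through the intersection -> they get a prompt
                          carrying the crossing vehicles' identities.
    conn_waiting:         intelligent networked vehicles still at their stop
                          lines -> they get a wait instruction.
    """
    msgs = {}
    if non_conn_crossing:  # at least one non-connected vehicle crossed its line
        for vid in conn_in_intersection:
            msgs[vid] = ("prompt", list(non_conn_crossing))
        for vid in conn_waiting:
            msgs[vid] = ("wait", None)
    return msgs
```

In the actual scheme the prompt would carry the full first running state and intention information rather than just vehicle identities.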
In a second aspect, an exemplary embodiment of the present application provides a cooperative intersection passing device, including:
an acquisition unit, configured to acquire first running information of m vehicles in each vehicle sensing area through each road side sensing device arranged at any non-signal controlled road intersection, and to acquire second running information of each intelligent networked vehicle through the vehicle-mounted sensing equipment of each intelligent networked vehicle in each vehicle sensing area;
a processing unit, configured to determine first driving state information and first driving intention information of the m vehicles based on first driving information of the m vehicles, and determine second driving state information and second driving intention information of each intelligent networked vehicle based on second driving information of each intelligent networked vehicle; determining first running state information and first running intention information of each non-intelligent networked vehicle in the m vehicles; generating traffic scheduling information for each intelligent networked vehicle located in each vehicle sensing area based on first driving state information and first driving intention information of each non-intelligent networked vehicle and second driving state information and second driving intention information of each intelligent networked vehicle; and scheduling each intelligent internet vehicle according to the traffic scheduling information.
In a third aspect, embodiments of the present application provide a computing device comprising at least one processor and at least one memory, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform the cooperative intersection passage method according to any of the first aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program executable by a computing device, the program, when run on the computing device, causing the computing device to perform the cooperative intersection passing method according to any of the first aspects.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic diagram of a possible system architecture according to some embodiments of the present application;
fig. 2 is a schematic view of an application scenario of a cooperative intersection passing method according to some embodiments of the present application;
FIG. 3 is a schematic diagram of a sensing area of a vehicle where roadside sensing devices are located according to some embodiments of the present application;
fig. 4 is a schematic flow chart of a cooperative intersection passing method according to some embodiments of the present disclosure;
fig. 5 is a schematic diagram of a roadside sensing device for acquiring motion attribute data of a vehicle and a vehicle sensing region image according to some embodiments of the present application;
fig. 6 is a schematic diagram of an intelligent networked vehicle reporting driving state data and driving intention data through a vehicle-mounted device according to some embodiments of the present disclosure;
FIG. 7 is a schematic flow chart illustrating a process for identifying a driving intent of a vehicle included in a vehicle sensing region image according to some embodiments of the present disclosure;
FIG. 8 is a schematic illustration of detecting an image of a perception area of a vehicle according to some embodiments of the present disclosure;
FIG. 9 is a schematic diagram of an intelligent networked vehicle in a vehicle sensing area according to some embodiments of the present disclosure;
FIG. 10 is a schematic diagram illustrating an intelligent networked vehicle driving intention determination according to some embodiments of the present disclosure;
FIG. 11 is a schematic view of a method for determining driving intent of a non-intelligent networked vehicle according to some embodiments of the present disclosure;
FIG. 12 is a schematic flow chart illustrating a method for identifying a non-intelligent networked vehicle according to some embodiments of the present disclosure;
FIG. 13 is a schematic illustration of detecting a perception area of a vehicle according to some embodiments of the present application;
FIG. 14 is a schematic flow chart of another method for identifying a non-intelligent networked vehicle provided by some embodiments of the present application;
FIG. 15 is a schematic view of a scenario for identifying a non-intelligent networked vehicle according to some embodiments of the present application;
FIG. 16 is a schematic illustration of an intelligent networked vehicle with a right-turn driving intention traveling at a non-signal controlled intersection according to some embodiments of the present application;
FIG. 17 is a schematic illustration of an intelligent networked vehicle with a straight-ahead driving intention traveling at a non-signal controlled intersection according to some embodiments of the present application;
FIG. 18 is a schematic illustration of an intelligent networked vehicle with a left-turn driving intention traveling at a non-signal controlled intersection according to some embodiments of the present application;
FIG. 19 is a schematic illustration of a special vehicle with a right-turn driving intention traveling at a non-signal controlled intersection according to some embodiments of the present application;
FIG. 20 is a schematic structural diagram of a cooperative intersection passing device according to some embodiments of the present disclosure;
fig. 21 is a schematic structural diagram of a computing device according to some embodiments of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
To facilitate understanding of the embodiments of the present application, a cooperative intersection passing system architecture applicable to the embodiments of the present application is first described, taking the possible system architecture shown in fig. 1 as an example. The cooperative intersection passing system architecture can be applied to intersections such as non-signal controlled intersections (namely, intersections without signal lamps). As shown in fig. 1, the system architecture may include an in-vehicle apparatus 100 and a roadside apparatus 200.
The vehicle-mounted device 100 may include a vehicle-mounted sensing device 101, an on-board unit (OBU), a human-computer interaction unit 105, and the like. The on-board unit may include an ethernet communication unit 102, a V2X (vehicle-to-everything) communication unit 103, and a processing control unit 104. The vehicle-mounted sensing device 101 may be configured to collect status data of the intelligent networked vehicle (including driving state data and driving intention data), for example at least vehicle position data (collected by a GPS (Global Positioning System) positioning device), vehicle speed data (collected, for example, by a gyroscope sensor), vehicle acceleration data (collected, for example, by a gyroscope sensor), a heading angle, the turn-on state of the turn signal lamps, and the like; optionally, the collected data may also include front road condition sensing data (which may be acquired, for example, by a vehicle-end camera, a millimeter wave radar, and other sensing devices). The ethernet communication unit 102 may be configured to perform data interaction with the vehicle-mounted sensing device 101; the V2X communication unit 103 is configured to send vehicle driving state data and vehicle driving intention data to the roadside unit, and to receive the traffic scheduling information of the roadside device 200 for each intelligent networked vehicle located at the intersection; the processing control unit 104 is used for processing the original state data of the vehicle and controlling the work of the other units; the human-computer interaction unit 105 is used for interacting with the driver and prompting the driver with the cooperative traffic scheduling information for the intelligent networked vehicle at the intersection.
The roadside apparatus 200 may include a roadside sensing apparatus 203, a roadside unit (RSU), an information distribution unit 206 (optional), and the like. The RSU may include a V2X communication unit 201, a processing control unit 202, an ethernet communication unit 204, and a data calculation unit 205. The roadside sensing apparatus 203 may include a camera, a millimeter wave radar, and other roadside sensors, and is configured to sense the driving state data and driving intention data of each vehicle located at the intersection, mainly to determine the driving state information and driving intention information of each non-intelligent networked vehicle located at the intersection; the ethernet communication unit 204 is configured to perform data interaction with the roadside sensing apparatus 203; the V2X communication unit 201 is configured to receive, through V2X, the driving state data and driving intention data of each intelligent networked vehicle located at the intersection, and to send the traffic scheduling information for each intelligent networked vehicle located at the intersection to the vehicle-mounted device 100; the processing control unit 202 is configured to forward the data sensed by the roadside sensing apparatus 203 and the driving state data and driving intention data of each intelligent networked vehicle located at the intersection to the data calculation unit 205, and to control the work of the other units; the data calculation unit 205 is configured to determine the driving state information and driving intention information of each non-intelligent networked vehicle and of each intelligent networked vehicle, and to generate the traffic scheduling information for each intelligent networked vehicle located at the intersection based on that information; the information distribution unit 206 is configured to assist the roadside unit in notifying vehicles passing through the intersection to give way to special vehicles (such as emergency rescue vehicles and vehicles executing urgent tasks).
It should be noted that the system architecture shown in fig. 1 is only an example, and the embodiment of the present application does not limit this.
Referring to fig. 2, fig. 2 is a schematic diagram illustrating an application scenario of the cooperative intersection passing method. The application scenario is implemented based on the cooperative intersection passing system architecture shown in fig. 1. As shown in fig. 2, for a certain non-signal controlled road intersection, a roadside unit is deployed on one side of the intersection; a camera 1 and a radar 1 (such as a millimeter wave radar or a laser radar) are deployed for the north-to-south direction; a camera 2 and a radar 2 for the east-to-west direction; a camera 3 and a radar 3 for the south-to-north direction; and a camera 4 and a radar 4 for the west-to-east direction. The cameras 1 to 4 and the radars 1 to 4 are used to sense the vehicle running state information and running intention information in the different directions. The roadside unit is used for determining the running state information and running intention information of each intelligent networked vehicle and of each non-intelligent networked vehicle in the area of the non-signal controlled road intersection, generating the traffic scheduling information for each intelligent networked vehicle based on this information, and then issuing the traffic scheduling information to each intelligent networked vehicle.
For example, as shown in fig. 3, a schematic diagram is provided of the vehicle sensing area covered by the roadside sensing devices (such as a camera and a radar device) deployed in a certain direction (such as the west-to-east direction) in fig. 2. The camera (such as a video camera) can capture road area images (namely vehicle sensing area images) of the area between the vehicle sensing line and the stop waiting line of the intersection; after collection, the road area images are transmitted to the data calculation unit through the ethernet communication unit. Each vehicle target in the area between the vehicle sensing line and the stop waiting line can be detected by the radar device, so that the motion attribute data of each vehicle target (such as the relative distance, relative speed, and relative direction between the vehicle target and the radar device) are obtained; after collection, the motion attribute data are likewise transmitted to the data calculation unit through the ethernet communication unit. Taking the millimeter wave radar as an example, the radar device is a radar whose working frequency band lies in the millimeter wave band. The millimeter wave radar actively transmits electromagnetic wave signals, receives the echoes, and obtains the relative distance, relative speed, and relative direction of the vehicle target according to the time difference between transmitting and receiving the electromagnetic wave signals. The millimeter wave radar and the camera may generally be arranged on the same horizontal plane as shown in fig. 3, or may be arranged at a certain angle to each other, or at other positions, which is not specifically limited in this embodiment of the present application.
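The range relation stated above (distance from the transmit/receive time difference) can be written down directly; the Doppler relation for the radial speed is a commonly used millimeter wave radar formula added here for completeness, and the function names are illustrative:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(delta_t_s):
    """Relative distance from the transmit/receive time difference:
    the beam travels to the target and back, so d = c * dt / 2."""
    return C * delta_t_s / 2.0

def radial_speed_from_doppler(f_doppler_hz, f_carrier_hz):
    """Relative (radial) speed from the Doppler shift of the carrier,
    v = c * f_d / (2 * f_c) -- a standard radar relation, not spelled
    out in the text above."""
    return C * f_doppler_hz / (2.0 * f_carrier_hz)
```

For example, a 1 µs round trip corresponds to roughly 150 m of range, which is a plausible vehicle-sensing-line distance.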
Based on the above description, fig. 4 exemplarily shows a flow of a cooperative intersection passing method provided by the embodiment of the present application, where the flow may be executed by a cooperative intersection passing apparatus. The cooperative intersection passing method may be applied to the system architecture shown in fig. 1, and the road side device in fig. 1 may execute the cooperative intersection passing method. The cooperative intersection passing device may be a road side device, or may also be a component (such as a chip or an integrated circuit) capable of supporting the road side device to implement the functions required by the method, or may also be other electronic devices having the functions required by the method, such as a traffic control device.
As shown in fig. 4, the process specifically includes:
In this embodiment of the application, for a certain non-signal controlled intersection, first driving information (for example, vehicle sensing area images and driving state data) of m vehicles in each vehicle sensing area belonging to an area where the non-signal controlled intersection is located may be acquired through each road side sensing device arranged at the non-signal controlled intersection, where m is an integer greater than or equal to 1. For example, a set of roadside sensing devices (video image acquisition devices and radar detection devices) is arranged in each vehicle sensing area of the intersection, for example, as shown in fig. 5, taking a certain vehicle sensing area of the intersection as an example, the video image acquisition devices and the radar detection devices arranged in the vehicle sensing area are used for acquiring the first driving information of at least one vehicle located in the vehicle sensing area. The video image acquisition device (such as a video camera) and the radar detection device (such as a millimeter wave radar) may be arranged on the same horizontal plane as shown in fig. 5, or the millimeter wave radar and the camera may be arranged at a certain angle, or at other positions, which is not specifically limited in this embodiment of the present application. Based on fig. 5, a vehicle sensing area image (for example, a vehicle sensing area image including a vehicle a, a vehicle b, a vehicle c, and a vehicle d) of an area between a vehicle sensing line and a stop waiting line at a road intersection may be collected by a video camera, and motion attribute data of each vehicle target (for example, a vehicle a, a vehicle b, a vehicle c, and a vehicle d) in the area between the vehicle sensing line and the stop waiting line at the road intersection may be collected by a millimeter wave radar.
In addition, second driving information (such as vehicle driving state data and vehicle driving intention data) of each intelligent networked vehicle can be acquired through the vehicle-mounted sensing device of each intelligent networked vehicle in each vehicle sensing area belonging to the area where the non-signal controlled road intersection is located. For example, as shown in fig. 6, taking a certain vehicle sensing area as an example, the vehicle sensing area includes a vehicle 1, a vehicle 2, a vehicle 3, a vehicle 4, and a vehicle 5. Assume that the vehicle 1, the vehicle 3, and the vehicle 5 are intelligent networked vehicles, and the vehicle 2 and the vehicle 4 are non-intelligent networked vehicles. Then the vehicle 1, the vehicle 3, and the vehicle 5 may obtain their respective driving state data and driving intention data through their respective vehicle-mounted sensing devices; for example, vehicle position data may be collected through a GPS positioning device, vehicle speed data and vehicle acceleration data may be collected through a gyroscope sensor, the heading angle may be collected through a heading angle sensor, and the turn signal state of the vehicle (for example, the left turn signal is on, the right turn signal is on, or no turn signal is on) may be obtained from the driver's operation of the turn signal. Of course, the vehicles 1, 3, and 5 may also upload attribute information of the intelligent networked vehicle, such as the vehicle type, vehicle body color, vehicle logo, and license plate number, to the roadside equipment through the V2X communication unit.
For the vehicles 2 and 4, the motion attribute data are mainly obtained through the millimeter wave radar deployed for the vehicle sensing area, the vehicle sensing area images are obtained through the video camera deployed for the vehicle sensing area, and the driving intentions of the vehicles 2 and 4 are then determined by recognition performed on the vehicle sensing area images.
In the embodiment of the application, for any vehicle sensing area, a video image acquisition device for the vehicle sensing area, which is arranged at the non-signal control intersection, can acquire a vehicle sensing area image for the vehicle sensing area and identify the vehicle sensing area image, that is, first driving intention information of at least one vehicle included in the vehicle sensing area image can be determined, so that first driving intention information of m vehicles located in each vehicle sensing area can be determined. Meanwhile, the driving state data of at least one vehicle in the vehicle sensing area can be acquired through the radar detection device which is arranged at the non-signal control intersection and aims at the vehicle sensing area, and the first driving state information of at least one vehicle can be determined according to the driving state data of at least one vehicle in the vehicle sensing area, so that the first driving state information of m vehicles in each vehicle sensing area can be determined. Therefore, the scheme can provide effective data support for the subsequent identification of the driving state information and the driving intention information of the non-intelligent networked vehicle.
For the identification of the driving intention of the vehicle included in a certain vehicle sensing area image, refer to fig. 7, which is a schematic flow chart illustrating the identification of the driving intention of the vehicle included in the vehicle sensing area image according to the embodiment of the present application. As shown in fig. 7, the process may include:
For each vehicle sensing area image captured and collected, target detection may be performed on the vehicle sensing area image to detect the vehicle type of at least one vehicle included in the image, the image center position coordinates of each vehicle in the image, and the image area size (that is, the area formed by the pixel length and width of the vehicle in the vehicle sensing area image).
The image center position coordinates of the vehicle are used as the center point, and the pixel length and width of the vehicle are used as the side lengths of the crop, so that the image area where the vehicle is located is intercepted from the vehicle sensing area image. For example, given the x coordinate (x0) and y coordinate (y0) of a vehicle in the pixel coordinate system, (x0, y0) is taken as the center point and the pixel length and pixel width of the vehicle in the pixel coordinate system are taken as the side lengths, and the image area where the vehicle is located is then intercepted.
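The crop step can be sketched as follows. This is a minimal sketch; clamping to the image bounds is an added practical detail not spelled out above, and the function name is illustrative:

```python
import numpy as np

def crop_vehicle(image, cx, cy, pw, ph):
    """Crop the image region for a detected vehicle.

    (cx, cy) is the image-center position of the vehicle in pixel
    coordinates, and (pw, ph) its pixel width/height; the crop window
    is clamped to the image bounds."""
    h, w = image.shape[:2]
    x0 = max(int(cx - pw / 2), 0)
    x1 = min(int(cx + pw / 2), w)
    y0 = max(int(cy - ph / 2), 0)
    y1 = min(int(cy + ph / 2), h)
    return image[y0:y1, x0:x1]
```

The returned array is a view of the original image, so the crop itself costs no copy.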
The driving intention of the vehicle can be accurately identified by performing intention detection on the image area where the vehicle is located, so that the driving intention of at least one vehicle included in the vehicle sensing area image can be detected.
Or, the intention detection may be directly performed on the vehicle perception area image for each vehicle perception area image captured and collected, that is, the driving intention information and the image center position coordinates of at least one vehicle included in the vehicle perception area image may be accurately identified, so that the first driving intention information of the at least one vehicle may be determined.
By way of example, taking a vehicle sensing area image of a certain vehicle sensing area as an example, refer to fig. 8, a schematic diagram of detecting a vehicle sensing area image provided in an embodiment of the present application. As shown in fig. 8, after the vehicle sensing area image is acquired through the camera and transmitted to the roadside device, the roadside device may perform multi-target detection on the vehicle sensing area image through the data calculation unit based on a deep learning target detection algorithm (for example, a single-stage target detection algorithm such as YOLOv5 (You Only Look Once version 5) or SSD (Single Shot MultiBox Detector), or a two-stage target detection algorithm such as Fast R-CNN), so as to identify the image center position coordinates and the image area size of each vehicle in the vehicle sensing area image (i.e., the area formed by the pixel length and width of the vehicle in the image). Taking the YOLOv5 target detection algorithm as an example, in the embodiment of the application, vehicle target detection in the vehicle sensing area image is completed with the YOLOv5 target detection algorithm, which returns the type of each vehicle detected in the vehicle sensing area image, the position coordinates of the vehicle target in the image, and the pixel length and width of the vehicle target.
Attribute labeling of motor vehicles, non-motor vehicles, and the like may be performed on the monitoring video images acquired by the video image acquisition equipment at the road intersection to construct a vehicle detection training data set, and iterative training of the YOLOv5 algorithm is completed on the basis of this data set. In the inference process of the YOLOv5 algorithm, a video image to be detected (namely, a vehicle sensing area image) is input into the YOLOv5 algorithm, which directly returns the type of each vehicle detected in the vehicle sensing area image, the position coordinates of the vehicle target in the image, and the pixel length and width of the vehicle target.
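Detector outputs of this kind are often normalised to [0, 1]; a small helper converting such a box to the pixel units needed by the crop step might look as follows. The normalised (cx, cy, w, h) format is an assumption — a concrete YOLOv5 deployment may already return pixel values:

```python
def yolo_to_pixels(box, img_w, img_h):
    """Convert a YOLO-style normalised box (cx, cy, w, h), each in
    [0, 1], to pixel units (center x/y, pixel width/height)."""
    cx, cy, w, h = box
    return cx * img_w, cy * img_h, w * img_w, h * img_h
```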
After the position coordinates and the pixel length and width of any vehicle in the vehicle sensing area image are detected, the position coordinates of the vehicle are taken as the center point and the pixel length and width of the vehicle are taken as the side lengths, and the image area where the vehicle is located is intercepted from the vehicle sensing area image. Then, a vehicle driving intention detection algorithm (such as ResNet50) can perform intention detection on the image area where the vehicle is located, so that the driving intention of the vehicle can be identified. The ResNet50 network model may be trained in advance on a vehicle picture sample set covering the different turn signal states (for example, the left turn signal on, the right turn signal on, and neither turn signal on); that is, for each acquired vehicle picture, the turn signals of each vehicle in the picture are labeled. For example, if a certain vehicle has its left turn signal on, the vehicle is labeled with the left-turn type based on that attribute; if a certain vehicle has its right turn signal on, the vehicle is labeled with the right-turn type; and if a certain vehicle has no turn signal on, the vehicle is labeled with the straight-ahead type.
After training is completed, an intercepted vehicle image is input into the trained ResNet50 network model; the model performs feature extraction on the vehicle image and outputs the probabilities that the driving intention of the vehicle is a left turn, a right turn, or straight travel, respectively. For example, when the probability of a left turn is highest, the driving intention of the vehicle is determined to be a left turn.
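The final classification step reduces to a softmax over the three intent classes followed by an argmax. A self-contained sketch, where the class order and function name are illustrative assumptions:

```python
import math

INTENTS = ("left", "right", "straight")  # assumed class order

def intent_from_logits(logits):
    """Turn the classifier head's three raw scores into probabilities
    via a numerically stable softmax and return the argmax intent
    together with the probability vector."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return INTENTS[probs.index(max(probs))], probs
```

In practice the ResNet50 backbone would produce these logits from the cropped vehicle image; only the decision rule is shown here.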
In addition, the radar detection device includes a radio frequency module and an array antenna module: the radio frequency module transmits an electromagnetic beam outward, the array antenna module receives the returned electromagnetic beam, and the vehicle motion attribute data detected by the radar detection device can be obtained through calculation from the returned beam. Taking the radar detection device in fig. 8 as a millimeter wave radar as an example, the motion attribute data of the vehicle 1, the vehicle 2, and the vehicle 3 can be detected by the millimeter wave radar. It should be noted that, because the accuracy of a deep learning target detection algorithm is greatly influenced by environmental factors such as illumination and weather, the detection effect of a video algorithm is closely related to the structural design of the neural network, and the vehicle target detection results of a video algorithm often suffer from false detections and missed detections, the radar targets detected by the radar detection device (such as a millimeter wave radar) are used as the reference targets for the vehicle targets in the embodiment of the present application.
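Using the radar targets as reference targets implies associating each radar target with at most one vision detection. A greedy nearest-neighbour sketch, assuming both target lists have already been projected into a shared ground coordinate frame (the 2 m gate and the function name are illustrative assumptions):

```python
def associate(radar_targets, vision_targets, max_dist=2.0):
    """Greedily match each radar target (the reference set) to its
    nearest unused vision detection within `max_dist` metres.

    Both lists hold (id, x, y) tuples in a shared ground frame; a
    value of None marks a radar target the vision algorithm missed."""
    matches, used = {}, set()
    for rid, rx, ry in radar_targets:
        best, best_d = None, max_dist
        for vid, vx, vy in vision_targets:
            if vid in used:
                continue
            d = ((rx - vx) ** 2 + (ry - vy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = vid, d
        if best is not None:
            used.add(best)
        matches[rid] = best
    return matches
```

Vision detections left unmatched can then be treated as likely false detections, which is exactly why the radar list serves as the reference here.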
In addition, in any vehicle sensing area, the driving state data and the driving intention data of the intelligent networked vehicles in the vehicle sensing area can be timely and accurately acquired through the vehicle-mounted sensing equipment of each intelligent networked vehicle in the vehicle sensing area. Then, the second driving state information of the intelligent networked vehicle can be determined according to the driving state data of the intelligent networked vehicle in the vehicle sensing area, and the second driving intention information of the intelligent networked vehicle can be determined according to the driving intention data of the intelligent networked vehicle in the vehicle sensing area, so that the second driving state information and the second driving intention information of each intelligent networked vehicle can be determined. For example, referring to fig. 9, a schematic diagram of an intelligent networked vehicle in a vehicle sensing area is provided in an embodiment of the present application, as shown in fig. 9, the vehicles in the vehicle sensing area include a vehicle 1, a vehicle 2, a vehicle 3, and a vehicle 4. It is assumed that the vehicle 1 and the vehicle 2 are intelligent networked vehicles, and the vehicle 3 and the vehicle 4 are non-intelligent networked vehicles. The vehicles 1 and 2 can transmit motion state information such as vehicle speed, vehicle acceleration, longitude and latitude coordinates, course angle, turn-on state of a steering lamp and the like, and attribute information such as vehicle type, vehicle body color, vehicle logo, license plate number and the like, which are acquired in real time through respective vehicle-mounted sensing equipment, to roadside equipment through a V2X communication unit. 
After the roadside device receives the motion state information and the attribute information transmitted by each of the vehicles 1 and 2 (taking the vehicle 1 as an example), it may obtain the driving intention of the vehicle 1 (such as a left turn, a right turn, or going straight) by analyzing the turn-on state of the turn signal of the vehicle 1; it may obtain the driving state of the vehicle 1 (such as vehicle speed, vehicle acceleration, longitude and latitude coordinates, and heading angle) by processing the motion state information of the vehicle 1; and it may also obtain the license plate number, vehicle type, vehicle body color, and the like of the vehicle 1.
It should be noted that the route planning information of a vehicle (for example, a planned driving route in which the intelligent networked vehicle passes through intersection 1, intersection 2, intersection 3, …, and intersection n in sequence) may be broadcast to the roadside device by the intelligent networked vehicle through its V2X communication unit. For example, taking road intersection 0 as an example (assuming that road intersection 0 is a non-signal-controlled road intersection), referring to fig. 10, which is a schematic diagram of determining the driving intention of an intelligent networked vehicle provided in the embodiment of the present application, assume that the northern approach leads to road intersection 1, the eastern approach to road intersection 2, the southern approach to road intersection 3, and the western approach to road intersection 4, and that the vehicle 6 is an intelligent networked vehicle. The vehicle 6 broadcasts its own path planning information to the roadside device through the V2X communication unit; for example, if the path planning information of the vehicle 6 is road intersection 3 → road intersection 0 → road intersection 2, it can be determined that the driving intention of the vehicle 6 at road intersection 0 is a right turn.
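The route-plan-based intention determination described above can be sketched as follows. This is an illustrative reconstruction, not the patent's algorithm: the road numbering (1=north, 2=east, 3=south, 4=west, per fig. 10), the heading tables, and the function name are all assumptions made for the example.

```python
# Sketch: inferring the turn intention at intersection 0 from a V2X route plan,
# assuming the four approach roads are numbered by compass direction as in
# Fig. 10 (1 = north, 2 = east, 3 = south, 4 = west).

# Heading of a vehicle when it enters intersection 0 from each approach road.
ENTRY_HEADING = {1: "S", 2: "W", 3: "N", 4: "E"}  # e.g. coming from the south road, heading north
# Heading required to leave intersection 0 toward each exit road.
EXIT_HEADING = {1: "N", 2: "E", 3: "S", 4: "W"}

# Clockwise compass order used to compute the relative turn.
ORDER = ["N", "E", "S", "W"]

def turn_intention(route):
    """route is a triple like (3, 0, 2): entry road, intersection 0, exit road."""
    entry_road, _, exit_road = route
    h_in = ORDER.index(ENTRY_HEADING[entry_road])
    h_out = ORDER.index(EXIT_HEADING[exit_road])
    delta = (h_out - h_in) % 4
    return {0: "straight", 1: "right", 2: "u-turn", 3: "left"}[delta]

# Vehicle 6 in Fig. 10 broadcasts route intersection 3 -> 0 -> 2 (south in, east out):
print(turn_intention((3, 0, 2)))  # right
```

The same lookup yields "straight" for route 3 → 0 → 1 and "left" for route 3 → 0 → 4, matching the geometry of fig. 10.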
It should be noted that, as a vehicle approaches the intersection, the lane line gradually changes from a dotted line to a solid line, and before passing through the intersection the vehicle has already moved into a left-turn lane, a right-turn lane, or a straight lane and has turned on the left turn signal, the right turn signal, or no turn signal accordingly. A left or right turn signal turned on while the vehicle is still in the dotted-line lane area does not represent the driving intention of the vehicle at the intersection; the turn signal may merely be turned on to complete an overtaking or lane-changing maneuver. Therefore, the driving intention of the vehicle is judged to be a left turn, a right turn, or going straight according to whether the vehicle turns on the left turn signal, the right turn signal, or no turn signal while in the solid-line area. For example, taking an intelligent networked vehicle (such as the vehicle 3) as an example, the vehicle 3 may broadcast its own turn signal state to the roadside device through the V2X communication unit; for example, if the vehicle 3 turns on its right turn signal, the roadside device may determine from the reported state that the driving intention of the vehicle 3 is a right turn. Alternatively, assume a non-intelligent networked vehicle (such as the vehicle 1) turns on its right turn signal. In this case, an intention detection algorithm (such as one based on ResNet50) may be applied to the image area where the vehicle 1 is located, so that the driving intention of the vehicle 1 can be recognized as a right turn.
Alternatively, for road sections with shared lanes (such as a left-turn-and-straight lane or a right-turn-and-straight lane), the turn-on state of the turn signal of the vehicle 1 is detected based on the video camera. To achieve a better detection effect, in the embodiment of the present application, ResNet50 is used as the backbone network for detecting the turn signal state of the vehicle, and the classification network directly outputs the probabilities that the vehicle has its left turn signal on, its right turn signal on, or no turn signal on, so that the driving intention of the vehicle 1 can be obtained.
In addition, continuing with the vehicle 3 as an example, assuming that the vehicle 3 does not transmit its path planning information to the roadside device through the V2X communication unit, the vehicle state information transmitted by the vehicle 3 through the V2X communication unit, such as the turn-on states of its left and right turn signals, may still be obtained; these states are broadcast externally by the vehicle 3 through the V2X communication unit. Assuming that the current position of the vehicle 3 is in the solid-line area of the lane line and its right turn signal is on, it can be determined that the driving intention of the vehicle 3 is a right turn.
Fig. 11 is a schematic diagram of determining the driving intention of a non-intelligent networked vehicle according to an embodiment of the present disclosure. As shown in fig. 11, if the vehicle 5 is a non-intelligent networked vehicle, its driving intention may be determined from the lane it occupies in the solid-line area, where a left-turn lane, a straight lane, and a right-turn lane respectively indicate a left-turn, straight-ahead, and right-turn intention. If the millimeter wave radar detects that the vehicle 5 is currently in the rightmost (right-turn) lane of the south approach and within the solid-line area, it can be determined that the driving intention of the vehicle 5 is a right turn. Similarly, if the vehicle 9 is a non-intelligent networked vehicle and the millimeter wave radar detects that it is currently in the second (straight) lane of the west-to-east approach and within the solid-line area, it can be determined that the driving intention of the vehicle 9 is to go straight.
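The signal-and-lane rules above can be condensed into a small decision function. This is an assumed encoding for illustration only: the lane labels, argument names, and the fallback to "straight" when no signal is on are choices made for the sketch, not specified by the patent.

```python
# Sketch: fusing turn-signal state and lane assignment into a driving
# intention, honoring the rule that a signal only counts once the vehicle
# is in the solid-line area before the stop line.

LANE_INTENTION = {"left_turn_lane": "left",
                  "straight_lane": "straight",
                  "right_turn_lane": "right"}

def driving_intention(in_solid_area, turn_signal=None, lane=None):
    """turn_signal: 'left' / 'right' / None (from V2X or a signal classifier);
    lane: lane type from the radar position (used for non-connected vehicles)."""
    if not in_solid_area:
        return None          # a signal here may just mean overtaking or a lane change
    if turn_signal in ("left", "right"):
        return turn_signal   # signal in the solid-line area is authoritative
    if lane in LANE_INTENTION:
        return LANE_INTENTION[lane]
    return "straight"        # assumption: no signal in the solid area means straight

# Vehicle 3 (connected, right signal on, solid area) -> right turn:
print(driving_intention(True, turn_signal="right"))  # right
# Vehicle 5 (non-connected, located in the right-turn lane) -> right turn:
print(driving_intention(True, lane="right_turn_lane"))  # right
```

A signal observed in the dotted-line area deliberately yields no intention, mirroring the overtaking/lane-change caveat above.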
In the embodiment of the application, based on the intelligent networked vehicles, the non-intelligent networked vehicles can be determined from the m vehicles, and the first running state information and the first running intention information of the non-intelligent networked vehicles can be determined.
The non-intelligent networked vehicles may be identified in either of two modes. The first identification mode follows the process shown in fig. 12, which specifically includes:
Aiming at any vehicle sensing area, each intelligent networked vehicle in the vehicle sensing area can transmit the license plate number, the vehicle type, the vehicle body color and the like of the intelligent networked vehicle to roadside equipment through vehicle-mounted equipment. Meanwhile, the roadside device may perform vehicle attribute recognition on the vehicle sensing area image of the vehicle sensing area, so as to recognize attribute information, such as vehicle types, vehicle colors, vehicle logos, license plate numbers, and the like, of each vehicle included in the vehicle sensing area image.
Exemplarily, a certain vehicle sensing area is taken as an example, and referring to fig. 13, a schematic diagram of detecting a vehicle sensing area provided in the embodiment of the present application is shown. As shown in fig. 13, the vehicles located in the vehicle sensing area include vehicle 1, vehicle 2, vehicle 3, vehicle 4, and vehicle 5. It is assumed that the vehicles 2 and 3 are intelligent internet vehicles, and the vehicles 1, 4 and 5 are non-intelligent internet vehicles. The vehicles 2 and 3 can transmit motion state information such as vehicle speed, vehicle acceleration, longitude and latitude coordinates, course angle, turn-on state of a steering lamp and the like, and attribute information such as vehicle type, vehicle body color, vehicle logo, license plate number and the like, which are acquired in real time through respective vehicle-mounted sensing equipment, to roadside equipment through a V2X communication unit. The vehicle attribute recognition is carried out on the vehicle sensing area image of the vehicle sensing area, and vehicle attribute information such as vehicle types, vehicle colors, vehicle logos and license plate numbers of all vehicle targets in the vehicle sensing area image can be recognized on the basis of a deep learning vehicle attribute recognition network. For example, the vehicle attribute identification network in the embodiment of the present application may be a lightweight neural network such as MobileNet and ShuffleNet, or may be a large and medium network such as ResNet. In order to achieve a better detection effect, in the embodiment of the present application, the ResNet50 is used as a backbone network of the vehicle attribute identification network, and attribute information such as a vehicle type, a vehicle color, a vehicle logo, and the like of a vehicle target can be acquired. 
Of course, an Optical Character Recognition (OCR) technique may also be used to recognize the image of the vehicle sensing region, so as to recognize the license plate number of each vehicle target included in the image of the vehicle sensing region, which is not limited in this application. In this way, the license plate numbers of all of the cars 1, 2, 3, 4, and 5 included in the vehicle sensing area image can be identified.
For a certain vehicle sensing area, after the vehicle sensing area image has been recognized and the license plate number of each vehicle target in the image has been identified, the license plate number of each vehicle target is compared with the license plate number of the at least one intelligent networked vehicle to determine whether that vehicle target is an intelligent networked vehicle; in this way, each non-intelligent networked vehicle located in the vehicle sensing area can be determined. For example, taking the vehicle 1 as an example, its license plate number is compared with the license plate numbers of the vehicles 2 and 3; since neither matches, the vehicle 1 is determined to be a non-intelligent networked vehicle. Taking the vehicle 2 as an example, its license plate number is compared with the license plate numbers of the vehicles 2 and 3; since one matches, the vehicle 2 is determined to be an intelligent networked vehicle.
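The first identification mode reduces to a set-membership test on license plates. The sketch below is illustrative; the function name and the plate strings are made up for the example, and real plates recognized by OCR would replace them.

```python
# Sketch of the first identification mode: compare each plate recognized in
# the camera image against the plates reported over V2X; any plate with no
# match is treated as a non-intelligent networked vehicle.

def split_by_connectivity(detected_plates, v2x_plates):
    """detected_plates: plates from image recognition; v2x_plates: plates
    reported by intelligent networked vehicles over V2X."""
    v2x = set(v2x_plates)
    intelligent = [p for p in detected_plates if p in v2x]
    non_intelligent = [p for p in detected_plates if p not in v2x]
    return intelligent, non_intelligent

# Fig. 13: the vehicles 2 and 3 report over V2X; the vehicles 1, 4, 5 do not.
detected = ["car1", "car2", "car3", "car4", "car5"]
reported = ["car2", "car3"]
print(split_by_connectivity(detected, reported))
# (['car2', 'car3'], ['car1', 'car4', 'car5'])
```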
For a certain vehicle sensing area, after each non-intelligent networked vehicle in the area has been determined, because there is a mapping relationship between the image center position coordinates of each vehicle in the vehicle sensing area image and the driving intention information of each vehicle, the driving intention information of a non-intelligent networked vehicle can be obtained once its image center position coordinates in the vehicle sensing area image are known.
The radar coordinate system and the video image coordinate system are different coordinate systems: the radar detection device detects the driving state information of each vehicle target, while the video image acquisition device detects the vehicle attribute information (such as vehicle type, vehicle color, and license plate number) and the driving intention information of each vehicle target. To associate the driving intention information and the driving state information of the same vehicle target, the position coordinates of that target in one coordinate system must be converted into the other coordinate system, so that the driving state information, driving intention information, and vehicle attribute information of each vehicle can be obtained together. Therefore, for a certain non-intelligent networked vehicle in a vehicle sensing area, the image center position coordinates of that vehicle in the corresponding vehicle sensing area image can be converted according to the coordinate conversion rule between the radar coordinate system and the video image coordinate system, to obtain the longitude and latitude coordinates of that vehicle in the radar coordinate system.
For a certain vehicle sensing area, after the longitude and latitude coordinates of each non-intelligent networked vehicle in the radar coordinate system have been determined, because there is a mapping relationship between the longitude and latitude coordinates of each vehicle in the radar coordinate system and the driving state information of each vehicle, the driving state information of a non-intelligent networked vehicle can be obtained once its longitude and latitude coordinates in the radar coordinate system are known.
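The patent does not spell out the coordinate conversion rule; a common choice for mapping image points to a ground plane is a planar homography obtained from offline radar-camera calibration, and the sketch below assumes exactly that. The matrix values are made up purely for demonstration.

```python
# Sketch (assumed technique): projecting an image center point into the
# radar/ground coordinate system with a 3x3 homography H from offline
# radar-camera calibration. H here is a toy matrix, not a real calibration.

def image_to_radar(H, u, v):
    """Apply the homography H to pixel (u, v); returns ground-plane (x, y)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w   # normalize homogeneous coordinates

# Diagonal toy calibration (0.01 ground units per pixel):
H = [[0.01, 0.0, 0.0],
     [0.0, 0.01, 0.0],
     [0.0, 0.0, 1.0]]
print(image_to_radar(H, 640, 360))  # (6.4, 3.6)
```

The converted (x, y) then serves as the lookup key into the radar-side mapping from position to driving state information; the reverse direction (radar to image, used by the second identification mode) applies the inverse matrix.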
In addition, the second identification mode follows the process shown in fig. 14, which specifically includes:
For a certain vehicle sensing area, information such as the license plate number, longitude and latitude coordinates, vehicle type, and vehicle color of the at least one intelligent networked vehicle located in the area can be acquired in a timely and accurate manner through the vehicle-mounted sensing equipment of that vehicle.
After the longitude and latitude coordinates of the radar detection device set for the vehicle sensing area are determined, the relative distance between at least one vehicle in the vehicle sensing area and the radar detection device can be obtained through the radar detection device, and therefore the longitude and latitude coordinates of the at least one vehicle can be calculated.
After the longitude and latitude coordinates of at least one vehicle in the vehicle sensing area have been calculated, for each vehicle, the difference between its longitude and latitude coordinates and the longitude and latitude coordinates of each intelligent networked vehicle in the vehicle sensing area can be calculated. If none of the differences meets the set threshold, the vehicle can be determined to be a non-intelligent networked vehicle; if one of the differences meets the set threshold, the vehicle can be determined to be an intelligent networked vehicle. Then, because there is a mapping relationship between the longitude and latitude coordinates of each vehicle in the radar coordinate system and the driving state information of each vehicle, the driving state information of each non-intelligent networked vehicle in the vehicle sensing area can be obtained from its longitude and latitude coordinates. The threshold may be set according to the experience of a person skilled in the art, according to results obtained from multiple experiments, or according to the actual application scenario, which is not limited in the embodiment of the present application.
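The threshold comparison above can be sketched as follows. The threshold value, the coordinate figures, and the per-axis absolute-difference test are all illustrative assumptions; the patent leaves the exact difference metric and threshold open.

```python
# Sketch of the second identification mode: a radar track is matched to a
# V2X-reported vehicle when their coordinates differ by no more than a set
# threshold; unmatched tracks are non-intelligent networked vehicles.

def classify_tracks(radar_tracks, v2x_positions, threshold_deg=1e-4):
    """radar_tracks / v2x_positions: dicts of vehicle id -> (lat, long).
    threshold_deg is an assumed per-axis threshold in degrees."""
    non_intelligent = []
    for track_id, (lat, lon) in radar_tracks.items():
        matched = any(
            abs(lat - vlat) <= threshold_deg and abs(lon - vlon) <= threshold_deg
            for vlat, vlon in v2x_positions.values())
        if not matched:
            non_intelligent.append(track_id)
    return non_intelligent

# Fig. 15: vehicle 6 reports over V2X; the radar sees vehicles 6, 7, and 8.
radar = {"car6": (31.20001, 121.40001),
         "car7": (31.20100, 121.40100),
         "car8": (31.20200, 121.40200)}
v2x = {"car6": (31.20000, 121.40000)}
print(classify_tracks(radar, v2x))  # ['car7', 'car8']
```

Only the radar track for vehicle 6 falls within the threshold of a V2X report, so vehicles 7 and 8 are classified as non-intelligent networked vehicles, as in fig. 15.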
Referring to fig. 15, a schematic view of a scenario for identifying a non-intelligent networked vehicle according to an embodiment of the present application is provided. As shown in fig. 15, assume that the vehicle 6 is an intelligent networked vehicle and the vehicles 7 and 8 are non-intelligent networked vehicles. The roadside device can obtain the real-time longitude and latitude coordinates of the vehicle 6 from the vehicle state information received through the V2X communication unit. The millimeter wave radar actively transmits electromagnetic wave signals and senses the vehicle state information of each vehicle in the vehicle sensing area where the vehicle 6 is located in real time, obtaining the speed, relative distance, acceleration, and heading angle of the vehicles 6, 7, and 8 relative to the radar. The longitude and latitude coordinates of the vehicles 6, 7, and 8 can then be calculated from the longitude and latitude coordinates of the radar and the relative distance between the radar and each vehicle, where the radar is located at (Lat0, Long0), the vehicle 6 at (Lat1, Long1), the vehicle 7 at (Lat2, Long2), and the vehicle 8 at (Lat3, Long3).
Thus, the difference between the radar-measured longitude and latitude coordinates of each of the vehicles 6, 7, and 8 and the V2X-reported longitude and latitude coordinates of the vehicle 6 can be calculated. It can be found that only the difference for the vehicle 6 satisfies the set threshold (for example, the difference is less than or equal to a certain set value), while the other differences do not; at this time, it can be determined that the vehicle 6 is an intelligent networked vehicle and the vehicles 7 and 8 are non-intelligent networked vehicles. The driving state information, driving intention information, and vehicle attribute information of the vehicle 6 are transmitted to the roadside device through the V2X communication unit, so no further sensing by the roadside sensing equipment is needed for it.
The longitude and latitude coordinates of a certain non-intelligent networked vehicle in a radar coordinate system can be converted according to the coordinate conversion rule of the radar coordinate system and the video image coordinate system, so that the image center position coordinates of the non-intelligent networked vehicle in the video image coordinate system can be obtained.
For a certain vehicle sensing area, after the image center position coordinates of each non-intelligent networked vehicle in the video image coordinate system have been determined, because there is a mapping relationship between the image center position coordinates of each vehicle in the video image coordinate system and the driving intention information of each vehicle, the driving intention information of a non-intelligent networked vehicle can be obtained once its image center position coordinates in the video image coordinate system are known.
Step 405: each intelligent networked vehicle is scheduled according to the traffic scheduling information.
In the embodiment of the present application, the traffic priority of each intelligent networked vehicle and each non-intelligent networked vehicle in each vehicle sensing area at the non-signal-controlled road intersection can be generated according to a preset vehicle traffic rule, based on the first driving state information and the first driving intention information of each non-intelligent networked vehicle and the second driving state information and the second driving intention information of each intelligent networked vehicle. Then, a traffic scheduling instruction (traffic scheduling information) for each intelligent networked vehicle can be generated according to these traffic priorities, so that each intelligent networked vehicle can be more accurately guided to pass through the non-signal-controlled road intersection in the determined order, which can effectively improve the vehicle traffic efficiency of the non-signal-controlled road intersection.
In addition, in the process of guiding each intelligent networked vehicle to pass through the non-signal-controlled road intersection in the determined order, the motion state of each non-intelligent networked vehicle can be detected in real time. If at least one non-intelligent networked vehicle is detected to have passed the stop line of its lane, prompt information is sent to each intelligent networked vehicle currently passing through the intersection; the prompt information includes the driving state information and the driving intention information of the at least one non-intelligent networked vehicle, so that each intelligent networked vehicle passing through the intersection can be prompted to slow down and avoid the at least one non-intelligent networked vehicle accordingly. At the same time, waiting information is sent to the other intelligent networked vehicles that have not yet entered the intersection, instructing them to stay in their respective lanes and wait to pass.
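One monitoring cycle of this safeguard can be sketched as follows. The message shapes, field names, and the `send` callback are assumptions for illustration; the patent specifies only the behavior (caution the crossing vehicles, hold the waiting ones), not the message format.

```python
# Sketch of one real-time monitoring step: if any non-intelligent networked
# vehicle has crossed its stop line, caution the vehicles already crossing
# and instruct the waiting vehicles to hold in their lanes.

def monitor_step(non_connected, crossing_vehicles, waiting_vehicles, send):
    """non_connected: dicts with at least 'id' and 'passed_stop_line';
    send(target_id, message) transmits over V2X (callback supplied by caller)."""
    intruders = [v for v in non_connected if v["passed_stop_line"]]
    if intruders:
        for target in crossing_vehicles:
            # Includes the intruders' state/intention so the receiver can avoid them.
            send(target, {"type": "caution", "vehicles": intruders})
        for target in waiting_vehicles:
            send(target, {"type": "hold"})
    return bool(intruders)

# Example cycle: car1 (non-connected) ran its stop line while car2 is crossing
# and car3 is waiting.
log = []
monitor_step([{"id": "car1", "passed_stop_line": True}],
             ["car2"], ["car3"],
             lambda target, msg: log.append((target, msg["type"])))
print(log)  # [('car2', 'caution'), ('car3', 'hold')]
```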
Illustratively, referring to fig. 16, a schematic diagram of an intelligent networked vehicle driving through a non-signal-controlled intersection with a right-turn driving intention is provided according to an embodiment of the present application. As shown in fig. 16, the vehicle 7 entering the intersection from the south heading north is taken as an example. The vehicle 7 is in a right-turn lane, and its turning curve is shown by the black dashed arrow in fig. 16. While turning from the south approach toward the east, the vehicle 7 may be affected by the vehicles 3-1 and 3-2 turning from the north toward the east, and also by the vehicle 11 traveling straight from west to east. According to the principle that turning vehicles yield to straight-going vehicles, the traffic priority prio11 of the vehicle 11 is greater than the traffic priorities of the vehicles 3-1, 3-2, and 7. According to the principle that right-turning vehicles yield to left-turning vehicles, the traffic priorities prio3-1 and prio3-2 of the vehicles 3-1 and 3-2 are greater than the traffic priority prio7 of the vehicle 7. Thus, in fig. 16, the vehicle 11 has the highest traffic priority, the vehicles 3-1 and 3-2 have the next highest, and the vehicle 7 has the lowest.
The vehicle traffic priority ordering at this moment is prio11 > prio3-1 = prio3-2 > prio7. Therefore, the roadside device first sends a priority passing guide instruction to the vehicle 11 while the remaining vehicles wait at the intersection; after the roadside device senses that the vehicle 11 has passed through the intersection, it sends priority passing guide instructions to the vehicles 3-1 and 3-2; and after it detects that the vehicles 3-1 and 3-2 have passed through the intersection, it sends a priority passing guide instruction to the vehicle 7, guiding the vehicle 7 smoothly through the intersection. The vehicles 7, 11, 3-1, and 3-2 are all intelligent networked vehicles.
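The release sequence just described can be sketched as a simple control loop. This is an illustrative reconstruction, not the patent's protocol: the function names, the grouping of equal-priority vehicles, and the stubbed sensing callback are all assumptions.

```python
# Sketch: the roadside device releases vehicles group by group in priority
# order, issuing the next priority passing guide instruction only after the
# sensing layer reports that the current group has cleared the intersection.

def dispatch(priority_groups, send_guide, has_cleared):
    """priority_groups: list of vehicle-id lists, highest priority first.
    send_guide(vid) transmits a guide instruction over V2X;
    has_cleared(vid) queries the radar/V2X sensing result for that vehicle."""
    for group in priority_groups:
        for vid in group:
            send_guide(vid)
        # Block until every vehicle in the group has passed through.
        while not all(has_cleared(vid) for vid in group):
            pass  # in practice: wait for the next sensing cycle

# Fig. 16 order: vehicle 11 first, then 3-1 and 3-2 together, then vehicle 7.
released = []
dispatch([["car11"], ["car3-1", "car3-2"], ["car7"]],
         released.append,
         lambda vid: True)  # sensing stubbed out for the example
print(released)  # ['car11', 'car3-1', 'car3-2', 'car7']
```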
For example, referring to fig. 17, a schematic diagram of an intelligent networked vehicle driving through a non-signal-controlled intersection with a straight-ahead driving intention is provided according to an embodiment of the present application. As shown in fig. 17, the vehicle 8 entering the intersection from the south heading north is taken as an example. The vehicle 8 is in a straight lane, and its driving path is shown by the black dashed arrow in fig. 17. While going straight, the vehicle 8 may be affected by the vehicle 3 turning left from the north toward the east and by the vehicle 4 turning right from the north toward the east; meanwhile, it may also be affected by the vehicle 5 traveling straight from east to west, the vehicle 6 turning left from the east toward the south, the vehicle 11 traveling straight from west to east, and the vehicle 12 turning left from the west toward the north. According to the principle that turning vehicles yield to straight-going vehicles, the traffic priorities of the straight-going vehicles 5, 8, and 11 in fig. 17 are higher than those of the turning vehicles 3, 4, 6, and 12. Further, according to the principle that the vehicle on the right passes first, at an intersection without traffic signal control the vehicle on the right in the direction of travel has the right of way and must be allowed to proceed first; therefore the traffic priority prio5 of the vehicle 5 is greater than the traffic priority prio8 of the vehicle 8, and prio8 is greater than the traffic priority prio11 of the vehicle 11.
The vehicle traffic priority ordering at this moment satisfies prio5 > prio8 > prio11, prio8 > prio12 > prio4, and prio11 > prio6. The vehicles 12, 3, and 6 are all left-turning vehicles, and the traffic priorities of these three are determined by the time at which each reaches the stop line of the intersection: the vehicle that reaches the stop line first has the highest priority among them. If the vehicles 12, 3, and 6 reach the stop line in that order, their priority ordering is prio12 > prio3 > prio6. Therefore, the roadside device first sends a priority passing guide instruction to the vehicle 5 while the other vehicles wait in place; after detecting that the vehicle 5 has passed through the intersection, it sends a priority passing guide instruction to the vehicle 8 while the others wait; and after detecting that the vehicle 8 has passed through the intersection, it sends passing guide instructions to the remaining vehicles in order of their traffic priorities, guiding them through the intersection. The vehicles 5, 8, 11, 3, 4, 6, and 12 are all intelligent networked vehicles.
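The core ranking rules used in figs. 16-19 can be captured as a sort key. This is one possible encoding, not the patent's exact algorithm: the field names are invented, and the right-side rule between conflicting straight movements is resolved separately (it depends on intersection geometry) and is not modeled here.

```python
# Sketch: a sort key implementing the stated rules -- special vehicles first,
# then straight-going over turning, left turn over right turn, and earlier
# stop-line arrival first among otherwise-equal vehicles.

MOVE_RANK = {"straight": 0, "left": 1, "right": 2}

def priority_key(vehicle):
    """vehicle: dict with 'special' (bool), 'move', 'arrival_time' (seconds).
    Smaller tuples sort first, i.e. pass earlier."""
    return (0 if vehicle["special"] else 1,
            MOVE_RANK[vehicle["move"]],
            vehicle["arrival_time"])

# Fig. 16 scenario: straight-going vehicle 11, left-turning vehicle 3,
# right-turning vehicle 7 (arrival times are illustrative).
queue = [
    {"id": "car7",  "special": False, "move": "right",    "arrival_time": 3.0},
    {"id": "car3",  "special": False, "move": "left",     "arrival_time": 2.0},
    {"id": "car11", "special": False, "move": "straight", "arrival_time": 4.0},
]
print([v["id"] for v in sorted(queue, key=priority_key)])
# ['car11', 'car3', 'car7']
```

The resulting order matches the prio11 > prio3 > prio7 ranking of the fig. 16 discussion, and marking a vehicle `special` moves it to the front regardless of its movement, as in fig. 19.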
Further, for example, referring to fig. 18, a schematic diagram of an intelligent networked vehicle driving through a non-signal-controlled intersection with a left-turn driving intention is provided according to an embodiment of the present application. As shown in fig. 18, the vehicle 9 entering the intersection from the north heading south is taken as an example. The vehicle 9 is in a left-turn lane, and its driving curve is shown by the black dashed arrow in fig. 18. While turning left, the vehicle 9 may be affected by the vehicle 1 turning from the north toward the west, the vehicle 2 traveling straight from north to south, the vehicle 5 traveling straight from east to west, the vehicle 6 turning left from the east toward the south, the vehicle 11 traveling straight from west to east, and the vehicle 12 turning left from the west toward the north. According to the principle that turning vehicles yield to straight-going vehicles, the priorities of the straight-going vehicles 2, 5, and 11 in fig. 18 are higher than those of the turning vehicles 1, 6, 9, and 12. According to the principle that the vehicle on the right passes first, at an intersection without traffic signal control the vehicle on the right in the direction of travel has the right of way and must be allowed to proceed first; therefore the traffic priority prio2 of the vehicle 2 is greater than the traffic priority prio5 of the vehicle 5, and the traffic priority prio11 of the vehicle 11 is greater than prio2.
Therefore, the roadside device first sends a priority passing guide instruction to the vehicle 11 while the remaining vehicles wait in place; after detecting that the vehicle 11 has passed through the intersection, it sends a priority passing guide instruction to the vehicle 2; after detecting that the vehicle 2 has passed, it sends a priority passing guide instruction to the vehicle 5; and it then waits until it detects that the vehicle 5 has passed through the intersection. The vehicles 6, 9, and 12 are all turning left, and when left-turning vehicles from perpendicular approaches meet, the order in which they pass through the intersection is decided by priority. In the embodiment of the present application, this priority order is determined from the time at which each vehicle reaches the stop line of the intersection, as acquired by the roadside device: the vehicle that reaches the intersection first has the higher traffic priority. Assuming that the vehicle 9 reaches the intersection first, its traffic priority is the highest among the remaining vehicles; the roadside device sends a priority passing guide instruction to the vehicle 9 while the others wait in place. After the roadside device detects that the vehicle 9 has passed through the intersection, since the remaining vehicles 1, 6, and 12 do not affect one another when passing and their priorities are equal, the roadside device simultaneously sends priority passing guide instructions to the vehicles 1, 6, and 12 to guide them through the intersection.
In this case, the vehicle traffic priority ordering is prio11 > prio2 > prio5 > prio9 > prio1 = prio6 = prio12. Alternatively, assuming that the vehicle 6 reaches the intersection first, the traffic priority of the vehicle 6 is the highest among the remaining vehicles. The roadside device sends a priority passing guide instruction to the vehicle 6 while the rest wait in place; after the roadside device detects that the vehicle 6 has passed through the intersection, the priority of the vehicle 9 is higher than that of the vehicle 1 according to the principle that right-turning vehicles yield to left-turning vehicles. The vehicles 12 and 1 do not affect each other when passing, and depending on the order in which the vehicles 9 and 12 reach the intersection, the priority ordering of the vehicles 1, 9, and 12 may be prio12 > prio9 > prio1 or prio9 > prio12 = prio1; the roadside device guides the vehicles 1, 9, and 12 through the intersection in order according to the applicable priority. In this case, the overall traffic priority ordering is prio11 > prio2 > prio5 > prio6 > prio12 > prio9 > prio1, or prio11 > prio2 > prio5 > prio6 > prio9 > prio12 = prio1. Alternatively, assuming that the vehicle 12 reaches the intersection first, the traffic priority of the vehicle 12 is the highest among the remaining vehicles.
The roadside device sends a priority passing guidance instruction to vehicle 12 while the remaining vehicles wait in place. After the roadside device detects that vehicle 12 has passed through the intersection, vehicle 9 has priority over vehicle 1 according to the principle that right-turning vehicles yield; vehicle 6 and vehicle 1 do not affect each other when passing, so depending on the order in which vehicles 9 and 6 reach the intersection, the passing priority of vehicles 1, 9 and 6 may be prio6 > prio9 > prio1 or prio9 > prio6 = prio1, and the roadside device guides vehicles 1, 9 and 6 through the intersection in whichever order applies. In this case, the overall priority order is prio11 > prio2 > prio5 > prio12 > prio6 > prio9 > prio1, or prio11 > prio2 > prio5 > prio12 > prio9 > prio6 = prio1. Here, vehicles 2, 5, 11, 1, 6, 9 and 12 are all intelligent networked vehicles.
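The arrival-time rule used throughout the example above can be sketched in a few lines. This is a minimal illustrative sketch, not the patent's implementation; the function name and the representation of arrival times as seconds are assumptions. Mutually conflicting left-turn vehicles are simply ordered by the time at which each reached the stop line:

```python
def passing_order(vehicles):
    """vehicles: list of (vehicle_id, arrival_time) pairs for mutually
    conflicting left-turn vehicles. The vehicle that reached the stop
    line first gets the highest passing priority."""
    return [vid for vid, _ in sorted(vehicles, key=lambda v: v[1])]

# Vehicles 6, 9 and 12 all intend to turn left; vehicle 9 reached the
# stop line first, then 6, then 12 (times are illustrative seconds).
order = passing_order([(6, 10.4), (9, 9.8), (12, 11.1)])
```

Under these assumed arrival times the guidance order is vehicle 9, then 6, then 12, matching the first scenario above.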
Further, for example, referring to fig. 19, a schematic diagram of a special vehicle with a right-turn driving intention at a non-signal-controlled road intersection is provided according to an embodiment of the present application. Whether a vehicle is a special vehicle may be determined from vehicle attribute information acquired through V2X or from camera video analysis. The vehicle 7 (a special vehicle) entering the intersection from north to south in fig. 19 is taken as an example. Vehicle 7 is in the right-turn lane, and its turning trajectory is shown by the black dashed arrow in fig. 19. While vehicle 7 turns from south to east, it may be affected by vehicle 3 turning from north to east and by vehicle 11 traveling straight from west to east. According to the principle that turning vehicles yield to straight-through vehicles, the passing priority prio11 of vehicle 11 is greater than those of vehicles 3 and 7; according to the principle that right-turning vehicles yield to left-turning vehicles, the passing priority prio3 of vehicle 3 is greater than the passing priority prio7 of vehicle 7, that is, prio11 > prio3 > prio7. However, since vehicle 7 is a special vehicle, this embodiment gives it the highest priority, that is, prio7 > prio11 > prio3, and the roadside device guides the vehicles through the intersection in the order vehicle 7, vehicle 11, vehicle 3. Here, vehicles 7, 11 and 3 are all intelligent networked vehicles.
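The special-vehicle override described above amounts to a two-level sort: the special flag dominates, and the ordinary turn-rule priority breaks ties among non-special vehicles. The sketch below is an assumption-laden illustration (field names, the numeric `base_priority` encoding, and the function name are all invented for illustration):

```python
def rank_vehicles(vehicles):
    """vehicles: list of dicts with 'id', 'base_priority' (a higher
    value passes earlier under the ordinary turn rules) and a boolean
    'special' flag. A special vehicle is promoted ahead of every
    ordinary vehicle regardless of its base priority."""
    return [v["id"] for v in sorted(
        vehicles,
        # False sorts before True, so special vehicles come first;
        # within each group, higher base_priority comes first.
        key=lambda v: (not v["special"], -v["base_priority"]))]
```

With vehicle 11 (straight, highest base priority), vehicle 3 (left turn) and special vehicle 7 (right turn, lowest base priority), this yields the order 7, 11, 3 from the figure.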
In addition, assume now that the non-signal-controlled road intersection is one where intelligent networked vehicles and non-intelligent networked vehicles travel in a mixed fashion. Continuing with the example of fig. 18, taking the left-turn scene of intelligent networked vehicle 9 as an example, assume that vehicles 6, 12 and 9 all intend to turn left and arrive at the intersection in that order, so the passing priority in the current scene is prio11 > prio2 > prio5 > prio6 > prio12 > prio9 > prio1. Assume the roadside device senses that vehicles 2 and 12 are non-intelligent networked vehicles. The roadside device then sends passing guidance information to the intelligent networked vehicles in the priority order prio11 > prio5 > prio6 > prio9 > prio1, while detecting the motion state of the non-intelligent networked vehicles in real time. As long as the roadside device detects that a non-intelligent networked vehicle remains stopped at the stop line of the non-signal-controlled road intersection, its passing guidance information stays unchanged. When the roadside device detects that a non-intelligent networked vehicle has driven across the stop line, that vehicle's current passing priority becomes higher than the passing priorities of the remaining intelligent networked vehicles.
Suppose that intelligent networked vehicle 11 and non-intelligent networked vehicle 2 have already passed through the intersection, and the roadside device is currently guiding intelligent networked vehicle 5 through the non-signal-controlled road intersection. When the roadside device detects that non-intelligent networked vehicle 12 has driven across the stop line, it sends the vehicle state information and driving intention information of vehicle 12 to intelligent networked vehicle 5 to remind vehicle 5 to decelerate and yield, and at the same time sends waiting information to intelligent networked vehicles 6, 9 and 1. Once the roadside device detects that non-intelligent networked vehicle 12 has passed through the intersection, it guides intelligent networked vehicles 6, 9 and 1 through the non-signal-controlled road intersection in sequence.
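The roadside device's reaction to a non-networked vehicle crossing the stop line can be sketched as a simple message fan-out. This is a hedged illustration of the behavior described above, not the patent's protocol; the message strings and function signature are invented for the example:

```python
def react_to_non_icv(crossing_id, passing_icv, waiting_icvs):
    """When a non-intelligent networked vehicle is detected crossing
    the stop line, warn the ICV currently passing through the
    intersection and tell the queued ICVs to keep waiting.
    Returns a list of (recipient_id, message) pairs."""
    msgs = [(passing_icv, "DECELERATE_AND_YIELD:%d" % crossing_id)]
    msgs += [(v, "WAIT") for v in waiting_icvs]
    return msgs

# Non-ICV 12 crosses while ICV 5 is passing; ICVs 6, 9 and 1 queue.
out = react_to_non_icv(12, 5, [6, 9, 1])
```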
The above embodiments show that, in order to provide accurate traffic scheduling information to each intelligent networked vehicle in each vehicle sensing area of a non-signal-controlled road intersection, and thereby guide those vehicles through the intersection efficiently and safely, the driving state information and driving intention information of each intelligent networked vehicle in each vehicle sensing area must be sensed accurately and in time, and so must the driving state information and driving intention information of each non-intelligent networked vehicle. By fusing the driving state and driving intention information of the intelligent networked vehicles with that of the non-intelligent networked vehicles, each intelligent networked vehicle in each vehicle sensing area can be guided and scheduled through the non-signal-controlled road intersection more accurately, efficiently and safely.
Specifically, for any non-signal-controlled road intersection, the first driving information of the m vehicles located in each vehicle sensing area is acquired through the roadside sensing devices arranged at the intersection, and the second driving state information and second driving intention information of each intelligent networked vehicle are acquired through the vehicle-mounted sensing devices of the intelligent networked vehicles located in each vehicle sensing area. On this basis, the first driving state information and first driving intention information of each non-intelligent networked vehicle among the m vehicles can be determined accurately, so that the driving state and driving intention information of every vehicle in every vehicle sensing area of the intersection is obtained more comprehensively. Traffic scheduling information for each intelligent networked vehicle in each vehicle sensing area can then be generated based on the first driving state and intention information of the non-intelligent networked vehicles together with the second driving state and intention information of the intelligent networked vehicles, so that the generated scheduling information better matches the actual traffic conditions at the intersection and better meets its requirements for efficient and safe passage.
Then, through the traffic scheduling information, the intelligent networked vehicles in each vehicle sensing area can be guided and scheduled more accurately, which effectively improves the vehicle passing efficiency of the non-signal-controlled road intersection and effectively ensures its traffic safety.
Based on the same technical concept, fig. 20 exemplarily shows a cooperative road intersection passing device provided by an embodiment of the present application, and the device can execute the flow of the cooperative road intersection passing method. The device may be a roadside device, a component (such as a chip or an integrated circuit) capable of supporting a roadside device in implementing the functions required by the method, or another electronic device having those functions, such as a traffic control device.
As shown in fig. 20, the apparatus includes:
an obtaining unit 2001, configured to acquire, for any non-signal-controlled road intersection, first driving information of m vehicles located in each vehicle sensing area through each roadside sensing device arranged at the intersection, and to acquire second driving information of each intelligent networked vehicle through the vehicle-mounted sensing device of each intelligent networked vehicle located in each vehicle sensing area;
a processing unit 2002 for determining first traveling state information and first traveling intention information of the m vehicles based on the first traveling information of the m vehicles, and determining second traveling state information and second traveling intention information of the respective intelligent networked vehicles based on the second traveling information of the respective intelligent networked vehicles; determining first running state information and first running intention information of each non-intelligent networked vehicle in the m vehicles; generating traffic scheduling information for each intelligent networked vehicle located in each vehicle sensing area based on first driving state information and first driving intention information of each non-intelligent networked vehicle and second driving state information and second driving intention information of each intelligent networked vehicle; and scheduling each intelligent internet vehicle according to the traffic scheduling information.
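A core step of the processing unit is separating the m detected vehicles into intelligent networked and non-intelligent networked ones. As a minimal sketch (the patent matches vehicles by license plate or by position, detailed below; here vehicle IDs stand in for either), any detected vehicle that did not report over its on-board link is treated as non-networked:

```python
def split_vehicles(detected_ids, reporting_icv_ids):
    """The roadside sensors detect all m vehicles, while only the
    intelligent networked vehicles report over V2X; any detected
    vehicle that did not report is treated as a non-intelligent
    networked vehicle. Returns (icv_list, non_icv_list)."""
    reporting = set(reporting_icv_ids)
    icv = [v for v in detected_ids if v in reporting]
    non_icv = [v for v in detected_ids if v not in reporting]
    return icv, non_icv
```

For the fig. 18 mixed-traffic example, detecting vehicles 1, 2, 5 and 12 while only 1 and 5 report would classify 2 and 12 as non-intelligent networked vehicles.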
In some exemplary embodiments, the processing unit 2002 is specifically configured to:
for any vehicle sensing area, acquiring a vehicle sensing area image for the vehicle sensing area through a video image acquisition device which is arranged at the non-signal control intersection and aims at the vehicle sensing area, identifying the vehicle sensing area image, and determining first driving intention information of at least one vehicle included in the vehicle sensing area image so as to determine first driving intention information of m vehicles in each vehicle sensing area;
the method comprises the steps that running state data of at least one vehicle located in a vehicle sensing area are obtained through radar detection equipment which is arranged at the non-signal control intersection and aims at the vehicle sensing area, and first running state information of the at least one vehicle is determined according to the running state data of the at least one vehicle located in the vehicle sensing area, so that first running state information of m vehicles located in each vehicle sensing area is determined.
In some exemplary embodiments, the processing unit 2002 is specifically configured to:
carrying out target detection on the vehicle perception area image, and determining the image center position coordinates and the image area size of at least one vehicle in the vehicle perception area image;
for each vehicle in the at least one vehicle, taking the image center position coordinates of the vehicle as a cutting reference point, and cutting an image area where the vehicle is located from the vehicle perception area image according to the size of the image area of the vehicle in the vehicle perception area image;
and performing intention detection on an image area where the vehicle is located, and determining first driving intention information of the vehicle so as to determine the first driving intention information of the at least one vehicle.
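The cutting step above can be sketched directly: the detected image-center coordinate serves as the cutting reference point, and a region of the detected size is cropped for intent detection. Border clamping is an implementation choice not specified in the text; the image is modeled as a nested list for simplicity:

```python
def crop_vehicle(image, center, size):
    """Crop the region of one detected vehicle from the perception-area
    image, using the image-center coordinate (cx, cy) as the cutting
    reference point and (w, h) as the detected region size. The crop
    is clamped to the image border (an assumption)."""
    cx, cy = center
    w, h = size
    x0, y0 = max(0, cx - w // 2), max(0, cy - h // 2)
    return [row[x0:x0 + w] for row in image[y0:y0 + h]]

# A toy 6x6 "image" whose pixel values encode their coordinates.
image = [[r * 10 + c for c in range(6)] for r in range(6)]
patch = crop_vehicle(image, (3, 3), (2, 2))
```

The resulting patch would then be passed to the intent-detection model in place of the full perception-area image.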
In some exemplary embodiments, the processing unit 2002 is specifically configured to:
aiming at any vehicle sensing area, acquiring running state data and running intention data of each intelligent networked vehicle in the vehicle sensing area through vehicle-mounted sensing equipment of each intelligent networked vehicle in the vehicle sensing area;
and determining second running state information of the intelligent networked vehicle according to the running state data of the intelligent networked vehicle in the vehicle sensing area, and determining second running intention information of the intelligent networked vehicle according to the running intention data of the intelligent networked vehicle in the vehicle sensing area, so that the second running state information and the second running intention information of each intelligent networked vehicle are determined.
In some exemplary embodiments, the processing unit 2002 is specifically configured to:
aiming at any vehicle sensing area, acquiring the license plate number of at least one intelligent networked vehicle through vehicle-mounted equipment of the at least one intelligent networked vehicle positioned in the vehicle sensing area, and determining the license plate number of the at least one vehicle in the vehicle sensing area image by performing vehicle attribute identification on the vehicle sensing area image of the vehicle sensing area;
aiming at the license plate number of any vehicle in the at least one vehicle, if the license plate number of the vehicle does not exist in the license plate numbers of the at least one intelligent networked vehicle, determining that the vehicle is a non-intelligent networked vehicle;
according to the image center position coordinates of the non-intelligent networked vehicles in the vehicle perception area image, determining first driving intention information of the non-intelligent networked vehicles from first driving intention information of at least one vehicle included in the vehicle perception area image, and accordingly determining the first driving intention information of the non-intelligent networked vehicles;
the processing unit 2002 is specifically configured to:
aiming at any non-intelligent networked vehicle in the vehicle sensing area, determining longitude and latitude coordinates of the non-intelligent networked vehicle in a radar coordinate system according to a coordinate conversion rule of a radar coordinate system and a video image coordinate system and based on the image center position coordinates of the non-intelligent networked vehicle in the vehicle sensing area image;
and determining first running state information of the non-intelligent networked vehicles from the first running state information of at least one vehicle in the vehicle perception area according to longitude and latitude coordinates of the non-intelligent networked vehicles in a radar coordinate system, so as to determine the first running state information of each non-intelligent networked vehicle.
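The license-plate comparison in this embodiment is a set difference: plates recognized in the perception-area image but never reported by any on-board device mark non-intelligent networked vehicles. A minimal sketch (plate strings are illustrative):

```python
def find_non_icv_plates(detected_plates, icv_plates):
    """detected_plates: plate numbers recognized from the perception-
    area image; icv_plates: plate numbers reported by on-board devices.
    A detected plate with no matching report marks a non-intelligent
    networked vehicle."""
    reported = set(icv_plates)
    return [p for p in detected_plates if p not in reported]
```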
In some exemplary embodiments, the processing unit 2002 is specifically configured to:
aiming at any vehicle sensing area, acquiring longitude and latitude coordinates of at least one intelligent networking vehicle through vehicle-mounted sensing equipment of the at least one intelligent networking vehicle positioned in the vehicle sensing area;
determining longitude and latitude coordinates of radar detection equipment which are set at the non-signal control road intersection and aim at the vehicle perception area, and acquiring the relative distance between at least one vehicle in the vehicle perception area and the radar detection equipment through the radar detection equipment;
determining the longitude and latitude coordinates of the at least one vehicle according to the longitude and latitude coordinates of the radar detection equipment and the relative distance between the at least one vehicle and the radar detection equipment;
for the longitude and latitude coordinates of each vehicle in the at least one vehicle, if the difference values of the longitude and latitude coordinates of the vehicle and the longitude and latitude coordinates of the at least one intelligent networked vehicle do not meet a set threshold value, determining that the vehicle is a non-intelligent networked vehicle, and determining first running state information of the non-intelligent networked vehicle from the first running state information of the at least one vehicle according to the longitude and latitude coordinates of the non-intelligent networked vehicle, so as to determine the first running state information of each non-intelligent networked vehicle;
the processing unit 2002 is specifically configured to:
aiming at any non-intelligent networked vehicle in the vehicle sensing area, determining the image center position coordinates of the non-intelligent networked vehicle in a video image coordinate system according to the coordinate conversion rule of a radar coordinate system and a video image coordinate system and based on the longitude and latitude coordinates of the non-intelligent networked vehicle in the radar coordinate system;
according to the image center position coordinates of the non-intelligent networked vehicles in a video image coordinate system, determining first driving intention information of the non-intelligent networked vehicles from the first driving intention information of at least one vehicle in the vehicle sensing area, and accordingly determining the first driving intention information of each non-intelligent networked vehicle in each vehicle sensing area.
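The position-based variant of the comparison can be sketched as follows. A radar-derived position with no reporting intelligent networked vehicle within the set latitude/longitude threshold is flagged as a non-networked vehicle. The threshold value and the flat per-axis comparison of degrees are simplifying assumptions; the patent only says the coordinate differences are checked against a set threshold:

```python
def match_non_icv_by_position(radar_positions, icv_positions, threshold_deg=1e-4):
    """radar_positions: dict track_id -> (lat, lon) derived from the
    radar; icv_positions: list of (lat, lon) reported by on-board
    sensing devices. Returns track_ids with no ICV report within the
    threshold, i.e. the presumed non-intelligent networked vehicles."""
    non_icv = []
    for track_id, (lat, lon) in radar_positions.items():
        near = any(abs(lat - ilat) <= threshold_deg and
                   abs(lon - ilon) <= threshold_deg
                   for ilat, ilon in icv_positions)
        if not near:
            non_icv.append(track_id)
    return non_icv
```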
In some exemplary embodiments, the processing unit 2002 is specifically configured to:
according to a preset vehicle passing rule, based on first running state information and first running intention information of each non-intelligent networked vehicle and second running state information and second running intention information of each intelligent networked vehicle, generating a passing priority of each intelligent networked vehicle and each non-intelligent networked vehicle in each vehicle perception area at the non-signal control road intersection;
generating a traffic scheduling instruction for each intelligent networked vehicle according to the traffic priority of each intelligent networked vehicle and each non-intelligent networked vehicle at the non-signal control road intersection; and the traffic scheduling instruction is used for indicating each intelligent internet vehicle to sequentially pass through the non-signal control road intersection according to the traffic sequence.
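The instruction-generation step above reduces to sorting all vehicles by passing priority and issuing instructions only to the intelligent networked ones, since non-networked vehicles cannot receive them. A hedged sketch (the numeric priority encoding is an assumption):

```python
def scheduling_order(priorities, icv_ids):
    """priorities: dict vehicle_id -> passing priority (a higher value
    passes earlier), covering ICVs and non-ICVs alike. Returns the
    sequence in which passing instructions are sent, restricted to
    intelligent networked vehicles."""
    icvs = set(icv_ids)
    ranked = sorted(priorities, key=priorities.get, reverse=True)
    return [v for v in ranked if v in icvs]
```

For the fig. 18 mixed scenario with non-ICVs 2 and 12, this reproduces the guidance order 11, 5, 6, 9, 1 from the full priority ranking.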
In some exemplary embodiments, the processing unit 2002 is further configured to:
after a traffic scheduling instruction for each intelligent networked vehicle is generated, if at least one non-intelligent networked vehicle is detected to respectively pass through a stop waiting line of a lane where the non-intelligent networked vehicle is located, sending prompt information to at least one intelligent networked vehicle passing through the non-signal control road intersection, and sending waiting traffic information to other intelligent networked vehicles not passing through the non-signal control road intersection; the prompt information comprises first running state information and first running intention information of the at least one non-intelligent networked vehicle; the prompt information is used for prompting the at least one intelligent networked vehicle to pay attention to deceleration and avoidance to the at least one non-intelligent networked vehicle according to the first running state information and the first running intention information of the at least one non-intelligent networked vehicle; the waiting traffic information is used for indicating other intelligent internet vehicles which do not pass through the non-signal control road intersection to stay in the lane to wait for passing.
Based on the same technical concept, the embodiment of the present application further provides a computing device, as shown in fig. 21, including at least one processor 2101 and a memory 2102 connected to the at least one processor, where a specific connection medium between the processor 2101 and the memory 2102 is not limited in the embodiment of the present application, and the processor 2101 and the memory 2102 are connected by a bus in fig. 21 as an example. The bus may be divided into an address bus, a data bus, a control bus, etc.
In the embodiment of the present application, the memory 2102 stores instructions executable by the at least one processor 2101, and the at least one processor 2101 may execute the steps included in the aforementioned collaborative intersection passage method by executing the instructions stored in the memory 2102.
The processor 2101 is the control center of the computing device; it may be connected to the various parts of the computing device through various interfaces and lines, and performs data processing by running or executing the instructions stored in the memory 2102 and calling up the data stored in the memory 2102. Optionally, the processor 2101 may include one or more processing units, and may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 2101. In some embodiments, the processor 2101 and the memory 2102 may be implemented on the same chip; in other embodiments, they may be implemented on separate chips.
The processor 2101 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an Application-Specific Integrated Circuit (ASIC), a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, capable of implementing or performing the methods, steps, and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the cooperative intersection passing method embodiments may be performed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
The memory 2102, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 2102 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, or an optical disk. The memory 2102 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 2102 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Based on the same technical concept, the embodiment of the present application further provides a computer-readable storage medium, which stores a computer program executable by a computing device, and when the program runs on the computing device, the computer program causes the computing device to execute the steps of the collaborative intersection passing method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A cooperative intersection passing method is characterized by comprising the following steps:
aiming at any one non-signal control road intersection, acquiring first running information of m vehicles in each vehicle sensing area through each road side sensing device arranged at the non-signal control road intersection, and acquiring second running information of each intelligent networked vehicle through vehicle-mounted sensing equipment of each intelligent networked vehicle in each vehicle sensing area;
determining first running state information and first running intention information of the m vehicles based on the first running information of the m vehicles, and determining second running state information and second running intention information of each intelligent networked vehicle based on the second running information of each intelligent networked vehicle;
determining first running state information and first running intention information of each non-intelligent networked vehicle in the m vehicles;
generating traffic scheduling information for each intelligent networked vehicle located in each vehicle sensing area based on first driving state information and first driving intention information of each non-intelligent networked vehicle and second driving state information and second driving intention information of each intelligent networked vehicle;
and scheduling the intelligent networked vehicles according to the traffic scheduling information.
2. The method of claim 1, wherein determining the first travel state information and the first travel intention information for the m vehicles based on the first travel information for the m vehicles comprises:
aiming at any vehicle perception area, acquiring a vehicle perception area image aiming at the vehicle perception area through a video image acquisition device aiming at the vehicle perception area and arranged at the non-signal control road intersection, identifying the vehicle perception area image, and determining first driving intention information of at least one vehicle in the vehicle perception area image so as to determine first driving intention information of m vehicles in each vehicle perception area;
the method comprises the steps that running state data of at least one vehicle located in a vehicle sensing area are obtained through radar detection equipment which is arranged at the non-signal control intersection and aims at the vehicle sensing area, and first running state information of the at least one vehicle is determined according to the running state data of the at least one vehicle located in the vehicle sensing area, so that first running state information of m vehicles located in each vehicle sensing area is determined.
3. The method of claim 2, wherein identifying the vehicle perception area image, determining first driving intention information for at least one vehicle included in the vehicle perception area image, comprises:
carrying out target detection on the vehicle perception area image, and determining the image center position coordinates and the image area size of at least one vehicle in the vehicle perception area image;
for each vehicle in the at least one vehicle, taking the image center position coordinates of the vehicle as a cutting reference point, and cutting out an image area where the vehicle is located from the vehicle perception area image according to the size of the image area of the vehicle in the vehicle perception area image;
and performing intention detection on an image area where the vehicle is located, and determining first driving intention information of the vehicle so as to determine the first driving intention information of the at least one vehicle.
4. The method of claim 1, wherein determining second driving state information and second driving intention information of the respective intelligent networked vehicles based on the second driving information of the respective intelligent networked vehicles comprises:
aiming at any vehicle sensing area, acquiring running state data and running intention data of each intelligent networked vehicle in the vehicle sensing area through vehicle-mounted sensing equipment of each intelligent networked vehicle in the vehicle sensing area;
and determining second running state information of the intelligent networked vehicle according to the running state data of the intelligent networked vehicle in the vehicle sensing area, and determining second running intention information of the intelligent networked vehicle according to the running intention data of the intelligent networked vehicle in the vehicle sensing area, so that the second running state information and the second running intention information of each intelligent networked vehicle are determined.
5. The method of claim 3, wherein determining first driving intention information of each non-intelligent networked vehicle among the m vehicles comprises:
for any vehicle sensing area, acquiring the license plate number of at least one intelligent networked vehicle through the vehicle-mounted equipment of the at least one intelligent networked vehicle located in the vehicle sensing area, and determining the license plate number of at least one vehicle in the vehicle perception area image by performing vehicle attribute recognition on the vehicle perception area image of the vehicle sensing area;
for the license plate number of any vehicle of the at least one vehicle, if the license plate number of the vehicle is not among the license plate numbers of the at least one intelligent networked vehicle, determining that the vehicle is a non-intelligent networked vehicle; and
determining, according to the image center position coordinates of the non-intelligent networked vehicle in the vehicle perception area image, the first driving intention information of the non-intelligent networked vehicle from the first driving intention information of the at least one vehicle included in the vehicle perception area image, thereby determining the first driving intention information of each non-intelligent networked vehicle;
and wherein determining first driving state information of each non-intelligent networked vehicle among the m vehicles comprises:
for any non-intelligent networked vehicle in the vehicle sensing area, determining the longitude and latitude coordinates of the non-intelligent networked vehicle in a radar coordinate system according to a coordinate conversion rule between the radar coordinate system and a video image coordinate system, based on the image center position coordinates of the non-intelligent networked vehicle in the vehicle perception area image; and
determining, according to the longitude and latitude coordinates of the non-intelligent networked vehicle in the radar coordinate system, the first driving state information of the non-intelligent networked vehicle from the first driving state information of the at least one vehicle in the vehicle sensing area, thereby determining the first driving state information of each non-intelligent networked vehicle.
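The plate-matching classification of claim 5 amounts to a set-difference between camera-recognized plates and V2X-reported plates, followed by a lookup of the per-vehicle intention record by image center. A minimal sketch, with illustrative data shapes not specified by the patent:

```python
def classify_by_plate(detected, networked_plates):
    """Return the camera-detected vehicles that are non-networked.

    `detected` maps plate number -> (cx, cy) image-center coordinates
    from vehicle attribute recognition; `networked_plates` is the set of
    plate numbers reported by intelligent networked vehicles over V2X.
    """
    return {plate: center for plate, center in detected.items()
            if plate not in networked_plates}

def lookup_intent(center, intent_by_center):
    """Pick the intention record whose image center is nearest to
    `center` (squared pixel distance), matching a non-networked vehicle
    to the intention detected for it in the perception-area image."""
    return min(intent_by_center.items(),
               key=lambda kv: (kv[0][0] - center[0]) ** 2
                            + (kv[0][1] - center[1]) ** 2)[1]
```

In use, a vehicle whose recognized plate has no V2X counterpart is treated as non-networked, and its first driving intention is read off the intention record at (or nearest to) its detected image center.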
6. The method of claim 3, wherein determining first driving state information of each non-intelligent networked vehicle among the m vehicles comprises:
for any vehicle sensing area, acquiring the longitude and latitude coordinates of at least one intelligent networked vehicle through the vehicle-mounted sensing equipment of the at least one intelligent networked vehicle located in the vehicle sensing area;
determining the longitude and latitude coordinates of the radar detection equipment arranged at the non-signal control road intersection for the vehicle sensing area, and acquiring, through the radar detection equipment, the relative distance between at least one vehicle in the vehicle sensing area and the radar detection equipment;
determining the longitude and latitude coordinates of the at least one vehicle according to the longitude and latitude coordinates of the radar detection equipment and the relative distance between the at least one vehicle and the radar detection equipment; and
for the longitude and latitude coordinates of each vehicle of the at least one vehicle, if the differences between the longitude and latitude coordinates of the vehicle and those of each of the at least one intelligent networked vehicle do not fall within a set threshold, determining that the vehicle is a non-intelligent networked vehicle, and determining, according to the longitude and latitude coordinates of the non-intelligent networked vehicle, the first driving state information of the non-intelligent networked vehicle from the first driving state information of the at least one vehicle, thereby determining the first driving state information of each non-intelligent networked vehicle;
and wherein determining first driving intention information of each non-intelligent networked vehicle among the m vehicles comprises:
for any non-intelligent networked vehicle in the vehicle sensing area, determining the image center position coordinates of the non-intelligent networked vehicle in a video image coordinate system according to a coordinate conversion rule between a radar coordinate system and the video image coordinate system, based on the longitude and latitude coordinates of the non-intelligent networked vehicle in the radar coordinate system; and
determining, according to the image center position coordinates of the non-intelligent networked vehicle in the video image coordinate system, the first driving intention information of the non-intelligent networked vehicle from the first driving intention information of the at least one vehicle in the vehicle sensing area, thereby determining the first driving intention information of each non-intelligent networked vehicle in each vehicle sensing area.
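The position-matching test of claim 6 can be sketched as follows. This is an illustrative Python rendering, not patent text: the `eps` matching threshold in degrees is an assumed stand-in for the patent's unspecified "set threshold", and the simple per-axis comparison ignores the latitude-dependent scaling a real implementation would apply.

```python
def match_non_networked(radar_targets, networked_positions, eps=1e-4):
    """Label a radar target as non-networked when no intelligent
    networked vehicle reports a position within `eps` degrees of it.

    `radar_targets`: list of (lat, lon) pairs derived from the radar
    equipment's own longitude/latitude plus each target's measured
    relative distance; `networked_positions`: (lat, lon) pairs reported
    by the vehicles' on-board sensing equipment over V2X.
    """
    non_networked = []
    for lat, lon in radar_targets:
        matched = any(abs(lat - nlat) <= eps and abs(lon - nlon) <= eps
                      for nlat, nlon in networked_positions)
        if not matched:
            non_networked.append((lat, lon))
    return non_networked
```

Targets that do match a reported position are the networked vehicles themselves; the remainder inherit their first driving state information from the radar track at those coordinates.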
7. The method of any one of claims 1 to 6, wherein generating traffic scheduling information for each intelligent networked vehicle located within each vehicle sensing area based on the first driving state information and first driving intention information of each non-intelligent networked vehicle and the second driving state information and second driving intention information of each intelligent networked vehicle comprises:
generating, according to a preset vehicle passing rule and based on the first driving state information and first driving intention information of each non-intelligent networked vehicle and the second driving state information and second driving intention information of each intelligent networked vehicle, a passing priority at the non-signal control road intersection for each intelligent networked vehicle and each non-intelligent networked vehicle in each vehicle sensing area; and
generating a traffic scheduling instruction for each intelligent networked vehicle according to the passing priorities of the intelligent networked vehicles and the non-intelligent networked vehicles at the non-signal control road intersection, wherein the traffic scheduling instruction instructs each intelligent networked vehicle to pass through the non-signal control road intersection in turn according to the passing order.
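The priority-then-instruction pipeline of claim 7 can be sketched in a few lines of Python. The ranking rule below — earlier arrival first, straight-ahead before right turns before left turns on a tie — is an illustrative stand-in for the patent's "preset vehicle passing rule", which the claims do not specify; field names are likewise assumed.

```python
def schedule_crossing(vehicles):
    """Assign a passing order through an unsignalised intersection.

    `vehicles`: list of dicts with 'id', 'eta' (estimated seconds to
    the stop line, from driving state information) and 'intent'
    ('straight' | 'right' | 'left', from driving intention information).
    Returns one scheduling instruction per vehicle: its crossing order.
    """
    intent_rank = {"straight": 0, "right": 1, "left": 2}
    ranked = sorted(vehicles,
                    key=lambda v: (v["eta"], intent_rank[v["intent"]]))
    return {v["id"]: order for order, v in enumerate(ranked, start=1)}
```

Non-networked vehicles would be ranked the same way, but only the networked vehicles actually receive an instruction; the others are simply accounted for in the ordering.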
8. The method of claim 7, further comprising, after generating the traffic scheduling instruction for each intelligent networked vehicle:
if at least one non-intelligent networked vehicle is detected to have crossed the stop waiting line of its lane, sending prompt information to at least one intelligent networked vehicle currently passing through the non-signal control road intersection, and sending waiting information to the other intelligent networked vehicles that have not yet entered the non-signal control road intersection; wherein the prompt information comprises the first driving state information and first driving intention information of the at least one non-intelligent networked vehicle, and prompts the at least one intelligent networked vehicle to decelerate and yield to the at least one non-intelligent networked vehicle according to that information; and the waiting information instructs the other intelligent networked vehicles that have not yet entered the non-signal control road intersection to remain in their lanes and wait to pass.
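The two message flows of claim 8 — an avoidance prompt for vehicles already in the intersection and a hold instruction for everyone else — can be sketched as below. All field names (`state`, `intent`, `in_intersection`, message `type` values) are illustrative assumptions; the patent does not define a message format.

```python
def on_stop_line_crossed(non_networked, networked):
    """Build per-vehicle messages when non-networked vehicles cross
    their stop waiting line.

    `non_networked`: list of dicts with 'state' and 'intent' for each
    vehicle that crossed; `networked`: list of dicts with 'id' and
    'in_intersection' (True once the vehicle has entered the junction).
    """
    prompt = {"type": "prompt",
              "payload": [{"state": v["state"], "intent": v["intent"]}
                          for v in non_networked]}
    wait = {"type": "wait"}
    msgs = {}
    for v in networked:
        # Vehicles already crossing get the decelerate-and-yield prompt,
        # including the intruder's state and intention; the rest are
        # told to hold at their lane and wait to pass.
        msgs[v["id"]] = prompt if v["in_intersection"] else wait
    return msgs
```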
9. A cooperative road intersection passing device, comprising:
an acquisition unit, configured to acquire first driving information of m vehicles in each vehicle sensing area through each roadside sensing device arranged at any non-signal control road intersection, and to acquire second driving information of each intelligent networked vehicle through the vehicle-mounted sensing equipment of each intelligent networked vehicle in each vehicle sensing area; and
a processing unit, configured to determine first driving state information and first driving intention information of the m vehicles based on the first driving information of the m vehicles, and determine second driving state information and second driving intention information of each intelligent networked vehicle based on the second driving information of each intelligent networked vehicle; determine the first driving state information and first driving intention information of each non-intelligent networked vehicle among the m vehicles; generate traffic scheduling information for each intelligent networked vehicle located in each vehicle sensing area based on the first driving state information and first driving intention information of each non-intelligent networked vehicle and the second driving state information and second driving intention information of each intelligent networked vehicle; and schedule each intelligent networked vehicle according to the traffic scheduling information.
10. A computing device comprising at least one processor and at least one memory, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform the method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210684180.4A CN115171371B (en) | 2022-06-16 | 2022-06-16 | Cooperative road intersection passing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210684180.4A CN115171371B (en) | 2022-06-16 | 2022-06-16 | Cooperative road intersection passing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115171371A true CN115171371A (en) | 2022-10-11 |
CN115171371B CN115171371B (en) | 2024-03-19 |
Family
ID=83484950
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210684180.4A Active CN115171371B (en) | 2022-06-16 | 2022-06-16 | Cooperative road intersection passing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115171371B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024131839A1 (en) * | 2022-12-23 | 2024-06-27 | 华为技术有限公司 | Radar transmission and processing method and device |
CN118447703A (en) * | 2024-07-04 | 2024-08-06 | 山东科技大学 | A traffic management system and method for right-turn vehicles at an intersection |
Citations (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750837A (en) * | 2012-06-26 | 2012-10-24 | 北京航空航天大学 | No-signal intersection vehicle and vehicle cooperative collision prevention system |
CN104616541A (en) * | 2015-02-03 | 2015-05-13 | 吉林大学 | Fish streaming based non-signal intersection vehicle-vehicle cooperation control system |
CN104916152A (en) * | 2015-05-19 | 2015-09-16 | 苏州大学 | Cooperative vehicle infrastructure system-based intersection vehicle right turning guidance system and guidance method thereof |
CN105321362A (en) * | 2015-10-30 | 2016-02-10 | 湖南大学 | Intersection vehicle intelligent cooperative passage method |
US9672734B1 (en) * | 2016-04-08 | 2017-06-06 | Sivalogeswaran Ratnasingam | Traffic aware lane determination for human driver and autonomous vehicle driving system |
CN107730883A (en) * | 2017-09-11 | 2018-02-23 | 北方工业大学 | Intersection area vehicle scheduling method in Internet of vehicles environment |
US20180275678A1 (en) * | 2017-03-23 | 2018-09-27 | Edward Andert | Systems, methods, and apparatuses for implementing time sensitive autonomous intersection management |
CN109035767A (en) * | 2018-07-13 | 2018-12-18 | 北京工业大学 | A kind of tide lane optimization method considering Traffic Control and Guidance collaboration |
CN109448385A (en) * | 2019-01-04 | 2019-03-08 | 北京钛星科技有限公司 | Dispatch system and method in automatic driving vehicle intersection based on bus or train route collaboration |
CN109461320A (en) * | 2018-12-20 | 2019-03-12 | 清华大学苏州汽车研究院(吴江) | Intersection speed planing method based on car networking |
US20190088148A1 (en) * | 2018-07-20 | 2019-03-21 | Cybernet Systems Corp. | Autonomous transportation system and methods |
CN109859474A (en) * | 2019-03-12 | 2019-06-07 | 沈阳建筑大学 | Unsignalized intersection right-of-way distribution system and method based on intelligent vehicle-carried equipment |
US20200193813A1 (en) * | 2018-08-02 | 2020-06-18 | Beijing Tusen Weilai Technology Co., Ltd. | Navigation method, device and system for cross intersection |
CN111445692A (en) * | 2019-12-24 | 2020-07-24 | 清华大学 | Speed collaborative optimization method for intelligent networked automobile at signal-lamp-free intersection |
CN111599215A (en) * | 2020-05-08 | 2020-08-28 | 北京交通大学 | Non-signalized intersection mobile block vehicle guiding system and method based on Internet of vehicles |
CN111785062A (en) * | 2020-04-01 | 2020-10-16 | 北京京东乾石科技有限公司 | Method and device for realizing vehicle-road cooperation at signal lamp-free intersection |
US10807610B1 (en) * | 2019-07-23 | 2020-10-20 | Alps Alpine Co., Ltd. | In-vehicle systems and methods for intersection guidance |
CN212750105U (en) * | 2020-05-15 | 2021-03-19 | 青岛海信网络科技股份有限公司 | Device for testing vehicle-road cooperation |
CN112820125A (en) * | 2021-03-24 | 2021-05-18 | 苏州大学 | Intelligent internet vehicle traffic guidance method and system under mixed traffic condition |
CN113112797A (en) * | 2021-04-09 | 2021-07-13 | 交通运输部公路科学研究所 | Signal lamp intersection scheduling method and system based on vehicle-road cooperation technology |
CN113129623A (en) * | 2017-12-28 | 2021-07-16 | 北京百度网讯科技有限公司 | Cooperative intersection traffic control method, device and equipment |
KR102317430B1 (en) * | 2021-05-27 | 2021-10-26 | 주식회사 라이드플럭스 | Method, server and computer program for creating road network map to design a driving plan for automatic driving vehicle |
CN113744564A (en) * | 2021-08-23 | 2021-12-03 | 上海智能新能源汽车科创功能平台有限公司 | Intelligent network bus road cooperative control system based on edge calculation |
CN215265095U (en) * | 2021-07-16 | 2021-12-21 | 青岛理工大学 | An intelligent intersection control system based on vehicle-road coordination |
CN113971883A (en) * | 2021-10-29 | 2022-01-25 | 四川省公路规划勘察设计研究院有限公司 | Vehicle-road collaborative autonomous driving method and efficient transportation system |
US20220058945A1 (en) * | 2020-08-19 | 2022-02-24 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for collaborative intersection management |
CN114093169A (en) * | 2021-11-22 | 2022-02-25 | 安徽达尔智能控制系统股份有限公司 | Vehicle-road cooperation method and system of road side sensing equipment based on V2X |
CN114399914A (en) * | 2022-01-20 | 2022-04-26 | 交通运输部公路科学研究所 | A vehicle-road coordination method and system for joint scheduling of lanes, signal lights and vehicles |
CN114512007A (en) * | 2020-11-17 | 2022-05-17 | 长沙智能驾驶研究院有限公司 | Intersection traffic coordination method and device |
CN114537398A (en) * | 2022-04-13 | 2022-05-27 | 梅赛德斯-奔驰集团股份公司 | Method and device for assisting a vehicle in driving at an intersection |
Patent Citations (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750837A (en) * | 2012-06-26 | 2012-10-24 | 北京航空航天大学 | No-signal intersection vehicle and vehicle cooperative collision prevention system |
CN104616541A (en) * | 2015-02-03 | 2015-05-13 | 吉林大学 | Fish streaming based non-signal intersection vehicle-vehicle cooperation control system |
CN104916152A (en) * | 2015-05-19 | 2015-09-16 | 苏州大学 | Cooperative vehicle infrastructure system-based intersection vehicle right turning guidance system and guidance method thereof |
CN105321362A (en) * | 2015-10-30 | 2016-02-10 | 湖南大学 | Intersection vehicle intelligent cooperative passage method |
US9672734B1 (en) * | 2016-04-08 | 2017-06-06 | Sivalogeswaran Ratnasingam | Traffic aware lane determination for human driver and autonomous vehicle driving system |
US20180275678A1 (en) * | 2017-03-23 | 2018-09-27 | Edward Andert | Systems, methods, and apparatuses for implementing time sensitive autonomous intersection management |
CN107730883A (en) * | 2017-09-11 | 2018-02-23 | 北方工业大学 | Intersection area vehicle scheduling method in Internet of vehicles environment |
CN113129623A (en) * | 2017-12-28 | 2021-07-16 | 北京百度网讯科技有限公司 | Cooperative intersection traffic control method, device and equipment |
CN109035767A (en) * | 2018-07-13 | 2018-12-18 | 北京工业大学 | A kind of tide lane optimization method considering Traffic Control and Guidance collaboration |
US20190088148A1 (en) * | 2018-07-20 | 2019-03-21 | Cybernet Systems Corp. | Autonomous transportation system and methods |
US20200193813A1 (en) * | 2018-08-02 | 2020-06-18 | Beijing Tusen Weilai Technology Co., Ltd. | Navigation method, device and system for cross intersection |
CN109461320A (en) * | 2018-12-20 | 2019-03-12 | 清华大学苏州汽车研究院(吴江) | Intersection speed planing method based on car networking |
CN109448385A (en) * | 2019-01-04 | 2019-03-08 | 北京钛星科技有限公司 | Dispatch system and method in automatic driving vehicle intersection based on bus or train route collaboration |
CN109859474A (en) * | 2019-03-12 | 2019-06-07 | 沈阳建筑大学 | Unsignalized intersection right-of-way distribution system and method based on intelligent vehicle-carried equipment |
US10807610B1 (en) * | 2019-07-23 | 2020-10-20 | Alps Alpine Co., Ltd. | In-vehicle systems and methods for intersection guidance |
CN111445692A (en) * | 2019-12-24 | 2020-07-24 | 清华大学 | Speed collaborative optimization method for intelligent networked automobile at signal-lamp-free intersection |
CN111785062A (en) * | 2020-04-01 | 2020-10-16 | 北京京东乾石科技有限公司 | Method and device for realizing vehicle-road cooperation at signal lamp-free intersection |
CN111599215A (en) * | 2020-05-08 | 2020-08-28 | 北京交通大学 | Non-signalized intersection mobile block vehicle guiding system and method based on Internet of vehicles |
CN212750105U (en) * | 2020-05-15 | 2021-03-19 | 青岛海信网络科技股份有限公司 | Device for testing vehicle-road cooperation |
US20220058945A1 (en) * | 2020-08-19 | 2022-02-24 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for collaborative intersection management |
CN114512007A (en) * | 2020-11-17 | 2022-05-17 | 长沙智能驾驶研究院有限公司 | Intersection traffic coordination method and device |
WO2022105797A1 (en) * | 2020-11-17 | 2022-05-27 | 长沙智能驾驶研究院有限公司 | Intersection traffic coordination method and apparatus |
CN112820125A (en) * | 2021-03-24 | 2021-05-18 | 苏州大学 | Intelligent internet vehicle traffic guidance method and system under mixed traffic condition |
CN113112797A (en) * | 2021-04-09 | 2021-07-13 | 交通运输部公路科学研究所 | Signal lamp intersection scheduling method and system based on vehicle-road cooperation technology |
KR102317430B1 (en) * | 2021-05-27 | 2021-10-26 | 주식회사 라이드플럭스 | Method, server and computer program for creating road network map to design a driving plan for automatic driving vehicle |
CN215265095U (en) * | 2021-07-16 | 2021-12-21 | 青岛理工大学 | An intelligent intersection control system based on vehicle-road coordination |
CN113744564A (en) * | 2021-08-23 | 2021-12-03 | 上海智能新能源汽车科创功能平台有限公司 | Intelligent network bus road cooperative control system based on edge calculation |
CN113971883A (en) * | 2021-10-29 | 2022-01-25 | 四川省公路规划勘察设计研究院有限公司 | Vehicle-road collaborative autonomous driving method and efficient transportation system |
CN114093169A (en) * | 2021-11-22 | 2022-02-25 | 安徽达尔智能控制系统股份有限公司 | Vehicle-road cooperation method and system of road side sensing equipment based on V2X |
CN114399914A (en) * | 2022-01-20 | 2022-04-26 | 交通运输部公路科学研究所 | A vehicle-road coordination method and system for joint scheduling of lanes, signal lights and vehicles |
CN114537398A (en) * | 2022-04-13 | 2022-05-27 | 梅赛德斯-奔驰集团股份公司 | Method and device for assisting a vehicle in driving at an intersection |
Also Published As
Publication number | Publication date |
---|---|
CN115171371B (en) | 2024-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12005904B2 (en) | Autonomous driving system | |
US12037015B2 (en) | Vehicle control device and vehicle control method | |
CN113376657B (en) | Automatic tagging system for autonomous vehicle LIDAR data | |
US11130492B2 (en) | Vehicle control device, vehicle control method, and storage medium | |
CN107826104B (en) | Method for providing information about an intended driving intention of a vehicle | |
US10935976B2 (en) | Blinker judgment device and autonomous driving system | |
CN109427213B (en) | Collision avoidance apparatus, method and non-transitory storage medium for vehicle | |
US12183205B2 (en) | Obstacle information management device, obstacle information management method, and device for vehicle | |
US8050460B2 (en) | Method for recognition of an object | |
WO2020116264A1 (en) | Vehicle travel assistance method, vehicle travel assistance device and autonomous driving system | |
CN111731296A (en) | Travel control device, travel control method, and storage medium storing program | |
US20200096360A1 (en) | Method for planning trajectory of vehicle | |
US20220009496A1 (en) | Vehicle control device, vehicle control method, and non-transitory computer-readable medium | |
JP2019049812A (en) | Traveling position evaluation system | |
CN115171371B (en) | Cooperative road intersection passing method and device | |
CN114973644B (en) | Road information generating device | |
JP2022142165A (en) | VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD AND COMPUTER PROGRAM FOR VEHICLE CONTROL | |
CN111319631A (en) | Vehicle control device and vehicle control method | |
CN116745195A (en) | Method and system for safe driving outside lane | |
CN113454555A (en) | Trajectory prediction for driving strategies | |
CN113228131A (en) | Method and system for providing ambient data | |
US11710408B2 (en) | Communication apparatus, vehicle, computer-readable storage medium, and communication method | |
CN112689584B (en) | Automatic driving control method and automatic driving control system | |
US12240450B2 (en) | V2X warning system for identifying risk areas within occluded regions | |
CN113492844B (en) | Vehicle control device, vehicle control method, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||