
US10880293B2 - Authentication of vehicle-to-vehicle communications - Google Patents


Info

Publication number
US10880293B2
Authority
US
United States
Prior art keywords: vehicle, sensor data, primary, data, authentication
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/040,013
Other versions
US20190068582A1 (en)
Inventor
Yu Seung Kim
Jinhyoung Oh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ford Global Technologies LLC
Original Assignee
Ford Global Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ford Global Technologies LLC filed Critical Ford Global Technologies LLC
Priority to US16/040,013
Assigned to Ford Global Technologies, LLC. Assignors: Kim, Yu Seung; Oh, Jinhyoung
Priority to CN201810952136.0A
Priority to DE102018120655.0A
Publication of US20190068582A1
Priority to US16/951,727
Application granted
Publication of US10880293B2
Legal status: Active
Expiration: adjusted

Classifications

    • H04L 63/0823: Network architectures or network communication protocols for network security; authentication of entities using certificates
    • H04W 4/46: Services specially adapted for vehicles, e.g. vehicle-to-pedestrians [V2P]; vehicle-to-vehicle communication [V2V]
    • B60W 30/09: Active safety systems predicting or avoiding probable or impending collision; taking automatic action to avoid collision, e.g. braking and steering
    • G01S 5/0072: Transmission of position information to remote stations; transmission between mobile stations, e.g. anti-collision systems
    • G01S 5/0284: Position-fixing using radio waves; relative positioning
    • G05D 1/0088: Control of position, course, altitude or attitude of land, water, air or space vehicles, characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G07C 5/008: Registering or indicating the working of vehicles; communicating information to a remotely located station
    • H04W 12/06: Security arrangements; authentication
    • H04W 12/1202
    • H04W 12/121: Wireless intrusion detection systems [WIDS]; wireless intrusion prevention systems [WIPS]
    • H04W 12/122: Counter-measures against attacks; protection against rogue devices
    • H04W 4/02: Services making use of location information
    • H04W 4/023: Location services using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • H04W 4/026: Location services using orientation information, e.g. compass
    • H04W 4/027: Location services using movement velocity, acceleration information
    • B60W 2556/65: Data transmitted between vehicles
    • G05D 2201/0213
    • H04L 63/1425: Network security; traffic logging, e.g. anomaly detection
    • H04L 63/1483: Countermeasures against service impersonation, e.g. phishing, pharming or web spoofing
    • H04W 12/00503
    • H04W 12/1004
    • H04W 12/104: Location integrity, e.g. secure geotagging
    • H04W 12/63: Context-dependent security; location-dependent, proximity-dependent

Definitions

  • This disclosure relates to performing authenticated vehicle-to-vehicle communications.
  • Automobiles provide a significant portion of transportation for commercial, government, and private entities.
  • Autonomous vehicles and driving assistance systems are currently being developed and deployed to provide safety, reduce an amount of user input required, or even eliminate user involvement entirely.
  • Some driving assistance systems, such as crash avoidance systems, may monitor the driving, positions, and velocity of the vehicle and other objects while a human is driving. When the system detects that a crash or impact is imminent, the crash avoidance system may intervene and apply a brake, steer the vehicle, or perform other avoidance or safety maneuvers.
  • Autonomous vehicles may drive and navigate with little or no user input. Autonomous vehicles may further communicate with other autonomous vehicles to aid in crash avoidance and safety maneuvers. Efficient authentication of a vehicle's identity may be beneficial in communication between two or more autonomous vehicles.
  • Vehicle-to-vehicle (V2V) communication is known for improving safety features for each vehicle.
  • In V2V communications, the authenticity of a vehicle identity is verified by its digital certificate, and the validity of the digital certificate may be secured through a public-key infrastructure (PKI) system in which certificates are issued by a certificate authority (CA).
  • However, even benign CAs can mistakenly issue a valid certificate to unauthorized parties.
  • FIG. 1 is a schematic block diagram of an autonomous vehicle or assisted driving system in accordance with the teachings and principles of the disclosure
  • FIG. 2 is a schematic block diagram of an example computing device suitable for implementing methods in accordance with the teachings and principles of the disclosure
  • FIG. 3 is a schematic diagram illustrating relative positions of vehicles performing authentication in accordance with the teachings and principles of the disclosure
  • FIGS. 4A, 4B, 5A and 5B are diagrams of images that may be processed in accordance with the teachings and principles of the disclosure.
  • FIG. 6 is a diagram illustrating distances and angles between vehicles performing authentication in accordance with the teachings and principles of the disclosure
  • FIG. 7A is a diagram illustrating distances and angles measured from a vehicle camera in accordance with the teachings and principles of the disclosure
  • FIG. 7B is a diagram illustrating the location of a vehicle license plate in an image in accordance with the teachings and principles of the disclosure.
  • FIG. 8 is a schematic block diagram illustrating a process flow for authenticating a vehicle in accordance with the teachings and principles of the disclosure
  • FIG. 9 is a plan view illustrating vehicle positions when an image is captured in accordance with the teachings and principles of the disclosure.
  • FIG. 10 is a plan view illustrating object maps captured by nearby vehicles in accordance with the teachings and principles of the disclosure.
  • FIG. 11 is a plan view illustrating translation and rotation to determine whether object maps captured by nearby vehicles are similar in accordance with the teachings and principles of the disclosure
  • FIG. 12 is a plan view illustrating velocity vector maps of nearby vehicles for use in authentication in accordance with the teachings and principles of the disclosure
  • FIG. 13 is a schematic flow chart diagram illustrating a method for authenticating a vehicle, in accordance with the teachings and principles of the disclosure
  • FIG. 14 is a schematic flow chart diagram illustrating a method for authenticating a vehicle, in accordance with the teachings and principles of the disclosure.
  • FIG. 15 is a schematic flow chart diagram illustrating a method for authenticating a vehicle, in accordance with the teachings and principles of the disclosure.
  • While V2V communications are intended to increase the security and safety of vehicles, they also open up potential threats from adversaries.
  • An attacker can launch different types of attacks to benefit themselves or to maliciously cause damage to victims. For example, attackers may transmit inaccurate information to divert neighboring vehicles from their paths and gain a free path, or may forge their sensor information to circumvent liability for accidents. Platooning vehicles are also vulnerable to collision induction attacks.
  • Sybil attacks are also possible by using multiple non-existing identities or pseudonyms. Hence, securing inter-vehicular communications is of critical significance and may save users from life-threatening attacks.
  • Supplementary mechanisms may be supplied in addition to public-key infrastructure (PKI) authentication for V2V communication over Dedicated Short-Range Communications (DSRC).
  • Such supplementary mechanisms may leverage camera sensors that are already prevalent in many autonomous or driving assistance vehicles.
  • Such supplementary mechanisms may include two vehicles taking a snapshot of each other, exchanging the snapshots, and verifying each other's identity by extracting, for example, a vehicle number, a relative distance between the vehicles, an azimuth angle, and so forth from the received image.
  • an advanced attacker may still be able to impersonate a vehicle's identity by preparing a dictionary of images taken from various locations offline and selecting an image from the dictionary to pretend to be the other vehicle around the victim vehicle.
  • an attacking vehicle might take a snapshot of the victim vehicle, crop out the victim vehicle from the snapshot, and superimpose the victim vehicle into the appropriate position to mislead the victim vehicle to believe that the image was taken by the attacking vehicle. Due to recent advances in computer vision, new camera image modification techniques may be available to an attacking vehicle.
  • a method for authenticating vehicle-to-vehicle communication includes receiving sensor data from a first vehicle and receiving secondary sensor data from a second vehicle.
  • the method includes extracting, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle relative to the first vehicle, or a common object identified by the sensor data and the secondary sensor data.
  • the method further includes determining whether the authentication satisfies a trust threshold for the first vehicle.
  • the method may further include permitting communication between the first vehicle and the second vehicle if the authentication satisfies the trust threshold.
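As a rough illustration of this decision flow, the following is a minimal sketch; the function name, object representation, and threshold value are assumptions chosen for illustration, not terms from the disclosure:

```python
def authenticate_v2v(own_objects: set, peer_objects: set,
                     proximity_ok: bool, trust_threshold: float = 0.5) -> bool:
    """Permit V2V communication only if the peer's data passes the checks.

    own_objects / peer_objects: identifiers of objects detected by each
    vehicle; proximity_ok: result of the relative-proximity check.
    """
    common = own_objects & peer_objects             # commonly observed objects
    score = len(common) / max(1, len(own_objects))  # fraction held in common
    return proximity_ok and score >= trust_threshold

# Example: three of four locally detected objects also appear in the
# peer's sensor data, so the trust threshold is satisfied.
print(authenticate_v2v({"C", "D", "E", "F"}, {"C", "D", "E"}, True))  # True
```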
  • Embodiments in accordance with the present disclosure may be embodied as an apparatus, method, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, the present disclosure may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
  • a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device.
  • a computer-readable medium may comprise any non-transitory medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on a computer system as a stand-alone software package, on a stand-alone hardware unit, partly on a remote computer spaced some distance from the computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • a controller 102 may be housed within a vehicle.
  • the vehicle may include any vehicle known in the art.
  • the vehicle may have all of the structures and features of any vehicle known in the art including, wheels, a drive train coupled to the wheels, an engine coupled to the drive train, a steering system, a braking system, and other systems known in the art to be included in a vehicle.
  • the controller 102 may perform autonomous navigation and collision avoidance.
  • the controller 102 may perform authenticated V2V communication in accordance with an embodiment of the present disclosure.
  • The controller 102 may be coupled to a forward-facing camera 104 and a rearward-facing camera 106 .
  • the forward facing camera 104 may be mounted on a vehicle with a field of view facing forward and the rearward-facing camera 106 may be mounted to the vehicle having the field of view thereof facing in a rearward direction.
  • the rearward-facing camera 106 may be a conventional back-up camera or a separate camera having a different field of view.
  • the cameras 104 , 106 may be used for performing authentication methods as disclosed herein and may additionally be used for performing obstacle detection.
  • the controller 102 may be coupled to one or more other sensing devices 108 , which may include microphones or other sensors useful for detecting obstacles, such as RADAR, LIDAR, SONAR, ultrasound, and the like.
  • A plurality of different sensors may be attached to an advanced driver-assistance system (ADAS) bus or system. Any of these available sensors may provide sensor data for purposes of assisted driving, automated driving, and/or authenticating vehicle-to-vehicle (V2V) communication.
  • the controller 102 may execute a V2V module 110 a .
  • the V2V module 110 a includes a location verification module 112 a .
  • the location verification module 112 a verifies that another vehicle seeking to communicate with the controller 102 using V2V communication is in fact a vehicle in proximity to the controller 102 .
  • the location verification module 112 a verifies the location of the other vehicle by exchanging images or other sensor data (e.g., frames of RADAR, LIDAR, SONAR, ultrasound, or other sensor data), object maps, and/or velocity maps, as discussed in greater detail below.
  • the V2V module 110 a may further include an authentication module 112 b .
  • the authentication module 112 b performs key exchange, such as using the Diffie-Hellman approach, public key encryption, or some other authentication technique.
  • The authentication module 112 b may further handle performing secured communication between the controller and the other vehicle. The manner in which authentication and secured communication are performed is described in greater detail below.
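For context, the key exchange such a module builds on can be as simple as unauthenticated Diffie-Hellman, which is exactly what leaves room for man-in-the-middle attacks absent the physical binding described later. A deliberately toy sketch (small, insecure parameters chosen only for illustration):

```python
import secrets

# Toy Diffie-Hellman: p is the Mersenne prime 2**127 - 1 and g = 3.
# Real deployments use standardized groups and much larger parameters.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 2   # vehicle A's private exponent
b = secrets.randbelow(p - 2) + 2   # vehicle B's private exponent
A = pow(g, a, p)                   # g^a mod p, sent from A to B
B = pow(g, b, p)                   # g^b mod p, sent from B to A

# Both sides derive the same shared secret; note that nothing here proves
# *who* sent g^a or g^b, which is the gap the binding below addresses.
assert pow(B, a, p) == pow(A, b, p)
```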
  • the controller 102 may further execute an obstacle identification module 110 b , collision prediction module 110 c , and decision module 110 d .
  • The obstacle identification module 110 b may analyze one or more image streams from the cameras 104 , 106 or other cameras and identify potential obstacles, including people, animals, vehicles, buildings, curbs, and other objects and structures.
  • the obstacle identification module 110 b may additionally identify potential obstacles from outputs of the sensing devices 108 , such as using data from a LIDAR, RADAR, ultrasound, or other sensing system.
  • The collision prediction module 110 c predicts which identified obstacles are likely to collide with the vehicle based on its current trajectory or current intended path.
  • the decision module 110 d may make a decision to stop, accelerate, turn, etc. in order to avoid obstacles.
  • the manner in which the collision prediction module 110 c predicts potential collisions and the manner in which the decision module 110 d takes action to avoid potential collisions may be according to any method or system known in the art of autonomous vehicles.
  • the decision module 110 d may control the trajectory of the vehicle by actuating one or more actuators 114 controlling the direction and speed of the vehicle.
  • the actuators 114 may include a steering actuator 116 a , an accelerator actuator 116 b , and a brake actuator 116 c .
  • the configuration of the actuators 116 a - 116 c may be according to any implementation of such actuators known in the art of autonomous vehicles.
  • FIG. 2 is a block diagram illustrating an example computing device 200 .
  • Computing device 200 may be used to perform various procedures, such as those discussed herein.
  • the controller 102 may have some or all of the attributes of the computing device 200 .
  • Computing device 200 includes one or more processor(s) 202 , one or more memory device(s) 204 , one or more interface(s) 206 , one or more mass storage device(s) 208 , one or more Input/Output (I/O) device(s) 210 , and a display device 230 all of which are coupled to a bus 212 .
  • Processor(s) 202 include one or more processors or controllers that execute instructions stored in memory device(s) 204 and/or mass storage device(s) 208 .
  • Processor(s) 202 may also include various types of computer-readable media, such as cache memory.
  • Memory device(s) 204 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 214 ) and/or nonvolatile memory (e.g., read-only memory (ROM) 216 ). Memory device(s) 204 may also include rewritable ROM, such as Flash memory.
  • volatile memory e.g., random access memory (RAM) 214
  • ROM read-only memory
  • Memory device(s) 204 may also include rewritable ROM, such as Flash memory.
  • Mass storage device(s) 208 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown in FIG. 2 , a particular mass storage device is a hard disk drive 224 . Various drives may also be included in mass storage device(s) 208 to enable reading from and/or writing to the various computer readable media. Mass storage device(s) 208 include removable media 226 and/or non-removable media.
  • I/O device(s) 210 include various devices that allow data and/or other information to be input to or retrieved from computing device 200 .
  • Example I/O device(s) 210 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
  • Display device 230 includes any type of device capable of displaying information to one or more users of computing device 200 .
  • Examples of display device 230 include a monitor, display terminal, video projection device, and the like.
  • Interface(s) 206 include various interfaces that allow computing device 200 to interact with other systems, devices, or computing environments.
  • Example interface(s) 206 include any number of different network interfaces 220 , such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet.
  • Other interface(s) include user interface 218 and peripheral device interface 222 .
  • the interface(s) 206 may also include one or more peripheral interfaces such as interfaces for pointing devices (mice, track pad, etc.), keyboards, and the like.
  • Bus 212 allows processor(s) 202 , memory device(s) 204 , interface(s) 206 , mass storage device(s) 208 , I/O device(s) 210 , and display device 230 to communicate with one another, as well as other devices or components coupled to bus 212 .
  • Bus 212 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
  • programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 200 , and are executed by processor(s) 202 .
  • the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware.
  • one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
  • A vehicle authentication approach, referred to herein as VAuth, includes capturing a car's visual contextual information using a camera as a means to bind its physical identity and its co-presence to the other vehicle. Specifically, two moving vehicles on the road have a unique pair of relative distance (d) and angle (θ) at a given point of time that no other vehicles can experience.
  • FIG. 3 illustrates an embodiment of the use of VAuth 300.
  • vehicles A and B both take a snapshot of each other simultaneously (e.g. within one second, preferably within 100 ms, more preferably within 10 ms) and exchange the images to prove their relative d and ⁇ .
  • VAuth 300 leverages cryptographic commitment and decommitment schemes to bind the vehicle's cryptographic key with its physical identity (license plate number) and its co-presence (d and ⁇ ), which help to infer the location.
  • VAuth 300 eliminates the aforementioned location spoofing impersonation attacks. Through this binding, VAuth 300 is also robust against Man-in-the-Middle (MitM) attacks that may be present during the key agreement steps. In addition, VAuth 300 may also restrict image forgery and spoofing attacks because each vehicle may verify the validity of the received image using commonly observed objects (e.g., neighboring vehicles, road signs, terrains in the background, etc.).
  • VAuth 300 enables automated key establishment among mobile vehicles even where the following constraints are present.
  • The main goal of VAuth 300 is to secure against location spoofing impersonation attacks in V2V communications by binding the physical identity and co-presence of a pair of neighboring cars.
  • We define "neighboring vehicles" as vehicles within each other's line of sight (i.e., the camera's field of view). In doing so, we enable a pair of vehicles to establish a secure channel by performing an ad-hoc secure key agreement while the vehicles are on the road. This process is referred to as "pairing."
  • the key agreement protocol should be robust against active attacks such as Man-in-the-Middle (MitM) and image spoofing attacks.
  • VAuth 300 augments the integrity and authenticity of key agreement messages. Integrity and authenticity guarantees that the key agreement messages come unaltered en route from the claimed sender.
  • VAuth 300 discloses: (a) a secure V2V key agreement protocol that binds physical identity and presence to a cryptographic key; (b) security analysis of VAuth protocol to demonstrate its robustness against MitM attacks; (c) an implementation and evaluation of VAuth conducted with real-world vehicles.
  • the attacker's goal is to break the integrity and authenticity of the key agreement scheme between two legitimate vehicles.
  • This application considers both passive and active attackers. Passive attackers merely observe the wireless communication in attempts to launch attacks (e.g., eavesdropping attack). Active attackers may inject, replay, modify, and delete messages in the communication channel.
  • an approach is disclosed that deals with attackers that are co-located with the legitimate entities, i.e., neighboring vehicles traveling along the road.
  • VAuth 300 leverages visual images of the vehicles' contextual information to verify authenticity during a pairing process. Any neighboring pair of vehicles, and only that neighboring pair of vehicles, shares and experiences a unique relative distance (d) and angle (θ) at a specific time that no other vehicles experience (where 0 ≤ θ < 2π). For example, vehicles A and B in FIG. 3 share a relative distance and angle. Note that it is possible for another pair of vehicles (e.g., vehicles B and C) to have their own d and θ relative to each other, but it is impossible for them to have the same d and θ relative to vehicle A.
  • The vehicles prove their authenticity by taking a camera snapshot of each other to present d and θ as proof.
  • the pair of vehicles identify each other as “target” vehicles to pair by initiating periodic beacon messages.
  • the two vehicles exchange beacon messages that contain their identifiers (i.e., license plate number). If the identifiers are not found in each vehicle's local “paired” list, the two vehicles will identify each other as the “target” vehicle to pair.
  • legitimate vehicles A and B may pair using VAuth 300 in the presence of an attacker vehicle M and possibly one or more benign vehicles C.
  • Each vehicle may have a forward-facing camera 104 A, 104 B, 104 M and a rearward facing camera 106 A, 106 B, 106 M.
  • Vehicles A and B may identify each other as targets for V2V communication. Subsequently, the two vehicles will take a picture of each other and exchange the images over the DSRC wireless channel. Specifically, the snapshot taken by vehicle A's rear camera 106 A contains vehicle B's front image, and similarly, the snapshot taken by vehicle B's front camera 104 B contains vehicle A's rear image.
  • The images should share the same relative distance, d. The distance d_A between vehicles A and B as measured by vehicle A using the image including vehicle B should be equal (i.e., within some tolerance) to the distance d_B measured using the image including vehicle A received from vehicle B, and vice versa.
  • Similarly, the angle θ_A between vehicles A and B as measured by vehicle A using the image including vehicle B should be equal (i.e., within some tolerance) to the angle θ_B measured using the image including vehicle A received from vehicle B.
  • This constraint may be expressed as |d_A − d_B| ≤ ε_d and |θ_A − θ_B| ≤ ε_θ for small tolerance bounds ε_d and ε_θ; a minimal check of this form is sketched below.
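The following is a minimal sketch of such a tolerance check; the tolerance values and the flat angle comparison (which ignores wraparound at 2π) are illustrative assumptions:

```python
def copresence_consistent(d_a: float, d_b: float,
                          theta_a: float, theta_b: float,
                          eps_d: float = 0.5, eps_theta: float = 0.1) -> bool:
    """Check that mutually measured distance/angle pairs agree.

    Tolerances (meters, radians) are illustrative; angle wraparound
    at 2*pi is ignored for brevity.
    """
    return abs(d_a - d_b) <= eps_d and abs(theta_a - theta_b) <= eps_theta

print(copresence_consistent(10.2, 10.5, 0.31, 0.29))  # True
```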
  • VAuth 300 depends on the uniqueness of the distances (d_A, d_B) and angles (θ_A, θ_B) of a pair of vehicles at a specific point in time.
  • An attacker (vehicle M illustrated in FIG. 3 ) can prepare a "dictionary" of images offline, for example, when the victim's vehicle (vehicle B) is parked on a street.
  • VAuth 300 prevents such attacks by leveraging the fact that both vehicles' surroundings (e.g., neighboring vehicles, road signs, background objects/views, etc.) should approximately be equal.
  • VAuth 300 requires vehicles A and B to take both front (V_AF and V_BF) and rear (V_AR and V_BR) images as depicted.
  • Each vehicle may therefore compare the images to check if they contain similar surroundings. For example, V_AF and V_BF should share common features since they point in the same direction, and V_AR and V_BR should likewise share common features. If this check fails, the vehicles reject the pairing process.
  • The VAuth 300 protocol includes four phases: (1) synchronization; (2) snapshot; (3) key agreement; and (4) key confirmation. Each phase is discussed in detail below with respect to the algorithm of Table 1, below. The definitions of the variables of the algorithm of Table 1 are included in Table 2.
  • TABLE 1. VAuth Protocol
    Phase 1) Synchronization
    1. A → B: BEACON_A = ID_A
    2. B: checks ID_A against "paired" list; aborts if found.
    3. B → A: BEACON_B = ID_B (request to pair)
    4. A: checks ID_B against "paired" list; aborts if found.
    5. A → B: SYNC_AB
    Phase 2) Snapshot
    6. Take snapshots; A: front (V_AF) and rear (V_AR); B: front (V_BF) and rear (V_BR).
    Phase 3) Key agreement
    7. A → B: C_A = H(g^a ‖ ID_A ‖ V_AF ‖ V_AR)
    8. B → A: C_B = H(g^b ‖ ID_B ‖ V_BF ‖ V_BR)
    9. A → B: D_A = g^a ‖ ID_A ‖ V_AF ‖ V_AR
    10. B → A: D_B = g^b ‖ ID_B ‖ V_BF ‖ V_BR
    Phase 4) Key confirmation
    11.-14. Vehicles A and B verify the decommitments and confirm that both derived the same key (Steps 11-14, discussed below).
  • Each vehicle transmits a periodic beacon message to attempt to initiate the VAuth 300 protocol.
  • The beacon message is simply a broadcast of the vehicle's identifier (i.e., its license plate number, ID_A or ID_B).
  • vehicle A broadcasts its beacon message, BEACON A , and vehicle B receives this message as depicted in Table 1.
  • Upon receiving BEACON_A, vehicle B checks ID_A against its "paired" list. If ID_A is not found in the list, vehicle B sends a request to pair to vehicle A. Similarly, vehicle A also checks the license plate of vehicle B (ID_B) against its "paired" list (Steps 2-4 of Table 1).
  • vehicle A transmits a synchronization message to vehicle B to initiate a synchronized snapshot phase so that both vehicles identify each other as a “target” vehicle to pair with (Step 5 of Table 1).
  • the protocol can be further modified so that if a vehicle receives multiple pairing requests, the vehicle can prioritize the requests using other information sent together with the requests (e.g., GPS-location information to prioritize requests based on the proximity of two vehicles).
  • both vehicle A and vehicle B simultaneously take snapshots of the front and rear views as shown in Step 6 of Table 1.
  • Front and rear images taken by vehicles A and B are referenced as V_AF and V_AR, and V_BF and V_BR, respectively.
  • Synchronized taking of photos may be performed by vehicles A and B coordinating a time at which photos will be taken.
  • FIGS. 4A and 4B illustrate V_AF and V_BF, respectively, for the scenario of FIG. 3 .
  • FIGS. 5A and 5B illustrate V_AR and V_BR for the scenario of FIG. 3 .
  • Vehicles A and B first exchange their commitments (C_A and C_B) and later reveal their decommitments (D_A and D_B).
  • Each vehicle leverages commitments to bind the Diffie-Hellman (DH) public parameters (g^a or g^b) with the vehicle ID (ID_A or ID_B), which is the license plate number, and the physical co-presence via images (V_AF ‖ V_AR or V_BF ‖ V_BR).
  • Steps 7-8 of Table 1 depict the exchanges of commitments. Subsequently, the vehicles exchange decommitments as depicted in Steps 9-10 of Table 1 to disclose their committed information to each other.
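A minimal sketch of such a hash commitment follows; the field encoding and the use of SHA-256 are assumptions, and a real implementation would length-prefix fields rather than join them with a delimiter:

```python
import hashlib

def commit(dh_public: bytes, plate_id: bytes,
           img_front: bytes, img_rear: bytes) -> tuple[bytes, bytes]:
    """Bind the DH public value, license plate ID, and snapshots (Steps 7-10)."""
    decommitment = b"|".join([dh_public, plate_id, img_front, img_rear])
    commitment = hashlib.sha256(decommitment).digest()
    return commitment, decommitment

def verify(commitment: bytes, decommitment: bytes) -> bool:
    """Check that the revealed decommitment hashes back to the commitment."""
    return hashlib.sha256(decommitment).digest() == commitment

C_A, D_A = commit(b"g^a", b"7ABC123", b"<front.jpg>", b"<rear.jpg>")
assert verify(C_A, D_A)  # what vehicle B checks upon receiving D_A
```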
  • Upon receiving the decommitments, each vehicle performs verification.
  • Table 3 depicts the logic of vehicle B verifying the decommitment (D_A) it received from vehicle A. First, vehicle B verifies that D_A indeed hashes to C_A (Lines 2-5 of Table 3). If true, vehicle B finds out which image (front or rear) contains the target vehicle's license plate number (Lines 7-9 of Table 3). For example, V_B should be assigned to V_BF because the image V_BF contains vehicle A's license plate (ID_A), vehicle A being in front of vehicle B.
  • Vehicle B continues by verifying the relative distance and angle of vehicles A and B (Lines 11-15 of Table 3). It does so by computing the distance and angle from the images V_B and V_A.
  • D_VB and θ_VB are the relative distance and angle of vehicle B to the position of vehicle A's license plate (ID_A); D_VA and θ_VA correspond to those of vehicle A to the position of vehicle B's license plate (ID_B). If the pair of relative distances, {D_VB, D_VA}, and angles, {θ_VB, θ_VA}, are not within an error bound, vehicle B aborts the pairing process (Lines 17-24 of Table 3).
  • vehicles A and B perform key confirmation to verify that both cars indeed generated the same key. This is depicted in Steps 11-14 of Table 1.
  • Key confirmation may employ a Message Authentication Code (MAC) computed with the newly established key.
  • the attacker's success probability is estimated starting from a random guessing attack without any previous knowledge to a sophisticated image spoofing attack.
  • A basic attack may be performed by an attacker who is in full control of the wireless channel.
  • the attacker tries to impersonate vehicle A to vehicle B and forges the DH public key.
  • Vehicle A transmits its commitment and decommitment (C_A and D_A) to vehicle B.
  • the attacker may try to launch more sophisticated attacks.
  • Vehicle M selects a prepared image of vehicle B, V_M(d,θ), with the corresponding d and θ, and simply transmits to vehicle B the forged commitment (C_A′) and decommitment (D_A′), which includes V_M(d,θ).
  • the attack may be divided into three cases for security analysis as outlined below.
  • In the first case, the attacker has knowledge of d and θ, and the VAuth 300 protocol does not check for image spoofing attacks.
  • The attacker is assumed to be traveling along with vehicles A and B, and hence capable of determining an estimated distance and angle, d and θ.
  • FIG. 6 illustrates this scenario, where vehicle M attempts to find out the relative distance, d_AB, and the angles, θ_A and θ_B.
  • Vehicle M knows its relative distance to vehicles A and B (d_AM and d_BM) and the relative angles (θ_X, θ_Y, θ_M).
  • Vehicle M computes the distance and angle as shown in Equation 3 and Equation 4; geometrically, d_AB follows from the triangle formed by the three vehicles (by the law of cosines, d_AB² = d_AM² + d_BM² − 2·d_AM·d_BM·cos θ_M). Distances specified as d_xy are defined as the distance between points x and y. In this case, the attacker succeeds with probability 1 (P_CarM = 1).
  • In the second case, VAuth 300 checks for image spoofing attacks.
  • VAuth 300 includes verification steps for image spoofing attacks (Table 1, lines 26-31). Equation 5 depicts that the success probability of the attacker is equivalent to the success probability of succeeding in an image spoofing attack, P spoofing .
  • P_success = P_spoofing   (Equation 5)
  • VAuth 300 leverages commonly seen objects (e.g., common neighboring cars' license plates).
  • Vehicle B searches for neighboring vehicles' license plates from the image pairs {V_AF, V_BF} and {V_AR, V_BR}. If the number of common plates is less than a predefined threshold value, the protocol is aborted.
  • the similarity of other objects and their relative location can also be used to increase the verification accuracy. For example, buildings, road signs, trees, terrain, etc. can also be used.
  • VAuth 300 uses license plate recognition from the images taken by vehicle cameras.
  • VAuth 300 may use OpenALPR, an open-source library for automatic license plate recognition.
  • OpenALPR takes an input image and traverses through eight phases to output recognized license plate numbers, location (corners, width, and height) and confidence level (percentage).
  • OpenALPR implements the following phases.
  • Phase 1 (“Detection Phase”) finds “potential regions” of license plates. Subsequent phases will process all of the potential regions.
  • Phase 2 (“Binarization Phase”) creates multiple black and white images of plate regions to increase recognition accuracy.
  • Phase 3 (“Character Analysis”) finds regions or blobs of license plate number/character sizes from plate regions.
  • Phase 4 (“Plate Edges”) detects possible edges of license plates by detecting Hough lines.
  • Phase 5 (“Deskew”) corrects the rotation and skew of the license plate image.
  • Phases 6 and 7 (“Character Segmentation” and “OCR”) isolate characters of the license plate and perform character recognition and confidence level determination.
  • Phase 8 (“Post Processing”) outputs a list of n potential candidate license plate numbers sorted with their confidence level.
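Driving the pipeline above from Python might look like the following sketch using the OpenALPR bindings; the configuration and runtime-data paths are installation-specific assumptions:

```python
from openalpr import Alpr

alpr = Alpr("us", "/etc/openalpr/openalpr.conf",
            "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("Error loading OpenALPR")

results = alpr.recognize_file("snapshot_front.jpg")
for candidate in results["results"]:
    # Each candidate carries the recognized plate number, a confidence
    # percentage, and the plate's corner coordinates in the image.
    print(candidate["plate"], candidate["confidence"], candidate["coordinates"])

alpr.unload()
```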
  • systems and methods of the disclosure utilize techniques for image rectification and perspective correction in computer vision.
  • The algorithm leverages the ratio of a real-world object's dimensions in meters ("world plane") to pixels ("image plane") using dimensional knowledge of known objects.
  • A calibration image, V_calibration, is taken: a snapshot of the vehicle's license plate captured at a known distance, d_init (e.g., one meter), away from the vehicle. From V_calibration, the height (in pixels), h_init, of the recognized license plate box is obtained.
  • The distance to a license plate in other images can then be computed from the ratio of the height of the recognized license plate, as shown in Equation 6: d_image = d_init × (h_init / h_px), where h_px is the plate height in pixels in the new image.
  • Each car can include its h_init value in the commitment/decommitment messages.
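A sketch of the Equation 6 computation (names are illustrative):

```python
def distance_from_plate_height(h_px: float, h_init_px: float,
                               d_init_m: float = 1.0) -> float:
    """Equation 6: distance scales inversely with the plate's pixel height.

    h_init_px is the plate height in the calibration image V_calibration,
    taken at the known distance d_init_m.
    """
    return d_init_m * h_init_px / h_px

# Example: the plate appears half as tall as in the 1 m calibration
# image, so the vehicle is roughly 2 m away.
print(distance_from_plate_height(h_px=60, h_init_px=120))  # 2.0
```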
  • the angle may be computed using the known distances from the image.
  • the problem of finding the relative angle is exemplified in FIGS. 7A and 7B .
  • The angle can be derived using two distances, d_image and d_shift_m, in meters, as shown in Equation 7; from the right-triangle geometry, θ_image = sin⁻¹(d_shift_m / d_image).
  • θ_image is the angle to the license plate computed from the image, and d_image is the distance from the camera 104 , 106 to the license plate.
  • The value d_shift is the "imaginary" distance in meters that the car would have shifted horizontally if the car were originally on the same line as the camera, i.e., horizontally centered in the camera's field of view.
  • To obtain d_shift_m, the ratio of pixels to meters is found using an object of known dimensions in both meters and pixels; the license plate is again used.
  • h_m is the height of the actual license plate in meters, which is 0.15 meters (for example, California license plates have the dimensions 6″×12″ (0.15 m×0.3 m)). The value h_px is the height in pixels of the license plate in the image. From FIG. 7A , one may also find d_shift_px, which is the shift distance in pixels.
  • From Equation 8, d_shift_m is derived: d_shift_m = d_shift_px × (h_m / h_px).
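Putting Equations 7 and 8 together, under the arcsine reconstruction assumed above:

```python
import math

def azimuth_from_image(d_shift_px: float, h_px: float,
                       d_image_m: float, h_m: float = 0.15) -> float:
    """Derive the relative angle from pixel measurements.

    Equation 8: convert the horizontal pixel shift to meters using the
    known plate height (h_m meters tall, h_px pixels tall).
    Equation 7 (assumed arcsine form): angle of the plate off the
    camera axis, in radians.
    """
    d_shift_m = d_shift_px * (h_m / h_px)
    return math.asin(d_shift_m / d_image_m)

# Example: a plate 120 px tall, shifted 400 px, on a car 2 m away
# gives roughly a 14.5 degree azimuth.
print(math.degrees(azimuth_from_image(400, 120, 2.0)))
```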
  • any type of sensor data may be used to authenticate a vehicle including its relative location to a parent vehicle.
  • LIDAR, RADAR, SONAR, or other data may be used to detect objects within their respective sensing ranges and determine the relative locations, angles, and the like based on that data.
  • any type of sensor data may be used alone to verify location or identity, or multiple types of sensor data may be used together to robustly confirm each other as well as any assertions made by a potential V2V communication source.
  • LIDAR or RADAR data may not be able to detect license plate numbers or a color of a vehicle, but they may be used to generate an object map for vehicles surrounding a parent vehicle.
  • velocity vectors relative to the parent vehicle for each object can be determined.
  • These object maps and/or velocity vector maps can be compared to those obtained based on another vehicle's sensor data to determine whether that other vehicle may be trusted for V2V communication.
  • FIG. 8 is a schematic flow chart diagram illustrating a method 800 for authenticating a vehicle based on camera images, object maps, heading angles, and/or velocity vector maps.
  • the method 800 begins and both vehicles agree to initiate the communication at 802 .
  • Both vehicles capture a camera snapshot of the other vehicle, a surrounding map with RADAR/LIDAR sensors, and their heading angle with a compass at 804.
  • the vehicles check to see if velocity vector mapping is used at 806 . If velocity vector mapping is used as determined at 806 , the vehicles collect the velocity vector map of surrounding objects during a time t at 824 .
  • Both vehicles exchange camera snapshots, object maps, heading angles, and/or velocity vector maps at 808 .
  • Each vehicle extracts the vehicle number, relative distance, and azimuth angle from the received camera image (or other sensor data or object maps) and adds the heading angle difference to the azimuth angle at 810.
  • Each vehicle verifies whether the extracted information matches its own at 812. If the extracted information does not match as determined at 812, authentication fails 822. If it does match, each vehicle counts the number of commonly detected objects from its own map and the received object map from the other vehicle at 814. The number of commonly detected objects is compared to see if it meets a system trust threshold (e.g., a percentage of detected objects or an absolute integer value) at 816.
  • If the number of commonly detected objects does not meet the system trust threshold as determined at 816, authentication fails 822. If it does meet the threshold, and velocity vector matching is not used as determined at 818, authentication is a success 820. If velocity vector matching is used as determined at 818, the vehicles calculate the ground velocity map by adding the vehicle velocity vector to the velocity vector of each commonly detected object at 826. The vehicles then check whether their own ground velocity map approximately equals the received velocity map at 828. If so, authentication succeeds 820; if not, authentication fails 822.
  • FIGS. 9-12 illustrate how various parameters and comparisons may be performed in order to perform the method of FIG. 8 .
  • FIG. 9 is a plan view illustrating relative vehicle positions on a roadway and with nearby objects including other vehicles (vehicle C 906 and vehicle D 908 ) and roadside objects (object E 910 and object F 912 ).
  • the heading angle h A of vehicle A 902 and the heading angle h B of vehicle B 904 are illustrated.
  • a camera image taken by vehicle A 902 is also illustrated with the relative distance between vehicle A 902 and vehicle B 904 (d A ) and the azimuth angle of vehicle B 904 (a B ).
  • FIG. 10 is a plan view illustrating a scene 1002 as illustrated in FIG. 9 , an ADAS object map 1004 of vehicle A 902 , and an ADAS object map 1006 of vehicle B 904 .
  • the object map 1004 of vehicle A 902 includes points or locations detected for object E 910 , vehicle B 904 , vehicle C 906 , and vehicle D 908 .
  • The object map 1004 includes the azimuth angle 1012 of vehicle A 902 , object E points or locations 1020 , vehicle B points or locations 1014 , vehicle C points or locations 1016 , and vehicle D points or locations 1018 .
  • the object map 1006 of vehicle B 904 includes points or locations detected for object F 912 , vehicle A 902 , vehicle C 906 , and vehicle D 908 .
  • The object map 1006 includes the azimuth angle 1024 of vehicle B 904 , vehicle C points or locations 1026 , vehicle D points or locations 1028 , object F points or locations 1032 , and vehicle A points or locations 1022 .
  • the object map 1004 of vehicle A 902 does not include object F 912 .
  • the object map 1006 of vehicle B 904 does not include object E 910 . This illustrates that the object maps 1004 , 1006 might not be perfect matches.
  • FIG. 11 is a plan view illustrating the resulting object maps as well as rotations and translations to attempt to match or overlap the object map 1004 of vehicle A 902 and the object map 1006 of vehicle B 904 .
  • The result of rotating the object map 1006 of vehicle B 904 by h_A − h_B is illustrated at 1102 .
  • the result of translating the object map 1006 of vehicle B 904 by d AB and overlapping the object map 1006 of vehicle B 904 over the object map 1004 of vehicle A 902 is illustrated at 1104 .
  • An object map may be rotated by the difference between the heading angles (e.g., h_A − h_B) and then translated by the distance d_AB between the vehicles or vehicle sensors that were used to create the object maps.
  • The number or percentage of matching objects can then be determined by counting matching and non-matching objects. As long as the number or percentage exceeds a trust threshold, discrepancies may not cause authentication to fail.
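A sketch of this rotate/translate/count step for 2-D point maps follows; the point representation, match radius, and exact transform order are assumptions:

```python
import math

def align_and_count(map_a, map_b, heading_a, heading_b, offset_ab,
                    match_radius=1.0):
    """Rotate map_b by h_A - h_B, translate it by the inter-vehicle
    offset, then count map_b objects that land near an object in map_a."""
    rot = heading_a - heading_b
    cos_r, sin_r = math.cos(rot), math.sin(rot)
    dx, dy = offset_ab
    transformed = [(x * cos_r - y * sin_r + dx,
                    x * sin_r + y * cos_r + dy) for x, y in map_b]
    return sum(
        1 for tx, ty in transformed
        if any(math.hypot(tx - ax, ty - ay) <= match_radius
               for ax, ay in map_a)
    )

map_a = [(5.0, 2.0), (8.0, -1.0), (12.0, 0.5)]    # vehicle A's detections
map_b = [(-5.0, 2.0), (-2.0, -1.0), (2.0, 0.5)]   # vehicle B's detections
print(align_and_count(map_a, map_b, 0.0, 0.0, (10.0, 0.0)))  # 3 matches
```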
  • FIG. 12 is a plan view illustrating velocity vector-based matching.
  • FIG. 12 illustrates a vehicle A relative velocity vector map 1202 , a vehicle B relative velocity vector map 1204 , and a ground velocity vector map 1206 .
  • all maps include vector maps for all objects, for illustrative purposes.
  • the velocity vectors for each object may be compared by adding the velocity vector for a vehicle to the velocity vector for an object and determining whether that is sufficiently similar to the result of adding the velocity vector for the other vehicle to the velocity vector for the same object. This may be done for each commonly detected object.
  • For each commonly detected object X, the velocity of vehicle A (v_A 1208 ) plus the velocity of X relative to vehicle A should equal the velocity of vehicle B (v_B 1210 ) plus the velocity of X relative to vehicle B, since both sums yield the ground velocity of X, according to Equation 9, below.
  • v_A + v_X^A = v_B + v_X^B   (Equation 9), where v_X^A and v_X^B denote the velocity of object X relative to vehicles A and B, respectively.
  • The ground velocity vectors are illustrated in the ground velocity vector map 1206 , including the vehicle A ground velocity vector 1212 , the vehicle B ground velocity vector 1214 , the vehicle C ground velocity vector 1216 , the vehicle D ground velocity vector 1218 , the object E ground velocity vector 1220 , and the object F ground velocity vector 1222 .
  • The relative velocities of various vehicles and objects as determined by vehicle A 902 are illustrated in the vehicle A relative velocity vector map 1202 as dotted lines.
  • The relative velocities of various vehicles and objects as determined by vehicle B 904 are illustrated in the vehicle B relative velocity vector map 1204 .
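A sketch of the Equation 9 cross-check for a single commonly detected object; the vector representation and tolerance are assumptions:

```python
def ground_velocities_agree(v_a, v_b, rel_a, rel_b, tol=0.5):
    """Equation 9: v_A + v_X^A should equal v_B + v_X^B for object X.

    v_a, v_b: (vx, vy) ground velocities of vehicles A and B;
    rel_a, rel_b: X's velocity relative to A and to B, respectively.
    """
    gx_a, gy_a = v_a[0] + rel_a[0], v_a[1] + rel_a[1]
    gx_b, gy_b = v_b[0] + rel_b[0], v_b[1] + rel_b[1]
    return abs(gx_a - gx_b) <= tol and abs(gy_a - gy_b) <= tol

# Object X travels at 20 m/s ground speed; A drives 25 m/s and sees X
# receding at -5 m/s, B drives 22 m/s and sees X receding at -2 m/s.
print(ground_velocities_agree((25, 0), (22, 0), (-5, 0), (-2, 0)))  # True
```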
  • FIG. 13 illustrates a schematic flow chart diagram of a method 1300 for authenticating vehicle-to-vehicle communication.
  • the method 1300 may be performed by any suitable computing device, including for example a controller 102 of a vehicle.
  • the method 1300 begins and a computing device receives sensor data from a first vehicle at 1302 .
  • the computing device receives secondary sensor data from a second vehicle at 1304 .
  • the computing device extracts, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle to the first vehicle, or a common object identified by the sensor data and the secondary sensor data at 1306 .
  • the computing device determines whether the authentication satisfies a trust threshold of the first vehicle at 1308 .
  • FIG. 14 illustrates a schematic flow chart diagram of a method 1400 for authenticating vehicle-to-vehicle communication.
  • the method 1400 may be performed by any suitable computing device, including for example a controller 102 of a vehicle.
  • the method 1400 begins and a computing device receives sensor data from a first vehicle comprising a camera image and ranging data comprising one or more of RADAR data, LIDAR data, SONAR data, or ultrasound data at 1402 .
  • the computing device receives secondary sensor data from a second vehicle comprising a secondary camera image and secondary ranging data comprising one or more of RADAR data, LIDAR data, SONAR data, or ultrasound data at 1404 .
  • The computing device extracts an authentication point from the camera image, wherein the authentication point comprises one or more of: a distance of the second vehicle relative to the first vehicle, an azimuth angle between the second vehicle and the first vehicle, or a vehicle identification of the second vehicle at 1406.
  • the computing device verifies the authentication point based on ranging data at 1408 .
  • the computing device exchanges a heading angle of the first vehicle or the second vehicle to compensate for an angle discrepancy between the sensor data and the secondary sensor data at 1410 .
  • the computing device extracts, based on the authentication point, the sensor data, and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle to the first vehicle, or a common object identified by the sensor data and the secondary sensor data at 1412 .
  • the computing device determines whether the authentication satisfies a trust threshold of the first vehicle at 1414 .
  • FIG. 15 illustrates a schematic flow chart diagram of a method 1500 for authenticating vehicle-to-vehicle communication.
  • the method 1500 may be performed by any suitable computing device, including for example a controller 102 of a vehicle.
  • the method 1500 begins and a computing device receives an object map captured by one or more sensors of a first vehicle at 1502 .
  • the computing device receives a secondary object map captured by one or more sensors of a second vehicle at 1504.
  • the computing device aligns the secondary object map with the object map to generate an overlaid object map by rotating and/or translating one of the object map or the secondary object map at 1506 .
  • the computing device determines whether there is one or more common objects identified in the object map and the secondary object map based on the overlaid object map at 1508 .
  • the computing device determines whether the one or more common objects satisfies a trust threshold for the first vehicle at 1510 .
  • the computing device authenticates communication between the first vehicle and the second vehicle if the one or more common objects satisfies the trust threshold at 1512 .
  • sensors used in addition to, or as an alternative to, images may help reduce potential attacks.
  • an advanced attacker may be able to impersonate a vehicle's identity by preparing a dictionary of images taken from various locations offline and selecting an image from the dictionary to pretend to be other vehicles around the victim vehicle.
  • an attacker might take a snapshot of a victim vehicle, crop out the victim vehicle from the snapshot, and superimpose the victim vehicle into the appropriate position to mislead the victim into believing that the image was taken from another vehicle. Due to recent advances in computer vision, more camera image modification techniques will become available to an attacker.
  • In order to overcome the potential weaknesses in the previous camera image-based V2V authentication, other ADAS sensors that are rapidly becoming prevalent in modern vehicles may be leveraged. These other ADAS sensors (which include RADAR, LIDAR, and other sensors) are able to detect surrounding objects (stationary and/or moving) and can be used to provide an enhanced authentication procedure, such as that shown in the process flow 800 of FIG. 8.
  • Embodiments may still check for the vehicle identity (vehicle number) captured from a camera snapshot. Additionally, a vehicle extracts the relative distance, azimuth angle, and vehicle number from the camera snapshot of the other vehicle. However, to prevent the image modification attack, this information is also cross-checked with the information from other ADAS sensors such as RADAR and LIDAR. Although these sensors do not provide the vehicle number, they still provide the relative distance and azimuth angle of surrounding objects, which should be identical to the information from the camera snapshot within an error boundary. In this process, the heading angle of each vehicle may also be exchanged to compensate for the angle discrepancy when the two vehicles are not aligned (e.g., on a curved road).
  • the surrounding objects can be stationary (e.g. landmarks, trees, buildings, road signs) or moving (e.g. other surrounding vehicles, and potentially pedestrians).
  • the vehicle A object map 1004 is illustrated and the vehicle B object map 1006 is illustrated.
  • an algorithm rotates and/or translates one of the two object maps (the vehicle B object map 1006 as illustrated in FIG. 11 ) to determine if there is an acceptable match between any of the detected objects.
  • the system trust threshold can be determined empirically from the surrounding environment. For example, it will be higher in an urban area with high object density than in a rural area. Other implementations may yield the trust decision to each application layer rather than to the system layer. As such, for a given number of commonly detected objects in the two vehicles, a security-critical application with a stricter trust boundary may decline to authenticate the communication, whereas a non-critical application may be satisfied by it.
  • the authentication system can also use the velocity vectors of surrounding objects measured over a short duration.
  • a vehicle can calculate the ground velocity of surrounding objects by adding its own velocity to each object's relative velocity vector. Comparing the ground velocity vectors of commonly detected objects in the two vehicles will make it even harder for an attacker to impersonate a vehicle in V2V communication. Compared to authentication based on images only, using the additional ADAS sensor data makes it even harder for an attacker to bypass the V2V authentication system.
  • the authentication may comprise an equivalent detected ground speed of a common object as identified by the sensor data and the secondary sensor data.
  • Example 1 is a method that includes receiving sensor data from a vehicle A and receiving sensor data from a vehicle B.
  • the sensor data includes one or more of RADAR, LIDAR, image, SONAR, ultrasound, or other sensor data.
  • the method includes verifying the vehicle B is proximal to the vehicle A based on the sensor data.
  • the method may include verifying a relative position (distance and/or angle) of the vehicle B to the vehicle A based on the sensor data from a vehicle A and the sensor data from a vehicle B.
  • the method may include verifying relative positions of objects in a vehicle A object map determined based on the sensor data from a vehicle A and in a vehicle B object map determined based on the sensor data from a vehicle B.
  • the method may include verifying determined ground speeds of objects detected based on the sensor data from a vehicle A and the sensor data from a vehicle B.
  • object maps or vector maps may be exchanged so that each vehicle does not have to determine the object map or vector map for the other vehicle.
  • Example 2 the verifying of the method of Example 1 may include determining that a similarity between an object map, velocity vector, or the like is within a desired threshold.
  • Example 3 the method of any of Examples 1-2 includes comparing relative positions determined based on one type of sensor data (e.g., image data) to a relative position based on another type of sensor data (e.g., RADAR or LIDAR) and authenticating if the relative positions are similar enough.
  • Example 4 the method in any of Examples 1-3 further includes: determining a vehicle A object map based on the vehicle A sensor data; determining a vehicle B object map based on the vehicle B sensor data; and verifying that the vehicle A object map and the vehicle B object map are sufficiently similar.
  • the authenticating the vehicle B includes authenticating in response to verifying that the vehicle A object map and the vehicle B object map are sufficiently similar.
  • Example 5 the method of Example 4 includes rotating and/or translating at least one of the vehicle A object map or the vehicle B object map to accommodate different orientations or locations of the vehicle A and the vehicle B.
  • Example 6 the verifying that the vehicle A object map and the vehicle B object map are sufficiently similar in any of Examples 4-5 includes determining that a number of commonly detected objects in the vehicle A object map and the vehicle B object map is higher than a system trust threshold.
  • Example 7 the method of any of Examples 1-6 includes: determining a ground velocity of one or more objects near vehicle A based on the vehicle A sensor data; determining a ground velocity of one or more objects near vehicle B based on the vehicle B sensor data; and verifying that the ground velocity of one or more objects near vehicle A and the ground velocity of one or more objects near vehicle B are sufficiently similar.
  • Example 8 is a method that includes: receiving a first image from a camera of a vehicle A; receiving a second image from a vehicle B; verifying that a relative position of the vehicle B to the vehicle A according to the first image corresponds to a relative position of the vehicle A to the vehicle B according to the second image; and in response to the verifying, authenticating the vehicle B.
  • Example 9 the verifying that the relative position of the vehicle B to the vehicle A according to the first image corresponds to the relative position of the vehicle A to the vehicle B according to the second image of Example 8 includes identifying, by the controller of the vehicle A, an image of the vehicle B in the first image and identifying, by the controller of the vehicle A, an image of the vehicle A in the second image.
  • the method includes determining, by the controller of the vehicle A, at least one of a first distance to the vehicle B from the vehicle A and a first angle to the vehicle B from the vehicle A according to a location of the image of the vehicle B in the first image.
  • the method further includes determining, by the controller of the vehicle A, at least one of a second distance to the vehicle A from the vehicle B and a second angle to the vehicle A from the vehicle B according to a location of the image of the vehicle A in the second image.
  • the method further includes determining, by the controller of the vehicle A, at least one of (a) the second distance is within a predetermined distance tolerance from the first distance and (b) the second angle is within a predetermined angle tolerance from the first angle.
  • Example 10 the at least one of the first distance to the vehicle B from the vehicle A and the first angle to the vehicle B from the vehicle A according to a location of the image of the vehicle B in the first image in any of Examples 8-9 includes both of the first distance to the vehicle B from the vehicle A and the first angle to the vehicle B from the vehicle A according to a location of the image of the vehicle B in the first image.
  • the at least one of the second distance to the vehicle A from the vehicle B and the second angle to the vehicle A from the vehicle B according to the location of the image of the vehicle A in the second image includes both of the second distance to the vehicle A from the vehicle B and the second angle to the vehicle A from the vehicle B according to the location of the image of the vehicle A in the second image.
  • Example 11 the receiving the first image from the camera of the vehicle A in any of Examples 8-10 includes receiving a first forward image from a forward facing camera mounted to the vehicle A and receiving a first rearward image from a rearward facing camera mounted to the vehicle A.
  • The receiving the second image from the vehicle B includes receiving a second forward image from a forward facing camera mounted to the vehicle B and receiving a second rearward image from a rearward facing camera mounted to the vehicle B.
  • Example 12 the identifying the image of the vehicle B in the first image in any of Examples 8-11 includes identifying a license plate number of the vehicle B in the first image and wherein identifying the image of the vehicle A in the second image includes identifying a license plate number of the vehicle A in the second image.
  • Example 13 the method of Example 12 further includes receiving, by the controller of the vehicle A from the vehicle B, a message including the license plate number of the vehicle B.
  • Example 14 the determining the at least one of the first distance to the vehicle B from the vehicle A and the first angle to the vehicle B from the vehicle A according to the location of the image of the vehicle B in the first image in any of Examples 12-13 includes: determining a first dimension (h_image1) of a license plate of the vehicle B in the first image in pixels; and determining the first distance (d_1) as equal to d_init*h_init/h_image1, where d_init is a calibration distance and h_init is a test dimension in pixels of a test license plate positioned at d_init from a test camera.
  • Determining the at least one of the second distance to the vehicle A from the vehicle B and the second angle to the vehicle A from the vehicle B according to the location of the image of the vehicle A in the second image includes: determining a second dimension (h_image2) of a license plate of the vehicle A in the second image in pixels; and determining the second distance (d_2) as equal to d_init*h_init/h_image2.
  • Example 15 the determining the at least one of the first distance to the vehicle B from the vehicle A and the first angle to the vehicle B from the vehicle A according to the location of the image of the vehicle B in the first image of Example 14 includes: determining a first pixel offset (d_shift1) of the license plate of the vehicle B in the first image from a center of the first image in pixels; determining a first distance offset (d_shift1m) as equal to h_m*d_shift1/h_image1, where h_m is a measured dimension of a test license plate; and determining the first angle as equal to Arccos(d_1/d_shift1m).
  • Determining the at least one of the second distance to the vehicle A from the vehicle B and the second angle to the vehicle A from the vehicle B according to the location of the image of the vehicle A in the second image includes: determining a second pixel offset (d_shift2) of the license plate of the vehicle A in the second image from a center of the second image in pixels; determining a second distance offset (d_shift2m) as equal to h_m*d_shift2/h_image2; and determining the second angle as equal to Arccos(d_2/d_shift2m).
  • Example 16 the camera of the vehicle A in any of Examples 8-15 is a first camera.
  • the method further includes authenticating the vehicle B in response to determining that one or more background objects in the second image correspond to objects in an image received from a second camera mounted to the vehicle A and facing in an opposite direction to the first camera.
  • Example 17 the authenticating the vehicle B in any of Examples 8-16 includes performing Diffie-Hellman key exchange between the vehicle A and the vehicle B.
  • Example 18 the method of any of Examples 8-17 further includes: receiving vehicle A additional sensor data including one or more of RADAR and LIDAR data; and receiving vehicle B additional sensor data including one or more of RADAR and LIDAR data from the vehicle B.
  • Example 19 the method of Example 18 includes verifying that the relative position of the vehicle B to the vehicle A according to the vehicle A additional sensor data is similar to a relative position of the vehicle A to the vehicle B according to the vehicle B additional sensor data, and wherein the relative position is within an error boundary of the relative position determined based on the first image and the second image.
  • Example 20 the method in any of Examples 18-19 includes: determining a vehicle A object map based on the vehicle A additional sensor data; determining a vehicle B object map based on the vehicle B additional sensor data; and verifying that the vehicle A object map and the vehicle B object map are sufficiently similar.
  • the authenticating the vehicle B includes authenticating in response to verifying that the vehicle A object map and the vehicle B object map are sufficiently similar.
  • Example 21 the verifying that the vehicle A object map and the vehicle B object map are sufficiently similar of Example 20 includes rotating and/or translating at least one of the vehicle A object map or the vehicle B object map to accommodate different orientations or locations of the vehicle A and the vehicle B.
  • Example 22 the verifying that the vehicle A object map and the vehicle B object map are sufficiently similar as in any of Examples 20-21 includes determining that a number of commonly detected objects in the vehicle A object map and the vehicle B object map is higher than a system trust threshold.
  • Example 23 the method as in any of Examples 18-22 further includes: determining a ground velocity of one or more objects near vehicle A based on the vehicle A additional sensor data; determining a ground velocity of one or more objects near vehicle B based on the vehicle B additional sensor data; and verifying that the ground velocity of one or more objects near vehicle A and the ground velocity of one or more objects near vehicle B are sufficiently similar.
  • Example 24 is computer readable storage media storing instructions that, when executed by one or more processors, cause the one or more processors to implement a method or realize a system or apparatus as in any of Examples 1-23.
  • Example 25 is a method for authenticating vehicle-to-vehicle communication.
  • the method includes: receiving sensor data from a first vehicle and secondary sensor data from a second vehicle.
  • the method includes extracting, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle to the first vehicle or a common object identified by the sensor data and the secondary sensor data.
  • the method includes determining whether the authentication satisfies a trust threshold of the first vehicle.
  • Example 26 is a method as in Example 25, wherein the authentication comprises an equivalent detected ground speed of a common object identified by the sensor data and the secondary sensor data.
  • Example 27 is a method as in any of Examples 25-26, wherein: the sensor data comprises an image received from a camera of the first vehicle; the sensor data further comprises ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the first vehicle; the secondary sensor data comprises a secondary image received from a camera of the second vehicle; and the secondary sensor data further comprises secondary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the second vehicle.
  • Example 28 is a method as in any of Examples 25-27, further comprising extracting an authentication point from the camera image, wherein the authentication point comprises one or more of: a distance of the second vehicle relative to the first vehicle; an azimuth angle between the second vehicle and the first vehicle; or a vehicle identification of the second vehicle.
  • Example 29 is a method as in any of Examples 25-28, further comprising verifying the authentication point based on the ranging data, wherein verifying the authentication point comprises: extracting a ranging authentication point from the ranging data.
  • the ranging authentication point comprises one or more of: the distance of the second vehicle relative to the first vehicle; or the azimuth angle between the second vehicle and the first vehicle.
  • the method further comprises cross-checking the authentication point extracted from the image with a corresponding ranging authentication point; and verifying the authentication point is equal to the corresponding ranging authentication point within an error boundary.
  • Example 30 is a method as in any of Examples 25-29, further comprising exchanging a heading angle of the first vehicle or the second vehicle to compensate for an angle discrepancy between the sensor data and the secondary sensor data.
  • Example 31 is a method as in any of Examples 25-30, further comprising: receiving an object map captured by one or more sensors of the first vehicle; receiving a secondary object map captured by one or more sensors of the second vehicle; aligning the secondary object map with the object map to generate an overlaid object map by rotating and/or translating one of the object map or the secondary object map; and determining whether there is one or more object matches between the object map and the secondary object map based on the overlaid object map.
  • Example 32 is a method as in any of Examples 25-31, further comprising: determining whether the one or more object matches between the object map and the secondary object map satisfies a trust threshold for an application of the first vehicle; authenticating communication between the application of the first vehicle and the second vehicle if the one or more object matches satisfies the trust threshold for the application; and denying communication between the application of the first vehicle and the second vehicle if the one or more object matches does not satisfy the trust threshold for the application.
  • Example 33 is a method as in any of Examples 25-32, further comprising: determining whether the one or more object matches between the object map and the secondary object map satisfies a global trust threshold for the first vehicle; authenticating communication between the first vehicle and the second vehicle if the one or more object matches satisfies the global trust threshold.
  • Example 34 is a method as in any of Examples 25-33, wherein determining whether the authentication satisfies the trust threshold of the first vehicle comprises verifying one or more of: an identity of the second vehicle; whether the second vehicle is within a close proximity to the first vehicle; a presence of one or more common objects sensed by the sensor data and the secondary sensor data; or equivalent determined ground speeds for one or more common objects based on the sensor data and the secondary sensor data.
  • Example 35 is a method as in any of Examples 25-34, further comprising permitting communication between the first vehicle and the second vehicle if the authentication satisfies the trust threshold of the first vehicle.
  • Example 36 is a system for authenticating vehicle-to-vehicle communication.
  • the system includes: a first vehicle comprising one or more sensors providing sensor data; and a vehicle controller in communication with the one or more sensors and configured to receive secondary sensor data from a second vehicle.
  • the vehicle controller comprises non-transitory computer readable storage media storing instructions that, when executed by one or more processors, cause the one or more processors to: receive sensor data from the one or more sensors of the first vehicle; receive secondary sensor data from the second vehicle; extract, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle relative to the first vehicle; a common object identified by the sensor data and the secondary sensor data; or an equivalent detected ground speed of a common object identified by the sensor data and the secondary sensor data.
  • the instructions further cause the one or more processors to determine whether the authentication satisfies a trust threshold of the first vehicle.
  • Example 37 is a system as in Example 36, wherein: the sensor data comprises an image received from a camera of the first vehicle; the sensor data further comprises ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the first vehicle; the secondary sensor data comprises a secondary image received from a camera of the second vehicle; and the secondary sensor data further comprises secondary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the second vehicle.
  • Example 38 is a system as in any of Examples 36-37, wherein the instructions further cause the one or more processors to extract an authentication point from the camera image, wherein the authentication point comprises one or more of: a distance of the second vehicle relative to the first vehicle; an azimuth angle between the second vehicle and the first vehicle; or a vehicle identification of the second vehicle.
  • Example 39 is a system as in any of Examples 36-38, wherein the instructions further cause the one or more processors to verify the authentication point based on the ranging data, wherein verifying the authentication point comprises: extracting a ranging authentication point from the ranging data, wherein the ranging authentication point comprises one or more of: the distance of the second vehicle relative to the first vehicle; or the azimuth angle between the second vehicle and the first vehicle.
  • the instructions further cause the one or more processors to: cross-check the authentication point extracted from the image with a corresponding ranging authentication point; and verify the authentication point is equal to the corresponding ranging authentication point within an error boundary.
  • Example 40 is a system as in any of Examples 36-39, wherein the instructions further cause the one or more processors to exchange a heading angle of the first vehicle or the second vehicle to compensate for an angle discrepancy between the sensor data and the secondary sensor data.
  • Example 41 is non-transitory computer readable storage media storing instructions that, when executed by one or more processors, cause the one or more processors to: receive sensor data from one or more sensors of a first vehicle; receive secondary sensor data from a second vehicle; extract, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle relative to the first vehicle; a common object identified by the sensor data and the secondary sensor data; or an equivalent detected ground speed of a common object identified by the sensor data and the secondary sensor data.
  • the instructions further cause the one or more processors to determine whether the authentication satisfies a trust threshold of the first vehicle.
  • Example 42 is non-transitory computer readable storage media as in Example 41, wherein: the sensor data comprises an image received from a camera of the first vehicle; the sensor data further comprises ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the first vehicle; the secondary sensor data comprises a secondary image received from a camera of the second vehicle; and the secondary sensor data further comprises secondary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the second vehicle.
  • Example 43 is non-transitory computer readable storage media as in any of Examples 41-42, wherein the instructions further cause the one or more processors to extract an authentication point from the camera image, wherein the authentication point comprises one or more of: a distance of the second vehicle relative to the first vehicle; an azimuth angle between the second vehicle and the first vehicle; or a vehicle identification of the second vehicle.
  • Example 44 is non-transitory computer readable storage media as in any of Examples 41-43, wherein the instructions further cause the one or more processors to verify the authentication point based on the ranging data, wherein verifying the authentication point comprises: extracting a ranging authentication point from the ranging data, wherein the ranging authentication point comprises one or more of: the distance of the second vehicle relative to the first vehicle; or the azimuth angle between the second vehicle and the first vehicle.
  • the instructions further cause the one or more processors to: cross-check the authentication point extracted from the image with a corresponding ranging authentication point; and verify the authentication point is equal to the corresponding ranging authentication point within an error boundary.
  • Example 45 is a system or device that includes means for implementing a method or realizing a system or apparatus as in any of Examples 1-44.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Game Theory and Decision Science (AREA)
  • Medical Informatics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

Systems, methods, and devices for authenticating vehicle-to-vehicle communication are disclosed. A method includes receiving sensor data from a first vehicle and receiving secondary sensor data from a second vehicle. The method includes extracting, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle to the first vehicle or a common object identified by the sensor data and the secondary sensor data. The method includes determining whether the authentication satisfies a trust threshold of the first vehicle.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application No. 62/550,312 filed Aug. 25, 2017, and is hereby incorporated by reference herein in its entirety, including but not limited to those portions that specifically appear hereinafter, the incorporation by reference being made with the following exception: In the event that any portion of the above-referenced provisional application is inconsistent with this application, this application supersedes the above-referenced provisional application.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not applicable.
TECHNICAL FIELD
This disclosure relates to performing authenticated vehicle-to-vehicle communications.
BACKGROUND
Automobiles provide a significant portion of transportation for commercial, government, and private entities. Autonomous vehicles and driving assistance systems are currently being developed and deployed to provide safety, reduce an amount of user input required, or even eliminate user involvement entirely. For example, some driving assistance systems, such as crash avoidance systems, may monitor driving, positions, and a velocity of the vehicle and other objects while a human is driving. When the system detects that a crash or impact is imminent, the crash avoidance system may intervene and apply a brake, steer the vehicle, or perform other avoidance or safety maneuvers. As another example, autonomous vehicles may drive and navigate a vehicle with little or no user input. Autonomous vehicles may further communicate with other autonomous vehicles to aid in crash avoidance and safety maneuvers. Efficient authentication of a vehicle's identity may be beneficial in communication between two or more autonomous vehicles.
Advances in vehicular technology include vehicle-to-vehicle (V2V) communications that may require authorization to ensure safety and security for a vehicle owner or driver. Inter-vehicle communication, among many other applications, is known for improving safety features for each vehicle. In V2V communications, the authenticity of a vehicle identity is verified by its digital certificate. Considering its safety-critical implications in many V2V applications, the validity of the digital certificate may be secured through a public-key infrastructure (PKI) system. However, in some implementations, it is nearly impossible to perfectly prevent any security breach in PKI. As seen in past decades, the certificate authority (CA) can be compromised to issue an unauthorized certificate, or even benign CAs can mistakenly issue a valid certificate to unauthorized parties. Further, it may be possible to directly capture the signing key for a certificate if it is not properly stored in a secure place.
BRIEF DESCRIPTION OF THE DRAWINGS
In order that the advantages of the disclosure will be readily understood, a more particular description of the disclosure briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the disclosure and are not therefore to be considered limiting of its scope, the disclosure will be described and explained with additional specificity and detail through use of the accompanying drawings, in which:
FIG. 1 is a schematic block diagram of an autonomous vehicle or assisted driving system in accordance with the teachings and principles of the disclosure;
FIG. 2 is a schematic block diagram of an example computing device suitable for implementing methods in accordance with the teachings and principles of the disclosure;
FIG. 3 is a schematic diagram illustrating relative positions of vehicles performing authentication in accordance with the teachings and principles of the disclosure;
FIGS. 4A, 4B, 5A and 5B are diagrams of images that may be processed in accordance with the teachings and principles of the disclosure;
FIG. 6 is a diagram illustrating distances and angles between vehicles performing authentication in accordance with the teachings and principles of the disclosure;
FIG. 7A is a diagram illustrating distances and angles measured from a vehicle camera in accordance with the teachings and principles of the disclosure;
FIG. 7B is a diagram illustrating the location of a vehicle license plate in an image in accordance with the teachings and principles of the disclosure;
FIG. 8 is a schematic block diagram illustrating a process flow for authenticating a vehicle in accordance with the teachings and principles of the disclosure;
FIG. 9 is a plan view illustrating vehicle positions when an image is captured in accordance with the teachings and principles of the disclosure;
FIG. 10 is a plan view illustrating object maps captured by nearby vehicles in accordance with the teachings and principles of the disclosure;
FIG. 11 is a plan view illustrating translation and rotation to determine whether object maps captured by nearby vehicles are similar in accordance with the teachings and principles of the disclosure;
FIG. 12 is a plan view illustrating velocity vector maps of nearby vehicles for use in authentication in accordance with the teachings and principles of the disclosure;
FIG. 13 is a schematic flow chart diagram illustrating a method for authenticating a vehicle, in accordance with the teachings and principles of the disclosure;
FIG. 14 is a schematic flow chart diagram illustrating a method for authenticating a vehicle, in accordance with the teachings and principles of the disclosure; and
FIG. 15 is a schematic flow chart diagram illustrating a method for authenticating a vehicle, in accordance with the teachings and principles of the disclosure.
DETAILED DESCRIPTION
Advances in vehicular technology include vehicle-to-vehicle (V2V) communications that may require authorization to ensure safety and security for a vehicle owner or driver. Inter-vehicle communication, among many other applications, is known for improving safety features for each vehicle. In V2V communications, the authenticity of a vehicle identity is verified by its digital certificate. Considering its safety-critical implications in many V2V applications, the validity of the digital certificate may be secured through a public-key infrastructure (PKI) system. However, in some implementations, it is nearly impossible to perfectly prevent any security breach in PKI. As seen in past decades, the certificate authority (CA) can be compromised to issue an unauthorized certificate, or even benign CAs can mistakenly issue a valid certificate to unauthorized parties. Further, it may be possible to directly capture the signing key for a certificate if it is not properly stored in a secure place.
While V2V communications are intended to increase the security and safety of vehicles, they also open up potential threats for adversaries. An attacker can launch different types of attacks to benefit themselves or to maliciously cause damage to victims. For example, attackers may transmit inaccurate information to influence neighboring vehicles, diverting other vehicles on the path to gain a free path, or may forge their sensor information to circumvent liability for accidents. Platooning vehicles are also vulnerable to collision induction attacks. In addition, Sybil attacks are also possible by using multiple non-existing identities or pseudonyms. Hence, securing inter-vehicular communications is of critical significance and may save users from life-threatening attacks.
In efforts to secure the V2V communications, Dedicated Short-Range Communications (DSRC), the de facto V2V communication standard, leverages public-key infrastructure (PKI) systems to authenticate public keys of vehicles. While this solution aims to provide sufficient security guarantees, many attacks are still possible. One of the main problems results from location spoofing impersonation attacks. In these attacks, an inside attacker (i.e., a malicious vehicle with a correct certificate) transmits messages with forged locations. For example, an attacker creates a "ghost vehicle" by forging its location to victim vehicles. Similarly, a malicious vehicle in a platoon may impersonate another vehicle's position by forging its location within the platoon.
Applicant recognizes that supplementary mechanisms may be supplied in addition to PKI authentication for V2V communication. Such supplementary mechanisms may leverage camera sensors that are already prevalent in many autonomous or driving assistance vehicles. Such supplementary mechanisms may include two vehicles taking a snapshot of each other, exchanging the snapshots, and verifying each other's identity by extracting, for example, a vehicle number, a relative distance between the vehicles, an azimuth angle from the received image, and so forth.
However, an advanced attacker may still be able to impersonate a vehicle's identity by preparing a dictionary of images taken from various locations offline and selecting an image from the dictionary to pretend to be the other vehicle around the victim vehicle. Further, an attacking vehicle might take a snapshot of the victim vehicle, crop out the victim vehicle from the snapshot, and superimpose the victim vehicle into the appropriate position to mislead the victim vehicle into believing that the image was taken by the attacking vehicle. Due to recent advances in computer vision, new camera image modification techniques may be available to an attacking vehicle.
Applicant herein presents systems, methods, and devices for vehicle-to-vehicle authentication and communication that overcome weaknesses presently known in PKI systems and other supplementary mechanisms that may be prone to attack by advanced vehicle systems. In an embodiment of the disclosure, a method for authenticating vehicle-to-vehicle communication is provided. The method includes receiving sensor data from a first vehicle and receiving secondary sensor data from a second vehicle. The method includes extracting, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle relative to the first vehicle, or a common object identified by the sensor data and the secondary sensor data. The method further includes determining whether the authentication satisfies a trust threshold for the first vehicle. The method may further include permitting communication between the first vehicle and the second vehicle if the authentication satisfies the trust threshold.
It will be readily understood that the components of the present disclosure, as generally described and illustrated in the Figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the disclosure, as represented in the Figures, is not intended to limit the scope of the disclosure, as claimed, but is merely representative of certain examples of presently contemplated embodiments in accordance with the disclosure. The presently described embodiments will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout.
Embodiments in accordance with the present disclosure may be embodied as an apparatus, method, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, the present disclosure may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device. In selected embodiments, a computer-readable medium may comprise any non-transitory medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on a computer system as a stand-alone software package, on a stand-alone hardware unit, partly on a remote computer spaced some distance from the computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
The present disclosure is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions or code. These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Referring to FIG. 1, a controller 102 may be housed within a vehicle. The vehicle may include any vehicle known in the art. The vehicle may have all of the structures and features of any vehicle known in the art including, wheels, a drive train coupled to the wheels, an engine coupled to the drive train, a steering system, a braking system, and other systems known in the art to be included in a vehicle.
As discussed in greater detail herein, the controller 102 may perform autonomous navigation and collision avoidance. In particular, the controller 102 may perform authenticated V2V communication in accordance with an embodiment of the present disclosure.
The controller 102 may be coupled to a forward facing camera 104 and a rearward facing camera 106. The forward facing camera 104 may be mounted on a vehicle with a field of view facing forward and the rearward facing camera 106 may be mounted to the vehicle having the field of view thereof facing in a rearward direction. The rearward facing camera 106 may be a conventional back-up camera or a separate camera having a different field of view. The cameras 104, 106 may be used for performing authentication methods as disclosed herein and may additionally be used for performing obstacle detection.
The controller 102 may be coupled to one or more other sensing devices 108, which may include microphones or other sensors useful for detecting obstacles, such as RADAR, LIDAR, SONAR, ultrasound, and the like. For example, a plurality of different sensors may be attached to an advanced driver-assistance systems (ADAS) bus or system. Any of these available sensors may be available to provide sensor data for purposes of assisted driving, automated driving, and/or authenticating vehicle-to-vehicle (V2V) communication.
The controller 102 may execute a V2V module 110 a. The V2V module 110 a includes a location verification module 112 a. The location verification module 112 a verifies that another vehicle seeking to communicate with the controller 102 using V2V communication is in fact a vehicle in proximity to the controller 102. In particular, the location verification module 112 a verifies the location of the other vehicle by exchanging images or other sensor data (e.g., frames of RADAR, LIDAR, SONAR, ultrasound, or other sensor data), object maps, and/or velocity maps, as discussed in greater detail below.
The V2V module 110 a may further include an authentication module 112 b. The authentication module 112 b performs key exchange, such as using the Diffie-Hellman approach, public key encryption, or some other authentication technique. The authentication module 112 b may further handle performing secured communication between the controller and the other vehicle. The manner in which authentication and secured communication is performed is described in greater detail below.
The controller 102 may further execute an obstacle identification module 110 b, collision prediction module 110 c, and decision module 110 d. The obstacle identification module 110 b may analyze one or more image streams from the cameras 104, 106, or another camera, and identify potential obstacles, including people, animals, vehicles, buildings, curbs, and other objects and structures. The obstacle identification module 110 b may additionally identify potential obstacles from outputs of the sensing devices 108, such as using data from a LIDAR, RADAR, ultrasound, or other sensing system.
The collision prediction module 110 c predicts which obstacle images are likely to collide with the vehicle based on its current trajectory or current intended path. The decision module 110 d may make a decision to stop, accelerate, turn, etc. in order to avoid obstacles. The manner in which the collision prediction module 110 c predicts potential collisions and the manner in which the decision module 110 d takes action to avoid potential collisions may be according to any method or system known in the art of autonomous vehicles.
The decision module 110 d may control the trajectory of the vehicle by actuating one or more actuators 114 controlling the direction and speed of the vehicle. For example, the actuators 114 may include a steering actuator 116 a, an accelerator actuator 116 b, and a brake actuator 116 c. The configuration of the actuators 116 a-116 c may be according to any implementation of such actuators known in the art of autonomous vehicles.
FIG. 2 is a block diagram illustrating an example computing device 200. Computing device 200 may be used to perform various procedures, such as those discussed herein. The controller 102 may have some or all of the attributes of the computing device 200.
Computing device 200 includes one or more processor(s) 202, one or more memory device(s) 204, one or more interface(s) 206, one or more mass storage device(s) 208, one or more Input/Output (I/O) device(s) 210, and a display device 230 all of which are coupled to a bus 212. Processor(s) 202 include one or more processors or controllers that execute instructions stored in memory device(s) 204 and/or mass storage device(s) 208. Processor(s) 202 may also include various types of computer-readable media, such as cache memory.
Memory device(s) 204 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 214) and/or nonvolatile memory (e.g., read-only memory (ROM) 216). Memory device(s) 204 may also include rewritable ROM, such as Flash memory.
Mass storage device(s) 208 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown in FIG. 2, a particular mass storage device is a hard disk drive 224. Various drives may also be included in mass storage device(s) 208 to enable reading from and/or writing to the various computer readable media. Mass storage device(s) 208 include removable media 226 and/or non-removable media.
I/O device(s) 210 include various devices that allow data and/or other information to be input to or retrieved from computing device 200. Example I/O device(s) 210 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
Display device 230 includes any type of device capable of displaying information to one or more users of computing device 200. Examples of display device 230 include a monitor, display terminal, video projection device, and the like.
Interface(s) 206 include various interfaces that allow computing device 200 to interact with other systems, devices, or computing environments. Example interface(s) 206 include any number of different network interfaces 220, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include user interface 218 and peripheral device interface 222. The interface(s) 206 may also include one or more peripheral interfaces such as interfaces for pointing devices (mice, track pad, etc.), keyboards, and the like.
Bus 212 allows processor(s) 202, memory device(s) 204, interface(s) 206, mass storage device(s) 208, I/O device(s) 210, and display device 230 to communicate with one another, as well as other devices or components coupled to bus 212. Bus 212 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 200, and are executed by processor(s) 202. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
To enhance authentication of V2V communications, the systems, methods, and devices disclosed herein provide a cryptographic credential with a physical identity and co-presence component to help infer a vehicle's location. A vehicle authentication approach ("VAuth") is disclosed herein and provides a secure authenticated key agreement scheme that addresses the aforementioned concerns for V2V communications while the vehicles are driven on the road. The VAuth approach includes capturing a car's visual contextual information using a camera as a means to bind its physical identity and its co-presence to the other vehicle. Specifically, two moving vehicles on the road would have a unique pair of relative distance (d) and angle (Φ) at a given point of time that no other vehicles can experience.
For example, FIG. 3 illustrates an embodiment of the use of VAuth 300. In the embodiment, vehicles A and B both take a snapshot of each other simultaneously (e.g. within one second, preferably within 100 ms, more preferably within 10 ms) and exchange the images to prove their relative d and Φ. Specifically, VAuth 300 leverages cryptographic commitment and decommitment schemes to bind the vehicle's cryptographic key with its physical identity (license plate number) and its co-presence (d and Φ), which help to infer the location.
Due to this binding, VAuth 300 eliminates the aforementioned location spoofing impersonation attacks. Through this binding, VAuth 300 is also robust against Man-in-the-Middle (MitM) attacks that may be present during the key agreement steps. In addition, VAuth 300 may also restrict image forgery and spoofing attacks because each vehicle may verify the validity of the received image using commonly observed objects (e.g., neighboring vehicles, road signs, terrains in the background, etc.).
VAuth 300 enables automated key establishment among mobile vehicles even where the following constraints are present. First, there may be a requirement for decentralized trust management, making the traditional approach of relying on a remote and central trusted third party (TTP) questionable. TTPs incur a large management cost and are vulnerable to a single point of failure. Second, no human interaction or involvement may be required due to the fast dynamics of vehicles moving in traffic. Including drivers and passengers in the loop not only degrades usability but may significantly distract from driving tasks. Third, there may be a requirement to use available hardware in vehicles to keep vehicle costs low.
The main goal of VAuth 300 is to secure against location spoofing impersonation attacks in V2V communications by binding the physical identity and co-presence of the pair of neighboring cars. We define "neighboring vehicles" as vehicles within each other's line of sight (i.e., camera's field of view). In doing so, we enable a pair of vehicles to establish a secure channel by performing an ad-hoc secure key agreement while the vehicles are on the road. This process is referred to as "pairing." The key agreement protocol should be robust against active attacks such as Man-in-the-Middle (MitM) and image spoofing attacks. VAuth 300 augments the integrity and authenticity of key agreement messages. Integrity and authenticity guarantee that the key agreement messages come unaltered en route from the claimed sender.
To summarize, the description of VAuth 300 included herein discloses: (a) a secure V2V key agreement protocol that binds physical identity and presence to a cryptographic key; (b) security analysis of VAuth protocol to demonstrate its robustness against MitM attacks; (c) an implementation and evaluation of VAuth conducted with real-world vehicles.
The attacker's goal is to break the integrity and authenticity of the key agreement scheme between two legitimate vehicles. This application considers both passive and active attackers. Passive attackers merely observe the wireless communication in attempts to launch attacks (e.g., eavesdropping attack). Active attackers may inject, replay, modify, and delete messages in the communication channel. In this application, an approach is disclosed that deals with attackers that are co-located with the legitimate entities, i.e., neighboring vehicles traveling along the road.
VAuth 300 leverages visual images of the vehicles' contextual information to verify authenticity during a pairing process. Any neighboring pair of vehicles, and only that neighboring pair of vehicles, shares and experiences a unique relative distance (d) and angle (ϕ) at a specific time that no other vehicles experience (where 0 ≤ ϕ ≤ 2π). For example, vehicles A and B in FIG. 3 share a relative distance and angle. Note that it is possible for another pair of vehicles (e.g., vehicles B and C) to have their own d and ϕ relative to each other, but it is impossible for them to have the same d and ϕ relative to vehicle A.
The vehicles prove their authenticity by taking camera snapshots of each other to present d and ϕ as proof. The pair of vehicles identify each other as "target" vehicles to pair with by initiating periodic beacon messages. The two vehicles exchange beacon messages that contain their identifiers (i.e., license plate numbers). If the identifiers are not found in each vehicle's local "paired" list, the two vehicles identify each other as the "target" vehicle to pair with.
Referring to FIG. 3, legitimate vehicles A and B may pair using VAuth 300 in the presence of an attacker vehicle M and possibly one or more benign vehicles C. Each vehicle may have a forward-facing camera 104A, 104B, 104M and a rearward-facing camera 106A, 106B, 106M.
Vehicles A and B may identify each other as targets for V2V communication. Subsequently, the two vehicles will take a picture of each other and exchange the images over the DSRC wireless channel. Specifically, snapshots taken by vehicle A's rear camera 106A contain vehicle B's front image, and similarly, snapshots taken by vehicle B's front camera 104B contain vehicle A's rear image.
If the images are taken by the intended vehicles (and not by a neighboring vehicle), then the images should share the same relative distance, d. Specifically, the distance d_A between vehicles A and B as measured by vehicle A using the image including vehicle B should be equal (i.e., within some tolerance) to the distance d_B measured using the image including vehicle A received from vehicle B, and vice versa. Likewise, the angle ϕ_A between vehicles A and B as measured by vehicle A using the image including vehicle B should be equal (i.e., within some tolerance) to the angle ϕ_B measured using the image including vehicle A received from vehicle B. This constraint may be expressed as |d_A − d_B| < ϵ_d and |ϕ_A − ϕ_B| < ϵ_ϕ, where ϵ_d is a distance tolerance and ϵ_ϕ is an angle tolerance. Where this constraint is not met, the pairing process is terminated.
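For purposes of illustration only, this mutual-consistency check may be sketched as follows (a minimal sketch in Python; the tolerance values and example measurements are assumptions, not values from the disclosure):

import math

EPS_D = 0.5                 # distance tolerance, epsilon_d, in meters (assumed value)
EPS_PHI = math.radians(5)   # angle tolerance, epsilon_phi, in radians (assumed value)

def mutually_consistent(d_a, phi_a, d_b, phi_b):
    # Pairing proceeds only if both vehicles measure approximately the same
    # relative distance and angle from the exchanged images.
    return abs(d_a - d_b) < EPS_D and abs(phi_a - phi_b) < EPS_PHI

# Example: vehicle A measures 12.3 m / 0.10 rad; vehicle B measures 12.6 m / 0.12 rad.
if not mutually_consistent(12.3, 0.10, 12.6, 0.12):
    raise RuntimeError("pairing terminated: relative pose mismatch")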
The security of VAuth 300 depends on the uniqueness of the distances (d_A, d_B) and angles (ϕ_A, ϕ_B) of a pair of vehicles at a specific point in time. However, consider an attacker, vehicle M, illustrated in FIG. 3, traveling along with vehicles A and B. In order to launch a MitM attack, vehicle M estimates the relative distance (d_M ≈ d_A) and angle (ϕ_M ≈ ϕ_A) and impersonates vehicle A to vehicle B by simply spoofing an image of the victim vehicle (vehicle B) with an image from a pool of prepared images with varying distances and angles relative to vehicle B. The attacker can prepare a "dictionary" of images offline, for example, when the victim's vehicle (vehicle B) is parked on a street.
In some embodiments, VAuth 300 prevents such attacks by leveraging the fact that both vehicles' surroundings (e.g., neighboring vehicles, road signs, background objects/views, etc.) should be approximately equal. Hence, VAuth 300 requires vehicles A and B to take both front (V_A^F and V_B^F) and rear (V_A^R and V_B^R) images as depicted. Each vehicle may therefore compare the images to check whether they contain similar surroundings. For example, V_A^F and V_B^F should share common features since they point in the same direction, and V_A^R and V_B^R should likewise share common features. If this check fails, the vehicles reject the pairing process.
The VAuth 300 protocol includes four phases: (1) synchronization; (2) snapshot; (3) key agreement; and (4) key confirmation. Each phase is discussed in detail below with respect to the algorithm of Table 1, below. The definitions of the variables of the algorithm of Table 1 are included in Table 2.
TABLE 1
VAuth Protocol.
(Phase 1) Synchronization
1. A → B: BEACON_A = ID_A
2. B: checks ID_A against "paired" list; aborts if found.
3. B → A: RQST_TO_PAIR
4. A: checks ID_B against "paired" list; aborts if found.
5. A → B: SYNC_AB
(Phase 2) Snapshot
6. A ⇄ B: take snapshots;
   A: front (V_A^F) and rear (V_A^R);
   B: front (V_B^F) and rear (V_B^R).
(Phase 3) Key Agreement
7. A → B: C_A = H(g^a ∥ ID_A ∥ V_A^F ∥ V_A^R)
8. B → A: C_B = H(g^b ∥ ID_B ∥ V_B^F ∥ V_B^R)
9. A → B: D_A = g^a ∥ ID_A ∥ V_A^F ∥ V_A^R
   B: if verifyCmmt() == true, accepts g^a;
   computes shared key K = (g^a)^b;
   aborts if verification failed.
10. B → A: D_B = g^b ∥ ID_B ∥ V_B^F ∥ V_B^R
    A: if verifyCmmt() == true, accepts g^b;
    computes shared key K′ = (g^b)^a;
    aborts if verification failed.
(Phase 4) Key Confirmation (check K ?= K′)
11. A → B: n_A ∥ M_K′(n_A)
12. B → A: n_B ∥ M_K(n_A ∥ n_B)
13. A: checks M_K(n_A ∥ n_B) ?= M_K′(n_A ∥ n_B);
    A → B: M_K′(n_B)
14. B: checks M_K′(n_B) ?= M_K(n_B);
    aborts if confirmation fails.
TABLE 2
Notations for VAuth protocol.
Notation    Description
A → B       Message sent over the in-band wireless Dedicated Short-Range Communication (DSRC) channel
A ⇄ B       Camera snapshot of each other
M_K(x)      MAC (e.g., HMAC) computed over the input x, using key K
g^x         Diffie-Hellman public parameter (mod p omitted for brevity)
H(x)        Cryptographic hash (e.g., SHA-3) of input x
{0, 1}^i    Random binary string with length i
V_X^F       Front snapshot image taken by vehicle X
V_X^R       Rear snapshot image taken by vehicle X
In the synchronization phase, each vehicle transmits a periodic beacon message to attempt to initiate the VAuth 300 protocol. The beacon message is simply a broadcast of the vehicle's identifier (i.e., license plate number, ID_A or ID_B). Continuing with the example of FIG. 3, vehicle A broadcasts its beacon message, BEACON_A, and vehicle B receives this message as depicted in Table 1. Upon receiving BEACON_A, vehicle B checks it against its "paired" list. If ID_A is not found in the list, vehicle B sends a request to pair to vehicle A. Similarly, vehicle A also checks the license plate of vehicle B (ID_B) against its "paired" list (Steps 2-4 of Table 1). If it is not found, vehicle A transmits a synchronization message to vehicle B to initiate a synchronized snapshot phase so that both vehicles identify each other as a "target" vehicle to pair with (Step 5 of Table 1). Note that the protocol can be further modified so that if a vehicle receives multiple pairing requests, the vehicle can prioritize the requests using other information sent together with the requests (e.g., GPS location information to prioritize requests based on the proximity of the two vehicles).
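As a rough illustration of the synchronization phase, the beacon handling of Steps 1-5 may be sketched as follows (a minimal sketch in Python; the message names mirror Table 1, while the send callback and the paired-list structure are assumptions, not the disclosed wire format):

paired = set()  # license plate numbers of vehicles already paired

def on_beacon_received(beacon_id, my_id, send):
    # Step 2 of Table 1: abort if the beaconing vehicle is already paired.
    if beacon_id in paired:
        return
    send("RQST_TO_PAIR", my_id)  # Step 3: request to pair

def on_pair_request(requester_id, send):
    # Step 4 of Table 1: abort if the requesting vehicle is already paired.
    if requester_id in paired:
        return
    send("SYNC", requester_id)  # Step 5: initiate the synchronized snapshot phase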
In the snapshot phase, following the synchronization phase, and in response to the messages received during the synchronization phase, both vehicle A and vehicle B simultaneously take snapshots of the front and rear views as shown in Step 6 of Table 1. Front and rear images taken by vehicles A and B are referenced as V_A^F and V_A^R, and V_B^F and V_B^R, respectively. Synchronized taking of photos may be performed by vehicles A and B coordinating a time at which photos will be taken.
FIGS. 4A and 4B illustrate V_A^F and V_B^F, respectively, for the scenario of FIG. 3. FIGS. 5A and 5B illustrate V_A^R and V_B^R for the scenario of FIG. 3.
In the key agreement phase, vehicles A and B first exchange their commitments (C_A and C_B) and later reveal their decommitments (D_A and D_B). Each vehicle leverages commitments to bind the Diffie-Hellman (DH) public parameter (g^a or g^b) with the vehicle ID (ID_A or ID_B), which is the license plate number, and the physical co-presence via images (V_A^F ∥ V_A^R or V_B^F ∥ V_B^R). Note that mod p is omitted for brevity for all DH public parameters; however, mod p may still be used. Steps 7-8 of Table 1 depict the exchanges of commitments. Subsequently, the vehicles exchange decommitments as depicted in Steps 9-10 of Table 1 to disclose their committed information to each other.
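A minimal sketch of the commitment computation follows (Python; SHA-3 is used as in Table 2, while the byte-level encoding of the ∥ concatenation is an assumption):

import hashlib
import secrets

def dh_keypair(g, p):
    # Pick a private exponent a and compute the DH public parameter g^a mod p.
    a = secrets.randbelow(p - 2) + 1
    return a, pow(g, a, p)

def commitment(g_a, vehicle_id, img_front, img_rear):
    # C = H(g^a || ID || V^F || V^R), with || realized as byte concatenation.
    h = hashlib.sha3_256()
    h.update(g_a.to_bytes((g_a.bit_length() + 7) // 8 or 1, "big"))
    h.update(vehicle_id.encode())
    h.update(img_front)  # raw image bytes
    h.update(img_rear)
    return h.digest()

# The decommitment D is the revealed tuple (g_a, vehicle_id, img_front, img_rear);
# the peer recomputes the hash over D and compares it with the earlier C.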
Upon receiving the decommitments, each vehicle performs verification. Table 3 depicts the logic of vehicle B verifying the decommitment (D_A) it received from vehicle A. First, vehicle B verifies that C_A is indeed the hash of D_A (Lines 2-5 of Table 3). If true, vehicle B determines which image (front or rear) contains the target vehicle's license plate number (Lines 7-9 of Table 3). For example, V_B should be assigned to V_B^F because vehicle A is in front of vehicle B, and thus the image V_B^F contains vehicle A's license plate (ID_A).
TABLE 3
Pseudocode of Commitment Verification
Algorithm 1 Pseudocode of commitment verification by vehicle B in the VAuth protocol (Table 1, Step 9).
 1: procedure VERIFYCMMT(C_A, D_A, V_B^F, V_B^R)
 2:   ▷ Returns FALSE if D_A is not the decommitment of C_A
 3:   if C_A != H(D_A) then
 4:     return FALSE
 5:   end if
 6:
 7:   ▷ Finds which image contains the target vehicle's ID
 8:   V_B ← whichImageContainsID(V_B^F, V_B^R, ID_A)
 9:   V_A ← whichImageContainsID(V_A^F, V_A^R, ID_B)
10:
11:   ▷ Computes relative distance and angle from images
12:   D_VB ← computeDistance(V_B, ID_A)
13:   ϕ_VB ← computeAngle(V_B, ID_A)
14:   D_VA ← computeDistance(V_A, ID_B)
15:   ϕ_VA ← computeAngle(V_A, ID_B)
16:
17:   ▷ Returns FALSE if the check for D_VB and ϕ_VB fails
18:   if !((D_VA − ϵ_d <= D_VB <= D_VA + ϵ_d) &&
      (ϕ_VA − ϵ_ϕ <= ϕ_VB <= ϕ_VA + ϵ_ϕ)) then
19:     return FALSE
20:   end if
21:   ▷ Returns FALSE if the check for D_VA and ϕ_VA fails
22:   if !((D_VB − ϵ_d <= D_VA <= D_VB + ϵ_d) &&
      (ϕ_VB − ϵ_ϕ <= ϕ_VA <= ϕ_VB + ϵ_ϕ)) then
23:     return FALSE
24:   end if
25:
26:   ▷ Returns FALSE if a spoofing attack is suspected
27:   if (spoofingAttackDetected(V_A^F, V_B^F) ||
      spoofingAttackDetected(V_A^R, V_B^R)) then
28:     return FALSE
29:   end if
30:
31:   ▷ Successfully verified
32:   return TRUE
33: end procedure
Subsequently, vehicle B continues by verifying the relative distance and angle of vehicles A and B (Lines 11-15 of Table 3). It does so by computing the distance and angle from the images V_B and V_A. Hence, D_VB and ϕ_VB are the relative distance and angle of vehicle B to the position of vehicle A's license plate (ID_A). Similarly, D_VA and ϕ_VA correspond to those of vehicle A to the position of vehicle B's license plate (ID_B). If the pair of relative distances, {D_VB, D_VA}, and angles, {ϕ_VB, ϕ_VA}, are not within an error bound, vehicle B aborts the pairing process (Lines 17-24 of Table 3). Potential image spoofing attacks are also detected by vehicle B (Lines 26-31 of Table 3). A pair of images facing the same direction (i.e., front = {V_A^F, V_B^F} and rear = {V_A^R, V_B^R}) is input to the spoofingAttackDetected() function to test whether the snapshots were indeed taken by vehicles A and B simultaneously. As discussed earlier, this check may involve checking the similarity of the surroundings in each image.
Once vehicle B successfully verifies the decommitment of vehicle A (D_A), vehicle B accepts vehicle A's DH parameter, g^a, and computes a shared symmetric key, K = (g^a)^b. Similarly, vehicle A verifies the decommitment of vehicle B (D_B) and, if verification succeeds, computes a shared symmetric key, K′ = (g^b)^a.
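The shared-key computation itself is ordinary Diffie-Hellman; as a sketch (assuming the group parameters g and p are agreed in advance):

# Vehicle B, holding private exponent b, accepts g^a after verifying D_A and
# computes K = (g^a)^b mod p; vehicle A symmetrically computes K' = (g^b)^a mod p.
def shared_key(peer_public, my_private, p):
    return pow(peer_public, my_private, p)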
In the key confirmation phase, upon computing the shared symmetric key, vehicles A and B perform key confirmation to verify that both vehicles indeed generated the same key. This is depicted in Steps 11-14 of Table 1. Vehicle A transmits to vehicle B a randomly generated η-bit nonce, n_A (η = 256), and its Message Authentication Code (MAC) computed with the derived symmetric key, K′. (Note that in this example, HMAC-SHA-3 is used with a 256-bit hash length, but other hash functions can also be used.) Upon receiving the message, vehicle B first verifies the MAC using its derived symmetric key, K. If successful, vehicle B transmits to vehicle A its own randomly generated η-bit nonce, n_B, along with a MAC computed over n_A ∥ n_B using its symmetric key, K. Vehicle A verifies the MAC and, if successful, sends to vehicle B a final MAC computed over the n_B it received, using the key K′. Finally, the VAuth 300 protocol finishes with vehicle B's successful MAC verification. Vehicles A and B now use the generated shared symmetric key as their session key.
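A sketch of the key-confirmation exchange (Steps 11-14 of Table 1) using Python's hmac module with SHA3-256 follows; the stand-in keys and message framing are assumptions:

import hashlib
import hmac
import secrets

def mac(key, *parts):
    return hmac.new(key, b"".join(parts), hashlib.sha3_256).digest()

# Stand-ins for the derived keys K' (at vehicle A) and K (at vehicle B);
# they are equal when the key agreement succeeded.
key_a = key_b = secrets.token_bytes(32)

# Step 11: A sends a 256-bit nonce n_A and its MAC under K'.
n_a = secrets.token_bytes(32)
# Step 12: B verifies with K, then answers with its own nonce n_B.
assert hmac.compare_digest(mac(key_a, n_a), mac(key_b, n_a))
n_b = secrets.token_bytes(32)
# Step 13: A verifies B's MAC over n_A || n_B and returns a MAC over n_B.
assert hmac.compare_digest(mac(key_b, n_a, n_b), mac(key_a, n_a, n_b))
# Step 14: B verifies the final MAC; the session key is now confirmed.
assert hmac.compare_digest(mac(key_a, n_b), mac(key_b, n_b))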
In an embodiment, the attacker's success probability is estimated, ranging from a random guessing attack without any previous knowledge to a sophisticated image spoofing attack. A basic attack may be performed by an attacker who is in full control of the wireless channel. The attacker tries to impersonate vehicle A to vehicle B and forges the DH public key. In Steps 7 and 9 of Table 1, vehicle A transmits its commitment and decommitment (C_A and D_A) to vehicle B. Hence, the attacker, vehicle M, first prevents vehicle B from receiving D_A (e.g., by jamming), and sends its own forged D_A′ = g^a′ ∥ ID_A ∥ V_A^F ∥ V_A^R. However, because C_A binds vehicle A's DH public key, g^a, together with ID_A ∥ V_A^F ∥ V_A^R, the attacker's probability, P_CarM, of succeeding in the attack is equivalent to that of successfully finding a hash collision. Hence, the success probability of the attacker is bounded by the length of the hash function (256-bit SHA-3) as shown in Equation 1, where l is the hash bit length (l = 256).
P_CarM = 2^(−l)   Equation 1
Not being able to succeed with a random guessing attack, the attacker may try to launch more sophisticated attacks. The attacker tries to forge both the DH public key (g^a′) and the images (V_A′^F or V_A′^R) to successfully impersonate vehicle A to vehicle B such that C_A′ = H(g^a′ ∥ ID_A ∥ V_A′^F ∥ V_A′^R). It is assumed that the attacker, vehicle M, first prepares a "dictionary" of images of the victim (vehicle B in this example) with varying distances and angles. Vehicle M selects a prepared image, V_M, of vehicle B with the corresponding d and ϕ and simply transmits to vehicle B the forged commitment (C_A′) and decommitment (D_A′), which includes V_M. The attack may be divided into three cases for security analysis, as outlined below.
In a first case, the attacker has no knowledge of d and ϕ, and VAuth 300 does not check for image spoofing attacks. In this case, it is assumed that the attacker has no knowledge of the relative distance and angle between vehicle A and vehicle B. For simpler analysis, it is assumed that the VAuth 300 protocol does not check for image spoofing attacks (hence, lines 26-31 of Table 3 may be omitted in this case). Vehicle M needs to randomly guess d′ and ϕ′ to select an image such that the values are within the error bounds (ϵ_d and ϵ_ϕ). Hence, the attacker's success probability is as shown in Equation 2, where d_max is the maximum distance of the visible range, which depends on the capability of the camera.
P_CarM = (2ϵ_ϕ / 2π) · (2ϵ_d / d_max)   Equation 2
In a second case, the attacker has knowledge of d and ϕ, and the VAuth 300 protocol does not check for image spoofing attacks. The attacker is assumed to be traveling along with vehicles A and B, and is hence capable of estimating the distance and angle, d and ϕ.
FIG. 6 illustrates this scenario, where vehicle M attempts to find the relative distance, d_AB, and the angles, ϕ_A and ϕ_B. Vehicle M knows its relative distances to vehicles A and B (d_AM and d_BM) and the relative angles (ϕ_x, ϕ_y, ϕ_M). Using simple trigonometry, vehicle M computes the distance and angle as shown in Equation 3 and Equation 4, in which a distance d_xy is defined as the distance between points x and y. Hence, the probability of the attacker succeeding is always P_CarM = 1.
d_AB = sqrt(d_AP^2 + d_BP^2) = sqrt(d_AP^2 + (d_BM − d_PM)^2) = sqrt((d_AM · sin ϕ_M)^2 + (d_BM − d_AM · cos ϕ_M)^2)   Equation 3

ϕ_A (= ϕ_B) = π/2 − ϕ_A′ (= π/2 − ϕ_B′) = π/2 − arccos(d_3 / d_AB) = π/2 − arccos((d_1 + d_2) / d_AB) = π/2 − arccos(((d_AM · cos ϕ_x) + (d_BM · cos ϕ_y)) / d_AB)   Equation 4
In a third case, the attacker has knowledge of d and ϕ, and VAuth 300 checks for image spoofing attacks. In order to prevent the attacker from successfully launching the attack shown in the second case, VAuth 300 includes verification steps for image spoofing attacks (Table 3, lines 26-31). Equation 5 shows that the success probability of the attacker is equivalent to the probability of succeeding in an image spoofing attack, P_spoofing.
P_success = P_spoofing   Equation 5
To detect the image spoofing attack (Attacker Type 1), VAuth 300 leverages commonly seen objects (e.g., common neighboring cars' license plates). Vehicle B searches for neighboring vehicles' license plates in the image pairs {V_A^F, V_B^F} and {V_A^R, V_B^R}. If the number of common license plates is less than a predefined threshold value, the protocol is aborted. The similarity of other objects and their relative locations can also be used to increase the verification accuracy. For example, buildings, road signs, trees, terrain, etc. can also be used.
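A sketch of this common-object check follows (Python; the recognize_plates helper stands in for an ALPR pipeline such as OpenALPR, and the threshold is an assumed system parameter):

COMMON_PLATE_THRESHOLD = 2  # assumed minimum number of shared neighboring plates

def spoofing_suspected(img_a, img_b, recognize_plates):
    # recognize_plates(image) -> set of license plate strings found in the image.
    common = recognize_plates(img_a) & recognize_plates(img_b)
    # Images allegedly taken simultaneously while facing the same direction
    # should share at least a threshold number of neighboring plates.
    return len(common) < COMMON_PLATE_THRESHOLD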
An example implementation of VAuth 300 uses license plate recognition from the images taken by vehicle cameras. In particular, VAuth 300 may use OpenALPR, an open-source library for automatic license plate recognition. OpenALPR takes an input image and traverses eight phases to output recognized license plate numbers, locations (corners, width, and height), and confidence levels (percentages). OpenALPR implements the following phases. Phase 1 ("Detection Phase") finds "potential regions" of license plates. Subsequent phases process all of the potential regions. Phase 2 ("Binarization Phase") creates multiple black and white images of plate regions to increase recognition accuracy. Phase 3 ("Character Analysis") finds regions or blobs of license plate number/character sizes in the plate regions. Phase 4 ("Plate Edges") detects possible edges of license plates by detecting Hough lines. Phase 5 ("Deskew") corrects the rotation and skew of the license plate image. Phases 6 and 7 ("Character Segmentation" and "OCR") isolate the characters of the license plate and perform character recognition and confidence level determination. Finally, Phase 8 ("Post Processing") outputs a list of n potential candidate license plate numbers sorted by confidence level.
In an embodiment, in order to compute the distance and angle from the output of OpenALPR, the systems and methods of the disclosure utilize techniques for image rectification and perspective correction from computer vision. The algorithm leverages the ratio of a real-world object's size in meters ("world plane") to its size in pixels ("image plane") using dimensional knowledge of known objects. A calibration image, V_calibration, is taken, which is a snapshot of the vehicle's license plate taken at a known distance, d_init (e.g., one meter), away from the vehicle. From V_calibration, the height (in pixels), h_init, of the recognized license plate box is obtained. The distance to a license plate in other images can then be computed from the ratio of the height of the recognized license plate, as shown in Equation 6.
d_image = d_init · h_init / h_image   Equation 6
Note that different vehicles may be equipped with different types of cameras, resulting in h_init values that vary between cameras. However, each car can include its h_init value in the commitment/decommitment messages.
The angle may be computed using the known distances from the image. The problem of finding the relative angle is illustrated in FIGS. 7A and 7B. Specifically, the angle can be derived using two distances, d_image and d_shift_m, in meters, as shown in Equation 7. As illustrated in FIG. 7B, ϕ_image is the angle to the license plate, and d_image is the distance from the camera 104, 106 to the license plate.
ϕ_image = arccos(d_image / d_shift)   Equation 7
The value d_shift is the "imaginary" distance in meters that the car would have shifted horizontally if the car were originally on the same line as the camera, i.e., horizontally centered in the field of view of the camera. To find d_shift_m, the ratio of pixels to meters is obtained using an object of known dimensions in both meters and pixels. For this object, the license plate is again used. The value h_m is the height of the actual license plate in meters, which is 0.15 meters (for example, California license plates have the dimensions 6″×12″ (0.15 m×0.3 m)). The value h_px is the height in pixels of the license plate from the image. From FIG. 7A, one may also find d_shift_px, which is the shift distance in pixels. Finally, using Equation 8, d_shift_m is derived.
d_shift_m = h_m · d_shift_px / h_px   Equation 8
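As an illustration of Equations 6-8, a minimal Python sketch follows. The calibration constants are assumed values; the arccosine of Equation 7 is evaluated here with the ratio d_shift_m/d_image so that its argument remains within [−1, 1], which is an interpretation of the printed formula rather than a verbatim transcription:

import math

H_M = 0.15      # actual license plate height in meters (per the example above)
D_INIT = 1.0    # calibration distance d_init in meters (assumed value)
H_INIT = 120.0  # plate height h_init in pixels at D_INIT (assumed calibration value)

def plate_distance(h_image_px):
    # Equation 6: d_image = d_init * h_init / h_image
    return D_INIT * H_INIT / h_image_px

def plate_angle(h_image_px, d_shift_px):
    d_image = plate_distance(h_image_px)
    # Equation 8: convert the pixel shift to meters using the plate as a scale.
    d_shift_m = H_M * d_shift_px / h_image_px
    # Equation 7 relates phi_image to d_image and d_shift_m.
    return math.acos(min(1.0, d_shift_m / d_image))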
Although many of the previous examples discuss authenticating using information in or derived from camera images, any type of sensor data may be used to authenticate a vehicle, including its relative location to a parent vehicle. In one embodiment, LIDAR, RADAR, SONAR, or other data may be used to detect objects within their respective sensing ranges and determine relative locations, angles, and the like based on that data. In different embodiments, any type of sensor data may be used alone to verify location or identity, or multiple types of sensor data may be used together to robustly confirm each other as well as any assertions made by a potential V2V communication source. For example, LIDAR or RADAR data may not be able to detect license plate numbers or the color of a vehicle, but they may be used to generate an object map of the vehicles surrounding a parent vehicle. In addition to the object map, velocity vectors relative to the parent vehicle can be determined for each object. These object maps and/or velocity vector maps can be compared to those obtained based on another vehicle's sensor data to determine whether that other vehicle may be trusted for V2V communication.
FIG. 8 is a schematic flow chart diagram illustrating a method 800 for authenticating a vehicle based on camera images, object maps, heading angles, and/or velocity vector maps. The method 800 begins and both vehicles agree to initiate the communication at 802. Both vehicles capture a camera snapshot of the other vehicle, a surrounding map with RADAR/LIDAR sensors, and their heading angles with a compass at 804. The vehicles check to see whether velocity vector mapping is used at 806. If velocity vector mapping is used as determined at 806, the vehicles collect the velocity vector map of surrounding objects during a time t at 824. Both vehicles exchange camera snapshots, object maps, heading angles, and/or velocity vector maps at 808. Each vehicle extracts the vehicle number, relative distance, and azimuth angle from the received camera image (or other sensor data or object maps) and adds the heading angle difference to the azimuth angle at 810. Each vehicle verifies whether the extracted information matches its own at 812. If the extracted information does not match its own as determined at 812, authentication fails 822. If it does match as determined at 812, each vehicle counts the number of commonly detected objects from its own map and the received object map from the other vehicle at 814. The number of commonly detected objects is compared to see whether it meets a system trust threshold (e.g., a percentage of detected objects or an absolute integer value of detected objects) at 816. If the number of commonly detected objects does not meet the system trust threshold as determined at 816, authentication fails 822. If the number of commonly detected objects as determined at 816 does meet the system trust threshold, authentication is a success 820 if velocity vector matching is not used at 818. If velocity vector matching is used as determined at 818, the vehicles calculate the ground vehicle velocity map by adding the vehicle velocity vector to the velocity vector of each commonly detected object at 826. The vehicles check to see whether their own ground velocity maps approximately equal the received velocity maps at 828. If the vehicle's own ground velocity map approximately equals the received velocity map as determined at 828, authentication succeeds 820. If not, authentication fails 822.
FIGS. 9-12 illustrate various parameters and comparisons that may be used to perform the method of FIG. 8. FIG. 9 is a plan view illustrating relative vehicle positions on a roadway with nearby objects, including other vehicles (vehicle C 906 and vehicle D 908) and roadside objects (object E 910 and object F 912). The heading angle h_A of vehicle A 902 and the heading angle h_B of vehicle B 904 are illustrated. A camera image taken by vehicle A 902 is also illustrated, with the relative distance between vehicle A 902 and vehicle B 904 (d_A) and the azimuth angle of vehicle B 904 (a_B).
FIG. 10 is a plan view illustrating a scene 1002 as illustrated in FIG. 9, an ADAS object map 1004 of vehicle A 902, and an ADAS object map 1006 of vehicle B 904. The object map 1004 of vehicle A 902 includes points or locations detected for object E 910, vehicle B 904, vehicle C 906, and vehicle D 908. The object map 1004 includes the azimuth angle 1012 of vehicle A 902, object E points or locations 1020, vehicle B points or locations 1014, vehicle C points or locations 1016, and vehicle D points or locations 1018. The object map 1006 of vehicle B 904 includes points or locations detected for object F 912, vehicle A 902, vehicle C 906, and vehicle D 908. The object map 1006 includes the azimuth angle 1024 of vehicle B 904, vehicle C points or locations 1026, vehicle D points or locations 1028, object F points or locations 1032, and vehicle A points or locations 1022. Note that the object map 1004 of vehicle A 902 does not include object F 912, and the object map 1006 of vehicle B 904 does not include object E 910. This illustrates that the object maps 1004, 1006 might not be perfect matches.
FIG. 11 is a plan view illustrating the resulting object maps as well as rotations and translations used to attempt to match or overlap the object map 1004 of vehicle A 902 and the object map 1006 of vehicle B 904. The result of rotating the object map 1006 of vehicle B 904 by h_A − h_B is illustrated at 1102. The result of translating the object map 1006 of vehicle B 904 by d_AB and overlapping it over the object map 1004 of vehicle A 902 is illustrated at 1104. An object map may be rotated by the difference between the heading angles (e.g., h_A − h_B) and then translated by the distance d_AB between the vehicles or the vehicle sensors that were used to create the object maps. The number or percentage of matching objects can then be determined by counting matching and non-matching objects. As long as the number or percentage exceeds a trust threshold, discrepancies may not cause authentication to fail.
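A sketch of this alignment-and-count step follows (pure Python over 2-D points; nearest-neighbor matching within an assumed tolerance):

import math

def rotate(points, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def translate(points, dx, dy):
    return [(x + dx, y + dy) for x, y in points]

def count_common_objects(map_a, map_b, h_a, h_b, dx_ab, dy_ab, tol=1.0):
    # Rotate B's map by the heading difference h_A - h_B, then translate it by
    # the relative displacement between the two vehicles (FIG. 11).
    aligned_b = translate(rotate(map_b, h_a - h_b), dx_ab, dy_ab)
    matches = 0
    for ax, ay in map_a:
        if any(math.hypot(ax - bx, ay - by) < tol for bx, by in aligned_b):
            matches += 1
    return matches

# Authentication may then require count_common_objects(...) >= trust_threshold.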
FIG. 12 is a plan view illustrating velocity vector-based matching. FIG. 12 illustrates a vehicle A relative velocity vector map 1202, a vehicle B relative velocity vector map 1204, and a ground velocity vector map 1206. Note that all maps include vectors for all objects, for illustrative purposes. The velocity vectors for each object may be compared by adding a vehicle's velocity vector to the velocity vector of an object relative to that vehicle and determining whether the result is sufficiently similar to the result of adding the other vehicle's velocity vector to the same object's velocity vector relative to that other vehicle. This may be done for each commonly detected object.
The velocity of vehicle A (v_A 1208) added to an object's velocity relative to vehicle A equals the velocity of vehicle B (v_B 1210) added to the same object's velocity relative to vehicle B, according to Equation 9, below, where v_X|A and v_X|B denote the velocity of a common object X measured relative to vehicles A and B, respectively.

v_A + v_X|A = v_B + v_X|B   Equation 9
The ground velocity vectors are illustrated in the ground velocity vector map 1206, including the vehicle A ground velocity vector 1212, the vehicle B ground velocity vector 1214, the vehicle C ground velocity vector 1216, the vehicle D ground velocity vector 1218, the object E ground velocity vector 1220, and the object F ground velocity vector 1222. The relative velocities of various vehicles and objects as determined by vehicle A 902 are illustrated in the vehicle A relative velocity vector map 1202 as dotted lines. The relative velocities of various vehicles and objects as determined by vehicle B 904 are illustrated in the vehicle B relative velocity vector map 1204.
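A sketch of the ground-velocity comparison of Equation 9 follows (Python; velocities as 2-D tuples, with an assumed agreement tolerance):

def ground_velocity(v_vehicle, v_relative):
    # Ground velocity of an object = vehicle velocity + the object's velocity
    # measured relative to that vehicle.
    return (v_vehicle[0] + v_relative[0], v_vehicle[1] + v_relative[1])

def velocities_agree(v_a, rel_to_a, v_b, rel_to_b, tol=0.5):
    # Equation 9: v_A + v_X|A should equal v_B + v_X|B for a common object X.
    ga = ground_velocity(v_a, rel_to_a)
    gb = ground_velocity(v_b, rel_to_b)
    return abs(ga[0] - gb[0]) < tol and abs(ga[1] - gb[1]) < tol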
FIG. 13 illustrates a schematic flow chart diagram of a method 1300 for authenticating vehicle-to-vehicle communication. The method 1300 may be performed by any suitable computing device, including, for example, a controller 102 of a vehicle. The method 1300 begins and a computing device receives sensor data from a first vehicle at 1302. The computing device receives secondary sensor data from a second vehicle at 1304. The computing device extracts, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle to the first vehicle, or a common object identified by the sensor data and the secondary sensor data at 1306. The computing device determines whether the authentication satisfies a trust threshold of the first vehicle at 1308.
FIG. 14 illustrates a schematic flow chart diagram of a method 1400 for authenticating vehicle-to-vehicle communication. The method 1400 may be performed by any suitable computing device, including, for example, a controller 102 of a vehicle. The method 1400 begins and a computing device receives sensor data from a first vehicle comprising a camera image and ranging data comprising one or more of RADAR data, LIDAR data, SONAR data, or ultrasound data at 1402. The computing device receives secondary sensor data from a second vehicle comprising a secondary camera image and secondary ranging data comprising one or more of RADAR data, LIDAR data, SONAR data, or ultrasound data at 1404. The computing device extracts an authentication point from the camera image, wherein the authentication point comprises one or more of: a distance of the second vehicle relative to the first vehicle, an azimuth angle between the second vehicle and the first vehicle, or a vehicle identification of the second vehicle at 1406. The computing device verifies the authentication point based on the ranging data at 1408. The computing device exchanges a heading angle of the first vehicle or the second vehicle to compensate for an angle discrepancy between the sensor data and the secondary sensor data at 1410. The computing device extracts, based on the authentication point, the sensor data, and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle to the first vehicle, or a common object identified by the sensor data and the secondary sensor data at 1412. The computing device determines whether the authentication satisfies a trust threshold of the first vehicle at 1414.
FIG. 15 illustrates a schematic flow chart diagram of a method 1500 for authenticating vehicle-to-vehicle communication. The method 1500 may be performed by any suitable computing device, including, for example, a controller 102 of a vehicle. The method 1500 begins and a computing device receives an object map captured by one or more sensors of a first vehicle at 1502. The computing device receives a secondary object map captured by one or more sensors of a second vehicle at 1504. The computing device aligns the secondary object map with the object map to generate an overlaid object map by rotating and/or translating one of the object map or the secondary object map at 1506. The computing device determines whether there is one or more common objects identified in the object map and the secondary object map based on the overlaid object map at 1508. The computing device determines whether the one or more common objects satisfies a trust threshold for the first vehicle at 1510. The computing device authenticates communication between the first vehicle and the second vehicle if the one or more common objects satisfies the trust threshold at 1512.
Use of sensors in addition to images, or as an alternative to images, may help reduce potential attacks. For example, an advanced attacker may be able to impersonate a vehicle's identity by preparing a dictionary of images taken from various locations offline and selecting an image from the dictionary to pretend to be another vehicle near the victim vehicle. Alternatively, an attacker might take a snapshot of a victim vehicle, crop out the victim vehicle from the snapshot, and superimpose the victim vehicle into position to mislead the victim into believing that the image was taken from another vehicle. Due to recent advances in computer vision, more camera image modification techniques will become available to an attacker.
In order to overcome the potential weaknesses of the previous camera image-based V2V authentication, other ADAS sensors that are rapidly becoming prevalent in modern vehicles may be leveraged. These other ADAS sensors (which include RADAR, LIDAR, and other sensors) are able to detect surrounding objects (stationary and/or moving) and can be used to provide an enhanced authentication procedure, as shown in the process flow 800 of FIG. 8.
Embodiments may still check for the vehicle identity (vehicle number) captured from a camera snapshot. Additionally, a vehicle extracts the relative distance, azimuth angle, and vehicle number from the camera snapshot of the other vehicle. However, to prevent the image modification attack, this information is also cross-checked with the information from other ADAS sensors such as RADAR and LIDAR. Although these sensors do not provide vehicle number information, they still provide the relative distance and azimuth angle of surrounding objects, which should be identical to the information from the camera snapshot within an error boundary. In this process, the heading angle of each vehicle may also be exchanged to compensate for the angle discrepancy when the two vehicles are not aligned (e.g., on a curved road). The surrounding objects can be stationary (e.g., landmarks, trees, buildings, road signs) or moving (e.g., other surrounding vehicles, and potentially pedestrians).
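A sketch of this cross-check follows (Python; the error bounds are assumed system parameters, and the heading compensation is applied to the camera-derived azimuth):

def cross_check(cam_dist, cam_azimuth, rng_dist, rng_azimuth,
                heading_a, heading_b, eps_d=0.5, eps_phi=0.05):
    # Compensate the azimuth for the heading difference between the vehicles
    # (e.g., on a curved road) before comparing.
    adjusted_azimuth = cam_azimuth + (heading_a - heading_b)
    # Camera-derived and RADAR/LIDAR-derived estimates should agree within bounds.
    return (abs(cam_dist - rng_dist) < eps_d and
            abs(adjusted_azimuth - rng_azimuth) < eps_phi)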
Referring again to FIGS. 10-11, the vehicle A object map 1004 and the vehicle B object map 1006 are illustrated. In order to determine whether the two vehicles are near each other (to authenticate), an algorithm rotates and/or translates one of the two object maps (the vehicle B object map 1006 as illustrated in FIG. 11) to determine whether there is an acceptable match between any of the detected objects.
Due to sensor noise, differing sensing capabilities, and obstacles, the detected surrounding objects for two vehicles will not be identical in practice. Therefore, if the number of commonly detected objects for the two vehicles is higher than the system trust threshold, authentication is granted for the following communication. The system trust threshold can be determined empirically for the surrounding environment. For example, it will be higher in an urban area with high object density than in a rural area. Other implementations may delegate the trust decision to each application layer rather than the system layer. As such, for a given number of commonly detected objects, a security-critical application with a stricter trust boundary may decline to authenticate the communication, whereas a non-critical application may be satisfied with it.
In an implementation, the authentication system can also use the velocity vectors of surrounding objects during a short duration. A vehicle can calculate the ground velocity of surrounding objects by adding its own velocity to each object's relative velocity vector. Comparing the ground velocity vectors of commonly detected objects for the two vehicles makes it even harder for an attacker to impersonate a vehicle in V2V communication. Compared to authentication based on images only, using the additional ADAS sensor data makes it even harder for an attacker to bypass the V2V authentication system. In such an embodiment, the authentication may comprise an equivalent detected ground speed of a common object as identified by the sensor data and the secondary sensor data.
EXAMPLES
The following examples pertain to further embodiments.
Example 1 is a method that includes receiving sensor data from a vehicle A and receiving sensor data from a vehicle B, wherein the sensor data includes one or more of RADAR, LIDAR, image, SONAR, ultrasound, or other sensor data. The method includes verifying that the vehicle B is proximal to the vehicle A based on the sensor data. For example, the method may include verifying a relative position (distance and/or angle) of the vehicle B to the vehicle A based on the sensor data from the vehicle A and the sensor data from the vehicle B. As another example, the method may include verifying relative positions of objects in a vehicle A object map determined based on the sensor data from the vehicle A and in a vehicle B object map determined based on the sensor data from the vehicle B. As a further example, the method may include verifying determined ground speeds of objects detected based on the sensor data from the vehicle A and the sensor data from the vehicle B. In one embodiment, object maps or vector maps may be exchanged so that each vehicle does not have to determine the object map or vector map for the other vehicle.
In Example 2, the verifying of the method of Example 1 may include determining that a similarity between object maps, velocity vectors, or the like is within a desired threshold.
In Example 3, the method of any of Examples 1-2 includes comparing relative positions determined based on one type of sensor data (e.g., image data) to a relative position based on another type of sensor data (e.g., RADAR or LIDAR) and authenticating if the relative positions are similar enough.
In Example 4, the method in any of Examples 1-3 further includes: determining a vehicle A object map based on the vehicle A sensor data; determining a vehicle B object map based on the vehicle B additional sensor data; and verifying that the vehicle A object map and the vehicle B object map are sufficiently similar. The authenticating the vehicle B includes authenticating in response to verifying that the vehicle A object map and the vehicle B object map are sufficiently similar.
In Example 5, the method of Example 4 includes rotating and/or translating at least one of the vehicle A object map or the vehicle B object map to accommodate different orientations or locations of the vehicle A and the vehicle B.
In Example 6, the verifying that the vehicle A object map and the vehicle B object map are sufficiently similar in any of Examples 4-5 includes determining that a number of commonly detected objects in the vehicle A object map and the vehicle B object map is higher than a system trust threshold.
In Example 7, the method of any of Examples 1-6 includes: determining a ground velocity of one or more objects near vehicle A based on the vehicle A additional sensor data; determining a ground velocity of one or more objects near vehicle B based on the vehicle B additional sensor data; and verifying that the ground velocity of the one or more objects near vehicle A and the ground velocity of the one or more objects near vehicle B are sufficiently similar.
Example 8 is a method that includes: receiving a first image from a camera of a vehicle A; receiving a second image from a vehicle B; verifying that a relative position of the vehicle B to the vehicle A according to the first image corresponds to a relative position of the vehicle A to the vehicle B according to the second image; and in response to the verifying, authenticating the vehicle B.
In Example 9, the verifying that the relative position of the vehicle B to the vehicle A according to the first image corresponds to the relative position of the vehicle A to the vehicle B according to the second image of Example 8 includes identifying, by the controller of the vehicle A, an image of the vehicle B in the first image and identifying, by the controller of the vehicle A, an image of the vehicle A in the second image. The method includes determining, by the controller of the vehicle A, at least one of a first distance to the vehicle B from the vehicle A and a first angle to the vehicle B from the vehicle A according to a location of the image of the vehicle B in the first image. The method further includes determining, by the controller of the vehicle A, at least one of a second distance to the vehicle A from the vehicle B and a second angle to the vehicle A from the vehicle B according to a location of the image of the vehicle A in the second image. The method further includes determining, by the controller of the vehicle A, at least one of (a) the second distance is within a predetermined distance tolerance from the first distance and (b) the second angle is within a predetermined angle tolerance from the first angle.
In Example 10, the at least one of the first distance to the vehicle B from the vehicle A and the first angle to the vehicle B from the vehicle A according to a location of the image of the vehicle B in the first image in any of Examples 8-9 includes both of the first distance to the vehicle B from the vehicle A and the first angle to the vehicle B from the vehicle A according to a location of the image of the vehicle B in the first image. The at least one of the second distance to the vehicle A from the vehicle B and the second angle to the vehicle A from the vehicle B according to the location of the image of the vehicle A in the second image includes both of the second distance to the vehicle A from the vehicle B and the second angle to the vehicle A from the vehicle B according to the location of the image of the vehicle A in the second image.
In Example 11, the receiving the first image from the camera of the vehicle A in any of Examples 8-10 includes receiving a first forward image from a forward facing camera mounted to the vehicle A and receiving a first rearward image from a rearward facing camera mounted to the vehicle A. Receiving the second image from the vehicle B includes receiving a second forward image from a forward facing camera mounted to the vehicle B and receiving a second rearward image from a rearward facing camera mounted to the vehicle B.
In Example 12, the identifying the image of the vehicle B in the first image in any of Examples 8-11 includes identifying a license plate number of the vehicle B in the first image, and identifying the image of the vehicle A in the second image includes identifying a license plate number of the vehicle A in the second image.
In Example 13, the method of Example 12 further includes receiving, by the controller of the vehicle A from the vehicle B, a message including the license plate number of the vehicle B.
In Example 14, the determining the at least one of the first distance to the vehicle B from the vehicle A and the first angle to the vehicle B from the vehicle A according to the location of the image of the vehicle B in the first image in any of Examples 12-13 includes: determining a first dimension (himage1) of a license plate of the vehicle B in the first image in pixels; and determining the first distance (d1) as equal to dinit*hinit/himage1, where dinit is a calibration distance and hinit is a test dimension in pixels of a test license plate positioned at dinit from a test camera. Determining the at least one of the second distance to the vehicle A from the vehicle B and the second angle to the vehicle A from the vehicle B according to the location of the image of the vehicle A in the second image includes: determining a second dimension (himage2) of a license plate of the vehicle A in the second image in pixels; and determining the second distance (d2) as equal to dinit*hinit/himage2.
In Example 15, the determining the at least one of the first distance to the vehicle B from the vehicle A and the first angle to the vehicle B from the vehicle A according to the location of the image of the vehicle B in the first image of Example 14 includes: determining a first pixel offset (dshift1) of the license plate of the vehicle B in the first image from a center of the first image in pixels; determining a first distance offset (dshift1m) as equal to (hm*dshift1/himage1), where hm is a measured dimension of a test license plate; and determining the first angle as equal to Arccos(d1/dshift1m). Determining the at least one of the second distance to the vehicle A from the vehicle B and the second angle to the vehicle A from the vehicle B according to the location of the image of the vehicle A in the second image includes: determining a second pixel offset (dshift2) of the license plate of the vehicle A in the second image from a center of the second image in pixels; determining a second distance offset (dshift2m) as equal to (hm*dshift2/himage2); and determining the second angle as equal to Arccos(d2/dshift2m).
In Example 16, the camera of the vehicle A in any of Examples 8-15 is a first camera. The method further includes authenticating the vehicle B in response to determining that one or more background objects in the second image correspond to objects in an image received from a second camera mounted to the vehicle A and facing in an opposite direction to the first camera.
In Example 17, the authenticating the vehicle B in any of Examples 8-16 includes performing Diffie-Hellman key exchange between the vehicle A and the vehicle B.
In Example 18, the method of any of Examples 8-17 further includes: receiving vehicle A additional sensor data including one or more of RADAR and LIDAR data; and receiving vehicle B additional sensor data including one or more of RADAR and LIDAR data from the vehicle B.
In Example 19, the method of Example 18 includes verifying that the relative position of the vehicle B to the vehicle A according to the vehicle A additional sensor data is similar to a relative position of the vehicle A to the vehicle B according to the vehicle B additional sensor data, and wherein the relative position is within an error boundary of the relative position determined based on the first image and the second image.
In Example 20, the method in any of Examples 18-19 includes: determining a vehicle A object map based on the vehicle A additional sensor data; determining a vehicle B object map based on the vehicle B additional sensor data; and verifying that the vehicle A object map and the vehicle B object map are sufficiently similar. The authenticating the vehicle B includes authenticating in response to verifying that the vehicle A object map and the vehicle B object map are sufficiently similar.
In Example 21, the verifying that the vehicle A object map and the vehicle B object map are sufficiently similar of Example 20 includes rotating and/or translating at least one of the vehicle A object map or the vehicle B object map to accommodate different orientations or locations of the vehicle A and the vehicle B.
In Example 22, the verifying that the vehicle A object map and the vehicle B object map are sufficiently similar as in any of Examples 20-21 includes determining that a number of commonly detected objects in the vehicle A object map and the vehicle B object map is higher than a system trust threshold.
In Example 23, the method as in any of Examples 18-22 further includes: determining a ground velocity of one or more objects near vehicle A based on the vehicle A additional sensor data; determining a ground velocity of one or more objects near vehicle B based on the vehicle B additional sensor data; and verifying that the ground velocity of the one or more objects near vehicle A and the ground velocity of the one or more objects near vehicle B are sufficiently similar.
Example 24 is computer readable storage media storing instructions that, when executed by one or more processors, cause the one or more processors to implement a method or realize a system or apparatus as in any of Examples 1-23.
Example 25 is a method for authenticating vehicle-to-vehicle communication. The method includes: receiving sensor data from a first vehicle and secondary sensor data from a second vehicle. The method includes extracting, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle to the first vehicle or a common object identified by the sensor data and the secondary sensor data. The method includes determining whether the authentication satisfies a trust threshold of the first vehicle.
Example 26 is a method as in Example 25, wherein the authentication comprises an equivalent detected ground speed of a common object identified by the sensor data and the secondary sensor data.
Example 27 is a method as in any of Examples 25-26, wherein: the sensor data comprises an image received from a camera of the first vehicle; the sensor data further comprises ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the first vehicle; the secondary sensor data comprises a secondary image received from a camera of the second vehicle; and the secondary sensor data further comprises secondary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the second vehicle.
Example 28 is a method as in any of Examples 25-27, further comprising extracting an authentication point from the camera image, wherein the authentication point comprises one or more of: a distance of the second vehicle relative to the first vehicle; an azimuth angle between the second vehicle and the first vehicle; or a vehicle identification of the second vehicle.
Example 29 is a method as in any of Examples 25-28, further comprising verifying the authentication point based on the ranging data, wherein verifying the authentication point comprises: extracting a ranging authentication point from the ranging data. The ranging authentication point comprises one or more of: the distance of the second vehicle relative to the first vehicle; or the azimuth angle between the second vehicle and the first vehicle. The method further comprises cross-checking the authentication point extracted from the image with a corresponding ranging authentication point; and verifying the authentication point is equal to the corresponding ranging authentication point within an error boundary.
Example 30 is a method as in any of Examples 25-29, further comprising exchanging a heading angle of the first vehicle or the second vehicle to compensate for an angle discrepancy between the sensor data and the secondary sensor data.
Example 31 is a method as in any of Examples 25-30, further comprising: receiving an object map captured by one or more sensors of the first vehicle; receiving a secondary object map captured by one or more sensors of the second vehicle; aligning the secondary object map with the object map to generate an overlaid object map by rotating and/or translating one of the object map or the secondary object map; and determining whether there is one or more object matches between the object map and the secondary object map based on the overlaid object map.
Example 32 is a method as in any of Examples 25-31, further comprising: determining whether the one or more object matches between the object map and the secondary object map satisfies a trust threshold for an application of the first vehicle; authenticating communication between the application of the first vehicle and the second vehicle if the one or more object matches satisfies the trust threshold for the application; and denying communication between the application of the first vehicle and the second vehicle if the one or more object matches does not satisfy the trust threshold for the application.
Example 33 is a method as in any of Examples 25-32, further comprising: determining whether the one or more object matches between the object map and the secondary object map satisfies a global trust threshold for the first vehicle; authenticating communication between the first vehicle and the second vehicle if the one or more object matches satisfies the global trust threshold.
Example 34 is a method as in any of Examples 25-33, wherein determining whether the authentication satisfies the trust threshold of the first vehicle comprises verifying one or more of: an identity of the second vehicle; whether the second vehicle is within a close proximity to the first vehicle; a presence of one or more common objects sensed by the sensor data and the secondary sensor data; or equivalent determined ground speeds for one or more common objects based on the sensor data and the secondary sensor data.
Example 35 is a method as in any of Examples 25-34, further comprising permitting communication between the first vehicle and the second vehicle if the verification meets the trust threshold of the first vehicle.
Example 36 is a system for authenticating vehicle-to-vehicle communication. The system includes: a first vehicle comprising one or more sensors providing sensor data; and a vehicle controller in communication with the one or more sensors and configured to receive secondary sensor data from a second vehicle. The system is such that the vehicle controller comprises non-transitory computer readable storage media storing instructions that, when executed by one or more processors, cause the one or more processors to: receive sensor data from the one or more sensors of the first vehicle; receive secondary sensor data from the second vehicle; extract, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle relative to the first vehicle; a common object identified by the sensor data and the secondary sensor data; or an equivalent detected ground speed of a common object identified by the sensor data and the secondary sensor data. The instructions further cause the one or more processors to determine whether the authentication satisfies a trust threshold of the first vehicle.
Example 37 is a system as in Example 36, wherein: the sensor data comprises an image received from a camera of the first vehicle; the sensor data further comprises ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the first vehicle; the secondary sensor data comprises a secondary image received from a camera of the second vehicle; and the secondary sensor data further comprises secondary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the second vehicle.
Example 38 is a system as in any of Examples 36-37, wherein the instructions further cause the one or more processors to extract an authentication point from the camera image, wherein the authentication point comprises one or more of: a distance of the second vehicle relative to the first vehicle; an azimuth angle between the second vehicle and the first vehicle; or a vehicle identification of the second vehicle.
Example 39 is a system as in any of Examples 36-38, wherein the instructions further cause the one or more processors to verify the authentication point based on the ranging data, wherein verifying the authentication point comprises: extracting a ranging authentication point from the ranging data, wherein the ranging authentication point comprises one or more of: the distance of the second vehicle relative to the first vehicle; or the azimuth angle between the second vehicle and the first vehicle. The instructions further cause the one or more processors to: cross-check the authentication point extracted from the image with a corresponding ranging authentication point; and verify the authentication point is equal to the corresponding ranging authentication point within an error boundary.
Example 40 is a system as in any of Examples 36-39, wherein the instructions further cause the one or more processors to exchange a heading angle of the first vehicle or the second vehicle to compensate for an angle discrepancy between the sensor data and the secondary sensor data.
Example 41 is non-transitory computer readable storage media storing instructions that, when executed by one or more processors, cause the one or more processors to: receive sensor data from one or more sensors of a first vehicle; receive secondary sensor data from a second vehicle; extract, based on the sensor data and the secondary sensor data, an authentication comprising one or more of: a proximity of the second vehicle relative to the first vehicle; a common object identified by the sensor data and the secondary sensor data; or an equivalent detected ground speed of a common object identified by the sensor data and the secondary sensor data. The instructions further cause the one or more processors to determine whether the authentication satisfies a trust threshold of the first vehicle.
Example 42 is non-transitory computer readable storage media as in Example 41, wherein: the sensor data comprises an image received from a camera of the first vehicle; the sensor data further comprises ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the first vehicle; the secondary sensor data comprises a secondary image received from a camera of the second vehicle; and the secondary sensor data further comprises secondary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the second vehicle.
Example 43 is non-transitory computer readable storage media as in any of Examples 41-42, wherein the instructions further cause the one or more processors to extract an authentication point from the camera image, wherein the authentication point comprises one or more of: a distance of the second vehicle relative to the first vehicle; an azimuth angle between the second vehicle and the first vehicle; or a vehicle identification of the second vehicle.
Example 44 is non-transitory computer readable storage media as in any of Examples 41-43, wherein the instructions further cause the one or more processors to verify the authentication point based on the ranging data, wherein verifying the authentication point comprises: extracting a ranging authentication point from the ranging data, wherein the ranging authentication point comprises one or more of: the distance of the second vehicle relative to the first vehicle; or the azimuth angle between the second vehicle and the first vehicle. The instructions further cause the one or more processors to: cross-check the authentication point extracted from the image with a corresponding ranging authentication point; and verify the authentication point is equal to the corresponding ranging authentication point within an error boundary.
Example 45 is a system or device that includes means for implementing a method or realizing a system or apparatus as in any of Examples 1-44.
The present disclosure may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative, and not restrictive. The scope of the disclosure is, therefore, indicated by the appended claims, rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (20)

What is claimed is:
1. A method comprising:
receiving primary sensor data from a first vehicle and secondary sensor data from a second vehicle;
extracting, based on one or more of the primary sensor data or the secondary sensor data, authentication data for use in authenticating vehicle-to-vehicle communications between the first vehicle and the second vehicle, wherein the authentication data comprises:
a primary object map based on the primary sensor data, wherein the primary object map comprises one or more objects surrounding the first vehicle; and
a secondary object map based on the secondary sensor data, wherein the secondary object map comprises one or more objects surrounding the second vehicle;
aligning the primary object map and the secondary object map to generate an overlaid object map; and
determining whether any of the one or more objects surrounding the first vehicle matches any of the one or more objects surrounding the second vehicle based on the overlaid object map.
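To make the aligning and matching steps of claim 1 concrete, a brief sketch follows: one object map is rotated and translated into the other's frame, and objects that coincide within a tolerance are treated as matches. The function names and the 1.5 m tolerance are illustrative assumptions, not part of the claim.

```python
# Sketch of the align-and-match steps under assumed names and tolerances.
import math

def align_map(objects, rotation_deg: float, translation):
    """Rotate, then translate, one object map into the other map's frame."""
    th = math.radians(rotation_deg)
    c, s = math.cos(th), math.sin(th)
    tx, ty = translation
    return [(x * c - y * s + tx, x * s + y * c + ty) for x, y in objects]

def common_objects(primary_map, secondary_map, tol_m: float = 1.5):
    """Overlay the maps and pair up objects that coincide within tol_m."""
    matches = []
    for px, py in primary_map:
        for qx, qy in secondary_map:
            if math.hypot(px - qx, py - qy) <= tol_m:
                matches.append(((px, py), (qx, qy)))
                break
    return matches

# Usage under the same assumptions: align the secondary map using an
# exchanged heading difference and relative position, then overlay.
# overlaid = align_map(secondary_map, heading_delta_deg, relative_position)
# matched = common_objects(primary_map, overlaid)
```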
2. The method of claim 1, wherein the authentication data further comprises a proximity value of the second vehicle to the first vehicle.
3. The method of claim 1, wherein:
the primary sensor data comprises a primary image received from a camera of the first vehicle;
the primary sensor data further comprises primary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the first vehicle;
the secondary sensor data comprises a secondary image received from a camera of the second vehicle; and
the secondary sensor data further comprises secondary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the second vehicle.
4. The method of claim 3, further comprising extracting an authentication point from the primary image received from the camera of the first vehicle, wherein the authentication point comprises one or more of:
a distance of the second vehicle relative to the first vehicle;
an azimuth angle between the second vehicle and the first vehicle; or
a vehicle identification of the second vehicle.
5. The method of claim 4, further comprising verifying the authentication point based on the primary ranging data, wherein verifying the authentication point comprises:
extracting a ranging authentication point from the primary ranging data, wherein the ranging authentication point comprises one or more of:
the distance of the second vehicle relative to the first vehicle; or
the azimuth angle between the second vehicle and the first vehicle;
cross-checking the authentication point extracted from the primary image with a corresponding ranging authentication point; and
verifying the authentication point is equal to the corresponding ranging authentication point within an error boundary.
6. The method of claim 1, further comprising exchanging a heading angle of the first vehicle or the second vehicle to compensate for an angle discrepancy between the primary sensor data and the secondary sensor data.
7. The method of claim 1, wherein:
aligning the primary object map and the secondary object map comprises rotating and/or translating one of the primary object map or the secondary object map; and
determining whether any of the one or more objects surrounding the first vehicle matches any of the one or more objects surrounding the second vehicle comprises determining whether there are one or more common objects identified in the primary object map and the secondary object map based on the overlaid object map.
8. The method of claim 7, further comprising:
determining whether the one or more common objects satisfies a trust threshold for an application of the first vehicle;
authenticating communication between the application of the first vehicle and the second vehicle if the one or more common objects satisfies the trust threshold for the application; and
denying communication between the application of the first vehicle and the second vehicle if the one or more common objects does not satisfy the trust threshold for the application.
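One plausible reading of the per-application gating in claim 8, sketched in Python; the application names and threshold counts below are purely illustrative.

```python
# Hypothetical per-application trust thresholds (illustrative only).
APP_TRUST_THRESHOLDS = {
    "cooperative_cruise": 3,   # safety-critical: demand several common objects
    "traffic_advisory": 1,     # informational: one common object suffices
}

def gate_application(app: str, num_common_objects: int) -> bool:
    """Authenticate the V2V link for `app` only when the number of common
    objects meets that application's trust threshold; deny otherwise."""
    return num_common_objects >= APP_TRUST_THRESHOLDS.get(app, float("inf"))
```

Under this reading, a safety-critical application can demand more corroborating common objects than an informational one over the same V2V link.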
9. The method of claim 7, further comprising:
determining whether the one or more common objects identified in the primary object map and the secondary object map satisfies a global trust threshold for the first vehicle; and
authenticating communication between the first vehicle and the second vehicle if the one or more common objects satisfies the global trust threshold.
10. The method of claim 1, further comprising determining whether the authentication data satisfies a trust threshold of the first vehicle by verifying one or more of:
an identity of the second vehicle;
whether the second vehicle is within a close proximity to the first vehicle; or
a presence of one or more common objects sensed by the primary sensor data and the secondary sensor data.
11. The method of claim 10, further comprising permitting communication between the first vehicle and the second vehicle if the authentication data satisfies the trust threshold of the first vehicle.
12. A system comprising:
a first vehicle comprising one or more sensors providing primary sensor data; and
a vehicle controller in communication with the one or more sensors and configured to receive secondary sensor data from a second vehicle;
wherein the vehicle controller comprises one or more processors configurable to execute instructions stored in non-transitory computer readable storage media, the instructions comprising:
receiving primary sensor data from the one or more sensors of the first vehicle;
receiving secondary sensor data from the second vehicle;
extracting, based on one or more of the primary sensor data or the secondary sensor data, authentication data for use in authenticating vehicle-to-vehicle communications between the first vehicle and the second vehicle, wherein the authentication data comprises:
a primary object map based on the primary sensor data, wherein the primary object map comprises one or more objects surrounding the first vehicle; and
a secondary object map based on the secondary sensor data, wherein the secondary object map comprises one or more objects surrounding the second vehicle;
aligning the primary object map and the secondary object map to generate an overlaid map; and
determining whether any of the one or more objects surrounding the first vehicle matches any of the one or more objects surrounding the second vehicle based on the overlaid map.
13. The system of claim 12, wherein:
the primary sensor data comprises a primary image received from a camera of the first vehicle;
the primary sensor data further comprises primary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the first vehicle;
the secondary sensor data comprises a secondary image received from a camera of the second vehicle; and
the secondary sensor data further comprises secondary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the second vehicle.
14. The system of claim 13, wherein the instructions further comprise extracting an authentication point from the primary image received from the camera of the first vehicle, wherein the authentication point comprises one or more of:
a distance of the second vehicle relative to the first vehicle;
an azimuth angle between the second vehicle and the first vehicle; or
a vehicle identification of the second vehicle.
15. The system of claim 14, wherein the instructions further comprise verifying the authentication point based on the primary ranging data, wherein verifying the authentication point comprises:
extracting a ranging authentication point from the primary ranging data, wherein the ranging authentication point comprises one or more of:
the distance of the second vehicle relative to the first vehicle; or
the azimuth angle between the second vehicle and the first vehicle;
cross-checking the authentication point extracted from the primary image with a corresponding ranging authentication point; and
verifying the authentication point is equal to the corresponding ranging authentication point within an error boundary.
16. The system of claim 12, wherein the instructions further comprise exchanging a heading angle of the first vehicle or the second vehicle to compensate for an angle discrepancy between the primary sensor data and the secondary sensor data.
17. Non-transitory computer readable storage media storing instructions for execution by one or more processors, the instructions comprising:
receiving primary sensor data from one or more sensors of a first vehicle;
receiving secondary sensor data from a second vehicle;
extracting, based on one or more of the primary sensor data or the secondary sensor data, authentication data for use in authenticating vehicle-to-vehicle communications between the first vehicle and the second vehicle, wherein the authentication data comprises:
a primary object map based on the primary sensor data, wherein the primary object map comprises one or more objects surrounding the first vehicle; and
a secondary object map based on the secondary sensor data, wherein the secondary object map comprises one or more objects surrounding the second vehicle;
aligning the primary object map and the secondary object map to generate an overlaid map; and
determining whether any of the one or more objects surrounding the first vehicle matches any of the one or more objects surrounding the second vehicle based on the overlaid map.
18. The non-transitory computer readable storage media of claim 17, wherein:
the primary sensor data comprises a primary image received from a camera of the first vehicle;
the primary sensor data further comprises primary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the first vehicle;
the secondary sensor data comprises a secondary image received from a camera of the second vehicle; and
the secondary sensor data further comprises secondary ranging data received from one or more of a RADAR sensor, a LIDAR sensor, a SONAR sensor, or an ultrasound sensor of the second vehicle.
19. The non-transitory computer readable storage media of claim 18, wherein the instructions further comprise extracting an authentication point from the primary image received from the camera of the first vehicle, wherein the authentication point comprises one or more of:
a distance of the second vehicle relative to the first vehicle;
an azimuth angle between the second vehicle and the first vehicle; or
a vehicle identification of the second vehicle.
20. The non-transitory computer readable storage media of claim 19, wherein the instructions further comprise verifying the authentication point based on the primary ranging data, wherein verifying the authentication point comprises:
extracting a ranging authentication point from the primary ranging data, wherein the ranging authentication point comprises one or more of:
the distance of the second vehicle relative to the first vehicle; or
the azimuth angle between the second vehicle and the first vehicle;
cross-checking the authentication point extracted from the primary image with a corresponding ranging authentication point; and
verifying the authentication point is equal to the corresponding ranging authentication point within an error boundary.
US16/040,013 2017-08-25 2018-07-19 Authentication of vehicle-to-vehicle communications Active 2039-02-08 US10880293B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US16/040,013 US10880293B2 (en) 2017-08-25 2018-07-19 Authentication of vehicle-to-vehicle communications
CN201810952136.0A CN109429197B (en) 2017-08-25 2018-08-21 Method and system for authentication of vehicle-to-vehicle communication
DE102018120655.0A DE102018120655A1 (en) 2017-08-25 2018-08-23 AUTHENTICATION OF VEHICLE TO VEHICLE COMMUNICATIONS
US16/951,727 US11582222B2 (en) 2017-08-25 2020-11-18 Authentication of vehicle-to-vehicle communications

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762550312P 2017-08-25 2017-08-25
US16/040,013 US10880293B2 (en) 2017-08-25 2018-07-19 Authentication of vehicle-to-vehicle communications

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/951,727 Continuation US11582222B2 (en) 2017-08-25 2020-11-18 Authentication of vehicle-to-vehicle communications

Publications (2)

Publication Number Publication Date
US20190068582A1 US20190068582A1 (en) 2019-02-28
US10880293B2 2020-12-29

Family

ID=65435712

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/040,013 Active 2039-02-08 US10880293B2 (en) 2017-08-25 2018-07-19 Authentication of vehicle-to-vehicle communications
US16/951,727 Active 2038-12-01 US11582222B2 (en) 2017-08-25 2020-11-18 Authentication of vehicle-to-vehicle communications

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/951,727 Active 2038-12-01 US11582222B2 (en) 2017-08-25 2020-11-18 Authentication of vehicle-to-vehicle communications

Country Status (2)

Country Link
US (2) US10880293B2 (en)
CN (1) CN109429197B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11983661B2 (en) 2021-12-20 2024-05-14 Ford Global Technologies, Llc Device authentication and trust in multi-modal goods delivery
US12119883B2 (en) 2018-10-10 2024-10-15 Glydways Inc. Variable bandwidth free-space optical communication system for autonomous or semi-autonomous passenger vehicles
US12286099B2 (en) 2020-06-19 2025-04-29 Glydways, Inc. Braking and signaling schemes for autonomous vehicle system

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170242443A1 (en) 2015-11-02 2017-08-24 Peloton Technology, Inc. Gap measurement for vehicle convoying
US10520581B2 (en) 2011-07-06 2019-12-31 Peloton Technology, Inc. Sensor fusion for autonomous or partially autonomous vehicle control
US9582006B2 (en) 2011-07-06 2017-02-28 Peloton Technology, Inc. Systems and methods for semi-autonomous convoying of vehicles
US11214143B2 (en) * 2017-05-02 2022-01-04 Motional Ad Llc Visually obstructed object detection for automated vehicle using V2V/V2I communications
US10558217B2 (en) * 2017-08-28 2020-02-11 GM Global Technology Operations LLC Method and apparatus for monitoring of an autonomous vehicle
US11079757B1 (en) * 2017-11-20 2021-08-03 Amazon Technologies, Inc. Unmanned aerial vehicles to survey locations and collect data about different signal sources
KR102463718B1 (en) * 2017-12-14 2022-11-07 현대자동차주식회사 System and Method for detecting faked location information of a vehicle
KR102540025B1 (en) * 2018-01-25 2023-06-02 엘지전자 주식회사 Vehicle information inquiry method
WO2019198837A1 * 2018-04-09 2019-10-17 엘지전자(주) V2X communication device and OBE misbehavior detection method thereof
US12204038B2 (en) * 2018-09-25 2025-01-21 Toyota Jidosha Kabushiki Kaisha Vehicle-to-everything (V2X) full-duplex localization assistance for V2X receivers
US11038895B2 (en) * 2018-09-28 2021-06-15 Intel Corporation Trust management mechanisms
US11676427B2 (en) * 2019-02-12 2023-06-13 Toyota Jidosha Kabushiki Kaisha Vehicle component modification based on vehicle-to-everything communications
US11570625B2 (en) * 2019-03-25 2023-01-31 Micron Technology, Inc. Secure vehicle communications architecture for improved blind spot and driving distance detection
JP2020167607A (en) * 2019-03-29 2020-10-08 マツダ株式会社 Automobile arithmetic system and reception data processing method
CN110281916B (en) * 2019-05-10 2023-06-20 阿波罗智联(北京)科技有限公司 Vehicle control method, device and storage medium
SE543631C2 (en) * 2019-06-20 2021-04-27 Scania Cv Ab Method and control arrangement for associating received information to a source vehicle
US10932135B2 (en) * 2019-06-28 2021-02-23 Toyota Jidosha Kabushiki Kaisha Context system for providing cyber security for connected vehicles
DE102019213316A1 (en) * 2019-09-03 2021-03-04 Robert Bosch Gmbh Method for generating a reference representation
US12252151B2 (en) 2019-10-21 2025-03-18 Volvo Truck Corporation Method for determining reliability of received data
US11407423B2 (en) * 2019-12-26 2022-08-09 Intel Corporation Ego actions in response to misbehaving vehicle identification
US11438741B2 (en) 2020-01-27 2022-09-06 Honda Motor Co., Ltd. Coordinated transportation system and methods thereof
US11288762B2 (en) 2020-03-26 2022-03-29 Toyota Motor North America, Inc. Vacancy processing
US20210300334A1 (en) 2020-03-26 2021-09-30 Toyota Motor North America, Inc. Transport relocation
US11132899B1 (en) 2020-03-26 2021-09-28 Toyota Motor North America, Inc. Acquiring vacant parking spot
EP3904982A1 (en) 2020-04-29 2021-11-03 ABB Schweiz AG Access control within a modular automation system
DE102020212565A1 (en) * 2020-10-06 2022-04-07 Volkswagen Aktiengesellschaft Vehicle, device, computer program and method for implementation in a vehicle
CN112398822B (en) * 2020-10-29 2022-02-01 安徽江淮汽车集团股份有限公司 Internet of vehicles Sybil attack detection method, device, equipment and storage medium
EP4030339A4 (en) * 2020-11-16 2022-11-02 Huawei Technologies Co., Ltd. CAMERA IDENTIFICATION METHOD, CAMERA AUTHENTICATION METHOD AND SYSTEM, AND TERMINAL
US20220169279A1 (en) * 2020-12-02 2022-06-02 Micron Technology, Inc. Sunlight processing for autonomous vehicle control
US11854269B2 (en) 2021-06-04 2023-12-26 Waymo Llc Autonomous vehicle sensor security, authentication and safety
US20230186641A1 (en) * 2021-12-10 2023-06-15 Qualcomm Incorporated Image-coupled sensor sharing for cloud-based driving assistance
KR20240003977A (en) * 2022-07-04 2024-01-11 현대자동차주식회사 Method for verifying integrity of application in vehicle controller
CN115273530A (en) * 2022-07-11 2022-11-01 上海交通大学 Parking lot positioning and sensing system based on cooperative sensing
CN115071734B (en) * 2022-07-20 2025-04-25 阿波罗智能技术(北京)有限公司 Access control method, device, electronic device and autonomous driving vehicle
KR20240092690A (en) * 2022-12-14 2024-06-24 현대자동차주식회사 Apparatus for controlling autonomous driving and method thereof

Citations (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010049791A1 (en) * 2000-04-18 2001-12-06 Alain Gascher Security process of a communication for passive entry and start system
US20030080878A1 (en) * 2001-10-30 2003-05-01 Kirmuss Charles Bruno Event-based vehicle image capture
US20060059229A1 (en) * 2000-01-10 2006-03-16 David Bain Inter vehicle communication system
US20080297330A1 (en) * 2007-06-01 2008-12-04 Jeon Byong-Hoon Vehicle emergency preventive terminal device and internet system using facial recognition technology
JP2011204151A (en) 2010-03-26 2011-10-13 Daihatsu Motor Co Ltd Inter-vehicle communication method and inter-vehicle communication device
US20120105637A1 (en) * 2010-11-03 2012-05-03 Broadcom Corporation Multi-Level Video Processing Within A Vehicular Communication Network
US20130344859A1 (en) * 2012-06-21 2013-12-26 Cellepathy Ltd. Device context determination in transportation and other scenarios
US20140136414A1 (en) * 2006-03-17 2014-05-15 Raj Abhyanker Autonomous neighborhood vehicle commerce network and community
US20140281534A1 (en) * 2013-03-15 2014-09-18 Waveconnex, Inc. EHF Secure Communication Device
US20140302774A1 (en) * 2013-04-04 2014-10-09 General Motors Llc Methods systems and apparatus for sharing information among a group of vehicles
US20140306834A1 (en) * 2012-03-14 2014-10-16 Flextronics Ap, Llc Vehicle to vehicle safety and traffic communications
US20140308902A1 (en) * 2012-03-14 2014-10-16 Flextronics Ap, Llc Vehicle to vehicle social and business communications
US20150025708A1 (en) * 2008-09-11 2015-01-22 Deere & Company Leader-Follower Fully-Autonomous Vehicle with Operator on Side
US20150052352A1 (en) 2013-06-23 2015-02-19 Shlomi Dolev Certificating vehicle public key with vehicle attributes
US20150141043A1 (en) * 2013-08-23 2015-05-21 Cellepathy Ltd. Corrective navigation instructions
US20150168174A1 (en) * 2012-06-21 2015-06-18 Cellepathy Ltd. Navigation instructions
US9165198B2 (en) 2012-02-25 2015-10-20 Audi Ag Method for identifying a vehicle during vehicle-to-vehicle communication
US20150312404A1 (en) * 2012-06-21 2015-10-29 Cellepathy Ltd. Device context determination
US20160021238A1 (en) * 2010-09-21 2016-01-21 Cellepathy Ltd. Restricting mobile device usage
US20160099927A1 (en) * 2014-10-01 2016-04-07 Continental Intelligent Transportation Systems, LLC Hacker security solution for package transfer to and from a vehicle
US20160205238A1 (en) * 2013-08-23 2016-07-14 Dan Abramson Mobile device context aware determinations
US20160216130A1 (en) * 2012-06-21 2016-07-28 Cellepathy Ltd. Enhanced navigation instruction
US20160275801A1 (en) * 2013-12-19 2016-09-22 USA as Represented by the Administrator of the National Aeronautics & Space Administration (NASA) Unmanned Aerial Systems Traffic Management
US20160335897A1 (en) * 2015-05-15 2016-11-17 Hyundai America Technical Center, Inc Detecting misbehavior in vehicle-to-vehicle (v2v) comminications
US20170132477A1 (en) * 2015-11-10 2017-05-11 Ford Global Technologies, Llc Inter-Vehicle Authentication Using Visual Contextual Information
US20170153314A1 (en) * 2015-11-04 2017-06-01 Nxp B.V. Embedded communication authentication
US20170166168A1 * 2014-01-31 2017-06-15 Huf Hülsbeck & Fürst GmbH & Co. KG Method for Providing an Operating Signal
US20170180388A1 (en) * 2015-12-16 2017-06-22 At&T Intellectual Property I, L.P. System For Providing Layered Security
US20170274827A1 (en) * 2016-03-23 2017-09-28 GM Global Technology Operations LLC Rear vision system for a vehicle and method of using the same
US20170279957A1 (en) * 2013-08-23 2017-09-28 Cellepathy Inc. Transportation-related mobile device context inferences
US20180079284A1 (en) * 2016-09-21 2018-03-22 Lg Electronics Inc. Vehicle control device mounted on vehicle and method for controlling the vehicle
US20180090013A1 (en) * 2016-09-23 2018-03-29 Sharp Laboratories Of America, Inc. Unmanned aircraft and operation thereof
US20180102831A1 (en) * 2016-10-11 2018-04-12 T-Mobile, U.S.A., Inc. Uav for cellular communication
US20180124233A1 (en) * 2010-09-21 2018-05-03 Cellepathy Inc. Restricting mobile device usage
US20180126951A1 (en) * 2016-11-07 2018-05-10 Nio Usa, Inc. Method and system for authentication in autonomous vehicles
US20180202822A1 (en) * 2017-01-19 2018-07-19 Andrew DeLizio Managing autonomous vehicles
US20180227514A1 (en) * 2015-07-24 2018-08-09 Sony Semiconductor Solutions Corporation Image sensor and electronic apparatus
US20190061939A1 (en) * 2017-08-24 2019-02-28 Qualcomm Incorporated Managing Package Deliveries by Robotic Vehicles
US20190061686A1 (en) * 2016-02-26 2019-02-28 Huf Hülsbeck & Fürst Gmbh & Co. Kg Method for activating at least one safety function of a vehicle safety system
US20190073543A1 (en) * 2016-05-27 2019-03-07 Mitsui Kinzoku Act Corporation Image information comparison system
US10440536B2 (en) * 2017-05-19 2019-10-08 Waymo Llc Early boarding of passengers in autonomous vehicles
US20190384320A1 (en) * 2019-07-24 2019-12-19 Lg Electronics Inc. Autonomous driving control method in restricted area and autonomous driving system using the same
US20200012281A1 (en) * 2019-07-25 2020-01-09 Lg Electronics Inc. Vehicle of automatic driving system and the control method of the system

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070124078A1 (en) * 2005-11-25 2007-05-31 Garry Vinje Vehicle impact avoidance system
US20090177677A1 (en) * 2008-01-07 2009-07-09 Lubos Mikusiak Navigation device and method
US8773281B2 (en) * 2009-09-15 2014-07-08 Ohanes D. Ghazarian Intersection vehicle collision avoidance system
US20160189544A1 (en) * 2011-11-16 2016-06-30 Autoconnect Holdings Llc Method and system for vehicle data collection regarding traffic
EP2827622B1 (en) * 2013-07-15 2019-09-04 Harman Becker Automotive Systems GmbH Techniques of Establishing a Wireless Data Connection
US20160357187A1 (en) * 2015-06-05 2016-12-08 Arafat M.A. ANSARI Smart vehicle
US9836056B2 (en) * 2015-06-05 2017-12-05 Bao Tran Smart vehicle
DE102015214611A1 (en) * 2015-07-31 2017-02-02 Conti Temic Microelectronic Gmbh Method and device for displaying an environmental scene of a vehicle combination
DE102016002603A1 (en) * 2016-03-03 2017-09-07 Audi Ag Method for determining and providing a database containing environmental data relating to a predetermined environment
US10509407B2 (en) * 2016-07-01 2019-12-17 Samsung Electronics Co., Ltd. Apparatus and method for a vehicle platform
US9686646B1 (en) * 2016-09-29 2017-06-20 Cars.Com, Llc Integrated geospatial activity reporting
KR101979269B1 (en) * 2016-10-28 2019-05-16 엘지전자 주식회사 Autonomous Vehicle and operating method for the same
US10209718B2 (en) * 2017-03-14 2019-02-19 Starsky Robotics, Inc. Vehicle sensor system and method of use
US11195033B2 (en) * 2020-02-27 2021-12-07 Gm Cruise Holdings Llc Multi-modal, multi-technique vehicle signal detection

Also Published As

Publication number Publication date
US20190068582A1 (en) 2019-02-28
CN109429197B (en) 2025-04-25
US11582222B2 (en) 2023-02-14
US20210075780A1 (en) 2021-03-11
CN109429197A (en) 2019-03-05

Similar Documents

Publication Publication Date Title
US11582222B2 (en) Authentication of vehicle-to-vehicle communications
US9842263B2 (en) Inter-vehicle authentication using visual contextual information
US11418346B2 (en) System and method for recognition of biometric information in shared vehicle
US9769658B2 (en) Certificating vehicle public key with vehicle attributes
US11228438B2 (en) Security device for providing security function for image, camera device including the same, and system on chip for controlling the camera device
Lim et al. LIDAR: Lidar information based dynamic V2V authentication for roadside infrastructure-less vehicular networks
DE102018120655A1 (en) AUTHENTICATION OF VEHICLE TO VEHICLE COMMUNICATIONS
CN107392092B (en) A V2V-based perspective perception method for the road environment ahead of intelligent vehicles
EP3949266A1 (en) Cryptographically secure mechanism for remotely controlling an autonomous vehicle
US20230180011A1 (en) Secure vehicle communications architecture for improved blind spot and driving distance detection
Kamal et al. A comprehensive solution for securing connected and autonomous vehicles
CN113361348B (en) Safe traffic sign error surveying method and system based on V2X
CN115580867B (en) Vehicle service subscriber system, method and storage medium used in the system
US12254770B2 (en) System and method for detecting traffic pole verification for vehicles
CN113286055B (en) Safe vehicle driving method and system based on safe traffic signs
KR20200064439A (en) System and method of obstacle verification based on inter-vehicular communication
CN113395331A (en) Safety traffic sign error surveying method and system based on Internet of vehicles
Bubeníková et al. Security solutions of intelligent transportation system's applications with using VANET networks
CN113286272B (en) Vehicle safety driving method and system based on Internet of vehicles
Bubeníková et al. Secure solution of collision warning system integration with use of vehicular communications within intelligent transportation systems
Dolev et al. Optical puf for vehicles non-forwardable authentication
Tang et al. Cooperative Vehicle Identification for Safe Connected Autonomous Driving
Dolev et al. Peripheral Authentication for Parked Vehicles over Wireless Radio Communication
US20200235930A1 (en) Transportation vehicle transactional security authentication
CN119717859A (en) A method and system for protecting power inspection drones from fraudulent interference data

Legal Events

Date Code Title Description
AS Assignment

Owner name: FORD GLOBAL TECHNOLOGIES, LLC, MICHIGAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIM, YU SEUNG;OH, JINHYOUNG;REEL/FRAME:046403/0309

Effective date: 20170825

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4