
CN113366548A - System and method for vehicle identification - Google Patents

System and method for vehicle identification

Info

Publication number
CN113366548A
Authority
CN
China
Prior art keywords
vehicle
identification information
individual
vehicles
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880100552.6A
Other languages
Chinese (zh)
Inventor
易晓勇
任力伟
章江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Voyager Technology Co Ltd
Original Assignee
Beijing Voyager Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Voyager Technology Co Ltd filed Critical Beijing Voyager Technology Co Ltd
Publication of CN113366548A publication Critical patent/CN113366548A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G 1/0112 Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/12 Detection or prevention of fraud
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/60 Context-dependent security
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

Systems and methods for vehicle identification are described herein. A set of vehicle identification information may be obtained from a set of autonomous vehicles. The individual vehicle identification information may convey identification information of one or more vehicles and a location of the one or more vehicles. Vehicle context information for an individual vehicle may be determined from the set of vehicle identification information. The vehicle context information for individual vehicles may describe the context of those individual vehicles. The context may include one or a combination of travel speed, travel direction, trajectory, or identity profile.

Description

System and method for vehicle identification
Technical Field
The present invention relates generally to vehicle identification.
Background
Some technologies, such as Automated License Plate Reader (ALPR) systems, have been used to automatically capture data such as the license plate number, location, date, and time of a vehicle entering the field of view, and/or a photograph of the vehicle. This data may be uploaded to a central repository for use in connection with many applications. One application may include law enforcement use to ascertain past locations of vehicles, determine whether a particular vehicle was at a crime scene, or discover driving patterns that can be mined for further criminal activity. Another application may incorporate a "hot list" of identification information for stolen vehicles. Law enforcement can load the hot list into the ALPR system and actively search for stolen vehicles and vehicles associated with criminal activity.
Disclosure of Invention
One or more implementations of the systems and methods described herein involve vehicle identification using autonomous vehicles. An autonomous vehicle may be equipped with a set of sensors configured to generate output signals conveying information about the environment surrounding the autonomous vehicle. For example, the sensors may include an image sensor configured to generate output signals conveying image information defining images of the surrounding environment. These images can be used to identify vehicles present in the environment and their locations. The autonomous vehicle may be part of a fleet of autonomous vehicles, each equipped with such sensors. The sensor output from the fleet, and/or information derived from that output, may enable crowd-sourced vehicle identification. For example, information derived from different autonomous vehicles may be compared to determine one or a combination of a vehicle identity profile, a travel speed, a travel direction, or a trajectory.
One aspect of the present disclosure is directed to a method for vehicle identification. The method may include obtaining a set of vehicle identification information from a set of autonomous vehicles, individual vehicle identification information being obtained from an individual autonomous vehicle and conveying identification information of one or more vehicles and a location of the one or more vehicles; and determining, from the set of vehicle identification information, vehicle context information for an individual vehicle of the one or more vehicles, the vehicle context information for the individual vehicle describing a context of the individual vehicle, the context including one or a combination of a travel speed, a travel direction, an identity profile, or a trajectory.
Another aspect of the invention relates to a system for vehicle identification. The system may include one or more processors and memory storing instructions. The instructions, when executed by the one or more processors, may cause the system to perform operations comprising: obtaining a set of vehicle identification information from a set of autonomous vehicles, individual vehicle identification information obtained from an individual autonomous vehicle and conveying identification information of one or more vehicles and a location of the one or more vehicles; and determining vehicle context information for a single vehicle of the one or more vehicles from the set of vehicle identification information, the vehicle context information for the single vehicle describing a context of the single vehicle, the context including one or a combination of a speed of travel, a direction of travel, an identity profile, or a trajectory.
In some embodiments, the identification information includes one or a combination of a license plate number, color, make, model number, or unique indicia.
In some embodiments, the individual vehicle identification information may include the identification information of the one or more vehicles.
In some embodiments, the identification information of the one or more vehicles may be derived from the vehicle identification information. By way of non-limiting illustration, the individual vehicle identification information may include one or a combination of image information or video information. The identification information of the one or more vehicles may be derived from the image information and/or video information by one or more image and/or video processing techniques.
In some embodiments, determining vehicle context information may be based on comparing individual identification information and locations of individual vehicles with other identification information and locations of those individual vehicles.
In some embodiments, comparing individual ones of the identification information of individual vehicles to other ones of the identification information of the individual vehicles may facilitate a match between individual ones of the identification information to determine that multiple ones of the identification information correspond to the same vehicle.
In some embodiments, the identity profile of an individual vehicle may represent the identity of the individual vehicle as a whole. The identity profile may be determined by combining multiple identities determined to correspond to the same vehicle.
In some embodiments, the trajectory may include a path followed by the vehicle.
In some embodiments, the system may also perform tracking of the vehicle based on the set of vehicle identification information and/or the vehicle context information.
These and other features of the disclosed systems, methods and non-transitory computer-readable media, as well as methods of operation and function of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Brief description of the drawings
Preferred and non-limiting embodiments of the present invention may be more readily understood by reference to the accompanying drawings, in which:
FIG. 1 illustrates an example environment for vehicle identification, according to various embodiments of the invention.
FIG. 2 illustrates an example flow diagram of vehicle identification in accordance with various embodiments of the invention.
FIG. 3 illustrates a block diagram of an exemplary computer system in which any of the embodiments described herein may be implemented.
Detailed Description
Specific, non-limiting embodiments of the present invention will now be described with reference to the accompanying drawings. It should be understood that particular features and aspects of any embodiment disclosed herein may be used and/or combined with particular features and aspects of any other embodiment disclosed herein. It should also be understood that such embodiments are by way of example only and are merely illustrative of a small number of embodiments within the scope of the present invention. Various changes and modifications apparent to those skilled in the art to which the invention pertains are deemed to be within the spirit, scope and concept of the invention as further defined in the appended claims.
The method disclosed herein improves the functionality of a computing system that identifies vehicles. One or more techniques presented herein may perform vehicle identification using a fleet of autonomous vehicles. Information collected from autonomous vehicles may improve vehicle identification because the autonomous vehicles are distributed throughout the environment and because the amount of information that can be retrieved from them provides a dense data set from which vehicles may be identified.
FIG. 1 illustrates an example system 100 for vehicle identification, in accordance with various embodiments. Example system 100 may include one or a combination of computing system 102, autonomous vehicle 116, or one or more other autonomous vehicles 122.
It should be noted that while some features and functionality of the systems and methods presented herein may be directed to autonomous vehicle 116, this is for illustrative purposes only and should not be considered limiting. For example, it should be understood that other autonomous vehicles included in one or more of the other autonomous vehicles 122 described herein may be configured the same as or similar to autonomous vehicle 116, and may include the same or similar components. Autonomous vehicle 116 and one or more other autonomous vehicles 122 may represent a group of autonomous vehicles that may be part of a fleet of autonomous vehicles.
Autonomous vehicle 116 may include one or more processors and memory (e.g., permanent memory, temporary memory). The one or more processors may be configured to perform various operations by interpreting machine-readable instructions stored in the memory. Autonomous vehicle 116 may include other computing resources. Autonomous vehicle 116 may access other computing resources or other entities participating in system 100 (e.g., via one or more connections, via one or more networks 110).
The autonomous vehicle 116 may include one or a combination of an identification component 118 or a sensor group 120. Autonomous vehicle 116 may include other components.
The sensor group 120 may include one or more sensors configured to generate output signals conveying vehicle identification information or other information. The vehicle identification information may convey identification information, location, or a combination of identification information and location of one or more vehicles present in the environment surrounding autonomous vehicle 116. The identification information of the one or more vehicles may include one or a combination of a license plate number, one or more vehicle colors, a vehicle make (i.e., brand or manufacturer), a vehicle model, or a unique indicia. The license plate number may be comprised of one or a combination of alphanumeric characters or symbols. A unique indicia may refer to one or a combination of decals, text, damage, or other markings on the vehicle. In some embodiments, the identification information may be partial identification information. By way of non-limiting illustration, the partial identification information may include a portion of the license plate number (fewer than all of the alphanumeric characters or symbols making up the license plate number), some of the colors of a multi-colored vehicle, or make identification information without accompanying model information.
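As a minimal illustration of how such identification information might be represented in software (the field names below are hypothetical and not taken from the disclosure), a per-sighting record could carry the plate number, colors, make, model, unique indicia, sighting location, and a flag for partial information:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class VehicleIdentification:
    """Hypothetical per-sighting record of the identification information of one observed vehicle."""
    license_plate: Optional[str] = None                      # may be partial, e.g. only some characters
    colors: List[str] = field(default_factory=list)          # one or more observed colors
    make: Optional[str] = None                                # brand/manufacturer
    model: Optional[str] = None                               # may be absent when only the make is known
    unique_indicia: List[str] = field(default_factory=list)   # decals, text, damage, other markings
    location: Optional[Tuple[float, float]] = None            # (latitude, longitude) of the sighting
    timestamp: Optional[float] = None                         # UNIX time of the sighting
    is_partial: bool = True                                   # True until the record is judged complete
```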
The sensor group 120 may include an image sensor, a group of image sensors, a position sensor, a group of position sensors, or a combination of image sensors, position sensors, and other sensors. A set of sensors (e.g., a set of image sensors) may include one or more sensors (e.g., one or more image sensors).
The image sensor may be configured to generate output signals conveying image information and/or video information. The image information may define visual content in the form of one or more images. The video information may define visual content in the form of a sequence of images. An individual image may be defined by pixels and/or other information. The pixels may be characterized by one or a combination of pixel location, pixel color, or pixel transparency. The image sensor may include one or more of a charge-coupled device sensor, an active pixel sensor, a complementary metal-oxide-semiconductor sensor, an N-type metal-oxide-semiconductor sensor, and/or other image sensors. In some embodiments, identification information for one or more vehicles may be derived from the image information or video information through one or more image and/or video processing techniques. Such techniques may include one or a combination of computer vision, Speeded-Up Robust Features (SURF), Scale-Invariant Feature Transform (SIFT), Oriented FAST and Rotated BRIEF (ORB), deep learning (neural networks), or Optical Character Recognition (OCR).
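The disclosure does not tie this step to any particular library, but a sketch of deriving coarse identification features from a single frame with OpenCV's ORB detector and Tesseract OCR (via pytesseract) might look as follows; the plate bounding box is assumed to come from an upstream detector that is not shown:

```python
import cv2
import pytesseract

def extract_identification(image_path: str, plate_box=None) -> dict:
    """Derive coarse identification features from a single camera frame (sketch)."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # ORB keypoints/descriptors can later be matched across sightings of the same vehicle.
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    plate_text = None
    if plate_box is not None:
        x, y, w, h = plate_box                      # bounding box from an assumed upstream plate detector
        plate_crop = gray[y:y + h, x:x + w]
        # OCR the plate crop; psm 7 treats the crop as a single line of text.
        plate_text = pytesseract.image_to_string(plate_crop, config="--psm 7").strip()

    # Mean frame color as a crude stand-in for the dominant vehicle color.
    mean_bgr = image.mean(axis=(0, 1))

    return {"plate": plate_text, "descriptors": descriptors, "mean_bgr": mean_bgr.tolist()}
```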
In some implementations, the position sensor may be configured to generate output signals conveying position information. The position information derived from the output signals of the position sensors may define one or a combination of the position of autonomous vehicle 116, the altitude of autonomous vehicle 116, a timestamp of when the position was obtained, or other measurements. The location sensor may include one or a combination of a GPS, altimeter, or pressure sensor.
In some embodiments, the individual vehicle identification information may include one or a combination of image information, video information, or identification information derived from the image information or the video information. The identification component 118 can determine identification information of one or more vehicles from one or a combination of image information or video information through one or more of the image or video processing techniques described herein. In other words, the identification information may be included in the vehicle identification information, or the vehicle identification information may include one or a combination of image information or video information from which the identification information may be derived.
The identification component 118 can communicate the vehicle identification information to the computing system 102 via one or more networks 110. The one or more networks 110 may include the Internet or other networks. The computing system 102 may obtain other vehicle identification information from the other autonomous vehicles 122. Thus, the computing system 102 may obtain a set of vehicle identification information.
Computing system 102 may include one or more processors and memory (e.g., permanent memory, temporary memory). The one or more processors may be configured to perform various operations by interpreting machine-readable instructions stored in the memory. Computing system 102 may contain other computing resources. Computing system 102 may access other computing resources or other entities participating in system 100 (e.g., via one or more connections, via one or more networks 110).
Computing system 102 can include one or a combination of an identification component 104, a context component 106, or a tracking component 108. Although computing system 102 is shown in fig. 1 as a single entity, this is for ease of reference only and is not intended to be limiting. One or more components or one or more functions of computing system 102 described herein may be implemented in a single computing device or multiple computing devices. In some embodiments, one or more components or one or more functions of computing system 102 described herein may be implemented in one or more networks 110, one or more endpoints, one or more servers, or one or more clouds.
The identification component 104 can obtain a set of vehicle identification information from a set of autonomous vehicles. By way of non-limiting illustration, the identification component 104 can obtain vehicle identification information from individual ones of the autonomous vehicle 116 and the one or more other autonomous vehicles 122. The individual vehicle identification information obtained from an individual autonomous vehicle may include identification information, location, or a combination of identification information and location of one or more vehicles.
In some embodiments, the vehicle identification information obtained by the identification component 104 can include identification information of a vehicle. For example, the autonomous vehicle 116 may determine identification information of a vehicle via the identification component 118 and communicate the identification information to the computing system 102.
In some embodiments, the identification component 104 can determine identification information of a vehicle from the vehicle identification information. By way of non-limiting illustration, the identification component 104 can obtain vehicle identification information that includes one or a combination of image information or video information. The identification component 104 can then determine identification information of the vehicle using one or more of the image- or video-based techniques described herein.
The context component 106 can determine vehicle context information for individual vehicles of the one or more vehicles from the set of vehicle identification information. By way of non-limiting illustration, the context component 106 can determine vehicle context information for a given vehicle based on vehicle identification information obtained from the autonomous vehicle 116 and/or other vehicle identification information obtained from other autonomous vehicles.
In some embodiments, the vehicle context information for individual vehicles may describe the context of those individual vehicles. The context of individual vehicles may describe situations that are specific to those individual vehicles. By way of non-limiting illustration, the context may include one or a combination of travel speed, travel direction, trajectory, or identity profile.
In some embodiments, determining vehicle context information may be based on comparing individual identification information and locations of individual vehicles with other identification information and locations of those individual vehicles.
Comparing individual pieces of the identification information of individual vehicles with other pieces of the identification information of those individual vehicles facilitates determining that some of the identification information corresponds to the same vehicle. For example, based on the comparison, it may be determined that multiple pieces of identification information (obtained from the same or different autonomous vehicles) match. A match may convey a logical inference that the multiple pieces of identification information correspond to the same vehicle. In some implementations, a match can indicate that the pieces of identification information are the same or complementary.
In some embodiments, a match of being the same may indicate that the pieces of identification information are identical to within a threshold degree. By way of non-limiting illustration, for license plate identification information, two (or more) pieces of identification information may be determined to match if they depict the same sequence of alphanumeric characters or symbols, or if 90% of the alphanumeric characters or symbols in the depicted sequences are identical. By way of non-limiting illustration, for color identification information, two (or more) pieces of identification information may be determined to match if they depict colors within a threshold range of one another on a color scale. By way of non-limiting illustration, for vehicle make (brand) identification information, two (or more) pieces of identification information may be determined to match if they depict vehicles of the same make. By way of non-limiting illustration, for vehicle model identification information, two (or more) pieces of identification information may be determined to match if they depict vehicles of the same model. By way of non-limiting illustration, for unique indicia identification information, two (or more) pieces of identification information may be determined to match if they depict the same visually significant unique indicia at the same location on the vehicle.
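For the license-plate case, a "same within a threshold degree" comparison could look like the following sketch, where the 0.9 default mirrors the 90% figure used in the example above:

```python
def plates_match(plate_a: str, plate_b: str, threshold: float = 0.9) -> bool:
    """Return True when two plate readings agree on at least `threshold` of their characters."""
    a, b = plate_a.upper().strip(), plate_b.upper().strip()
    if not a or not b or len(a) != len(b):
        return a == b                                # readings of unequal length are not matched here
    same = sum(1 for ca, cb in zip(a, b) if ca == cb)
    return same / len(a) >= threshold

# plates_match("7ABC123", "7ABC123") -> True
# plates_match("7ABC123", "7XBC123") -> False at the default threshold (6/7 is roughly 0.86)
```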
In some embodiments, a match of being complementary may indicate that the pieces of identification information are partial identification information that, when combined, form complete identification information. The partial identification information may include depictions of different portions of the vehicle. The partial identification information may include depictions of one or more overlapping portions of the vehicle. By way of non-limiting illustration, for license plate identification information, one piece of identification information may depict one portion of a sequence of alphanumeric characters or symbols, while another piece may depict another portion of the sequence. These pieces of identification information may be combined to depict the sequence of alphanumeric characters or symbols defining the license plate number as a whole. By way of non-limiting illustration, for color identification information, one piece of identification information may depict one portion of a vehicle having a color, while another piece may depict another portion of the vehicle having the same color. These pieces may be combined to depict a vehicle of consistent color as a whole. By way of non-limiting illustration, for unique indicia identification information, one piece of identification information may depict a portion of the unique indicia and another piece may depict another portion of the unique indicia. These pieces may be combined to depict the unique indicia as a whole. In some implementations, combining identification information can be achieved by stitching images and/or videos together.
In some implementations, stitching may include operations such as one or a combination of feature point detection, image registration, alignment, or compositing. Feature point detection may be achieved by techniques such as SIFT and SURF. Image registration may include matching features across a set of images. Methods for image registration may include random sample consensus (RANSAC) or other techniques. Alignment may include transforming an image to match the viewpoint of another image. Compositing may include aligning the images in such a way that they appear as a single shot of the vehicle. In some embodiments, deep learning (neural network) based methods may also be used.
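A compressed sketch of such a stitching pipeline using OpenCV (ORB feature detection, brute-force descriptor matching for registration, a RANSAC-estimated homography for alignment, and a simple paste for composition); a production pipeline would add blending and error handling:

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b, min_matches: int = 10):
    """Warp img_b into img_a's frame using ORB features and a RANSAC homography (sketch)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Registration: brute-force Hamming matching of the binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None                                  # not enough overlap to register the pair

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Alignment and composition: warp img_b onto a canvas and paste img_a over its own region.
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, homography, (w * 2, h))
    canvas[0:h, 0:w] = img_a
    return canvas
```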
In some embodiments, the identity profile of an individual vehicle may represent the identity of the individual vehicle as a whole. The overall identity of the vehicle may include a representation of more than one piece of identification information obtained for a given vehicle. That is, the identity profile may be determined by combining multiple pieces of identification information (obtained from one or more autonomous vehicles) determined to correspond to the same vehicle. The identity profile may be produced by the stitching techniques described herein, which may yield one or more images depicting more than one piece of identification information. By way of non-limiting illustration, the identity profile of a vehicle may include an image or a sequence of images showing two or more of a license plate number, color, make, model, or unique indicia from which the vehicle may be identified.
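Continuing the hypothetical VehicleIdentification record from the earlier sketch, an identity profile could also be assembled at the record level by folding together the sightings judged to correspond to the same vehicle, keeping the most complete value seen for each field (a deliberately naive merge):

```python
def build_identity_profile(sightings):
    """Merge VehicleIdentification records judged to describe the same vehicle (naive sketch)."""
    profile = VehicleIdentification(is_partial=False)
    for s in sightings:
        # Prefer the longest plate reading seen so far (partial readings are shorter).
        if s.license_plate and len(s.license_plate) > len(profile.license_plate or ""):
            profile.license_plate = s.license_plate
        profile.make = profile.make or s.make
        profile.model = profile.model or s.model
        for color in s.colors:
            if color not in profile.colors:
                profile.colors.append(color)
        for marking in s.unique_indicia:
            if marking not in profile.unique_indicia:
                profile.unique_indicia.append(marking)
    return profile
```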
In some embodiments, determining one or a combination of a travel speed, a travel direction, or a trajectory of a vehicle may be based on comparing a single location associated with multiple identification information determined to correspond to the same vehicle.
In some embodiments, the travel speed may be represented as distance traveled per unit of time. Determining the travel speed of a vehicle may be accomplished by one or a combination of the following: comparing the locations associated with the vehicle identification information, determining the distance between the locations, determining the time span between the locations, or dividing the distance by the time span to obtain the travel speed (e.g., distance per unit time). By way of non-limiting illustration, a travel speed may specify that a vehicle is traveling at a speed of 110 kilometers per hour.
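Under the assumption that each sighting carries a (latitude, longitude) pair and a UNIX timestamp, a minimal sketch of the speed computation using the haversine great-circle distance is:

```python
import math

def travel_speed_kmh(loc1, t1, loc2, t2) -> float:
    """Speed in km/h between two (lat, lon) sightings recorded at UNIX times t1 and t2 (sketch)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*loc1, *loc2))
    # Haversine great-circle distance, Earth radius taken as 6371 km.
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_km = 2 * 6371.0 * math.asin(math.sqrt(a))
    hours = abs(t2 - t1) / 3600.0
    return distance_km / hours if hours > 0 else 0.0
```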
In some embodiments, the travel direction may be represented by a cardinal direction. The cardinal directions may include north, south, east, and west. Determining the travel direction may be accomplished by one or a combination of the following: comparing the locations associated with the vehicle identification information, determining which locations occur before others, determining that the vehicle is traveling from a first location to a second location and determining a pointing direction from the first location to the second location, or associating the pointing direction with a cardinal direction. By way of non-limiting illustration, a travel direction may specify that a vehicle is traveling north.
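A companion sketch for the direction step, reducing the initial great-circle bearing between two sightings to one of the four cardinal directions named above:

```python
import math

def cardinal_direction(loc1, loc2) -> str:
    """Map the initial bearing from loc1 to loc2 onto north/east/south/west (sketch)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*loc1, *loc2))
    y = math.sin(lon2 - lon1) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    # 315-45 degrees maps to north, 45-135 to east, 135-225 to south, 225-315 to west.
    return ["north", "east", "south", "west"][int(((bearing + 45.0) % 360.0) // 90.0)]
```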
In some embodiments, the trajectory of a vehicle may include a path followed by the vehicle. The trajectory may be specified in connection with one or a combination of named roads, highways, freeways, intersections, communities, or cities. In some implementations, the locations can be referenced against a map of the environment that contains information about one or a combination of named roads, highways, freeways, intersections, communities, or cities. By way of non-limiting illustration, a map service may be accessed and used to cross-reference the determined locations with information conveyed by a map. The map service may include a third-party mapping service. By way of non-limiting illustration, a trajectory may specify that a vehicle traveled two miles on a main street, turned left onto a first street, traveled six blocks, and so forth. In some embodiments, how the trajectory of the vehicle changes (or does not change) over time may reflect a driving pattern of the vehicle. The driving pattern may include a common trajectory that occurs more than once.
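As a toy illustration of the "common trajectory" notion, assuming trajectories are represented as ordered lists of named road segments (names below are made up for the example):

```python
from collections import Counter
from typing import List

def common_trajectories(observed: List[List[str]], min_count: int = 2) -> List[List[str]]:
    """Return road-segment sequences that recur at least min_count times (sketch)."""
    counts = Counter(tuple(trajectory) for trajectory in observed)
    return [list(sequence) for sequence, n in counts.items() if n >= min_count]

# common_trajectories([["Main St", "1st St"], ["Main St", "1st St"], ["Oak Ave"]])
# -> [["Main St", "1st St"]]
```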
The tracking component 108 can be configured to proactively find one or more vehicles using the set of vehicle identification information, the vehicle context information, or a combination of the two. By way of non-limiting illustration, the tracking component 108 may obtain one or more pieces of identification information for a vehicle, such as through user input from a user seeking to locate the vehicle. The tracking component 108 can monitor the set of vehicle identification information obtained from the fleet of autonomous vehicles and the vehicle context information determined from it. While monitoring, the tracking component 108 can look for a match between the identification information provided by the user and the identification information conveyed by the set of vehicle identification information. In response to finding a match, the tracking component 108 can provide the vehicle context information for the matched vehicle to the user via one or more user interfaces. By way of non-limiting illustration, the provided vehicle context information may include one or a combination of a travel speed, a travel direction, or a trajectory of the vehicle to enable the user to track the vehicle.
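A highly simplified sketch of this monitoring loop; the sighting objects are assumed to be the hypothetical VehicleIdentification records from the earlier sketch, and the function names are illustrative rather than the disclosure's API:

```python
def monitor_for_matches(sightings, watched_plates, notify):
    """Scan crowd-sourced sightings for watched plates and report their context (sketch)."""
    watched = {p.upper().strip() for p in watched_plates}
    for sighting in sightings:                      # e.g. a stream of VehicleIdentification records
        plate = (sighting.license_plate or "").upper().strip()
        if plate in watched:
            notify({
                "plate": plate,
                "location": sighting.location,
                "timestamp": sighting.timestamp,
            })

# Usage: monitor_for_matches(sighting_stream, ["7ABC123"], notify=print)
```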
In some embodiments, autonomous vehicles may exchange requests for vehicle identification, notifications of vehicle identification, or a combination of requests and notifications. For example, the autonomous vehicle 116 may send a request to identify a particular vehicle to the other autonomous vehicles 122 via the identification component 118. The autonomous vehicle 116 can likewise obtain requests to identify a particular vehicle from the other autonomous vehicles 122 via the identification component 118. The autonomous vehicle 116 can notify other vehicles of an identified vehicle (e.g., by sending vehicle identification information) via the identification component 118, and can obtain notifications about identified vehicles from other vehicles (e.g., by receiving vehicle identification information) via the identification component 118. The requests and notifications may be sent to the computing system 102, which in turn may forward them to autonomous vehicles in the vicinity of the GPS location of the requesting or notifying vehicle, or the requests and notifications may be sent directly from the autonomous vehicle 116 to one or more nearby autonomous vehicles.
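One possible shape for the request/notification exchange; the message fields, the 5 km radius, and the central-relay helper below are all assumptions made for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleMessage:
    kind: str                 # "identify_request" or "identified_notification"
    sender_id: str            # fleet ID of the autonomous vehicle sending the message
    sender_location: tuple    # (lat, lon) reported by the sender's position sensor
    payload: dict             # e.g. {"license_plate": "7ABC123"} or full identification info

def haversine_km(a, b) -> float:
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def forward_to_nearby(message: VehicleMessage, fleet_positions: dict, radius_km: float = 5.0):
    """Central relay: pick the fleet members within radius_km of the sender (sketch)."""
    return [vehicle_id for vehicle_id, position in fleet_positions.items()
            if vehicle_id != message.sender_id
            and haversine_km(message.sender_location, position) <= radius_km]
```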
FIG. 2 illustrates an example flow diagram 200 for vehicle identification in accordance with various embodiments of the invention. At block 202, a set of vehicle identification information may be obtained from a set of autonomous vehicles. Individual vehicle identification information may be obtained from a single autonomous vehicle. The individual vehicle identification information may convey identification information of one or more vehicles and a location of the one or more vehicles. At block 204, vehicle context information for a single vehicle may be determined from the set of vehicle identification information. The vehicle context information for individual vehicles may describe the context of those individual vehicles.
FIG. 3 is a block diagram that illustrates a computer system 300 upon which any of the embodiments described herein can be implemented. Computer system 300 includes a bus 302 or other communication mechanism for communicating information, and one or more hardware processors 304 coupled with bus 302 for processing information. For example, the one or more hardware processors 304 may be one or more general purpose microprocessors.
Computer system 300 also includes a main memory 306, such as a random access memory (RAM), cache memory, and/or other dynamic storage devices, coupled to bus 302 for storing information and instructions to be executed by the one or more processors 304. Main memory 306 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the one or more processors 304. Such instructions, when stored in storage media accessible to the one or more processors 304, render computer system 300 into a special-purpose machine that is customized to perform the operations specified in the instructions. Main memory 306 may include non-volatile media and/or volatile media. Non-volatile media may include, for example, optical or magnetic disks. Volatile media may include dynamic memory. Common forms of media may include, for example: a floppy disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a DRAM, a PROM, an EPROM, a FLASH-EPROM, an NVRAM, any other memory chip or cartridge, and networked versions thereof.
Computer system 300 may implement the techniques described herein using custom hardwired logic, one or more ASICs or FPGAs, firmware, and/or program logic that, in combination with the computer system, cause computer system 300 to be a special purpose machine or program the system to be a special purpose machine. According to one embodiment, the techniques herein are performed by computer system 300 in response to one or more processors 304 executing one or more sequences of one or more instructions contained in main memory 306. Such instructions may be read into main memory 306 from another storage medium, such as storage device 308. Execution of the sequences of instructions contained in main memory 306 causes processor 304 to perform the process steps described herein. For example, the processes/methods illustrated in FIG. 2 and described in connection with this figure may be implemented by computer program instructions stored in main memory 306. When the instructions are executed by the one or more processors 304, the processors may perform the steps as shown in FIG. 2 and described above. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
Computer system 300 also includes a communication interface 310 coupled to bus 302. Communication interface 310 provides a two-way data communication coupling to one or more network links that connect to one or more networks. As another example, communication interface 310 may be a Local Area Network (LAN) card to provide a data communication connection to a compatible LAN (or WAN component that communicates with a WAN). Wireless links may also be implemented.
The performance of certain operations may be distributed among the processors, residing not only in a single machine, but also deployed across many machines. In some example embodiments, the processor or processor-implemented engine may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other exemplary embodiments, the processor or processor-implemented engine may be distributed across many geographic locations.
Certain embodiments are described herein as comprising logic or a number of components. The components may constitute software components (e.g., code embodied on a machine-readable medium) or hardware components (e.g., tangible units capable of performing certain operations, which may be configured or arranged in some physical manner). As used herein, for convenience, components of computing system 102 and autonomous vehicle 116 may be described as performing or being configured to perform operations when such components may contain instructions that program or configure computing system 102 and autonomous vehicle 116 to perform the operations.
Although examples and features of the disclosed principles are described herein, modifications, adaptations, and other implementations can be made without departing from the spirit and scope of the disclosed embodiments. Furthermore, the words "comprising," "having," "containing," and "including," and other similar forms, are equivalent in meaning and are open-ended, in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that, as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise.
The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The detailed description is, therefore, not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.

Claims (20)

1. A system for vehicle identification, the system comprising:
one or more processors; and
a memory storing instructions that, when executed by the one or more processors, cause the system to perform:
obtaining a set of vehicle identification information from a set of autonomous vehicles, wherein individual vehicle identification information is obtained from an individual autonomous vehicle and conveys identification information of one or more vehicles and a location of the one or more vehicles; and
determining, from the set of vehicle identification information, vehicle context information for a single vehicle of the one or more vehicles, the vehicle context information for the single vehicle describing a context of the single vehicle, the context including one or a combination of a speed of travel, a direction of travel, or a trajectory.
2. The system of claim 1, wherein the identification information comprises one or a combination of a license plate number, a color, a brand, a model number, or a unique indicia.
3. The system of claim 1, wherein the individual vehicle identification information includes identification information of the one or more vehicles.
4. The system of claim 1, wherein the individual vehicle identification information includes one or a combination of image information or video information, and the identification information of the one or more vehicles is derived from the image information and/or the video information by one or more image and/or video processing techniques.
5. The system of claim 1, wherein determining the vehicle context information is based on comparing individual identification information and location of the individual vehicle with other identification information and location of the individual vehicle.
6. The system of claim 5, wherein the comparison of a single one of the identification information of the single vehicle to other ones of the identification information of the single vehicle facilitates a match between the single one of the identification information to determine that multiple ones of the identification information correspond to the same vehicle.
7. The system of claim 6, wherein the context further comprises an identity profile, the identity profile of a single vehicle collectively representing an identity of the single vehicle, and wherein the identity profile is determined by combining the plurality of the identification information determined to correspond to the same vehicle.
8. The system of claim 6, wherein determining one or a combination of the travel speed, the travel direction, or the trajectory is based on comparing locations of individual ones of the plurality of identification information determined to correspond to the same vehicle.
9. The system of claim 1, wherein the trajectory comprises a path followed by the single vehicle.
10. The system of claim 1, wherein the system further performs tracking of vehicles based on the set of vehicle identification information and/or the vehicle context information.
11. A method for vehicle identification, the method comprising:
obtaining a set of vehicle identification information from a set of autonomous vehicles, individual vehicle identification information obtained from an individual autonomous vehicle and conveying identification information of one or more vehicles and a location of the one or more vehicles; and
determining, from the set of vehicle identification information, vehicle context information for a single vehicle of the one or more vehicles, the vehicle context information for the single vehicle describing a context of the single vehicle, the context including one or a combination of a speed of travel, a direction of travel, or a trajectory.
12. The method of claim 11, wherein the identification information comprises one or a combination of a license plate number, a color, a brand, a model number, or a unique indicia.
13. The method of claim 11, wherein the individual vehicle identification information includes identification information of the one or more vehicles.
14. The method of claim 11, wherein the individual vehicle identification information includes one or a combination of image information or video information, and the identification information of the one or more vehicles is derived from the image information and/or the video information by one or more image and/or video processing techniques.
15. The method of claim 11, wherein determining the vehicle context information is based on comparing individual identification information and location of the individual vehicle with other identification information and location of the individual vehicle.
16. The method of claim 15, wherein the comparing of the individual ones of the identification information of the individual vehicles to the other ones of the identification information of the individual vehicles facilitates a match between the individual ones of the identification information to determine that multiple ones of the identification information correspond to the same vehicle.
17. The method of claim 16, wherein the context further comprises an identity profile, the identity profile of an individual vehicle as a whole representing an identity of the individual vehicle, and wherein the identity profile is determined by combining the plurality of the identification information determined to correspond to the same vehicle.
18. The method of claim 16, wherein determining one or a combination of the travel speed, the travel direction, or the trajectory is based on comparing individual ones of the plurality of identification information determined to correspond to the same vehicle.
19. The method of claim 11, wherein the trajectory comprises a path followed by the single vehicle.
20. The method of claim 11, further comprising: performing tracking of a vehicle based on the set of vehicle identification information and/or the vehicle context information.
CN201880100552.6A 2018-12-28 2018-12-28 System and method for vehicle identification Pending CN113366548A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2018/067988 WO2020139385A1 (en) 2018-12-28 2018-12-28 Systems and methods for vehicle identification

Publications (1)

Publication Number Publication Date
CN113366548A true CN113366548A (en) 2021-09-07

Family

ID=71127367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880100552.6A Pending CN113366548A (en) 2018-12-28 2018-12-28 System and method for vehicle identification

Country Status (2)

Country Link
CN (1) CN113366548A (en)
WO (1) WO2020139385A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9731713B2 (en) * 2014-09-10 2017-08-15 Volkswagen Ag Modifying autonomous vehicle driving by recognizing vehicle characteristics
CN107481526A (en) * 2017-09-07 2017-12-15 公安部第三研究所 System and method for lane change detection record and illegal lane change report control

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110095908A1 (en) * 2009-10-22 2011-04-28 Nadeem Tamer M Mobile sensing for road safety, traffic management, and road maintenance
CN103093516A (en) * 2012-12-25 2013-05-08 北京理工大学 Vehicle trajectory replaying system
CN105632175A (en) * 2016-01-08 2016-06-01 上海微锐智能科技有限公司 Vehicle behavior analysis method and system
US9952594B1 (en) * 2017-04-07 2018-04-24 TuSimple System and method for traffic data collection using unmanned aerial vehicles (UAVs)
CN107085946A (en) * 2017-06-13 2017-08-22 深圳市麦谷科技有限公司 A kind of vehicle positioning method and system based on picture recognition technology
CN208126647U (en) * 2018-01-02 2018-11-20 乌鲁木齐明华智能电子科技有限公司 vehicle management system based on cloud server

Also Published As

Publication number Publication date
WO2020139385A1 (en) 2020-07-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210907)