CN114688989A - Vehicle feature extraction method and system - Google Patents
- Publication number
- CN114688989A (application CN202011566214.7A)
- Authority
- CN
- China
- Prior art keywords
- information
- vehicle
- detection
- detection unit
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01G—WEIGHING
- G01G19/00—Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups
- G01G19/02—Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups for weighing wheeled or rolling bodies, e.g. vehicles
- G01G19/03—Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups for weighing wheeled or rolling bodies, e.g. vehicles for weighing during motion
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01P—MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
- G01P3/00—Measuring linear or angular speed; Measuring differences of linear or angular speeds
- G01P3/64—Devices characterised by the determination of the time taken to traverse a fixed distance
- G01P3/68—Devices characterised by the determination of the time taken to traverse a fixed distance using optical means, i.e. using infrared, visible, or ultraviolet light
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
Abstract
The disclosure relates to a vehicle feature extraction method and system. The method comprises: acquiring first detection information of a vehicle in the traveling process through a first detection unit; acquiring second detection information of the vehicle in the traveling process through a second detection unit; determining three-dimensional contour information of the vehicle according to the first detection information and the second detection information; and acquiring characteristic information of the vehicle based on the three-dimensional contour information. By combining the first detection information and the second detection information, multiple items of characteristic information of the vehicle can be acquired simultaneously, and the accuracy of the characteristic information is improved.
Description
Technical Field
The present disclosure relates generally to the field of intelligent traffic management. More particularly, the present disclosure relates to a method and a system for extracting vehicle features.
Background
This section is intended to provide a background or context to the embodiments of the disclosure recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Thus, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
Intelligent traffic management takes key elements such as people, vehicles and roads as its basis and uses technologies such as computing, information, data communication, sensing and electronic control to automatically identify and intelligently analyze key traffic management targets and automatically detect illegal behavior, thereby improving traffic efficiency and keeping travel orderly. For example, such a system can automatically collect the length, width, height, speed and weight of a vehicle through road-mounted weighing sensors and laser probes, and has therefore become a modern tool for managing transportation intelligently. In vehicle over-limit detection, the total mass limit of a vehicle is related not only to the number of axles but also to other characteristics of the vehicle. However, no effective solution has yet been proposed that can identify multiple features of a vehicle simultaneously.
Disclosure of Invention
In order to solve one or more of the above technical problems, the present disclosure provides a vehicle feature extraction method and system. By combining the first detection information and the second detection information, the embodiments of the disclosure can not only acquire multiple items of characteristic information of the vehicle simultaneously but also improve the accuracy of that information. In view of this, the present disclosure provides corresponding solutions in the following aspects.
In a first aspect, the present disclosure provides a method for extracting vehicle features, comprising: acquiring first detection information of a vehicle in a traveling process through a first detection unit; acquiring second detection information of the vehicle in the traveling process through a second detection unit; and acquiring characteristic information of the vehicle according to the first detection information and the second detection information; preferably, three-dimensional contour information of the vehicle is determined from the first detection information and the second detection information; and acquiring characteristic information of the vehicle based on the three-dimensional contour information.
In one embodiment, the first detection information comprises first detection position information and first detection energy information, and the second detection information comprises second detection position information and second detection energy information.
In another embodiment, determining three-dimensional contour information of the vehicle based on the first detection information and the second detection information comprises: converting the first detection position information in the first detection information into first coordinate information; converting the second detection position information in the second detection information into second coordinate information; determining the speed of the vehicle at different moments according to the first detection position information, the second detection position information and a first preset angle; and determining three-dimensional contour information of the vehicle based on the first coordinate information, the second coordinate information and the speed of the vehicle.
In yet another embodiment, determining the speed of the vehicle at different times based on the first detection position information, the second detection position information and the first preset angle comprises: acquiring a plurality of different feature points of the vehicle according to the first detection position information and first detection energy information and/or the second detection position information and second detection energy information; determining the moving distances and time differences of the plurality of feature points according to the first detection position information, the second detection position information and the first preset angle; and determining the speed of the vehicle based on the moving distances and time differences of the plurality of feature points.
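The speed determination described above can be sketched as follows. The geometry (a first detection plane perpendicular to travel, a second plane slanted at the first preset angle) follows the text, but every name, the base gap between the planes, and the averaging over feature points are illustrative assumptions, not the patent's implementation.

```python
import math

def plane_separation(base_gap_mm: float, lateral_offset_mm: float,
                     preset_angle_deg: float) -> float:
    """Longitudinal distance a point at a given lateral offset travels
    between the perpendicular first detection plane and the slanted
    second plane.  The slanted plane meets the travel direction at
    preset_angle_deg, so its longitudinal position shifts with the
    point's lateral offset by offset / tan(angle)."""
    return base_gap_mm + lateral_offset_mm / math.tan(math.radians(preset_angle_deg))

def estimate_speed(feature_points):
    """feature_points: list of (moving_distance_mm, time_difference_s)
    pairs, one per feature point (head front end, tire edges, reflector
    edges, ...).  Returns the averaged vehicle speed in m/s."""
    speeds = [d / 1000.0 / dt for d, dt in feature_points if dt > 0]
    return sum(speeds) / len(speeds)
```

For example, a feature point crossing the two planes 0.05 s apart with a 500 mm separation yields 10 m/s; averaging over several feature points smooths out per-point measurement noise.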
In yet another embodiment, determining the three-dimensional contour information of the vehicle based on the first coordinate information, the second coordinate information and the speed of the vehicle comprises: determining first displacement information and second displacement information of the vehicle according to the speed of the vehicle; and determining three-dimensional contour information of the vehicle based on the first coordinate information, the second coordinate information, the first displacement information and the second displacement information.
In yet another embodiment, determining the three-dimensional contour information of the vehicle based on the first coordinate information, the second coordinate information, the first displacement information and the second displacement information comprises: combining the first coordinate information and the first displacement information into first contour information; combining the second coordinate information and the second displacement information into second contour information; and sequentially inserting the second contour information into the first contour information according to the magnitudes of the first displacement information and the second displacement information to form the three-dimensional contour information.
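The splicing step above can be sketched as a displacement-ordered merge: each scan frame contributes a 2-D cross-section slice tagged with the vehicle displacement at which it was captured, and sorting the slices from both units by displacement yields the 3-D contour. All names and the data layout are assumptions for illustration.

```python
def assemble_contour(first_slices, second_slices):
    """Each element is (displacement_mm, slice_points), where
    slice_points is a list of (y_mm, z_mm) cross-section points from
    one scan frame.  Merges the two slice streams into a single
    displacement-ordered sequence, attaching the displacement as the
    x coordinate of every point to form the 3-D contour."""
    merged = sorted(first_slices + second_slices, key=lambda s: s[0])
    return [(x, [(x, y, z) for (y, z) in pts]) for x, pts in merged]
```

The slanted second unit's slices interleave between the perpendicular first unit's slices, which is what makes the combined contour denser than either stream alone.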
In yet another embodiment, the plurality of feature points includes at least: a front end point of the vehicle head, a rear end point of the vehicle tail, front and rear end points of a tire, front and rear end points of a reflector, and the start and end points of a reflective strip on the vehicle body.
In yet another embodiment, the characteristic information includes one or more of axle group information, drive axle information and air suspension information.
In yet another embodiment, obtaining the axle group information of the vehicle based on the three-dimensional contour information comprises: determining a start position, an end position and a length of each axle group of the vehicle based on the three-dimensional contour information; and determining the axle group information of the vehicle based on the start position, end position and length of the axle group and/or the number of projection overlaps of adjacent tires of the vehicle.
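One way to realize the axle-group determination above is to cluster the axle positions extracted from the contour: axles whose spacing falls below a threshold belong to the same group, and each group's start, end, length and axle count follow directly. The threshold value and all names are illustrative assumptions, not figures from the patent.

```python
def group_axles(axle_positions_mm, max_gap_mm=2000):
    """Cluster axle center positions (mm along the vehicle) into axle
    groups; axles closer than max_gap_mm are treated as one group.
    Returns one (start, end, length, axle_count) tuple per group."""
    positions = sorted(axle_positions_mm)
    groups, current = [], [positions[0]]
    for pos in positions[1:]:
        if pos - current[-1] <= max_gap_mm:
            current.append(pos)
        else:
            groups.append(current)
            current = [pos]
    groups.append(current)
    return [(g[0], g[-1], g[-1] - g[0], len(g)) for g in groups]
```

For a typical semi-trailer, this separates the steering axle, the drive tandem and the trailer tridem into three groups, from which the axle group type can be looked up.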
In still another embodiment, acquiring the drive axle information of the vehicle based on the three-dimensional contour information includes: determining a start position and an end position of a drive axle of the vehicle based on the three-dimensional contour information; determining projected distances of the plurality of feature points between the start position and the end position of the drive axle, together with their heights from the ground; and fitting the projected distances and heights to obtain the drive axle information of the vehicle.
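The fitting step above, applied to (projected distance, height) samples between the start and end of a candidate drive axle, can be sketched with an algebraic least-squares circle fit, since the samples on a wheel trace a circular arc. The Kasa method below is one common choice and not necessarily the fit the patent intends; all names are illustrative.

```python
import math

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.  points is a list of
    (x, y) samples; returns (cx, cy, radius).  Sketches recovering a
    wheel center and radius from the contour samples of a tire."""
    # The circle model is x^2 + y^2 + D*x + E*y + F = 0, linear in
    # (D, E, F).  Build the 3x3 normal equations as an augmented system.
    S = [[0.0] * 4 for _ in range(3)]
    for x, y in points:
        row, b = (x, y, 1.0), -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                S[i][j] += row[i] * row[j]
            S[i][3] += row[i] * b
    # Gauss-Jordan elimination with partial pivoting.
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(S[r][c]))
        S[c], S[p] = S[p], S[c]
        for r in range(3):
            if r != c:
                f = S[r][c] / S[c][c]
                for j in range(c, 4):
                    S[r][j] -= f * S[c][j]
    D, E, F = (S[i][3] / S[i][i] for i in range(3))
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)
```

A fitted radius and ground clearance consistent with a tire, at a position with overlapping dual-tire projections, would support classifying the axle as a drive axle.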
In yet another embodiment, obtaining the air suspension information of the vehicle based on the three-dimensional contour information comprises: determining drive axle information of the vehicle based on the three-dimensional contour information; and determining the air suspension information based on the drive axle information.
In a second aspect, the present disclosure also provides a vehicle feature extraction system, including: a first detection unit arranged to acquire first detection information of the vehicle during travel; a second detection unit arranged to acquire second detection information of the vehicle during travel; and a data processing unit configured to acquire characteristic information of the vehicle from the first detection information and the second detection information; preferably, the first detection surface of the first detection unit is perpendicular to the road surface and to the vehicle traveling direction, and the second detection surface of the second detection unit is perpendicular to the road surface and arranged at a first preset angle to the vehicle traveling direction.
In one embodiment, the distance between the first detection unit and the second detection unit in the vehicle traveling direction is set to 0 mm to 1000 mm, and the vehicle passes the first detection unit before passing the second detection unit.
According to the embodiments of the disclosure, combining the first detection information and the second detection information not only allows multiple items of feature information of the vehicle to be acquired simultaneously but also improves the accuracy of that information. Further, the first detection information and the second detection information are spliced to obtain the three-dimensional contour information of the vehicle, and the feature information of the vehicle is obtained from this three-dimensional contour, so that richer vehicle information is captured and the accuracy of the feature information improves. Further, the first detection surface of the first detection unit is set perpendicular to the vehicle traveling direction so that vehicles can be separated accurately, while the second detection surface of the second detection unit is set at the first preset angle to the traveling direction so that more feature information of the vehicle can be acquired at the same time, improving the detection precision of the vehicle's feature information.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings. In the drawings, several embodiments of the disclosure are illustrated by way of example and not by way of limitation, and like or corresponding reference numerals indicate like or corresponding parts and in which:
FIG. 1 is an exemplary schematic diagram illustrating a conventional vehicle over-limit detection system;
fig. 2 shows an exemplary structural block diagram of a vehicle feature extraction system according to an embodiment of the present disclosure;
fig. 3 shows an exemplary schematic diagram of a first detection unit and a second detection unit arrangement according to an embodiment of the present disclosure;
FIG. 4 illustrates an exemplary flow chart of a method of extracting vehicle features according to an embodiment of the disclosure;
FIG. 5 shows a first coordinate system established based on a first detection unit according to an embodiment of the present disclosure;
FIG. 6 shows a second coordinate system established based on a second detection unit according to an embodiment of the present disclosure; and
fig. 7 illustrates an exemplary schematic diagram of determining a feature point movement distance according to an embodiment of the present disclosure.
Detailed Description
The principles and spirit of the present disclosure will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the present disclosure, and are not intended to limit the scope of the present disclosure in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Vehicle over-limit detection systems typically include load cells or laser probes mounted on or beside the road surface to capture the weight of the vehicle, or its length, width, height and speed, in order to identify the vehicle type. According to the standard for over-limit and overload determination of road freight vehicles, the total mass limit of a vehicle is related not only to its number of axles but also to its axle group type. Vehicles with the same number of axles but different axle group types have different limits: for example, the total mass limit of a three-axle truck is 25 tons, while that of a three-axle articulated train or three-axle center-axle trailer train is 27 tons; the limit of a four-axle truck is 31 tons, while that of a four-axle full-trailer train or four-axle articulated train is 36 tons; and for five-axle articulated trains the limit is 43 tons for some axle group types and 42 tons for others.
As will be appreciated from the above, the total mass limit for three-, four- and five-axle vehicles is generally related to the vehicle's axle group type. For six-axle vehicles, however, the limit is also related to the number of drive axles: the total mass limit of a six-axle or larger vehicle train is 49 tons, and where the drive axle of the tractor is a single axle the limit is 46 tons; freight vehicles exceeding the applicable limit are regarded as over-limit transport vehicles.
The total mass limit is also related to whether the vehicle is equipped with air suspension: when each drive axle carries dual tires on each side and the vehicle has air suspension, the total mass limits of three-axle and four-axle trucks are each increased by 1 ton; and for a four-axle articulated train whose drive axle carries dual tires on each side and which has air suspension, where the distance d between the two axles of the semi-trailer is no less than 1800 mm, the total mass limit is 37 tons.
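The limit rules recited in the preceding paragraphs can be collected into a small lookup. The numbers follow the text above, but this is a simplified illustrative sketch of how such rules combine with detected features (axle count, axle group type, drive axle count, air suspension), not a transcription of the standard; all names are assumptions.

```python
def total_mass_limit_tons(axle_count, vehicle_type,
                          single_drive_axle=False,
                          dual_tires_with_air_suspension=False):
    """Simplified sketch of the total-mass-limit rules above.
    vehicle_type: 'truck' or 'train' (articulated / trailer train).
    Five-axle trains use 43 t here; some axle group types reduce
    this to 42 t, which this sketch does not distinguish."""
    limits = {
        (3, "truck"): 25, (3, "train"): 27,
        (4, "truck"): 31, (4, "train"): 36,
        (5, "train"): 43,
        (6, "train"): 49,
    }
    limit = limits.get((axle_count, vehicle_type))
    if limit is None:
        raise ValueError("combination not covered by this sketch")
    if axle_count == 6 and single_drive_axle:
        limit = 46  # single-drive-axle tractor rule
    if dual_tires_with_air_suspension and vehicle_type == "truck" \
            and axle_count in (3, 4):
        limit += 1  # air-suspension bonus for 3-/4-axle trucks
    return limit
```

This is exactly why the disclosure needs several features at once: the same axle count maps to different limits depending on vehicle type, drive axle count and suspension.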
FIG. 1 is an exemplary schematic diagram illustrating a conventional vehicle over-limit detection system. A road panel 1 is shown with bar-type weighing devices 2 arranged in the direction of travel of the vehicle; three bar-type weighing devices 2 are shown. When the vehicle travels over a bar-type weighing device 2, a deformation signal related to the road panel 1 is collected, and the weight of the vehicle is obtained by analyzing this signal. Moreover, the deformation signal appears as a pulse only when a wheel rolls over a bar-type weighing device, so the collected deformation signals can be counted to determine the number of axles of the vehicle. A conventional vehicle over-limit detection system may also install a non-contact laser scanning device 3 at the roadside in the vehicle traveling direction to identify the number of axles or drive axles of the vehicle.
Based on the above description, conventional vehicle over-limit detection can recognize only one feature of the vehicle at a time, for example only the number of axles or only the drive axles, and cannot acquire multiple features of the vehicle simultaneously, so it cannot correctly determine whether the vehicle is over-limit.
In view of the above, in order to overcome one or more of these defects, embodiments of the present disclosure provide a vehicle feature extraction method and system in which, by combining the first detection information acquired by a first detection unit with the second detection information acquired by a second detection unit, multiple items of feature information of a vehicle can be acquired simultaneously and the accuracy of that information can be improved.
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
Fig. 2 shows an exemplary structural block diagram of a vehicle feature extraction system 200 according to an embodiment of the present disclosure. The extraction system 200 comprises a first detection unit 201, a second detection unit 202 and a data processing unit 203.
The first detection unit 201 is used for acquiring the first detection information of the vehicle during travel. In one embodiment, the first detection unit may be a single-line scanning lidar, though the present disclosure is not limited thereto. The first detection surface of the first detection unit is arranged perpendicular to the vehicle traveling direction and perpendicular to the ground. In another embodiment, the first detection information includes first detection position information and first detection energy information.
The second detection unit 202 is used for acquiring the second detection information of the vehicle during travel. In one embodiment, the second detection unit may also be a single-line scanning lidar, though the present disclosure is not limited thereto. The second detection surface of the second detection unit is arranged at a first preset angle to the vehicle traveling direction and is likewise perpendicular to the ground. The first preset angle may be 35° to 40°. In another embodiment, the second detection information includes second detection position information and second detection energy information.
In one implementation scenario, the first detection unit and the second detection unit are installed on the same side of the road along the vehicle traveling direction, and their distances to the edge of the detection area may be the same or different. The two units may be installed on either the left or the right side of the detection area in the vehicle traveling direction, which the present disclosure does not limit. The detection area is usually provided on a toll plaza lane or toll gate lane before the vehicle enters a toll station. The specific arrangement of the first detection unit and the second detection unit will be described in detail later.
After the first detection information and the second detection information are obtained, the data processing unit 203 acquires the characteristic information of the vehicle from them. In an implementation scenario, the data processing unit is connected to the first detection unit and the second detection unit through a network cable, a serial port or Wi-Fi. In one embodiment, the characteristic information of the vehicle may be one or more of axle group information, drive axle information and air suspension information.
Fig. 3 shows an exemplary schematic diagram of an arrangement of a first detection unit and a second detection unit according to an embodiment of the present disclosure. On the left side of the detection area 4 in the vehicle traveling direction, a second detection unit 5 and a first detection unit 6 are provided in order from front to back along the traveling direction. The second detection unit 5 is mounted on a second mounting support 7 and the first detection unit 6 on a first mounting support 8, and both the second detection surface of the second detection unit 5 and the first detection surface of the first detection unit 6 are perpendicular to the ground. Further, the first detection surface of the first detection unit 6 is perpendicular to the vehicle traveling direction (indicated by arrow F in the drawing), while the second detection surface of the second detection unit 5 is at the first preset angle to the traveling direction (indicated by arrow F1 in the drawing); the first preset angle may be 35° to 40°. The second detection unit 5 and the first detection unit 6 are connected to a data processing unit 9 through a network port, a serial port or Wi-Fi (not shown in the figure).
It is to be understood that when a vehicle passes through the detection area in the traveling direction, it passes the first detection unit before the second detection unit. The first detection unit can therefore be used to accurately segment vehicles, so that the information acquired by both units contains only one vehicle and not its neighbors, yielding more accurate detection information. The distance between the first and second detection units in the traveling direction may be set between 0 mm and 1000 mm and can be chosen by those skilled in the art according to the actual situation, which the present disclosure does not limit. Preferably, this distance is set to 500 mm, to ensure that by the time the rearmost end of the vehicle passes the first detection surface, all axles of the vehicle have completely passed through it.
In some embodiments, in one aspect, the height of the first detection unit above the ground may be set to 1500 mm to 3000 mm; preferably 1600 mm, so that the first detection surface can cover the whole detection area and capture more detail at the bottom of the vehicle's tires. In another aspect, the height of the second detection unit above the ground may be set to 50 mm to 200 mm. As noted above, the second detection surface makes an angle of 35° to 40° with the direction of travel. This arrangement facilitates all-round scanning of the vehicle by the second detection unit, in particular scanning of the drive axle and the air suspension, and avoids loss of the second detection information, thereby ensuring more accurate vehicle information.
In combination with the above, the embodiments of the present disclosure arrange the first detection unit and the second detection unit on one side of the detection area in the vehicle traveling direction and determine the characteristic information of the vehicle from the detection information acquired by both units, thereby acquiring more complete detection information and improving the accuracy of the characteristic information of the vehicle under detection. Furthermore, because the first detection surface is perpendicular to the traveling direction, the first detection unit can segment vehicles accurately even when two adjacent vehicles are close together, overcoming the second detection unit's inability to do so in that case. Further, with the second detection surface at the first preset angle to the traveling direction and the second detection unit mounted lower than the first, the embodiments can acquire not only detection information of the side of the vehicle but also chassis information, making up for the first detection unit's inability to scan the chassis between coaxial tires due to tire occlusion; for example, drive axle information and air suspension information between coaxial tires would otherwise be unobtainable.
Furthermore, the first detection unit and the second detection unit are arranged on one side of the detection area, so that the road surface does not need to be damaged, and the construction cost is low.
Fig. 4 illustrates an exemplary flow chart of a method 400 of extracting vehicle features according to an embodiment of the disclosure. As shown, at step 402, first detection information of a vehicle during travel is acquired by a first detection unit. In one embodiment, the first detection unit may be a single-line scanning lidar, though the present disclosure is not limited thereto. The first detection unit is arranged on one side of the detection area in the vehicle traveling direction, with its first detection surface perpendicular to the ground and to the traveling direction; for the arrangement of the first detection unit, reference may be made to the foregoing description, which is not repeated here.
At step 404, second detection information of the vehicle during traveling is acquired by the second detection unit. In one embodiment, the second detection unit may also be a single-line scanning lidar arranged on one side of the detection area in the direction of travel of the vehicle. The second detection surface of the second detection unit is at an angle of 35° to 40° to the direction of travel of the vehicle; for the specific arrangement of the first detection unit and the second detection unit, reference may be made to the foregoing description, which will not be repeated here.
After the first detection information and the second detection information are obtained, at step 406, characteristic information of the vehicle is acquired from the first detection information and the second detection information. More specifically, three-dimensional contour information of the vehicle is determined from the first detection information and the second detection information, and the feature information of the vehicle is acquired based on the three-dimensional contour information.
In one embodiment, the first detection information includes first detection position information and first detection energy information, and the second detection information includes second detection position information and second detection energy information. It is to be understood that these may be obtained directly by the first detection unit and the second detection unit. When the first detection unit and the second detection unit adopt a single-line scanning lidar, the scanning frequency is 50 Hz, the angular resolution is 0.1°, and the scanning angle is 180°. The angular resolution is the included angle between adjacent rays, and the scanning angle is the fan angle of the emitted rays. Therefore, 50 frames of scanning data are obtained per second with the single-line scanning lidar, each frame comprising 1800 scanning points, and different scanning frames correspond to different moments during the passage of the vehicle to be detected through the detection area.
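The relationship between the scanner parameters quoted above and the per-frame data volume can be sketched as follows; this is an illustrative helper, not part of the disclosed method:

```python
# Illustrative sketch: deriving per-frame scan geometry from the lidar
# parameters given in the text (50 Hz, 0.1 deg resolution, 180 deg fan).
def scan_geometry(scan_rate_hz: float, angular_res_deg: float, fan_angle_deg: float):
    """Return (frames per second, scanning points per frame)."""
    # one scanning point per angular step across the fan
    points_per_frame = round(fan_angle_deg / angular_res_deg)
    return scan_rate_hz, points_per_frame

frames_per_s, points = scan_geometry(50.0, 0.1, 180.0)
# 50 frames per second, 1800 points per frame, as stated in the text
```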
More specifically, the first detection position information includes the distances from the different scanning points, acquired at different times, to the light-emitting center of the first detection unit, and the included angles between the lines connecting those scanning points to the light-emitting center of the first detection unit and the ground vertical line. In one implementation scenario, the first detection position information may be denoted as {(P1, θ1, t), (P2, θ2, t), (P3, θ3, t), …}, where (P1, P2, P3, …) denote the distances from the different scanning points to the light-emitting center of the first detection unit at time t, and (θ1, θ2, θ3, …) denote the included angles between the lines connecting those scanning points to the light-emitting center of the first detection unit and the ground vertical line at time t. The first detection energy information represents the echo signal energy of the different scanning points in the first detection position information acquired at different times. In one implementation scenario, the first detection energy information may be denoted as {(λ1, t), (λ2, t), (λ3, t), …}, where (λ1, λ2, λ3, …) represent the echo signal energies at time t of the different scanning points in the first detection position information.
Similar to the first detection position information and the first detection energy information, the second detection position information includes the distances from the different scanning points, acquired at different times, to the light-emitting center of the second detection unit, and the included angles between the lines connecting those scanning points to the light-emitting center of the second detection unit and the ground vertical line. In one implementation scenario, the second detection position information may be denoted as {(P'1, θ'1, t), (P'2, θ'2, t), (P'3, θ'3, t), …}, where (P'1, P'2, P'3, …) denote the distances from the different scanning points to the light-emitting center of the second detection unit at time t, and (θ'1, θ'2, θ'3, …) denote the included angles between the lines connecting those scanning points to the light-emitting center of the second detection unit and the ground vertical line at time t. The second detection energy information represents the echo signal energy of the different scanning points in the second detection position information acquired at different times. In one implementation scenario, the second detection energy information may be denoted as {(λ'1, t), (λ'2, t), (λ'3, t), …}, where (λ'1, λ'2, λ'3, …) represent the echo signal energies at time t of the different scanning points in the second detection position information.
Further, first coordinate information may be determined based on the first detection position information, and second coordinate information based on the second detection position information. The three-dimensional contour information of the vehicle may then be obtained from the first coordinate information, the second coordinate information, and the speed of the vehicle.
When the first coordinate information is determined based on the first detection position information, a coordinate system may be established with the intersection of the ground and the straight line passing through the light-emitting center of the first detection unit and perpendicular to the ground as the coordinate origin O. The scanning points obtained by the first detection unit can thereby be represented within the established coordinate system so as to determine the first coordinate information. It is to be understood that a coordinate system at any angle may be established by those skilled in the art, and the origin, x-axis, and y-axis may be chosen arbitrarily, which the present disclosure does not limit. For convenience of calculation, the embodiment of the present disclosure establishes a rectangular coordinate system; more specifically, it establishes a first rectangular coordinate system XOY by taking the straight line passing through the light-emitting center of the first detection unit and perpendicular to the ground as the Y positive semi-axis and the straight line perpendicular to the vehicle traveling direction (i.e., parallel to the first detection surface of the first detection unit) and directed toward the detection area as the X positive semi-axis, for example as shown in Fig. 5.
Fig. 5 shows a first coordinate system established based on the first detection unit according to an embodiment of the present disclosure. In the first rectangular coordinate system XOY, A denotes the light-emitting center of the first detection unit, and B, C, D denote different scanning points at the same moment in the first detection information. Connecting B, C, D to A, the segments AB, AC, AD represent the distances from the scanning points B, C, D in the first detection position information to the light-emitting center of the first detection unit, and β1, β2, β3 are the included angles between the Y-axis and the lines connecting the scanning points B, C, D to the light-emitting center of the first detection unit. As can be seen from the above description, the distances AB, AC, AD can be denoted P1, P2, P3, and β1, β2, β3 correspond to θ1, θ2, θ3, respectively.
Assuming that the distance from the light-emitting center A of the first detection unit to the origin O is H: when the height of a scanning point in the first detection information above the ground is greater than H, θ > 90°; when it is less than H, θ < 90°; and when it equals H, θ = 90°. For example, Fig. 5 shows scanning points B, C, D: B is lower than H above the ground, so β1 < 90°; C and D are higher than H above the ground, so β2 > 90° and β3 > 90°. On this basis, applying the triangle calculation principle to the first detection position information {(P1, θ1, t), (P2, θ2, t), (P3, θ3, t), …}, the first coordinate information {(x1, y1, t), (x2, y2, t), (x3, y3, t), …} may be obtained, which can be expressed by the following formula:

xi = Pi sin θi, yi = H - Pi cos θi (i = 1, 2, 3, …) (1)
Here, (x1, x2, x3, …) respectively denote, at time t, the x coordinates of the scanning points in the first detection information within the first coordinate information, i.e. the projection distances on the ground of the lines connecting the different scanning points to the light-emitting center of the first detection unit, which can be recorded as the first projection distance; (y1, y2, y3, …) respectively denote the y coordinates of the different scanning points in the first detection information within the first coordinate information at time t, i.e. the distances from the different scanning points to the ground, which can be recorded as the first height.
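The polar-to-Cartesian conversion described above can be sketched as follows, assuming θ is measured from the downward ground vertical so that θ > 90° corresponds to points above the light-emitting center; the function name and argument layout are illustrative assumptions:

```python
import math

# Sketch: convert a first-unit scan point (P, theta) into the XOY frame,
# where h is the height of light-emitting center A above the origin O.
def first_unit_to_xy(p: float, theta_deg: float, h: float):
    theta = math.radians(theta_deg)
    x = p * math.sin(theta)       # first projection distance on the ground
    y = h - p * math.cos(theta)   # first height above the ground
    return x, y
```

For a point at θ = 90° the height equals h, matching the case analysis in the text.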
When the second coordinate information is determined based on the second detection position information, a coordinate system may be established with the intersection of the ground and the straight line passing through the light-emitting center of the second detection unit and perpendicular to the ground as the coordinate origin O'. The scanning points obtained by the second detection unit can thereby be represented within the established coordinate system so as to determine the second coordinate information. It is to be understood that a coordinate system at any angle may be established by those skilled in the art, and the origin, x-axis, and y-axis may be chosen arbitrarily, which the present disclosure does not limit. For convenience of calculation, the embodiment of the present disclosure establishes a rectangular coordinate system; more specifically, it establishes a second rectangular coordinate system X'O'Y' by taking the straight line passing through the light-emitting center of the second detection unit and perpendicular to the ground as the Y' positive semi-axis and the straight line parallel to the second detection surface of the second detection unit and directed toward the detection area as the X' positive semi-axis, for example as shown in Fig. 6.
Fig. 6 shows a second coordinate system established based on the second detection unit according to an embodiment of the present disclosure. The light-emitting center A1 of the second detection unit is shown, with A1 assumed to be at height H' above the ground. Similar to the processing of the first rectangular coordinate system XOY, in the second rectangular coordinate system X'O'Y': when the height of a scanning point in the second detection information above the ground is greater than H', θ' > 90°; when it is less than H', θ' < 90°; and when it equals H', θ' = 90°. On this basis, applying the triangle calculation principle to the second detection position information {(P'1, θ'1, t), (P'2, θ'2, t), (P'3, θ'3, t), …}, the information {(x'1, y'1, t), (x'2, y'2, t), (x'3, y'3, t), …} in the coordinate system X'O'Y' may be obtained, which can be expressed by the following formula:

x'i = P'i sin θ'i, y'i = H' - P'i cos θ'i (i = 1, 2, 3, …) (2)
Here, (x'1, x'2, x'3, …) respectively denote, at time t, the x coordinates of the scanning points in the second detection information within the information of the coordinate system X'O'Y', i.e. the projection distances on the ground of the lines connecting the different scanning points to the light-emitting center of the second detection unit; (y'1, y'2, y'3, …) respectively denote the y coordinates of the different scanning points in the second detection information within the information of the coordinate system X'O'Y', i.e. the distances from the different scanning points to the ground at time t.
According to the above description, the second detection surface of the second detection unit is arranged at a first preset angle of 35° to 40° with respect to the direction of travel of the vehicle. Thus, a third rectangular coordinate system X''O''Y'' may also be established, with the intersection of the ground and the straight line passing through the light-emitting center of the second detection unit and perpendicular to the ground as the coordinate origin O'', that same vertical line as the Y'' positive semi-axis, and the straight line perpendicular to the vehicle traveling direction and directed toward the detection area as the X'' positive semi-axis, so as to determine the second coordinate information based on the second detection position information.
As shown in Fig. 6, which marks the first preset angle α, the second coordinate information {(x''1, y''1, t), (x''2, y''2, t), (x''3, y''3, t), …} can be obtained from the information {(x'1, y'1, t), (x'2, y'2, t), (x'3, y'3, t), …} of the coordinate system X'O'Y' by the triangle calculation principle, which can be expressed by the following formula:

x''i = x'i sin α, y''i = y'i (i = 1, 2, 3, …) (3)
Here, (x''1, x''2, x''3, …) respectively denote, at time t, the x coordinates of the scanning points in the second detection information within the second coordinate information, i.e. the projection distances, perpendicular to the vehicle driving direction, on the ground of the lines connecting the different scanning points to the light-emitting center of the second detection unit, which can be recorded as the second projection distance; (y''1, y''2, y''3, …) respectively denote the y coordinates of the different scanning points in the second detection information within the second coordinate information at time t, i.e. the distances from the different scanning points to the ground, which can be recorded as the second height.
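The projection from the tilted frame X'O'Y' into the travel-perpendicular frame X''O''Y'' described above can be sketched as follows; the function and parameter names are illustrative assumptions:

```python
import math

# Sketch: project second-unit coordinates (x', y') from the tilted frame
# X'O'Y' (x' lies along the second detection surface) into the frame
# X''O''Y'' whose x'' axis is perpendicular to the travel direction.
# alpha is the first preset angle (35 to 40 degrees in the text).
def tilted_to_perpendicular(x_p: float, y_p: float, alpha_deg: float):
    alpha = math.radians(alpha_deg)
    x_pp = x_p * math.sin(alpha)  # second projection distance
    y_pp = y_p                    # height is unchanged by the rotation
    return x_pp, y_pp
```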
It should be understood that the above description is only the first detection information and the second detection information, the first detection position information and the second detection position information, the first detection energy information and the second detection energy information, and the first coordinate information and the second coordinate information of the plurality of scanning points at the same time, and those skilled in the art can obtain the first detection information and the second detection information, the first detection position information and the second detection position information, the first detection energy information and the second detection energy information, and the first coordinate information and the second coordinate information of the plurality of scanning points at different times based on the above description.
In one implementation scenario, the speed of the vehicle at different times may also be determined by the first probe position information, the second probe position information, and the first preset angle. Thereby, the three-dimensional contour information of the vehicle is determined based on the first coordinate information, the second coordinate information, and the speed of the vehicle obtained as described above. Further, a plurality of different feature points of the vehicle may be acquired based on the first probe position information, the first probe energy information, and/or the second probe position information, the second probe energy information. Next, the moving distances and time differences of the plurality of feature points are determined based on the first and second probe position information and the first preset angle. Finally, the speed of the vehicle is determined based on the moving distances and the time differences of the plurality of feature points.
In some embodiments, the first detection position information, the first detection energy information, and/or the second detection position information, the second detection energy information may be combined to determine a plurality of feature points, such as a vehicle front end point, a vehicle rear end point, a tire front end point, a tire rear end point, a mirror front end point, a mirror rear end point, a start point of the body reflecting strip, and an end point of the body reflecting strip. One skilled in the art may also obtain the front end point and the rear end point of other positions of the vehicle, which the present disclosure does not limit.
More specifically, let ψ denote the plane passing through the feature point of the vehicle that is parallel to the vehicle traveling direction and perpendicular to the ground; the plane ψ is then perpendicular to the first detection plane of the first detection unit, and their intersection line is denoted L1. As can be seen from the above description, the included angle between the second detection plane of the second detection unit and the plane ψ is the first preset angle α, and their intersection line is denoted L2; the distance from L1 to L2 is then the distance moved by a feature point of the vehicle to be measured between the two detection surfaces.
Fig. 7 illustrates an exemplary schematic diagram of determining the feature point movement distance according to an embodiment of the present disclosure. The figure shows the intersection line L1 between the plane ψ and the first detection plane of the first detection unit and the intersection line L2 between the plane ψ and the second detection plane of the second detection unit; the distance between L1 and L2 is the moving distance of the different feature points. For convenience of calculation, the embodiment of the disclosure takes a point Q2 on the intersection line L1 and a point Q1 on the intersection line L2, both at height H' above the ground, and denotes the distance from Q1 to Q2 as S; S is then the distance between the straight lines L1 and L2, i.e. the moving distance of a feature point of the vehicle. The figure also shows the light-emitting center point Q3 of the first detection unit, at height H above the ground, and the light-emitting center point Q5 of the second detection unit, at height H' above the ground. Let the point on the vertical line through Q3 at height H' be denoted Q4; the distance from Q4 to Q5 then represents the separation of the first and second detection units in the direction of travel of the vehicle and is denoted S1. Denote the distance from Q3 to Q2 as S3, the distance from Q5 to Q1 as S2, and the distance from Q2 to Q4 as S4. Using the triangle calculation principle, S4 can be expressed by the following formula:

S4 = √(S3² - (H - H')²) (4)
From the above analysis, since the included angle between the second detection surface of the second detection unit and the vehicle driving direction is the first preset angle, the angle between the straight line Q1Q5 and the straight line Q1Q2 is also the first preset angle α, so S4 can also be expressed as the following equation:
S4=S2 sinα (5)
further, as can be seen from the figure,
S=S1+S2 cosα (6)
The moving distance of the vehicle feature point can be obtained by combining the above formula (4), formula (5), and formula (6). In practice, the moving distance can be obtained directly from formula (6), where S1, the separation of the first detection unit and the second detection unit in the vehicle driving direction, is determined by their installation positions, and S2, the distance from the feature point to the light-emitting center point Q5 of the second detection unit, can be obtained directly from the second detection information.
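Under the assumption, implicit above, that both detection units sit at the same lateral offset from the lane, the moving-distance computation can be sketched as follows; the function names are illustrative:

```python
import math

# Sketch of the moving-distance formulas: s1 is the along-travel separation
# of the two units, s2 the distance from the second unit's light-emitting
# center to the feature point, alpha the first preset angle. lateral_distance
# shows the alternative route via the first unit's measurement s3 and the
# unit heights h and h_p.
def lateral_distance(s3: float, h: float, h_p: float) -> float:
    # S4 = sqrt(S3^2 - (H - H')^2), formula (4)
    return math.sqrt(s3 ** 2 - (h - h_p) ** 2)

def travel_distance(s1: float, s2: float, alpha_deg: float) -> float:
    # S = S1 + S2 * cos(alpha), formula (6)
    return s1 + s2 * math.cos(math.radians(alpha_deg))
```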
After the moving distance of the vehicle feature point is obtained, in some embodiments, the time at which a feature point M1 of the vehicle to be tested passes through the first detection surface of the first detection unit is recorded as T1, and the time at which the same feature point M1 passes through the second detection surface of the second detection unit is recorded as T2. Thus, the speed v of the vehicle can be obtained by the following formula:
ν=S/(T2-T1) (7)
It is understood that, by obtaining a plurality of different feature points of the vehicle based on the first detection position information, the first detection energy information, and/or the second detection position information and the second detection energy information in combination with the above description, the speeds of the vehicle at different times (v'1, v'2, v'3, …) can be obtained.
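The speed computation of formula (7) can be sketched as a small helper; the name is illustrative:

```python
# Sketch of formula (7): vehicle speed from the moving distance s of a
# feature point and the times t1, t2 at which it crosses the first and
# second detection surfaces.
def vehicle_speed(s: float, t1: float, t2: float) -> float:
    return s / (t2 - t1)  # v = S / (T2 - T1)
```

In practice the speeds obtained from several feature points can be kept per time stamp, as the text's (v'1, v'2, v'3, …) sequence suggests.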
Still further, first displacement information and second displacement information of the vehicle may be determined according to a speed of the vehicle, and three-dimensional contour information of the vehicle may be determined based on the first coordinate information, the second coordinate information, the first displacement information, and the second displacement information.
In an implementation scenario, the scanning period of the first detection unit is the same as that of the second detection unit and is denoted T. The time at which the front end point of the head of the vehicle to be detected first passes through the first detection surface of the first detection unit is recorded as t1; t1 indicates the time at which the vehicle to be detected begins to enter the detection area, and at this moment the distance from the front end point of the head to the first detection surface of the first detection unit is 0. The time at which the front end point of the head first passes through the second detection surface of the second detection unit is recorded as t2. According to the above description, the movement displacement S'1 of the front end point of the head and the speed v'1 of the vehicle at time t2 can be obtained. On this basis, from t1 to t2, the distance from the front end point of the head to the first detection surface of the first detection unit at the different moments is N1*v'1*T, where N1 is the number of scan cycles elapsed after time t1. The time at which the front end point of the first tire of the vehicle first passes through the first detection surface of the first detection unit is recorded as t3. Thus, from t2 to t3, the distance from the front end point of the head to the first detection surface of the first detection unit at the different moments is N2*v'1*T, where N2 is the number of scan cycles elapsed after time t1, and the distance from the front end point of the head to the second detection surface of the second detection unit at the different moments is N3*v'1*T, where N3 is the number of scan cycles elapsed after time t2.
In another embodiment, similarly to the above description, the time at which the front end point of the first tire of the vehicle to be tested first passes through the second detection surface of the second detection unit is recorded as t4. According to the above description, the movement displacement S'2 of the front end point of the first tire and its speed v'2 at time t4 can be obtained. Thus, from t3 to t4, the distance from the front end point of the head of the vehicle to the first detection surface of the first detection unit at the different moments is S1 + (t3 - t2)*v'1 + N4*v'2*T, and the distance from the front end point of the head to the second detection surface of the second detection unit at the different moments is (t3 - t2)*v'1 + N4*v'2*T, where N4 is the number of scan cycles elapsed after time t3. On this basis, the distances from the front end point of the head to the first detection surface of the first detection unit at the different moments can be obtained and recorded as the first displacement information S''1, S''2, S''3, …; likewise, the distances from the front end point of the head to the second detection surface of the second detection unit at the different moments can be obtained and recorded as the second displacement information S'''1, S'''2, S'''3, ….
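The per-scan-cycle accumulation underlying the displacement sequences above can be sketched generically as follows; the helper is an illustrative assumption, not the patent's implementation:

```python
# Sketch: accumulate the displacement of the head front point one scan
# period T at a time, using the speed estimate in force during each cycle
# (v'1 before the first tire crossing, v'2 after it, and so on).
def accumulate_displacement(speeds_per_cycle, period_t: float):
    """speeds_per_cycle: speed estimate applied during each scan cycle."""
    dist, out = 0.0, []
    for v in speeds_per_cycle:
        dist += v * period_t
        out.append(dist)
    return out
```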
Based on the above description, the first and second coordinate information and the first and second displacement information of the vehicle at different times can be obtained. More specifically, the first coordinate information and the first displacement information are combined into first contour information, and the second coordinate information and the second displacement information are combined into second contour information. Finally, the second contour information is inserted into the first contour information in order of the magnitudes of the first displacement information and the second displacement information, thereby forming the three-dimensional contour information of the vehicle. Characteristic information of the vehicle, such as one or more of axle group information, drive shaft information, and air suspension information, is then determined from the obtained three-dimensional profile.
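The fusion step can be sketched as a displacement-ordered merge; the record layout (displacement, contour slice) is an assumption for illustration:

```python
# Sketch: each contour record carries its displacement along the travel
# direction; merging the two contours by displacement interleaves the
# second-unit slices into the first-unit slices.
def fuse_contours(first_contour, second_contour):
    merged = first_contour + second_contour
    merged.sort(key=lambda rec: rec[0])  # order by displacement
    return merged

profile = fuse_contours([(0.0, "f0"), (0.4, "f1")], [(0.2, "s0"), (0.5, "s1")])
# slices are ordered 0.0, 0.2, 0.4, 0.5 along the vehicle length
```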
In one embodiment, the start position, the end position, and the length of the axle group of each axle group of the vehicle may be determined based on the three-dimensional contour information, and the axle group information of the vehicle may be determined according to the obtained start position, end position, and length of the axle group.
Through the above analysis, the projection distance on the ground of the line connecting the different scanning points acquired by the first detection unit to its light-emitting center can be recorded as the first projection distance, and the distance from those scanning points to the ground as the first height. Likewise, the projection distance, perpendicular to the vehicle driving direction, on the ground of the line connecting the different scanning points acquired by the second detection unit to its light-emitting center can be recorded as the second projection distance, and the distance from those scanning points to the ground as the second height. Thus, the set of scanning points in the three-dimensional profile information of the vehicle whose height above the ground is greater than the first height threshold and less than the second height threshold may be referred to as the first point set.
In order to accommodate the tire sizes of different vehicle types, the first height threshold is preferably set to 30 mm and the second height threshold to 600 mm. The maximum value Xmax and minimum value Xmin of the first projection distance or the second projection distance of all scanning points in the first point set are then calculated, as is the minimum value Ymin of the first height or the second height of those points. When Ymin is less than a first threshold and Xmax - Xmin is less than a second threshold, the start position of an axle group on the side of the vehicle to be detected close to the first detection unit and/or the second detection unit has been found; at this moment, the distance from the front end point of the head to the first detection surface of the first detection unit is recorded as D1. It is to be understood that the condition that Ymin is less than the first threshold captures the feature that the tire is relatively close to the ground, as with a non-suspended axle tire in contact with the ground. In order to include lifted (suspended) axles in the axle group information, the first threshold is typically set relatively large; preferably, it may be set to 200 mm. The condition that Xmax - Xmin is less than the second threshold captures the feature that the leading face of the tire is relatively steep and light cannot pass through the tire, so that when the first detection unit scans the start position of an axle group, Xmax - Xmin is small; preferably, the second threshold may be set to 100 mm.
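The start-of-axle-group test described above can be sketched as follows, using the preferred thresholds; the (x, y) point format in millimetres is an assumption for illustration:

```python
# Sketch of the tire-region test: points between the two height thresholds
# form the first point set; an axle group start is flagged when
# Ymin < 200 mm and Xmax - Xmin < 100 mm. Units are millimetres.
FIRST_HEIGHT, SECOND_HEIGHT = 30.0, 600.0    # first point set bounds
FIRST_THRESH, SECOND_THRESH = 200.0, 100.0   # preferred thresholds

def is_axle_group_start(points):
    subset = [(x, y) for x, y in points if FIRST_HEIGHT < y < SECOND_HEIGHT]
    if not subset:
        return False
    xs = [x for x, _ in subset]
    ys = [y for _, y in subset]
    return min(ys) < FIRST_THRESH and (max(xs) - min(xs)) < SECOND_THRESH
```

A near-vertical run of points reaching close to the ground thus reads as the leading edge of a tire.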
Correspondingly, when Ymin is greater than or equal to the first threshold and Xmax - Xmin is greater than or equal to the second threshold, the end position of the axle group on the side of the vehicle to be detected close to the first detection unit and/or the second detection unit has been reached; at this moment, the distance from the front end point of the head to the first detection surface of the first detection unit is recorded as D2, and the length of the vehicle axle group is D2 - D1. On this basis, the length of each axle group on the side close to the first detection unit and/or the second detection unit can be obtained. As known to those skilled in the art, a vehicle tire is typically 800 millimeters in diameter, so the number of axles in an axle group is (D2 - D1)/800. For example, as shown in Fig. 3, a single axle 10, a two-axle group 11, and a three-axle group 12 of the vehicle may be obtained according to the embodiment of the present disclosure, where the two-axle group 11 includes two single axles and the three-axle group 12 includes three single axles.
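The axle-count rule just stated can be sketched as follows; the rounding mode is an assumption, since a measured group length rarely lands on an exact multiple of the tire diameter:

```python
# Sketch: number of axles in a group from its length, using the nominal
# 800 mm tire diameter quoted in the text. Rounding to the nearest whole
# axle is an assumption.
TIRE_DIAMETER_MM = 800.0

def axles_in_group(d1_mm: float, d2_mm: float) -> int:
    return max(1, round((d2_mm - d1_mm) / TIRE_DIAMETER_MM))
```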
In some embodiments, the obtained start position, end position, and length of the axle group may be used in combination with whether the projections of adjacent tires in the direction parallel to the vehicle driving direction overlap, and the number of times they overlap, to determine the axle group information. Specifically, because the second detection surface of the second detection unit has a certain included angle with the driving direction of the vehicle, when adjacent tires on the side close to the first detection unit and/or the second detection unit belong to one axle group, the second detection unit can scan two adjacent tires of the vehicle simultaneously. The axle group information of the vehicle can therefore be determined from the number of times the projections of adjacent tires parallel to the vehicle traveling direction coincide: when the number of coincidences is 0, the axle group includes 1 axle; when it is 1, the axle group includes 2 axles; when it is 2, the axle group includes 3 axles. An axle group with 1 axle is a single axle; with 2 axles, a two-axle group; with 3 axles, a three-axle group. The skilled person may determine the axle group information of the vehicle by selecting either of the two ways described above, which the present disclosure does not limit.
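The projection-overlap rule can be sketched as follows; representing each tire projection as a (start, end) interval along the travel direction is an assumption for illustration:

```python
# Sketch: count how many adjacent tire projections overlap along the
# travel direction; the number of axles in the group is overlaps + 1
# (0 overlaps -> 1 axle, 1 -> 2 axles, 2 -> 3 axles, per the text).
def axles_from_projections(intervals):
    intervals = sorted(intervals)
    overlaps = sum(
        1 for a, b in zip(intervals, intervals[1:]) if b[0] < a[1]
    )
    return overlaps + 1
```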
In another embodiment, the start position and the end position of each axle group on the side of the vehicle to be tested close to the first detection unit and/or the second detection unit may be determined according to the above description. Further, according to whether a depression exists in the tire region, it is judged whether each tire in the axle group on the side close to the first detection unit and/or the second detection unit is a single tire or a dual tire. After the first dual tire is found, the end position of the axle group where the first dual tire is located is the start position of the drive shaft of the vehicle to be detected; at this moment, the distance from the corresponding front end point of the head to the first detection surface of the first detection unit or the second detection surface of the second detection unit is recorded as D3. The number of the axle group where the first dual tire is located is marked as M; preferably, the first dual tire is located in the second axle group.
It can be understood that, since the second detection surface of the second detection unit forms an included angle with the vehicle traveling direction, the start position and the end position of each axle group on the side of the vehicle under test far from the first detection unit and the second detection unit can likewise be obtained from the three-dimensional profile information of the vehicle under test, as described above. The end position, on that far side, of the axle group whose number equals M is therefore the end position of the drive shaft of the vehicle under test. At this point, the distance from the front end point of the vehicle head to the first detection surface of the first detection unit or the second detection surface of the second detection unit is D4. Finally, all body points whose distance from the front end point of the vehicle head to the first detection surface of the first detection unit or the second detection surface of the second detection unit is greater than D3 and less than D4 are fitted, and the fitted point cloud is analyzed to determine whether it contains an axle housing structure, thereby obtaining the number of drive shafts of the vehicle under test. For example, as shown in fig. 3 described above, two drive shafts 13 of the vehicle can be obtained according to the embodiment of the present disclosure.
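The selection of body points lying strictly between D3 and D4 for fitting can be illustrated as below. This is a hedged sketch under assumptions: the point representation (a distance field per point) and all names are invented for illustration and do not appear in the disclosure.

```python
def select_drive_shaft_points(points, d3, d4):
    """Keep only body points whose distance from the front end point of
    the vehicle head to the detection surface lies strictly between
    D3 (drive shaft start) and D4 (drive shaft end); these are the
    points subsequently fitted to look for an axle housing structure."""
    return [p for p in points if d3 < p["dist"] < d4]

# Example: points along the vehicle body, each with its distance
# to the detection surface (units arbitrary for this sketch).
cloud = [{"dist": 1.0}, {"dist": 3.5}, {"dist": 4.2}, {"dist": 6.0}]
print(select_drive_shaft_points(cloud, 3.0, 5.0))
# [{'dist': 3.5}, {'dist': 4.2}]
```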
In conjunction with the above description, the drive shaft information of the vehicle is obtained. Further, the air suspension information of the vehicle is determined based on the obtained drive shaft information. For example, the obtained point cloud is analyzed to determine whether a columnar air suspension device is present near the drive shaft, thereby obtaining the air suspension information of the vehicle under test.
According to the embodiments of the present disclosure, the first displacement information and the second displacement information of the vehicle are obtained by combining the first detection information and the second detection information; the first detection information and the second detection information are then fused according to the magnitudes of the first displacement information and the second displacement information to obtain the three-dimensional profile information of the vehicle; and the feature information of the vehicle is obtained based on that three-dimensional profile information. The information obtained about the vehicle is thus richer, and the accuracy of the feature information is improved.
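The fusion step described above — interleaving the contour slices of the two detection units in order of displacement magnitude — can be sketched as follows. This is an illustrative sketch only; the slice representation (a displacement paired with a contour label) and all names are assumptions, not part of the disclosure.

```python
def fuse_contours(first_slices, second_slices):
    """Fuse contour slices from the first and second detection units
    into one three-dimensional profile by ordering all slices
    according to their displacement magnitude.
    Each slice is a (displacement, contour_data) pair."""
    return sorted(first_slices + second_slices, key=lambda s: s[0])

# Slices from the first unit (A*) and second unit (B*), by displacement:
first = [(0.0, "A0"), (2.0, "A1"), (4.0, "A2")]
second = [(1.0, "B0"), (3.0, "B1")]
print(fuse_contours(first, second))
# slices interleave in displacement order: A0, B0, A1, B1, A2
```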
It should be noted that while the operations of the method of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the order in which the steps depicted in the flowcharts are executed may be changed. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be broken down into multiple steps.
It should be understood that the terms "first," "second," "third," and "fourth," etc. in the claims, description, and drawings of the present disclosure are used to distinguish between different objects and are not used to describe a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
While various embodiments of the present disclosure have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous modifications, changes, and substitutions will occur to those skilled in the art without departing from the spirit and scope of the present disclosure. It should be understood that various alternatives to the embodiments of the disclosure described herein may be employed in practicing the disclosure. It is intended that the following claims define the scope of the disclosure and that equivalents or alternatives within the scope of these claims be covered thereby.
Claims (13)
1. A vehicle feature extraction method, comprising:
acquiring first detection information of a vehicle in a traveling process through a first detection unit;
acquiring second detection information of the vehicle in the traveling process through a second detection unit; and
acquiring feature information of the vehicle according to the first detection information and the second detection information; preferably, determining three-dimensional contour information of the vehicle from the first detection information and the second detection information; and
acquiring the feature information of the vehicle based on the three-dimensional contour information.
2. The method of claim 1, wherein,
the first detection information comprises first detection position information and first detection energy information;
the second detection information includes second detection position information and second detection energy information.
3. The method of any of claims 1-2, wherein determining the three-dimensional contour information of the vehicle based on the first detection information and the second detection information comprises:
converting the first detection position information in the first detection information into first coordinate information;
converting the second detection position information in the second detection information into second coordinate information;
determining the speed of the vehicle at different moments according to the first detection position information, the second detection position information and a first preset angle; and
determining three-dimensional contour information of the vehicle based on the first coordinate information, the second coordinate information, and the speed of the vehicle.
4. The method of claim 3, wherein determining the speed of the vehicle at different moments according to the first detection position information, the second detection position information, and the first preset angle comprises:
acquiring a plurality of different feature points of the vehicle according to the first detection position information, the first detection energy information and/or the second detection position information and the second detection energy information;
determining the moving distances and the time differences of the plurality of characteristic points according to the first detection position information, the second detection position information and the first preset angle; and
determining a speed of the vehicle based on the moving distances and the time differences of the plurality of feature points.
5. The method of claim 3 or 4, wherein determining the three-dimensional contour information of the vehicle based on the first coordinate information, the second coordinate information, and the speed of the vehicle comprises:
determining first displacement information and second displacement information of the vehicle according to the speed of the vehicle; and
determining three-dimensional contour information of the vehicle based on the first coordinate information, the second coordinate information, the first displacement information, and the second displacement information.
6. The method of claim 5, wherein determining three-dimensional contour information of the vehicle based on the first coordinate information, the second coordinate information, the first displacement information, and the second displacement information comprises:
combining the first coordinate information and the first displacement information into first contour information;
combining the second coordinate information and the second displacement information into second contour information; and
sequentially inserting the second contour information into the first contour information according to the magnitudes of the first displacement information and the second displacement information to form the three-dimensional contour information.
7. The method of claim 6, wherein the plurality of feature points comprises at least: a front end point of the vehicle head, a rear end point of the vehicle tail, a front end point of a tire, a rear end point of a tire, a front end point of a reflector, a rear end point of a reflector, a start point of a vehicle body reflective strip, and an end point of the vehicle body reflective strip.
8. The method of claim 1, wherein the feature information includes one or more of axle group information, drive shaft information, and air suspension information.
9. The method of any of claims 1-8, wherein obtaining the axle group information of the vehicle based on the three-dimensional contour information comprises:
determining a start position, an end position, and a length of each axle group of the vehicle based on the three-dimensional contour information; and
determining the axle group information of the vehicle based on the start position, the end position, and the length of each axle group and/or the number of projection overlaps of adjacent tires of the vehicle.
10. The method according to any one of claims 1-8, wherein acquiring the drive shaft information of the vehicle based on the three-dimensional contour information comprises:
determining a start position and an end position of a drive shaft of the vehicle based on the three-dimensional contour information;
determining projected distances and heights from the ground of a plurality of feature points located between the start position and the end position of the drive shaft; and
fitting based on the projected distances and the heights to obtain the drive shaft information of the vehicle.
11. The method of any of claims 1-10, wherein obtaining the air suspension information of the vehicle based on the three-dimensional contour information comprises:
determining the drive shaft information of the vehicle based on the three-dimensional contour information; and
determining the air suspension information based on the drive shaft information.
12. A vehicle feature extraction system, comprising:
a first detection unit arranged to acquire first detection information of the vehicle during travel;
a second detection unit arranged to acquire second detection information of the vehicle during travel; and
a data processing unit arranged to acquire feature information of the vehicle from the first detection information and the second detection information;
preferably, the first detection surface of the first detection unit is perpendicular to the road surface and perpendicular to the vehicle driving direction, and the second detection surface of the second detection unit is perpendicular to the road surface and arranged at a first preset angle with the vehicle driving direction.
13. The extraction system according to claim 12, wherein a distance in the vehicle traveling direction between the first detection unit and the second detection unit is set to 0 mm to 1000 mm, and the vehicle passes through the first detection unit first and then through the second detection unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011566214.7A CN114688989B (en) | 2020-12-25 | 2020-12-25 | Vehicle feature extraction method and extraction system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114688989A true CN114688989A (en) | 2022-07-01 |
CN114688989B CN114688989B (en) | 2024-07-05 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116026440A (en) * | 2022-12-05 | 2023-04-28 | 北京万集科技股份有限公司 | Method and system for detecting vehicle information |
CN116147745A (en) * | 2022-12-30 | 2023-05-23 | 北京万集科技股份有限公司 | Vehicle overrun detection method and device, storage medium and electronic device |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101859491A (en) * | 2009-04-10 | 2010-10-13 | 张高军 | Method for obtaining longitudinal profile pattern of mobile vehicle and device thereof |
CN103925872A (en) * | 2013-12-23 | 2014-07-16 | 中国神华能源股份有限公司 | Laser scanning measurement device and method for acquiring spatial distribution of target objects |
CN103942529A (en) * | 2013-01-21 | 2014-07-23 | 卡波施交通公司 | Method and device for measuring a height profile of a vehicle passing on a road |
CN104064030A (en) * | 2014-07-01 | 2014-09-24 | 武汉万集信息技术有限公司 | Vehicle type identification method and vehicle type identification system |
CN104361752A (en) * | 2014-10-27 | 2015-02-18 | 北京握奇智能科技有限公司 | Laser scanning based vehicle type recognition method for free flow charging |
CN104966399A (en) * | 2015-06-03 | 2015-10-07 | 武汉万集信息技术有限公司 | Vehicle speed detecting device and method |
KR20160014363A (en) * | 2014-07-29 | 2016-02-11 | (주)뉴컨스텍 | Vehicle classifier using 3D camera |
CN207850506U (en) * | 2017-12-25 | 2018-09-11 | 北京万集科技股份有限公司 | One kind is weighed automobile overweight detecting system |
CN108819980A (en) * | 2018-06-27 | 2018-11-16 | 马鞍山市雷狮轨道交通装备有限公司 | A kind of device and method of train wheel geometric parameter on-line dynamic measurement |
CN111964608A (en) * | 2020-10-20 | 2020-11-20 | 天津美腾科技股份有限公司 | Automobile outline dimension detection method and automobile outline dimension detection device |
CN112014855A (en) * | 2020-07-20 | 2020-12-01 | 江西路通科技有限公司 | Vehicle outline detection method and system based on laser radar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||