CN115461258A - Method for object avoidance during autonomous navigation - Google Patents
Method for object avoidance during autonomous navigation
- Publication number
- CN115461258A CN115461258A CN202180030320.XA CN202180030320A CN115461258A CN 115461258 A CN115461258 A CN 115461258A CN 202180030320 A CN202180030320 A CN 202180030320A CN 115461258 A CN115461258 A CN 115461258A
- Authority
- CN
- China
- Prior art keywords
- time
- autonomous vehicle
- velocity
- calculating
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 257
- 230000033001 locomotion Effects 0.000 claims abstract description 431
- 230000009471 action Effects 0.000 claims abstract description 86
- 230000001133 acceleration Effects 0.000 claims description 46
- 230000004044 response Effects 0.000 claims description 32
- 238000001514 detection method Methods 0.000 claims description 10
- 238000012886 linear function Methods 0.000 claims description 5
- 230000010354 integration Effects 0.000 claims description 2
- 230000003213 activating effect Effects 0.000 claims 1
- 230000006870 function Effects 0.000 description 149
- 239000002131 composite material Substances 0.000 description 30
- 230000008569 process Effects 0.000 description 25
- 230000008859 change Effects 0.000 description 22
- 230000006872 improvement Effects 0.000 description 12
- 239000013598 vector Substances 0.000 description 12
- 230000003042 antagonistic effect Effects 0.000 description 9
- 238000010801 machine learning Methods 0.000 description 6
- 238000005259 measurement Methods 0.000 description 6
- 230000007423 decrease Effects 0.000 description 4
- 230000002123 temporal effect Effects 0.000 description 4
- 230000001186 cumulative effect Effects 0.000 description 3
- 230000001934 delay Effects 0.000 description 3
- 230000003111 delayed effect Effects 0.000 description 3
- 238000009795 derivation Methods 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 238000007670 refining Methods 0.000 description 3
- 230000004931 aggregating effect Effects 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 2
- 239000010426 asphalt Substances 0.000 description 2
- 238000012512 characterization method Methods 0.000 description 2
- 238000004891 communication Methods 0.000 description 2
- 230000001747 exhibiting effect Effects 0.000 description 2
- 230000002441 reversible effect Effects 0.000 description 2
- 238000005070 sampling Methods 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 230000001131 transforming effect Effects 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 238000009825 accumulation Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000001276 controlling effect Effects 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000036316 preload Effects 0.000 description 1
- 230000001105 regulatory effect Effects 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 239000000758 substrate Substances 0.000 description 1
- 230000008719 thickening Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W10/00—Conjoint control of vehicle sub-units of different type or different function
- B60W10/18—Conjoint control of vehicle sub-units of different type or different function including control of braking systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/09—Taking automatic action to avoid collision, e.g. braking and steering
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0956—Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/14—Adaptive cruise control
- B60W30/143—Speed control
- B60W30/146—Speed limiting
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/18—Propelling the vehicle
- B60W30/18009—Propelling the vehicle related to particular drive situations
- B60W30/181—Preparing for stopping
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/06—Road conditions
- B60W40/068—Road friction coefficient
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/10—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
- B60W40/105—Speed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0011—Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/50—Systems of measurement based on relative movement of target
- G01S17/58—Velocity or trajectory determination systems; Sense-of-movement determination systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/04—Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/165—Anti-collision systems for passive traffic, e.g. including static obstacles, trees
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/408—Radar; Laser, e.g. lidar
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2552/00—Input parameters relating to infrastructure
- B60W2552/40—Coefficient of friction
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/404—Characteristics
- B60W2554/4041—Position
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/404—Characteristics
- B60W2554/4042—Longitudinal speed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/404—Characteristics
- B60W2554/4043—Lateral speed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/80—Spatial relation or speed relative to objects
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Y—INDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
- B60Y2300/00—Purposes or special features of road vehicle drive control systems
- B60Y2300/08—Predicting or avoiding probable or impending collision
- B60Y2300/095—Predicting travel path or likelihood of collision
- B60Y2300/0954—Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/12—Acquisition of 3D measurements of objects
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- Electromagnetism (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Mathematical Physics (AREA)
- Chemical & Material Sciences (AREA)
- Combustion & Propulsion (AREA)
- Traffic Control Systems (AREA)
- Navigation (AREA)
- Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
A method for autonomous navigation of an autonomous vehicle includes: estimating a stop duration for the autonomous vehicle to reach a full stop based on a current speed of the autonomous vehicle; calculating a critical time offset from the current time by the stop duration; detecting an object in a scan image of a field near the autonomous vehicle, the scan image captured by a sensor on the autonomous vehicle at the current time; deriving a current position and motion of the object based on the scan image; calculating a future state boundary based on the current position and motion of the object and a set of predefined motion limit assumptions for common objects near public roads, the future state boundary representing a ground area that the object may enter by the critical time; and selecting a navigation action to avoid entering the future state boundary before the critical time.
Description
Cross Reference to Related Applications
This application claims priority to U.S. provisional patent application No. 62/980,131, filed on February 21, 2020, U.S. provisional patent application No. 62/980,132, filed on February 21, 2020, and U.S. provisional patent application No. 63/064,316, filed on August 11, 2020, each of which is incorporated by reference herein in its entirety.
Technical Field
The present invention relates generally to the field of autonomous vehicles, and more particularly to a new and useful method for object avoidance during autonomous navigation in the field of autonomous vehicles.
Brief Description of Drawings
FIG. 1 is a flow chart representation of a method;
FIG. 2 is a flow chart representation of a variation of the method;
FIGS. 3A, 3B, 3C are flow chart representations of a variation of the method;
FIG. 4 is a flow chart representation of a variation of the method;
FIG. 5 is a flow chart representation of a variation of the method;
FIG. 6 is a flow chart representation of a variation of the method; and
fig. 7 is a flow chart representation of a variation of the method.
Detailed Description
The following description of the embodiments of the present invention is not intended to limit the invention to these embodiments, but rather to enable any person skilled in the art to make and use the invention. The variations, configurations, embodiments, example embodiments, and examples described herein are optional and do not preclude their described variations, configurations, embodiments, example embodiments, and examples. The invention described herein may include any and all combinations of these variations, configurations, embodiments, example embodiments, and examples.
1. Method
As shown in fig. 1, a method S100 for object avoidance during autonomous navigation includes: estimating, at the autonomous vehicle at a first time, a stop duration for the autonomous vehicle to reach a full stop based on a speed of the autonomous vehicle at the first time in block S110; in block S112, storing a critical time offset from the first time by the stop duration; in block S120, detecting an object in a first scan image of a field around the autonomous vehicle captured at approximately the first time; and, in block S122, deriving a first position of the object and a first radial velocity of the object along a first ray from the autonomous vehicle to the object based on the first scan image. The method S100 further includes, in block S130, calculating a first future state boundary representing a ground area accessible to the object from the first time to the critical time based on: the first position of the object at the first time; the first radial velocity of the object; and a maximum assumed angular velocity and a maximum assumed acceleration of a generic object defined for operation of the autonomous vehicle. The method S100 further includes: in block S142, in response to a first distance between the autonomous vehicle at the first time and a perimeter of the future state boundary exceeding a threshold distance, blanking (or "muting") the object from inclusion in the next path planning consideration at the autonomous vehicle; and, in response to the threshold distance exceeding the first distance, computing an access zone around the autonomous vehicle that excludes the future state boundary of the object and, in block S144, executing a navigation action to remain in the access zone from the first time to the critical time.
One variation of method S100 includes: estimating, at the autonomous vehicle at a first time, a stop duration for the autonomous vehicle to reach a full stop based on a speed of the autonomous vehicle at the first time in block S110; in block S112, calculating a first critical time offset from the first time by the stop duration; in block S120, detecting a first object in a first scan image of a field near the autonomous vehicle captured by a sensor on the autonomous vehicle at approximately the first time; in block S122, deriving a first position and a first motion of the first object based on the first scan image; in block S130, calculating a first future state boundary based on the first position of the first object at the first time, the first motion of the first object, and a set of predefined motion limit hypotheses for common objects near public roads, the first future state boundary representing a first ground area that the first object may enter from the first time to the first critical time; and, in block S140, selecting a first navigation action to avoid entering the first future state boundary before the first critical time.
Another variation of method S100 includes: in block S102, accessing a set of predefined motion limit hypotheses for common objects near public roads; in block S104, accessing a scan image containing data captured by a sensor on the autonomous vehicle at a first time; in block S120, identifying a set of points in the scan image representing an object in a field near the autonomous vehicle, each point in the set of points including a position of a surface on the object relative to the autonomous vehicle and a radial velocity of the surface of the object relative to the autonomous vehicle; in block S122, calculating a correlation between the radial velocities and positions of points in the set of points; in block S122, based on the correlation, calculating a function that relates possible tangential velocities of the object and possible angular velocities of the object at the first time; in block S122, calculating a radial velocity of the object at the first time based on the radial velocities of the points in the set of points; in block S130, calculating a future state boundary representing a ground area that the object may enter by a future critical time based on: the set of possible tangential velocities of the object and possible angular velocities of the object at the first time, defined by the function; the radial velocity of the object; and the predefined motion limit hypotheses; and, in block S140, selecting a navigation action to avoid entering the future state boundary before the future critical time.
Yet another variation of method S100 includes, in block S102, accessing a set of predefined motion limit hypotheses for common objects near public roads. This variation of method S100 further includes, for a first scan cycle: in block S104, accessing a first scan image containing data captured by a sensor on the autonomous vehicle at a first time; in block S120, identifying a first set of points in the first scan image representing a first object in the field near the autonomous vehicle, each point in the first set of points including a first range value from the sensor to a surface on the first object, a first azimuthal position of the surface on the first object relative to the sensor, and a first radial velocity of the surface of the first object relative to the sensor; in block S122, calculating a first correlation between the first radial velocities and first azimuthal positions of points in the first set of points; in block S122, based on the first correlation, calculating a first function that relates possible tangential velocities of the first object and possible angular velocities of the first object at the first time; and, in block S122, calculating a first radial velocity of the first object at the first time based on the first radial velocities of the points in the first set of points. This variation of method S100 further includes: in block S110, estimating a first stop duration for the autonomous vehicle to reach a full stop based on a first speed of the autonomous vehicle at the first time; in block S112, calculating a first critical time offset from the first time by the first stop duration; in block S130, calculating a first future state boundary representing a first ground area accessible to the first object by the first critical time based on: the set of possible tangential velocities of the first object and possible angular velocities of the first object at the first time, defined by the first function; the first radial velocity; and the predefined motion limit hypotheses; and, in block S140, selecting a first navigation action to avoid entering the first future state boundary before the first critical time.
2. Applications
Generally, method S100 may be executed by an autonomous vehicle (e.g., an autonomous bus, an autonomous passenger vehicle) to: detect objects in its environment; assign worst-case velocity and acceleration values to these objects based on preloaded maximum motion assumptions for a generic object (or preloaded antagonistic motion limits for a generic antagonistic object); estimate the maximum ground area that each object can enter from the current time to the time at which the autonomous vehicle can brake to a complete stop given its current speed; and either selectively mute the object from route planning consideration if the autonomous vehicle is sufficiently far from this maximum accessible ground area, or account for the object and execute speed and/or steering angle adjustments to avoid entering this maximum accessible ground area in the future.
More specifically, throughout operation, the autonomous vehicle may maintain an estimate of its stop duration, within which the autonomous vehicle can reach a complete stop given its current speed. When the autonomous vehicle first detects an object in its field, the autonomous vehicle may: assign predefined, worst-case velocity and acceleration hypotheses for an antagonistic object to the object; and calculate the maximum ground area that the object can enter, given its current position, under these worst-case velocity and acceleration hypotheses over the current stop duration of the autonomous vehicle (hereinafter the "future state boundary"). If the current position of the autonomous vehicle is sufficiently far from, or otherwise outside of, the future state boundary of the object, the autonomous vehicle: can predict its ability to come to a complete stop before colliding with the object, even under worst-case antagonistic action by the object; and may therefore ignore or mute the object rather than incorporate it into the current path planning decision, instead waiting until the autonomous vehicle later approaches the future state boundary of the object to execute a navigation action to avoid the object. Conversely, if the current position of the autonomous vehicle is near the future state boundary of the object, the autonomous vehicle may decrease its speed (e.g., by a magnitude inversely proportional to the distance from the autonomous vehicle to the perimeter of the future state boundary) in order to: reduce the stop duration of the autonomous vehicle; narrow the future state boundary of the object (which represents the maximum ground area that the object can enter within the current stop duration of the autonomous vehicle); and enable the autonomous vehicle to remain outside of the future state boundary of the object over time. Thus, the autonomous vehicle may execute the blocks of method S100 to inform predictive navigation actions (e.g., speed and/or steering angle adjustments) that maintain spatial and temporal separation from the object sufficient for the autonomous vehicle to come to a complete stop before colliding with the object, even if the object initiates or continues antagonistic action toward the autonomous vehicle immediately after the autonomous vehicle first detects the object in its environment.
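As a rough illustration of this per-object decision, the following sketch implements the mute-or-decelerate logic described above under assumed helper names and an illustrative threshold; the actual thresholds, braking profile, and interfaces are not specified by this description:

```python
# Hypothetical sketch of the per-scan-cycle decision described above.
def plan_for_object(ego_speed_mps: float,
                    distance_to_boundary_m: float,
                    threshold_m: float = 10.0) -> dict:
    """Decide whether to mute an object or slow down for it.

    distance_to_boundary_m: distance from the autonomous vehicle to the
    perimeter of the object's future state boundary (negative if inside).
    """
    if distance_to_boundary_m > threshold_m:
        # Sufficiently far away: ignore ("mute") the object for this scan cycle.
        return {"mute": True, "target_speed_mps": ego_speed_mps}
    # Otherwise decelerate, more aggressively the closer the boundary is.
    proximity = max(0.0, threshold_m - distance_to_boundary_m) / threshold_m
    target_speed = max(0.0, ego_speed_mps * (1.0 - proximity))
    return {"mute": False, "target_speed_mps": target_speed}
```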
Further, the autonomous vehicle may: detect and track the object across subsequent scan images; derive an actual velocity of the object from these scan images (e.g., an absolute velocity of the object based on changes in the position of the object over multiple consecutive scan images and consecutive radial velocity measurements); and replace the worst-case assumption for the velocity of the object with the actual velocity of the object. The autonomous vehicle may then repeat the blocks of method S100 to: recalculate the future state boundary of the object based on the actual velocity of the object and the worst-case acceleration assumption for the generic object; and selectively mute the object from current path planning consideration based on the position of the autonomous vehicle relative to this revised future state boundary.
For example, the autonomous vehicle may: store a worst-case speed and acceleration for a high-performance passenger vehicle or a high-performance motorcycle (e.g., a maximum speed of 50 meters per second and a maximum acceleration of 9 meters per second per second); and apply these worst-case speeds and accelerations to calculate future state boundaries for all detected objects, regardless of the actual type or class of these objects. Thus, the autonomous vehicle may reduce or eliminate reliance on object recognition and other machine learning techniques for identifying the type of an object and distinguishing immutable objects (e.g., signs, poles) from mutable objects (e.g., pedestrians, vehicles) in the field surrounding the autonomous vehicle. More specifically, rather than predicting the future state of an object based on a dynamic model selected according to a predicted object type, the autonomous vehicle may: predict and define the future state of the object based on limited motion data of the object, the current location of the object relative to the autonomous vehicle, and maximum speed and acceleration assumptions for a generic object (e.g., a generic high-performance passenger vehicle); and refine (e.g., narrow) the future state boundary of the object as the autonomous vehicle collects additional velocity data for the object over time.
Thus, by executing the blocks of method S100 to inform path planning decisions, the autonomous vehicle may: reduce or eliminate the need to accurately identify the type or class of objects in its environment; reduce or eliminate this possible source of error in autonomous operation of the autonomous vehicle; and increase the robustness of autonomous operation of the autonomous vehicle, such as against adversarial computer vision attacks or adversarial neural network attacks, or when operating with limited or no a priori training data.
Furthermore, the autonomous vehicle may apply the same detection, tracking, and motion planning decision pathways to both mutable and immutable objects, thereby reducing or eliminating the need to identify the classes of objects (or to classify objects as mutable or immutable) in the environment of the autonomous vehicle and reducing the number of unique computer vision, machine learning, and path planning pipelines executing on the autonomous vehicle. For example, the autonomous vehicle may execute the same detection, tracking, and motion planning decision pathways to predict and handle: objects that may be present in the environment of the autonomous vehicle but are undetectable because they are occluded by other detected objects (e.g., a pedestrian standing behind a utility pole; a passenger vehicle occupying a lane occluded by a tractor trailer in the field of view of the autonomous vehicle); objects entering the field of view of the autonomous vehicle for the first time; and objects persisting in the field of view of the autonomous vehicle.
3. Autonomous vehicle
The autonomous vehicle may include: a set of sensors configured to collect data representative of objects in the field surrounding the autonomous vehicle; local memory storing a navigation map defining a route for execution by the autonomous vehicle and a localization map representing the positions of immutable surfaces along roads; and a controller. The controller may: calculate the position of the autonomous vehicle in real space based on sensor data collected from the set of sensors and the localization map; calculate future state boundaries of objects detected in the sensor data according to the blocks of method S100; select future navigation actions based on these future state boundaries, the real-world location of the autonomous vehicle, and the navigation map; and control actuators (e.g., accelerator, brake, and steering actuators) within the vehicle according to these navigation decisions.
In one implementation, the autonomous vehicle includes a set of 360 ° LIDAR sensors disposed on the autonomous vehicle, such as one LIDAR sensor disposed at a front of the autonomous vehicle and a second LIDAR sensor disposed at a rear of the autonomous vehicle, or a cluster of LIDAR sensors disposed on a roof of the autonomous vehicle. Each LIDAR sensor may output a three-dimensional distance map (or depth image), such as in the form of a 3D point cloud representing the distance between the LIDAR sensor and an external surface within the field of view of the LIDAR sensor, once per rotation of the LIDAR sensor (i.e., once per scan cycle). The autonomous vehicle may additionally or alternatively comprise: a set of infrared emitters configured to project structured light into a field near the autonomous vehicle; a set of infrared detectors (e.g., infrared cameras); and a processor configured to transform the image output by the infrared detector into a depth map of the field.
The autonomous vehicle may additionally or alternatively include a set of color cameras facing outward from the front, rear, and/or sides of the autonomous vehicle. For example, each camera in this set may output a video feed of digital photographic images (or "frames") at a rate of 20 Hz. The autonomous vehicle can also include a set of RADAR sensors facing outward from the autonomous vehicle and configured to detect the presence and speed of objects near the autonomous vehicle. Thus, a controller in the autonomous vehicle may fuse data streams from the LIDAR sensors, color cameras, RADAR sensors, and the like into one scan image per scan cycle, such as in the form of a 3D color map or 3D point cloud containing constellations of points representing roads, sidewalks, vehicles, pedestrians, and the like in the field surrounding the autonomous vehicle.
However, the autonomous vehicle may include any other sensors, and may implement any other scanning, signal processing, and autonomous navigation techniques or models to determine its geospatial position and orientation, sense objects in its vicinity, and select a navigation action based on sensor data collected by these sensors.
3.1 Object position + motion data
In one implementation, the autonomous vehicle includes a sensor that outputs a scan image containing a constellation of points, wherein each point in the scan image: represents the position of a surface in the environment relative to the sensor (or, more generally, relative to the autonomous vehicle); and specifies the velocity of that surface along a ray extending from the sensor (or, more generally, from the autonomous vehicle) to the surface.
In one example, the autonomous vehicle includes a 3D scanning LIDAR sensor configured to detect the distances and relative velocities of surfaces in the field around the autonomous vehicle along rays extending from the sensor (or, more generally, the autonomous vehicle) to these surfaces. In this example, the 3D scanning LIDAR sensor may: represent the position of each surface in the field in spherical coordinates in a coordinate system that defines its origin at the 3D scanning LIDAR sensor (or at a reference location on the autonomous vehicle); and store these coordinates in one scan image per scan cycle (e.g., per rotation) of the sensor. Thus, in this example, the autonomous vehicle may access a scan image containing data captured by a four-dimensional light detection and ranging sensor that: is mounted on the autonomous vehicle; and is configured to generate scan images representing the positions and velocities of surfaces within the field relative to the sensor.
In this example, the autonomous vehicle may include a plurality of such 3D scanning LIDAR sensors, each configured to output one scanned image per scanning cycle. The autonomous vehicle may then fuse the concurrent scan images output by the sensors into a composite scan image for the scan cycle.
Alternatively, the autonomous vehicle may include a set of sensors that capture different types of data and may merge the outputs of these sensors into a scan image that contains points located at the positions of surfaces in the field and annotated with the velocities of these surfaces along rays extending between the autonomous vehicle and these surfaces. For example, the autonomous vehicle may include a 3D scanning LIDAR sensor that: defines a LIDAR field of view; and is configured to generate a 3D point cloud containing a constellation of points during a scan cycle, wherein each point defines the location of an area on a surface in the environment surrounding the autonomous vehicle. In this example, the autonomous vehicle may also include a stationary or scanning RADAR sensor that: defines a RADAR field of view intersecting the LIDAR field of view; and generates a list of objects or surfaces in the RADAR field of view during the scan cycle, wherein each object or surface in this list is annotated with its velocity relative to the RADAR sensor. The autonomous vehicle then merges the concurrent outputs of the LIDAR and RADAR sensors for the scan cycle to annotate points in the 3D point cloud with the velocities of the corresponding objects or surfaces detected by the RADAR sensor.
However, the autonomous vehicle may include any other type or configuration of sensor and may access or construct a scan image that represents the relative position and relative speed of objects or surfaces in the field around the autonomous vehicle during the scan cycle.
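A minimal sketch of how one such "4D" scan point might be represented and converted from spherical to Cartesian coordinates in the sensor frame follows; the field and function names are hypothetical and not drawn from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class ScanPoint:
    range_m: float              # distance from the sensor to the surface
    azimuth_rad: float          # horizontal angle in the sensor frame
    elevation_rad: float        # vertical angle in the sensor frame
    radial_velocity_mps: float  # velocity of the surface along the sensor-to-surface ray

def to_cartesian(p: ScanPoint) -> tuple:
    """Convert a spherical scan point to Cartesian (x, y, z) in the sensor frame."""
    x = p.range_m * math.cos(p.elevation_rad) * math.cos(p.azimuth_rad)
    y = p.range_m * math.cos(p.elevation_rad) * math.sin(p.azimuth_rad)
    z = p.range_m * math.sin(p.elevation_rad)
    return x, y, z
```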
4. Preloaded rules/assumptions
The autonomous vehicle may also store predefined worst case motion assumptions for the generic object. In particular, the autonomous vehicle may store assumptions about the most aggressive (or "worst case") motions and motion variations of any objects that the autonomous vehicle may encounter during operation, and apply these worst case motion assumptions to predict the future state of all objects (e.g., pedestrians, passenger vehicles, trucks, trailers, RVs, motorcycles, street signs, light poles, traffic signals, utility poles, buildings) it encounters throughout operation.
For example, the autonomous vehicle may store: the maximum possible speed of the generic object (e.g., 100 miles per hour; 55 meters per second); and the maximum possible linear acceleration of the generic object in any direction (e.g., 9 meters per second per second). The autonomous vehicle may also store the maximum possible angular velocity of the generic object in any direction, such as an inverse function of the speed of the object. For example, the autonomous vehicle may store a maximum possible angular velocity function that outputs a maximum possible angular velocity of the generic object, about its center, that decreases as the linear speed of the generic object increases. Thus, in this example, the maximum possible angular velocity function may predict the greatest maximum possible angular velocity when the generic object is stationary. (For example, a pedestrian standing still may exhibit a greater maximum possible angular velocity than a sports car traveling at 30 meters per second.)
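One way to encode these worst-case assumptions is sketched below; the 55 m/s and 9 m/s² values come from the example above, while the angular velocity cap at standstill and its decay rate are illustrative placeholders, since the shape of the function is not specified here:

```python
# Worst-case motion assumptions for a generic object (example values from the text,
# plus assumed parameters for the angular velocity function).
MAX_SPEED_MPS = 55.0    # maximum possible speed of the generic object
MAX_ACCEL_MPS2 = 9.0    # maximum possible linear acceleration in any direction

def max_angular_velocity(linear_speed_mps: float,
                         omega_at_rest_radps: float = 4.0,
                         decay_speed_mps: float = 10.0) -> float:
    """Maximum assumed angular velocity (rad/s) of a generic object about its center.

    Decreases as linear speed increases: a stationary pedestrian can turn faster
    about its center than a sports car traveling at 30 m/s.
    """
    return omega_at_rest_radps / (1.0 + linear_speed_mps / decay_speed_mps)
```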
The autonomous vehicle may also store object avoidance rules, such as a minimum temporal or spatial margin between the autonomous vehicle and a future state boundary of any object in the vicinity of the autonomous vehicle.
However, the autonomous vehicle may store and implement any other predefined worst-case motion assumptions and/or object avoidance rules for the generic object.
In addition, the autonomous vehicle may retrieve these predefined worst-case motion hypotheses and/or object avoidance rules as set by an operator or stakeholder affiliated with the autonomous vehicle or with the location in which the autonomous vehicle operates. For example, a fleet manager or government official may assign such values to a fleet of autonomous vehicles or to the operation of all autonomous vehicles within a municipality, city, county, district, state, region, country, or the like.
5. Stopping distance and stopping duration
Blocks S110 and S112 of method S100 recite: estimating, at the autonomous vehicle at a first time, a stop duration for the autonomous vehicle to reach a full stop based on a speed of the autonomous vehicle at the first time; and storing a critical time offset from the first time by the stop duration. Generally, in blocks S110 and S112, the autonomous vehicle estimates the future time and/or distance at which the autonomous vehicle could reach a full stop, given its current speed, if the autonomous vehicle were to immediately initiate an emergency stop procedure. For example, the autonomous vehicle may implement a preloaded function that directly maps vehicle speed to stop duration and/or stop distance.
In another implementation, the autonomous vehicle estimates road surface quality based on data collected by various sensors in the autonomous vehicle. For example, the autonomous vehicle may: implement computer vision and machine learning techniques to detect the presence of puddles or standing water in a color image; and estimate the wetness of the road surface based on the presence and distribution of these puddles or pools. In another example, the autonomous vehicle may: implement computer vision and machine learning techniques to extract color data and texture information from a color image captured by a camera on the autonomous vehicle; and interpret the type of road surface surrounding the autonomous vehicle, such as: well-maintained asphalt; damaged asphalt (e.g., potholes); smooth concrete worn over time; textured concrete; gravel; dirt; grass; or standing water. In this implementation, the autonomous vehicle may then calculate or retrieve a coefficient of friction for the road surface based on the estimated wetness and surface type of the road. The autonomous vehicle may additionally or alternatively implement a braking efficiency model for the autonomous vehicle to calculate a braking efficiency coefficient based on: mileage since the last brake service of the autonomous vehicle; and/or mileage since the last tire change of the autonomous vehicle. The autonomous vehicle may then implement a braking model to estimate the stopping distance and/or stopping duration based on: the current vehicle speed; the coefficient of friction; and/or the braking efficiency coefficient.
However, the autonomous vehicle may implement any other method or technique to estimate the current stopping distance and/or current stopping duration of the autonomous vehicle.
The autonomous vehicle may also add safety margins to these stopping distance and/or stopping duration values, such as: by adding three meters to the stopping distance; by adding two seconds to the stop duration; or by multiplying these values by a safety margin (e.g. "1.2").
The autonomous vehicle may then calculate the critical time, representing the earliest time at which the autonomous vehicle can brake to a full stop, by adding the stop duration to the current time.
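A minimal sketch of this stop-duration and critical-time estimate, assuming a simple constant-deceleration braking model; the friction coefficient, braking efficiency coefficient, and safety margins below are placeholders rather than values from the patent:

```python
G_MPS2 = 9.81  # gravitational acceleration

def estimate_stop(ego_speed_mps: float,
                  friction_coeff: float = 0.7,      # from road surface type / wetness
                  braking_efficiency: float = 0.9,  # from the braking efficiency model
                  margin_s: float = 2.0,
                  margin_m: float = 3.0) -> tuple:
    """Return (stop_duration_s, stop_distance_m) under constant deceleration."""
    decel = friction_coeff * braking_efficiency * G_MPS2
    stop_duration = ego_speed_mps / decel + margin_s
    stop_distance = ego_speed_mps ** 2 / (2.0 * decel) + margin_m
    return stop_duration, stop_distance

def critical_time(current_time_s: float, stop_duration_s: float) -> float:
    """Earliest time at which the autonomous vehicle can be at a full stop."""
    return current_time_s + stop_duration_s
```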
6. Scanning images, object detection and object motion
Blocks S120 and S122 of method S100 recite: detecting an object in a first scan image of a field around the autonomous vehicle captured at approximately a first time; and deriving, based on the first scan image, a first position of the object and a first radial velocity along a first ray from the autonomous vehicle to the object. Generally, in blocks S120 and S122, the autonomous vehicle may: access a new scan image output by the LIDAR sensor, as described above; detect objects in the new scan image that were not detected in the previous scan image; and extract from the new scan image a limited set of high-certainty motion characteristics of each such object (e.g., its radial velocity relative to the autonomous vehicle).
In one implementation, after receiving (or generating) a scan image for the current scan cycle, the autonomous vehicle executes an object detection technique to associate sets of points in the scan image with discrete objects in the field around the autonomous vehicle. For example, the autonomous vehicle may: aggregate a set of points that cluster at similar depths from the autonomous vehicle and are labeled with self-consistent velocities (e.g., range rates, azimuthal velocities); and associate this set of points with one object in the field.
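A simplified sketch of grouping points into objects by proximity in position and consistency in radial velocity is shown below; the distance and velocity tolerances are assumptions, and a production system would likely use a more robust clustering method:

```python
import numpy as np

def cluster_points(points_xyz: np.ndarray,
                   radial_velocities: np.ndarray,
                   max_gap_m: float = 1.0,
                   max_dv_mps: float = 1.5) -> list:
    """Greedily group points: a point joins a cluster if it lies within max_gap_m
    of a cluster member and its radial velocity is within max_dv_mps of the
    cluster's mean radial velocity."""
    clusters = []  # each cluster is a list of point indices
    for i in range(len(points_xyz)):
        placed = False
        for cluster in clusters:
            dists = np.linalg.norm(points_xyz[cluster] - points_xyz[i], axis=1)
            v_mean = radial_velocities[cluster].mean()
            if dists.min() < max_gap_m and abs(radial_velocities[i] - v_mean) < max_dv_mps:
                cluster.append(i)
                placed = True
                break
        if not placed:
            clusters.append([i])
    return clusters
```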
The autonomous vehicle may then extract from the scan image a radial velocity (or "rate of change of distance") of the object along the ray extending from the autonomous vehicle to the object (hereinafter the "radial direction") and an angular velocity of the object relative to the autonomous vehicle. For example, the autonomous vehicle may: transform the radial velocities of the points defining the object into absolute velocities in an absolute reference frame based on the position and velocity of the autonomous vehicle in that reference frame at the current time; and calculate an angular velocity (or "yaw rate") of the object about its center in the absolute reference frame during the current scan cycle based on the difference between the absolute radial velocities of the leftmost and rightmost points in the set of points associated with the object. In this example, the autonomous vehicle may also: average the radial velocities stored in a subset of points near the centroid of the set of points defining the object; and store this average radial velocity as the radial velocity of the object relative to the autonomous vehicle in the radial direction, along the ray from the center of the autonomous vehicle to the centroid of the set of points. (The autonomous vehicle may also transform the radial velocity of the object relative to the autonomous vehicle into an absolute velocity of the object in the radial direction based on the velocity and angular velocity of the autonomous vehicle during the scan cycle.)
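The sketch below illustrates, in two dimensions, how a radial velocity and an approximate yaw rate might be extracted from one such point group: averaging the radial velocities of points near the centroid, and dividing the spread in radial velocity between the leftmost and rightmost points by the object's visible width. The function and parameter names are hypothetical:

```python
import numpy as np

def object_radial_velocity_and_yaw(points_xy: np.ndarray,
                                   radial_velocities: np.ndarray,
                                   core_fraction: float = 0.3) -> tuple:
    """Estimate (radial_velocity_mps, yaw_rate_radps) of an object from its 2D point group."""
    centroid = points_xy.mean(axis=0)
    # Average the radial velocities of the points nearest the centroid.
    d = np.linalg.norm(points_xy - centroid, axis=1)
    core = np.argsort(d)[: max(1, int(core_fraction * len(points_xy)))]
    v_radial = radial_velocities[core].mean()
    # Approximate yaw rate from the difference in radial velocity between the
    # leftmost and rightmost points across the object's visible width.
    azimuth = np.arctan2(points_xy[:, 1], points_xy[:, 0])
    left, right = int(np.argmin(azimuth)), int(np.argmax(azimuth))
    width = np.linalg.norm(points_xy[left] - points_xy[right])
    yaw_rate = 0.0 if width < 1e-6 else (radial_velocities[left] - radial_velocities[right]) / width
    return v_radial, yaw_rate
```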
The autonomous vehicle may repeat this process for other sets of points in the scan image representing other objects in the field around the autonomous vehicle.
6.1 Object tracking
The autonomous vehicle may also implement object tracking techniques to: link a set of points representing a particular object in the current scan image to a similar set of points detected in the previous scan image; and thereby track these sets of points, and the object they represent, across the two scan images. However, if the autonomous vehicle fails to match a set of points detected in the current scan image to a set of points at a similar position and velocity in the previous scan image, the autonomous vehicle may label the set of points in the current scan image as a new object (i.e., an object that first entered the field of view of the autonomous vehicle during the current scan cycle).
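A sketch of this frame-to-frame association using nearest-centroid matching with a gating distance; real trackers typically add motion prediction and globally optimal assignment, and the gate value here is an assumption:

```python
import numpy as np

def associate_objects(prev_centroids: np.ndarray,
                      curr_centroids: np.ndarray,
                      gate_m: float = 2.0) -> list:
    """Return (prev_index or None, curr_index) pairs; None marks a new object."""
    matches = []
    used_prev = set()
    for j in range(len(curr_centroids)):
        if len(prev_centroids) == 0:
            matches.append((None, j))  # nothing to match against: new object
            continue
        d = np.linalg.norm(prev_centroids - curr_centroids[j], axis=1)
        i = int(np.argmin(d))
        if d[i] < gate_m and i not in used_prev:
            used_prev.add(i)
            matches.append((i, j))     # tracked object
        else:
            matches.append((None, j))  # first seen during the current scan cycle
    return matches
```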
7. Bounded future states: new object
Block S130 of method S100 recites calculating a first future state boundary, representing a ground area accessible to the object from the first time to the critical time, based on: the first position of the object at the first time; the first radial velocity of the object; and the maximum assumed velocity, maximum assumed angular velocity, and maximum assumed acceleration of the generic object defined for operation of the autonomous vehicle. Generally, in block S130, the autonomous vehicle may combine the limited motion data of the object derived from the current scan image in which the object is first detected with worst-case assumptions for antagonistic motion by the object in order to calculate the extent of the ground area that the object may enter from the current time to the critical time (i.e., within the subsequent stop duration), and store this accessible ground area as the future state boundary of the object.
More specifically, when the autonomous vehicle first detects an object in a scan image, the autonomous vehicle may: estimate the position of the center of the object (near the centroid of the points associated with the object in the scan image) relative to the autonomous vehicle; derive a yaw rate of the object relative to the autonomous vehicle based on the velocity values stored in the set of points associated with the object in the scan image; and derive the velocity of the object in the radial direction (i.e., along the ray extending from the autonomous vehicle to the object), as described above. However, the scan image in which the autonomous vehicle first detects the object may not contain sufficient data for the autonomous vehicle to derive the absolute velocity of the object or the velocity of the object perpendicular to the radial direction (hereinafter the "azimuthal direction"). Thus, the autonomous vehicle may apply worst-case assumptions for the current velocity of the object and the future acceleration of the object to calculate a future state boundary representing the ground area that the object may enter from the current time to the critical time in a worst-case scenario.
In one implementation, the autonomous vehicle calculates the maximum possible velocity of the object in each of a number of directions radially offset about the center of the object (e.g., one hundred directions offset by 3.6°) based on: the assumed maximum possible speed of the generic object; and the velocity of the object in the radial direction. For a first direction in this set, the autonomous vehicle then calculates a first integral over time, from the current time to the critical time, of the maximum possible velocity of the object in the first direction given the measured angular velocity of the object. For the first direction, the autonomous vehicle further: implements an acceleration rule function, linking the angular velocity and radial velocity to the maximum possible rate of acceleration of the generic object in the first direction, to estimate the maximum possible rate of acceleration of the object in the first direction; and calculates a second (double) integral over time, from the current time to the critical time, of this maximum possible rate of acceleration of the object in the first direction, limited by the maximum possible velocity of the generic object. The autonomous vehicle then sums the first integral and the second integral to calculate the maximum possible distance traversed by the object in the first direction and locates a first vertex of the future state boundary on a ray extending from the center of the object along the first direction, offset from the center of the object by this maximum possible traversal distance. The autonomous vehicle then: repeats this process for each other direction in the set to define a vertex of the future state boundary in each of these directions; calculates a spline through these vertices; and stores the area encompassed by this spline as the future state boundary of the object.
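The following sketch illustrates the construction of such a boundary as a polygon of per-direction vertices. It simplifies the description above by assuming the same worst-case initial speed in every direction and a constant worst-case acceleration (rather than the direction-dependent acceleration rule function), and it omits the spline fit; the constants reuse the example values given earlier:

```python
import math

MAX_SPEED_MPS = 55.0   # assumed maximum possible speed of the generic object
MAX_ACCEL_MPS2 = 9.0   # assumed maximum possible acceleration of the generic object

def future_state_boundary(center_xy: tuple,
                          initial_speed_mps: float,
                          horizon_s: float,
                          n_directions: int = 100) -> list:
    """Return polygon vertices bounding where the object could travel within horizon_s."""
    # Time spent accelerating from the assumed initial speed up to the speed cap.
    t_accel = min(max(0.0, (MAX_SPEED_MPS - initial_speed_mps) / MAX_ACCEL_MPS2), horizon_s)
    # Distance covered while accelerating, then while saturated at the speed cap.
    d = (initial_speed_mps * t_accel
         + 0.5 * MAX_ACCEL_MPS2 * t_accel ** 2
         + MAX_SPEED_MPS * (horizon_s - t_accel))
    vertices = []
    for k in range(n_directions):
        theta = 2.0 * math.pi * k / n_directions  # directions offset by 3.6 degrees
        vertices.append((center_xy[0] + d * math.cos(theta),
                         center_xy[1] + d * math.sin(theta)))
    return vertices
```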
Thus, because the autonomous vehicle has limited information about the velocity of the object during this first scan cycle in which the object is visible, the autonomous vehicle may implement worst-case assumptions for the current velocity and future acceleration of the object in order to predict the worst-case ground area that the object may enter from the current time to the critical time (i.e., the earliest time at which the autonomous vehicle can brake to a complete stop). The autonomous vehicle may then define a safe ground area outside of the future state boundary of the object and execute navigation actions to remain within this safe ground area, such that any collision between the autonomous vehicle and the object could occur only after the autonomous vehicle reaches a complete stop (and such that a collision with the object would be entirely the responsibility of the object rather than of the autonomous vehicle). In particular, if the current position of the autonomous vehicle falls within a vicinity of (e.g., within a threshold distance of) the future state boundary, the autonomous vehicle may initiate an avoidance maneuver to avoid the future state boundary of the object. Stated otherwise, the autonomous vehicle may: verify, with very high confidence, that the autonomous vehicle will avoid colliding with the object, even assuming the most antagonistic action by the object, if the autonomous vehicle continues to travel along its current trajectory and at its current speed until at least the next scan cycle; and therefore mute the object from path planning decisions for the current scan cycle.
For example, if the current position of the autonomous vehicle is very far from the future state boundary of the object, the autonomous vehicle may mute the object from path planning decisions for the current scan cycle. However, if the current position of the autonomous vehicle is within a threshold distance (e.g., 10 meters, 4 seconds) of the future state boundary of the object, the autonomous vehicle may include the object in path planning decisions for the current scan cycle, such as by: decelerating the autonomous vehicle at a magnitude and/or rate inversely proportional to the distance from the autonomous vehicle to the perimeter of the future state boundary of the object; and/or adjusting the steering angle of the autonomous vehicle to move the trajectory of the autonomous vehicle away from the future state boundary of the object.
8. Bounded future states: existing object
In general, the autonomous vehicle may capture images with relatively high resolution at relatively long distances, such that when the autonomous vehicle first detects an object in a scan image, the autonomous vehicle is typically located at a distance significantly outside the future state boundary computed for that object. Thus, the autonomous vehicle may typically blank the object during the scan cycle in which the autonomous vehicle first detects the object without incorporating it into path planning decisions. However, the autonomous vehicle may also track the object over subsequent scan images, derive additional motion characteristics of the object from these scan images, update the future state boundary of the object accordingly, and selectively blank or account for the object during these subsequent scan cycles based on the concurrent location of the autonomous vehicle and the refined future state boundary of the object.
In one implementation, the autonomous vehicle captures a second scan image during a second scan cycle subsequent to the first scan cycle in which the autonomous vehicle first detected the object, as described above. The autonomous vehicle then implements the above-described methods and techniques to: deriving additional motion characteristics of the object (e.g., velocity in the azimuthal direction, angular velocity, and absolute velocity) from the second scan image and from the difference between the first scan image and the second scan image; replacing the worst-case assumptions for the velocity of the object with these derived motion data; and recalculating the future state boundary of the object accordingly.
In one implementation, an autonomous vehicle: accessing a second scan image captured during a second scan cycle subsequent to the first scan cycle; implementing an object tracking technique to associate a set of points in the second scan image with the object detected in the first scan image; estimating the center of the object in the first image and the second image; extracting a first position of the object at a first time of the first scan cycle from the first scan image; extracting a second position of the object at a second time of the second scan cycle from the second scan image; calculating the spatial distance between the first and second positions; and estimating a current speed of the object relative to the autonomous vehicle by dividing the spatial distance by the time interval between the first and second scan cycles.
( However, because the range of the object represented by the set of points in the first and second images may be different, and because the time interval between the first and second scan periods may be short (e.g., 10 milliseconds), the change in position of the object from the first scan period to the second scan period may be prone to significant errors. More specifically, the uncertainty of the derived object velocity may be relatively high compared to the radial velocity of the object extracted from the current scan image. Thus, the autonomous vehicle may multiply the calculated speed of the object relative to the autonomous vehicle by an error margin, such as "1.5". Further, as the autonomous vehicle tracks the object over multiple consecutive scan cycles, the autonomous vehicle may calculate a combination (e.g., a weighted average) of the derived velocities of the object in order to reduce some error in the calculation. )
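A minimal sketch of this speed estimate, including the error margin mentioned above, might look as follows; the function name and arguments are hypothetical.

```python
import math

def relative_speed_from_two_scans(center_t0, center_t1, t0_s, t1_s, error_margin=1.5):
    """Estimate the object's speed relative to the autonomous vehicle from the
    displacement of its estimated center between two scan cycles, inflated by
    an error margin to absorb tracking and extent-change noise."""
    dx = center_t1[0] - center_t0[0]
    dy = center_t1[1] - center_t0[1]
    return (math.hypot(dx, dy) / (t1_s - t0_s)) * error_margin
```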
In this implementation, the autonomous vehicle may also: transforming the current speed of the object relative to the autonomous vehicle (adjusted by an error margin) into an absolute speed of the object based on the speed of the autonomous vehicle during the time interval; implementing the above-described methods and techniques to calculate the velocity of the object in the radial direction and the angular velocity of the object based on the velocity values contained in the points in the group; and deriving the velocity of the object in the azimuthal direction (perpendicular to the radial direction) based on the absolute velocity of the object and the velocity of the object in the radial direction.
Thus, the autonomous vehicle may derive a more complete motion profile (such as including true absolute velocity) of the object during the second scan cycle based on data extracted from the second scan image and the previous scan image.
The autonomous vehicle may then implement the above-described methods and techniques to: recalculating the critical time of the autonomous vehicle based on the speed of the autonomous vehicle during the second scan cycle; and recalculating the future state boundary of the object from the current time to the revised critical time based on the true (absolute or relative) velocity of the object, the angular velocity of the object, and the maximum possible acceleration of the generic object (limited by the maximum possible velocity of the generic object), rather than the worst-case velocity of the generic object.
Thus, because the true speed of an object may often be (significantly) less than the maximum assumed speed of a generic object, the revised future state boundary of the object, thus recalculated based on additional motion data collected during this second scan cycle, may be significantly less than the initial future state boundary of the object calculated by the autonomous vehicle after the first detection of the object.
The autonomous vehicle may then implement the above-described methods and techniques to selectively blank the object during the second scan cycle, without incorporating it into path planning considerations, based on the distance of the autonomous vehicle from the perimeter of the revised future state boundary of the object.
The autonomous vehicle may repeat the process for each subsequent scan image thereby captured by the autonomous vehicle to refine and update the future state boundaries of the object, such as until the autonomous vehicle drives past the object or until the object moves outside of the field of view of the autonomous vehicle.
9. Bounded future states: occluded object
In one variation, the autonomous vehicle may: defining virtual objects in areas of the field around the autonomous vehicle occluded by detected objects (e.g., a passenger vehicle, truck, or building); implementing methods and techniques similar to those described above to assign worst-case motion characteristics to each virtual object and define a virtual future state boundary for the virtual object based on these worst-case motion characteristics; and refining these worst-case motion characteristics of the virtual object, and the correspondingly recalculated virtual future state boundary of the virtual object, as the range of possible motion characteristics of the virtual object shrinks over time. More specifically, in this variation, the autonomous vehicle may anticipate the presence of an undetected object behind a detected object and implement methods and techniques similar to those described above to define the possible future states of the undetected object, and may then either selectively blank the possibility that an undetected object is behind the detected object or execute navigational actions that maintain the autonomous vehicle at a distance, within space that such an undetected object cannot enter.
In one implementation, an autonomous vehicle first detects a first object in a first scanned image that spans an azimuthal distance. The autonomous vehicle then implements the above-described methods and techniques to calculate a future state boundary of the first object based on the motion data extracted from the first scan image, and revise the future state boundary of the first object based on the motion data extracted from the subsequent scan image.
Concurrently, the autonomous vehicle: defining a virtual object that immediately follows the first object (e.g., two meters behind the first object); assigning to the virtual object a worst-case velocity in all directions and a worst-case acceleration in all directions up to the maximum possible velocity of the generic object; and calculating a virtual future state boundary for the virtual object based on these worst-case motion values. For example, the autonomous vehicle may: assume that the virtual object moves in all directions (except the directions currently occluded by the first object) at the maximum possible speed of the generic object at the current time; and calculate a virtual future state boundary for the virtual object based on an integral of this maximum possible speed, in all directions (except the directions currently occluded by the first object), over the current stop time of the autonomous vehicle. The autonomous vehicle may then implement the above-described methods and techniques to verify that the current position of the autonomous vehicle is outside the virtual future state boundary and, accordingly, selectively blank the virtual object during the current scan cycle without incorporating it into path planning considerations.
During the next scan cycle, the autonomous vehicle may similarly: accessing a next scan image; and implementing an object tracking technique to detect the first object in the next scan image and to link the first object in the next scan image to the first object detected in the previous scan image. Then, if the autonomous vehicle fails to detect a new object appearing from behind the first object in the next scan image, the autonomous vehicle may confirm that the azimuthal velocity of the virtual object relative to the first object was insufficient to traverse the azimuthal length of the first object in the field of view of the autonomous vehicle within the time interval from the previous scan cycle to the next scan cycle. More specifically, because the autonomous vehicle failed to detect a new object appearing from behind the first object in the next scan image, the autonomous vehicle may predict that the velocity of the virtual object, relative to the first object and along the azimuthal direction defined by the autonomous vehicle, does not exceed the width of the first object divided by the time interval between the previous scan cycle and the current scan cycle. Thus, in this implementation, the autonomous vehicle may: extracting the azimuthal length of the first object from the current scan image (or an average length of the first object extracted from the previous scan image and the current scan image); deriving an azimuthal velocity of the first object relative to the autonomous vehicle based on the change in position of the first object between the first scan image and the second scan image; and calculating a maximum possible azimuthal velocity of the virtual object, relative to the first object and along the azimuthal direction defined by the autonomous vehicle, between the first scan cycle and the second scan cycle, assuming the virtual object is infinitely narrow, based on the azimuthal length of the first object and the time interval between the first scan cycle and the second scan cycle. The autonomous vehicle may then: calculating a maximum possible azimuthal velocity of the virtual object relative to the autonomous vehicle by summing the azimuthal velocity of the first object relative to the autonomous vehicle and the azimuthal velocity of the virtual object relative to the first object; and then implementing the above-described methods and techniques (for new objects) to compute a virtual future state boundary for the virtual object based on the maximum azimuthal velocity thus estimated for the virtual object.
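The azimuthal-velocity bound for the occluded (virtual) object can be sketched as follows, assuming the virtual object is infinitely narrow; the names and the use of magnitudes for a conservative bound are assumptions for illustration.

```python
def max_virtual_azimuthal_speed(occluder_azimuthal_length_m, dt_s,
                                occluder_azimuthal_speed_mps):
    """Upper-bound the azimuthal speed of a hypothetical object hidden behind an
    occluder: if no new object emerged during dt_s, the hidden object cannot have
    crossed the occluder's azimuthal extent relative to the occluder in that time."""
    v_rel_to_occluder = occluder_azimuthal_length_m / dt_s
    # Conservative bound relative to the autonomous vehicle: sum of magnitudes.
    return v_rel_to_occluder + abs(occluder_azimuthal_speed_mps)
```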
The autonomous vehicle may repeat the process for subsequent scan cycles, including: further revising a maximum possible azimuthal velocity of the virtual object-along an azimuthal direction relative to the autonomous vehicle-based on a length of the first object and a time interval over a set of scanned images in which the autonomous vehicle detected the first object; recalculating maximum possible velocities and accelerations of the virtual object in the respective directions based on the maximum possible azimuthal velocity of the virtual object; and refining the virtual future state boundaries of the virtual object based on these maximum possible velocities, maximum possible accelerations, and maximum possible azimuthal velocities of the virtual object.
The autonomous vehicle may also define a plurality of virtual objects behind the first object and implement similar methods and techniques to define a virtual future state boundary for each of these virtual objects (such as a first virtual object immediately behind the first object, a second virtual object two meters behind the first object, a third virtual object ten meters behind the first object, and a fourth virtual object 20 meters behind the first object, etc.).
For example, upon detecting a utility pole in the first scanned image, the autonomous vehicle may perform the aforementioned methods and techniques to calculate virtual future state boundaries for each of these virtual objects. In this example, if the autonomous vehicle tracks the utility pole over multiple consecutive scan images (e.g., captured within one second) and fails to detect a new object entering the field of view from behind the utility pole, the autonomous vehicle may define a set of virtual future state boundaries indicating that: there is no pedestrian (i.e., first virtual object) immediately behind the utility pole walking faster than 0.2 meters/second in the azimuthal direction; there is no motorcycle (i.e., second virtual object) located in an area about two meters behind the utility pole moving faster than 0.2 meters/second in the azimuthal direction; there is no passenger vehicle (i.e., third virtual object) located in an area about ten meters behind the utility pole moving faster than 1 meter/second in the azimuthal direction; and there is no truck (i.e., fourth virtual object) located in an area about 20 meters behind the utility pole moving faster than 1 meter/second in the azimuthal direction.
Further, in this variation, upon detecting that the second object appears behind the first object and is located at a particular radial distance from the autonomous vehicle, the autonomous vehicle may: transferring the motion characteristics derived therefrom for virtual objects near the particular radial distance from the autonomous vehicle to the second object; these motion characteristics, which are transferred from the virtual object, are then implemented to compute the future state boundary of the second object.
10. Other objects
In general, the autonomous vehicle may concurrently perform multiple instances of the foregoing process to calculate future state boundaries for a number of discrete objects detected in the current scan image, define one or more virtual objects behind each of these detected objects, define virtual future state boundaries for each of these objects, and refine the future state boundaries over time.
11. Entry area
The autonomous vehicle may then select a next navigation action according to a subset of the detected objects and virtual objects, based on the proximity of the autonomous vehicle to the future state boundaries of these detected objects and virtual objects.
In one implementation, an autonomous vehicle: aggregating the future state boundaries calculated during the current scan cycle for the detected objects and virtual objects; and assembling these future state boundaries into a composite future state boundary based on the positions of the detected objects and virtual objects relative to the autonomous vehicle during the current scan cycle, the composite future state boundary defining all positions accessible to the detected objects and virtual objects from the current time to the critical time under worst-case antagonistic motion characteristics of these objects. (In this variant, to reduce the complexity of this composite future state boundary, the autonomous vehicle may also select a subset of future state boundaries whose perimeters fall within a preset minimum temporal or spatial margin of the autonomous vehicle's current location. The autonomous vehicle may then assemble this subset of future state boundaries into the composite future state boundary.)
The autonomous vehicle may then store the inverse (i.e., the complement) of the composite future state boundary as an entry zone for the autonomous vehicle. More specifically, the entry zone may define a ground area within which the autonomous vehicle may operate, at least for the time interval from the current scan cycle to the next scan cycle, while maintaining very high confidence that the autonomous vehicle can brake to a full stop before colliding with any detected object, even if one (or many) of these objects initiates an antagonistic action (e.g., rapid acceleration to the maximum possible speed of a generic object) during the current scan cycle. The autonomous vehicle may also align a georeferenced road network with the entry zone and further remove areas of the entry zone that extend outside of the road areas defined in the road network.
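One way to realize the composite boundary and entry zone is over a boolean occupancy grid of the ground plane, as in this sketch (a simplified stand-in for the geometric assembly described above; the grid representation is an assumption):

```python
import numpy as np

def entry_zone(future_state_masks, road_mask):
    """Compute the entry zone as the complement of the composite future state
    boundary, clipped to the georeferenced road area. Each argument is a
    boolean occupancy grid over the same ground-plane raster."""
    composite = np.zeros_like(road_mask, dtype=bool)
    for mask in future_state_masks:
        composite |= mask          # union of all future state boundaries
    return road_mask & ~composite  # drivable cells no object can reach in time
```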
The autonomous vehicle may then calculate a navigational action that, when performed by the autonomous vehicle, maintains the autonomous vehicle within the entry zone, such as: decelerating the autonomous vehicle to reduce its rate of approach toward an edge of the entry zone if the autonomous vehicle is within a temporal or spatial margin of that edge; and/or adjusting a steering angle of the autonomous vehicle so as to redirect the autonomous vehicle toward a section of the entry zone that extends further from the autonomous vehicle. (The autonomous vehicle may also weight these navigational actions to maintain the autonomous vehicle on or near a specified route.)
Thus, the autonomous vehicle may: utilizing the future state boundaries for newly detected objects, existing detected objects, and virtual objects to calculate a ground area within which the autonomous vehicle may operate for a limited period of time (e.g., the time interval between two consecutive scan cycles) while maintaining a high degree of confidence that the autonomous vehicle can brake to a complete stop prior to colliding with any of these objects; and then defining and performing navigation actions to maintain the autonomous vehicle within this entry zone. The autonomous vehicle may then repeat this process for each subsequent scan cycle during operation.
12. Changing objects and points
Further, because the autonomous vehicle may not rely on object classification or recognition to predict the type of an object and, accordingly, the motion of the object, the autonomous vehicle may define a set of points that spans multiple real objects in the field (such as when these objects move along similar trajectories and at similar speeds). The autonomous vehicle may implement the aforementioned methods and techniques to calculate, refine, and avoid the future state boundary of this "grouped object" until these real objects no longer move along similar trajectories and/or at similar speeds, at which point the autonomous vehicle may: distinguish these objects in the current scan cycle; transfer motion characteristics from the preceding grouped object to each of the distinct objects; and then compute future state boundaries for each of these objects, as described above.
Similarly, the autonomous vehicle may distinguish between two clusters of points representing a single real object and implement the methods and techniques described above to calculate, refine, and avoid future state boundaries for the two clusters, such as until the autonomous vehicle determines that the proximity and the self-consistency of the radial velocities (or rates of change of distance) of the points in the two clusters indicate a single object.
Additionally or alternatively, the autonomous vehicle may implement the aforementioned methods and techniques to compute, refine, and avoid future state boundaries for individual points and smaller clusters of points representing sub-regions of objects in the field around the autonomous vehicle.
13. Motion disambiguation
For the first scanning cycle, one variant of the method S100 shown in fig. 2 comprises: in block S104, accessing a first scan image containing data captured by a sensor on an autonomous vehicle at a first time; identifying, in block S120, a first set of points in the first scan image representing an object in a field in the vicinity of the autonomous vehicle, each point in the first set of points including a first range value from the sensor to a surface on the object, a first azimuthal position of the surface on the object relative to the sensor, and a first radial velocity of the surface of the object relative to the sensor; in block S122, a first correlation between a first radial velocity and a first azimuthal position of a point in the first group of points is calculated; and in block S122, based on the first correlation, calculating a first function relating the possible tangential velocity of the object and the possible angular velocity of the object at the first time. This variation of method S100 similarly includes, for the second scan cycle: accessing a second scan image containing data captured by the sensor at a second time in block S104; in block S120, a second set of points representing objects in the field is identified in a second scan image; in block S122, a second correlation between a second radial velocity and a second azimuthal position of the points in the second set of points is calculated; and in block S122, based on the second correlation, calculating a second function that relates the possible tangential velocity of the object and the possible angular velocity of the object at the second time. This variation of method S100 also includes estimating, in block S124, a second tangential velocity of the object and a second angular velocity of the object relative to the autonomous vehicle at a second time based on an intersection of the first function and the second function.
In this variation, for the first scan cycle, the method S100 may similarly include: in block S104, accessing a first scan image containing data captured by a sensor on an autonomous vehicle at a first time; identifying, in block S120, a first set of points in the first scan image representing an object in a field proximate the autonomous vehicle, each point in the first set of points including a first ranging value from the sensor to a surface on the object, a first location of the surface on the object relative to the autonomous vehicle, and a first radial velocity of the surface of the object relative to the autonomous vehicle; in block S122, a first correlation between a first radial velocity and a first position of a point in the first group of points is calculated; and in block S122, a first function is calculated that relates the possible linear motion of the object and the possible angular motion of the object at the first time based on the first correlation. For the second scan cycle, this variation of method S100 may further include: accessing a second scan image containing data captured by the sensor at a second time in block S104; in block S120, a second set of points representing the object is identified in the second scanned image; in block S122, calculating a second correlation between a second radial velocity and a second position of the points in the second set of points; and in block S122, based on the second correlation, calculating a second function that relates the possible linear motion of the object and the possible angular motion of the object at the second time. This variation of method S100 may also include estimating linear motion of the object relative to the autonomous vehicle and angular motion of the object relative to the autonomous vehicle at a second time based on an intersection of the first function and the second function in block S126.
Additionally or alternatively, in this variation, for each scan cycle in the sequence of scan cycles at the autonomous vehicle, the method S100 may include: in block S104, accessing a scan image containing data captured by a sensor on the autonomous vehicle at a scan time; identifying, in block S120, a set of points in the scan image representing an object in the field in the vicinity of the autonomous vehicle, each point in the set of points including a position of a surface on the object relative to the autonomous vehicle and a radial velocity of the surface of the object relative to the autonomous vehicle; and in block S122 a function is calculated that relates possible linear motion of the object and possible angular motion of the object at the scanning time based on a correlation between the radial velocity and the position of the points in the set of points. This variation of method S100 may also include estimating, in block S126, a current linear motion of the object relative to the autonomous vehicle and a current angular motion of the object relative to the autonomous vehicle at a current time based on an intersection of a current function derived from a first scan image containing data captured at the current time and a previous function derived from a second scan image containing data captured prior to the current time.
13.1 three degrees of freedom
Generally, in this variant, the autonomous vehicle: deriving a relationship between the tangential velocity and the angular velocity of an object in the field of the autonomous vehicle based on characteristics of a group of points representing the object in a scan image output by a sensor on the autonomous vehicle; further defining the possible current motion of the object based on the relationship between the measured radial velocity of the object and the derived tangential and angular velocities of the object; and further refining the future state boundary computed for the object based on the possible current motion of the object and the motion constraint assumptions for ground-based objects.
In particular, in this variant, the autonomous vehicle may calculate a narrow range of possible tangential and angular velocities of the object, and thus of the total possible velocity of the object, during a single scan cycle using the relationship between the radial distance, radial velocity, tangential velocity, and angular velocity of the object and a limited number (e.g., as few as two) of distance, angle, and range-rate measurements. The autonomous vehicle may also: tracking the object in the scan image output by the sensor during the next scan cycle; repeating the aforementioned processing based on this next scan image; and combining the results of the current and previous scan cycles to narrow the motion estimate of the object to a single set (or a very narrow range) of tangential velocity, angular velocity, and total velocity values. Then, rather than calculating the future state boundary of the object based on the maximum acceleration assumption and the maximum velocity and range of possible velocities of the object, the autonomous vehicle calculates a narrower future state boundary of the object based on the maximum acceleration assumption and the single total velocity of the object derived from these two independent measurements. More specifically, the autonomous vehicle may perform the blocks of method S100 to compress a two-dimensional set of movement possibilities for a nearby object into a one-dimensional set of movement possibilities for the object.
In general, motion of ground-based objects (e.g., vehicles, pedestrians) may occur substantially in a horizontal plane (i.e., parallel to the ground plane), including linear motion along an x-axis, linear motion along a y-axis, and rotation about a z-axis perpendicular to the horizontal plane, which may be expressed as linear velocity in the horizontal plane and angular velocity about an axis perpendicular to the horizontal plane. Thus, this variation of method S100 is described below as being performed by an autonomous vehicle to derive a tangential velocity, an angular velocity, and a total velocity of an object within a horizontal plane given a radial velocity and a position (e.g., a range and an angle) of a point on the object in the horizontal plane. However, the autonomous vehicle may implement similar methods and techniques to derive linear and angular velocities (i.e., three linear and three angular velocities) of the object in 3D space and accordingly to derive an absolute or relative total velocity of the object in 3D space.
More specifically, the sensor may be configured to return, for each surface in the field that falls within the sensor's field of view during the scan cycle, a range value (i.e., distance), an azimuth angle, and a velocity (i.e., radial velocity or "Doppler") along a ray from that surface back to the sensor. The tangential velocity (e.g., linear motion in a direction perpendicular to the radial direction and in the horizontal plane) and the angular velocity (e.g., angular motion about the yaw axis of the autonomous vehicle) of a set of surfaces representing an object in a scan image are embedded in the range values, azimuth angles, and velocity data of the points in the scan image. However, the specific tangential and angular velocities of the object cannot be resolved from the range values, azimuth angles, and radial velocities contained in a single set of points. Furthermore, tracking the object over multiple scan images and deriving the tangential velocity of the object from the change in position of the object across these scan images introduces significant error: in particular if the view angle of the object in the field of view of the autonomous vehicle changes from one scan cycle to the next, since the object will appear to change size over consecutive scan cycles, which will be represented erroneously in the calculated tangential velocity of the object; in particular if the area of the object occluded from the sensor changes over consecutive scan cycles, because the motion of the perceptible window over the visible area of the object will be represented erroneously in the calculated tangential velocity of the object; and in particular because, if the object moves relative to the autonomous vehicle over consecutive scan cycles, points in two consecutive scan images are unlikely to represent the same surfaces on the object.
However, the autonomous vehicle may perform the blocks of method S100 to derive a first relationship (or "correlation") between the tangential velocity and the angular velocity of the object during the first scanning cycle based on the ranging values, the azimuth angle and the radial velocity data contained in the set of points representing the object in the first scanned image. The autonomous vehicle may then: repeating the process during a second scan cycle to calculate a second relationship between tangential velocity and angular velocity of the object during the second scan cycle based on range-finding values, azimuth angle and radial velocity data contained in a set of points representing the object in a second scan image; and derives a specific tangential velocity and a specific angular velocity (or a narrow range thereof) of the object that are consistent with the first and second relationships.
13.2 first scanning period
In one implementation shown in FIG. 2, a sensor on the autonomous vehicle executes a first scan cycle at a first time T_0 and returns a first scan image containing the radial velocity, distance, and angular position of a group of points (e.g., small surfaces, areas) in the field around the autonomous vehicle. The autonomous vehicle then: implements the above-described methods and techniques to identify groups of points (or "point clusters") corresponding to discrete objects in the field; and calculates the radial velocity V_rad,0 of the object at T_0 based on a measure of the central tendency of the radial velocities of the points in the group. For example, the autonomous vehicle may calculate this measure of central tendency as the arithmetic mean of the radial velocities of the points in the group. Similarly, the autonomous vehicle may calculate a first radius R_0 of the object at T_0 based on (e.g., equal to) the difference between the maximum and minimum azimuthal positions of the points in the group (i.e., the azimuthal extent of the group of points).
The autonomous vehicle then: calculates the positions of the points in the group relative to the autonomous vehicle at T_0 (e.g., within a polar coordinate system); and calculates the correlation between the angular position and the radial velocity of these points. In one example, the autonomous vehicle calculates this correlation as the slope of a best-fit (or "trend") line through the radial velocities of the points divided by the cosine of the angle between each point and the average location of the group of points, and divided by the sine of the angle between each point and the average location of the group of points.
The autonomous vehicle then calculates a first slope S_0 of the best-fit line, which indicates the relationship between the tangential velocity V_tan,0 and the angular velocity ω_0 of the object at T_0. In particular, the slope S_0 can be expressed as the difference between V_tan,0 and the product of ω_0 and the first radius R_0 of the object in the field of view of the sensor at time T_0. Thus, based on the slope S_0 and the radius R_0 at time T_0, the autonomous vehicle may generate a first function (e.g., a linear function) F_0 relating V_tan,0 and ω_0 of the object.
Based on the function F_0, the autonomous vehicle may then calculate a line L_0 indicating the possible combinations of V_tan,0 and ω_0 motions of the object at time T_0, given the current radial velocity V_rad,0 of the object at T_0.
In a similar implementation, the autonomous vehicle resolves the motion of an object in three degrees of freedom, including: linear motion in a radial direction (i.e., radial velocity) along a ray between the sensor and the object; linear motion in a tangential direction orthogonal to the radial direction and within a horizontal plane; and angular motion in the yaw direction about an axis orthogonal to the radial and tangential directions. In this implementation, the autonomous vehicle may: projecting a first radial velocity and a first azimuthal position of the points in a first group of points representing the object onto a horizontal plane (i.e., a 2D space substantially parallel to the road surface); calculating a first radius of the object at the first time based on a range of the first azimuthal positions of the points in the first group of points; calculating a first radial velocity of the object relative to the autonomous vehicle at the first time based on a first measure (e.g., an average) of the central tendency of the first radial velocities of the points in the first group of points; calculating a first linear trend line through the first radial velocities and first azimuthal positions of the points in the first group of points; and calculating a first correlation based on a first slope of the first linear trend line, the first slope representing a relationship between a first tangential velocity of the object and a first angular velocity of the object at the first time. In particular, the first slope may represent the difference between: the first tangential velocity of the object at the first time; and the product of the first radius of the object at the first time and the first angular velocity of the object at the first time. The autonomous vehicle may then calculate a first linear function relating the possible tangential velocity of the object relative to the autonomous vehicle at the first time to the possible angular velocity of the object at the first time based on the first slope and the first radius at the first time (e.g., such that the possible tangential velocity and angular velocity satisfy the relationship S_0 = V_tan,0 - R_0·ω_0). More specifically, the first function may relate the possible tangential velocity of the object and the possible angular velocity of the object at the first time in a horizontal plane substantially parallel to the road surface.
Thus, the autonomous vehicle may compress the 2D surface of possible V_tan,0 and ω_0 motion combinations of the object (previously bounded only by the maximum velocity assumption for the generic ground-based object described above) into a 1D line of possible V_tan,0 and ω_0 motion combinations of the object at time T_0. More specifically, the autonomous vehicle may thus reduce the three unknown properties of an object moving in 2D space (i.e., V_rad,0, V_tan,0, ω_0) to a single unknown property: because every combination of V_tan,0 and ω_0 along L_0 resolves to the radial velocity of the object measured at T_0, some point along the line L_0 represents the true V_tan,0 and ω_0 of the object at T_0.
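A minimal sketch of the per-scan fit, assuming the relation S_0 = V_tan,0 - R_0·ω_0 quoted above; the radius estimate and function names are illustrative assumptions.

```python
import numpy as np

def constraint_from_scan(azimuths_rad, radial_velocities_mps, ranges_m):
    """Fit the best-fit (trend) line of radial velocity versus azimuthal position
    for one group of points and return (S, R, V_rad) plus the constraint line
    F: omega -> V_tan, where candidate pairs satisfy V_tan = S + R * omega."""
    az = np.asarray(azimuths_rad, dtype=float)
    vr = np.asarray(radial_velocities_mps, dtype=float)
    rng = np.asarray(ranges_m, dtype=float)
    slope, _intercept = np.polyfit(az, vr, 1)        # S: slope of the trend line
    radius = rng.mean() * (az.max() - az.min()) / 2  # rough half-extent of the object
    v_rad = vr.mean()                                # central tendency of radial velocity
    return slope, radius, v_rad, (lambda omega: slope + radius * omega)
```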
13.3 Bounding
In this implementation, the autonomous vehicle may also: calculating the range of V_tan,0 and ω_0 values that, combined with V_rad,0, produce a maximum total velocity equal to or less than the maximum object velocity assumption described above; and bounding the line L_0 to this range of V_tan,0 and ω_0 values. The autonomous vehicle may additionally or alternatively bound the line L_0 by the maximum tangential and angular velocity assumptions for the generic ground-based object described above, as shown in FIG. 2.
Then, given the V_rad,0 of the object at time T_0 and the range of V_tan,0 and ω_0 motion combinations represented by the bounded line L_0, the autonomous vehicle may calculate a range of possible total velocities of the object relative to the autonomous vehicle at T_0. Additionally or alternatively, the autonomous vehicle may merge its own absolute velocity at T_0 with the V_rad,0 of the object and the range of V_tan,0 and ω_0 motion combinations represented by the bounded line L_0 to compute a range of possible absolute velocities of the object at T_0.
13.4 future State boundaries after first Scan cycle
The autonomous vehicle may then: implementing the above-described methods and techniques to calculate future state boundaries of an object based on these possible relative or absolute velocity and maximum object acceleration assumptions for the object; and selectively modify its trajectory accordingly as described above.
For example, in blocks S110 and S112, the autonomous vehicle may implement the above-described methods and techniques to: accessing a second image of the field captured at about a first time by a second sensor disposed on the autonomous vehicle; interpreting a type of road surface occupied by the autonomous vehicle at the first time based on a set of features extracted from the second image; predicting the quality of the road surface based on the feature set; estimating a coefficient of friction acting on the road surface by a tire of the autonomous vehicle based on the type of road surface and the quality of the road surface; estimating a stop duration of the autonomous vehicle at a first time based on the autonomous vehicle speed of the autonomous vehicle at the first time, the coefficient of friction, and a stored braking model for the autonomous vehicle; and calculating a critical time offset from the first time by the stop duration.
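For illustration, a friction-limited braking model (a stand-in for the stored braking model referenced above) could estimate the critical time as follows; the reaction delay and the use of g are assumptions.

```python
def critical_time(t_now_s, vehicle_speed_mps, friction_coefficient,
                  reaction_delay_s=0.25, g_mps2=9.81):
    """Estimate the critical time: the earliest time by which the autonomous
    vehicle could brake to a complete stop on the estimated road surface."""
    max_decel = friction_coefficient * g_mps2          # tire-limited deceleration
    stop_duration = reaction_delay_s + vehicle_speed_mps / max_decel
    return t_now_s + stop_duration
```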
In this example, in block S102, the autonomous vehicle may also access a set of predefined motion limit hypotheses, such as including: maximum linear acceleration of a generic ground-based object; a maximum linear velocity of the generic ground-based object; and/or a maximum angular velocity of a generic ground-based object.
Further, in block S122, the autonomous vehicle may: deriving a first position of the object at the first time based on the first range values and the first azimuthal positions of the points in the first group of points; and then calculating a first future state boundary of the object based on: a) the possible tangential velocities and possible angular velocities of the object at the first time defined by the first function, together with the first radial velocity; b) the first position; and c) the set of predefined motion limit assumptions. More specifically, the autonomous vehicle may calculate a first ground area accessible to the object from the first time to the critical future time by integrating a first motion of the object at the first time (i.e., the radial velocity and the possible tangential velocity and angular velocity pairs) over the stop duration from the first position of the object, with the object turning at up to the maximum angular velocity and accelerating to the maximum linear velocity according to the maximum linear acceleration defined by the predefined motion limit assumptions. The autonomous vehicle may then store this first ground area as the future state boundary of the object at the first time.
13.5 second scanning period
The autonomous vehicle may then repeat the foregoing process based on the next set of radial velocity, distance, and angular position of the point output by the sensor during the next scan cycle.
In particular, at a second time T_1, the sensor performs a second scan cycle and returns a second scan image containing the radial velocity, distance, and angular position of a group of points in the field around the autonomous vehicle. The autonomous vehicle then implements the above-described methods and techniques to: identifying groups of points corresponding to discrete objects in the field; and tracking the group of points representing the object from the first scan cycle to a corresponding group of points representing the object in the second scan cycle.
The autonomous vehicle then repeats the above process to: calculating a central measure of the radial velocities of the points in the group; storing this central measure as the radial velocity V_rad,1 of the object at time T_1; and calculating a second slope S_1 for these data, which indicates the relationship between the tangential velocity V_tan,1 and the angular velocity ω_1 of the object at time T_1. For example, the slope S_1 can be expressed as the difference between the following two terms: V_tan,1; and the product of the ω_1 of the object at T_1 and the radius R_1 of the object at time T_1. Thus, the autonomous vehicle may calculate, at T_1, a radius R_1 from a measure of the central tendency of the positions of the group of points of the object, and generate a second function (e.g., a linear function) F_1 relating V_tan,1 and ω_1 based on the slope S_1 and the radius R_1 at time T_1.
Based on the function F_1, the autonomous vehicle may then calculate a line L_1 indicating the possible combinations of V_tan,1 and ω_1 motions of the object at time T_1, given the current radial velocity V_rad,1 of the object at T_1.
Subsequently, the autonomous vehicle may calculate the intersection of the lines L_0 and L_1 (or of the functions F_0 and F_1), which indicates the actual V_tan,1 and ω_1 of the object at T_1 (or values very close thereto), as shown in FIG. 2. Thus, from the first scan cycle at T_0 to the subsequent scan cycle at T_1, the autonomous vehicle can resolve all three unknown motion characteristics of the object at T_1, including V_tan,1, ω_1, and V_rad,1.
Then, given the V_rad,1, V_tan,1, and ω_1 represented at the intersection of lines L_0 and L_1, the autonomous vehicle may calculate the total velocity V_tot,rel,1 of the object relative to the autonomous vehicle at T_1. Additionally or alternatively, the autonomous vehicle may merge its own absolute velocity at T_1 with the V_rad,1, V_tan,1, and ω_1 of the object to compute the total absolute velocity V_tot,abs,1 of the object at T_1.
Thus, in the foregoing implementation, the autonomous vehicle may: projecting a second radial velocity and a second azimuthal position of the points in the second group of points representing the object onto a horizontal plane (i.e., a 2D space substantially parallel to the road surface); calculating a second radius of the object at the second time based on a range of the second azimuthal positions of the points in the second group of points; calculating a second radial velocity of the object relative to the autonomous vehicle at the second time based on a second measure (e.g., an average) of the central tendency of the second radial velocities of the points in the second group of points; calculating a second linear trend line through the second radial velocities and second azimuthal positions of the points in the second group of points; and calculating a second correlation based on a second slope of the second linear trend line, the second slope representing a relationship between a second tangential velocity of the object and a second angular velocity of the object at the second time. In particular, the second slope may represent the difference between: the second tangential velocity of the object at the second time; and the product of the second radius of the object at the second time and the second angular velocity of the object at the second time. The autonomous vehicle may then calculate a second linear function relating the possible tangential velocity of the object relative to the autonomous vehicle at the second time to the possible angular velocity of the object at the second time based on the second slope and the second radius at the second time (e.g., such that the possible tangential velocity and angular velocity satisfy the relationship S_1 = V_tan,1 - R_1·ω_1). More specifically, the second function may relate the possible tangential velocity of the object and the possible angular velocity of the object at the second time in a horizontal plane substantially parallel to the road surface.
The autonomous vehicle may then estimate a specific second tangential velocity of the object and a specific second angular velocity of the object relative to the autonomous vehicle at the second time (or a narrow range of possible tangential and angular motions of the object, as described below) based on the intersection of the first function and the second function in the state space of these three degrees of freedom. Further, the autonomous vehicle may perform the above-described methods and techniques to calculate, in block S126, a total absolute velocity of the object at the second time based on the second tangential velocity of the object, the second angular velocity of the object, the second radial velocity of the object, and the absolute velocity of the autonomous vehicle at the second time.
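Under the linear constraints V_tan - R_0·ω = S_0 and V_tan - R_1·ω = S_1 described above, the intersection reduces to a 2x2 linear solve; this sketch is illustrative and assumes the two lines are not parallel (i.e., R_0 differs from R_1).

```python
import numpy as np

def resolve_tangential_and_angular(s0, r0, s1, r1):
    """Intersect lines L0 and L1 to recover the object's tangential and angular
    velocity at the second scan: V_tan - R0*omega = S0 and V_tan - R1*omega = S1."""
    a = np.array([[1.0, -r0],
                  [1.0, -r1]])
    b = np.array([s0, s1])
    v_tan, omega = np.linalg.solve(a, b)   # raises if R0 == R1 (parallel lines)
    return float(v_tan), float(omega)

def total_relative_speed(v_rad, v_tan):
    """In-plane speed of the object relative to the autonomous vehicle."""
    return float(np.hypot(v_rad, v_tan))
```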
The autonomous vehicle may then: implementing the above-described methods and techniques to calculate future state boundaries of an object based on these possible relative or absolute velocity and maximum object acceleration assumptions for the object; and selectively modify its trajectory accordingly as described above.
13.6 cumulative error
In this variant, the tangential velocity V_tan and the angular velocity ω of the object relative to the autonomous vehicle may change between the first scan cycle at T_0 and the second scan cycle at T_1, which may introduce (additional) error in the line L_0 by time T_1. The magnitude of this error may be a function of the time offset between T_0 and T_1 and may therefore be a function of the sampling rate of the sensor.
Thus, the autonomous vehicle may integrate the maximum and minimum possible changes in the tangential velocity V_tan and the angular velocity ω of the object over the time offset from T_0 to T_1 (such as based on the motion limit assumptions for the object described above) to calculate error bars on each side of the line L_0 (e.g., error bars L_0,error,low and L_0,error,high). Then, as shown in FIG. 2, the autonomous vehicle may calculate the intersection of L_1 and the region between the error bars L_0,error,low and L_0,error,high, thereby narrowing the range of possible V_tan,1 and ω_1 values of the object at T_1 while accounting for possible cumulative error introduced by motion of the object relative to the autonomous vehicle from time T_0 to time T_1.
Then, given V_rad,1 and the range of V_tan,1 and ω_1 motion combinations on the line L_1 bounded by the error bars of line L_0, the autonomous vehicle may calculate a range of possible total velocities of the object relative to the autonomous vehicle at T_1. Additionally or alternatively, the autonomous vehicle may merge its own absolute velocity at T_1 with V_rad,1 and this range of V_tan,1 and ω_1 motion combinations on the bounded line L_1 to compute a range of possible absolute velocities of the object at T_1.
For example, in block S126, the autonomous vehicle may characterize a first error of the first function (i.e., a worst-case change in the motion of the object from the first time to the second time) based on an integration of the set of predefined motion limit assumptions described above over the time difference between the first time and the second time. As described above, the autonomous vehicle may: calculating a first line relating a possible tangential velocity of the object relative to the autonomous vehicle at the first time to a possible angular velocity of the object based on the first correlation; calculating a first width of the first line based on the first error; and representing the first line and the first width of the first line in the first function during the first scan cycle. Thus, the first function may represent a two-dimensional ellipse containing possible combinations of the first tangential velocity of the object and the first angular velocity of the object at the first time.
During the second scan cycle, the autonomous vehicle may similarly calculate a second line relating a possible tangential velocity of the object and a possible angular velocity of the object relative to the autonomous vehicle at a second time based on the second correlation. The autonomous vehicle may then estimate a second range of tangential velocities of the object and a second range of angular velocities of the object relative to the autonomous vehicle at a second time based on an intersection of the first line and the second line of the first width.
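A sketch of this banded intersection follows; the half-width of the band around L_0 is derived from assumed motion limits (a maximum linear acceleration a_max and an assumed maximum angular acceleration alpha_max), and the names and values are illustrative.

```python
import numpy as np

def intersect_with_error_band(s0, r0, s1, r1, dt_s, a_max=9.0, alpha_max=2.0):
    """Intersect line L1 (V_tan - R1*omega = S1) with an error band around line
    L0 (V_tan - R0*omega = S0 +/- e), where e bounds the worst-case change in the
    object's motion over dt_s. Returns (omega range, V_tan range), or None if the
    lines are parallel."""
    e = a_max * dt_s + r0 * alpha_max * dt_s   # assumed worst-case drift of S0
    if np.isclose(r0, r1):
        return None
    omegas = sorted(((s0 + sign * e) - s1) / (r1 - r0) for sign in (-1.0, 1.0))
    v_tans = sorted(s1 + r1 * w for w in omegas)
    return (omegas[0], omegas[1]), (v_tans[0], v_tans[1])
```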
13.7 best fit error
In a similar implementation shown in fig. 5, the autonomous vehicle may: calculating a first linear trend line through a first radial velocity and a first azimuthal position of points in a first set of points derived from the first scan image; calculating a first correlation between a first tangential velocity of the object at a first time and a first angular velocity of the object based on a first slope of the first linear trend line; characterizing a first error of the first linear trend line based on a deviation of the first radial velocity of the points in the first group of points from the first linear trend line in block S126; calculating a first line relating a likely tangential velocity of the object and a likely angular velocity of the object relative to the autonomous vehicle at a first time based on the first correlation; calculating a first width of the first line based on the first error; and representing the first line and a first width of the first line in a first function. For example, the autonomous vehicle may calculate a first error (and thus a width of the first line) that is proportional to a square root of a sum of squares of minimum distances from each point in the group to the first linear trend line. Thus, the first function may represent a two-dimensional ellipse containing possible combinations of the first tangential velocity and the first angular velocity of the object at the first time.
The autonomous vehicle may similarly: calculating a second linear trend line through the second radial velocities and second azimuthal positions of the points in the second group of points; calculating a second correlation between a second tangential velocity of the object and a second angular velocity of the object at the second time based on a second slope of the second linear trend line; characterizing a second error of the second linear trend line based on the deviation of the second radial velocities of the points in the second group of points from the second linear trend line; calculating a second line relating a possible tangential velocity of the object and a possible angular velocity of the object relative to the autonomous vehicle at the second time based on the second correlation; calculating a second width of the second line based on the second error; and representing the second line and the second width of the second line in the second function. Thus, the second function may represent a two-dimensional ellipse containing possible combinations of the second tangential velocity and the second angular velocity of the object at the second time.
Accordingly, the autonomous vehicle may estimate a second range of tangential velocities of the object and a second range of angular velocities of the object relative to the autonomous vehicle at a second time based on an intersection of the first line of the first width and the second line of the second width. Although the autonomous vehicle may not be able to resolve the specific tangential and angular velocities of the object at the second time, the autonomous vehicle may calculate a range of possible tangential and angular velocities of the object at the second time based on an intersection of the first and second functions, which is much narrower than the range of possible tangential and angular velocities of the object derived from the single scan image describing the object.
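The residual-based width can be sketched as below; vertical residuals are used here as a proxy for the perpendicular point-to-line distances described above, and the proportionality constant is taken as one.

```python
import numpy as np

def trend_line_with_width(azimuths_rad, radial_velocities_mps):
    """Fit the per-scan trend line and characterize its error as a width
    proportional to the root-sum-of-squares of the point residuals, so each
    scan yields a band (not just a line) of (V_tan, omega) candidates."""
    az = np.asarray(azimuths_rad, dtype=float)
    vr = np.asarray(radial_velocities_mps, dtype=float)
    slope, intercept = np.polyfit(az, vr, 1)
    residuals = vr - (slope * az + intercept)
    width = float(np.sqrt(np.sum(residuals ** 2)))
    return slope, width
```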
13.8 future State boundaries after second Scan cycle
The autonomous vehicle may then: implementing the above-described methods and techniques to calculate future state boundaries of an object based on these possible relative or absolute velocities of the object and predefined motion limit assumptions; and selectively modify its trajectory accordingly as described above.
For example, after calculating the critical time in block S112, the autonomous vehicle may: integrate a second motion of the object at the second time (with the object turning at up to the maximum angular velocity and accelerating to the maximum linear velocity according to the maximum linear acceleration defined by the predefined motion limit assumptions) over the stop duration from the second position of the object to calculate a second ground area accessible by the object from the second time to the critical time; and store the second ground area as a second future state boundary for the object at the second time, the second future state boundary exhibiting a size (e.g., an area in a horizontal plane substantially parallel to the road surface) that is (significantly) smaller than the size of the first future state boundary of the object.
In particular, because the autonomous vehicle compresses a wide range of possible tangential and angular velocity combinations (defined only by predefined motion limit assumptions) of an object represented by a first function into one or a small range of possible tangential and angular velocity combinations of the object at the intersection of the first and second functions, the autonomous vehicle can also calculate a smaller future state boundary of the object from the first scan cycle to the second scan cycle, and thus predict a larger entry zone in which the autonomous vehicle can operate up to a critical time without sacrificing the ability to come to a complete stop before colliding with other objects in the vicinity.
13.9 object motion processing
Then, as described above, in block S140, the autonomous vehicle may select a second navigation action to avoid entering into a second future state boundary before a critical time.
For example, the autonomous vehicle may implement the above-described methods and techniques to: calculate, in block S144, an entry zone around the autonomous vehicle that excludes the first future state boundary of the object; and then execute a first navigation action to navigate toward the entry zone (e.g., to change a trajectory of the autonomous vehicle) in response to the position of the autonomous vehicle at the second time falling within a threshold distance (e.g., two meters; a distance traversed within 500 milliseconds at the current speed of the autonomous vehicle) of the perimeter of the current future state boundary of the object. Additionally or alternatively, the autonomous vehicle may automatically execute a braking action to slow the autonomous vehicle in response to the position of the autonomous vehicle at the second time falling within this threshold distance of the perimeter of the current future state boundary of the object. Conversely, if the current position of the autonomous vehicle falls outside of the second future state boundary of the object, the autonomous vehicle may maintain its current trajectory (e.g., speed and steering angle) or otherwise blank the object during the second scan cycle without incorporating it into path planning considerations.
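A toy decision rule of this kind might look as follows; the thresholds and the returned action labels are assumptions, not values from the method.

```python
def plan_for_object(distance_to_boundary_m, vehicle_speed_mps,
                    time_margin_s=0.5, min_margin_m=2.0):
    """Blank the object if the vehicle is far from the perimeter of its future
    state boundary; otherwise brake and steer toward the entry zone."""
    threshold_m = max(min_margin_m, vehicle_speed_mps * time_margin_s)
    if distance_to_boundary_m > threshold_m:
        return "blank"                       # keep trajectory; omit from planning
    return "brake_and_steer_toward_entry_zone"
```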
13.10 subsequent scanning periods
The autonomous vehicle may then repeat the above methods and techniques to: calculate a third function F_2 based on the average radial velocity V_rad,2, slope S_2, and radius R_2 of the group of points (associated with the same object) tracked in a third scan image output by the sensor at a third time T_2; calculate a third line L_2 based on the function F_2; and then calculate the intersection of the first line L_0 (with error bars time-shifted from T_0 to T_2), the second line L_1 (with error bars time-shifted from T_1 to T_2), and the third line L_2, which indicates the possible V_tan,2 and ω_2 values of the object at T_2.
Alternatively, during the third scan cycle, the autonomous vehicle may: discard the line L_0; and calculate the intersection of the second line L_1 (with error bars time-shifted from T_1 to T_2) and the third line L_2, which indicates the possible V_tan,2 and ω_2 values of the object at T_2.
As described above, the autonomous vehicle may then: calculate a range of possible V_tan,2 and ω_2 values of the object at T_2 based on the multi-line intersection (such as a two-line intersection, a three-line intersection, etc.); compute the possible relative or absolute velocities of the object at T_2; update the future state boundary of the object accordingly; and selectively modify its trajectory accordingly as described above.
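The time-shifted error bars mentioned above can be modeled by growing an older line's uncertainty band in proportion to the elapsed time and the predefined motion limit assumptions before intersecting it with the newest line. A minimal sketch, assuming a linear growth model and illustrative parameter names:

```python
def time_shift_line_width(width_at_measurement, t_measured, t_now,
                          a_tan_max, alpha_max, object_radius):
    """
    Widen the uncertainty band of an older constraint line before intersecting it with
    the newest line: between t_measured and t_now the object may have changed its
    tangential velocity by up to a_tan_max*dt and its angular velocity by up to
    alpha_max*dt (converted to an equivalent tangential width via the object radius).
    """
    dt = t_now - t_measured
    return width_at_measurement + a_tan_max * dt + object_radius * alpha_max * dt
```

The widened lines can then feed the same line-intersection routine sketched earlier in this section.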
13.11 Point grouping and ungrouping by object (ungroup)
As described above, the autonomous vehicle may group points in a scan image by proximity, such as points exhibiting similar range, azimuth, and elevation values and similar radial velocities. For each group of points detected in the first scan image, the autonomous vehicle may calculate a function representing the linear and angular motion of the object represented by that group of points. The autonomous vehicle may then: repeat this process for subsequent scan images; implement object tracking techniques to link a group of points in the first scan image with a group of points in the second scan image; and refine the motion prediction for each object based on the intersection of the pair of functions derived for its groups of points in the first and second scan images.
The autonomous vehicle may also cluster two objects detected in the second scan image into one "composite object" (or "rigid body") if their derived motions are consistent (e.g., if their radial, tangential, and angular velocities are very similar or identical), such as if the difference in their motions falls within a predefined velocity discrimination threshold. The autonomous vehicle may then calculate a future state boundary for the composite object and selectively navigate relative to the composite object accordingly. Thus, the autonomous vehicle may interpret and process multiple point groups exhibiting consistent motion as a single object, thereby reducing the number of discrete objects that the autonomous vehicle is tracking, and thus reducing the computational load of the autonomous vehicle during operation.
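A greedy grouping pass illustrates this clustering step; the threshold values and the dictionary-based object representation are illustrative assumptions rather than the patent's data structures.

```python
import numpy as np

def merge_into_composites(objects, v_threshold=0.5, w_threshold=0.1):
    """
    Merge objects whose velocity estimates agree within a velocity discrimination
    threshold into composite objects ("rigid bodies"). Each object is a dict with
    'v' = (vx, vy) in m/s and 'omega' in rad/s; both thresholds are illustrative.
    """
    composites = []
    for obj in objects:
        for comp in composites:
            ref = comp[0]
            if (np.linalg.norm(np.subtract(obj["v"], ref["v"])) < v_threshold
                    and abs(obj["omega"] - ref["omega"]) < w_threshold):
                comp.append(obj)
                break
        else:
            composites.append([obj])   # no compatible composite found
    return composites
```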
Similarly, the autonomous vehicle may: interpret the separation of a first set of points in the first scan image, predicted to represent one object at the first time, into a second set of points and a third set of points in the second scan image, predicted to represent two different objects at the second time; generate a unique function for each of the second object and the third object; and estimate the motion of the second object and the third object based on the functions derived over the first and second scan cycles.
In one example implementation, the autonomous vehicle implements the above-described methods and techniques to identify a first set of points in a first scan image captured at a first time and derive a first function representing motion of the object during a first scan cycle. During the second scan period, the autonomous vehicle may: accessing a second scan image, the second scan image containing data captured by the sensor at a second time subsequent to the first time; identifying a second set of points in a second scan image representing the object in the field; identifying, in the second scan image, a third set of points representing a second object in the field, the second object being separated from the object from the first time to the second time; calculating a second correlation between a second radial velocity and a second azimuthal position of points in the second set of points; calculating a third correlation between a third radial velocity and a third azimuthal position of points in the third set of points; calculating a second function that relates a possible tangential velocity of the object and a possible angular velocity of the object at a second time based on the second correlation; and, based on the third correlation, a third function is calculated that relates the possible tangential velocity of the second object and the possible angular velocity of the second object at the second time. Thus, as described above, the autonomous vehicle may estimate a second tangential velocity of the object and a second angular velocity of the object relative to the autonomous vehicle at a second time based on an intersection of the first function and the second function. However, the autonomous vehicle may also: a third tangential velocity of the second object and a third angular velocity of the second object relative to the autonomous vehicle at the second time are estimated based on an intersection of a first function representing motion of the object at the first time and a third function representing motion of the second object at the second time.
For example, the autonomous vehicle may implement the foregoing process to: detect, at a first time, two point groups representing two vehicles traveling in the same direction and at the same speed in two lanes adjacent to the autonomous vehicle; characterize the motion of these objects; and track and respond to the two objects as one composite object reflecting their consistent motion at the first time. The autonomous vehicle may then: detect the two objects moving relative to each other at a second time, such as if one of the vehicles brakes and decelerates relative to the other; separate the composite object into two objects; and then track and respond to the two objects independently, since they now exhibit motions that differ by more than the velocity discrimination threshold.
13.12 concurrent data from multiple sensors
In one variation, the autonomous vehicle includes multiple offset sensors that, during a scan cycle, output concurrent point clouds representing surfaces in the field around the autonomous vehicle from different perspectives. In this variation, the autonomous vehicle may perform the aforementioned methods and techniques to: compute a pair of functions and lines for co-spatial groups of points representing a single object in the concurrent point clouds output by these sensors during a scan cycle; calculate the intersection of these lines; and estimate the tangential velocity and the angular velocity of the object based on the intersection.
For example, the autonomous vehicle may: identify a first group of points representing a discrete object in a first point cloud output by a first sensor on the autonomous vehicle at a first time T_0; calculate the average of the radial velocities of the points in the first group; store this average as a first radial velocity V_rad,1,0 of the object at the first time; calculate a first function F_1,0 based on the radial velocity V_rad,1,0, slope S_1,0, and radius R_1,0 of the first group of points at the first time; and calculate a first line L_1,0 based on the function F_1,0. The autonomous vehicle may similarly: identify a second group of points representing the same object in a second point cloud output by a second sensor on the autonomous vehicle at the first time T_0; calculate the average of the radial velocities of the points in the second group; store this average as a second radial velocity V_rad,2,0 of the object at the first time; calculate a second function F_2,0 based on the radial velocity V_rad,2,0, slope S_2,0, and radius R_2,0 of the second group of points at the first time; and calculate a second line L_2,0 based on the function F_2,0.
The autonomous vehicle may then calculate the intersection of the first line L_1,0 and the second line L_2,0, which represents the actual V_tan,0 and ω_0 of the object at time T_0 (or values very close thereto). Thus, the autonomous vehicle may resolve all three unknown motion characteristics of the object at T_0 (including V_tan,0, ω_0, and V_rad,0) based on the data output by the two sensors during a single scan cycle.
Then, given the V_rad,0, V_tan,0, and ω_0 represented at the intersection of lines L_1,0 and L_2,0, the autonomous vehicle may calculate the total velocity V_tot,rel,0 of the object relative to the autonomous vehicle at T_0. Additionally or alternatively, the autonomous vehicle may merge its own absolute velocity at T_0 with the V_rad,0, V_tan,0, and ω_0 of the object to compute the total absolute velocity V_tot,abs,0 of the object at T_0.
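With two offset sensors, the two constraint lines generally cross at a single point, so the solve reduces to a 2x2 linear system, while the radial velocity is measured directly. A hedged sketch, reusing the a·V_tan + b·ω = c line parameterization assumed in the earlier sketch (composing the total in-plane relative speed from its radial and tangential components is also an illustrative simplification):

```python
import math
import numpy as np

def resolve_from_concurrent_lines(line_1, line_2, v_rad):
    """
    Solve for the object's tangential and angular velocity from two concurrent
    constraint lines (one per sensor), then compose the total in-plane relative
    speed from the measured radial velocity and the resolved tangential velocity.
    """
    A = np.array([[line_1["a"], line_1["b"]],
                  [line_2["a"], line_2["b"]]], dtype=float)
    c = np.array([line_1["c"], line_2["c"]], dtype=float)
    v_tan, omega = np.linalg.solve(A, c)
    return {"v_rad": v_rad, "v_tan": v_tan, "omega": omega,
            "v_total_rel": math.hypot(v_rad, v_tan)}
```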
The autonomous vehicle may then: implement the above-described methods and techniques to calculate future state boundaries of the object based on these possible relative or absolute velocities of the object and the maximum object acceleration assumptions; and selectively modify its trajectory accordingly as described above.
Further, the autonomous vehicle may: detect an object depicted in two concurrent scan images captured by two sensors on the autonomous vehicle during a first scan cycle; derive from the two scan images a first function and a second function describing the motion of the object; and fuse the first and second functions into an estimate of the motion of the object during the first scan cycle. Concurrently, the autonomous vehicle may: detect a second object depicted in only the first of the two scan images (e.g., not visible to one of the sensors due to occlusion, or due to the sensors' different fields of view); and derive from the first scan image a third function describing the motion of the second object during the first scan cycle. Then, as described above, during the next scan cycle, the autonomous vehicle may: detect the second object depicted only in the third scan image; derive a fourth function describing the motion of the second object from the third scan image; and fuse the third and fourth functions into an estimate of the motion of the second object during the second scan cycle.
Thus, the autonomous vehicle may implement the aforementioned blocks of method S100 to characterize motion of a group of objects based on both concurrent scan images captured during a single scan cycle and a sequence of scan images captured over multiple scan cycles.
14. 6DOF
One variation of the method S100 shown in fig. 3A, 3B, and 3C includes: calculating a first best-fit plane through a first radial velocity, a first azimuthal position, and a first elevation position of points in a first set of points representing the object in a first scan image captured at a first time; calculating a second best-fit plane through a second radial velocity, a second azimuthal position, and a second elevation position of points in a second set of points representing the object in a second scan image captured at a second time; and calculating a third best-fit plane through a third radial velocity, a third azimuthal position, and a third elevation position of points in a third set of points representing the object in a third scan image captured at a third time.
In particular, the first best-fit plane represents a relationship between a first tangential velocity of the object (e.g., a composite tangential velocity of a tangential azimuth velocity and a tangential elevation velocity), a first yaw velocity of the object, and a first pitch velocity of the object at a first time. Accordingly, the autonomous vehicle may generate a first function based on the first best-fit plane, the first function representing a first relationship (e.g., a correlation) between possible tangential azimuth velocity and yaw velocity and a second relationship between possible tangential elevation velocity and pitch velocity at a first time.
Similarly, the second best-fit plane represents a relationship between a second tangential velocity of the object, a second yaw velocity of the object, and a second pitch velocity of the object at a second time. Accordingly, the autonomous vehicle may generate a second function based on the second best-fit plane, the second function representing a first relationship (e.g., a correlation) between possible tangential azimuth velocity and yaw velocity and a second relationship between possible tangential elevation velocity and pitch velocity at a second time.
Similarly, the third best-fit plane represents a relationship between a third tangential velocity of the object, a third yaw velocity of the object, and a third pitch velocity of the object at the third time. Accordingly, the autonomous vehicle may generate a third function based on the third best-fit plane, the third function representing a first relationship between possible tangential azimuth velocity and yaw velocity and a second relationship between possible tangential elevation velocity and pitch velocity at the third time.
In this variation, the method S100 further includes calculating a third tangential velocity of the object (or separate tangential azimuth velocity and tangential elevation velocity), a third yaw velocity of the object, and a third pitch velocity of the object at a third time based on an intersection of the first function, the second function, and the third function in block S124.
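One way to realize the best-fit planes in this variation is an ordinary least-squares fit of radial velocity against azimuth and elevation. The sketch below is illustrative: the choice of NumPy's lstsq, the column layout, and the interpretation of the intercept as the bulk radial velocity are assumptions, not the patent's prescribed procedure.

```python
import numpy as np

def fit_radial_velocity_plane(azimuth, elevation, v_radial):
    """
    Least-squares plane v_rad ≈ s_az*azimuth + s_el*elevation + v0 through a group of
    points. The azimuth slope carries the tangential-azimuth/yaw relationship, the
    elevation slope carries the tangential-elevation/pitch relationship, and v0
    approximates the object's bulk radial velocity.
    """
    A = np.column_stack([azimuth, elevation, np.ones_like(azimuth)])
    (s_az, s_el, v0), residuals, rank, _ = np.linalg.lstsq(A, v_radial, rcond=None)
    return s_az, s_el, v0, residuals
```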
14.1 3DOF versus 6DOF
In general, the above-described method S100 may be performed by an autonomous vehicle to characterize the motion of an object in three degrees of freedom (or "3 DOF"). However, in this variation, the autonomous vehicle may implement similar methods and techniques to characterize the motion of an object in six degrees of freedom (or "6 DOF").
In particular, when characterizing the motion of an object in three degrees of freedom as described above, the autonomous vehicle may interpret: linear motion of the object in the radial and tangential directions in a horizontal plane; and only rotational motion about a yaw axis perpendicular to that horizontal plane. Conversely, when characterizing the motion of an object in six degrees of freedom as described above, the autonomous vehicle may interpret: linear motion of the object in the radial direction, the tangential azimuthal direction (e.g., parallel to the scanning direction of the sensor), and the tangential elevation direction (e.g., orthogonal to the radial direction and the tangential azimuthal direction); and rotational motion about a pitch axis in the tangential azimuth direction and rotational motion about a yaw axis in the tangential elevation direction.
Furthermore, the rotation of the object about a ray extending from the sensor to the object (i.e., a "roll" motion) may not be observable by the sensor within a single scan image. However, if the roll motion of the object is not coaxial with this ray, the radial velocities stored in the points of successive scan images captured by the sensor (or of concurrent scan images captured by two offset sensors) may contain information related to the roll velocity of the object, and the autonomous vehicle may therefore fuse groups of points representing the object in multiple scan images to further disambiguate the roll velocity of the object relative to the autonomous vehicle.
Furthermore, many (e.g., most) ground-based moving objects (such as the bodies of road vehicles and pedestrians) may exhibit minimal or no pitch velocity, and no tangential elevation velocity (e.g., may not move in any direction other than on a horizontal road surface). Thus, the tangential elevation and pitch velocities of the object may be (or may be close to) null. Thus, the best-fit plane through the set of points in three-dimensional space collapses to the best-fit line in two-dimensional space, and the derivation of the motion of such objects in six degrees of freedom according to this variation of method S100 collapses to the derivation of the motion of objects in three degrees of freedom as described above.
However, some objects on and near the road surface may exhibit non-zero tangential elevation and pitch velocities relative to the autonomous vehicle, such as wheels, concrete mixers, and street sweepers. Similarly, a vehicle moving along an incline may exhibit non-zero tangential elevation and pitch velocities relative to the autonomous vehicle. The tangential elevation velocity and the pitch velocity of such an object are contained in the radial velocity data of the points representing the object in a scan image, but they cannot be resolved from the radial velocity data contained in a single scan image. Thus, the autonomous vehicle may fuse the relationships between the tangential azimuth velocity, tangential elevation velocity, yaw velocity, and pitch velocity of the object derived from multiple scan images depicting the object from different perspectives (i.e., as the autonomous vehicle and the object move relative to each other) to compute a specific value or narrow range of possible tangential azimuth velocities, tangential elevation velocities, yaw velocities, and pitch velocities of the object.
14.2 example
For example, the autonomous vehicle may: implement the above methods and techniques to isolate a set of points representing the object in the first scan image; and project these points into a three-dimensional space (i.e., a radial velocity, azimuth, and elevation space) based on the radial velocity, azimuth, and elevation values contained in the points. The autonomous vehicle may then: calculate a first radial velocity of the object relative to the autonomous vehicle at the first time based on a first measure (e.g., an average) of central tendency of the first radial velocities of the points in the first group; calculate a first position of the object relative to the autonomous vehicle at the first time based on a first measure (e.g., an average) of central tendency of the first azimuth and elevation positions of the points in the first group; and calculate a first radial vector from this first position of the object back to the autonomous vehicle.
Further, the autonomous vehicle may: calculate a first linear azimuthal trend line through the first radial velocities and first azimuthal positions of the points in the first group; and calculate a first correlation based on a first slope of the first linear azimuthal trend line, the first correlation representing a relationship between a first tangential azimuthal velocity of the object and a first yaw velocity of the object at the first time. In particular, the first slope may represent a first difference between: the tangential velocity of the object in a first tangential direction (e.g., the tangential azimuthal direction); and the projection, in the first tangential direction, of the cross product between the radial vector of the object and the angular velocity (e.g., the yaw velocity) of the object.
The autonomous vehicle may similarly: calculate a first linear elevation trend line through the first radial velocities and first elevation positions of the points in the first group; and calculate a second correlation based on a second slope of the first linear elevation trend line, the second correlation representing a relationship between a first tangential elevation velocity of the object and a first pitch velocity of the object at the first time. In particular, the second slope may represent a second difference between: the tangential velocity of the object in a second tangential direction (e.g., the tangential elevation direction); and the projection, in the second tangential direction, of the cross product between the radial vector of the object and the angular velocity (e.g., the pitch velocity) of the object.
Thus, the first linear azimuth and elevation trend line may represent a first best-fit plane plotted in three-dimensional radial velocity, azimuth, and elevation spaces for the points in the first set, as shown in fig. 3A.
The autonomous vehicle may then calculate a first function that relates possible tangential azimuth velocity, tangential elevation velocity, yaw velocity, and pitch velocity of the object at the first time based on the first slope, the second slope, and a first radial vector representing a relative position of the object at the first time. More specifically, the first function may relate, at a first time, a possible tangential azimuth velocity of the object to a possible yaw velocity of the object, and a possible tangential elevation velocity of the object to a possible pitch velocity of the object.
Alternatively, the autonomous vehicle may: directly calculate a first best-fit plane for the points in the first group, rather than independently calculating the first linear azimuth and elevation trend lines; and/or derive the first function based on tangential velocities in any other directions. For example, the autonomous vehicle may perform the process shown in fig. 7 to derive a function that relates observations of the object (i.e., the azimuth, elevation, and range positions of the points representing the object, and their radial velocities) to the state of motion of the object in six degrees of freedom.
The autonomous vehicle may then repeat the process for subsequent scan images to generate a sequence of functions representing possible combinations of tangential and angular motion of the object, as shown in fig. 3A. The autonomous vehicle may then compute the intersection of three functions derived from three consecutive scanned images in a six degree of freedom state space to compute a specific or narrow range of possible radial velocity, tangential azimuth velocity, tangential elevation velocity, yaw velocity, pitch velocity, and roll velocity of the object.
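Numerically, the intersection of several such functions can be approximated by stacking each scan cycle's linear constraints on the motion state and solving them jointly. The sketch below assumes a particular state parameterization and a least-squares solve, both of which are illustrative choices rather than the patent's method.

```python
import numpy as np

def intersect_state_constraints(constraint_list):
    """
    Each scan cycle contributes linear constraints C @ x ≈ d on the motion state
    x = [v_tan_az, v_tan_el, yaw_rate, pitch_rate, roll_rate] (parameterization
    assumed for illustration). Stacking constraints from three scan cycles and
    solving in the least-squares sense approximates their intersection.
    """
    C = np.vstack([c for c, _ in constraint_list])
    d = np.concatenate([d for _, d in constraint_list])
    x, residuals, rank, _ = np.linalg.lstsq(C, d, rcond=None)
    return x, residuals, rank   # rank < 5 indicates the motion is still under-determined
```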
Thus, the autonomous vehicle may fuse these tangential azimuth, elevation, yaw, and pitch velocities of the object with the radial velocity of the object derived from the current scan image to calculate the total velocity of the object in all six degrees of freedom relative to the autonomous vehicle, as shown in fig. 3C.
14.3 best fit error
In this variation, the autonomous vehicle may implement methods and techniques similar to those described above to calculate the width (or "thickness") of the best-fit plane. For example, the autonomous vehicle may calculate, for the scanned image, an error for each best-fit plane that is proportional to the square root of the sum of the squares of the smallest distances (in three dimensions) from each point in the set to the best-fit plane. The autonomous vehicle may then calculate the thickness of the plane based on the error or otherwise represent the error in a corresponding function calculated for the object. Thus, the function may represent a three-dimensional ellipsoid containing possible combinations of tangential velocity, yaw velocity and pitch velocity of the object during the scan cycle.
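The error-to-thickness mapping described above can be written directly from the point-to-plane distances; representing the plane by a normal vector and a point on the plane is an assumption of convenience.

```python
import numpy as np

def plane_thickness(points, plane_normal, plane_point):
    """
    Thickness proxy for a best-fit plane: the square root of the sum of squared
    point-to-plane distances. Points and the plane live in the three-dimensional
    (azimuth, elevation, radial velocity) space used above.
    """
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = (np.asarray(points, dtype=float) - np.asarray(plane_point, dtype=float)) @ n
    return float(np.sqrt(np.sum(d ** 2)))
```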
In this example, the autonomous vehicle may then compute the intersection of three consecutive (thickened) functions to compute a narrow range of possible radial velocities, tangential azimuth velocities, tangential elevation velocities, yaw velocities, pitch velocities, and roll velocities of the object at the current time. The autonomous vehicle may then implement the above-described methods and techniques to calculate, and selectively react to, future state boundaries of the object based on the motion of the object in six degrees of freedom, where this motion includes: the narrow range of possible tangential azimuth velocities, tangential elevation velocities, yaw velocities, pitch velocities, and roll velocities; and the measured radial velocity of the object.
14.4 cumulative error
Additionally or alternatively, after calculating the first, second, and third functions over three consecutive scan cycles, the autonomous vehicle may: calculate a first set of possible tangential azimuth, tangential elevation, yaw, pitch, and roll velocities of the object represented at the intersection of the first and second functions; calculate a worst-case motion of the object consistent with this set of possible tangential azimuth, tangential elevation, yaw, pitch, and roll velocities and with the predefined set of motion limit assumptions; integrate this worst-case motion of the object over the period from the first scan cycle to the third scan cycle; and store this value as the thickness of the first best-fit plane, and thus as the error represented by the first function. Thus, the first function may represent a three-dimensional ellipsoid containing possible combinations of the tangential velocity, yaw velocity, and pitch velocity of the object during the first scan cycle.
Similarly, the autonomous vehicle may: calculate a second set of possible tangential azimuth, tangential elevation, yaw, and pitch velocities of the object represented by the second function; calculate a worst-case motion of the object consistent with this set of possible tangential azimuth, tangential elevation, yaw, and pitch velocities and with the predefined set of motion limit assumptions; integrate this worst-case motion of the object over the period from the second scan cycle to the third scan cycle; and store this value as the thickness of the second best-fit plane, and thus as the error represented by the second function. Thus, the second function may represent a three-dimensional ellipsoid containing possible combinations of the tangential velocity, yaw velocity, and pitch velocity of the object during the second scan cycle.
In this example, the autonomous vehicle may then calculate the intersection of the first function (thickened by its maximum error), the second function (thickened), and the third function to calculate a narrow range of possible tangential azimuth velocities, tangential elevation velocities, yaw velocities, pitch velocities, and roll velocities of the object at the third time, as shown in fig. 3C. The autonomous vehicle may then implement the above-described methods and techniques to calculate, and selectively react to, future state boundaries of the object based on the motion of the object in six degrees of freedom, where this motion includes: the narrow range of possible tangential azimuth velocities, tangential elevation velocities, yaw velocities, pitch velocities, and roll velocities; and the measured radial velocity of the object.
14.5 multiple sensors
As described above, in variations in which the autonomous vehicle includes multiple offset sensors that output concurrent scan images, the autonomous vehicle may perform the aforementioned methods and techniques to: calculate multiple functions representing the motion of an object in six degrees of freedom from multiple concurrent scan images depicting the object; and then derive the motion of the object in six degrees of freedom based on the intersection of these functions.
For example, the autonomous vehicle may generate and fuse three functions for one object depicted in three concurrent scan images captured by three sensors on the autonomous vehicle. In another example, the autonomous vehicle may generate and fuse two consecutive pairs of two functions for one object depicted in two pairs of scan images captured by two sensors on the autonomous vehicle over two consecutive scan cycles.
Thus, the autonomous vehicle may implement the aforementioned blocks of method S100 to characterize the motion of a group of objects based on both concurrent scan images captured during a single scan cycle and a sequence of scan images captured over multiple scan cycles.
14.6 multiple objects
Furthermore, the autonomous vehicle may concurrently execute multiple instances of this variation of the method to derive motion of multiple objects in six degrees of freedom from multiple concurrent or consecutive scan images captured by the autonomous vehicle.
14.7 object segmentation
In one example of this variation, the autonomous vehicle captures a scanned image depicting a side of a road vehicle (e.g., passenger vehicle, truck). The autonomous vehicle implements the above-described methods and techniques to group points depicting the road vehicle in the scanned image based on proximity. However, if the road vehicle is moving (i.e., if its wheel speed is non-zero), the body of the road vehicle may exhibit a minimum or null tangential elevation velocity and pitch velocity relative to the autonomous vehicle, but the wheels of the road vehicle may exhibit non-zero tangential elevation velocity and pitch velocity. Thus, the radial speed described by the first subset of points of the group corresponding to the body of the road vehicle may not coincide with the radial speed described by the second subset of points of the group corresponding to the wheels of the road vehicle.
Thus, in one implementation, the autonomous vehicle may distinguish and separate the first and second subsets of points based on differences in radial velocity trends over the set of points, as shown in fig. 3B. For example, an autonomous vehicle may: implementing the above-described methods and techniques to calculate an initial best fit plane through the radial velocity, azimuth position, and elevation position represented by the set of points; and characterizing the error (e.g., distance from the initial best-fit plane) between the initial best-fit plane and the points in the set. If the error is high (e.g., exceeds a predefined threshold), the autonomous vehicle may: detecting a first cluster of points in the set characterized by a maximum error (e.g., a maximum distance from a best fit plane); separating the set of points into a first subgroup containing the first cluster of points and a second subgroup containing the remaining points; calculating a first best fit plane through the radial velocity, azimuth position and elevation position represented by the first subset of points; characterizing a first error between the first best-fit plane and a point in the first subset; similarly, a second best fit plane through the radial velocity, azimuth position and elevation position represented by a second subset of points is calculated; and characterizing a second error between the second best-fit plane and a point in the second subset. The autonomous vehicle may repeat the process to iteratively refine the first and second subsets until the error between each subset of points and its corresponding best-fit plane is less than a maximum error (e.g., less than a predefined threshold).
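The iterative refinement described above can be sketched as follows; peeling off the worst-fitting half of the group (rather than detecting a cluster of maximum-error points) and the parameter defaults are illustrative simplifications.

```python
import numpy as np

def split_by_plane_error(points, max_error, max_subgroups=4, min_points=100):
    """
    Iteratively split a point group whose single best-fit plane (radial velocity vs.
    azimuth and elevation) fits poorly, until every subgroup fits its own plane
    within max_error or the subgroup and point-count limits are reached.
    points: array of rows (azimuth, elevation, radial_velocity).
    """
    def residuals(group):
        A = np.column_stack([group[:, 0], group[:, 1], np.ones(len(group))])
        coef, *_ = np.linalg.lstsq(A, group[:, 2], rcond=None)
        return np.abs(group[:, 2] - A @ coef)

    groups = [np.asarray(points, dtype=float)]
    while len(groups) < max_subgroups:
        errs = [residuals(g) for g in groups]
        worst = int(np.argmax([e.max() for e in errs]))
        if errs[worst].max() <= max_error or len(groups[worst]) < 2 * min_points:
            break
        order = np.argsort(errs[worst])
        g = groups.pop(worst)
        groups.append(g[order[: len(order) // 2]])   # better-fitting half (e.g., body)
        groups.append(g[order[len(order) // 2:]])    # worse-fitting half (e.g., wheels)
    return groups
```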
In this implementation, the autonomous vehicle may also segment the initial point group into a maximum number of subgroups, such as up to four subgroups, which may collectively represent: a body and two wheels of a passenger vehicle; two wheels and front and rear body portions of an articulated passenger vehicle; two wheels, a body and a sweeper element of a street sweeper; or the two wheels, body and concrete mixer elements of the truck. Additionally or alternatively, the autonomous vehicle may segment the initial set of points into subgroups, each subgroup having at least a predefined minimum number of points (e.g., 100 points).
More generally, the difference in tangential elevation and pitch velocities of the different elements of one road vehicle with respect to the autonomous vehicle (which is represented in the radial velocities of the points in the initial set) will produce an error between these points and the best-fit plane for the entire set, since this best-fit plane describes the uniform motion of all these elements of the road vehicle in six degrees of freedom. Thus, the autonomous vehicle may perform the aforementioned process to: detecting and separating a subset of points representing different elements on a road vehicle, which exhibit different movements relative to the autonomous vehicle; and calculating a set of functions (e.g., best-fit planes) that relate the tangential azimuth velocity, tangential elevation velocity, yaw velocity, and pitch velocity of these different elements of the road vehicle at the time of the scan cycle.
As described above, the autonomous vehicle may then repeat this process for multiple scan images (such as a set of concurrent scan images captured by multiple sensors, or consecutive scan images captured by one sensor) to: separate out the subsets of points representing different elements on the road vehicle; derive additional sets of functions relating the tangential azimuth velocity, tangential elevation velocity, yaw velocity, and pitch velocity of these elements of the road vehicle; and then, based on the intersection of the three sets of functions for each element of the road vehicle, derive the motion of each element of the road vehicle in six degrees of freedom relative to the autonomous vehicle. As described above, the autonomous vehicle may also calculate the total absolute motion of each element of the road vehicle based on these relative motions and the concurrent motion of the autonomous vehicle.
14.8 linking objects
Furthermore, once the autonomous vehicle has derived the relative or absolute motion of the various elements of the road vehicle in six degrees of freedom, the autonomous vehicle may implement methods and techniques similar to those described above to recombine these elements into a composite object (e.g., a "rigid body") if their linear motions are consistent, such as if the difference in their absolute or relative linear velocities falls within the predefined velocity discrimination threshold described above.
For example, in block S126, the autonomous vehicle may estimate a first linear motion of a first object and a first angular motion of the first object relative to the autonomous vehicle at a current time based on an intersection of a set of (e.g., three) functions derived from three subgroups of points representing the first object depicted in three consecutive scan images captured by the sensor. Concurrently, in block S126, the autonomous vehicle may estimate a second linear motion of the second object and a second angular motion of the second object relative to the autonomous vehicle at the current time based on an intersection of a set of (e.g., three) functions derived from three subgroups of points representing the second object depicted in the three consecutive scan images. Then, in block S160, the autonomous vehicle may identify the first object and the second object as corresponding to a common rigid body in response to an alignment between the first linear motion of the first object and the second linear motion of the second object, such as if a difference between the first linear motion and the second linear motion falls within the predefined velocity discrimination threshold described above, as shown in fig. 3B.
More specifically, two objects corresponding to different elements of the same road vehicle, detected and tracked by the autonomous vehicle over multiple scan cycles, may exhibit dissimilar pitch and yaw velocities relative to the autonomous vehicle but will move together along the same path, and will therefore exhibit the same (or very similar) linear velocity. Thus, the autonomous vehicle groups objects that are very close together and exhibit the same (or very similar) linear velocities, even with different yaw and pitch velocities, to form one composite object (or one "rigid body") that represents the complete road vehicle.
14.9 object classification
Further, the autonomous vehicle may classify the type of the object based on the motion characteristics of the individual object.
In one example shown in fig. 3B, the autonomous vehicle may identify the object as a wheel by: projecting points in a (sub-) group representing the object into a three-dimensional space based on the azimuth position, the elevation position and the ranging values contained in the points; calculating a direction of an absolute linear velocity of the object; calculating a vertical plane through the set of points and parallel to (i.e., containing) the direction of motion of the object; and calculating a linear velocity component of the radial velocity of the points in the group in the vertical plane. Then, in block S162, the autonomous vehicle may identify the object as a wheel: if the maximum linear velocity of these points (i.e., the point representing the current top of the wheel or tire) in the vertical plane is about twice the absolute linear velocity of the object (and parallel to and/or in the same orientation as the direction of the absolute linear velocity of the object); and/or if the minimum linear velocity of these points (i.e., the points representing the current bottom of the wheel or tire) in the vertical plane is approximately null.
In a similar example, the autonomous vehicle may identify the object as a wheel by: calculating a direction of an absolute linear velocity of the object; and calculating a linear velocity component of the radial velocity of the points in the group parallel to the direction of absolute motion of the object. Then, in block S162, the autonomous vehicle may identify the object as a wheel: if the maximum linear velocity of these points (i.e., the points representing the current top of the wheel or tire), parallel to the direction of absolute motion of the object, is about twice the absolute linear velocity of the object (and parallel to and/or in the same orientation as the direction of the absolute linear velocity of the object); if the minimum linear velocity of these points (i.e., the points representing the current bottom of the wheel or tire), parallel to the absolute direction of motion of the object, is approximately null; and/or if the gradient of the linear velocity of a point in the group parallel to the absolute direction of motion of the object increases from approximately empty near the ground plane to approximately twice the absolute linear velocity of the object at the top of the object (e.g., at a point on the object that is twice the height above the ground plane than the height of the vertical center of the group of points above the ground plane).
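The wheel test in these two examples can be condensed into a small heuristic over the point velocities projected onto the object's direction of travel; the function name and tolerance are illustrative assumptions.

```python
import numpy as np

def looks_like_wheel(point_speeds_along_motion, object_speed, rel_tol=0.25):
    """
    For a rolling wheel, point velocities parallel to the object's direction of travel
    range from roughly zero at the contact patch to roughly twice the object's speed
    at the top of the wheel. rel_tol is an illustrative tolerance.
    """
    v = np.asarray(point_speeds_along_motion, dtype=float)
    margin = rel_tol * max(object_speed, 0.1)          # avoid a zero tolerance at rest
    top_ok = abs(v.max() - 2.0 * object_speed) <= 2.0 * margin
    bottom_ok = abs(v.min()) <= margin
    return bool(top_ok and bottom_ok)
```

A maximum point speed well above twice the object speed would instead suggest wheel slip, consistent with the lost-traction case discussed below.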
Then, in response to identifying an object within a composite object as a wheel, the autonomous vehicle may classify the composite object as a wheeled vehicle, as shown in fig. 3B. More specifically, instead of (or in addition to) implementing artificial intelligence and computer vision techniques that classify a composite object based on visual characteristics detected in color images or geometric characteristics derived from depth images, the autonomous vehicle may classify the composite object as a wheeled vehicle based on the motion characteristics and/or relative positions of the objects contained in the composite object. Thus, by classifying a composite object as a wheeled vehicle according to the motion of the objects it contains, based on simple, predefined rules rather than on sophisticated artificial intelligence and computer vision techniques, the autonomous vehicle may accurately classify the composite object in less time and/or with less computational load.
Further, after classifying the composite object as a wheeled vehicle, the autonomous vehicle may retrieve predefined motion limit hypotheses for the wheeled vehicle rather than for the generic object (i.e., for all possible object types) and assign or label these refined predefined motion limit hypotheses to the composite object, as shown in fig. 3B. For example, the autonomous vehicle may retrieve predefined motion limit hypotheses for the wheeled vehicle, which specify: a maximum angular velocity that is less than the maximum angular velocity of motorcycles and pedestrians, and that varies with and decreases in proportion to the ground speed of the vehicle; and a maximum linear acceleration less than the maximum linear acceleration of the motorcycle.
Furthermore, in this variation, if an object identified as a wheel within the composite object exhibits a maximum linear velocity parallel to the direction of linear motion of the object that is (much) greater than twice the total absolute linear velocity of the object, the autonomous vehicle may detect hostile motion of the composite object, as such a characteristic may indicate that the wheeled vehicle is "peeling out," "burning out," or otherwise losing traction. Thus, the autonomous vehicle may retrieve predefined motion limit hypotheses for wheeled vehicles exhibiting lost traction and assign or tag these refined predefined motion limit hypotheses to the composite object. For example, the autonomous vehicle may retrieve predefined motion limit hypotheses for wheeled vehicles exhibiting lost traction that specify a lower maximum linear velocity and a greater maximum angular velocity than for wheeled vehicles with traction.
15. Direction of uncertainty of object motion
A variant of the method S100 shown in fig. 4 comprises, for a first scanning period: in block S104, accessing a first scan image containing data captured by a sensor on an autonomous vehicle at a first time; in block S120, identifying a first set of points in the first scanned image representing an object in a field in the vicinity of the autonomous vehicle, each point in the first set of points including a first location of a surface on the object relative to the autonomous vehicle and a first radial velocity of the surface of the object relative to the sensor; calculating a first radial velocity of the object relative to the autonomous vehicle at a first time based on a first measure of central tendency of the first radial velocity of the points in the first group of points in block S122; and in block S170, characterizing a first direction of uncertainty in motion of the object at a first time along a first tangential direction perpendicular to a first radial velocity of the object. This variation of method S100 further includes: calculating a predicted second direction of uncertainty of motion of the object at a second time subsequent to the first time based on the motion of the autonomous vehicle at the first time in block S172; and in response to the second direction of uncertainty being different from the first direction of uncertainty, in block S142, the object is blanked at the second time without taking into account braking considerations for avoidance of the autonomous vehicle from the object.
15.1 delaying collision avoidance actions based on future data quality
Generally, in the aforementioned variations, the autonomous vehicle may require multiple scan cycles to derive a specific absolute or relative total motion of an object, such as two scan cycles to derive the total motion of the object in three degrees of freedom, or three scan cycles to derive the total motion of the object in six degrees of freedom. Furthermore, if the radial position of the object relative to the autonomous vehicle remains relatively consistent over these scan cycles, the range of possible absolute or relative motions of the object calculated by the autonomous vehicle over these scan cycles may be wider, yielding less certainty about the true motion of the object; and vice versa.
Thus, in this variation, the autonomous vehicle may: characterize a current direction (e.g., in a tangential azimuth direction and/or a tangential elevation direction) of uncertainty of the motion of the object during the current scan cycle; predict a future direction of motion uncertainty of the object during a future (e.g., next) scan cycle (e.g., based on predefined motion limit assumptions for a generic object and the relative motion of the autonomous vehicle and the object); and compare the current direction of motion uncertainty of the object with the predicted future direction to predict whether the autonomous vehicle will improve its certainty of the motion of the object in the future, such as if the current and future directions of motion uncertainty of the object are not parallel, which may inform a more accurate response (e.g., braking, steering, or taking no action) for avoiding a collision with the object in the future. Thus, if the autonomous vehicle verifies that it is currently outside the future state boundary of the object as described above, the autonomous vehicle may choose to delay its response to the object during the current scan cycle because the autonomous vehicle predicts that its estimate of the motion of the object will be more certain in the future.
More specifically, while the autonomous vehicle may have incomplete motion information about an object when the object first enters the field of view of a sensor on the autonomous vehicle, the autonomous vehicle may choose to delay an action (e.g., braking, changing direction) for avoiding a collision with the object when it predicts that more or better information will become accessible in the future, which will reduce the uncertainty in the motion of the object. Thus, the autonomous vehicle may perform this variation of method S100 to improve ride quality and avoid unnecessary braking and steering actions that may otherwise: cause physical and emotional discomfort to occupants; and increase uncertainty about the autonomous vehicle's actions for nearby human drivers and pedestrians.
15.2 uncertainty of object motion
In general, an object first detected by the autonomous vehicle during the current scan cycle may move at any combination of tangential and angular velocities that satisfies the function computed by the autonomous vehicle during the current scan cycle and that falls within the maximum tangential and angular velocity assumptions specified by the predefined motion limit assumptions. For example, the object may be moving very quickly into the path of the autonomous vehicle, such as when both the autonomous vehicle and the object are approaching an intersection, or may be braking to avoid the autonomous vehicle. The radial velocities contained in the points representing the object in the current scan image approximate a single measurement direction (e.g., due to the small angle subtended by the object), and thus may contain insufficient information to resolve the specific tangential and angular velocities of the object.
However, during the next scan cycle, the autonomous vehicle may access more data representing the motion of the object, and then the autonomous vehicle may fuse this data with the motion description of the object during the previous scan cycle (e.g., the first radial velocity and the first function that relates the tangential velocity of the object to the angular velocity) to compute a narrow (narrower) range of possible tangential and angular velocity combinations for the object.
This refinement of the object's motion using data captured during the next scan cycle may be proportional to the orthogonality of the functions relating the tangential and angular velocities of the object during the current and next scan cycles. In particular, if the two functions exhibit low orthogonality (i.e., high parallelism), the intersection of the two functions may be a relatively large area, and may therefore inform only a wide range of possible tangential and angular velocities of the object; and vice versa. More specifically, if the two functions exhibit low orthogonality, the intersection of the first function and the second function divided by the union of the first function and the second function may be relatively large, which may correspond to low certainty of the motion of the object.
15.3 uncertainty direction derivation
In one implementation, an autonomous vehicle implements the above-described methods and techniques to: accessing a first scanned image; detecting an object at a first time in a first scanned image; calculating a first radial velocity during a first scan period and a first function relating the tangential velocity and the angular velocity of the object; and calculating a first future state boundary of the object.
Then, if the autonomous vehicle is far from the future state boundary, the autonomous vehicle may blank the object without taking into account path planning considerations. For example, if the autonomous vehicle's position at the current time falls outside the object's first future state boundary by more than a threshold distance (such as a threshold distance of 50 meters, or a distance traversed by the autonomous vehicle within a threshold time of 5 seconds given the autonomous vehicle's current speed), the autonomous vehicle may blank the object from the current time until at least the next scan period without taking into account braking considerations for avoiding the object.
Conversely, if the autonomous vehicle is located within the future state boundary, the autonomous vehicle may automatically perform a braking action to slow the autonomous vehicle to move the position of the autonomous vehicle outside of the future state boundary during a future (e.g., next) scan cycle.
However, if the autonomous vehicle is near a future state boundary of the object (e.g., outside of the future state boundary but within a threshold distance of it), the autonomous vehicle may perform the blocks of this variation of method S100 to characterize the direction of uncertainty of the motion of the object. If the angle between the direction of uncertainty of the motion of the object and the trajectory of the autonomous vehicle is greater than a threshold angle (e.g., if the object is passing through an intersection and approaching the same intersection as the autonomous vehicle), the autonomous vehicle may currently have insufficient information to discern whether the object is moving very quickly toward, or is on a course to collide with, the autonomous vehicle. However, because the current position of the autonomous vehicle falls outside of the future state boundary of the object, the autonomous vehicle may confirm that, even if it delays the maneuver for at least one more scan cycle, and even if the object is moving at the worst-case tangential and angular velocities within the predefined motion limit assumptions, the object will not collide with the autonomous vehicle before the autonomous vehicle can brake to a full stop. Thus, the autonomous vehicle may suspend the act of avoiding the object until (at least) the next scan cycle, when additional motion data about the object becomes available to the autonomous vehicle.
Further, if the autonomous vehicle is very close to the object (e.g., within two meters or 200 milliseconds from the object) and/or very close to the future state boundary of the object (e.g., within ten meters or one second from the future state boundary of the object), the autonomous vehicle may execute this variation of method S100 to predict the next direction of uncertainty for the object. For example, an autonomous vehicle may: selecting a nominal angular velocity hypothesis (e.g., 0 radians/second) for the object at the current time; and calculating a first tangential velocity of the object based on the first function and the nominal angular velocity. Alternatively, the autonomous vehicle may: calculating a maximum tangential velocity of the object towards the autonomous vehicle, the maximum tangential velocity being consistent with the set of predefined motion limit hypotheses and the first function (and thus being based on a radial velocity of a point representing the object in the current scan image and a radial length of the object); storing the maximum tangential velocity as a predicted first tangential velocity of the object; and calculating a corresponding predicted first angular velocity of the object based on the first function and the predicted first tangential velocity of the object. The autonomous vehicle may then predict a total relative motion of the object based on the first radial velocity, the predicted first tangential velocity, and the predicted nominal angular velocity; calculating a next relative position of the object with respect to the autonomous vehicle during a next scan period by integrating the total relative motion of the object over a time (e.g., a sampling interval of the sensor) from the current scan period to the next scan period; a predicted second direction of uncertainty of the motion of the object during the next scan cycle is then calculated, the second direction being perpendicular to a radial position of the object relative to the autonomous vehicle during the next scan cycle and falling within a horizontal plane. (more specifically, the autonomous vehicle may calculate a predicted second direction of uncertainty of object motion that falls in a tangential azimuthal direction predicted for the next scan cycle.)
Thus, the autonomous vehicle may calculate a predicted second direction of uncertainty of the motion of the object during the next scan cycle based on the motion of the autonomous vehicle at the current time and the first radial velocity, the predicted first tangential velocity, and the predicted first angular velocity of the object at the current time.
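The propagation step described above reduces to advancing the object's relative position by its predicted relative velocity over one sensor interval and taking the perpendicular to the new radial direction in the horizontal plane. A minimal two-dimensional sketch with illustrative names:

```python
import numpy as np

def predicted_uncertainty_direction(rel_pos_xy, rel_vel_xy, dt):
    """
    Propagate the object's relative position over one scan interval, then return the
    unit vector perpendicular to the new radial direction in the horizontal plane
    (the predicted tangential azimuth direction, along which motion stays least certain).
    """
    p_next = np.asarray(rel_pos_xy, dtype=float) + dt * np.asarray(rel_vel_xy, dtype=float)
    radial = p_next / np.linalg.norm(p_next)
    return np.array([-radial[1], radial[0]])   # 90-degree rotation in the x-y plane
```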
15.4 deterministic improvement prediction
Then, if the predicted second uncertainty direction differs from the first uncertainty direction for the current scan cycle, the autonomous vehicle may predict that the uncertainty of the motion of the object will be reduced during the next scan cycle, provided that the motion of the autonomous vehicle does not change. The autonomous vehicle may also characterize the predicted magnitude of improvement in the certainty of the motion of the object at the next scan cycle based on (or in proportion to) the angle between the first direction of uncertainty of the motion of the object and the predicted second direction.
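This angle-based proxy for the expected improvement can be computed directly; treating the improvement as the (sign-agnostic) angle between the two uncertainty directions is an illustrative simplification.

```python
import numpy as np

def certainty_improvement_deg(dir_now_xy, dir_next_xy):
    """
    Angle (in degrees) between the current and predicted uncertainty directions:
    0 when they are parallel (no expected improvement), 90 when orthogonal (maximal
    expected improvement in the certainty of the object's motion).
    """
    a = np.asarray(dir_now_xy, dtype=float)
    b = np.asarray(dir_next_xy, dtype=float)
    cos_angle = abs(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0))))
```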
However, if the predicted second uncertainty direction is parallel or nearly parallel to (e.g., within 5° of) the first uncertainty direction for the current scan cycle, the autonomous vehicle may repeat the foregoing process to recalculate the predicted second uncertainty direction of the object given a change in the motion of the autonomous vehicle between the current scan cycle and the next scan cycle (such as a change in steering angle, braking input, or acceleration input), while remaining within the calculated entry zone for many or all objects in the field and satisfying the predefined stationary autonomous navigation requirement. For example, the autonomous vehicle may model navigational actions that may produce a change in the direction of uncertainty of the object at the next scan cycle and thereby improve the certainty of the motion of the object.
Then, if one of these navigational actions produces a predicted change in the uncertainty direction (i.e., increases the angle between the predicted second uncertainty direction and the first uncertainty direction), the autonomous vehicle may perform that navigational action to modify the motion of the autonomous vehicle relative to the object during the next scan cycle. In particular, the autonomous vehicle may perform this navigation action specifically to improve its chances of obtaining information that better informs the actual motion of the object, rather than to avoid a collision with the object, because the autonomous vehicle has already confirmed that, even given the worst-case motion of the object, it has time to come to a complete stop before colliding with the object.
15.5 Flanking objects
If the first uncertainty direction intersects the current trajectory of the autonomous vehicle, such as within a threshold distance (e.g., 30 meters, 3 seconds, or a stop duration of the autonomous vehicle) ahead of the current position of the autonomous vehicle, the autonomous vehicle may additionally or alternatively perform the process to calculate a predicted second uncertainty direction of the motion of the object during the next scan cycle.
However, the autonomous vehicle may have high certainty of the motion of the object toward the autonomous vehicle if the predicted second direction of uncertainty of the motion of the object during the next scan cycle intersects the current trajectory of the autonomous vehicle outside of the threshold distance, or if the predicted second direction of uncertainty of the motion of the object during the next scan cycle is substantially parallel to the current trajectory of the autonomous vehicle (e.g., differs from the current trajectory of the autonomous vehicle by less than 20°).
For example, if the object and the autonomous vehicle flank each other in two immediately adjacent and parallel lanes, a first radial velocity of the object derived by the autonomous vehicle from a first scan image may indicate the motion of the object toward the autonomous vehicle (i.e., the highest-risk direction) with very high certainty, even if the true tangential velocity of the object is unknown from the first scan image. In this example, the direction of uncertainty in the motion of the object (e.g., the direction of the tangential velocity of the object) is substantially parallel to the direction of motion of the autonomous vehicle and the object and therefore does not inform the motion of the object toward the autonomous vehicle. Furthermore, the angle between the trajectory of the autonomous vehicle and the direction of uncertainty of the motion of the object is approximately 0°. Accordingly, the autonomous vehicle may select a navigational action to avoid collision with the object based on the future state boundary of the object, rather than based on the uncertainty of the motion of the object, since the motion components that (mainly) affect the risk of collision with the object are known.
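A minimal geometric sketch of this flanking test follows: it checks whether the object's uncertainty axis crosses the autonomous vehicle's straight-line trajectory within a threshold distance ahead and whether that axis is nearly parallel to the trajectory. The 30-meter and 20° values echo the examples above; the straight-line trajectory model and function names are illustrative assumptions rather than the patented implementation.

```python
import math

def uncertainty_informs_collision_risk(ego_pos, ego_heading, obj_pos, uncert_dir,
                                       threshold_dist=30.0, parallel_tol_deg=20.0):
    """Return True if the object's uncertainty axis intersects the ego trajectory within
    threshold_dist ahead and is not nearly parallel to it; in that case the unmeasured
    velocity component still matters for collision risk."""
    # Angle between the uncertainty axis and the ego heading (axis-agnostic).
    dot = abs(uncert_dir[0] * ego_heading[0] + uncert_dir[1] * ego_heading[1])
    angle = math.degrees(math.acos(min(1.0, dot)))
    if angle < parallel_tol_deg:
        return False  # uncertainty lies along the ego trajectory: flanking case
    # Intersect the line obj_pos + t * uncert_dir with the line ego_pos + s * ego_heading.
    denom = uncert_dir[0] * ego_heading[1] - uncert_dir[1] * ego_heading[0]
    if abs(denom) < 1e-9:
        return False
    dx, dy = obj_pos[0] - ego_pos[0], obj_pos[1] - ego_pos[1]
    s = (uncert_dir[0] * dy - uncert_dir[1] * dx) / denom  # distance ahead along the ego heading
    return 0.0 <= s <= threshold_dist
```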
15.6 future scan periods
The autonomous vehicle may also perform the foregoing process to predict a direction of uncertainty in the motion of the object over multiple future scan cycles (such as over the subsequent ten scan cycles or over the subsequent two seconds of operation of the autonomous vehicle), and thus predict an improvement in the certainty of the motion of the object. Accordingly, the autonomous vehicle may choose to blank the object over multiple subsequent scan periods without taking into account object avoidance considerations, as the autonomous vehicle predicts a change in direction of uncertainty of motion of the object over these multiple subsequent scan periods, and thus predicts an improvement in certainty of the motion of the object. Additionally or alternatively, the autonomous vehicle may: predicting (or "modeling") navigational actions performed by the autonomous vehicle that will produce a change in direction of uncertainty in the motion of the object over a plurality of subsequent scan cycles, and thus an improvement in the certainty of the motion of the object; such a navigation action is then performed within these scan periods to improve its certainty of the movement of the object.
16. Uncertainty of object motion
For a first scan cycle at the autonomous vehicle, a similar variation of the method S100 shown in fig. 5 includes: in block S104, accessing a first scan image containing data captured by a sensor on an autonomous vehicle at a first time; identifying a first set of points in the first scan image, the first set of points representing an object in a field near the autonomous vehicle in block S120; and characterizing a first motion of the object at the first time based on the first set of points in block S126. For a second scan cycle at the autonomous vehicle, this variation of method S100 further includes: accessing a second scan image containing data captured by the sensor at a second time subsequent to the first time in block S104; identifying a second set of points representing the object in the second scan image in block S120; and characterizing a second motion of the object at the second time based on the second set of points and the first motion in block S126. This variation of method S100 further includes: characterizing a second uncertainty of the second motion of the object at the second time in block S180; calculating a predicted third uncertainty of a third motion of the object at a third time subsequent to the second time based on the second motion of the object at the second time and the motion of the autonomous vehicle at the second time in block S182; and, in response to the predicted third uncertainty being lower than the second uncertainty, blanking the object, at the second time, from braking considerations for avoidance of the object by the autonomous vehicle in block S142.
Similarly, for a first scan cycle at the autonomous vehicle, this variation of method S100 may include: in block S104, accessing a first scan image containing data captured by a sensor on an autonomous vehicle at a first time; identifying a first set of points in the first scan image, the first set of points representing an object in a field near the autonomous vehicle in block S120; and characterizing a first motion of the object at the first time based on the first set of points in block S126. This variation of method S100 may also include: characterizing a first uncertainty of the first motion of the object at the first time in block S180; calculating a predicted second uncertainty of a second motion of the object at a second time subsequent to the first time based on the first motion of the object at the first time and the motion of the autonomous vehicle at the first time in block S182; and, in response to the predicted second uncertainty being lower than the first uncertainty, blanking the object, at the second time, from braking considerations for avoidance of the object by the autonomous vehicle in block S142.
16.1 uncertainty of object motion
Generally, in this variation, the autonomous vehicle may implement methods and techniques similar to those described above to: calculating a predicted next direction of uncertainty of motion of the object during a next scan period; comparing the current direction of uncertainty of the motion of the object and the predicted next direction to predict a magnitude of improvement in the certainty of the motion of the object in the future (e.g., proportional to orthogonality of the current direction of uncertainty of the motion of the object and the predicted next direction); then, if the autonomous vehicle predicts a (meaningful, significant) improvement in the certainty of the movement of the object in the future, a navigation maneuver to avoid the object is selectively delayed at the current time in response to the low certainty of the movement of the object, thereby reducing the change in the movement of the autonomous vehicle and improving the ride quality of the passenger, as described above.
16.2 uncertainty characterization: 3DOF
More specifically, in the variation described above in which the autonomous vehicle characterizes the motion of an object in three degrees of freedom, the autonomous vehicle may implement the above-described methods and techniques to: calculate a first function representing the motion of the object based on the radial velocities of points representing the object in the first scan image; calculate a first future state boundary of the object when the object is first detected in a first scan image captured during a first scan cycle; and verify that the autonomous vehicle is currently outside the first future state boundary of the object. The autonomous vehicle may then implement the above-described methods and techniques to predict a second function representing the motion of the object during the next scan cycle. For example, the autonomous vehicle may predict the (absolute or relative) motion of the object during the second scan cycle based on: the first radial velocity and the first function describing the motion of the object during the first scan cycle; a predefined motion limit assumption for the generic object; and integration over the time difference from the first scan cycle to the next scan cycle. The autonomous vehicle may then: predict a gradient of radial velocity versus azimuthal position across the points representing the object in the next scan image; calculate a predicted second function representing possible tangential and angular velocities of the object during the next scan cycle based on this gradient of radial velocity across these azimuthal positions; and calculate the intersection of the first function and the predicted second function divided by the union. The autonomous vehicle may then predict an information gain, and thus an improvement in the certainty of the motion of the object, that is inversely proportional to the intersection of the first function and the predicted second function divided by the union.
Thus, in the variant of the autonomous vehicle described above that characterizes the motion of the object in three degrees of freedom, in block S180 the autonomous vehicle may characterize a second uncertainty of the motion of the object at the second time, the second uncertainty being proportional to the ratio of the intersection of the first function and the second function to the union of the first function and the second function.
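As a minimal sketch of this certainty bookkeeping (not the patented implementation), the snippet below represents each function as a band of half-width equal to its error around a line in (tangential velocity, angular velocity) space, consistent with the linear trend-line characterization described in this disclosure, and estimates the intersection-over-union of two such bands by random sampling. The sampling bounds, the band parameters, and the `velocity_iou` name are illustrative assumptions; the predicted information gain is then taken as inversely proportional to the estimated IoU, and the uncertainty as proportional to it.

```python
import numpy as np

def band_membership(v_tan, omega, slope, intercept, width):
    """True where (v_tan, omega) lies within a band of half-width `width` around the
    line omega = slope * v_tan + intercept relating tangential and angular velocity."""
    return np.abs(omega - (slope * v_tan + intercept)) <= width

def velocity_iou(band_a, band_b, v_max=50.0, omega_max=4.0, samples=200_000, seed=0):
    """Monte-Carlo estimate of the intersection-over-union of two velocity-hypothesis bands;
    v_max follows the generic-object speed assumption, omega_max is an illustrative bound."""
    rng = np.random.default_rng(seed)
    v_tan = rng.uniform(-v_max, v_max, samples)
    omega = rng.uniform(-omega_max, omega_max, samples)
    in_a = band_membership(v_tan, omega, *band_a)
    in_b = band_membership(v_tan, omega, *band_b)
    union = np.count_nonzero(in_a | in_b)
    return np.count_nonzero(in_a & in_b) / union if union else 0.0

# Illustrative (slope, intercept, width) parameters for the first and predicted second functions:
iou = velocity_iou((0.8, 0.1, 0.3), (-1.2, 0.4, 0.3))
uncertainty = iou                                     # proportional to the intersection over the union
predicted_information_gain = 1.0 / max(iou, 1e-6)     # inversely proportional to it
```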
16.2 uncertainty characterization: 6DOF
In the above-described variant in which the autonomous vehicle characterizes the motion of the object in six degrees of freedom, the autonomous vehicle may implement similar methods and techniques to characterize the uncertainty of the motion of the object during the next scan cycle.
For example, an autonomous vehicle may: calculating a first function representing possible tangential, angular and pitch motion of the object during a current (i.e., first) scan cycle; the above-described methods and techniques are implemented to predict (absolute or relative) motion of an object in six degrees of freedom during a second scan cycle; predicting a first gradient of radial velocity, azimuth position, and elevation position of a point representing the object in a next (i.e., second) scan image; and calculating a predicted second function based on the first gradient of radial velocity at the azimuth and elevation positions, the second function representing possible tangential, angular and pitch velocities of the object during a second scan cycle. The autonomous vehicle may then calculate an intersection of the first function (e.g., the three-dimensional ellipsoid) and the predicted second function (e.g., the three-dimensional ellipsoid) divided by the union.
The autonomous vehicle may then predict an information gain, and thus an improvement in the certainty of the motion of the object during the second scan period, that is inversely proportional to the intersection of the first function and the predicted second function divided by the union.
Then, during a second scan cycle, the autonomous vehicle may: calculating a second function representing possible tangential, angular and pitch motion of the object during a second scan cycle; and characterizing a difference between the second function and the predicted second function. The autonomous vehicle may then implement the above-described methods and techniques to predict (absolute or relative) motion of the object in six degrees of freedom during the third scan cycle based on motion expressed at the intersection of the first function and the second function, integrated over the time difference from the second scan cycle to the next (i.e., third) scan cycle, and corrected (or "adjusted") according to the difference between the second function and the predicted second function. The autonomous vehicle may then: predicting a second gradient of radial velocity, azimuth position and elevation position of points representing the object in the third scan image; and based on this second gradient of radial velocity at these azimuth and elevation positions, a predicted third function is calculated that represents the possible tangential, angular and pitch velocities of the object during the next (i.e. third) scan cycle. The autonomous vehicle may then calculate an intersection of the first function, the second function, and the predicted third function divided by the union.
Thus, the autonomous vehicle may predict an information gain, and thus an improvement in the certainty of the motion of the object during the third scan cycle, that is inversely proportional to the intersection of the first function, the second function, and the predicted third function divided by the union.
16.3 object blanking
Then, as described above, if the current position of the autonomous vehicle falls outside of the current future state boundary computed for the object by more than a threshold distance, and if the autonomous vehicle predicts an improvement in motion uncertainty of the object (such as specifically in the direction of the current trajectory of the autonomous vehicle), the autonomous vehicle may blank the object at least until the next scan period, without taking into account braking considerations (or more generally, without a reactive navigation action) for the autonomous vehicle to avoid the object.
16.4 actions to reduce uncertainty
Alternatively, in this variation, the autonomous vehicle may select a navigational action to change its trajectory in order to capture motion data of the object, which may improve (i.e., reduce) the uncertainty of the motion of the object during future scan cycles, as described above.
16.5 future scan periods
The autonomous vehicle may also perform the foregoing process to predict the uncertainty of the motion of the object over multiple future scan cycles (such as over the next ten scan cycles, or over the next two seconds of operation of the autonomous vehicle). Accordingly, the autonomous vehicle may choose to blank the object from object avoidance considerations over multiple subsequent scan periods because it predicts a sufficient reduction in the uncertainty of the motion of the object over these multiple subsequent scan periods. Additionally or alternatively, the autonomous vehicle may: predict (or "model") navigational actions performed by the autonomous vehicle that will yield a reduction in the uncertainty of the motion of the object over a plurality of subsequent scan cycles; and then perform such a navigational action within these scan periods to improve its certainty of the motion of the object.
17. Uncertainty boundary
In a similar variation shown in fig. 6, the autonomous vehicle executes the blocks of method S100 to: detect an object in a scan image of the field (e.g., a 3D velocity-annotated point cloud) surrounding the autonomous vehicle; extract low-uncertainty motion data for the object (e.g., a radial velocity relative to the autonomous vehicle) from the scan image; identify critical motion data for the object that is currently unavailable to the autonomous vehicle but that may enable the autonomous vehicle to verify worst-case object motion that may result in a future collision between the autonomous vehicle and the object; and predict when the critical motion data will become accessible to the autonomous vehicle given the current speed of the autonomous vehicle and this worst-case motion of the object. The autonomous vehicle may then selectively delay performing a collision avoidance action (e.g., slowing, stopping) for the object in response to predicting that the autonomous vehicle will access the critical motion data for the object at a future time that still enables the autonomous vehicle to brake to a full stop prior to colliding with the object (e.g., so that any such collision would be solely the responsibility of the object and not of the autonomous vehicle).
For example, when the autonomous vehicle detects an object in its vicinity, the autonomous vehicle may perform the blocks of method S100 to estimate a critical future time at which the position of the object relative to the autonomous vehicle will sufficiently change to enable the autonomous vehicle to capture additional object motion data that reduces uncertainty in object motion and thus enables the autonomous vehicle to verify the likelihood of a collision with the object. The autonomous vehicle may then confirm that even given the worst-case motion of the object (such as defined by a predefined assumption for the maximum speed of the generic object), if the autonomous vehicle delays the emergency stop until after the critical future time, the autonomous vehicle may still brake to a full stop before colliding with the object; if so, the autonomous vehicle may delay performing a preemptive collision avoidance action on the object, which may improve predictability of movement of the autonomous vehicle to other vehicles, drivers, and pedestrians in the vicinity and smooth movement of the autonomous vehicle during operation.
Similarly, given the worst-case motion of the object, the autonomous vehicle may calculate a maximum critical speed of the autonomous vehicle at a critical future time that enables the autonomous vehicle to brake to a full stop before colliding with the object if the autonomous vehicle delays the emergency stop until after the critical future time. Then, if the current speed of the autonomous vehicle is less than the maximum critical speed, the autonomous vehicle may limit its maximum speed to the maximum critical speed until a critical future time; or if the current speed of the autonomous vehicle is greater than the maximum critical speed, the autonomous vehicle may automatically coast or brake to reduce its speed to the maximum critical speed by a critical future time.
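A minimal sketch of this speed policy appears below. It assumes a constant-deceleration braking model with an illustrative deceleration value (the disclosure instead references a braking model and road-surface friction estimates), and the function names are hypothetical.

```python
import math

def max_critical_speed(remaining_distance_m, decel_mps2=6.0):
    """Maximum speed at the critical time from which the autonomous vehicle can still brake
    to a full stop within remaining_distance_m, under a constant-deceleration model."""
    return math.sqrt(max(0.0, 2.0 * decel_mps2 * remaining_distance_m))

def speed_policy(current_speed_mps, remaining_distance_m, decel_mps2=6.0):
    """Cap speed at the maximum critical speed, or coast/brake down to it if already faster."""
    v_crit = max_critical_speed(remaining_distance_m, decel_mps2)
    if current_speed_mps <= v_crit:
        return ("limit_max_speed", v_crit)   # hold speed at or below v_crit until the critical time
    return ("coast_or_brake", v_crit)        # reduce speed to v_crit by the critical time
```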
For example, an autonomous vehicle may: store a worst-case speed and acceleration (e.g., a maximum speed of 50 meters/second and a maximum acceleration of 9 meters/second²) for a high-performance passenger vehicle or a high-performance motorcycle; define possible movements of the object, in directions that cannot be measured by the autonomous vehicle, based on these worst-case speeds and accelerations; verify whether the object could reach and collide with the autonomous vehicle given speeds within these limits; and then execute subsequent blocks of method S100 to selectively delay avoidance of the object in order to collect additional motion data and further verify the motion of the object. Thus, the autonomous vehicle may reduce or eliminate reliance on object recognition and other machine learning techniques to: identify a type of the object; distinguish between immutable objects (e.g., signs, poles) and mutable objects (e.g., pedestrians, vehicles) in the field around the autonomous vehicle; and select a dynamics model or predict future motion of the object based on its type. More specifically, rather than predicting future movement of the object based on a dynamics model selected according to the predicted type of the object, the autonomous vehicle may: predict and bound current and future motion of the object based on the limited motion data collected during the current scan cycle, the current location of the object relative to the autonomous vehicle, and maximum speed and acceleration assumptions for a generic object (e.g., a generic high-performance passenger vehicle); and verify whether movement of the object within these bounds can cause the object to collide with the autonomous vehicle.
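To make the worst-case bound concrete, the sketch below integrates the assumed motion limits (the 50 m/s maximum speed and 9 m/s² maximum acceleration from the example above) over a time horizon such as the stop duration to obtain a conservative radius for a circular region reachable by the object; treating the reachable region as a circle and the function name are simplifying assumptions for illustration.

```python
def worst_case_reach(v0_mps, horizon_s, v_max_mps=50.0, a_max_mps2=9.0):
    """Distance a generic object could cover within horizon_s when accelerating at a_max
    from its current speed v0 until reaching v_max (the predefined motion limit assumptions)."""
    t_to_vmax = max(0.0, (v_max_mps - v0_mps) / a_max_mps2)
    if t_to_vmax >= horizon_s:
        return v0_mps * horizon_s + 0.5 * a_max_mps2 * horizon_s ** 2
    d_accel = v0_mps * t_to_vmax + 0.5 * a_max_mps2 * t_to_vmax ** 2
    return d_accel + v_max_mps * (horizon_s - t_to_vmax)

# Example: an object already moving at 15 m/s, over a 3-second stop duration.
boundary_radius_m = worst_case_reach(15.0, 3.0)   # conservative radius of a circular reachable region
```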
Thus, by executing the blocks of method S100 to inform path planning decisions, the autonomous vehicle may: reduce or eliminate the need to accurately identify the type or class of an object in its environment; reduce or eliminate this possible source of error in autonomous operation of the autonomous vehicle; and increase the robustness of autonomous operation of the autonomous vehicle, such as robustness against adversarial computer vision attacks or adversarial neural network attacks, or robustness with limited or no a priori training data.
Furthermore, the autonomous vehicle may implement the same detection, tracking, and motion planning decision paths for both mutable and immutable objects, thereby reducing or eliminating the need to identify categories of objects (or to classify objects as mutable or immutable) in the environment of the autonomous vehicle and reducing the number of unique computer vision, machine learning, and path planning pipelines executed on the autonomous vehicle. For example, the autonomous vehicle may execute the same detection, tracking, and motion planning decision paths to predict and handle: objects that may be present in the environment of the autonomous vehicle but are undetectable because they are occluded by other detected objects (e.g., a pedestrian standing behind a utility pole; a passenger vehicle occupying a lane occluded by a tractor trailer in the field of view of the autonomous vehicle); objects entering the field of view of the autonomous vehicle for the first time; and objects already present in the field of view of the autonomous vehicle.
17.1 object motion measurement limits and uncertainties
In general, the autonomous vehicle is able to characterize the motion of an object detected in its field in three degrees of freedom, such as: translation in a radial direction extending from the autonomous vehicle to the object; translation in a horizontal tangential direction perpendicular to the radial direction; and rotation about a yaw axis of the object. However, the points in the scan image described above may contain only 1D motion observations (i.e., the rate of change of distance along the radial axis) of objects in the field. As described above, the autonomous vehicle may: isolate clusters of points represented in the scan image at similar distances from the autonomous vehicle; interpolate a 2D motion (e.g., a radial velocity relative to the autonomous vehicle and a yaw rate of the object) consistent with the 1D motion observations at these points in the scan image; and thus associate the cluster of points with one object in the field. Thus, the autonomous vehicle may derive a radial velocity of the object (i.e., the velocity of the object along a ray extending from the autonomous vehicle through the object) and a yaw rate of the object from the scan image.
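The derivation of this 2D motion from the per-point radial velocities can be sketched as a least-squares trend-line fit of radial velocity against azimuthal position across the cluster, consistent with the trend-line characterization described in this disclosure. The function name, the use of the mean as the aggregate radial velocity, and the standard-deviation error measure are illustrative assumptions.

```python
import numpy as np

def characterize_cluster(azimuth_rad, radial_vel_mps):
    """Fit a linear trend line of radial velocity versus azimuthal position across a cluster
    of points representing one object; return the aggregate radial velocity, the slope that
    couples the object's possible tangential and angular velocities, and a residual error."""
    azimuth_rad = np.asarray(azimuth_rad, dtype=float)
    radial_vel_mps = np.asarray(radial_vel_mps, dtype=float)
    slope, intercept = np.polyfit(azimuth_rad, radial_vel_mps, 1)
    residuals = radial_vel_mps - (slope * azimuth_rad + intercept)
    return {
        "radial_velocity": float(np.mean(radial_vel_mps)),   # central tendency of per-point radial velocities
        "slope": float(slope),                                # relates possible tangential and angular velocities
        "error": float(np.std(residuals)),                    # width of the uncertainty band around the trend line
    }
```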
However, the scan image may not contain information related to the tangential velocity of the object (i.e., the motion perpendicular to the ray extending from the autonomous vehicle to the object). Thus, the uncertainty in the tangential velocity of the object during the current scan cycle may be relatively high compared to the uncertainty in the radial velocity of the object, which is directly measured by the sensors in the autonomous vehicle and stored in the current scan image.
However, if the autonomous vehicle is moving relative to the object, the perspective of the object by the autonomous vehicle may change from the current scan cycle to a later scan cycle such that the object falls at a different azimuthal location in the field of view of the autonomous vehicle during the later scan cycle. Thus, the radial velocity of the object thus derived from the later scan image captured by the autonomous vehicle during the later scan cycle may correspond to the velocity of the object in a direction in the absolute reference frame that is different from the radial direction of the object represented in the scan image captured during the current scan cycle.
Thus, as the autonomous vehicle and the object continue to move relative to each other during subsequent scan cycles, the autonomous vehicle: can expect to access a set of radial velocities of the object spanning a range of tangential directions; may selectively delay collision avoidance actions in order to access the radial velocities of the object along these directions and reduce uncertainty in the motion of the object; and may build future path planning decisions on higher-certainty knowledge of the motion of the object, thereby improving efficiency and smoothing the motion of the autonomous vehicle.
17.2 velocity uncertainty boundary
In general, the autonomous vehicle may implement the above-described methods and techniques to fuse the measured radial velocity of an object and the maximum velocity of a generic object specified by predefined motion limit assumptions into a velocity uncertainty boundary that represents a set of many (or all) possible velocities of the object at the current time.
For example, an autonomous vehicle may: initialize a set of vectors in a (polar) coordinate system with the center of the autonomous vehicle as its origin, wherein each vector represents a possible velocity of the object relative to the autonomous vehicle in the coordinate system during the current scan cycle; set the component length of each vector in the radial direction equal to the currently measured radial velocity of the object; assign to these vectors tangential component lengths spanning from the negative maximum velocity of the generic object to the positive maximum velocity of the generic object; locate the set of vectors extending from the center of the object in the coordinate system; and compute an ellipse or ellipsoid containing these vectors to define the velocity uncertainty boundary of the object during the current scan cycle.
In this example, the autonomous vehicle may similarly calculate a range of vectors in the radial direction having a component length that spans a range of radial velocities of points associated with the object in the current scan image and/or a range of errors in radial velocity measurements across the sensor that generated the scan image. The autonomous vehicle may then compute an ellipse or ellipsoid through these vectors to define the velocity uncertainty boundary of the object during the current scan cycle.
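A minimal sketch of such a boundary follows: it returns points on an ellipse, in a radial/tangential frame centered on the object, whose radial half-axis spans the measured per-point radial velocities plus a sensor-error margin and whose tangential half-axis spans plus or minus the assumed maximum speed of a generic object. The specific error margin, the sample count, and the function name are assumptions for illustration.

```python
import numpy as np

def velocity_uncertainty_boundary(radial_vels_mps, sensor_err_mps=0.5,
                                  v_max_generic_mps=50.0, n=64):
    """Points on an ellipse (radial/tangential frame centered on the object) bounding the
    object's possible velocities: the radial axis spans the measured radial velocities plus
    sensor error; the tangential axis spans +/- the assumed maximum generic-object speed."""
    r_lo = min(radial_vels_mps) - sensor_err_mps
    r_hi = max(radial_vels_mps) + sensor_err_mps
    r_center = 0.5 * (r_lo + r_hi)
    r_half = 0.5 * (r_hi - r_lo)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    radial = r_center + r_half * np.cos(theta)       # radial velocity component (measured)
    tangential = v_max_generic_mps * np.sin(theta)   # tangential velocity component (unmeasured)
    return np.stack([radial, tangential], axis=1)
```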
However, the autonomous vehicle may calculate the speed uncertainty boundary for the object in any other manner.
17.3 Collision velocity, time-to-collision, and critical time
In general, if the autonomous vehicle continues to travel along its current trajectory, the autonomous vehicle may predict a future time at which a particular speed of the object contained within the speed uncertainty boundary of the object will result in a collision with the autonomous vehicle.
More specifically, the autonomous vehicle may: predict an upcoming path of the autonomous vehicle based on its current speed, its planned route, and/or a network of known lanes around the autonomous vehicle; scan the speed uncertainty boundary of the object for a particular speed that could cause the object to arrive at a particular location along the upcoming path of the autonomous vehicle (such as according to, or regardless of, the known lane network) at approximately the same time as the autonomous vehicle; estimate a time-to-collision at which an object moving at that particular speed and the autonomous vehicle moving along the path would reach the particular location; and calculate a critical time that precedes the time-to-collision by the current stop duration of the autonomous vehicle.
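The sketch below illustrates this scan under simplifying assumptions: each velocity hypothesis from the uncertainty boundary is propagated at constant velocity, the ego path is supplied as a hypothetical `ego_path_fn` mapping time to a 2D position, and a collision is approximated by a fixed proximity radius. The critical time is then the earliest predicted time-to-collision minus the stop duration.

```python
import numpy as np

def earliest_time_to_collision(ego_path_fn, obj_pos, velocity_hypotheses,
                               horizon_s=8.0, dt=0.1, collision_radius_m=2.0):
    """For each velocity hypothesis from the uncertainty boundary (2D world-frame vectors),
    propagate the object at constant velocity, compare against the ego position along its
    planned path, and return the earliest predicted time-to-collision (or None)."""
    obj_pos = np.asarray(obj_pos, dtype=float)
    times = np.arange(dt, horizon_s, dt)
    ego_traj = np.array([ego_path_fn(t) for t in times])   # ego position over time along its path
    earliest = None
    for v in np.asarray(velocity_hypotheses, dtype=float):
        obj_traj = obj_pos + np.outer(times, v)            # object position over time for this hypothesis
        hits = np.linalg.norm(obj_traj - ego_traj, axis=1) < collision_radius_m
        if hits.any():
            t_hit = float(times[int(np.argmax(hits))])
            earliest = t_hit if earliest is None else min(earliest, t_hit)
    return earliest

def critical_time(time_to_collision_s, stop_duration_s):
    """Critical time: the time-to-collision less the current stop duration of the vehicle."""
    return max(0.0, time_to_collision_s - stop_duration_s)
```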
17.4 object motion uncertainty prediction at critical time
Generally, an autonomous vehicle may: predicting object motion data accessible to the autonomous vehicle between a current time and a critical time; and predict how these additional object motion data can reduce the uncertainty of object motion.
17.4.1 second radial direction at critical time
In one implementation, the autonomous vehicle: estimates a position of the autonomous vehicle at the critical time based on the current path and speed of the autonomous vehicle; estimates a position of the object at the critical time based on the current position of the object and a worst-case velocity of the object calculated for the current scan cycle as described above; and calculates a second radial direction (or azimuth) from the autonomous vehicle to the object at the critical time based on these estimated positions of the autonomous vehicle and the object at the critical time. The autonomous vehicle may implement similar methods and techniques to estimate a range of radial directions from the autonomous vehicle to the object from the current time to the critical time based on the current path and speed of the autonomous vehicle, the current location of the object, and a worst-case speed of the object over this time period.
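As a small sketch (assuming straight-line ego motion and a single worst-case object velocity vector, both illustrative simplifications), the azimuth from the ego vehicle to the object can be evaluated now and at the critical time; the change in azimuth indicates how much of the previously unmeasured tangential velocity will project onto the new radial axis.

```python
import math

def radial_direction_at(ego_pos, ego_vel, obj_pos, obj_worst_vel, t):
    """Azimuth (radians, world frame) from the autonomous vehicle to the object at time t,
    assuming straight-line ego motion and a single worst-case object velocity vector."""
    ego_t = (ego_pos[0] + ego_vel[0] * t, ego_pos[1] + ego_vel[1] * t)
    obj_t = (obj_pos[0] + obj_worst_vel[0] * t, obj_pos[1] + obj_worst_vel[1] * t)
    return math.atan2(obj_t[1] - ego_t[1], obj_t[0] - ego_t[0])

# Illustrative values: the change in azimuth between now and a 2.5-second critical time.
delta_azimuth = (radial_direction_at((0, 0), (12, 0), (40, 6), (0, -8), 2.5)
                 - radial_direction_at((0, 0), (12, 0), (40, 6), (0, -8), 0.0))
```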
17.4.2 future speed uncertainty boundary at critical time
Then, the autonomous vehicle: the above-described methods and techniques are implemented to calculate a future speed uncertainty boundary for an object based on object motion data that may be collected by the autonomous vehicle by the critical time (assuming that the autonomous vehicle and the object arrive at these estimated locations at the critical time).
17.4.3 uncertainty at the critical time
The autonomous vehicle may then characterize the uncertainty of the motion of the object at the critical time, such as proportional to the range of possible velocities of the object at the critical time in the tangential direction relative to the autonomous vehicle (i.e., perpendicular to the radial direction). Then, if this predicted uncertainty in the velocity of the object at the critical time is below a threshold uncertainty (e.g., if the range of possible tangential velocities of the object is less than 4 meters/second), the autonomous vehicle may blank the object from path planning decisions during the current scan period, or otherwise choose to delay any collision avoidance action in response to the object to a future time in block S142.
Conversely, if the predicted uncertainty of the velocity of the object at the critical time exceeds the threshold uncertainty (e.g., if the range of possible tangential velocities of the object is greater than 4 meters/second), the autonomous vehicle may decrease its velocity, such as proportionally to the uncertainty, to further extend the critical time into the future, thereby enabling the autonomous vehicle to capture additional motion data of the object before a possible collision with the object, and thus reduce the motion uncertainty of the object before the delayed critical time.
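A minimal decision-rule sketch under these thresholds follows; the 4 m/s uncertainty threshold comes from the example above, while the proportional speed-reduction gain and function name are illustrative assumptions.

```python
def plan_for_object(predicted_tangential_range_mps, current_speed_mps,
                    uncertainty_threshold_mps=4.0, speed_reduction_gain=0.5):
    """Blank the object this cycle if the predicted tangential-velocity uncertainty at the
    critical time is low enough; otherwise slow down in proportion to the excess uncertainty
    to push the critical time further out and gather more motion data first."""
    if predicted_tangential_range_mps < uncertainty_threshold_mps:
        return ("blank_object", current_speed_mps)
    excess = predicted_tangential_range_mps - uncertainty_threshold_mps
    target_speed = max(0.0, current_speed_mps - speed_reduction_gain * excess)
    return ("reduce_speed", target_speed)
```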
17.5 changing objects and points
Further, because the autonomous vehicle may predict the motion of the object independently of the type of the object, and accordingly independently of object classification or recognition, the autonomous vehicle may define a set of points that spans multiple real objects in the field, such as when those objects move along similar trajectories and at similar speeds. The autonomous vehicle may nevertheless implement the aforementioned methods and techniques to calculate, refine, and avoid future state boundaries for this "grouped object" until these real objects no longer move along similar trajectories and/or at similar speeds, at which point the autonomous vehicle may: distinguish these objects in the current scan cycle; transfer motion characteristics from the preceding grouped object to each of the distinct objects; and then compute future state boundaries for each of these objects, as described above.
Similarly, the autonomous vehicle may distinguish two clusters of points representing a single real object and implement the methods and techniques described above to calculate, refine, and avoid future state boundaries for the two clusters, such as until the autonomous vehicle determines that the proximity and self-consistency of the radial velocities (or rates of change of distance) of the points in the two clusters indicate a single object.
Additionally or alternatively, the autonomous vehicle may implement the aforementioned methods and techniques to compute, refine, and avoid future state boundaries for individual points and smaller clusters of points representing sub-regions of objects in the field around the autonomous vehicle.
The systems and methods described herein may be at least partially embodied and/or implemented as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions may be executed by computer-executable components integrated with an application, applet, host, server, network, website, communication service, communication interface, hardware/firmware/software element of a user computer or mobile device, wristband, smartphone, or any suitable combination thereof. Other systems and methods of the embodiments may be at least partially embodied and/or implemented as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions may be executed by computer-executable components integrated with devices and networks of the type described above. The computer-readable instructions may be stored on any suitable computer-readable medium, such as RAM, ROM, flash memory, EEPROM, an optical device (CD or DVD), a hard disk drive, a floppy disk drive, or any suitable device. The computer-executable components may be processors, but any suitable dedicated hardware device may (alternatively or additionally) execute the instructions.
As those skilled in the art will recognize from the foregoing detailed description and from the accompanying drawings and claims, modifications and variations can be made to the embodiments of the invention without departing from the scope of the invention as defined in the following claims.
Claims (60)
1. A method for autonomous navigation of an autonomous vehicle, comprising:
● Accessing a set of predefined motion limit hypotheses for a generic object approaching a public road;
● For a first scanning period:
accessing a first scan image containing data captured by a sensor on the autonomous vehicle at a first time;
identifying a first set of points in the first scan image, the first set of points representing a first object in a field in proximity of the autonomous vehicle, each point in the first set of points comprising:
a first range value from the sensor to a surface on the first object;
a first azimuthal position of the surface on the first object relative to the sensor; and
a first radial velocity of the surface of the first object relative to the sensor;
calculating a first correlation between a first radial velocity and a first azimuthal position of points in the first set of points;
Computing a first function relating a possible tangential velocity of the first object to a possible angular velocity of the first object at the first time based on the first correlation; and
calculating a first radial velocity of the first object at the first time based on first radial velocities of points in the first set of points;
● Estimating a first stop duration for the autonomous vehicle to reach a full stop based on a first speed of the autonomous vehicle at the first time;
● Calculating a first critical time offset from the first time by the stop duration;
● Calculating a first future state boundary that represents a first ground area that the first object may enter at the first critical time based on:
a possible tangential velocity of the first object and a possible angular velocity of the first object at the first time defined by the first function;
the first radial velocity; and
the set of predefined motion limit assumptions; and
● Selecting a first navigation action to avoid entering the first future state boundary before the first critical time.
2. The method of claim 1:
● Further comprising calculating an entry zone around the autonomous vehicle that does not include the first future state boundary of the first object; and
● Wherein selecting the first navigation action comprises performing the first navigation action to navigate toward the entry zone in response to a first location of the autonomous vehicle at the first time falling within a threshold distance of a perimeter of the first future state boundary.
3. The method of claim 1, wherein selecting the first navigation action comprises performing a braking maneuver that decelerates the autonomous vehicle in response to a first location of the autonomous vehicle at the first time falling within the first future state boundary.
4. The method of claim 1, wherein selecting the first navigation action includes maintaining a speed of the autonomous vehicle in response to a first position of the autonomous vehicle at the first time falling outside the first future state boundary.
5. The method of claim 1, further comprising:
● For a second scan period subsequent to the first scan period:
Accessing a second scan image containing data captured by the sensor at a second time;
identifying a second set of points in the second scan image representing the first object in the field;
calculating a second correlation between a second radial velocity and a second azimuthal position of points in the second set of points;
computing a second function relating a possible tangential velocity of the first object at the second time to a possible angular velocity of the first object based on the second correlation; and
calculating a second radial velocity of the first object at the second time based on second radial velocities of points in the second set of points;
● Estimating a second tangential velocity of the first object and a second angular velocity of the first object at the second time based on an intersection of the first function and the second function;
● Estimating a second stop duration for the autonomous vehicle to reach a full stop based on a second speed of the autonomous vehicle at the second time;
● Calculating a second critical time offset from the second time by the stop duration;
● Calculating a second future state boundary that represents a second surface area accessible to the first object at the second critical time based on:
a second tangential velocity of the first object;
the second angular velocity of the first object;
the second radial velocity; and
the set of predefined motion limit assumptions; and
● Selecting a second navigation action to avoid entering the second future state boundary before the first critical time.
6. The method of claim 5:
● Wherein calculating the first future state boundary comprises calculating the first future state boundary that satisfies:
in a plane substantially parallel to the road surface; and
o is characterized by a first area dimension; and
● Wherein calculating the second future state boundary comprises calculating the second future state boundary that satisfies:
within said plane; and
characterized by a second area dimension smaller than the first area dimension.
7. The method of claim 1, wherein calculating the first correlation comprises:
● Calculating a first linear trend line of a first radial velocity and a first azimuthal position through points of the first set of points; and
● Calculating the first correlation based on a first slope of the first linear trend line, the slope representing a relationship between a first tangential velocity of the first object and a first angular velocity of the first object at the first time.
8. The method of claim 7:
● Further comprising characterizing a first error of the first linear trend line based on a deviation of a first radial velocity of a point of the first set of points from the first linear trend line;
● Wherein computing the first function comprises:
calculating a first line relating a possible tangential velocity of the first object and a possible angular velocity of the first object at the first time based on the first correlation; and
calculating a first width of the first line based on the first error; and
● Wherein calculating the first future state boundary comprises calculating the first future state boundary based on a possible tangential velocity of the first object and a possible angular velocity of the first object at the first time, represented by a first line of the first width.
9. The method of claim 1, wherein accessing the first scan image comprises accessing the first scan image containing data captured by the sensor, the sensor comprising a four-dimensional light detection and ranging sensor that:
● Is mounted on the autonomous vehicle; and
● Is configured to generate a scanned image representative of a position and velocity of a surface within the field relative to the sensor.
10. A method for autonomous navigation of an autonomous vehicle, comprising:
● Estimating, at a first time, a stop duration for the autonomous vehicle to reach a full stop based on a speed of the autonomous vehicle at the first time;
● Calculating a first critical time offset from the first time by the stop duration;
● Detecting a first object in a first scan image of a field in proximity to the autonomous vehicle, the first scan image captured by a sensor on the autonomous vehicle at about the first time;
● Deriving a first position and a first motion of the first object based on the first scan image;
● Calculating a first future state boundary representing a first ground area accessible by the first object from the first time to the first critical time based on:
the first position of the first object at the first time;
The first motion of the first object; and
a set of predefined motion limit assumptions for a generic object approaching a public road; and
● Selecting a first navigation action to avoid entering the first future state boundary before the first critical time.
11. The method of claim 10:
● Further comprising accessing the set of predefined motion limit assumptions comprising:
a maximum linear acceleration of a generic ground-based vehicle;
a maximum linear velocity of a generic ground-based vehicle; and
a maximum angular velocity of a generic ground-based vehicle; and
● Wherein calculating the first future state boundary comprises:
integrating the first motion of the first object moving from the first position of the first object up to the maximum angular velocity and accelerating to the maximum linear velocity according to the maximum linear acceleration during the stopping duration to calculate the first ground area that the first object may enter from the first time to the first critical time; and
storing the first ground area as the first future state boundary.
12. The method of claim 11:
● Further comprising:
detecting a second object in the first scan image;
deriving a second position and a second motion of the second object based on the first scan image;
integrating the second motion of the second object moving from the second position of the second object up to the maximum angular velocity and accelerating to the maximum linear velocity according to the maximum linear acceleration, within the stopping duration, to calculate a second ground area accessible by the second object from the first time to the first critical time; and
storing the second ground region as a second future state boundary; and
● Wherein selecting the first navigational action comprises selecting the first navigational action to avoid entering into the first future state boundary and the second future state boundary before the first critical time.
13. The method of claim 12:
● Further comprising calculating an entry zone around the autonomous vehicle that does not include the first future state boundary of the first object and the second future state boundary of the second object; and
● Wherein selecting the first navigation action comprises performing the first navigation action that navigates towards the entry zone.
14. The method of claim 10:
● Further comprising calculating an entry zone around the autonomous vehicle that does not include the first future state boundary of the first object; and
● Wherein selecting the first navigation action comprises performing the first navigation action to navigate toward the entry zone in response to a first location of the autonomous vehicle at the first time falling within a threshold distance of a perimeter of the first future state boundary.
15. The method of claim 10:
● Wherein detecting the object in the first scanned image comprises identifying, in the first scanned image, a first set of points representing a first object in a field in a vicinity of the autonomous vehicle, each point of the first set of points comprising:
a first range value from the sensor to a surface on the first object;
a first azimuthal position of the surface on the first object relative to the autonomous vehicle; and
a first radial velocity of the surface of the first object relative to the autonomous vehicle;
● Wherein deriving the first position and the first motion of the first object comprises:
calculating a first correlation between a first radial velocity and a first azimuthal position of a point in the first set of points;
computing a first function relating a possible tangential velocity of the first object at the first time to a possible angular velocity of the first object based on the first correlation;
calculating a first radial velocity of the first object at the first time based on first radial velocities of points in the first set of points; and
deriving the first position of the first object based on first range values and first azimuthal positions of points in the first set of points; and
● Wherein calculating the first future state boundary comprises calculating the first future state boundary based on:
a possible tangential velocity of the first object and a possible angular velocity of the first object at the first time as defined by the first function;
the first radial velocity;
the first position, and
the set of predefined motion limit assumptions.
16. The method of claim 15, further comprising:
● For a second scan period subsequent to the first scan period:
accessing a second scan image containing data captured by the sensor at a second time;
identifying a second set of points in the second scan image that represent the first object in the field;
calculating a second correlation between a second radial velocity and a second azimuthal position of points in the second set of points;
computing a second function relating a possible tangential velocity of the first object at the second time to a possible angular velocity of the first object based on the second correlation; and
calculating a second radial velocity of the first object at the second time based on second radial velocities of points in the second set of points;
● Estimating a second tangential velocity of the first object and a second angular velocity of the first object at the second time based on an intersection of the first function and the second function;
● Estimating a second stop duration for the autonomous vehicle to come to a full stop based on a second speed of the autonomous vehicle at the second time;
● Calculating a second critical time offset from the second time by the stop duration;
● Calculating a second future state boundary representing a second surface area accessible by the first object at the second critical time based on:
the second tangential velocity of the first object;
the second angular velocity of the first object;
the second radial velocity; and
the set of predefined motion limit assumptions; and
● Selecting a second navigation action to avoid entering the second future state boundary before the first critical time.
17. The method of claim 16, wherein calculating the second future state boundary comprises calculating a second future state boundary representing the second ground area that is smaller than the first ground area.
18. The method of claim 10:
● Further comprising:
detecting a second object in the first scan image;
deriving a second position and a second motion of the second object based on the first scan image;
calculating a second future state boundary representing a second surface region accessible by the second object from the first time to the first critical time based on:
■ The second location of the second object at the first time;
■ The second motion of the second object; and
■ The set of predefined motion limit assumptions for a generic object approaching a public road; and
responsive to a second distance from the autonomous vehicle to a second perimeter of the second future state boundary at the first time exceeding a threshold distance, blanking the second object from a next path planning consideration at the autonomous vehicle; and
● Wherein selecting the first navigation action comprises activating the first object in the next path planning consideration at the autonomous vehicle in response to a first distance from the autonomous vehicle to a first perimeter of the first future state boundary at the first time falling within the threshold distance.
19. The method of claim 10, wherein estimating the stop duration comprises:
● Accessing a second image of the field captured at about the first time by a second sensor disposed on the autonomous vehicle;
● Interpreting a type of road surface occupied by the autonomous vehicle at the first time based on a set of features extracted from the second image;
● Predicting a quality of the road surface based on the set of features;
● Estimating a coefficient of friction for a tire of the autonomous vehicle acting on the road surface based on the type of the road surface and the quality of the road surface; and
● Estimating the stop duration based on:
a vehicle speed of the autonomous vehicle at the first time;
the coefficient of friction; and
a braking model for the autonomous vehicle.
20. A method for autonomous navigation of an autonomous vehicle, comprising:
● Accessing a set of predefined motion limit hypotheses for a generic object approaching a public road;
● Accessing a scan image containing data captured by a sensor on the autonomous vehicle at a first time;
● Identifying a set of points in the scan image, the set of points representing an object in a field proximate the autonomous vehicle, each point in the set of points comprising:
the position of a surface on the object relative to the autonomous vehicle;
a radial velocity of the surface of the object relative to the autonomous vehicle;
● Calculating a correlation between radial velocity and position of points in the set of points;
● Based on the correlation, computing a function that relates a possible tangential velocity of the object at the first time to a possible angular velocity of the object;
● Calculating a radial velocity of the object at the first time based on radial velocities of points in the set of points;
● Calculating a future state boundary representing a ground area accessible to the object at a future time based on:
a possible tangential velocity of the object and a possible angular velocity of the object at the first time defined by the function;
the radial velocity of the object; and
the set of predefined motion limit assumptions; and
● Selecting a navigation action to avoid the future state boundary before the future critical time.
21. A method for autonomous navigation of an autonomous vehicle, comprising:
● For a first scanning period:
accessing a first scan image containing data captured by a sensor on the autonomous vehicle at a first time;
Identifying a first set of points in the first scan image, the first set of points representing a first object in a field in proximity of the autonomous vehicle, each point in the first set of points comprising:
■ A first range value from the sensor to a surface on the first object;
■ A first azimuthal position of the surface on the first object relative to the sensor; and
■ A first radial velocity of the surface of the first object relative to the sensor;
calculating a first correlation between a first radial velocity and a first azimuthal position of points in the first set of points; and
calculating a first function relating a possible tangential velocity of the first object at the first time to a possible angular velocity of the first object based on the first correlation;
● For a second scan period:
accessing a second scan image containing data captured by the sensor at a second time;
identifying a second set of points in the second scan image that represent the first object in the field;
calculating a second correlation between a second radial velocity and a second azimuthal position of points in the second set of points; and
Calculating a second function relating a possible tangential velocity of the first object at the second time to a possible angular velocity of the first object based on the second correlation;
● Estimating a second tangential velocity of the first object and a second angular velocity of the first object relative to the autonomous vehicle at the second time based on an intersection of the first function and the second function; and
● Selecting a navigation action based on the second tangential velocity of the first object and the second angular velocity of the first object.
22. The method of claim 21:
● Further comprising:
calculating a second radial velocity of the first object relative to the autonomous vehicle at the second time based on a second measure of central tendency of second radial velocities of points in the second set of points; and
characterizing a total velocity of the first object relative to the autonomous vehicle at the second time based on the second tangential velocity of the first object, the second angular velocity of the first object, and the second radial velocity of the first object; and
● Wherein selecting the navigational action comprises selecting the navigational action to avoid the first object based on the total velocity of the first object at the second time.
23. The method of claim 21, wherein calculating the first correlation comprises:
● Calculating a first linear trend line of a first radial velocity and a first azimuthal position through points of the first set of points; and
● Calculating the first correlation based on a first slope of the first linear trend line, the first slope representing a relationship between a first tangential velocity of the first object and a first angular velocity of the first object at the first time.
24. The method of claim 23:
● Further comprising calculating a first radius of the first object relative to the autonomous vehicle at the first time based on a range of first azimuthal positions of points in the first set of points;
● Wherein calculating the first slope of the first linear trend line comprises calculating the first slope representing a first difference of:
the first tangential velocity of the first object at the first time; and
The product of the first radius of the first object at the first time and the first angular velocity of the first object at the first time; and
● Wherein calculating the first slope of the first linear trend line comprises calculating the first slope representing a first product of:
the first radius of the first object at the first time; and
a difference between the first tangential velocity of the first object at the first time and the first angular velocity of the first object at the first time; and
● Wherein calculating the first function comprises calculating a first linear function that relates a possible tangential velocity of the first object at the first time to a possible angular velocity of the first object relative to the autonomous vehicle at the first time based on the first slope and the first radius at the first time.
25. The method of claim 23:
● Wherein calculating the second correlation comprises:
calculating a second linear trend line of a second radial velocity and a second azimuthal position through points of the second set of points; and
Calculating the second correlation based on a second slope of the second linear trend line, the second slope representing a relationship between a second tangential velocity of the first object and a second angular velocity of the first object at the second time; and
● Further comprising:
characterizing a first error of the first linear trend line based on a deviation of a first radial velocity of a point in the first set of points from the first linear trend line; and
characterizing a second error of the second linear trend line based on a deviation of a second radial velocity of points in the second set of points from the second linear trend line;
● Wherein calculating the first function comprises:
calculating a first line relating a possible tangential velocity of the first object relative to the autonomous vehicle at the first time to a possible angular velocity of the first object based on the first correlation; and
calculating a first width of the first line based on the first error; and
● Wherein computing the second function comprises:
calculating a second line relating a possible tangential velocity of the first object relative to the autonomous vehicle at the second time to a possible angular velocity of the first object based on the second correlation; and
calculating a second width of the second line based on the second error;
● Wherein estimating the second tangential velocity of the first object and the second angular velocity of the first object at the second time comprises estimating a second range of tangential velocities of the first object and a second range of angular velocities of the first object relative to the autonomous vehicle at the second time based on an intersection of the first line of the first width and the second line of the second width; and
● Wherein selecting the navigation action comprises selecting the navigation action to avoid the first object based on a second range of the tangential velocity of the first object and a second range of the angular velocity of the first object at the second time.
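Claim 25 narrows the motion estimate by intersecting two error-widened lines in the (tangential velocity, angular velocity) plane. A sketch of that intersection on a sampled grid, with hypothetical slopes, radii, and widths (all values illustrative):

```python
import numpy as np

def banded_line_mask(v_tan, omega, slope, radius, width):
    """Samples of the (V_tan, omega) plane within +/- width of the line
    V_tan - radius * omega = slope."""
    return np.abs(v_tan - radius * omega - slope) <= width

# Grid of candidate tangential velocities (m/s) and angular velocities (rad/s).
v_tan, omega = np.meshgrid(np.linspace(-15, 15, 601), np.linspace(-2, 2, 401))

# Hypothetical first and second functions with error-derived widths.
first_scan  = banded_line_mask(v_tan, omega, slope=3.2, radius=12.0, width=0.4)
second_scan = banded_line_mask(v_tan, omega, slope=2.6, radius=11.5, width=0.5)

feasible = first_scan & second_scan           # intersection of the two bands
print("V_tan range:", (v_tan[feasible].min(), v_tan[feasible].max()))
print("omega range:", (omega[feasible].min(), omega[feasible].max()))
```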
26. The method of claim 23:
● Further comprising:
accessing a set of predefined motion limit assumptions for a general object approaching a public road; and
characterizing a first error of the first function based on an integration of the set of predefined motion limit assumptions over a time difference between the first time and the second time;
● Wherein computing the first function comprises:
calculating a first line relating a possible tangential velocity of the first object relative to the autonomous vehicle at the first time to a possible angular velocity of the first object based on the first correlation; and
calculating a first width of the first line based on the first error; and
● Wherein computing the second function comprises:
calculating a second line relating a possible tangential velocity of the first object relative to the autonomous vehicle at the second time to a possible angular velocity of the first object based on the second correlation;
● Wherein estimating the second tangential velocity of the first object and the second angular velocity of the first object at the second time comprises estimating a second range of tangential velocities of the first object and a second range of angular velocities of the first object relative to the autonomous vehicle at the second time based on an intersection of the first line of the first width and the second line; and
● Wherein selecting the navigation action comprises selecting the navigation action to avoid the first object based on a second range of the tangential velocity of the first object and a second range of the angular velocity of the first object at the second time.
27. The method of claim 21:
● Wherein calculating the first function comprises calculating the first function relating a possible tangential velocity of the first object, lying in a horizontal plane substantially parallel to a road surface at the first time, to a possible angular velocity of the first object;
● Wherein calculating the second function comprises calculating the second function relating a possible tangential velocity of the first object, lying in the horizontal plane substantially parallel to the road surface at the second time, to a possible angular velocity of the first object; and
● Wherein estimating the second tangential velocity of the first object and the second angular velocity of the first object at the second time comprises estimating the second tangential velocity of the first object and the second angular velocity of the first object relative to the autonomous vehicle at the second time based on an intersection of the first function and the second function in a state space of three degrees of freedom.
28. The method of claim 21:
● Further comprising:
accessing a maximum object speed assumption for a general object approaching a public road;
Calculating a second radial velocity of the first object relative to the autonomous vehicle at the second time based on a second radial velocity of a point in the second group of points;
integrating the second radial velocity of the first object, the second tangential velocity of the first object, and the second angular velocity of the first object over a target duration to calculate a future state boundary of the first object; and
● Wherein selecting the navigation action comprises selecting the navigation action to avoid future entry into the future state boundary of the first object.
29. The method of claim 28:
● Further comprising:
estimating a stop duration for the autonomous vehicle to reach a full stop based on the speed of the autonomous vehicle at the first time; and
calculating the target duration based on the stop duration; and
● Wherein selecting the navigational action to avoid future entry into the future state boundary of the first object comprises performing a braking action that decelerates the autonomous vehicle in response to the position of the autonomous vehicle at the second time falling within a threshold distance of the future state boundary of the first object.
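Claims 28 and 29 tie the avoidance decision to a future state boundary integrated over a target duration derived from the vehicle's stopping time. A simplified sketch that collapses the boundary to a circle of reachable distance (the braking-deceleration, reaction-time, and maximum-object-speed values are illustrative assumptions):

```python
import math

def stop_duration_s(ego_speed_mps, max_braking_mps2=6.0, reaction_s=0.3):
    """Estimated time for the autonomous vehicle to reach a full stop."""
    return reaction_s + ego_speed_mps / max_braking_mps2

def future_state_radius_m(v_radial, v_tangential, target_s, max_object_speed=30.0):
    """Distance the object can cover over the target duration at the lesser of
    its current total speed and a maximum-object-speed assumption."""
    total_speed = min(math.hypot(v_radial, v_tangential), max_object_speed)
    return total_speed * target_s

def braking_required(dist_to_object_m, v_radial, v_tangential, ego_speed_mps,
                     threshold_m=2.0):
    """Brake when the ego position falls within a threshold distance of the
    object's future state boundary."""
    target_s = stop_duration_s(ego_speed_mps)
    return dist_to_object_m - future_state_radius_m(v_radial, v_tangential,
                                                    target_s) <= threshold_m

print(braking_required(dist_to_object_m=35.0, v_radial=-8.0, v_tangential=2.0,
                       ego_speed_mps=15.0))
```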
30. The method of claim 21, further comprising:
● For a third scan cycle:
accessing a third scan image containing data captured by the sensor at a third time subsequent to the second time;
identifying a third set of points in the third scan image that represent the first object in the field;
identifying a fourth point group in the third scan image, the fourth point group representing a second object in the field, the second object being separated from the first object from the second time to the third time;
calculating a third correlation between a third radial velocity and a third azimuthal position of points in the third set of points;
calculating a fourth correlation between a fourth radial velocity and a fourth azimuthal position of points in the fourth group of points;
calculating a third function relating a possible tangential velocity of the first object at the third time to a possible angular velocity of the first object based on the third correlation; and
calculating a fourth function relating a possible tangential velocity of the second object at the third time to a possible angular velocity of the second object based on the fourth correlation;
● Estimating a third tangential velocity of the first object and a third angular velocity of the first object relative to the autonomous vehicle at the third time based on an intersection of the second function and the third function; and
● Estimating a fourth tangential velocity of the second object and a fourth angular velocity of the second object relative to the autonomous vehicle at the third time based on an intersection of the second function and the fourth function; and
● Selecting a second navigation action to avoid the first object and the second object based on the third tangential velocity of the first object, the third angular velocity of the first object, the fourth tangential velocity of the second object, and the fourth angular velocity of the second object.
31. The method of claim 21, wherein accessing the first scan image comprises accessing the first scan image containing data captured by the sensor, the sensor comprising a four-dimensional light detection and ranging sensor that:
● Is mounted on the autonomous vehicle; and
● Configured to generate a scanned image representative of the position and velocity of a surface located within the field relative to the sensor.
32. The method of claim 21:
● Further comprising, during a third scan cycle, generating a third function based on a third correlation relating a possible tangential velocity of the first object represented in a third set of points to a possible angular velocity of the first object, the third set of points being detected in a third image containing data captured by the sensor at a third time subsequent to the first time and the second time;
● Wherein calculating the first correlation comprises calculating a first best fit plane through points of the first set of points for a first radial velocity, a first azimuthal position, and a first elevation position, the first best fit plane representing a relationship between a first tangential velocity of the first object, a first angular velocity of the first object, and a first pitch velocity of the first object at the first time;
● Wherein calculating the first function comprises calculating the first function based on the first best-fit plane;
● Wherein calculating the second correlation comprises calculating a second best fit plane through points in the second set of points for a second radial velocity, a second azimuth position, and a second elevation position, the second best fit plane representing a relationship between a second tangential velocity of the first object, a second angular velocity of the first object, and a second pitch velocity of the first object at the second time;
● Wherein calculating the second function comprises calculating the second function based on the second best-fit plane;
● Wherein estimating the second tangential velocity of the first object and the second angular velocity of the first object at the second time comprises calculating the second tangential velocity of the first object, the second angular velocity of the first object, and the second pitch velocity of the first object at the second time based on an intersection of the first function, the second function, and the third function.
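Claim 32's best-fit plane generalizes the trend line to two angular coordinates: radial velocity is regressed against azimuthal and elevation position, so the fit also constrains a pitch velocity. A minimal least-squares sketch with synthetic samples (all numbers illustrative):

```python
import numpy as np

def fit_velocity_plane(azimuth_rad, elevation_rad, radial_velocity_mps):
    """Least-squares plane  v_r ~ c0 + c_az*azimuth + c_el*elevation  through
    one point group; c_az and c_el play the role of the best-fit-plane slopes
    constraining tangential, angular, and pitch velocity."""
    design = np.column_stack([np.ones_like(azimuth_rad), azimuth_rad, elevation_rad])
    coeffs, *_ = np.linalg.lstsq(design, radial_velocity_mps, rcond=None)
    return coeffs                      # (c0, c_az, c_el)

rng = np.random.default_rng(0)
az = rng.uniform(0.50, 0.60, 60)       # rad
el = rng.uniform(-0.05, 0.05, 60)      # rad
vr = 4.0 + 1.5 * (az - 0.55) - 0.8 * el + rng.normal(0.0, 0.02, 60)   # m/s

print(np.round(fit_velocity_plane(az, el, vr), 3))
```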
33. A method for autonomous navigation of an autonomous vehicle, comprising:
● For a first scanning period:
accessing a first scan image containing data captured by a sensor on the autonomous vehicle at a first time;
identifying a first set of points in the first scan image, the first set of points representing a first object in a field in proximity of the autonomous vehicle, each point in the first set of points comprising:
■ A first range value from the sensor to a surface on the first object;
■ A first position of the surface on the first object relative to the autonomous vehicle; and
■ A first radial velocity of the surface of the first object relative to the autonomous vehicle;
calculating a first correlation between a first radial velocity and a first position of a point in the first group of points; and
calculating a first function relating a possible linear motion of the first object at the first time to a possible angular motion of the first object based on the first correlation;
● For a second scan period:
accessing a second scan image containing data captured by the sensor at a second time;
identifying a second set of points representing the first object in the second scan image;
calculating a second correlation between a second radial velocity and a second position of a point in the second group of points; and
calculating a second function relating a possible linear motion of the first object at the second time to a possible angular motion of the first object based on the second correlation;
● Estimating linear motion of the first object and angular motion of the first object relative to the autonomous vehicle at the second time based on an intersection of the first function and the second function; and
● Selecting a navigation action based on the linear motion of the first object and the angular motion of the first object at the second time.
34. The method of claim 33:
● Wherein calculating the first correlation comprises:
calculating a first linear trend line of a first radial velocity and a first azimuthal position through points of the first group of points projected onto a plane substantially parallel to the road surface; and
calculating the first correlation based on a first slope of the first linear trend line, the first slope representing a relationship between a first tangential velocity of the first object and a first angular velocity of the first object at the first time;
● Wherein calculating the second correlation comprises:
calculating a second linear trend line through a second radial velocity and a second azimuthal position of points in the second set of points projected onto the plane; and
calculating the second correlation based on a second slope of the second linear trend line, the second slope representing a relationship between a second tangential velocity of the first object and a second angular velocity of the first object at the second time; and
● Wherein estimating the linear motion of the first object and the angular motion of the first object at the second time comprises estimating a second tangential velocity of the first object and a second angular velocity of the first object relative to the autonomous vehicle at the second time based on an intersection of the first function and the second function.
35. The method of claim 33:
● Wherein calculating the first correlation comprises calculating a first best fit plane through a first radial velocity and a first position of a point in the first set of points, the first best fit plane representing a relationship between a first tangential velocity of the first object relative to the autonomous vehicle at the first time, a first angular velocity of the first object, and a first pitch velocity of the first object;
● Wherein calculating the first function comprises calculating the first function based on the first best-fit plane;
● Wherein calculating the second correlation comprises calculating a second best fit plane through a second radial velocity and a second location of points in the second set of points, the second best fit plane representing a relationship between a second tangential velocity of the first object, a second angular velocity of the first object, and a second pitch velocity of the first object relative to the autonomous vehicle at the second time;
● Wherein calculating the second function comprises calculating the second function based on the second best-fit plane.
36. The method of claim 35:
● Further comprising, during a third scan cycle, generating a third function based on a third correlation relating a possible tangential velocity of the first object represented in a third set of points to a possible angular velocity of the first object, the third set of points being detected in a third image containing data captured by the sensor at a third time subsequent to the first time and the second time; and
● Wherein estimating linear motion of the first object and angular motion of the first object relative to the autonomous vehicle at the second time comprises estimating linear motion of the first object and angular motion of the first object relative to the autonomous vehicle at the second time based on an intersection of the first function, the second function, and the third function.
37. The method of claim 35, further comprising:
● For the first scanning period:
identifying a third set of points in the first scan image that is proximate to the first set of points;
Calculating a third best-fit plane through a third radial velocity and a third position of a point in the third set of points, the third best-fit plane representing a relationship between a third tangential velocity of a second object relative to the autonomous vehicle at the first time, a third angular velocity of the second object, and a third pitch velocity of the second object; and
calculating a third function based on the third best-fit plane;
● For the second scan period:
identifying a fourth set of points in the second scan image that are proximate to the second set of points;
calculating a fourth best-fit plane through fourth radial velocities and fourth locations of points in the fourth point group, the fourth best-fit plane representing a relationship between a fourth tangential velocity of the second object relative to the autonomous vehicle at the second time, a fourth angular velocity of the second object, and a fourth pitch velocity of the second object; and
calculating a fourth function based on the fourth best-fit plane;
● Estimating a second linear motion of the second object and a second angular motion of the second object relative to the autonomous vehicle at the second time based on an intersection of the third function and the fourth function; and
● Identifying the first object and the second object as corresponding to a common rigid body in response to alignment between the linear motion of the first object and the second linear motion of the second object.
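The rigid-body grouping test in claim 37 can be approximated by checking whether the two objects' estimated linear velocity vectors agree in magnitude and direction; a small sketch with illustrative tolerances:

```python
import numpy as np

def same_rigid_body(v_a, v_b, speed_tol_mps=0.5, angle_tol_rad=0.1):
    """True when two neighboring objects' estimated linear velocity vectors
    align closely enough to treat them as one rigid body."""
    v_a, v_b = np.asarray(v_a, dtype=float), np.asarray(v_b, dtype=float)
    na, nb = np.linalg.norm(v_a), np.linalg.norm(v_b)
    if na < 1e-6 or nb < 1e-6:
        return abs(na - nb) <= speed_tol_mps
    cos_angle = np.clip(np.dot(v_a, v_b) / (na * nb), -1.0, 1.0)
    return abs(na - nb) <= speed_tol_mps and np.arccos(cos_angle) <= angle_tol_rad

print(same_rigid_body([4.9, 0.1], [5.0, 0.0]))   # True: consistent with one common rigid body
```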
38. The method of claim 37:
● Wherein estimating the second angular motion of the second object at the second time comprises estimating the second angular velocity and the second pitch velocity of the second object relative to the autonomous vehicle at the second time based on an intersection of the third function and the fourth function; and
● Further comprising:
calculating a second radial velocity of the second object relative to the autonomous vehicle at the second time based on a second measure of central tendency of a fourth radial velocity of points in the fourth set of points;
calculating a total absolute velocity of the second object at the second time based on the second radial velocity, the second tangential velocity, the second angular velocity, and the second pitch velocity of the second object at the second time;
calculating a set of fourth velocity components of the fourth radial velocity of the points in the fourth point group in the direction of the total absolute velocity of the second object at the second time;
Identifying the second object as a wheel based on the maximum speed of the set of fourth speed components being approximately twice the total absolute speed of the second object; and
marking the common rigid body as a wheeled vehicle in response to identifying the second object as the wheel.
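Claim 38's wheel test rests on rolling kinematics: the top of a rolling wheel moves at roughly twice the hub speed, so the largest per-point speed component along the object's total-velocity direction approaches twice that total speed. A sketch with a hypothetical per-point velocity array (the claim works from radial-velocity components; full 2-D vectors are used here only to keep the example short):

```python
import numpy as np

def looks_like_wheel(point_velocities, total_velocity, tolerance=0.25):
    """True when the largest per-point speed component along the object's
    total-velocity direction is approximately twice the total speed, the
    signature of a rolling wheel."""
    total_velocity = np.asarray(total_velocity, dtype=float)
    speed = np.linalg.norm(total_velocity)
    if speed < 1e-6:
        return False
    components = np.asarray(point_velocities, dtype=float) @ (total_velocity / speed)
    return abs(components.max() - 2.0 * speed) <= tolerance * speed

# Hypothetical wheel translating at 5 m/s: surface point speeds span 0..10 m/s.
hub_velocity = [5.0, 0.0]
point_velocities = [[0.2, 0.0], [5.0, 0.1], [9.9, 0.0], [4.8, -0.1]]
print(looks_like_wheel(point_velocities, hub_velocity))   # True for this example
```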
39. A method for autonomous navigation of an autonomous vehicle, comprising:
● For each scan cycle in the sequence of scan cycles at the autonomous vehicle:
accessing a scan image containing data captured by sensors on the autonomous vehicle at a scan time;
identifying a set of points in the scan image, the set of points representing a first object in a field in a vicinity of the autonomous vehicle, each point in the set of points comprising:
■ A position of a surface on the first object relative to the autonomous vehicle; and
■ A radial velocity of the surface of the first object relative to the autonomous vehicle; and
computing a function relating possible linear motion of the first object and possible angular motion of the first object at the scan time based on a correlation between radial velocity and position of points in the set of points;
● Estimating a current linear motion of the first object and a current angular motion of the first object relative to the autonomous vehicle at a current time based on an intersection of a current function derived from a first scan image containing data captured at the current time and a previous function derived from a second scan image containing data captured prior to the current time; and
● Selecting a navigation action based on the current linear motion of the first object and the current angular motion of the first object.
40. The method of claim 39:
● Further comprising:
estimating a stop duration for the autonomous vehicle to reach a full stop based on the speed of the autonomous vehicle at the first time;
calculating a current absolute linear motion of the first object and a current absolute angular motion of the first object at a current time based on the current linear motion of the first object, the current angular motion of the first object, and the motion of the autonomous vehicle at the current time;
accessing a maximum object acceleration assumption for a general object approaching a public road;
Calculating a range of possible absolute velocities of the first object relative to the autonomous vehicle at the first time based on the motion of the autonomous vehicle at the first time, a first range of tangential velocity and radial velocity pairs of the first object at the first time, and a first radial velocity of the first object at the first time; and
integrating the current absolute linear motion of the first object with the current absolute angular motion of the first object during the stopping duration with acceleration according to the maximum object acceleration assumption to calculate a ground area that the first object may enter from the first time to a first critical time; and
● Wherein selecting the navigation action comprises selecting a first navigation action to avoid entering into the ground area before the first critical time.
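Claim 40 bounds where the object could be by the time the vehicle can stop, by integrating its current absolute motion with a maximum-acceleration assumption. A circular-area simplification (the acceleration and braking limits are illustrative assumptions):

```python
import math

def reachable_radius_m(abs_speed_mps, duration_s, max_accel_mps2=8.0):
    """Distance the object can cover before the critical time when it
    accelerates at the maximum-object-acceleration assumption."""
    return abs_speed_mps * duration_s + 0.5 * max_accel_mps2 * duration_s ** 2

def enters_ground_area(dist_to_object_m, object_abs_speed_mps, ego_speed_mps,
                       max_braking_mps2=6.0):
    """True if the candidate ego position is already inside the ground area
    the object may enter during the ego vehicle's stopping time."""
    t_stop = ego_speed_mps / max_braking_mps2
    return dist_to_object_m <= reachable_radius_m(object_abs_speed_mps, t_stop)

print(enters_ground_area(dist_to_object_m=40.0, object_abs_speed_mps=10.0,
                         ego_speed_mps=20.0))
```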
41. A method for autonomous navigation of an autonomous vehicle, comprising:
● For a first scanning period:
accessing a first scan image containing data captured by a sensor on the autonomous vehicle at a first time;
Identifying a first set of points in the first scan image, the first set of points representing a first object in a field in proximity of the autonomous vehicle, each point in the first set of points comprising:
■ A first position of a surface on the first object relative to the autonomous vehicle; and
■ A first radial velocity of the surface of the first object relative to the sensor;
calculating a first radial velocity of the first object relative to the autonomous vehicle at the first time based on a first measure of central tendency of first radial velocities of points in the first point group; and
characterizing a direction of a first uncertainty of a motion of the first object at the first time in a first tangential direction perpendicular to the first radial velocity of the first object;
● Calculating a direction of a predicted second uncertainty of a motion of the first object at a second time subsequent to the first time based on the motion of the autonomous vehicle at the first time; and
● In response to the direction of the second uncertainty being different from the direction of the first uncertainty, blanking the first object at the second time without incorporating braking considerations for avoidance of the object by the autonomous vehicle.
42. The method of claim 41:
● Further comprising accessing a set of predefined motion limit hypotheses for a common object approaching the public road; and
● Wherein calculating the direction of the predicted second uncertainty of the motion of the first object at the second time comprises:
calculating a maximum tangential velocity of the first object towards the autonomous vehicle, the maximum tangential velocity coinciding with the set of predefined motion limit assumptions and a first radial velocity of a point of the first group of points; and
calculating a direction of a predicted second uncertainty of the motion of the first object at the second time based on the motion of the autonomous vehicle at the first time and the maximum tangential velocity of the first object at the first time.
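Claim 42 caps the object's unobserved tangential motion with the motion-limit assumptions and then predicts where the zero-information (tangential) direction will point at the next scan. A simplified sketch that derives that direction from a predicted relative position (the speed cap and positions are illustrative, and the claim's full treatment of the motion-limit set is not reproduced):

```python
import math

def max_tangential_speed_mps(radial_speed_mps, max_object_speed_mps=30.0):
    """Largest tangential speed consistent with a maximum-object-speed
    assumption and the measured radial speed."""
    return math.sqrt(max(max_object_speed_mps ** 2 - radial_speed_mps ** 2, 0.0))

def predicted_uncertainty_direction(object_xy, ego_xy_next):
    """Unit vector tangential to the predicted line of sight at the next scan:
    the direction in which radial-velocity measurements carry no information."""
    dx, dy = object_xy[0] - ego_xy_next[0], object_xy[1] - ego_xy_next[1]
    r = math.hypot(dx, dy)
    return (-dy / r, dx / r)

print(max_tangential_speed_mps(8.0))                          # ~28.9 m/s
print(predicted_uncertainty_direction((30.0, 10.0), (5.0, 0.0)))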
43. The method of claim 41:
● Further comprising:
calculating a critical time offset from the first time;
deriving a first position of the first object based on the first scan image;
calculating a first correlation between a first radial velocity and a first position of a point in the first set of points; and
computing a first function relating a possible tangential velocity of the first object at the first time to a possible angular velocity of the first object based on the first correlation; and
Calculating a first future state boundary representing a first ground area accessible by the first object from the first time to the critical time based on:
■ The first location of the first object at the first time;
■ A first radial velocity of the first object;
■ A possible tangential velocity of the first object and a possible angular velocity of the first object at the first time defined by the first function; and
■ A set of predefined motion limit hypotheses for a common object approaching a public road; and
● Wherein blanking the first object at the second time without incorporating braking considerations for avoidance of the object by the autonomous vehicle comprises blanking the first object at the second time without incorporating braking considerations for avoidance of the object by the autonomous vehicle further in response to the position of the autonomous vehicle at the first time falling outside the first future state boundary by more than a threshold distance.
44. The method of claim 43, wherein calculating the critical time comprises:
● Estimating a stop duration for the autonomous vehicle to reach a full stop based on the speed of the autonomous vehicle at the first time; and
● Calculating the critical time offset from the first time by the stop duration.
45. The method of claim 41, further comprising:
● For a first scanning period:
identifying a second set of points in the first scan image, the second set of points representing a second object in the field, each point in the second set of points comprising:
■ A second position of a surface on the second object relative to the autonomous vehicle; and
■ A second radial velocity of the surface of the second object relative to the sensor;
calculating a second radial velocity of the second object relative to the autonomous vehicle at the first time based on a second measure of central tendency of second radial velocities of points in the second group of points; and
characterizing a direction of a third uncertainty of a motion of the second object at the first time in a second tangential direction perpendicular to the second radial velocity of the second object;
● Calculating a direction of a predicted fourth uncertainty of the motion of the second object at the second time based on the motion of the autonomous vehicle at the first time; and
● In response to a difference between the direction of the predicted fourth uncertainty and the direction of the third uncertainty being less than a threshold difference, selecting a navigational action to modify a motion of the autonomous vehicle relative to the second object at the second time.
46. The method of claim 45, wherein selecting the navigation action comprises identifying the navigation action that positions the autonomous vehicle at an alternate location relative to the second object at the second time to produce a direction of an alternate fourth uncertainty in the motion of the second object that differs from the direction of the first uncertainty by more than the threshold difference, the navigation action selected from a group of navigation actions consisting of: a braking input, an acceleration input, and a steering input.
47. The method of claim 45, wherein selecting the navigation action comprises selecting the navigation action further in response to the direction of the third uncertainty intersecting the first trajectory of the autonomous vehicle at the first time within a threshold distance ahead of a location of the autonomous vehicle at the first time.
48. A method for autonomous navigation of an autonomous vehicle, comprising:
● For a first scan cycle at the autonomous vehicle:
accessing a first scan image, the first scan image containing data captured by a sensor on the autonomous vehicle at a first time;
identifying a first set of points in the first scan image, the first set of points representing a first object in a field near the autonomous vehicle; and
characterizing a first motion of the first object at the first time based on the first set of points;
● For a second scan cycle at the autonomous vehicle:
accessing a second scan image, the second scan image comprising data captured by the sensor at a second time subsequent to the first time;
identifying a second set of points representing the first object in the second scan image; and
characterizing a second motion of the first object at the second time based on the second set of points and the first motion;
● Characterizing a second uncertainty of the second motion of the first object at the second time;
● Calculating a predicted third uncertainty of a third motion of the first object at a third time subsequent to the second time based on the second motion of the first object at the second time and a motion of the autonomous vehicle at the second time; and
● In response to the predicted third uncertainty being lower than the second uncertainty, blanking the first object at the second time without incorporating braking considerations for autonomous vehicle avoidance of the object.
49. The method of claim 48:
● Wherein characterizing the first motion of the first object at the first time comprises:
calculating a first correlation between a first radial velocity and a first azimuthal position of points in the first group of points;
calculating a first function relating a possible tangential velocity of the first object at the first time to a possible angular velocity of the first object based on the first correlation;
● Wherein characterizing the second motion of the first object at the second time comprises:
calculating a second correlation between a second radial velocity and a second azimuthal position of points in the second set of points;
calculating a second function relating a possible tangential velocity of the first object at the second time to a possible angular velocity of the first object based on the second correlation; and
Estimating a second range of tangential velocity of the first object and a second range of angular velocity of the first object relative to the autonomous vehicle at the second time based on an intersection of the first function and the second function; and
● Wherein characterizing the second uncertainty of the second motion of the first object at the second time comprises characterizing the second uncertainty of the second motion of the first object at the second time as being proportional to a ratio of an intersection of the first function and the second function to a union of the first function and the second function.
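Claim 49 scores the motion uncertainty as proportional to the intersection-over-union of the two feasible sets. Reusing the banded-line grid construction from the sketch after claim 25, with the proportionality constant taken as 1 and all numeric values illustrative:

```python
import numpy as np

def banded_line_mask(v_tan, omega, slope, radius, width):
    """Samples of the (V_tan, omega) plane within +/- width of the line
    V_tan - radius * omega = slope."""
    return np.abs(v_tan - radius * omega - slope) <= width

def motion_uncertainty(mask_a, mask_b):
    """Uncertainty proportional to the intersection-over-union of two
    feasible sets (proportionality constant taken as 1)."""
    union = np.logical_or(mask_a, mask_b).sum()
    return np.logical_and(mask_a, mask_b).sum() / union if union else 0.0

v_tan, omega = np.meshgrid(np.linspace(-15, 15, 601), np.linspace(-2, 2, 401))
first_scan  = banded_line_mask(v_tan, omega, slope=3.2, radius=12.0, width=0.4)
second_scan = banded_line_mask(v_tan, omega, slope=2.6, radius=11.5, width=0.5)
print(f"uncertainty ~ {motion_uncertainty(first_scan, second_scan):.4f}")
```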
50. The method of claim 49, wherein calculating the first correlation comprises:
● Calculating a first linear trend line of a first radial velocity and a first azimuthal position through points of the first group of points; and
● Calculating the first correlation based on a first slope of the first linear trend line, the first slope representing a relationship between a first tangential velocity of the first object and a first angular velocity of the first object at the first time.
51. The method of claim 50:
● Wherein calculating the second correlation comprises:
calculating a second linear trend line for a second radial velocity and a second azimuthal position through points in the second set of points;
calculating the second correlation based on a second slope of the second linear trend line, the second slope representing a relationship between a second tangential velocity of the first object and a second angular velocity of the first object at the second time; and
● Further comprising:
characterizing a first error of the first linear trend line based on a deviation of a first radial velocity of a point of the first group of points from the first linear trend line;
characterizing a second error of the second linear trend line based on a deviation of a second radial velocity of a point of the second group of points from the second linear trend line;
● Wherein calculating the first function comprises:
calculating a first line relating a possible tangential velocity of the first object relative to the autonomous vehicle at the first time to a possible angular velocity of the first object based on the first correlation; and
calculating a first width of the first line based on the first error; and
● Wherein calculating the second function comprises:
calculating a second line relating a possible tangential velocity of the first object relative to the autonomous vehicle at the second time to a possible angular velocity of the first object based on the second correlation; and
calculating a second width of the second line based on the second error;
● Wherein characterizing the second uncertainty of the second motion of the first object at the second time comprises characterizing the second uncertainty of the second motion of the first object at the second time as being proportional to an area of intersection of the first line of the first width and the second line of the second width.
52. The method of claim 49, wherein calculating the predicted third uncertainty of the third motion of the first object at the third time comprises:
● Calculating a predicted third position of the first object relative to the autonomous vehicle at the third time based on the second motion of the first object at the second time and the motion of the autonomous vehicle at the second time;
● Calculating a direction of a predicted third uncertainty of a motion of the first object at the third time based on the predicted third position of the first object relative to the autonomous vehicle at the third time; and
● Calculating a predicted third uncertainty of the third motion of the first object at the third time based on an intersection of a direction of the third uncertainty and the second function.
53. The method of claim 52, wherein calculating the predicted third position of the first object relative to the autonomous vehicle at the third time comprises:
● Calculating a second radial velocity of the first object relative to the autonomous vehicle at the second time based on a first measure of central tendency of second radial velocities of points in the second group of points;
● Estimating a second tangential velocity of the first object relative to the autonomous vehicle at the second time based on a second measure of central tendency of a second range of the tangential velocities; and
● Estimating a second angular velocity of the first object relative to the autonomous vehicle at the second time based on a third measure of central tendency of the second range of angular velocities; and
● Calculating a predicted third position of the first object relative to the autonomous vehicle at the third time based on the second radial velocity of the first object, the second tangential velocity of the first object, the second angular velocity of the first object, and the motion of the autonomous vehicle at the second time.
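Claim 53 propagates the object's estimated radial and tangential velocities, together with the ego vehicle's own motion, to a predicted relative position at the next scan. A planar sketch (angular velocity only reorients the object, so it is left out of this position update; all inputs are hypothetical):

```python
import math

def predict_relative_position(range_m, azimuth_rad, v_radial_mps, v_tangential_mps,
                              ego_dx_m, ego_dy_m, dt_s):
    """Advance the object's position by its radial and tangential velocities
    over dt, then subtract the ego vehicle's displacement to obtain the
    predicted position relative to the ego at the next scan."""
    cos_a, sin_a = math.cos(azimuth_rad), math.sin(azimuth_rad)
    x = range_m * cos_a + (v_radial_mps * cos_a - v_tangential_mps * sin_a) * dt_s - ego_dx_m
    y = range_m * sin_a + (v_radial_mps * sin_a + v_tangential_mps * cos_a) * dt_s - ego_dy_m
    return x, y

print(predict_relative_position(range_m=30.0, azimuth_rad=0.3, v_radial_mps=-5.0,
                                v_tangential_mps=2.0, ego_dx_m=1.5, ego_dy_m=0.0,
                                dt_s=0.1))
```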
54. The method of claim 48:
● Further comprising:
calculating a critical time offset from the second time;
deriving a second position of the first object based on the second scan image; and
calculating a second future state boundary representing a second surface area accessible by the first object from the second time to the critical time based on:
■ The second location of the first object at the second time;
■ A second motion of the first object at the second time; and
■ A set of predefined motion limit hypotheses for a common object approaching a public road; and
● Wherein blanking the first object at the third time without including braking considerations for the autonomous vehicle to avoid the object comprises blanking the first object at the third time without including braking considerations for the autonomous vehicle to avoid the object further in response to the position of the autonomous vehicle at the second time falling outside the second future state boundary by more than a threshold distance.
55. The method of claim 54, wherein calculating the critical time comprises:
● Estimating a stop duration for the autonomous vehicle to reach a full stop based on the speed of the autonomous vehicle at the second time; and
● Calculating the critical time offset from the second time by the stop duration.
56. The method of claim 48, further comprising:
● For a first scanning period:
identifying a third set of points in the first scan image that represent a second object in the field;
characterizing a third motion of the second object at the first time based on the third set of points;
● For the second scan period:
identifying a fourth set of points in the second scan image that represent the second object in the field;
characterizing a fourth motion of the second object at the second time based on the fourth set of points and the third motion;
● A fourth uncertainty characterizing the fourth motion of the second object at the second time;
● Calculating a predicted fifth uncertainty of a fifth motion of the second object at the third time based on the fourth motion of the second object at the second time and a motion of the autonomous vehicle at the second time; and
● In response to a difference between the predicted fifth uncertainty and the fourth uncertainty being less than a threshold difference, selecting a navigational action to modify motion of the autonomous vehicle relative to the second object at the third time.
57. The method of claim 56, wherein selecting the navigational action comprises identifying the navigational action that positions the autonomous vehicle at an alternate location relative to the second object at the second time to reduce uncertainty of the motion of the second object at the third time, the navigational action selected from a group of navigational actions consisting of: a braking input, an acceleration input, and a steering input.
58. The method of claim 56, wherein selecting the navigational action includes selecting the navigational action further in response to the fourth motion of the second object at the second time intersecting a second trajectory of the autonomous vehicle at the second time.
59. A method for autonomous navigation of an autonomous vehicle, comprising:
● For a first scan cycle at the autonomous vehicle:
accessing a first scan image, the first scan image containing data captured by a sensor on the autonomous vehicle at a first time;
Identifying a first set of points in the first scan image, the first set of points representing a first object in a field near the autonomous vehicle; and
characterizing a first motion of the first object at the first time based on the first set of points;
● Characterizing a first uncertainty of the first motion of the first object at the first time;
● Calculating a predicted second uncertainty of a second motion of the first object at a second time subsequent to the first time based on the first motion of the first object at the first time and a motion of the autonomous vehicle at the first time; and
● In response to the predicted second uncertainty being lower than the first uncertainty, blanking the first object at the second time without incorporating braking considerations for autonomous vehicle avoidance of the object.
60. The method of claim 59, further comprising:
● For a second scan period subsequent to the first scan period:
accessing a second scan image, the second scan image containing data captured by the sensor at a second time;
identifying a second set of points representing the first object in the second scan image; and
Characterizing a second motion of the first object at the second time based on the second set of points and the first motion of the first object at the first time;
● Characterizing a second uncertainty of the second motion of the first object at the second time; and
● In response to the second uncertainty exceeding the predicted second uncertainty, selecting a navigational action to modify motion of the autonomous vehicle relative to the first object at a third time subsequent to the second time.
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202062980132P | 2020-02-21 | 2020-02-21 | |
US202062980131P | 2020-02-21 | 2020-02-21 | |
US62/980,132 | 2020-02-21 | ||
US62/980,131 | 2020-02-21 | ||
US202063064316P | 2020-08-11 | 2020-08-11 | |
US63/064,316 | 2020-08-11 | ||
PCT/US2021/019122 WO2021168452A2 (en) | 2020-02-21 | 2021-02-22 | Method for object avoidance during autonomous navigation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115461258A true CN115461258A (en) | 2022-12-09 |
CN115461258B CN115461258B (en) | 2023-09-05 |
Family
ID=77366637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202180030320.XA Active CN115461258B (en) | 2020-02-21 | 2021-02-22 | Method for object avoidance during autonomous navigation |
Country Status (9)
Country | Link |
---|---|
US (4) | US20210261158A1 (en) |
EP (1) | EP4096978A4 (en) |
JP (1) | JP7336604B2 (en) |
KR (1) | KR102503388B1 (en) |
CN (1) | CN115461258B (en) |
AU (1) | AU2021222055B2 (en) |
CA (1) | CA3168740C (en) |
MX (1) | MX2022010293A (en) |
WO (1) | WO2021168452A2 (en) |
Families Citing this family (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111656396B (en) * | 2018-02-02 | 2024-01-09 | 三菱电机株式会社 | Drop detection device, in-vehicle system, vehicle, and computer-readable recording medium |
DE102018211240A1 (en) * | 2018-07-07 | 2020-01-09 | Robert Bosch Gmbh | Method for classifying an object's relevance |
US11592832B2 (en) * | 2018-08-20 | 2023-02-28 | Uatc, Llc | Automatic robotically steered camera for targeted high performance perception and vehicle control |
US11360480B2 (en) | 2019-08-21 | 2022-06-14 | Zoox, Inc. | Collision zone detection for vehicles |
RU2745804C1 (en) * | 2019-11-06 | 2021-04-01 | Общество с ограниченной ответственностью "Яндекс Беспилотные Технологии" | Method and processor for control of movement of autonomous vehicle in the traffic line |
FR3103303B1 (en) * | 2019-11-14 | 2022-07-22 | Continental Automotive | Determination of a coefficient of friction for a vehicle on a road |
US11684005B2 (en) * | 2020-03-06 | 2023-06-27 | Deere & Company | Method and system for estimating surface roughness of ground for an off-road vehicle to control an implement |
US11718304B2 (en) | 2020-03-06 | 2023-08-08 | Deere & Company | Method and system for estimating surface roughness of ground for an off-road vehicle to control an implement |
JP7431623B2 (en) * | 2020-03-11 | 2024-02-15 | 株式会社Subaru | Vehicle exterior environment recognition device |
US11667171B2 (en) | 2020-03-12 | 2023-06-06 | Deere & Company | Method and system for estimating surface roughness of ground for an off-road vehicle to control steering |
US11678599B2 (en) | 2020-03-12 | 2023-06-20 | Deere & Company | Method and system for estimating surface roughness of ground for an off-road vehicle to control steering |
US11753016B2 (en) * | 2020-03-13 | 2023-09-12 | Deere & Company | Method and system for estimating surface roughness of ground for an off-road vehicle to control ground speed |
US11685381B2 (en) | 2020-03-13 | 2023-06-27 | Deere & Company | Method and system for estimating surface roughness of ground for an off-road vehicle to control ground speed |
KR102749919B1 (en) * | 2020-04-21 | 2025-01-03 | 주식회사 에이치엘클레무브 | Driver assistance apparatus |
FR3112215B1 (en) * | 2020-07-01 | 2023-03-24 | Renault Sas | System and method for detecting an obstacle in a vehicle environment |
US11433885B1 (en) * | 2020-08-20 | 2022-09-06 | Zoox, Inc. | Collision detection for vehicles |
KR20220027327A (en) * | 2020-08-26 | 2022-03-08 | 현대모비스 주식회사 | Method And Apparatus for Controlling Terrain Mode Using Road Condition Judgment Model Based on Deep Learning |
US12181878B2 (en) | 2020-10-22 | 2024-12-31 | Waymo Llc | Velocity estimation and object tracking for autonomous vehicle applications |
US11954924B2 (en) * | 2020-10-23 | 2024-04-09 | Shoppertrak Rct Llc | System and method for determining information about objects using multiple sensors |
US11841439B2 (en) | 2020-11-02 | 2023-12-12 | Waymo Llc | Point cloud segmentation using a coherent lidar for autonomous vehicle applications |
US12233905B2 (en) * | 2020-11-02 | 2025-02-25 | Waymo Llc | Classification of objects based on motion patterns for autonomous vehicle applications |
US12050267B2 (en) | 2020-11-09 | 2024-07-30 | Waymo Llc | Doppler-assisted object mapping for autonomous vehicle applications |
US11702102B2 (en) * | 2020-11-19 | 2023-07-18 | Waymo Llc | Filtering return points in a point cloud based on radial velocity measurement |
US11656629B1 (en) | 2020-12-08 | 2023-05-23 | Waymo Llc | Detection of particulate matter in autonomous vehicle applications |
US20220289237A1 (en) * | 2021-03-10 | 2022-09-15 | Gm Cruise Holdings Llc | Map-free generic obstacle detection for collision avoidance systems |
US12216474B1 (en) | 2021-05-04 | 2025-02-04 | Waymo Llc | Vibrometry-based behavior prediction for autonomous vehicle applications |
KR20230000655A (en) * | 2021-06-25 | 2023-01-03 | 현대자동차주식회사 | Vehicle and control method thereof |
US12049236B2 (en) * | 2021-07-29 | 2024-07-30 | Ford Global Technologies, Llc | Complementary control system detecting imminent collision of autonomous vehicle in fallback monitoring region |
US11904906B2 (en) * | 2021-08-05 | 2024-02-20 | Argo AI, LLC | Systems and methods for prediction of a jaywalker trajectory through an intersection |
US12252158B2 (en) * | 2021-08-12 | 2025-03-18 | Waymo Llc | Time gaps for autonomous vehicles |
DE102021210006B3 (en) * | 2021-09-10 | 2022-09-29 | Zf Friedrichshafen Ag | Method and control device for controlling a vehicle |
US12229672B2 (en) * | 2021-10-21 | 2025-02-18 | EMC IP Holding Company LLC | Detecting domain changes with domain classifiers in autonomous vehicles |
CN113997943A (en) * | 2021-10-28 | 2022-02-01 | 山东新一代信息产业技术研究院有限公司 | Automatic driving vehicle control method, equipment and medium based on semantic clustering |
KR20230071437A (en) * | 2021-11-16 | 2023-05-23 | 에스케이하이닉스 주식회사 | Device for autonomous driving |
US12148174B2 (en) * | 2021-11-19 | 2024-11-19 | Shenzhen Deeproute.Ai Co., Ltd | Method for forecasting motion trajectory, storage medium, and computer device |
KR102512793B1 (en) * | 2021-11-29 | 2023-03-22 | 한국기술교육대학교 산학협력단 | System for platooning |
US12030528B2 (en) * | 2021-12-03 | 2024-07-09 | Zoox, Inc. | Vehicle perception system with temporal tracker |
US20230230484A1 (en) * | 2022-01-18 | 2023-07-20 | The Regents Of The University Of California | Methods for spatio-temporal scene-graph embedding for autonomous vehicle applications |
US20230326049A1 (en) * | 2022-04-07 | 2023-10-12 | Toyota Research Institute, Inc. | Self-supervised monocular depth estimation via rigid-motion embeddings |
CN118251660A (en) * | 2022-04-29 | 2024-06-25 | 辉达公司 | Detecting hardware faults in a data processing pipeline |
US12269371B2 (en) | 2022-05-30 | 2025-04-08 | Toyota Connected North America, Inc. | In-cabin detection framework |
US11974055B1 (en) | 2022-10-17 | 2024-04-30 | Summer Robotics, Inc. | Perceiving scene features using event sensors and image sensors |
Family Cites Families (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3322500A1 (en) * | 1983-06-23 | 1987-03-19 | Krupp Gmbh | PROCEDURE FOR PASSIVE DETERMINATION OF TARGET DATA OF A VEHICLE |
DE19747446A1 (en) * | 1997-10-28 | 1999-04-29 | Cit Alcatel | Controlling bus-stop or tram-stop displays |
JP3427815B2 (en) * | 2000-03-30 | 2003-07-22 | 株式会社デンソー | Method and apparatus for selecting preceding vehicle, recording medium |
US6982668B1 (en) * | 2003-09-30 | 2006-01-03 | Sandia Corporation | Tangential velocity measurement using interferometric MTI radar |
US20060100771A1 (en) * | 2004-11-10 | 2006-05-11 | E-Lead Electronic Co., Ltd. | Vehicle speed detection apparatus |
JP4893118B2 (en) | 2006-06-13 | 2012-03-07 | 日産自動車株式会社 | Avoidance control device, vehicle including the avoidance control device, and avoidance control method |
WO2009049887A1 (en) | 2007-10-16 | 2009-04-23 | I F M Electronic Gmbh | Method and apparatus for determining distance |
US8126642B2 (en) * | 2008-10-24 | 2012-02-28 | Gray & Company, Inc. | Control and systems for autonomously driven vehicles |
JP2010235063A (en) | 2009-03-31 | 2010-10-21 | Equos Research Co Ltd | Vehicle control apparatus, vehicle, and vehicle control program |
EP2728563A4 (en) | 2011-06-13 | 2015-03-04 | Toyota Motor Co Ltd | DRIVER ASSISTING DEVICE AND DRIVER ASSISTING METHOD |
US9784829B2 (en) | 2015-04-06 | 2017-10-10 | GM Global Technology Operations LLC | Wheel detection and its application in object tracking and sensor registration |
JP6655342B2 (en) | 2015-10-15 | 2020-02-26 | 株式会社Soken | Collision determination system, collision determination terminal, and computer program |
WO2017064981A1 (en) * | 2015-10-15 | 2017-04-20 | 日立オートモティブシステムズ株式会社 | Vehicle control device |
EP3349033A1 (en) * | 2017-01-13 | 2018-07-18 | Autoliv Development AB | Enhanced object detection and motion estimation for a vehicle environment detection system |
JP6711312B2 (en) | 2017-05-12 | 2020-06-17 | 株式会社デンソー | Vehicle automatic driving control system |
JP6972744B2 (en) | 2017-08-01 | 2021-11-24 | トヨタ自動車株式会社 | Driving support device |
US11334070B2 (en) * | 2017-08-10 | 2022-05-17 | Patroness, LLC | Systems and methods for predictions of state of objects for a motorized mobile system |
JP6989766B2 (en) * | 2017-09-29 | 2022-01-12 | ミツミ電機株式会社 | Radar device and target detection method |
CN111417871A (en) | 2017-11-17 | 2020-07-14 | 迪普迈普有限公司 | Iterative closest point processing for integrated motion estimation using high definition maps based on lidar |
WO2019138485A1 (en) | 2018-01-11 | 2019-07-18 | 住友電気工業株式会社 | Collision possibility determination device, collision possibility determination method, and computer program |
US11091162B2 (en) * | 2018-01-30 | 2021-08-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Fusion of front vehicle sensor data for detection and ranging of preceding objects |
US11242144B2 (en) * | 2018-02-09 | 2022-02-08 | Skydio, Inc. | Aerial vehicle smart landing |
US11022683B1 (en) * | 2018-03-15 | 2021-06-01 | Aeva, Inc. | Simultaneous measurement of range and velocity using optical radar |
US11550061B2 (en) * | 2018-04-11 | 2023-01-10 | Aurora Operations, Inc. | Control of autonomous vehicle based on environmental object classification determined using phase coherent LIDAR data |
US10706294B2 (en) * | 2018-05-03 | 2020-07-07 | Volvo Car Corporation | Methods and systems for generating and using a road friction estimate based on camera image signal processing |
JP2019197375A (en) | 2018-05-09 | 2019-11-14 | トヨタ自動車株式会社 | Collision avoidance support system |
EP3572839A1 (en) | 2018-05-23 | 2019-11-27 | Aptiv Technologies Limited | Method of estimating a velocity magnitude of a moving target in a horizontal plane and radar detection system |
US20200211394A1 (en) * | 2018-12-26 | 2020-07-02 | Zoox, Inc. | Collision avoidance system |
WO2020152534A1 (en) * | 2019-01-25 | 2020-07-30 | 4Iiii Innovations Inc. | Virtual inertia enhancements in bicycle trainer resistance unit |
US10943355B2 (en) * | 2019-01-31 | 2021-03-09 | Uatc, Llc | Systems and methods for detecting an object velocity |
CN110362074B (en) * | 2019-06-18 | 2021-11-23 | 华南理工大学 | Dynamic collision avoidance method for unmanned surface vehicle based on flight path re-planning |
WO2021090285A2 (en) * | 2019-11-08 | 2021-05-14 | Vayyar Imaging Ltd. | Systems and methods for sensing the surroundings of a vehicle |
2021
- 2021-02-22 JP JP2022549671A patent/JP7336604B2/en active Active
- 2021-02-22 CA CA3168740A patent/CA3168740C/en active Active
- 2021-02-22 US US17/182,168 patent/US20210261158A1/en not_active Abandoned
- 2021-02-22 KR KR1020227031758A patent/KR102503388B1/en active Active
- 2021-02-22 WO PCT/US2021/019122 patent/WO2021168452A2/en unknown
- 2021-02-22 MX MX2022010293A patent/MX2022010293A/en unknown
- 2021-02-22 EP EP21756397.2A patent/EP4096978A4/en active Pending
- 2021-02-22 US US17/182,173 patent/US11719821B2/en active Active
- 2021-02-22 CN CN202180030320.XA patent/CN115461258B/en active Active
- 2021-02-22 AU AU2021222055A patent/AU2021222055B2/en active Active
- 2021-02-22 US US17/182,165 patent/US11235785B2/en active Active
2023
- 2023-06-16 US US18/211,171 patent/US20230333252A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9903728B2 (en) * | 2013-10-17 | 2018-02-27 | Fathym, Inc. | Systems and methods for predicting weather performance for a vehicle |
US20160171898A1 (en) * | 2014-12-12 | 2016-06-16 | Atlantic Inertial Systems Limited (HSC) | Collision detection system |
US20160291149A1 (en) * | 2015-04-06 | 2016-10-06 | GM Global Technology Operations LLC | Fusion method for cross traffic application using radars and camera |
CN108701362A (en) * | 2016-02-29 | 2018-10-23 | 深圳市大疆创新科技有限公司 | Obstacle avoidance during target tracking |
WO2020035728A2 (en) * | 2018-08-14 | 2020-02-20 | Mobileye Vision Technologies Ltd. | Systems and methods for navigating with safe distances |
Also Published As
Publication number | Publication date |
---|---|
US11719821B2 (en) | 2023-08-08 |
WO2021168452A8 (en) | 2022-10-06 |
US20210261158A1 (en) | 2021-08-26 |
JP7336604B2 (en) | 2023-08-31 |
US20210261159A1 (en) | 2021-08-26 |
MX2022010293A (en) | 2023-01-04 |
WO2021168452A3 (en) | 2021-10-28 |
EP4096978A4 (en) | 2024-03-06 |
JP2023507671A (en) | 2023-02-24 |
KR20220134029A (en) | 2022-10-05 |
AU2021222055B2 (en) | 2022-12-01 |
CN115461258B (en) | 2023-09-05 |
EP4096978A2 (en) | 2022-12-07 |
CA3168740A1 (en) | 2021-08-26 |
US20210261157A1 (en) | 2021-08-26 |
US11235785B2 (en) | 2022-02-01 |
US20230333252A1 (en) | 2023-10-19 |
KR102503388B1 (en) | 2023-03-02 |
WO2021168452A2 (en) | 2021-08-26 |
CA3168740C (en) | 2023-08-01 |
AU2021222055A1 (en) | 2022-10-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115461258B (en) | Method for object avoidance during autonomous navigation | |
CN111771207B (en) | Enhanced vehicle tracking | |
US10984543B1 (en) | Image-based depth data and relative depth data | |
KR101572851B1 (en) | How to map your mobile platform in a dynamic environment | |
US20200133272A1 (en) | Automatic generation of dimensionally reduced maps and spatiotemporal localization for navigation of a vehicle | |
US11680801B2 (en) | Navigation based on partially occluded pedestrians | |
WO2020236720A1 (en) | Localization using semantically segmented images | |
JP5023186B2 (en) | Object motion detection system based on combination of 3D warping technique and proper object motion (POM) detection | |
US20210389133A1 (en) | Systems and methods for deriving path-prior data using collected trajectories | |
US11961304B2 (en) | Systems and methods for deriving an agent trajectory based on multiple image sources | |
JP7454685B2 (en) | Detection of debris in vehicle travel paths | |
US11961241B2 (en) | Systems and methods for deriving an agent trajectory based on tracking points within images | |
CN116783455A (en) | Systems and methods for detecting open doors | |
WO2023129656A1 (en) | Calculating vehicle speed for a road curve | |
CN117677972A (en) | System and method for road segment drawing | |
US20240094399A1 (en) | System and method for object reconstruction and automatic motion-based object classification | |
US20240265707A1 (en) | Systems and methods for deriving an agent trajectory based on multiple image sources | |
JP2020148601A (en) | Recognition device, vehicle controller, method for recognition, and program | |
JP2023116424A (en) | Method and device for determining position of pedestrian | |
US20240416948A1 (en) | Localization algorithm | |
US20250010888A1 (en) | Systems and methods for autonomous vehicle anchor point tracking | |
US20240262386A1 (en) | Iterative depth estimation | |
JP7334489B2 (en) | Position estimation device and computer program | |
US20240208492A1 (en) | Collision aware path planning systems and methods | |
US20250052581A1 (en) | Localizing vehicles using retroreflective surfaces |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||