US11544899B2 - System and method for generating terrain maps - Google Patents
System and method for generating terrain maps
- Publication number
- US11544899B2 US16/653,730 US201916653730A
- Authority
- US
- United States
- Prior art keywords
- map
- terrain
- data
- weight
- online
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3863—Structures of map data
-
- G06T5/002—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2215/00—Indexing scheme for image rendering
- G06T2215/12—Shadow map, environment map
Definitions
- the present disclosure relates to a system and method of generating terrain maps, and more specifically to generating maps surrounding an ego vehicle by fusing locally mapped terrain data with online data.
- Autonomous agents (e.g., vehicles, robots, drones, etc.) and semi-autonomous agents use machine vision for analyzing a surrounding environment.
- Machine vision is distinct from the field of digital image processing due to the desire to recover a three-dimensional (3D) structure of the world from images and to use the 3D structure to fully understand a scene. That is, machine vision strives to provide a 3D map of the environment that surrounds an autonomous agent.
- autonomous agents may rely on trained neural networks, such as a convolutional neural network (CNN), to identify objects (e.g., pedestrians, cyclists, other cars, etc.) within areas of interest in an image of a surrounding environment.
- a CNN may be trained to identify objects captured by one or more sensors, such as light detection and ranging (LIDAR) sensors, sonar sensors, red-green-blue (RGB) cameras, RGB-depth (RGB-D) cameras, and the like.
- a first method of computing terrain maps may use mapped terrain data, which may be obtained by extracting relevant data from a map produced by a method such as Simultaneous Localization and Mapping (SLAM).
- this form of mapped terrain data is static, computed a priori, and may not be up-to-date.
- a second method of computing terrain maps may include computing online terrain data based on point cloud data produced by sensors on the ego vehicle.
- Online terrain data may be computed live and reflects the environment around the ego vehicle.
- the range of online terrain data is limited by the range of the sensors.
- the sensor range of many known autonomous or semi-autonomous vehicles may be limited to about 70 meters, and performance may degrade as the range increases. Thus, at 70 meters and beyond, the quality of the terrain data may be at its worst.
- aspects of the present disclosure provide systems and methods of combining mapped terrain data and online terrain data to overcome the respective limitations of each.
- Aspects of the present disclosure provide for fusing online and mapped terrain estimates by using weighted grid cells that scale the values returned from online terrain and mapped terrain (and perhaps unknown space as well).
- a method of generating a terrain map is disclosed.
- a first set of map data may be retrieved from a memory.
- a second set of map data may be retrieved from at least one sensor.
- a terrain map may be generated from the first and second sets of map data.
- a grid overlaying the terrain map may be generated.
- the grid may include a plurality of cells. Each of the plurality of cells may be weighted with a first weight corresponding to the first set of map data and a second weight corresponding to the second set of map data. For each of the plurality of cells, a combined value from the first and second weights may be generated, and a fused terrain map reflecting the combined values may also be generated.
- a map generation device may include a memory storing a first set of map data, a sensor configured to obtain a second set of map data, and a map generating module.
- the map generating module may be configured to generate a terrain map from the first and second sets of map data.
- a grid may be applied overlaying the terrain map.
- the grid may comprise a plurality of cells. Each of the plurality of cells may be weighted with a first weight corresponding to the first set of map data and a second weight corresponding to the second set of map data. For each of the plurality of cells, a combined value may be generated from the first and second weights. A fused terrain map may be generated reflecting the combined values.
- FIG. 1 illustrates an example of an agent in an environment according to aspects of the present disclosure.
- FIG. 2A illustrates a fused terrain map according to aspects of the present disclosure.
- FIG. 2B illustrates a mapped terrain map according to aspects of the present disclosure.
- FIG. 2C illustrates an online terrain map according to aspects of the present disclosure.
- FIG. 2D illustrates a weighted fusion of mapped and online terrain map data according to aspects of the present disclosure.
- FIG. 3 depicts a method of generating a map according to one aspect of the present disclosure.
- FIG. 4 depicts a hardware implementation for a map generating system according to aspects of the present disclosure.
- Autonomous agents (e.g., vehicles, robots, drones, etc.) and semi-autonomous agents may use scene-understanding models, such as a trained artificial neural network, to identify objects and/or areas of interest in an image. Additionally, autonomous agents may predict a path (e.g., trajectory) of one or more detected objects. The predicted trajectory may be used for collision avoidance, route planning, and/or other tasks.
- trajectory prediction models may predict an object's trajectory based on a 2D or 3D map generated from a fusion of mapped terrain data with live sensory, or online, terrain data.
- Online terrain data may include data from one or more sensors, such as a light detection and ranging (LIDAR) sensor, associated with a machine vision system.
- data from the already-existing mapped terrain may be combined with the online terrain data.
- Weighted values of mapped terrain data may be combined with online terrain data to generate a fused terrain map of the environment and localize each object in the map.
- FIG. 1 illustrates an example of an agent 100 in an environment 150 according to aspects of the present disclosure.
- the agent 100 may be traveling on a road 110.
- a first vehicle 104 may be ahead of the agent 100 and a second vehicle 116 may be adjacent to the agent 100.
- the agent 100 may include a 2D camera 108, such as a 2D RGB camera, and a LIDAR sensor 106.
- Other sensors, such as RADAR and/or ultrasound, are also contemplated.
- the agent 100 may include one or more additional 2D cameras and/or LIDAR sensors.
- the additional sensors may be side facing and/or rear facing sensors.
- the agent 100 may operate in the environment 150 according to mapped data retrieved or obtained from at least two sources: a previously generated terrain map and online sensory data retrieved in substantially real time.
- terrain maps may be previously generated using SLAM, from which relevant objects and other data may be extracted and used to aid the trajectory of the agent 100.
- the SLAM data may feature static objects such as the road 110 , other roadways, buildings, traffic lights and other objects that are not likely to move or change over time.
- the environment 150 surrounding the agent 100 may include dynamic objects and static objects.
- a dynamic object refers to an object that may move, such as a pedestrian, bicycle, or car.
- a static object refers to background objects, such as a road, a sidewalk, or vegetation.
- Previously mapped terrain data may have identified and classified the static objects a priori, while dynamic objects may be imaged and classified by online and on-board systems.
- Online sensory data may include data retrieved or obtained from the 2D camera 108 capturing a 2D image 120 that includes objects in the 2D camera's 108 field of view 114.
- the LIDAR sensor 106 may generate one or more output streams.
- the first output stream may include a 3D point cloud of objects in a first field of view, such as a 360° field of view 112 (e.g., bird's eye view).
- the second output stream 126 may include a 3D point cloud of objects in a second field of view, such as a forward-facing field of view.
- the 2D image captured by the 2D camera may include a 2D image of the first vehicle 104, as the first vehicle 104 is in the 2D camera's 108 field of view 114.
- a semantic segmentation system of the agent 100 may extract features from objects in the 2D image.
- an artificial neural network, such as a convolutional neural network, may extract features of the first vehicle 104.
- the extracted features may be used to generate a semantic label for the first vehicle 104 to be included in the fused terrain map.
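- the disclosure does not tie this labeling step to a specific network. As a purely illustrative sketch (assuming torchvision's pretrained DeepLabV3 as a stand-in, not the patent's own model), per-pixel semantic labels could be produced from a 2D camera frame as follows:

```python
import torch
from torchvision.models.segmentation import (
    DeepLabV3_ResNet50_Weights, deeplabv3_resnet50)

# Assumed stand-in model; the disclosure only requires "an artificial
# neural network such as a convolutional neural network".
weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

frame = torch.rand(3, 480, 640)          # stand-in for a 2D camera image
batch = preprocess(frame).unsqueeze(0)   # resize/normalize for the model
with torch.no_grad():
    scores = model(batch)["out"]         # [1, num_classes, H, W] logits
labels = scores.argmax(dim=1)            # per-pixel semantic class ids
```

- the resulting class ids could then be turned into semantic labels (e.g., car, pedestrian) attached to objects such as the first vehicle 104 in the fused terrain map.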
- a LIDAR sensor uses laser light to sense the shape, size, and position of objects in an environment.
- the LIDAR sensor may scan the environment vertically and horizontally.
- the artificial neural network of the agent 100 may extract height and/or depth features from the first output stream.
- the artificial neural network of the agent 100 may also extract height and/or depth features from the second output stream.
- the extracted features may be used to generate semantic labels of other objects in the environment 150 .
- FIG. 2A illustrates an example of a fused terrain map for use in generating an agent's trajectory according to aspects of the present disclosure.
- the terrain map may include a fusion of mapped data from a previously generated map and online data retrieved from the agent's onboard systems.
- FIG. 2B illustrates a previously generated terrain map 200B.
- FIG. 2C illustrates an exemplary online terrain map 200C.
- objects such as sidewalks 208, street signs 210, vegetation 212, buildings 214 (e.g., structures), and crosswalks 216 may have been previously imaged, classified, and mapped into the stored terrain map 200B, as illustrated by the map of FIG. 2B.
- Objects such as pedestrians 202, cars 204, and a train 206 may be imaged, classified, and mapped into a substantially real-time, or online, map 200C using the agent's on-board systems, as illustrated by the map of FIG. 2C.
- Aspects of the present disclosure are not limited to imaging and mapping the explicitly named elements. Other elements, both static and dynamic, may be imaged and classified. For simplicity, some labels of environmental objects are omitted.
- while the online map 200C illustrated herein depicts the identification of dynamic objects, such as pedestrians and vehicles, it will be appreciated that the on-board, online imaging systems may also image, classify, and otherwise make use of static objects in conjunction with data included in the previously stored terrain map 200B.
- FIG. 2D illustrates a conceptual fusion of stored terrain maps and online terrain maps according to one aspect of the disclosure.
- An agent 201, such as an autonomous or semi-autonomous vehicle, may use a fused map 200D to form a trajectory through an environment.
- the fused terrain map 200D may include a two-dimensional (x,y) view of the environment surrounding the agent 201.
- the fused map 200D illustrated in FIG. 2D represents the field-of-view in front of the agent 201, mapped to an (x,y) coordinate system that may be conceptualized as lying flat in front of the agent 201.
- the fused map of the surrounding environment may take the form of different fields-of-view, coordinate systems and dimensions, including three dimensions, without deviating from the scope of the disclosure.
- the environment may be mapped as disclosed herein using both data from a previously stored terrain map as well as online terrain data generated by the agent's on-board systems.
- a map generator module, as described herein, may overlay the terrain map 200D with a grid 230 including cells 235 having predetermined sizes.
- the terrain map 200D may be divided into a two-dimensional grid of n² cells.
- the height value, z, may be estimated in each cell.
- each cell 235 of the grid 230 may include weighted height values for the mapped terrain data, w_m, and the online terrain data, w_o.
- the weight values of the mapped terrain data, w_m, may be based on a number of environmental factors. Mapped terrain weights, for example, may be affected by factors such as the range, the age of the terrain map (older maps given lower weights), the number of samples in that specific cell, and the like. When including a ranging factor, weights may be lower near the agent 201 and increase in value as the cells 235 increase in distance away from the agent 201. For example, as illustrated in FIG. 2D, the weighted values of w_m may increase for cells 235 as the distance from the agent increases.
- the weight values of the online terrain data, w_o, may be affected by factors including the range, the number of samples in that specific cell, how many different sensor modalities produced measurements in a particular cell, the strength of the measurement signal in the cell, semantic labels, ego car localization uncertainty, and the like.
- the weight values of the online terrain data, w_o, may be at their highest in cells nearest to the agent 201, decreasing as the cells increase in distance from the agent 201 in the grid 230.
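- as a minimal sketch of such range-based weighting (assuming simple complementary linear ramps and a nominal 70-meter sensor range; the disclosure does not prescribe a particular weight function), w_m and w_o might be computed per cell as follows:

```python
import numpy as np

def terrain_weights(cell_range_m: float, sensor_range_m: float = 70.0):
    """Illustrative range-only weight ramps for one grid cell.

    Returns (w_m, w_o) for the mapped and online terrain. Only the
    range factor is modeled; map age, per-cell sample counts, sensor
    modalities, and localization uncertainty are omitted.
    """
    # Online weight is highest at the agent and falls to zero at the
    # nominal sensor range (and beyond).
    w_o = float(np.clip(1.0 - cell_range_m / sensor_range_m, 0.0, 1.0))
    # Mapped weight mirrors it: low near the agent, high far away.
    w_m = 1.0 - w_o
    return w_m, w_o  # normalized here so w_m + w_o == 1 (optional)
```

- normalizing the two weights to sum to 1, as in this sketch, is convenient but, as noted below, not required.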
- the terrain data for each cell may be computed as a combined value of the mapped terrain data, w_m, and the online terrain data, w_o.
- the combined value may include a weighted mean of the mapped terrain data, w_m, and the online terrain data, w_o.
- the weighted mean for each cell may be smoothed to reduce noise, for example using a Gaussian kernel.
- the result is a fused terrain map that takes advantage of previously obtained terrain data supplemented with live, online terrain data.
- the resulting fused terrain map may form a comprehensive terrain map for the agent's use in generating a trajectory or other environmental navigation function.
- a map generation system may retrieve mapped terrain data from a data source.
- the system may retrieve or obtain the mapped terrain data by extracting relevant data from a map produced by SLAM.
- the map generating system may obtain online terrain data.
- online terrain data may be retrieved or obtained from an agent's on-board systems, including, without limitation, RADAR, LIDAR, sensors, cameras, and other data collection devices.
- Online terrain data may be computed and/or generated based on a point cloud produced by the on-board systems.
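- as a minimal sketch of this step (hypothetical cell size and grid extent; the disclosure does not fix a binning rule), an (x, y, z) point cloud might be reduced to per-cell height estimates as follows:

```python
import numpy as np

def online_height_grid(points, cell_size=0.5, grid_dim=100):
    """Bin an (N, 3) point cloud into a grid_dim x grid_dim height grid.

    points holds (x, y, z) samples in the agent frame, with the agent
    at the grid origin for simplicity. Returns (heights, counts): the
    mean height per cell and how many samples landed in each cell
    (0 where the sensors saw nothing).
    """
    ix = (points[:, 0] // cell_size).astype(int)
    iy = (points[:, 1] // cell_size).astype(int)
    inside = (ix >= 0) & (ix < grid_dim) & (iy >= 0) & (iy < grid_dim)
    ix, iy, z = ix[inside], iy[inside], points[inside, 2]

    flat = ix * grid_dim + iy                       # flatten cell index
    counts = np.bincount(flat, minlength=grid_dim * grid_dim)
    sums = np.bincount(flat, weights=z, minlength=grid_dim * grid_dim)
    heights = np.divide(sums, counts, out=np.zeros_like(sums),
                        where=counts > 0)           # mean z per cell
    return (heights.reshape(grid_dim, grid_dim),
            counts.reshape(grid_dim, grid_dim))
```

- the per-cell sample counts in this sketch are exactly the kind of "number of samples in that specific cell" factor that may feed the online weights described above.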
- the map generating system may combine the mapped terrain data with the online terrain data to form a fused terrain map.
- the map generating system may, as shown in block 308, overlay the terrain map with a grid having cells of a predetermined size. Each cell of the grid may include terrain data retrieved from the mapped terrain as well as online terrain data obtained from the agent's on-board sensors.
- the data of each cell of the grid may be weighted.
- the mapped terrain data and the online terrain data may have different weighted values.
- the weighting value for the mapped terrain data may be lower for cells near the ego vehicle and progressively higher for cells farther from the ego vehicle.
- the weighting value for the online terrain data may be higher for cells near the ego vehicle and progressively lower for cells farther from the ego vehicle.
- the weighting values may be normalized such that the sum of the weighting values equals 1, but such normalization is not required.
- the map generating system may compute a combined value for each cell.
- the combined value may include a weighted mean of the mapped terrain data and the online terrain data, for example as given by Eq. 1: z = (w_o*z_o + w_m*z_m)/(w_o + w_m).
- z_m and z_o represent the height estimates for the mapped and online terrain, respectively.
- w_o and w_m represent the weighting factors, which may be scaled between 0 and 1, but need not be, and can depend on multiple factors such as range, the cell's quantity of information, and other information sources such as localization uncertainty.
- the weight value w_o might decrease as the range from the ego vehicle increases, since the sensor data gets sparser and more scattered the further an object is from the agent's sensors.
- the map generating system may smooth the combined value for each cell to reduce noise and form an estimated fused terrain map.
- the map generating system may smooth each cell with a Gaussian kernel; however, other smoothing functions may also be applied.
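- a non-authoritative sketch of these combining and smoothing steps, assuming NumPy/SciPy, same-shape 2D grids, and the weighted mean of Eq. 1 (sigma is an assumed kernel width, not specified by the disclosure):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_terrain(z_m, z_o, w_m, w_o, sigma=1.0):
    """Fuse mapped (z_m) and online (z_o) height grids per Eq. 1,
    then smooth with a Gaussian kernel to reduce noise.

    All four inputs are same-shape 2D arrays; w_m and w_o need not
    be normalized, since Eq. 1 divides by their sum.
    """
    total = w_m + w_o
    fused = np.divide(w_o * z_o + w_m * z_m, total,
                      out=np.zeros_like(total, dtype=float),
                      where=total > 0)          # Eq. 1, cell by cell
    return gaussian_filter(fused, sigma=sigma)  # smooth the fused map
```

- the Gaussian kernel is the example given in the disclosure; another smoothing function could replace the final line.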
- FIG. 4 is a diagram illustrating an example of a hardware implementation for a map generating system 400, according to aspects of the present disclosure.
- the map generating system 400 may be a component of a vehicle, a robotic device, or other device.
- the map generating system 400 may be a component of a car 428.
- Aspects of the present disclosure are not limited to the map generating system 400 being a component of the car 428, as other devices, such as a bus, boat, drone, simulator, or robot, are also contemplated for using the map generating system 400.
- the car 428 may be autonomous or semi-autonomous.
- the map generating system 400 may be implemented with a bus architecture, represented generally by a bus 430.
- the bus 430 may include any number of interconnecting buses and bridges depending on the specific application of the map generating system 400 and the overall design constraints.
- the bus 430 may link together various circuits including one or more processors and/or hardware modules, represented by a processor 420, a communication module 422, a location module 418, a sensor module 402, a locomotion module 426, a planning module 424, and a computer-readable medium 414.
- the bus 430 may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further.
- the map generating system 400 may include a transceiver 416 coupled to the processor 420, the sensor module 402, a map generating module 408, the communication module 422, the location module 418, the locomotion module 426, the planning module 424, and the computer-readable medium 414.
- the transceiver 416 is coupled to an antenna 434.
- the transceiver 416 communicates with various other devices over a transmission medium. For example, the transceiver 416 may receive commands via transmissions from a user or a remote device. As another example, the transceiver 416 may transmit driving statistics and information from the map generating module 408 to a server (not shown).
- the map generating module 408 may include the processor 420 coupled to the computer-readable medium 414.
- the processor 420 may perform processing, including the execution of software stored on the computer-readable medium 414 providing functionality according to the disclosure.
- the software, when executed by the processor 420, causes the map generating system 400 to perform the various functions described for a particular device, such as the car 428, or any of the modules 402, 408, 414, 416, 418, 420, 422, 424, 426.
- the computer-readable medium 414 may also be used for storing data that is manipulated by the processor 420 when executing the software.
- the sensor module 402 may be used to obtain measurements via different sensors, such as a first sensor 406, a second sensor 404, and a third sensor 410.
- the first sensor 406 may be a vision sensor, such as a stereoscopic camera or a red-green-blue (RGB) camera, for capturing 2D images.
- the second sensor 404 may be a ranging sensor, such as a light detection and ranging (LIDAR) sensor or a radio detection and ranging (RADAR) sensor.
- the third sensor 410 may include an in-cabin camera for capturing raw video or images of the interior environment of the car 428.
- aspects of the present disclosure are not limited to the aforementioned sensors, as other types of sensors, such as, for example, thermal, sonar, and/or lasers, are also contemplated for either of the sensors 404, 406.
- the measurements of the sensors 404, 406, 410 may be processed by one or more of the processor 420, the sensor module 402, the map generating module 408, the communication module 422, the location module 418, the locomotion module 426, and the planning module 424, in conjunction with the computer-readable medium 414 to implement the functionality described herein.
- the data captured by the first sensor 406 and the second sensor 404 may be transmitted to an external device via the transceiver 416.
- the sensors 404, 406, 410 may be coupled to the car 428 or may be in communication with the car 428.
- the location module 418 may be used to determine a location of the car 428.
- the location module 418 may use a global positioning system (GPS) to determine the location of the car 428.
- the communication module 422 may be used to facilitate communications via the transceiver 416.
- the communication module 422 may be configured to provide communication capabilities via different wireless protocols, such as WiFi, long-term evolution (LTE), 3G, etc.
- the communication module 422 may also be used to communicate with other components of the car 428 that are not modules of the map generating module 408.
- the locomotion module 426 may be used to facilitate locomotion of the car 428.
- the locomotion module 426 may control movement of the wheels.
- the locomotion module 426 may be in communication with a power source of the car 428, such as an engine or batteries.
- aspects of the present disclosure are not limited to providing locomotion via wheels and are contemplated for other types of components for providing locomotion, such as propellers, treads, fins, and/or jet engines.
- the map generating system 400 may also include the planning module 424 for planning a predicted route or trajectory or for controlling the locomotion of the car 428 via the locomotion module 426.
- the planning module 424 overrides the user input when the user input is expected (e.g., predicted) to cause a collision.
- the modules may be software modules running in the processor 420, resident/stored in the computer-readable medium 414, one or more hardware modules coupled to the processor 420, or some combination thereof.
- the map generating module 408 may be in communication with the sensor module 402, the transceiver 416, the processor 420, the communication module 422, the location module 418, the locomotion module 426, the planning module 424, and the computer-readable medium 414.
- the map generating module 408 may receive sensor data from the sensor module 402.
- the sensor module 402 may receive the sensor data from the sensors 404, 406, 410.
- the sensor module 402 may filter the data to remove noise, encode the data, decode the data, merge the data, extract frames, or perform other functions.
- the map generating module 408 may receive sensor data directly from the sensors 404, 406, 410.
- the map generating module 408 may be in communication with the planning module 424 and the locomotion module 426 to operate the car 428 according to maps generated according to aspects of the present disclosure.
- the map generating module 408 may fuse online terrain data and mapped terrain data to provide accurate and robust terrain maps for the car 428.
- the map generating module 408 may obtain mapped terrain data using SLAM, for example.
- the map generating module 408 may generate a terrain map including both previously mapped terrain data as well as live, online terrain data obtained by the on-board sensors 404, 406, 410.
- the map generating module 408 may further apply a grid over the mapped area surrounding the ego vehicle, and combine the values of the weighted mapped terrain data with the weighted online terrain data within each grid cell to produce a combined value of the data.
- the weighted value for the mapped terrain data may be lower closer to the car 428 (within the range of the car's sensors 404, 406, 410) and higher farther away from the car 428 (outside the range of the car's sensors 404, 406, 410).
- the weighted value for the online terrain data may be higher closer to the car 428 (within the range of the car's sensors 404, 406, 410) and lower farther away from the car 428 (outside the range of the car's sensors 404, 406, 410).
- the resulting grid may be used to determine the terrain and generate a fused terrain map accordingly.
- determining encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Additionally, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Furthermore, “determining” may include resolving, selecting, choosing, establishing, and the like.
- a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members.
- “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.
- the various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a processor specially configured to perform the functions discussed in the present disclosure.
- the processor may be a neural network processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
- the processing system may comprise one or more neuromorphic processors for implementing the neuron models and models of neural systems described herein.
- the processor may be a microprocessor, controller, microcontroller, or state machine specially configured as described herein.
- a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or such other special configuration, as described herein.
- a software module may reside in storage or machine readable medium, including random access memory (RAM), read only memory (ROM), flash memory, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- a software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media.
- a storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
- the methods disclosed herein comprise one or more steps or actions for achieving the described method.
- the method steps and/or actions may be interchanged with one another without departing from the scope of the claims.
- the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
- an example hardware configuration may comprise a processing system in a device.
- the processing system may be implemented with a bus architecture.
- the bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints.
- the bus may link together various circuits including a processor, machine-readable media, and a bus interface.
- the bus interface may be used to connect a network adapter, among other things, to the processing system via the bus.
- the network adapter may be used to implement signal processing functions.
- a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus via the bus interface.
- the bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further.
- the processor may be responsible for managing the bus and processing, including the execution of software stored on the machine-readable media.
- Software shall be construed to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
- the machine-readable media may be part of the processing system separate from the processor.
- the machine-readable media, or any portion thereof, may be external to the processing system.
- the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from the device, all of which may be accessed by the processor through the bus interface.
- the machine-readable media, or any portion thereof, may be integrated into the processor, as may be the case with cache and/or specialized register files.
- although the various components discussed may be described as having a specific location, such as a local component, they may also be configured in various ways, such as certain components being configured as part of a distributed computing system.
- the machine-readable media may comprise a number of software modules.
- the software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices.
- a software module may be loaded into RAM from a hard drive when a triggering event occurs.
- the processor may load some of the instructions into cache to increase access speed.
- One or more cache lines may then be loaded into a special purpose register file for execution by the processor.
- Computer-readable media include both computer storage media and communication media including any storage medium that facilitates transfer of a computer program from one place to another.
- modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable.
- a user terminal and/or base station can be coupled to a server to facilitate the transfer of means for performing the methods described herein.
- various methods described herein can be provided via storage means, such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device.
- any other suitable technique for providing the methods and techniques described herein to a device can be utilized.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- Automation & Control Theory (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
z = (w_o * z_o + w_m * z_m) / (w_o + w_m) (Eq. 1)
where z_m and z_o represent the height estimates for the mapped and online terrain, respectively; w_o and w_m represent the weighting factors, which may be scaled between 0 and 1, but need not be, and can depend on multiple factors such as range, the cell's quantity of information, and other information sources such as localization uncertainty. For example, the weight value w_o might decrease as the range from the ego vehicle increases, since the sensor data gets sparser and more scattered the further an object is from the agent's sensors.
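For illustration only (assumed values, not taken from the disclosure): for a distant cell with z_m = 0.7 m, z_o = 0.5 m, w_m = 0.8, and w_o = 0.2, Eq. 1 gives z = (0.2*0.5 + 0.8*0.7)/(0.2 + 0.8) = 0.66 m, so the fused estimate leans toward the stored map where live sensor data is sparse.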
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/653,730 US11544899B2 (en) | 2019-10-15 | 2019-10-15 | System and method for generating terrain maps |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/653,730 US11544899B2 (en) | 2019-10-15 | 2019-10-15 | System and method for generating terrain maps |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20210110600A1 (en) | 2021-04-15 |
| US11544899B2 (en) | 2023-01-03 |
Family
ID=75383777
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/653,730 Active US11544899B2 (en) | 2019-10-15 | 2019-10-15 | System and method for generating terrain maps |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US11544899B2 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11113584B2 (en) * | 2020-02-04 | 2021-09-07 | Nio Usa, Inc. | Single frame 4D detection using deep fusion of camera image, imaging RADAR and LiDAR point cloud |
-
2019
- 2019-10-15 US US16/653,730 patent/US11544899B2/en active Active
Patent Citations (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8019736B1 (en) * | 2007-07-25 | 2011-09-13 | Rockwell Collins, Inc. | Systems and methods for combining a plurality of terrain databases into one terrain database |
| US20100264629A1 (en) * | 2009-04-16 | 2010-10-21 | Toyota Motor Engineering & Manufacturing North America, Inc. | Adjustable Airbag Systems for Vehicles |
| US20120053755A1 (en) * | 2010-08-30 | 2012-03-01 | Denso Corporation | Traveling environment recognition device and method |
| US20150036487A1 (en) * | 2012-02-20 | 2015-02-05 | Nec Corporation | Vehicle-mounted device and congestion control method |
| US20160292626A1 (en) * | 2013-11-25 | 2016-10-06 | First Resource Management Group Inc. | Apparatus for and method of forest-inventory management |
| US20160221592A1 (en) * | 2013-11-27 | 2016-08-04 | Solfice Research, Inc. | Real Time Machine Vision and Point-Cloud Analysis For Remote Sensing and Vehicle Control |
| US20150268058A1 (en) * | 2014-03-18 | 2015-09-24 | Sri International | Real-time system for multi-modal 3d geospatial mapping, object recognition, scene annotation and analytics |
| US20150344028A1 (en) | 2014-06-02 | 2015-12-03 | Magna Electronics Inc. | Parking assist system with annotated map generation |
| US20170345321A1 (en) | 2014-11-05 | 2017-11-30 | Sierra Nevada Corporation | Systems and methods for generating improved environmental displays for vehicles |
| US9524647B2 (en) | 2015-01-19 | 2016-12-20 | The Aerospace Corporation | Autonomous Nap-Of-the-Earth (ANOE) flight path planning for manned and unmanned rotorcraft |
| US20180122135A1 (en) | 2015-04-30 | 2018-05-03 | University Of Cape Town | Systems and methods for synthesising a terrain |
| US20170032509A1 (en) * | 2015-07-31 | 2017-02-02 | Accenture Global Services Limited | Inventory, growth, and risk prediction using image processing |
| US20170169605A1 (en) * | 2015-12-10 | 2017-06-15 | Ocean Networks Canada Society | Automated generation of digital elevation models |
| US20180004210A1 (en) * | 2016-07-01 | 2018-01-04 | nuTonomy Inc. | Affecting Functions of a Vehicle Based on Function-Related Information about its Environment |
| US20180025234A1 (en) * | 2016-07-20 | 2018-01-25 | Ford Global Technologies, Llc | Rear camera lane detection |
| US20200134783A1 (en) * | 2017-04-28 | 2020-04-30 | Sony Corporation | Information processing device, information processing method, information processing program, image processing device, and image processing system |
| US20200271787A1 (en) * | 2017-10-03 | 2020-08-27 | Intel Corporation | Grid occupancy mapping using error range distribution |
| US20190236763A1 (en) * | 2018-01-30 | 2019-08-01 | Canon Medical Systems Corporation | Apparatus and method for context-oriented blending of reconstructed images |
| US20210131823A1 (en) * | 2018-06-22 | 2021-05-06 | Marelli Europe S.P.A. | Method for Vehicle Environment Mapping, Corresponding System, Vehicle and Computer Program Product |
| US20200003897A1 (en) * | 2018-06-28 | 2020-01-02 | Zoox, Inc. | Multi-Resolution Maps for Localization |
| US20190244517A1 (en) * | 2019-02-05 | 2019-08-08 | Hassnaa Moustafa | Enhanced high definition maps for a vehicle |
Non-Patent Citations (1)
| Title |
|---|
| Miller, Thomas Isaac, "Robotic Localization and Perception in Static Terrain and Dynamic Urban Environments," Jan. 2009, Cornell University. |
Also Published As
| Publication number | Publication date |
|---|---|
| US20210110600A1 (en) | 2021-04-15 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12266148B2 (en) | Real-time detection of lanes and boundaries by autonomous vehicles | |
| US20230366698A1 (en) | Map creation and localization for autonomous driving applications | |
| US11276230B2 (en) | Inferring locations of 3D objects in a spatial environment | |
| US12100230B2 (en) | Using neural networks for 3D surface structure estimation based on real-world data for autonomous systems and applications | |
| CN112740268B (en) | Target detection method and device | |
| EP3822852B1 (en) | Method, apparatus, computer storage medium and program for training a trajectory planning model | |
| JP7239703B2 (en) | Object classification using extraterritorial context | |
| CN113228043B (en) | System and method for obstacle detection and association based on neural network for mobile platform | |
| US20260022942A1 (en) | Systems and methods for deriving path-prior data using collected trajectories | |
| US12195021B2 (en) | Platform for perception system development for automated driving system | |
| CN115867940A (en) | Monocular Depth Supervision from 3D Bounding Boxes | |
| CN112710316B (en) | Focus on dynamic map generation in the field of construction and localization technology | |
| JP2023066377A (en) | Three-dimensional surface reconfiguration with point cloud densification using artificial intelligence for autonomous systems and applications | |
| US11556126B2 (en) | Online agent predictions using semantic maps | |
| US12080013B2 (en) | Multi-view depth estimation leveraging offline structure-from-motion | |
| US11544899B2 (en) | System and method for generating terrain maps | |
| CN115683125A (en) | Method, system and computer program product for automatically locating a vehicle | |
| Yang et al. | The research of 3D point cloud data clustering based on MEMS LiDAR for autonomous driving | |
| WO2024215334A1 (en) | High definition map fusion for 3d object detection | |
| CN115946723A (en) | Method and device for determining driving strategy, vehicle and storage medium | |
| US20240395768A1 (en) | Multi-view depth estimation leveraging offline structure-from-motion | |
| CN119027476A (en) | Method and device for determining free space |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | AS | Assignment | Owner name: TOYOTA RESEARCH INSTITUTE, INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SERAFIN,JACOPO;DERRY,MATTHEW;REEL/FRAME:050738/0885; Effective date: 20191008 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| | AS | Assignment | Owner name: TOYOTA JIDOSHA KABUSHIKI KAISHA, JAPAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TOYOTA RESEARCH INSTITUTE, INC.;REEL/FRAME:062590/0759; Effective date: 20230117 |