
AU2019352559A1 - System and method for facilitating generation of geographical information - Google Patents


Info

Publication number
AU2019352559A1
Authority
AU
Australia
Prior art keywords
processor
image data
operable
point cloud
geographical information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2019352559A
Inventor
Kim Sun Gerry ONG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gps Lands Singapore Pte Ltd
Original Assignee
Gps Lands Singapore Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gps Lands Singapore Pte Ltd filed Critical Gps Lands Singapore Pte Ltd
Publication of AU2019352559A1 publication Critical patent/AU2019352559A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/10Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G01S5/163Determination of attitude
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/176Urban or other man-made structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/56Particle system, point based geometry or rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Graphics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A system and method for facilitating generation of geographical information is disclosed. The system for facilitating the generation of geographical information may comprise an imaging module operable to obtain image data; and a processor operable to receive the image data and generate a point cloud having a plurality of data points using the image data, wherein the processor is further operable to analyse the point cloud to extract at least one feature associated with the geographical information, and generate the geographical information using the extracted feature.

Description

SYSTEM AND METHOD FOR FACILITATING GENERATION OF GEOGRAPHICAL INFORMATION
Field of Invention
The present disclosure relates to a system and method for facilitating the generation of geographical information.
Background Art
The following discussion of the background to the invention is intended to facilitate an understanding of the present invention only. It may be appreciated that the discussion is not an acknowledgement or admission that any of the material referred to was published, known or part of the common general knowledge of the person skilled in the art in any jurisdiction as at the priority date of the invention.
With the development of imaging sensors, three-dimensional sensors (hereinafter referred to as "3D sensors") are commonly used to obtain image data for generating geographical information, for example High Definition (HD) maps. An example of a 3D sensor is a Light Detection and Ranging (LiDAR) sensor. The LiDAR sensor measures distance to an object by illuminating the object with pulsed laser light and detecting the reflected pulses, and image data is obtained using the detected distance.
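As a simple illustration of the time-of-flight principle described above, the range to an object follows directly from the round-trip time of a reflected pulse. The following minimal Python sketch is provided for illustration only and is not part of any disclosed system.

    # Sketch of the LiDAR time-of-flight principle: the measured round-trip
    # time of a reflected pulse is converted to a one-way range.
    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def pulse_range(round_trip_time_s: float) -> float:
        """Return the one-way distance (m) for a measured round-trip time (s)."""
        return SPEED_OF_LIGHT * round_trip_time_s / 2.0

    # Example: a pulse returning after about 667 nanoseconds corresponds to ~100 m.
    print(pulse_range(667e-9))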
However, after the image data is obtained by the 3D sensor, a user may face complex processes to generate the geographical information using the image data. Existing systems designed for generation of the geographical information using the image data are often complex to use, since the systems require the user to analyse and process the image data to generate the geographical information.
In general, processing of image data will vary depending on the types of the geographical information that the user desires to generate. Depending on the type of the geographical information, the user is required to consider necessary geographical factors and process the image data in an appropriate way.
In light of the foregoing, it is not straightforward for the user to generate the geographical information using the obtained image data. In addition, as it takes time for the user to analyse and process the image data, the user may typically face challenges in generating the geographical information in real-time or near real-time. The generation of the geographical information is laborious, inefficient, time-consuming and costly. This is in part caused by the computationally intensive nature of 3D image data generation based on obtained image data.
Considering the above, there exists a need to provide a solution that meets the mentioned needs or alleviates the challenges at least in part.
Summary of the Invention
Throughout the specification, unless the context requires otherwise, the word "comprise" or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated integer or group of integers but not the exclusion of any other integer or group of integers.
Furthermore, throughout the specification, unless the context requires otherwise, the word "include" or variations such as "includes" or "including", will be understood to imply the inclusion of a stated integer or group of integers but not the exclusion of any other integer or group of integers.
The invention or disclosure seeks to provide a system and method to reduce the user’s manual and laborious work in generation of geographical information.
The technical solution is provided in the form of a system and method for facilitating generation of geographical information. In particular, the system comprises a processor operable to analyse image data obtained by an imaging module. Thereafter, the processor is operable to extract at least one feature from the image data based on the analysis. The extracted feature is associated with the geographical information that the user desires to generate. The processor is then operable to generate the geographical information using the extracted feature.
In this manner, the processor is operable to generate the geographical information that the user desires to generate, in real-time or near real-time, without the user’s manual and laborious work. In one aspect, there is a system for facilitating generation of geographical information comprising: an imaging module operable to obtain image data; and a processor operable to receive the image data and generate a point cloud having a plurality of data points using the image data, wherein the processor is further operable to analyse the point cloud to extract at least one feature associated with the geographical information, and generate the geographical information using the extracted feature.
In some embodiments, the imaging module comprises at least one 3D sensor operable to generate 3D image data as the image data.
In some embodiments, the generation of the point cloud includes georeferencing of the image data.
In some embodiments, the system further comprises a position and orientation system (POS) operable to send position and/or orientation related data to the processor.
In some embodiments, the processor is operable to parse the sent data and use the parsed information to perform the georeferencing of the image data. In some embodiments, the processor is operable to correct the point cloud radiometrically and/or geometrically.
In some embodiments, the processor is operable to compute an octree, and divide the point cloud into a plurality of cells based on the computed octree so that each of the plurality of cells has the same size. In some embodiments, the processor is operable to compute a normal for each of the data points and create an eigenvalue for each of the plurality of cells based on the computed normal.
In some embodiments, the processor is operable to compute at least one geometrical attribute for each of the plurality of cells by normalizing the eigenvalues of each of the plurality of cells.
In some embodiments, the processor is operable to segment the point cloud according to the geometrical attribute. In some embodiments, the processor is operable to classify each of the data points, based on the segmented point cloud according to the geometrical attribute.
In some embodiments, the imaging module further comprises at least one 2D sensor operable to generate 2D image data. In some embodiments, the processor is operable to merge the 2D image data and the 3D image data.
In some embodiments, the processor is operable to re-compute the at least one geometrical attribute for each of the plurality of cells and/or re-classify each of the data points, based on the merged image data. In some embodiments, the processor is operable to receive a selection of a type of the geographical information to be generated and extract the at least one feature associated with the geographical information to be generated.
In some embodiments, the geographical information includes at least one of a high definition map, plant information or public infrastructure information. In some embodiments, the feature includes at least one of geographical feature or object feature.
In some embodiments, the imaging module is operable to provide the image data which was obtained previously to the processor.
In another aspect, there is a method for facilitating generation of geographical information comprising: obtaining image data at an imaging module; receiving, at a processor, the image data from the imaging module; generating, at the processor, a point cloud having a plurality of data points using the image data; analysing, at the processor, the point cloud; extracting, at the processor, at least one feature associated with the geographical information; and generating, at the processor, the geographical information using the extracted feature.
Other aspects of the invention will become apparent to those of ordinary skill in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying drawings.
Brief Description of the Drawings
The present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Fig. 1 shows a block diagram in accordance with some embodiments of the present invention.
Fig. 2 shows a flow diagram in accordance with some embodiments of the present invention.
Fig. 3 shows another block diagram in accordance with some embodiments of the present disclosure.
Fig. 4 shows another flow diagram in accordance with some embodiments of the present disclosure.
Fig. 5 shows a flow diagram regarding a generation of geographical information in accordance with some embodiments of the present disclosure.
Other arrangements of the invention are possible and, consequently, the accompanying drawings are not to be understood as superseding the generality of the preceding description of the invention.
Description of Embodiments of the Invention
Fig. 1 shows a block diagram in accordance with some embodiments of the present disclosure. A system 100 may comprise an imaging module 110 and a processor 120. The system 100 may further comprise a Position and Orientation System (POS) module 130.
The imaging module 110 may comprise at least one 3D sensor 111 and/or at least one 2D sensor 112. The 3D sensor 111 may comprise, but not be limited to, a Light Detection and Ranging (LiDAR) sensor (also referred to as "LiDAR scanner"). The 2D sensor 112 may comprise, but not be limited to, an RGB camera, a multispectral imager and/or a hyperspectral imager. The 3D sensor 111 is operable to generate 3D image data, and the 2D sensor 112 is operable to generate 2D image data. The imaging module 110 may generate image data by capturing image(s) of an object. For example, the LiDAR sensor detects distance to an object by illuminating the object with pulsed laser light and detecting the reflected pulses, and generates the image data using the detected distance. For example, the image data may include, but not be limited to, at least one of raw data which is generated by capturing the image(s) of the object and/or data processed from the raw data (referred to as "processed data").
The imaging module 110 is operable to provide the image data to the processor 120 in real-time or near real-time, for rapid assessment and time critical decision making by the processor 120. Although not shown, the imaging module 110 can obtain the image data from an external database and/or a server. For example, the imaging module 110 may receive the image data from the external database and/or the server, and then provide the image data to the processor 120.
Although not shown, the imaging module 110 can provide the image data which was generated previously to the processor 120. For example, the imaging module 110 may have an internal database. The internal database may include one or more memory units for storing the image data which was generated previously. The imaging module 110 may extract the image data from the internal database and then provide the image data to the processor 120. The POS 130 may comprise an Inertial Navigation System (INS) module 131 and a Global Navigation Satellite System (GNSS) module 132. The POS 130 may be operable to generate orientation and/or position related data through module 131 and/or module 132. The POS 130 may transmit the position and/or orientation related data to the processor 120. In some embodiments, the position and/or orientation related data is sent to the processor 120 in real-time or near real-time. The sending of the position and/or orientation related data may be via streaming.
The processor 120 may include an Explicit Data Graph Execution (EDGE) processor. The processor 120 can be installed on a portable device, such as a backpack, mobile and/or airborne platform. The portable device may include a mounting structure, such as a frame, for items to be affixed or attached thereon/thereto. The processor 120 may include various form factors based on the platform that the processor 120 is used on. The processor 120 may include at least one Graphics Processing Unit (GPU), which may be integrated in the processor 120 in the form of a card, as a hardware component with dedicated software installed in the processor 120.
The processor 120 may have a casing that fulfils certain requirements, e.g., to be operable under outdoor conditions. For example, the casing may be waterproof, dustproof and/or vibration proof. In some examples, the casing meets the requirements of IP68 (also referred to as IP Code, International Protection Marking, IEC standard 60529 published by the International Electrotechnical Commission (IEC), or European standard EN 60529). The processor 120 can be an off-the-shelf solution, having a small footprint case like that known under the commercial name ROSIE from the company Connecttech (http://connecttech.com/product/rosie-embedded-system-with-nvidia-jetson-tx2-tx1/), a rugged GPGPU Computing Server as available from the company Mercury Systems (https://www.mrcy.com/rugged-servers/gpgpu-computing-servers/), or it may comprise a custom casing, e.g., milled out of plastic or aluminium using a CNC machine or printed with a 3D printer.
The processor 120 is operable to synchronize and control the imaging module 110 and/or the POS 130. The processor 120 is also operable to process and analyse the image data in real-time or near real-time. The GPU is operable to process the image data for output to a display device.
The processor 120 is further operable to generate geographical information using the processed and analysed image data in real-time or near real-time, thus facilitating real-time generation of data as compared to conventional ways. In some embodiments, the processor 120 extracts, from the processed and analysed image data, at least one necessary feature which is associated with the geographical information that a user desires to generate. Thereafter, the processor 120 generates the geographical information using the extracted feature(s). This process can be achieved automatically, without the user's manual and laborious work. It may be appreciated that the imaging module 110, the processor 120 and the POS 130 are linked together via a communication network, for example a Local Area Network (LAN) such as Ethernet or a wireless communication network which can include Wi-Fi, Bluetooth, or other mobile wireless networks. The system 100 can achieve real-time mapping and monitoring for a range of mobile platforms via the communication network.
In this manner, the system 100 provides a General-Purpose computing on Graphics Processing Units (GPGPU) hardware-software solution combining the imaging module 110 and the POS 130. The system 100 is able to collect the image data and position and/or orientation related data, and process the image data including georeferencing and analysis. In some embodiments, the analysis results may be streamed to the user, for example a distant viewer, via the LAN in real-time or near real-time, for immediate inspection.
In addition, the system 100 is able to generate geographical information using the image data, in real-time or near real-time, without any need for the user to perform post-processing tasks. In some embodiments, the generated geographical information may be streamed to the user, for example a distant viewer, via the LAN in real-time or near real-time, for immediate inspection.
Fig. 2 shows a flow diagram in accordance with some embodiments of the present disclosure.
As described above, the system 100 may comprise the imaging module 110 and the processor 120. The system 100 may further comprise the POS 130.
First, the imaging module 110 obtains the image data (S210). The imaging module 110 may comprise at least one 3D sensor 111. The 3D sensor 111 generates 3D image data, for example LiDAR data, as the image data.
The processor 120 then receives the image data from the imaging module 110 (S220). The processor 120 may receive or collect the 3D image data from the 3D sensor 111 in real-time or near real-time, via a communication network, for example a LAN. After receiving the 3D image data from the 3D sensor 111, the processor 120 performs a georeferencing of the 3D image data. The georeferencing is performed to generate one or more point clouds. It may be appreciated that the position and/or orientation related data generated from the POS 130 can be used to perform the georeferencing of the 3D image data.
The POS 130 sends the position and/or orientation related data to the processor 120 in real-time or near real-time. For example, the POS 130 streams a string of binary packets through a User Datagram Protocol (UDP) network connection. The UDP network connection allows the POS 130 to act as a server with respect to the processor 120 and to continuously stream the position and/or orientation related data through the UDP network connection.
The processor 120, as a client with respect to the POS 130, receives the position and/or orientation related data, for example the string of binary packets, from the POS 130 and parses the string of binary packets to find the needed information. The packets may contain, but not be limited to, position, velocity, attitude, speed and dynamics related data reported in a reference frame, for example the POS 130 reference frame. It may be appreciated that these data are variable values that are fed into an equation, for example a LiDAR equation, to perform the georeferencing. In this manner, the processor 120 may use the parsed information to perform the georeferencing of the image data. By the georeferencing, coordinates in a sensor frame are transformed to coordinates in a mapping frame, for example ECEF (earth-centred, earth-fixed) coordinates.
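A minimal sketch of how a client might receive and unpack such a binary stream is given below. The UDP port and the packet layout (a timestamp followed by position, velocity and attitude fields as little-endian doubles) are illustrative assumptions only and do not reflect the actual packet format of the POS 130.

    # Hedged sketch: receive binary POS packets over UDP and unpack a few fields.
    # The port number and packet layout are assumptions for illustration only;
    # a real POS stream has its own documented packet structure.
    import socket
    import struct

    UDP_PORT = 5602                      # hypothetical port
    PACKET_FMT = "<10d"                  # time, lat/lon/alt, vel N/E/D, roll/pitch/heading
    PACKET_SIZE = struct.calcsize(PACKET_FMT)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", UDP_PORT))

    def read_pos_record(s: socket.socket) -> dict:
        data, _addr = s.recvfrom(4096)
        fields = struct.unpack(PACKET_FMT, data[:PACKET_SIZE])
        return {
            "time": fields[0],
            "position": fields[1:4],     # latitude, longitude, altitude
            "velocity": fields[4:7],     # north, east, down
            "attitude": fields[7:10],    # roll, pitch, heading
        }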
The processor 120 then generates the point cloud using the georeferenced image data (S230). It may be appreciated that the point cloud includes a set of data points. Although not shown, the processor 120 may correct the point cloud radiometrically and/or geometrically. For example, an intensity value of the point cloud may be corrected geometrically and calibrated radiometrically. In this regard, the recorded intensity may suffer from some deformations due to the geometry of the measurements and environmental effects. Therefore, radiometric and geometric corrections may be applied to the intensity value before it can be used. It is to be appreciated that the recorded intensity may be represented as a black and white image, where black may indicate low intensity and white may indicate high intensity.
Thereafter, the processor 120 analyses the point cloud (S240). The analysis may include, but not be limited to, one or more of the following: computation of a tree data structure, such as an octree; computation of normals; computation of a geometrical attribute; segmentation of the point cloud; and classification of each of the data points.
In some embodiments, the processor 120 may compute the octree and divide the point cloud into a plurality of cells based on the computed octree, so that each of the plurality of cells has the same size, regardless of the number of points included in each cell. In some embodiments, each cell may be in the form of a cube. Because the cells have the same size, spatial searches and subsequent processing can be expedited.
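The sketch below illustrates the equal-cell-size idea with a uniform cubic grid in Python; a production octree would be built recursively, so treat this as a simplified illustration rather than the actual octree computation.

    # Sketch: divide a point cloud into equal-size cubic cells. A full octree is
    # built recursively; this uniform voxel grid only illustrates cells of
    # identical size regardless of how many points fall in each.
    import numpy as np

    def voxelize(points: np.ndarray, cell_size: float) -> dict:
        """Group point indices by the cubic cell they fall into.

        points: (N, 3) array of x, y, z coordinates.
        Returns a mapping {(i, j, k): [point indices]}.
        """
        keys = np.floor((points - points.min(axis=0)) / cell_size).astype(int)
        cells: dict = {}
        for idx, key in enumerate(map(tuple, keys)):
            cells.setdefault(key, []).append(idx)
        return cells

    # Example: 1 m cells over a random cloud.
    cloud = np.random.rand(1000, 3) * 10.0
    cells = voxelize(cloud, cell_size=1.0)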
Then, the processor 120 may compute the normal for each of the data points, and create eigenvalues and/or eigenvectors for each of the plurality of cells based on the computed normal. The normal may be a vector perpendicular to the best-fitting plane for the set of the data points. In some embodiments, the eigenvalues and/or eigenvectors for each of the plurality of cells are those of a covariance matrix created from each of the data points' nearest neighbours.
Thereafter, the processor 120 may normalize the eigenvalues of each of the plurality of cells and compute at least one geometrical attribute for each of the plurality of cells. The at least one geometrical attribute may include, but not be limited to, linearity, planarity, scattering, omnivariance, anisotropy, eigenentropy, local curvature, normal values, intensity, elevation and delta elevation with regard to an Above Ground Elevation (AGE).
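A minimal sketch of how the normal and several of the eigenvalue-derived attributes listed above might be computed for one cell is given below. The covariance eigendecomposition follows the description; the attribute formulas are common formulations assumed for illustration and may differ from those actually used in the system.

    # Sketch: per-cell normal and eigenvalue-based geometric attributes.
    import numpy as np

    def cell_attributes(cell_points: np.ndarray) -> dict:
        cov = np.cov(cell_points.T)                 # 3x3 covariance matrix
        eigval, eigvec = np.linalg.eigh(cov)        # eigenvalues in ascending order
        normal = eigvec[:, 0]                       # direction of smallest eigenvalue
        e1, e2, e3 = eigval[::-1] / eigval.sum()    # normalized, descending
        return {
            "normal": normal,
            "linearity": (e1 - e2) / e1,
            "planarity": (e2 - e3) / e1,
            "scattering": e3 / e1,
            "omnivariance": (e1 * e2 * e3) ** (1.0 / 3.0),
            "anisotropy": (e1 - e3) / e1,
            "eigenentropy": -sum(e * np.log(e) for e in (e1, e2, e3) if e > 0),
            "local_curvature": eigval[0] / eigval.sum(),
        }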
The processor 120 may then segment the point cloud according to the geometrical attribute. It may be appreciated that at least one of the geometrical attributes outlined above may be used to segment the point cloud, in particular elevation, normal and/or intensity, homogeneity/variation and/or (eigen)value. The processor 120 then classifies each of the data points, based on the segmented point cloud according to the geometrical attribute. The data points in each cell are classified according to a predetermined set of classes, for example ASPRS (American Society for Photogrammetry and Remote Sensing) LiDAR classes. Thereafter, the data points may be meshed and used as level zero for the AGE estimation. Thus, each of the data points can have a class and the AGE elevation value. After each of the data points is classified, the processor 120 extracts at least one feature associated with the geographical information (S250).
The processor 120 receives a selection of a type of the geographical information to be generated from the user. The geographical information may include, but not be limited to, High Definition maps resulting from 3D feature extraction, classification and/or segmentation, road and pavement surface condition monitoring and/or maintenance, change detection in natural and/or built environments, or plant information such as tree health. Although not shown, the processor 120 can suggest various types of geographical information, and the user can select at least one type of geographical information that the user desires to generate. For example, the processor 120 may output, to a display, information with regard to the various types of geographical information so that the user can select at least one type of geographical information.
The processor 120 extracts at least one feature associated with the selected geographical information. The feature may include, but not be limited to, at least one of a geographical feature or an object feature. For example, if plant information is selected, a value of NDVI (Normalized Difference Vegetation Index) is computed. Based on the computed value of the NDVI, the features may be extracted and/or determined. Low values of NDVI, for example 0.1 and below, may indicate non-vegetated areas. Moderate values of NDVI, for example 0.2 to 0.3, may indicate shrub and grass. High values of NDVI, for example 0.6 to 0.8, may indicate trees and forests.
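A minimal sketch of the NDVI computation and of bucketing values with the indicative thresholds quoted above is shown below; the band array layout is an assumption.

    # Sketch: compute NDVI from near-infrared and red reflectance bands and
    # bucket the result using the indicative thresholds quoted above.
    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        return (nir - red) / (nir + red + 1e-9)    # small term avoids division by zero

    def vegetation_class(ndvi_value: float) -> str:
        if ndvi_value <= 0.1:
            return "non-vegetated"
        if 0.2 <= ndvi_value <= 0.3:
            return "shrub/grass"
        if 0.6 <= ndvi_value <= 0.8:
            return "trees/forest"
        return "unclassified"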
Thereafter, the processor 120 generates the geographical information using the extracted feature (S260).
In some embodiments, it may be appreciated that the imaging module 110 further comprises at least one 2D sensor 112. Therefore, a combination of the 3D sensor 111 and the 2D sensor 112 can be used. The 2D sensor is operable to generate 2D image data, for example 2D images. The 3D sensor 111 and the 2D sensor 112 may obtain the 3D image data and the 2D image data respectively, with respect to the same object or background. The processor 120 may receive or collect the 3D image data from the 3D sensor 111 and the 2D image data from the 2D sensor 112 in real-time or near real-time, via the communication network. In some embodiments, the processor 120 may match the 3D image data and the 2D image data based on the object or background contained in the 3D image data and the 2D image data. After receiving the 3D image data and the 2D image data from the 3D sensor 111 and the 2D sensor 112 respectively, the processor 120 performs a georeferencing of the 3D image data as well as the 2D image data, in real-time or near real-time.
The processor 120 then generates the point cloud using the georeferenced image data. Although not shown, the processor 120 may correct the point cloud radiometrically and/or geometrically. For the 2D image data, in the case of a multi-spectral imager, after the image data is first radiometrically corrected and then geometrically corrected, some indices such as NDVI, SR (Simple Ratio Vegetation Index) and PRI (Photochemical Reflectance Index) are extracted or generated from band ratios. The indices are used to analyse the point cloud. After each of the data points is classified, the processor 120 may merge and/or fuse the 2D image data and the 3D image data. The processor 120 can re-compute the at least one geometrical attribute for each of the plurality of cells and/or re-classify each of the data points, based on the merged image data. The processor 120 can select and perform either re-computation of the geometrical attribute or re-classification of each of the data points.
With regard to the re-computation of the geometrical attribute, the processor 120 can add other geometrical attributes, for example vegetation health, chlorophyll concentration, etc. Addition of the geometrical attributes to the data points is limited to first return points, as the 2D image data come from the 2D sensor 112, which is a passive sensor and does not have penetrating capability in terms of vision.
With regard to the re-classification of the data points, i.e. classification refinement, the 2D image data of the vegetation index may be used to reinforce and refine the vegetation delineation in the point cloud. The fusion of the 2D image data and the 3D image data may rely on photogrammetric and ray tracing principles.
In this manner, the processor 120 performs further processing and analysis of the enhanced point cloud by 2D and 3D metrics extraction such as Digital Surface Model (DSM), Digital Terrain Model (DTM), isolines, breaklines, points of interest, etc., in real-time or near real-time. The results may be stored in LAS or LAZ format for point cloud metrics and in GeoTIFF format for raster metrics. The point cloud, which includes the actual 3D image data, may be stored in a binary file format such as LAS or LAZ. This applies to both the raw data and the processed data. The processor 120 may produce three (3) types of data from the point cloud. The types of data are as follows: point cloud, for classification and segmentation; raster, for example 2D images, for the DSM, DTM, etc.; and/or vector, for isolines and breaklines.
Fig. 3 shows a block diagram in accordance with some embodiments of the present disclosure. Fig. 3 shows a synthetic layout of the relationship between a server 121 and a client 122 of the processor 120, for example an edge processor. The server 121 of the processor 120 directly interfaces with the 3D sensor 111, the 2D sensor 112 and/or the POS 130.
In Fig. 3, the processor 120 comprises one server 121 and one client 122. In other words, at least part of the processor 120 may be related to at least one process for the server 121 and at least part of the processor 120 may be related to at least one process for the client 122. It may be appreciated that if more processing power is required, additional clients, for example one or more GPUs, can be added to the processor 120. Although not shown, in some embodiments, an integrated device can include the processor 120, the 3D sensor 111, the 2D sensor 112 and/or the POS 130. The system 100 is able to perform an efficient parallelization of the different processes that take place in either a synchronous or asynchronous way. The system 100 also allows the processor 120 to be scaled up or down based on the system's 100 physical setup. In this manner, the processor 120 can adjust the data volume to be processed in real-time or near real-time.
The server 121 can directly interface with the 3D sensor 111 and the 2D sensor 112 via a Transmission Control Protocol/Internet Protocol (TCP/IP) network connection. A streamReader3D module 123a and a streamReader2D module 123b can retrieve 3D image data streams and 2D image data streams respectively. The 3D sensor 111 is programmatically accessible through a predetermined library which communicates with the system 100 via the TCP/IP connection. The library may be a documented, platform-independent software library for control of, and data retrieval from, the 3D sensor 111. The library is operable to configure and control the 3D sensor 111 and to retrieve and parse the measurements from the 3D sensor 111. It may be appreciated that the system 100 is compatible with any scanner (e.g. a V-Line scanner) as the 3D sensor 111, whether the scanner is an airborne, mobile or terrestrial scanner.
It is appreciable that the predetermined library should be compatible to the hardware of the sensor/scanner.
The server 121 can directly interface with the POS 130 via a User Datagram Protocol (UDP) network connection. A streamReaderPOS module 123c can retrieve position and/or orientation related data streams.
In some embodiments, each data packet of a data stream may have an individual timestamp, as shown in 310a, 320a and 330a of Fig. 4. The timestamp is used to cross-reference each data packet to perform a georegistration on the GPU of the server 121 using the dataGeorefHandler module 124. The dataGeorefHandler module 124 may be an implementation of photogrammetric and LiDAR equations applied to the 2D image data and 3D image data. Upon creation of raster metrics and point clouds, the raster metrics and point clouds are stored locally on a server-internal hard drive. The server 121 may send a series of messages to one or more clients 122 to inform them that raw data sets, for example the raster metrics and point clouds, are available for further processing.
After receiving the series of messages from the server 121, each client 122 may ask the server 121 to provide the required data to perform its own processing tasks via the metricXHandler modules 125a-125n. The required data may include, but not be limited to, specific metrics such as Digital Surface Models (DSM), Digital Terrain Models (DTM), etc.
Once the client 122 has successfully performed its own processing tasks, the client 122 may copy the results to the server-internal hard drive, in a folder that is accessible through a Samba server 126 (a file service on the LAN).
It may be appreciated that, in addition to the 3D image data streams, the 2D image data streams and the position and/or orientation related data streams, a system installation file including system installation parameters can be required by the dataGeorefHandler module 124. In some embodiments, the system installation parameters are unique to a system physical installation, as they concern the actual spatial distribution and orientation of the 3D sensor 111 and the 2D sensor 112 with respect to the POS 130 and to a GPS antenna.
In some embodiments, to simplify the whole configuration of the system 100, the various parameters can be entered into a predefined XML file that can be stored in the server 121. Therefore, only a simple text editor is required to configure the processor 120.
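The sketch below shows a hypothetical installation XML and how it might be read in Python; the element names, structure and values (lever arms and boresight angles between the sensors, the POS and the GNSS antenna) are illustrative assumptions only, not the actual configuration schema of the system.

    # Sketch: a hypothetical system installation XML and how the processor might
    # read it. Element names and values are assumptions for illustration only.
    import xml.etree.ElementTree as ET

    INSTALL_XML = """
    <installation>
      <sensor id="lidar">
        <leverArm x="0.12" y="-0.03" z="0.45"/>
        <boresight roll="0.05" pitch="-0.10" yaw="179.95"/>
      </sensor>
      <gnssAntenna>
        <leverArm x="0.00" y="0.00" z="0.80"/>
      </gnssAntenna>
    </installation>
    """

    root = ET.fromstring(INSTALL_XML)
    lever = root.find("./sensor[@id='lidar']/leverArm").attrib
    lever_arm = [float(lever[k]) for k in ("x", "y", "z")]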
Fig. 4 shows a flow diagram in accordance with some embodiments of the present disclosure. Details of elements 310a, 320a, 330a, 340a, 350a, 360a, 360b, 360c, 370a, 370b, 390a, 390b, 390c, and steps S320 to S390 will be described with reference to figure 4.
The streamReader module 123 is connected to the POS module 130, the 3D sensor(s) 111 and 2D sensor(s) 112, and to the system configuration module (system installation module) 140. The POS module 130 sends position and attitude information to the streamReader module 123 (330a). The 3D sensor 111 sends spatial coordinates of targets, referenced to the 3D imaging module 111, to the streamReader module 123; it may also send associated attribute information of a target, such as colour or reflectance. The 2D sensor 112 sends imaging information to the streamReader module 123 in the form of a 2D vector with a number of associated layers of information, where each layer may carry information such as Red, Green or Blue colour values, or reflectance measurements in various spectral bands. The system installation module 140 passes to the streamReader module 123 the information contained in an XML file. This information relates to the physical installation of the sensors in relation to each other and, e.g., to the vehicle on which the system is mounted. Data streams 310a, 320a and 330a have one field of information in common, the timestamp, which comes from the synchronization of the 3D sensor 111 and the 2D sensor 112 by the POS module's Pulse Per Second (PPS) signal. Streams 310a, 320a and 330a are synchronized by the streamReader module 123, as each stream is received at its own rate. Once synchronized and merged together by the streamReader module 123, the information is passed to the dataGeorefHandler module 124, where the actual point cloud (360a) is georeferenced for data coming out of the 3D sensor 111, and images are generated and georeferenced (360c) from data coming out of the 2D sensor 112. The dataGeorefHandler module 124 may also generate Total Propagation Uncertainty (360b) information for each point of the point cloud. The point cloud 360a and the associated raster images 360c are passed to module 125a for point cloud (370a) and image (370b) classification and segmentation respectively. Once classified, the two data sets are merged together at S380. Thereafter, the merged data is passed to module 125n for the final feature extraction process, which produces a set of vector files (390a) and raster files (390c). Ultimately, these files are pushed into a database to allow fast "search and find". Furthermore, the files constituting vector files 390a and 390b are put on a SAMBA server, which allows any user to see the collected and processed data on a remote hard drive and copy the data back to their own computer by a simple drag and drop. It is to be noted that, for simplicity, Fig. 4 shows only module 123 writing to a system log (350a); in practice, in each embodiment, modules S350, S360, S370, S380 and S390 all write outputs to the system log (350a).
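Because all three streams share a timestamp field, synchronization can be achieved by interpolating the lower-rate POS records onto each scanner timestamp. The sketch below only illustrates this synchronization step; it is not the streamReader module's actual implementation.

    # Sketch: synchronize streams via their common timestamp by linearly
    # interpolating POS records onto each 3D-sensor timestamp.
    import numpy as np

    def interpolate_pos(lidar_times, pos_times, pos_values):
        """Interpolate POS samples onto LiDAR timestamps.

        lidar_times: (N,) timestamps of 3D measurements
        pos_times:   (M,) timestamps of POS records (sorted)
        pos_values:  (M, K) POS fields (e.g. position, attitude)
        Returns an (N, K) array of interpolated POS fields.
        """
        out = np.empty((len(lidar_times), pos_values.shape[1]))
        for k in range(pos_values.shape[1]):
            # note: heading angles that wrap around 360 degrees need special handling
            out[:, k] = np.interp(lidar_times, pos_times, pos_values[:, k])
        return out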
The client 122 may gather the required data packets and dispatch the data packets to various sub-modules that take care of the direct georeferencing, via a dataGeorefHandler module 124 on the server 121, and of the metrics production through the metricXHandler module(s) 125a-125n at S370, S380 and S390 on the client 122.
The details of the operations of the processor 120 are as follows:
• StreamReader module 123
Although not shown, the streamReader module 123 may include, but not be limited to, a streamReader3D module 123a, a streamReader2D module 123b, a streamReaderPOS module 123c, and a sysConfigReader module 123d. The streamReader module 123 can interface with the imaging module (the 3D sensor 111 or the 2D sensor 112), for 3D and 2D sensors respectively, and with the POS 130 (S330). After the 3D image data is transmitted from the 3D sensor 111 to the streamReader3D module 123a, the 2D image data is transmitted from the 2D sensor 112 to the streamReader2D module 123b, and the position and/or orientation related data is transmitted from the POS 130 to the streamReaderPOS module 123c, the streamReader module 123 can receive or collect the 3D image data, the 2D image data, the position and/or orientation related data, and the system installation information respectively from the 3D sensor 111, the 2D sensor 112 and the POS 130 (S350).
• DataGeorefHandler module 124
A dataGeorefHandler module 124 may perform the actual georeferencing of the 3D image data and 2D image data (S360). By the georeferencing, coordinates in a sensor frame are transformed to coordinates in a mapping frame, for example ECEF coordinates. In some embodiments, to perform the transformation, a LiDAR equation and/or photogrammetric equations can be used. In some embodiments, the dataGeorefHandler module 124 can perform an automatic post-processing of a trajectory using the Applanix Cloud service or a PosPAC script (not shown).
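As a rough illustration of what such a transformation does, the sketch below rotates a sensor-frame measurement through a boresight matrix and the POS-reported attitude, applies a lever-arm offset and adds the POS position. The rotation conventions and names are assumptions for illustration, not the system's exact LiDAR equation.

    # Sketch of direct georeferencing: transform a point measured in the sensor
    # frame into the mapping frame using the POS position/attitude, a boresight
    # rotation and a lever-arm offset.
    import numpy as np

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def georeference(p_sensor, pos_mapping, roll, pitch, yaw,
                     boresight=np.eye(3), lever_arm=np.zeros(3)):
        """Map a sensor-frame point (3,) to the mapping frame (e.g. ECEF).

        pos_mapping is the POS position in the mapping frame; roll, pitch and
        yaw are the POS attitude angles in radians.
        """
        r_body_to_map = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
        p_body = boresight @ p_sensor + lever_arm
        return pos_mapping + r_body_to_map @ p_body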
Although not shown, another sub-module can be added to perform a strip alignment at the end of the 3D image data and the 2D image data. The strip alignment may also improve any inaccurate boresight angle estimation. In some embodiments, the dataGeorefHandler module 124 can perform the strip alignment.
It may be appreciated that the above coordinate transformation can be applied to the post-processing as well as real-time or near real-time processing.
• Metrics1Handler module 125a for classifier
This is the first stage of the data processing (S370), which occurs after the georeferencing. The Metrics1Handler module 125a can be a classifier for the 3D image data and the 2D image data.
For the 2D image data, in the case of a multi-spectral imager, after the image data is first radiometrically corrected and then geometrically corrected, some indices such as NDVI, SR and PRI are extracted or generated from band ratios. Most of the time, the 2D image data is processed using remote sensing image analytics to segment the 2D image data.
For the 3D image data, the intensity value of the point cloud may be corrected geometrically and calibrated radiometrically. Thereafter, an octree is computed so that each of the plurality of cells has the same size, regardless of the number of points included in each cell. Thereafter, a normal is computed for each of the data points, and eigenvalues and/or eigenvectors are created for each of the plurality of cells based on the computed normal. The normal may be a vector perpendicular to the best-fitting plane for the set of the data points. In some embodiments, the eigenvalues and/or eigenvectors for each of the plurality of cells are those of a covariance matrix created from each of the data points' nearest neighbours. Thereafter, the eigenvalues are normalized, and at least one geometrical attribute is computed for each of the plurality of cells. The geometrical attribute may include, but not be limited to, linearity, planarity, scattering, omnivariance, anisotropy, eigenentropy, local curvature, normal values, intensity, elevation and delta elevation with regard to an Above Ground Elevation (AGE).
Then, the point cloud is segmented according to the geometrical attribute. It may be appreciated that at least one of the geometrical features, elevation, normal and intensity, homogeneity/variation or (eigen)value can be used to segment the point cloud. The data points in each cell are classified according to a predetermined set of classes, for example the ASPRS LiDAR classes. Thereafter, the data points may be meshed and used as level zero for the AGE estimation. Thus, each of the data points can have a class and the AGE elevation value.
• Metrics2Handler module 125b for data fusion
The 2D image data and the 3D image data are then merged together (S380). The metrics2Handler module 125b can re-compute the at least one geometrical attribute for each of the plurality of cells and/or re-classify each of the data points, based on the merged image data. The metrics2Handler module 125b can select and perform either re-computation of the geometrical attribute or re-classification of each of the data points.
With regard to the re-computation of the geometrical attribute, the metrics2Handler module 125b can add other geometrical attributes, for example vegetation health, chlorophyll concentration, etc. Addition of the geometrical attributes to the data points is limited to first return points, as the 2D image data come from the 2D sensor 112, which is a passive sensor and does not have penetrating capability in terms of vision.
With regard to the re-classification of the data points, i.e. classification refinement, the 2D image data of the vegetation index is used to reinforce and refine the vegetation delineation in the point cloud. The fusion between the 2D image data and the 3D image data may rely on photogrammetric and ray tracing principles.
In some embodiments, if there is no 2D image data, this step of S380 can be omitted.
• MetricsNHandler module 125n for ad-hoc processing
The metricsNHandler module 125n is used to apply each of the data points having a class and the AGE elevation value to the generation of the geographical information (S390). In this manner, the metricsNHandler module 125n can generate the geographical information that the user desires.
It may be appreciated that the system 100 can generate the geographical information without the 2D image data. If there is no 2D image data, the step involving the 2D sensor 112 and the elements 320a, 360c and 370b can be omitted from Fig. 4.
Although real-time or near real-time data processing is described throughout the description, it may be appreciated that the processor 120 can also operate as a post-processor. In case the processor 120 is used as the post-processor, the 2D image data and/or 3D image data are received from one or more image files, for example a 2D image file and/or a 3D image file, instead of from the 2D sensor 112 and/or the 3D sensor 111.
In some embodiments, the 3D image file, for example a raw LiDAR file, may be an RXP file and accessible through a predetermined library. The POS file may be either a post-processed trajectory in Smooth Best Estimate Trajectory (SBET) format or in the 'rnav_Mission1.out' format generated from the Applanix PosPAC software. The latter file format can be used in case a real-time positioning service such as RTK (real-time base station corrections) or RTX (real-time satellite corrections) is used. The latter file may have the same format as the SBET file, but does not require post-processing to be applied to it. The 2D image file may depend on the type of 2D sensor 112. If the 2D sensor 112 is a push-broom sensor such as the micro-CASI, the raw 2D images may first be post-processed using the SBET trajectory, and then the resulting image may be fed to the processor 120. If a CMOS sensor is used as the 2D sensor 112, the 2D images with their time tags can be fed directly into the processor 120.
Fig. 5 shows a flow diagram regarding a generation of geographical information in accordance with some embodiments of the present disclosure.
After analysing the point cloud, the geographical information can be generated. The processor 120 receives a selection of a type of the geographical information to be generated from the user. Although not shown, the processor 120 can suggest various types of geographical information, and the user can select at least one type of geographical information that the user desires to generate.
The processor 120 extracts at least one feature associated with the selected geographical information. The feature may include, but not be limited to, at least one of a geographical feature or an object feature.
The geographical information may include, but not be limited to, High Definition maps resulting from 3D feature extraction, classification and segmentation, road and pavement surface condition monitoring and maintenance, change detection in natural and built environments, or plant information such as tree health. In some embodiments, the geographical information may include, but not be limited to, under-canopy mapping information or emergency response information.
• HD map
The processor 120 can generate an HD map as the geographical information as follows. The generating process may correspond to S260 in figure 2.
The processor 120 extracts at least one feature associated with the selected geographical information. The feature to be extracted may be related to road condition and may include, but not be limited to, road centreline, road marks, kerbs, traffic and light posts, signs, cables, etc. The extracting process may correspond to S250 in figure 2. In some embodiments, the processor 120 may be used with a dual scanner mobile mapping system. The dual scanner mobile mapping system may be composed of two LiDAR sensors, a controller unit, a camera trigger box, a Global Navigation Satellite System (GNSS) receiver, and a 360° spherical camera. In particular, the processor 120 may be used with an Applanix AP60, a VMX system (a Riegl MMS VMX-1HA that has two Riegl VUX-1HA scanners), and a Ladybug5 camera provided by FLIR. The processor 120 can use a 360° rotating LiDAR sensor as the 3D profiling sensor (3D sensor 111). In some embodiments, the processor 120 can further use the 2D profiling sensor (2D sensor 112). The profiling sensors can collect 2D slices. The 2D slices can be stacked to generate the 3D image data.
For example, for the road centreline extraction, the profiling sensors can be used to extract near-vertical 2D slices at the back of the vehicle, which allows the centreline extraction. To achieve the extraction, the point cloud may be analysed using a Piecewise Linear Time Series Segmentation approach, using a sliding window with calculation of a moving average, to find corners or breaks in the point cloud. This analysis process may correspond to S240 in figure 2. By applying semantic analysis to their relative positions with respect to the vehicle and/or to each other, the road edges and/or kerbs are identifiable and the centreline is deduced. To achieve the analysis of the point cloud, the point cloud may be generated using the image data. This generation process may correspond to S230 in figure 2.
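A minimal sketch of a sliding-window, moving-average break detector over a single slice profile is shown below; the window size and threshold are illustrative assumptions, and a complete Piecewise Linear Time Series Segmentation would fit line segments rather than merely flag deviations.

    # Sketch: flag corners/breaks (e.g. kerb edges) in a near-vertical 2D slice
    # by comparing each height sample with a sliding-window moving average.
    import numpy as np

    def find_breaks(heights: np.ndarray, window: int = 15, thresh: float = 0.08):
        """Return indices where the profile deviates sharply from its local mean."""
        kernel = np.ones(window) / window
        smooth = np.convolve(heights, kernel, mode="same")
        return np.where(np.abs(heights - smooth) > thresh)[0]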
In some embodiments, an analysis of an accumulation of the position and/or orientation related data generated by the POS 130, for example IMU, may help to detect the road centreline. In some embodiments, the detection of the centreline may provide a set of directional vectors that can be used as a guide to slide a sliding analytic window in the right direction, to reduce the processing time. In some embodiments, the accumulation can also be converted into polylines that can be thereafter saved as Shapefile vector files. The saved files can be re-inspected later.
In some countries, the kerbs follow the same pattern, for example alternating black and grey (natural colour of the stone or concrete) strips. Therefore, the extraction of the kerbs is based on geometrical properties as well as physical attributes, for example colour and reflectance. The extraction may be done after the centreline detection, once each side of the road has been extracted. Once the edge of the road has been found, the reflectance may be analysed to find the limits of the kerbs. Road marks may be extracted based on the intensity attribute. The road point cloud may be extracted from the overall point cloud using the kerb limit information. From this sub-point cloud, i.e. the extracted point cloud, an intensity threshold may be applied and the remaining points may be clustered using a Euclidean distance filter. Thereafter, each cluster is analysed for its shape and label.
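A minimal sketch of this road-mark step is given below: the road sub-cloud is thresholded by intensity and the surviving points are grouped by Euclidean distance. DBSCAN is used here as a stand-in for the Euclidean distance filter, and the threshold and radius values are assumptions.

    # Sketch: extract candidate road marks by intensity thresholding followed by
    # Euclidean-distance clustering of the surviving points.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def road_mark_clusters(points_xyz, intensity, intensity_thresh=0.7, radius=0.15):
        """Return a list of point-index arrays, one per candidate road mark."""
        keep = intensity > intensity_thresh
        candidates = points_xyz[keep]
        labels = DBSCAN(eps=radius, min_samples=10).fit_predict(candidates)
        kept_idx = np.flatnonzero(keep)
        return [kept_idx[labels == c] for c in set(labels) if c != -1]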
It may be appreciated that an extracted cylindrical-based feature may be either a manmade object, such as a light post, traffic light, etc., or a tree. A slice of data is extracted between 1 and 1.3 metres AGE. This slice of data is then divided into smaller parts to reduce processing time. The slice of data is clustered using a Euclidean distance segmentation algorithm. For each cluster, a cylinder is fitted to the clustered slice of data, and a fitting score is outputted. The fitting score is used to assess the type of shape. In some embodiments, the fitting score may be an actual residual normal, which may be an actual RMSE (Root Mean Square Error) of the orthogonal distance between the best-fitting planes estimated from the points and the cylinder.
If the fitting score is high, then the cluster shows a cylindrical form, which means either a tree or some kind of manmade pole. If the fitting score is low, then the cluster is not of cylindrical shape and can be put aside for further processing. For each fitting, the processor 120 can then extract further slices of data points every one or two metres upward (until no more points are available) and perform the cylinder fitting. If the subsequent fittings show high scores and similar cylinder properties, all centred around the same X, Y geographical centre, as well as a consistent number of points within each cluster, then the processor 120 assumes that this feature is a manmade feature. Further processing is then performed by extracting a buffer area around the centroid of the lowest cluster.
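A minimal sketch of the per-cluster cylinder test is given below: a vertical axis is fitted through the cluster centroid and the RMSE of the radial residuals is used as the score (a low residual corresponds to the "high fitting score" described above). A real implementation would fit an arbitrarily oriented cylinder, and the threshold is an assumption.

    # Sketch: score how cylindrical a clustered slice of points is by fitting a
    # vertical axis through the centroid and taking the RMSE of radial residuals.
    import numpy as np

    def cylinder_fit(cluster_xyz: np.ndarray):
        """Return (centre_xy, radius, rmse) for a vertical-cylinder fit."""
        centre = cluster_xyz[:, :2].mean(axis=0)
        radial = np.linalg.norm(cluster_xyz[:, :2] - centre, axis=1)
        radius = radial.mean()
        rmse = np.sqrt(np.mean((radial - radius) ** 2))
        return centre, radius, rmse

    def looks_cylindrical(cluster_xyz, rmse_limit=0.03):
        _centre, _radius, rmse = cylinder_fit(cluster_xyz)
        return rmse < rmse_limit   # low residual = good cylinder fit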
The slice sequence’s attributes may then be used to establish the type of object (based on height, diameter, etc.) using a machine learning approach.
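The description leaves the machine learning approach open; as one non-limiting possibility, a random forest classifier could be trained on slice-sequence attributes. The feature layout, training rows and labels below are invented purely for illustration:

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: [object height (m), mean diameter (m), diameter std, point count]
X_train = [[9.0, 0.20, 0.01, 4200],    # light post
           [4.5, 0.15, 0.01, 1800],    # traffic light
           [12.0, 0.45, 0.12, 9500]]   # tree
y_train = ["light_post", "traffic_light", "tree"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.predict([[8.7, 0.21, 0.02, 4000]]))   # predicted object type
```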
In the case of a tree, a Quantitative Structure Model (QSM) algorithm is applied to the buffered point cloud. This algorithm may produce a watertight mesh from the trunk and branch network. From this mesh, the processor 120 may evaluate the bounding box and adjust the size of the buffer zone. For this new buffer zone, the processor 120 can evaluate the height of the tree above ground level and its crown diameter by fitting a circle or ellipsoid.
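A minimal sketch of the height and crown-diameter estimation is given below. The QSM mesh itself would come from an existing QSM implementation and is not reproduced here; taking the crown as the upper half of the buffered points, and reusing the XY circle fit from the cylinder sketch above, are assumptions of this example:

```python
def tree_height_and_crown(buffer_cloud, ground_z, circle_fit, crown_frac=0.5):
    """Estimate tree height above ground level and crown diameter.
    circle_fit is the XY circle-fitting routine from the cylinder sketch."""
    z = buffer_cloud[:, 2]
    height = z.max() - ground_z
    crown = buffer_cloud[z > ground_z + crown_frac * height]   # upper part = crown
    _, _, radius, _ = circle_fit(crown)                        # fit circle to crown XY
    return height, 2.0 * radius
```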
In some embodiments, based on the fitting score, for example the residual normal, the fitted primitive may be selected. The fitted primitive may include, but is not limited to, a cylinder, circle and/or ellipsoid. In some embodiments, a library which performs the fitting may offer multiple types of primitive for the fitting and output the fitting score.
The processor 120 may then evaluate the volume of the tree (trunk and branch network) using the mesh produced by the QSM algorithm. From the volume and tree species information, for example from a third-party source, the processor 120 may evaluate the weight and/or carbon content of the tree.
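By way of illustration only, the weight and carbon evaluation can follow a simple volume-to-biomass-to-carbon chain. The wood densities and the 0.5 carbon fraction below are placeholder figures, and the species table would in practice come from the third-party source mentioned above:

```python
# Placeholder wood densities (kg/m^3); carbon is commonly taken as
# roughly half of the dry biomass. All values here are illustrative.
WOOD_DENSITY = {"Samanea saman": 560.0, "Khaya senegalensis": 650.0}
CARBON_FRACTION = 0.5

def tree_carbon_kg(mesh_volume_m3, species):
    dry_biomass = mesh_volume_m3 * WOOD_DENSITY[species]   # kg of dry wood
    return dry_biomass * CARBON_FRACTION                   # kg of carbon

print(round(tree_carbon_kg(1.8, "Samanea saman"), 1))       # -> 504.0
```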
Every object is then saved in a Geographical Information System database where it can be displayed and analysed further.
The real-time or near real-time results are stored with appropriate timestamps in the IMU/BODY reference frame. This may allow the final positions of the features to be refined: if the user wishes for a more accurate position, he may post-process the trajectory to obtain an SBET file and apply it to the points of interest.
· Tree Health (as shown in Fig. 5)
The processor 120 can generate tree health information as the geographical information as follows. This generation process may correspond to S260 in figure 2.
The processor 120 can perform a mapping of trees along roadsides, parks and housing block areas and assess their health. The geographical information may be used by an arborist to determine at least one of the following: tree species based on images, tree girth measured at a predetermined height from the ground, estimated tree height and crown size, the location of all trees mapped, or a point feature indicating the centroid of the trunk at a predetermined height from the ground. For this application, two types of kinematic system integration may be used. The first system may be mounted on a vehicle such as a car, and the other system may be carried in a backpack. In some embodiments, the processor 120 may be used with the Riegl VMX-1HA MMS. The Riegl VMX-1HA MMS may be composed of two Riegl VUX-1HA scanners, a controller unit and a camera trigger box, an Applanix AP60, a 360° spherical Ladybug5 camera from FLIR, and one or two microCASI-1920 units from Itres. In other embodiments, the second system may be composed of the processor 120 (with a smaller footprint), a Riegl miniVUX-1UAV with an APX20 from Applanix, and the Theta V, a 360° spherical camera from Ricoh, together with a microCASI-1920 from Itres.
The processing workflow may be identical to that of the HD map as described above. This means obtaining the 3D and 2D data from the 3D and 2D imagers, which are then georeferenced using trajectory information (S501 and S505). A point cloud can undergo some geometric (S503) and radiometric (S504) corrections. It is to be appreciated that the point cloud may be generated in the same manner as in figures 3 and 4. S503 and S504 may correspond to S240 in figure 2. However, for this application an additional 2D sensor 112, for example a hyperspectral imager, may assist in identifying vegetation in the scene through the analysis of hyperspectral vegetation indexes such as NDVI, SRI, etc. (S511).
First, the LiDAR data, as 3D image data, may be classified using the methodology presented for the HD map as described above. Thereafter, the 2D image data from the hyperspectral imager may be radiometrically corrected using Itres' RCX library (S507), and then be geometrically projected using the trajectory information (S508). Finally, the calibrated and georectified image data (S510), for example calibrated and georectified 2D image data, may undergo an atmospheric correction (S509). However, in this application, the distances between the sensor and the objects are minimal since the trees are on the ground, thus no atmospheric correction may need to be applied.
Subsequently, a value of the NDVI is computed (S511). Based on the computed value of the NDVI, the features may be extracted and/or determined. This extraction and/or determination process may correspond to S250 in figure 2. Low values of NDVI, for example 0.1 and below, may indicate non-vegetated areas. Moderate values of NDVI, for example 0.2 to 0.3, may indicate shrub and grass. High values of NDVI, for example 0.6 to 0.8, may indicate trees and forests. The reference values can be determined by the user and/or the processor 120.
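A minimal sketch of the NDVI computation and of the thresholding against the reference values listed above (the band arrays are assumed to be co-registered reflectance images; the exact reference values remain user- or processor-defined):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def classify_ndvi(value):
    """Thresholds follow the reference values given in the description."""
    if value <= 0.1:
        return "non-vegetated"
    if 0.2 <= value <= 0.3:
        return "shrub/grass"
    if 0.6 <= value <= 0.8:
        return "tree/forest"
    return "unclassified"
```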
At this point, the 3D and 2D images (S510, S511, S506 and S504) may be fused together based on their timestamp information. The fusion process may comprise point cloud colorization (S521), classification (S522) based on geometrical, colour and intensity attributes, and finally segmentation (S523). In the case of tree health, trees may be extracted from the point cloud, generating a tree mesh (S524) for volume estimation using a QSM algorithm. From the mesh volume estimation (S525), the carbon content can be estimated using tree species information provided by a local arborist and the tree volume (S526). Finally, tree health indices can be estimated using the vegetation index from S511.
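One simple, non-limiting way to fuse a georectified 2D product with the point cloud is a per-point raster lookup, shown below for the NDVI layer; a north-up raster with square pixels is assumed, and the camera-based colorization would follow the same pattern with RGB bands:

```python
import numpy as np

def attach_ndvi_to_points(points_xyz, ndvi_raster, origin_xy, pixel_size):
    """Attach a per-point NDVI attribute by looking up each point's X, Y
    in a georectified NDVI raster (origin_xy is the upper-left corner)."""
    cols = ((points_xyz[:, 0] - origin_xy[0]) / pixel_size).astype(int)
    rows = ((origin_xy[1] - points_xyz[:, 1]) / pixel_size).astype(int)
    rows = np.clip(rows, 0, ndvi_raster.shape[0] - 1)
    cols = np.clip(cols, 0, ndvi_raster.shape[1] - 1)
    return ndvi_raster[rows, cols]          # one NDVI value per 3D point
```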
It may be appreciated by the person skilled in the art that variations and combinations of features described above, not being alternatives or substitutes, may be combined to form yet further embodiments falling within the intended scope of the invention.

Claims

1. A system for facilitating generation of geographical information comprising: an imaging module operable to obtain image data; and a processor operable to receive the image data and generate a point cloud having a plurality of data points using the image data, wherein the processor is further operable to analyse the point cloud to extract at least one feature associated with the geographical information and generate the geographical information using the extracted feature.
2. The system according to claim 1, wherein the imaging module comprises at least one 3D sensor operable to generate 3D image data as the image data.
3. The system according to claim 1 or 2, wherein the generation of the point cloud includes georeferencing of the image data.
4. The system according to any one of claims 1 to 3, further comprising a position and orientation system (POS) operable to send position and/or orientation related data to the processor.
5. The system according to claim 4, wherein the processor is operable to parse the sent data and use the parsed information to perform the georeferencing of the image data.
6. The system according to any one of claims 1 to 5, wherein the processor is operable to correct the point cloud radiometrically and/or geometrically.
7. The system according to any one of claims 1 to 6, wherein the processor is operable to compute an octree, and divide the point cloud into a plurality of cells based on the computed octree so that each of the plurality of cells has the same size.
8. The system according to claim 7, wherein the processor is operable to compute a normal for each of the plurality of data points and create an eigenvalue for each of the plurality of cells based on the computed normal.
9. The system according to claim 8, wherein the processor is operable to compute at least one geometrical attribute for each of the plurality of cells by normalizing the eigenvalues of each of the plurality of cells.
10. The system according to claim 9, wherein the processor is operable to segment the point cloud according to the geometrical attribute.
11. The system according to claim 10, wherein the processor is operable to classify each of the plurality of data points, based on the segmented point cloud according to the geometrical attribute.
12. The system according to any one of claims 1 to 11, wherein the imaging module further comprises at least one 2D sensor operable to generate 2D image data.
13. The system according to claim 12, wherein the processor is operable to merge the 2D image data and the 3D image data.
14. The system according to any one of claims 9 to 11, wherein the processor is operable to re-compute the at least one geometrical attribute for each of the plurality of cells and/or re-classify each of the plurality of data points, based on the merged image data.
15. The system according to any one of claims 1 to 14, wherein the processor is operable to receive a selection of a type of the geographical information to be generated and extract the at least one feature associated with the geographical information to be generated.
16. The system according to any one of claims 1 to 15, wherein the geographical information includes at least one of a high definition map, plant information or public infrastructure information.
17. The system according to any one of claims 1 to 16, wherein the feature includes at least one of geographical feature or object feature.
18. The system according to any one of claims 1 to 17, wherein the imaging module is operable to provide the image data which was obtained previously to the processor.
19. A method for facilitating generation of geographical information comprising: obtaining image data at an imaging module; receiving, at a processor, the image data from the imaging module; generating, at the processor, a point cloud having a plurality of data points using the image data; analysing, at the processor, the point cloud; extracting, at the processor, at least one feature associated with the geographical information; and generating, at the processor, the geographical information using the extracted feature.
AU2019352559A 2018-10-04 2019-09-18 System and method for facilitating generation of geographical information Abandoned AU2019352559A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SG10201808791P 2018-10-04
SG10201808791P 2018-10-04
PCT/SG2019/050466 WO2020072001A1 (en) 2018-10-04 2019-09-18 System and method for facilitating generation of geographical information

Publications (1)

Publication Number Publication Date
AU2019352559A1 true AU2019352559A1 (en) 2020-12-17

Family

ID=70055977

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2019352559A Abandoned AU2019352559A1 (en) 2018-10-04 2019-09-18 System and method for facilitating generation of geographical information

Country Status (8)

Country Link
JP (1) JP2022511147A (en)
KR (1) KR20210067979A (en)
CN (1) CN113168714A (en)
AU (1) AU2019352559A1 (en)
GB (1) GB2589024A (en)
SG (1) SG11202009873RA (en)
TW (1) TW202022808A (en)
WO (1) WO2020072001A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114817426A (en) * 2021-01-28 2022-07-29 中强光电股份有限公司 Map construction device and method
CN113504543B (en) * 2021-06-16 2022-11-01 国网山西省电力公司电力科学研究院 UAV LiDAR system positioning and attitude determination system and method
CN113850823B (en) * 2021-09-18 2024-09-27 中北大学 Tree Extraction Method Based on Automatic Segmentation of Different Features
KR102792397B1 (en) * 2021-12-13 2025-04-04 충북대학교 산학협력단 Portable soil field survey system for using and method of survey
CN117710590A (en) * 2022-09-06 2024-03-15 北京图森智途科技有限公司 Parameterization and map construction method for point cloud data

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100208981A1 (en) * 2009-02-13 2010-08-19 Harris Corporation Method for visualization of point cloud data based on scene content
US20130300740A1 (en) * 2010-09-13 2013-11-14 Alt Software (Us) Llc System and Method for Displaying Data Having Spatial Coordinates
WO2012034236A1 (en) * 2010-09-16 2012-03-22 Ambercore Software Inc. System and method for detailed automated feature extraction from data having spatial coordinates
US10151839B2 (en) * 2012-06-01 2018-12-11 Agerpoint, Inc. Systems and methods for determining crop yields with high resolution geo-referenced sensors
AU2014254426B2 (en) * 2013-01-29 2018-05-10 Andrew Robert Korb Methods for analyzing and compressing multiple images
JP6080642B2 (en) * 2013-03-25 2017-02-15 株式会社ジオ技術研究所 3D point cloud analysis method
CN103745441A (en) * 2014-01-08 2014-04-23 河海大学 Method of filtering airborne LiDAR (Light Detection and Ranging) point cloud
US9803985B2 (en) * 2014-12-26 2017-10-31 Here Global B.V. Selecting feature geometries for localization of a device
JP6674822B2 (en) * 2015-04-01 2020-04-01 Terra Drone株式会社 Photographing method of point cloud data generation image and point cloud data generation method using the image
US9830706B2 (en) * 2015-09-17 2017-11-28 Skycatch, Inc. Generating georeference information for aerial images
CN106688017B (en) * 2016-11-28 2019-03-01 深圳市大疆创新科技有限公司 Generate method, computer system and the device of point cloud map
KR102647351B1 (en) * 2017-01-26 2024-03-13 삼성전자주식회사 Modeling method and modeling apparatus using 3d point cloud

Also Published As

Publication number Publication date
JP2022511147A (en) 2022-01-31
SG11202009873RA (en) 2020-11-27
TW202022808A (en) 2020-06-16
KR20210067979A (en) 2021-06-08
GB202019895D0 (en) 2021-01-27
WO2020072001A1 (en) 2020-04-09
GB2589024A (en) 2021-05-19
CN113168714A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
Torres-Sánchez et al. Assessing UAV-collected image overlap influence on computation time and digital surface model accuracy in olive orchards
AU2019352559A1 (en) System and method for facilitating generation of geographical information
Weiser et al. Individual tree point clouds and tree measurements from multi-platform laser scanning in German forests
Branson et al. From Google Maps to a fine-grained catalog of street trees
US20200401617A1 (en) Visual positioning system
US11625851B2 (en) Geographic object detection apparatus and geographic object detection method
Leitloff et al. An operational system for estimating road traffic information from aerial images
Apostol et al. Species discrimination and individual tree detection for predicting main dendrometric characteristics in mixed temperate forests by use of airborne laser scanning and ultra-high-resolution imagery
Vo et al. Processing of extremely high resolution LiDAR and RGB data: Outcome of the 2015 IEEE GRSS data fusion contest—Part B: 3-D contest
CN110059608A (en) A kind of object detecting method, device, electronic equipment and storage medium
JP2019527832A (en) System and method for accurate localization and mapping
JP5542530B2 (en) Sampling position determination device
Li et al. Toward automated power line corridor monitoring using advanced aircraft control and multisource feature fusion
Li et al. Ultrahigh-resolution boreal forest canopy mapping: Combining UAV imagery and photogrammetric point clouds in a deep-learning-based approach
US20220366605A1 (en) Accurate geolocation in remote-sensing imaging
CN118602997A (en) A UAV multi-dimensional space area measurement system
CN112445241A (en) Ground surface vegetation identification method and system based on unmanned aerial vehicle remote sensing technology and readable storage medium
Kuzmin et al. Automatic segment-level tree species recognition using high resolution aerial winter imagery
Ramli et al. Homogeneous tree height derivation from tree crown delineation using Seeded Region Growing (SRG) segmentation
CN109492606A (en) Multispectral vector picture capturing method and system, three dimensional monolithic method and system
Demir Using UAVs for detection of trees from digital surface models
Jarahizadeh et al. Advancing tree detection in forest environments: A deep learning object detector approach with UAV LiDAR data
Pahlavani et al. 3D reconstruction of buildings from LiDAR data considering various types of roof structures
Rangkuti et al. Optimization of Vehicle Object Detection Based on UAV Dataset: CNN Model and Darknet Algorithm
Weinmann et al. Point cloud analysis for uav-borne laser scanning with horizontally and vertically oriented line scanners–concept and first results

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application