
CN114646991B - A navigation enhancement network system based on "body reference" and its construction method - Google Patents

A navigation enhancement network system based on "body reference" and its construction method

Info

Publication number
CN114646991B
CN114646991B (application CN202210248804.8A)
Authority
CN
China
Prior art keywords
information
terminal
reference station
target
enhancement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210248804.8A
Other languages
Chinese (zh)
Other versions
CN114646991A (en)
Inventor
吴海涛
李亚平
郭笑尘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS filed Critical Aerospace Information Research Institute of CAS
Priority to CN202210248804.8A priority Critical patent/CN114646991B/en
Publication of CN114646991A publication Critical patent/CN114646991A/en
Application granted granted Critical
Publication of CN114646991B publication Critical patent/CN114646991B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/13Receivers
    • G01S19/35Constructional details or hardware or software details of the signal processing chain
    • G01S19/37Hardware or software details of the signal processing chain
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/38Services specially adapted for particular environments, situations or purposes for collecting sensor information

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a navigation enhancement network system based on a "body reference" and a construction method thereof, comprising a reference station determining system, a reference station mode position resolving system, a reference station network data processing system, an information transmission system and a terminal enhancement information processing system. The network system constructs a "reference body" network from 5G, vision and GNSS multi-source fusion information and provides a "body reference" enhancement service to multi-source fusion terminals, thereby improving the positioning and recognition accuracy of three-dimensional measurement terminals and panoramic terminals. Whereas a traditional navigation enhancement system can only provide differential correction information based on point positions, this network system provides enhancement information based on "body" information; in addition to position enhancement, it also improves recognition accuracy. Meanwhile, exploiting the rapid transmission of 5G information, it serves application terminals supporting Beidou, INS and vision functions, realizes the estimation of unknown point positions from known points in any scene, and solves the indoor-outdoor seamless positioning problem.

Description

Navigation enhancement network system based on "body reference" and construction method thereof
Technical Field
The invention relates to the technical field of navigation and positioning, and in particular to a navigation enhancement network system based on a "body reference" and a construction method thereof.
Background
Satellite navigation and positioning form a basic space-time infrastructure that provides people with accurate position and time; the coordinates obtained by GNSS are point coordinates (longitude, latitude, height). Surveying and mapping technology is changing: holographic mapping is replacing measurement on the original basis, and live-action three-dimensional maps are expected to replace existing two-dimensional and image maps as the mainstream form of basic geographic information. The positioning concept must also adapt to this technical development. Abstracting an object to be measured as a single point cannot meet the requirements of scene-level, entity-oriented and refined management, so a geographic entity should be identified by a set of space-time point clouds instead of a single particle position. There is a clear trend from point coordinates to body coordinates, but no reasonable solution has yet been proposed for it.
Disclosure of Invention
The invention aims to provide a navigation enhancement network system based on a "body reference" and a construction method thereof, so as to solve the problems described in the background art.
In order to achieve the above purpose, the present invention provides the following technical solution: a navigation enhancement network system based on a "body reference", comprising: a reference station determining system, a reference station mode position resolving system, a reference station network data processing system, an information transmission system and a terminal enhancement information processing system;
The reference station determining system selects static ground-object points as reference points on a remote sensing image by visual interpretation, and extracts and segments the target object to obtain the coordinate information of the ground-object contour;
The reference station mode position resolving system recognizes targets in the scene and, through twin-scene and three-dimensional reconstruction technology, obtains the three-dimensional coordinate sequence of the reference station together with ground-object attribute information;
The reference station network data processing system aggregates the reference station network data provided by the reference station mode position resolving system within a certain area, extracts the 5G, vision and GNSS multi-source fusion information of each reference station to obtain a feature matrix and precise coordinate position information, and builds an enhancement station database in preparation for the enhancement information service of the terminal enhancement information processing system (a sketch of such a station record follows this overview);
The information transmission system uses 5G transmission to send the information of the surrounding reference points to the terminal enhancement information processing system;
The terminal enhancement information processing system is distributed across the terminals; it obtains the precise position of its terminal from sensors such as GNSS and INS, requests the information of surrounding reference stations from the reference station network data processing system, and enhances the positioning function of the terminal after obtaining the surrounding visual reference station and GNSS high-precision differential information.
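The following minimal sketch (Python) shows how one enhancement-station record combining the feature matrix, the precise coordinates and the "mode position" point set could be organized in the enhancement station database; the field names and the query helper are illustrative assumptions, not taken from the invention.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

import numpy as np


@dataclass
class EnhancementStationRecord:
    """One "body reference" station as it could be stored in the enhancement station database.

    Field names are illustrative; the description only requires that a feature matrix,
    a precise coordinate position and the mode-position point set are kept per station.
    """
    station_id: str
    # Precise position of the station (e.g. antenna phase centre from GNSS).
    position_xyz: Tuple[float, float, float]
    # "Mode position": a set of particles p1..pn with spatial topology, one row per particle.
    mode_position: np.ndarray        # shape (n, 3)
    # Feature matrix describing the reference ground object (e.g. visual descriptors).
    feature_matrix: np.ndarray       # shape (k, d)
    # Ground-object attribute / semantic labels attached to the body reference.
    attributes: List[str] = field(default_factory=list)


def nearby_stations(db: List[EnhancementStationRecord],
                    query_xyz: Tuple[float, float, float],
                    radius_m: float) -> List[EnhancementStationRecord]:
    """Return the stations within radius_m of a coarse terminal fix (illustrative logic)."""
    q = np.asarray(query_xyz)
    return [s for s in db if np.linalg.norm(np.asarray(s.position_xyz) - q) <= radius_m]
```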
According to another aspect of the present invention, a method for constructing a navigation enhancement network system based on "body reference" is provided, including the following steps:
Step 1: on a remote sensing image, static ground-object points are selected as reference points by visual interpretation, and the target object is extracted and segmented to obtain the coordinate information of the ground-object contour;
Step 2: the reference station mode position resolving system identifies targets in the scene and, through twin-scene and three-dimensional reconstruction technology, obtains the three-dimensional coordinate sequence of the reference station and ground-object attribute information, the ground-object attribute information being expressed as a feature matrix;
Step 3: when the reference station network data of the preset area are uniformly aggregated into the reference station network data processing system, the system builds an enhancement station database from the extracted feature matrix and precise coordinate position of each reference station, in preparation for the enhancement information service of the terminal;
Step 4: 5G transmission is used to send the information of the surrounding reference points to the surrounding terminals that need the service;
Step 5: the terminal obtains its own precise position from its GNSS sensor, requests the information of surrounding reference stations from the reference station network data processing module, and enhances its positioning function after obtaining the surrounding visual reference station and GNSS high-precision differential information.
Further, the "body reference" refers to "points" of the ground in a two-dimensional map and are displayed in a "body" manner in a live-action three-dimensional manner based on the theory of "object-oriented".
Further, the mode position refers to the antenna phase center position obtained by Beidou positioning, specifically the particle position P(X, Y, Z); a set of points with spatial topological relations obtained by "Beidou + intelligent recognition" fusion positioning is then called the mode position, specifically the particle position set P{p1, p2 … pn}, where pn is the nth particle.
Further, a static target is obtained as the "reference station" by remote sensing pattern recognition technology; GNSS provides the precise space-time reference to obtain the position of the reference station, and remote sensing obtains the feature points of the reference ground object, the feature points being expressed as a feature matrix.
Further, the reference station network is constructed in a uniformly distributed manner in a certain area, the reference station is characterized by a 'body reference', and enhancement information based on the 'body reference' is taken as a basic constituent unit of the information.
Further, the information transmission module uses a 5G transmission network: the data center provides the multi-source fusion terminal with the mode positions P{p1, p2 …} of the surrounding "body references", and the terminal computes its own position or the positions of surrounding ground objects from GNSS positioning and the in-scene reference objects obtained by its own sensors. Unlike the point-position differential correction information provided by a traditional navigation enhancement system, the "body reference" provides enhancement information based on "body" information and, in addition to enhancing the position information, also improves the recognition accuracy.
Further, the terminal enhancement information processing system runs on the terminal; it obtains the reference station network data processing system information through the 5G transmission network and works together with the terminal's own sensor information, thereby realizing the function of estimating the position of an unknown point from known points and solving the indoor-outdoor seamless positioning problem, as sketched below.
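As a hedged illustration of this "known point → unknown point" function: if the terminal recognizes a body-reference station whose world coordinates are served over 5G, and measures the station's position relative to itself in its own frame, its own world position follows from one rigid transform. The helper below is a minimal sketch under those assumptions (terminal attitude from INS, station position measured by vision plus depth); it is not the patent's algorithm.

```python
import numpy as np


def terminal_position_from_reference(ref_world_xyz: np.ndarray,
                                     ref_in_terminal_frame: np.ndarray,
                                     R_terminal_to_world: np.ndarray) -> np.ndarray:
    """Estimate the terminal's world position from one known reference station.

    ref_world_xyz         : (3,) known world coordinates of the body-reference station
    ref_in_terminal_frame : (3,) station position measured by the terminal's own sensors,
                            expressed in the terminal frame (assumption)
    R_terminal_to_world   : (3, 3) terminal attitude, e.g. from INS (assumption)

    Since ref_world = R @ ref_in_terminal + t_terminal, the terminal position is
    t_terminal = ref_world - R @ ref_in_terminal.
    """
    return ref_world_xyz - R_terminal_to_world @ ref_in_terminal_frame


def fused_terminal_position(refs_world: np.ndarray,       # (m, 3) known station coordinates
                            refs_in_terminal: np.ndarray,  # (m, 3) measured relative positions
                            R_terminal_to_world: np.ndarray) -> np.ndarray:
    """With several visible reference stations, average the per-station estimates (illustrative)."""
    estimates = refs_world - refs_in_terminal @ R_terminal_to_world.T
    return estimates.mean(axis=0)
```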
Further, in the reference station mode position resolving system of step S2, the mode position calculation includes the following steps:
Step S2.1: Instance segmentation
ResNet-101 + FPN is used as the backbone, and instance segmentation is realized through two branches: a prototype generation branch and a mask coefficient branch. The prototype generation branch (Protonet) generates mask prototypes and is mainly implemented as an FCN. The mask coefficient branch predicts the mask coefficients through an anchor-based target detector. The two branches are computed independently, the masks are synthesized by matrix multiplication followed by a Sigmoid function, and the segmentation result is finally output;
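The prototype/coefficient combination described above (as in YOLACT-style instance segmentation) reduces to a matrix product followed by a sigmoid. The snippet below is a schematic of that assembly step only, not of the full ResNet-101 + FPN network; tensor shapes are illustrative.

```python
import numpy as np


def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))


def assemble_masks(prototypes: np.ndarray, coefficients: np.ndarray) -> np.ndarray:
    """Combine the two branches of a YOLACT-style segmentation head.

    prototypes   : (H, W, k) mask prototypes from the Protonet (FCN) branch
    coefficients : (n, k) per-detection mask coefficients from the anchor-based detector
    returns      : (n, H, W) instance masks in [0, 1]
    """
    h, w, k = prototypes.shape
    # Linear combination of prototypes, then sigmoid, as described in the text.
    flat = prototypes.reshape(-1, k) @ coefficients.T   # (H*W, n)
    return sigmoid(flat).T.reshape(-1, h, w)            # (n, H, W)


# Shape check with random tensors.
masks = assemble_masks(np.random.randn(96, 128, 32), np.random.randn(5, 32))
assert masks.shape == (5, 96, 128)
```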
Step S2.2>: target depth acquisition
And acquiring depth information by adopting a binocular camera. Assuming a target at coordinates (i, j) whose depth value Z (i, j); estimating the depth value at each pixel position by adopting a neighborhood window; s is the size of a neighborhood window; Is a sign function;
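A minimal sketch of the two operations this step relies on: converting binocular disparity to depth (Z = f·b/d for focal length f in pixels and baseline b) and estimating Z(i, j) over an S×S neighborhood window while ignoring invalid samples. The exact weighting used by the invention is not reproduced here; this is an assumption-labelled illustration.

```python
import numpy as np


def disparity_to_depth(disparity: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Depth from binocular disparity: Z = f * b / d (0 where the disparity is invalid)."""
    depth = np.zeros_like(disparity, dtype=float)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth


def window_depth(depth: np.ndarray, i: int, j: int, s: int = 5) -> float:
    """Estimate Z(i, j) from an s x s neighborhood window, averaging only valid (> 0) depths."""
    half = s // 2
    patch = depth[max(i - half, 0): i + half + 1, max(j - half, 0): j + half + 1]
    valid = patch[patch > 0]
    return float(valid.mean()) if valid.size else 0.0
```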
Step S2.3: Extrinsic parameter matrix calculation
The camera pose is estimated from a set of n 3D points in the world coordinate system and their corresponding 2D coordinates in the image; the EPnP (Efficient PnP) algorithm is adopted for the pose estimation;
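One common way to realize this step is OpenCV's solvePnP with the EPnP flag. The sketch below assumes at least four 3D–2D correspondences and a known intrinsic matrix.

```python
import cv2
import numpy as np


def estimate_extrinsics(world_pts: np.ndarray,   # (n, 3) 3D points in the world frame
                        image_pts: np.ndarray,   # (n, 2) corresponding pixel coordinates
                        K: np.ndarray,           # (3, 3) camera intrinsic matrix
                        dist=None):
    """Estimate the camera pose (R, T) with the EPnP algorithm."""
    ok, rvec, tvec = cv2.solvePnP(
        world_pts.astype(np.float64),
        image_pts.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=dist,
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        raise RuntimeError("EPnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec               # world -> camera: Xc = R @ Xw + tvec
```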
Step S2.4: Position resolution
By converting the pixel through the successive coordinate systems, the three-dimensional position of the target in the world coordinate system is obtained.
The pixel coordinates (u, v) are related to the image coordinate system by u = x/dx + u0 and v = y/dy + v0, where (x, y) is the position of the target in the image coordinate system, (u, v) is its position in the pixel coordinate system, (u0, v0) are the parameters that shift the image coordinate origin from the image center to the upper-left corner, and (dx, dy) is the size of each pixel along the x-axis and y-axis;
If fx and fy are the focal lengths of the camera along the x-axis and y-axis, and Xc, Yc, Zc are the coordinate values of the target in the camera coordinate system, then u = fx·Xc/Zc + u0 and v = fy·Yc/Zc + v0;
Knowing the pixel coordinates (u, v) of the target and the intrinsic parameter matrix of the camera, the position information is solved from the camera extrinsic parameters and the target depth obtained above: (Xc, Yc, Zc) = R·(Xw, Yw, Zw) + T, where R denotes the rotation matrix, i.e. the rotation of the camera with respect to the world coordinate system, and T denotes the translation matrix, i.e. the translation of the camera with respect to the world coordinate system;
(Xw, Yw, Zw) denotes the three-dimensional position of the target in the world coordinate system.
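Putting these relations together: given the pixel coordinates, the target depth Zc, the intrinsic parameters and the extrinsic pose (R, T), the world position follows by back-projecting into the camera frame and inverting the rigid transform. A compact sketch, with parameter names chosen for illustration:

```python
import numpy as np


def pixel_to_world(u: float, v: float, Zc: float,
                   fx: float, fy: float, u0: float, v0: float,
                   R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Back-project a pixel with known depth into world coordinates.

    Camera model: u = fx*Xc/Zc + u0, v = fy*Yc/Zc + v0 and (Xc, Yc, Zc) = R @ Xw + T,
    so Xw = R^T @ (Xc_vec - T).
    """
    Xc = (u - u0) * Zc / fx
    Yc = (v - v0) * Zc / fy
    cam = np.array([Xc, Yc, Zc])
    return R.T @ (cam - np.asarray(T).reshape(3))


# Example with an identity pose: the world frame coincides with the camera frame,
# so a pixel at the principal point with depth 5 m maps to (0, 0, 5).
world = pixel_to_world(640.0, 360.0, 5.0, 1000.0, 1000.0, 640.0, 360.0,
                       np.eye(3), np.zeros(3))
```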
The beneficial effects are that:
Compared with the prior art, in which a traditional navigation enhancement system and its construction method provide differential correction information based on point positions, the present method provides enhancement information based on "body" information and, in addition to position enhancement, also improves recognition accuracy. Exploiting the rapid transmission of 5G information, it develops services for comprehensive application terminals based on Beidou, inertial navigation and vision, realizes the function of computing unknown point positions from known points in any scene, and thoroughly solves the indoor-outdoor seamless positioning problem.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a "body referencing" based navigation enhancement network construction in accordance with an embodiment of the present invention.
FIG. 2 is a diagram of a navigation enhancement network service based on "body benchmarks" according to embodiments of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, and not all, of the embodiments of the present invention; all other embodiments obtained by those skilled in the art, without inventive effort, on the basis of these embodiments fall within the scope of protection of the present invention.
Referring to FIGS. 1-2, in order to achieve the above object, according to an embodiment of the present invention, the following technical solution is provided: a navigation enhancement network system based on a "body reference", comprising:
a reference station determining system, a reference station mode position resolving system, a reference station network data processing system, an information transmission system and a terminal enhancement information processing system;
The reference station determining system selects static ground-object points as reference points on a high-precision remote sensing image by visual interpretation, and extracts and segments the target object to obtain the coordinate information of the ground-object contour; here, high precision means centimeter-level precision.
The reference station mode position resolving system recognizes targets in the scene and, through twin-scene and three-dimensional reconstruction technology, obtains the three-dimensional coordinate sequence of the reference station together with ground-object attribute information;
The reference station network data processing system aggregates the reference station network data provided by the reference station mode position resolving system within a certain area, extracts data from the 5G, vision and GNSS multi-source fusion information of each reference station to obtain information such as the feature matrix and the precise coordinate position, and builds an enhancement station database in preparation for the enhancement information service of the terminal enhancement information processing system;
The information transmission system uses the high-speed transmission of 5G to send the information of the surrounding reference points to the terminal enhancement information processing system;
The terminal enhancement information processing system is distributed across the terminals; it obtains the precise position of its terminal from sensors such as GNSS and INS, requests the information of surrounding reference stations from the reference station network data processing system, and enhances the positioning function of the terminal after obtaining information such as the surrounding visual reference stations and GNSS high-precision differential corrections.
According to another embodiment of the present invention, a method for constructing a navigation enhancement network system based on a "body reference" is provided, comprising the following steps:
Step S1: on a high-precision remote sensing image, static ground-object points are selected as reference points by visual interpretation, and the target object is extracted and segmented to obtain the coordinate information of the ground-object contour;
Step S2: the reference station mode position resolving system identifies targets in the scene and, through twin-scene and three-dimensional reconstruction technology, obtains the three-dimensional coordinate sequence of the reference station together with ground-object attribute information, the ground-object attribute information being expressed as a feature matrix;
Step S3: when the reference station network data of a certain area are uniformly aggregated into the reference station network data processing system, the system builds an enhancement station database from the extracted feature matrix, precise coordinate position and other information of each reference station, in preparation for the enhancement information service of the terminal;
Step S4: the high-speed transmission of 5G is used to send the information of the surrounding reference points to the surrounding terminals that need the service;
Step S5: the terminal obtains its own precise position from its GNSS sensor, requests the information of surrounding reference stations from the reference station network data processing system, and enhances its positioning function after obtaining information such as the surrounding visual reference stations and GNSS high-precision differential corrections.
Further, the "body reference" refers to ground "points" under a certain scale in a two-dimensional map based on the theory of "object-oriented", and are displayed in a "body" manner in a live-action three-dimensional manner.
Furthermore, the mode position refers to an antenna phase center position obtained through Beidou positioning, specifically a particle position P (X, Y, Z), and a group of point sets with space topological relation are obtained through Beidou+intelligent identification fusion positioning, namely the mode position, specifically a particle position set P { P1, P2 … }.
Furthermore, a static target is obtained by using a remote sensing mode recognition technology and used as a reference station, a GNSS is used for providing accurate space-time reference to obtain the position of the reference station, and remote sensing is used for obtaining characteristic points of a reference ground object, wherein the characteristic points are expressed in a characteristic matrix mode.
Further, the reference station network is constructed in a uniformly distributed manner in a certain area, and further, the reference station is characterized by a 'body reference', and the enhancement information based on the 'body reference' is taken as a basic constituent unit of the technology.
Furthermore, the information transmission system utilizes a 5G high transmission network, the data center provides the mode positions P { P1, P2 … } of the peripheral 'body datum' for the multi-source fusion terminal, and the terminal calculates the position of the terminal or the peripheral ground object by GNSS positioning and the scene internal reference object obtained by the sensor.
Furthermore, the terminal enhancement information processing system works by relying on the terminal, and further, the terminal enhancement information processing system obtains the information of the reference station network data processing system through a 5G transmission network and works in cooperation with the sensor information of the terminal, so that the function of estimating the position of an unknown point from a known point is realized in any scene, and the problem of indoor and outdoor seamless positioning is thoroughly solved.
According to one embodiment of the present invention, the mode position calculation in step S2 includes the following steps:
Step S2.1: Instance segmentation
ResNet-101 + FPN is used as the backbone, and instance segmentation is realized through two branches: a prototype generation branch and a mask coefficient branch. The prototype generation branch (Protonet) generates mask prototypes and is mainly implemented as an FCN. The mask coefficient branch predicts the mask coefficients through an anchor-based target detector. The two branches are computed independently, the masks are synthesized by matrix multiplication followed by a Sigmoid function, and the segmentation result is finally output;
Step S2.2: Target depth acquisition
Depth information is acquired with a binocular camera. For a target at coordinates (i, j), its depth value is Z(i, j); the depth value at each pixel position is estimated over a neighborhood window, where S is the size of the neighborhood window and sgn(·) denotes the sign function;
Step S2.3: Extrinsic parameter matrix calculation
The camera pose is estimated from a set of n 3D points in the world coordinate system and their corresponding 2D coordinates in the image; the EPnP (Efficient PnP) algorithm is adopted for the pose estimation;
Step S2.4: Position resolution
By converting the pixel through the successive coordinate systems, the three-dimensional position of the target in the world coordinate system is obtained.
The pixel coordinates (u, v) are related to the image coordinate system by u = x/dx + u0 and v = y/dy + v0, where (x, y) is the position of the target in the image coordinate system, (u, v) is its position in the pixel coordinate system, (u0, v0) are the parameters that shift the image coordinate origin from the image center to the upper-left corner, and (dx, dy) is the size of each pixel along the x-axis and y-axis;
If fx and fy are the focal lengths of the camera along the x-axis and y-axis, and Xc, Yc, Zc are the coordinate values of the target in the camera coordinate system, then u = fx·Xc/Zc + u0 and v = fy·Yc/Zc + v0; knowing the pixel coordinates (u, v) of the target and the intrinsic parameter matrix of the camera, the position information is solved from the previously obtained camera extrinsic parameters and the target depth: (Xc, Yc, Zc) = R·(Xw, Yw, Zw) + T, where R denotes the rotation matrix, i.e. the rotation of the camera with respect to the world coordinate system, and T denotes the translation matrix, i.e. the translation of the camera with respect to the world coordinate system;
(Xw, Yw, Zw) denotes the three-dimensional position of the target in the world coordinate system.
Compared with the prior art, in which a traditional navigation enhancement system provides differential correction information based on point positions, this technology provides enhancement information based on "body" information and, in addition to position enhancement, also improves recognition accuracy; exploiting the rapid transmission of 5G information, it develops services for comprehensive application terminals based on Beidou, inertial navigation and vision, realizes the function of estimating unknown point positions from known points in any scene, and solves the indoor-outdoor seamless positioning problem.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A method for constructing a navigation enhancement network system based on a "body reference", the navigation enhancement network system comprising: a reference station determining system, a reference station mode position resolving system, a reference station network data processing system, an information transmission system and a terminal enhancement information processing system;
wherein the reference station determining system selects static ground-object points as reference points on a remote sensing image by visual interpretation, and extracts and segments the target object to obtain the coordinate information of the ground-object contour;
the reference station mode position resolving system recognizes targets in the scene and, through twin-scene and three-dimensional reconstruction technology, obtains the three-dimensional coordinate sequence of the reference station and ground-object attribute information;
the reference station network data processing system aggregates the reference station network data provided by the reference station mode position resolving system within a certain area, extracts the 5G, vision and GNSS multi-source fusion information of each reference station in the reference station network data to obtain a feature matrix and precise coordinate position information, and builds an enhancement station database in preparation for the enhancement information service of the terminal enhancement information processing system;
the information transmission system uses 5G transmission to send the information of the surrounding reference points to the terminal enhancement information processing system;
the terminal enhancement information processing system is distributed across the terminals; it obtains the precise position of its terminal from the terminal's own GNSS or INS sensors, requests the information of surrounding reference stations from the reference station network data processing system, and enhances its own positioning function after obtaining the surrounding visual reference station and GNSS high-precision differential information;
the method being characterized by comprising the following steps:
Step 1: on a remote sensing image, static ground-object points are selected as reference points by visual interpretation, and the target object is extracted and segmented to obtain the coordinate information of the ground-object contour;
Step 2: the reference station mode position resolving system identifies the targets in the scene and, through twin-scene and three-dimensional reconstruction technology, obtains the three-dimensional coordinate sequence of the reference station together with ground-object attribute information, the ground-object attribute information being expressed as a feature matrix; a static target is obtained as the "reference station" by remote sensing pattern recognition technology, GNSS provides the precise space-time reference to obtain the position of the reference station, and remote sensing obtains the feature points of the surface image of the reference ground object, the feature points being expressed as a feature matrix;
in the reference station mode position resolving system of Step 2, the mode position calculation comprises the following steps:
Step 2.1: Instance segmentation — based on a neural network algorithm, the target object is recognized and the image is segmented to extract the image features of the target object;
Step 2.2: Target depth acquisition — a binocular camera is used to acquire depth information; for a target at coordinates (i, j) with depth value Z(i, j), the depth value at each pixel position is estimated over a neighborhood window;
Step 2.3: Extrinsic parameter matrix calculation — the camera pose is estimated from a set of n 3D points in the world coordinate system and their corresponding 2D coordinates in the image, using the EPnP (Efficient PnP) algorithm;
Step 2.4: Position resolution — by converting the pixel through the successive coordinate systems, the three-dimensional position of the target in the world coordinate system is obtained;
the pixel coordinates (u, v) are related to the image coordinate system by u = x/dx + u0 and v = y/dy + v0, where (x, y) is the position of the target in the image coordinate system, (u, v) is its position in the pixel coordinate system, (u0, v0) are the parameters that shift the image coordinate origin from the image center to the upper-left corner, and (dx, dy) is the size of each pixel along the x-axis and y-axis;
if fx and fy are the focal lengths of the camera along the x-axis and y-axis, and Xc, Yc, Zc are the coordinate values of the target in the camera coordinate system, then u = fx·Xc/Zc + u0 and v = fy·Yc/Zc + v0;
knowing the pixel coordinates (u, v) of the target and the intrinsic parameter matrix of the camera, the position information is solved from the previously obtained camera extrinsic parameters and the target depth: (Xc, Yc, Zc) = R·(Xw, Yw, Zw) + T, where R represents the rotation matrix, i.e. the rotation of the camera with respect to the world coordinate system, and T represents the translation matrix, i.e. the translation of the camera with respect to the world coordinate system;
(Xw, Yw, Zw) denotes the three-dimensional position of the target in the world coordinate system;
Step 3: when the reference station network data of the predetermined area are uniformly aggregated into the reference station network data processing system, the system builds an enhancement station database from the extracted feature matrix and precise coordinate position of each reference station, in preparation for the enhancement information service of the terminal;
Step 4: 5G transmission is used to send the information of the surrounding reference points to the surrounding terminals that need the service;
Step 5: the terminal obtains its own precise position from its own GNSS sensor, requests the information of surrounding reference stations from the reference station network data processing module, and enhances its own positioning function after obtaining the surrounding visual reference station and GNSS high-precision differential information.
2. The method for constructing a navigation enhancement network system based on a "body reference" according to claim 1, characterized in that the "body reference" means that, based on the "object-oriented" theory, a ground object is abstracted as a ground "point" in a two-dimensional map, while in live-action three dimensions it is presented as a "body" consisting of "a series of points + semantic information".
3. The method for constructing a navigation enhancement network system based on a "body reference" according to claim 1, characterized in that the mode position refers to the antenna phase center position obtained by Beidou positioning, specifically the particle position P(X, Y, Z); a set of points with spatial topological relations obtained by "Beidou + intelligent recognition" fusion positioning is then called the mode position, specifically the particle position set P{p1, p2 … pn}, where pn is the nth particle.
4. The method for constructing a navigation enhancement network system based on a "body reference" according to claim 1, characterized in that the reference station network is constructed in a uniformly distributed manner within a certain area, the reference station is characterized by the "body reference", and enhancement information based on the "body reference" serves as the basic constituent unit of the information.
5. The method for constructing a navigation enhancement network system based on a "body reference" according to claim 1, characterized in that the information transmission module uses a 5G transmission network: the data center provides the multi-source fusion terminal with the mode positions P{p1, p2 …} of the surrounding "body references", and the terminal computes its own position or the positions of surrounding ground objects from GNSS positioning and the in-scene reference objects obtained by its own sensors; unlike the point-position differential correction information provided by a traditional navigation enhancement system, the "body reference" provides enhancement information based on "body" information and, in addition to enhancing the position information, also improves the recognition accuracy.
6. The method for constructing a navigation enhancement network system based on a "body reference" according to claim 1, characterized in that the terminal enhancement information processing system relies on the terminal for its operation; it obtains the reference station network data processing system information through the 5G transmission network and works together with the terminal's own sensor information, thereby realizing the function of inferring the positions of unknown points from known points and solving the indoor-outdoor seamless positioning problem.
CN202210248804.8A 2022-03-14 2022-03-14 A navigation enhancement network system based on "body reference" and its construction method Active CN114646991B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210248804.8A CN114646991B (en) 2022-03-14 2022-03-14 A navigation enhancement network system based on "body reference" and its construction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210248804.8A CN114646991B (en) 2022-03-14 2022-03-14 A navigation enhancement network system based on "body reference" and its construction method

Publications (2)

Publication Number Publication Date
CN114646991A CN114646991A (en) 2022-06-21
CN114646991B true CN114646991B (en) 2024-11-26

Family

ID=81993790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210248804.8A Active CN114646991B (en) 2022-03-14 2022-03-14 A navigation enhancement network system based on "body reference" and its construction method

Country Status (1)

Country Link
CN (1) CN114646991B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108919305A (en) * 2018-08-07 2018-11-30 北斗导航位置服务(北京)有限公司 Beidou ground enhances band-like method of servicing and system in communications and transportation
CN111045068A (en) * 2019-12-27 2020-04-21 武汉大学 An autonomous orbit and attitude determination method for low-orbit satellites based on non-navigation satellite signals

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111337031B (en) * 2020-02-24 2022-04-15 南京航空航天大学 Spacecraft landmark matching autonomous position determination method based on attitude information
CN111462241B (en) * 2020-04-08 2023-03-28 北京理工大学 Target positioning method based on monocular vision
CN111696162B (en) * 2020-06-11 2022-02-22 中国科学院地理科学与资源研究所 Binocular stereo vision fine terrain measurement system and method


Also Published As

Publication number Publication date
CN114646991A (en) 2022-06-21

Similar Documents

Publication Publication Date Title
CN111275750B (en) Indoor space panoramic image generation method based on multi-sensor fusion
JP7273927B2 (en) Image-based positioning method and system
CN105096386B (en) A wide range of complicated urban environment geometry map automatic generation method
Teller et al. Calibrated, registered images of an extended urban area
CN108801274B (en) A landmark map generation method integrating binocular vision and differential satellite positioning
CN111060924B (en) A SLAM and Object Tracking Method
CN116883604A (en) Three-dimensional modeling technical method based on space, air and ground images
CN114063127A (en) Method for fusing multi-focal-length visual SLAM and GPS and storage medium
CN112132950B (en) Three-dimensional point cloud scene updating method based on crowdsourcing image
CN115578539B (en) Indoor space high-precision visual position positioning method, terminal and storage medium
Haala et al. A multi-sensor system for positioning in urban environments
CN113608234A (en) An urban data collection system
Yuan et al. Fully automatic DOM generation method based on optical flow field dense image matching
JP7365385B2 (en) Map generation method and image-based positioning system using the same
CN114646991B (en) A navigation enhancement network system based on "body reference" and its construction method
Deng et al. Automatic true orthophoto generation based on three-dimensional building model using multiview urban aerial images
CN118644554A (en) Aircraft navigation method based on monocular depth estimation and ground feature point matching
CN116894923A (en) High-resolution remote sensing image mapping conversion dense matching and three-dimensional reconstruction method
CN116824067A (en) Indoor three-dimensional reconstruction method and device thereof
Hairuddin et al. Development of 3D city model using videogrammetry technique
CN114387532A (en) Boundary identification method and device, terminal, electronic equipment and unmanned equipment
CN118229755B (en) Method for estimating urban building height by using street view image under severe shielding condition
CN118334263B (en) High-precision modeling method for fusion laser point cloud based on truncated symbol distance function
CN117928519B (en) Multi-sensor fusion positioning and mapping method and system for service robots
Zhou et al. Object detection and spatial location method for monocular camera based on 3D virtual geographical scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant