
CN112560137B - Multi-model fusion method and system based on smart city - Google Patents


Info

Publication number
CN112560137B
CN112560137B (granted publication of application CN202011402153.0A)
Authority
CN
China
Prior art keywords
model
real
bim
oblique photography
bim model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011402153.0A
Other languages
Chinese (zh)
Other versions
CN112560137A (en)
Inventor
姜益民
李先旭
李纯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Optics Valley Information Technology Co ltd
Original Assignee
Wuhan Optics Valley Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Optics Valley Information Technology Co ltd
Priority to CN202011402153.0A
Publication of CN112560137A
Application granted
Publication of CN112560137B
Active legal status
Anticipated expiration legal status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/13Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Architecture (AREA)
  • Civil Engineering (AREA)
  • Data Mining & Analysis (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a multi-model fusion method, system, electronic device, and storage medium based on a smart city. The method includes: generating a BIM model and an oblique photography model from aerial photography images; rendering the BIM model and the oblique photography model in a B/S (browser/server) mode using GPU rendering; fusing the rendered BIM model and oblique photography model to obtain a fused real-scene three-dimensional model; and constructing a real-time perception model of moving targets from the coordinate points, movement trajectories, and attribute information of indoor and outdoor moving targets collected in real time, then fusing the constructed real-time perception model into the real-scene three-dimensional model and displaying it. The present invention can fuse multiple models: through the B/S end, based on GPU and server rendering technology, the oblique photography data, the BIM data model, and the real-time perception model of moving targets can be browsed and accessed in the browser, improving the user's browsing experience.

Description

Multi-model fusion method and system based on smart city

Technical Field

The present invention relates to the field of model fusion, and more specifically, to a multi-model fusion method and system based on a smart city.

Background Art

In recent years, with the rapid development of high technologies such as 5G, cloud computing, the Internet of Things (IoT), and AI (artificial intelligence), GIS (Geographic Information System) has gradually taken on new vitality, and real-scene three-dimensional technology now plays an important role in the construction of smart cities.

Among these, drone aerial photography, with its convenient, fast data acquisition and highly automated modeling, is currently the main means of obtaining real-scene 3D data. At the same time, as demand for indoor and outdoor scenes increases, combining oblique photography of large scenes with BIM models effectively compensates for the lack of indoor scene display. In addition, regarding IoT and AI recognition results, combining IoT sensing devices with real-time AI perception technology has, to a certain extent, enhanced the realism of 3D scenes.

Summary of the Invention

Embodiments of the present invention provide a multi-model fusion method and system based on a smart city that overcome, or at least partially solve, the above problems.

According to a first aspect of the present invention, a multi-model fusion method based on a smart city is provided, comprising: texture-mapping a BIM model according to aerial photography images to generate a BIM model of indoor and outdoor scenes; rendering the BIM model of indoor and outdoor scenes and an oblique photography model in a B/S (browser/server) mode using GPU rendering; fusing the rendered BIM model and oblique photography model to obtain a fused real-scene three-dimensional model; and constructing a real-time perception model of moving targets according to the coordinate points, movement trajectories, and attribute information of indoor and outdoor moving targets collected in real time, fusing the constructed real-time perception model into the real-scene three-dimensional model, and displaying it on the front end.

On the basis of the above technical solution, embodiments of the present invention may further make the following improvements.

Further, texture-mapping the BIM model according to the aerial photography images to generate the BIM model of indoor and outdoor scenes includes: parsing the IFC format used by the BIM model and splitting the BIM model into nodes; establishing a correspondence between each node of the BIM model and the aerial photography images; and importing the aerial photography images corresponding to each node into the BIM model and batch texture-mapping the BIM model to generate the BIM model of indoor and outdoor scenes.

Further, rendering the BIM model of indoor and outdoor scenes and the oblique photography model in the B/S mode using GPU rendering includes: converting the BIM model data structure into the 3D Tiles data format on the B/S end, and loading and rendering the BIM model data with the Cesium map engine.

Further, the method also includes: reducing the screen error rate and maintaining the level of detail of the BIM model by setting the maximum screen space error parameter in the Cesium map engine; and increasing the loading speed of the BIM model data by configuring the parameter that prioritizes loading the tiles at the center of the screen.

Further, fusing the rendered BIM model and the oblique photography model to obtain the fused real-scene 3D model includes:

for the overlapping part of the BIM model and the oblique photography model, cropping the oblique photography model based on the Cesium 3D earth framework; and

fusing the BIM model into the cropped oblique photography model to generate the real-scene 3D model.

Further, cropping the oblique photography model based on the Cesium 3D earth framework for the overlapping part of the BIM model and the oblique photography model includes:

for the oblique photography model, using the BIM model boundary as the initial polygon clipping area and recording the longitude and latitude coordinate values of each edge of the polygon clipping area;

performing a matrix operation on the coordinate values, referenced to the coordinate origin of the oblique photography model, to correct the coordinates of the polygon clipping area;

taking the plane containing each edge of the corrected polygon clipping area as a clipping plane, obtaining multiple clipping planes;

reconstructing each clipping plane from its normal vector and the shortest distance from the coordinate origin of the oblique photography model to that clipping plane, finally obtaining multiple reconstructed clipping planes; and

cropping the oblique photography model based on the multiple reconstructed clipping planes.

Further, constructing the real-time perception model of moving targets according to the coordinate points, movement trajectories, and attribute information of indoor and outdoor moving targets collected in real time includes: constructing the real-time perception model according to the coordinate points, movement trajectories, and attribute information of indoor and outdoor moving targets sent by the monitoring device through an established WebSocket real-time transmission channel. After fusing the constructed real-time perception model into the real-scene three-dimensional model and displaying it on the front end, the method also includes: updating the position and orientation of the real-time perception model of each moving target according to the coordinate points and movement trajectories updated by the monitoring device in real time, and displaying the updated real-time perception model on the front end.

Further, the real-time perception models of moving targets include person and vehicle models, and the method also includes: when the coordinate points, movement trajectories, and attribute information of indoor and outdoor moving targets sent by the monitoring device through the established WebSocket real-time transmission channel are received, establishing a correspondence between each moving target's real-time perception model ID and its time, position, and orientation. Updating the position and orientation of the real-time perception model of each moving target according to the coordinate points and movement trajectories updated by the monitoring device in real time includes: inputting the moving target's real-time perception model ID, time, position, and orientation into the orientation update function headingPitchRollQuaternion, so as to update the position and orientation of the corresponding real-time perception model at the corresponding time.

According to a second aspect of an embodiment of the present invention, a multi-model fusion system is provided, including:

a texture-mapping module, configured to texture-map the BIM model according to the aerial photography images to generate the BIM model of indoor and outdoor scenes; a rendering module, configured to render the BIM model of indoor and outdoor scenes and the oblique photography model in the B/S mode using GPU rendering; a fusion module, configured to fuse the rendered BIM model and the oblique photography model to obtain the fused real-scene three-dimensional model; and a display module, configured to construct the real-time perception model of moving targets according to the coordinate points, movement trajectories, and attribute information of indoor and outdoor moving targets collected in real time, fuse the constructed real-time perception model into the real-scene three-dimensional model, and display it on the front end.

According to a third aspect of an embodiment of the present invention, an electronic device is provided, comprising a memory and a processor, wherein the processor implements the steps of the multi-model fusion method when executing a computer management program stored in the memory.

According to a fourth aspect of an embodiment of the present invention, a computer-readable storage medium is provided, on which a computer management program is stored; when the computer management program is executed by a processor, the steps of the multi-model fusion method are implemented.

The multi-model fusion method and system based on a smart city provided by embodiments of the present invention generate BIM models and oblique photography models of indoor and outdoor scenes from aerial photography images; render the BIM model and oblique photography model in the B/S mode using GPU rendering; fuse the rendered BIM model and oblique photography model to obtain a fused real-scene three-dimensional model; and construct a real-time perception model of moving targets according to the coordinate points, movement trajectories, and attribute information of indoor and outdoor moving targets collected in real time, fuse it into the real-scene three-dimensional model, and display it on the front end. Embodiments of the present invention can fuse multiple models, such as oblique photography models and BIM models, and allow oblique photography data and BIM data models to be browsed and accessed in the browser through the B/S end, based on GPU and server rendering technology, improving the user's browsing experience.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flow chart of a multi-model fusion method provided by an embodiment of the present invention;

FIG. 2 is a schematic diagram of the correspondence between each node of the BIM model and the aerial images;

FIG. 3 is an overall flow chart of a multi-model fusion method provided by an embodiment of the present invention;

FIG. 4 is a structural diagram of a multi-model fusion system provided by an embodiment of the present invention;

FIG. 5 is a flow chart of a method for cropping an oblique photography model provided by an embodiment of the present invention;

FIG. 6 is a schematic diagram of a possible hardware structure of an electronic device provided by an embodiment of the present invention;

FIG. 7 is a schematic diagram of a possible hardware structure of a computer-readable storage medium provided by an embodiment of the present invention.

DETAILED DESCRIPTION

Specific implementations of the present invention are described in further detail below in conjunction with the accompanying drawings and embodiments. The following embodiments are used to illustrate the present invention, but not to limit its scope.

FIG. 1 is a flow chart of a multi-model fusion method provided by an embodiment of the present invention. As shown in FIG. 1, the method includes: 101, texture-mapping a BIM model according to aerial photography images to generate a BIM model of indoor and outdoor scenes; 102, rendering the BIM model of indoor and outdoor scenes and an oblique photography model in a B/S mode using GPU rendering; 103, fusing the rendered BIM model and oblique photography model to obtain a fused real-scene three-dimensional model; and 104, constructing a real-time perception model of moving targets according to the coordinate points, movement trajectories, and attribute information of indoor and outdoor moving targets collected in real time, fusing the constructed real-time perception model into the real-scene three-dimensional model, and displaying it on the front end.
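The data flow of steps 101-104 can be sketched structurally as follows. This is a minimal illustration of the stage ordering only; every function name and dictionary field here is hypothetical rather than taken from the patent:

```python
# Minimal structural sketch of steps 101-104. All names are illustrative.

def run_pipeline(aerial_images, target_feed):
    # 101: generate a textured BIM model and an oblique photography model
    bim_model = {"textured": len(aerial_images) > 0}
    oblique_model = {"format": "3d-tiles"}
    # 102: render both models on the B/S end with the GPU
    rendered = {"bim": bim_model, "oblique": oblique_model, "renderer": "GPU"}
    # 103: fuse the rendered models into one real-scene 3D model
    scene = dict(rendered, fused=True)
    # 104: fuse real-time perception models of moving targets for display
    scene["targets"] = list(target_feed)
    return scene

scene = run_pipeline(["img_001.jpg"], [{"id": "car-1"}])
```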

It can be understood that, according to a survey of existing methods for fusing real-scene 3D technology with artificial intelligence technology, the following research approaches exist: 1, achieving multi-source real-scene data fusion by processing the data sources; 2, loading large-scene real-scene 3D in a C/S (client/server) mode; and 3, integrating GIS with real-time AI perception technology.

As introduced above, traditional fusion methods have the following shortcomings in need of improvement: 1, at the data-source level, the pictures collected by drones are not effectively applied to the BIM model's texture materials; 2, multi-source data fusion usually processes the source data directly, which reduces the flexibility and reusability of operations on it; 3, C/S software has poor scalability and struggles to meet current users' customization needs; and 4, the fusion of AI with real-scene 3D is one-dimensional, with poor visualization effects.

In view of the limitations of traditional fusion methods, embodiments of the present invention comprehensively address the fusion of multi-source data with real-time AI perception technology, mainly including: making full use of aerial photography images to texture-map the BIM model and generate BIM models and oblique photography models of indoor and outdoor scenes. The BIM model includes individual fine-grained models, for example a specific building and the components inside it, while the oblique photography model includes the location coordinates of each small model. Generally speaking, the oblique photography model is more detailed than the BIM model.

After the BIM model and oblique photography model of the indoor and outdoor scenes are generated, they are rendered in the B/S mode using GPU rendering, and the rendered BIM model and oblique photography model are fused to generate a fused real-scene 3D model. Based on the coordinate points, movement trajectories, and attribute information of indoor and outdoor moving targets collected in real time, a real-time perception model of the moving targets is constructed, fused into the real-scene 3D model, and displayed on the front end.

Embodiments of the present invention can fuse multiple models, such as oblique photography models and BIM models, and allow oblique photography data and BIM data models to be browsed and accessed in the browser through the B/S end, based on GPU and server rendering technology, improving the user's browsing experience.

In one possible embodiment, texture-mapping the BIM model according to the aerial photography images to generate the BIM model of indoor and outdoor scenes includes: parsing the IFC format used by the BIM model and splitting the BIM model into nodes; establishing a correspondence between each node of the BIM model and the aerial photography images; and importing the aerial photography images corresponding to each node into the BIM model and batch texture-mapping the BIM model to generate the BIM model of indoor and outdoor scenes.

It can be understood that, referring to FIG. 2, for drone aerial images, an AI recognition algorithm identifies the aerial image content. Usually, when building a BIM model, materials must be assigned to it, but in general the materials provided by modeling software cannot reflect the BIM model's real texture characteristics. By studying and parsing the IFC format used by the BIM model, splitting its nodes, obtaining the BIM model's graphic data and spatial structure information, and establishing the correspondence between the base point references in IfcMaterialResource and the AI-recognized image results, a correspondence is established between each node of the BIM model and the aerial images.

After the correspondence between each node of the BIM model and the aerial images is established, the aerial images corresponding to all nodes are imported into the BIM model. Since the aerial images depict indoor and outdoor ground scenes, the generated BIM model is an indoor and outdoor scene BIM model, and the BIM model is automatically texture-mapped when it is sliced. In this way, the aerial image resources are used effectively, with the BIM model's nodes as the computational reference, to achieve fast batch texture mapping of the BIM model.
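As an illustration of the batch mapping step, the sketch below pairs parsed BIM node identifiers with AI-matched aerial images and assigns each image as its node's texture. The node IDs, dictionary layout, and function names are assumptions for the example, not the patent's actual data structures:

```python
# Sketch: batch-assign AI-matched aerial images to parsed BIM nodes.
# Node IDs and the image-matching result format are illustrative.

def build_node_image_map(node_ids, ai_matches):
    """Keep only the nodes for which AI recognition produced an image."""
    return {nid: ai_matches[nid] for nid in node_ids if nid in ai_matches}

def batch_apply_textures(bim_nodes, node_image_map):
    """Attach each matched image to its node so slicing picks it up."""
    for nid, image in node_image_map.items():
        bim_nodes[nid]["texture"] = image
    return bim_nodes

nodes = {"wall-01": {}, "roof-01": {}, "door-01": {}}
matches = {"wall-01": "aerial_0412.jpg", "roof-01": "aerial_0413.jpg"}
textured = batch_apply_textures(nodes, build_node_image_map(nodes, matches))
```

Nodes without a matched image are simply left untextured, so the modeling software's default material still applies to them.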

In one possible embodiment, rendering the BIM model of indoor and outdoor scenes and the oblique photography model in the B/S mode using GPU rendering includes: converting the data structure of the BIM model or of the oblique photography model into the 3D Tiles data format on the B/S end, and loading and rendering the BIM model data or the oblique photography model data with the Cesium map engine.

It can be understood that, at the client rendering level, to meet the 3D rendering requirements of the browser, scene fusion is performed with GPU rendering in the B/S mode. In addition, model slicing optimization and parameter tuning can be performed to improve the loading rate.

Specifically, loading real-scene 3D data on the B/S end usually requires optimizing its data structure. The 3D Tiles data format can effectively stream and render large amounts of 3D geographic data; the Cesium map engine supports the 3D Tiles format and loads it by LOD level, which effectively solves the problem of rendering massive data in the browser.

Specifically, on the B/S end, both the BIM model data structure and the oblique photography model data structure are converted into the 3D Tiles data format, and the Cesium map engine is used to load and render the BIM model data and the oblique photography model data.
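For reference, a 3D Tiles dataset is described by a tileset.json file of roughly the following shape. This is a hand-written minimal skeleton following the public 3D Tiles 1.0 specification; the bounding-region, error, and content-URI values are illustrative, not taken from this patent:

```python
import json

# Minimal 3D Tiles 1.0 tileset.json skeleton (illustrative values).
tileset = {
    "asset": {"version": "1.0"},
    "geometricError": 500,  # error threshold for rendering the root at all
    "root": {
        # region: [west, south, east, north, minHeight, maxHeight]
        # (angles in radians, heights in meters)
        "boundingVolume": {"region": [1.99, 0.52, 2.00, 0.53, 0.0, 120.0]},
        "geometricError": 100,  # error at which children refine this tile
        "refine": "REPLACE",    # children replace the parent when refined
        "content": {"uri": "root.b3dm"},
        "children": [],
    },
}
tileset_json = json.dumps(tileset, indent=2)
```

The per-tile geometricError values are what drive the LOD-based loading mentioned above: Cesium projects the error to screen space and refines only where the projected error exceeds the configured threshold.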

To further optimize 3D Tiles loading efficiency, embodiments of the present invention provide a set of optimized configuration parameters. For example, when loading a Cesium3DTileset object, performance can be controlled by setting the maximum screen space error parameter in the Cesium map engine: the higher the value, the better the performance, but visual quality may suffer. Repeated testing shows that keeping the parameter value between 10 and 16 effectively reduces the screen error rate and maintains model detail, ensuring stable operation of the 3D scene. In addition, the parameter that prioritizes loading the tiles at the center of the screen can be configured at the same time to improve the loading rate.
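The tuning described above can be captured as a small configuration helper. The option keys maximumScreenSpaceError and foveatedScreenSpaceError mirror real Cesium3DTileset options; the helper functions and the clamp policy simply restate this document's empirical 10-16 recommendation and are illustrative:

```python
# Sketch: keep maximumScreenSpaceError in the empirically tested 10-16
# range and prioritise tiles at the screen centre. Helper names are
# illustrative; option keys mirror Cesium3DTileset constructor options.

def clamp_sse(value, lo=10, hi=16):
    """Clamp the requested screen-space error into the tested range."""
    return max(lo, min(hi, value))

def tileset_options(requested_sse):
    return {
        "maximumScreenSpaceError": clamp_sse(requested_sse),
        "foveatedScreenSpaceError": True,  # load central tiles first
    }
```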

在一种可能的实施例方式中,所述对渲染后的BIM模型和倾斜摄影模型进行融合,得到融合后的实景三维模型包括:对于BIM模型和倾斜摄影模型的重叠部分,基于Cesium三维地球框架对倾斜摄影模型进行裁剪;将BIM模型融合于裁剪后的倾斜摄影模型中,生成实景三维模型。In a possible implementation manner, the fusion of the rendered BIM model and the oblique photography model to obtain a fused real-scene three-dimensional model includes: for the overlapping part of the BIM model and the oblique photography model, cropping the oblique photography model based on the Cesium three-dimensional earth framework; and fusing the BIM model into the cropped oblique photography model to generate a real-scene three-dimensional model.

可以理解的是,参见图3,针对BIM模型和倾斜摄影模型产生模型重叠而导致无法浏览室内场景问题,需要对倾斜摄影模型进行模型裁剪,具体的通过基于cesium三维地球框架进行模型裁剪。具体为,对于倾斜摄影模型,以BIM模型边界为初始化多边形裁剪区域,记录多边形裁剪区域的每一条边的经纬度坐标值;以倾斜摄影模型的坐标原点为基准进行坐标值的矩阵运算纠正多边形裁剪区域的坐标;以纠正后的多边形裁剪区域的每一条边所在面作为裁剪面,得到多个裁剪面;根据任一个裁剪面的法向量和倾斜摄影模型的坐标原点到任一个裁剪面的最短距离,重新构造裁剪面,最终得到重新构造的多个裁剪面;基于重新构造的多个裁剪面,对倾斜摄影模型进行裁剪。It can be understood that, referring to FIG3, in order to solve the problem that the indoor scene cannot be browsed due to the overlap of the BIM model and the oblique photography model, the oblique photography model needs to be cropped, specifically by cropping the model based on the Cesium three-dimensional earth framework. Specifically, for the oblique photography model, the BIM model boundary is used as the initial polygonal cropping area, and the longitude and latitude coordinate values of each edge of the polygonal cropping area are recorded; the coordinates of the polygonal cropping area are corrected by matrix operation of the coordinate values based on the coordinate origin of the oblique photography model; the surface where each edge of the corrected polygonal cropping area is located is used as the cropping surface to obtain multiple cropping surfaces; the cropping surface is reconstructed according to the normal vector of any cropping surface and the shortest distance from the coordinate origin of the oblique photography model to any cropping surface, and finally multiple reconstructed cropping surfaces are obtained; based on the reconstructed multiple cropping surfaces, the oblique photography model is cropped.

其中,对倾斜摄影模型进行裁剪的具体过程为:针对BIM模型和倾斜摄影模型产生模型重叠而导致无法浏览室内场景问题,需要对倾斜摄影模型进行模型裁剪,通过基于Cesium三维地球框架构建ClippingPlane集合来进行模型裁剪。The specific process of clipping the oblique photography model is as follows: to solve the problem that the indoor scene cannot be browsed because the BIM model and the oblique photography model overlap, the oblique photography model needs to be clipped; the model is clipped by building a ClippingPlane collection based on the Cesium three-dimensional earth framework.

以BIM模型边界为初始化多边形裁剪区,记录初始化多边形裁剪区各个顶点的经纬度坐标值 Pi = (Bi, Li, Hi),i = 0, 1, 2, …,其中 Bi、Li、Hi 分别为该顶点坐标的经度、纬度以及高度坐标值。Take the BIM model boundary as the initial polygon clipping area, and record the longitude and latitude coordinate values Pi = (Bi, Li, Hi) of each vertex of the initial polygon clipping area, i = 0, 1, 2, …, where Bi, Li and Hi are the longitude, latitude and altitude coordinate values of the vertex, respectively.

以倾斜摄影模型原点矩阵 M 的逆矩阵 M⁻¹ 为基准,与记录的多边形顶点进行计算,生成以倾斜摄影原点逆矩阵为基准的纠正点坐标 Pi' = M⁻¹·Pi,其中 Pi' 为局部坐标值,该值可以通过 Cesium 计算得到。Taking the inverse matrix M⁻¹ of the oblique photography model origin matrix M as the reference, the recorded polygon vertices are transformed to generate the corrected point coordinates Pi' = M⁻¹·Pi referenced to the inverse of the oblique photography origin matrix, where Pi' is a local coordinate value that can be computed with Cesium.

经过转换后的坐标点为 Pi' = (Bi', Li', Hi'),其中 Bi'、Li'、Hi' 为坐标点 Pi' 的经度、纬度和高度,以此类推,生成各纠正后的顶点坐标 P0'、P1'、P2'、…。The converted coordinate point is Pi' = (Bi', Li', Hi'), where Bi', Li' and Hi' are the longitude, latitude and altitude of the coordinate point Pi'; in the same way, the corrected vertex coordinates P0', P1', P2', … are generated.
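The vertex-correction step described above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: it assumes a row-major 4×4 origin matrix that is a pure translation (so its inverse simply negates the offset), and the helper names `applyMatrix` and `invertTranslation` are illustrative, not Cesium API.

```javascript
// Sketch of the vertex-correction step: apply the inverse of the oblique
// model's origin matrix to each recorded vertex, yielding local coordinates.
// Assumes row-major 4x4 matrices and a pure-translation origin matrix.
function applyMatrix(m, p) {
  // m: 4x4 row-major matrix, p: [x, y, z]; returns the transformed [x, y, z]
  const [x, y, z] = p;
  return [
    m[0] * x + m[1] * y + m[2] * z + m[3],
    m[4] * x + m[5] * y + m[6] * z + m[7],
    m[8] * x + m[9] * y + m[10] * z + m[11],
  ];
}

// For a pure-translation origin matrix, the inverse negates the offset column.
function invertTranslation(m) {
  const inv = m.slice();
  inv[3] = -m[3]; inv[7] = -m[7]; inv[11] = -m[11];
  return inv;
}

// Hypothetical origin matrix placing the model at (100, 200, 30):
const origin = [
  1, 0, 0, 100,
  0, 1, 0, 200,
  0, 0, 1, 30,
  0, 0, 0, 1,
];
const corrected = applyMatrix(invertTranslation(origin), [105, 206, 37]);
// corrected → [5, 6, 7]: the vertex expressed relative to the model origin
```

In practice the origin matrix of a tileset also contains rotation, so a general 4×4 inverse would be used; Cesium provides matrix utilities for this.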

定义任意垂直地面向量 v(例如 v = (0, 0, 1))与纠正后两两坐标点构成的向量 ei = Pi+1' − Pi' 进行叉乘,生成最终所需的裁剪面法向量 Ni = v × ei,其中 i = 0, 1, 2, …。Define an arbitrary vector v perpendicular to the ground (for example v = (0, 0, 1)) and cross it with the vectors ei = Pi+1' − Pi' formed by pairs of corrected coordinate points, generating the final required clipping plane normal vectors Ni = v × ei, where i = 0, 1, 2, ….

通过参数法向量以及裁剪面坐标点可以直接构造ClippingPlane集合,实现裁剪效果。The ClippingPlane collection can be constructed directly from the normal vector parameter and a point on the clipping plane, achieving the clipping effect.
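The edge-to-plane construction above can be sketched in plain JavaScript. This is a simplified local-coordinate illustration, not the patent's code: the helpers (`cross`, `normalize`, `buildClippingPlanes`) are illustrative names, and the resulting `{ normal, distance }` pairs mirror the shape of Cesium's `ClippingPlane(normal, distance)` constructor without depending on Cesium itself.

```javascript
// Sketch: build one clipping plane per polygon edge by crossing a vertical
// ground vector with each edge vector, then store (normal, distance).
function cross(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}
// vertices: corrected polygon vertices in local coordinates; up: vertical vector.
function buildClippingPlanes(vertices, up = [0, 0, 1]) {
  return vertices.map((p, i) => {
    const q = vertices[(i + 1) % vertices.length];
    const edge = [q[0] - p[0], q[1] - p[1], q[2] - p[2]];
    const normal = normalize(cross(up, edge));
    // Plane equation n·x = d; d is the signed distance of the plane from the origin.
    const distance = normal[0] * p[0] + normal[1] * p[1] + normal[2] * p[2];
    return { normal, distance };
  });
}

// Unit-square footprint at ground level (counterclockwise order):
const planes = buildClippingPlanes([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]);
// planes[0].normal → [0, 1, 0]: perpendicular to the edge (0,0)→(1,0),
// pointing toward the polygon interior for a counterclockwise boundary
```

With a counterclockwise boundary the normals point into the footprint, which is the orientation needed so the planes cut away the oblique-photography geometry inside the BIM building.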

在一种可能的实施例方式中,根据实时采集的室内外移动目标的坐标点、移动轨迹和属性信息,构建移动目标的实时感知模型包括:根据监控装置通过建立的WebSocket实时传输通道发送的室内外移动目标的坐标点、移动轨迹和属性信息,构建移动目标的实时感知模型;将构建的移动目标的实时感知模型融合于实景三维模型中,并在前端进行显示之后还包括:根据监控装置实时更新的移动目标的坐标点和移动轨迹,更新移动目标的实时感知模型的位置和朝向,并将更新后的移动目标的实时感知模型在前端进行显示。In a possible implementation manner, constructing a real-time perception model of a mobile target based on the coordinate points, movement trajectory and attribute information of indoor and outdoor mobile targets collected in real time includes: constructing a real-time perception model of the mobile target based on the coordinate points, movement trajectory and attribute information of the indoor and outdoor mobile targets sent by the monitoring device through the established WebSocket real-time transmission channel; integrating the constructed real-time perception model of the mobile target into the real-scene three-dimensional model, and displaying it on the front end also includes: updating the position and orientation of the real-time perception model of the mobile target based on the coordinate points and movement trajectory of the mobile target updated in real time by the monitoring device, and displaying the updated real-time perception model of the mobile target on the front end.

可以理解的是,将BIM模型和倾斜摄影模型融合,生成室内外场景的实景三维模型后,需要构建其中的各个移动目标的精细模型。此时,根据设置于各个位置的监控装置采集室内外场景的各个移动目标的动作,比如,采集各个移动目标的坐标点、移动轨迹以及自身的属性信息,其中,各个移动目标主要是指路面的人、车等,人、车的自身属性信息主要包括人车的颜色、大小、车牌等,根据移动目标的坐标点、移动轨迹以及自身属性信息等这些信息,构造移动目标的实时感知模型。将构造的各个移动目标的实时感知模型融合于实景三维模型中,并在前端进行显示,呈现人车模型以及整个室内外场景的实景三维模型。It is understandable that after the BIM model and the oblique photography model are integrated to generate a real-life 3D model of the indoor and outdoor scenes, it is necessary to construct a detailed model of each moving target therein. At this time, the actions of each moving target in the indoor and outdoor scenes are collected according to the monitoring devices set at various locations, for example, the coordinate points, movement trajectories and their own attribute information of each moving target are collected, wherein each moving target mainly refers to people and cars on the road, and the own attribute information of people and cars mainly includes the color, size, license plate, etc. of people and cars. According to the coordinate points, movement trajectories and their own attribute information of the moving targets, a real-time perception model of the moving targets is constructed. The constructed real-time perception models of each moving target are integrated into the real-life 3D model and displayed on the front end to present the people and car models and the real-life 3D model of the entire indoor and outdoor scenes.

其中,移动目标是实时运动的,因此,需要对移动目标的实时感知模型进行实时更新。具体地,根据设置于室内外各个位置的监控装置实时更新的移动目标的坐标点和移动轨迹,更新移动目标的实时感知模型的位置和朝向,并将更新后的移动目标的实时感知模型在前端进行显示。Since the mobile target moves in real time, its real-time perception model needs to be updated in real time. Specifically, the position and orientation of the real-time perception model are updated according to the coordinate points and movement trajectory of the mobile target reported in real time by the monitoring devices installed at various indoor and outdoor locations, and the updated real-time perception model of the mobile target is displayed on the front end.

在一种可能的实施例方式中,移动目标的实时感知模型包括人和车模型,还包括:当接收到监控装置通过建立的WebSocket实时传输通道发送的室内外移动目标的坐标点、移动轨迹和属性信息时,建立各个移动目标的实时感知模型ID与时间、位置、朝向之间的对应关系;所述根据监控装置实时更新的移动目标的坐标点和移动轨迹,更新移动目标的实时感知模型的位置和朝向包括:向更新朝向函数headingPitchRollQuaternion中输入移动目标的实时感知模型ID、时间、位置、朝向,以实现相应的移动目标的实时感知模型在对应时间的位置和朝向的更新。In a possible implementation manner, the real-time perception model of the mobile target includes a human and a vehicle model, and also includes: when the coordinate points, movement trajectories and attribute information of indoor and outdoor mobile targets are sent by the monitoring device through the established WebSocket real-time transmission channel, a correspondence between the real-time perception model ID of each mobile target and the time, position and orientation is established; the position and orientation of the real-time perception model of the mobile target is updated according to the coordinate points and movement trajectories of the mobile target updated in real time by the monitoring device, including: inputting the real-time perception model ID, time, position and orientation of the mobile target into the update orientation function headingPitchRollQuaternion to realize the update of the position and orientation of the real-time perception model of the corresponding mobile target at the corresponding time.

可以理解的是,对实景三维模型中各个移动目标的实时感知模型进行更新的过程如下:可在室内外各个位置设置监控装置,各个监控装置与前端通过建立WebSocket实时网络传输通道实现数据流的传输。It can be understood that the process of updating the real-time perception model of each moving target in the real-scene three-dimensional model is as follows: monitoring devices can be installed at various indoor and outdoor locations, and each monitoring device and the front end transmit data streams through an established WebSocket real-time network transmission channel.

监控装置通过建立的WebSocket实时传输通道向前端发送采集到的室内外移动目标的坐标点、移动轨迹和属性信息,当前端接收到监控装置发送的室内外移动目标的坐标点、移动轨迹和属性信息时,建立各个移动目标的实时感知模型ID与时间、位置、朝向之间的对应关系。The monitoring device sends the collected coordinate points, movement trajectories and attribute information of indoor and outdoor mobile targets to the front end through the established WebSocket real-time transmission channel. When the front end receives the coordinate points, movement trajectories and attribute information of indoor and outdoor mobile targets sent by the monitoring device, a correspondence between the real-time perception model ID of each mobile target and the time, position and orientation is established.
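The front-end bookkeeping described above can be sketched as follows. This is a minimal illustration under stated assumptions: the message shape (`id`, `time`, `position`, `heading`, `attributes`) is hypothetical, since the patent does not specify the feed's schema, and only the correspondence table itself is shown.

```javascript
// Sketch: on each WebSocket message, record the correspondence between a
// moving target's perception-model ID and its latest time/position/heading.
const targetStates = new Map(); // model ID -> { time, position, heading, attributes }

function onTargetMessage(msg) {
  // msg (assumed shape): { id, time, position: [lon, lat, h], heading, attributes }
  targetStates.set(msg.id, {
    time: msg.time,
    position: msg.position,
    heading: msg.heading,
    attributes: msg.attributes,
  });
}

// In a browser this would be wired to the real-time channel, e.g.:
//   new WebSocket(url).onmessage = (e) => onTargetMessage(JSON.parse(e.data));
onTargetMessage({ id: 'car-01', time: 0, position: [114.30, 30.5, 0], heading: 90, attributes: { color: 'red' } });
onTargetMessage({ id: 'car-01', time: 1, position: [114.31, 30.5, 0], heading: 90, attributes: { color: 'red' } });
// targetStates.get('car-01').time → 1: the latest update for an ID wins
```

Keying the table by model ID is what lets a later position/orientation update find and move the correct model in the scene.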

随着时间推移,当每一个移动目标发生移动时,根据监控装置实时更新的移动目标的坐标点和移动轨迹,更新移动目标的实时感知模型的位置和朝向。具体地,向更新朝向函数headingPitchRollQuaternion中输入移动目标的实时感知模型ID、时间、位置、朝向,以实现相应的移动目标的实时感知模型在对应时间的位置和朝向的更新,可实时准确地呈现人车的运动情况。As time passes, whenever a mobile target moves, the position and orientation of its real-time perception model are updated according to the coordinate points and movement trajectory reported in real time by the monitoring device. Specifically, the real-time perception model ID, time, position and orientation of the mobile target are input into the orientation update function headingPitchRollQuaternion to update the position and orientation of the real-time perception model of the corresponding mobile target at the corresponding time, so that the movement of people and vehicles can be presented accurately in real time.
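The orientation update can be sketched with the underlying math. This is a local-frame illustration, not the Cesium source: Cesium's `Transforms.headingPitchRollQuaternion` additionally accounts for the Earth-fixed frame at the model's position, while the sketch below shows only the standard heading/pitch/roll (Z-Y-X) to quaternion conversion it is built on.

```javascript
// Sketch: convert heading/pitch/roll (radians) to an orientation quaternion
// using the standard Z-Y-X (yaw-pitch-roll) composition.
function hprToQuaternion(heading, pitch, roll) {
  const cy = Math.cos(heading / 2), sy = Math.sin(heading / 2);
  const cp = Math.cos(pitch / 2),   sp = Math.sin(pitch / 2);
  const cr = Math.cos(roll / 2),    sr = Math.sin(roll / 2);
  return {
    w: cr * cp * cy + sr * sp * sy,
    x: sr * cp * cy - cr * sp * sy,
    y: cr * sp * cy + sr * cp * sy,
    z: cr * cp * sy - sr * sp * cy,
  };
}

// A target turning to heading 90° (π/2) with level attitude:
const q = hprToQuaternion(Math.PI / 2, 0, 0);
// q.w ≈ 0.7071, q.z ≈ 0.7071 — a 90° rotation about the vertical axis
```

Feeding the updated heading of each target through such a conversion each frame is what keeps the person/vehicle models facing their direction of travel.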

可参见图4,对本发明实施例提供的多模型融合方法进行描述。首先,根据航拍影像图像,对BIM模型进行切片贴图,以及对倾斜摄影模型进行切片贴图,并对BIM模型和倾斜摄影模型效果进行渲染融合,生成渲染后的实景三维模型。根据实时采集的移动目标的移动轨迹,实时更新实景三维模型中的人车模型,并实时展示更新后的实景三维模型。Referring to FIG. 4, the multi-model fusion method provided by the embodiment of the present invention is described. First, according to the aerial images, the BIM model and the oblique photography model are sliced and mapped, and the rendering effects of the BIM model and the oblique photography model are fused to generate a rendered real-life 3D model. According to the movement trajectories of the mobile targets collected in real time, the human and vehicle models in the real-life 3D model are updated in real time, and the updated real-life 3D model is displayed in real time.

其中,本发明实施例所用场景为公司附近倾斜摄影以及公司内部BIM场景,融合BIM后的三维实景模型面积为1平方公里,融合识别后的人车测试模型约80个;测试机器为i5-8250U处理器,集成显卡,加载融合后的场景模型不卡顿。由于数据实时传输只与网络带宽有关且不受机器性能影响,因此AI识别数据传输的实时性不受机器影响;实景三维模型加载对GPU渲染有一定的要求,并且与加载的实景三维数据量有关,建议采用高性能独立显卡机器,因此在大场景实景模型以及人车模型实时融合时需要提高机器性能。Among them, the scenes used in the embodiment of the present invention are oblique photography near the company and the company's internal BIM scenes. The area of the fused three-dimensional real-life model is 1 square kilometer, and there are about 80 fused and recognized human and vehicle test models; the test machine uses an i5-8250U processor with an integrated graphics card, and the fused scene model loads without stuttering. Since real-time data transmission depends only on network bandwidth and is unaffected by machine performance, the real-time delivery of AI recognition data is machine-independent; loading the real-life three-dimensional model places certain demands on GPU rendering and depends on the amount of real-life three-dimensional data loaded, so a machine with a high-performance discrete graphics card is recommended, and machine performance needs to be improved when fusing large-scene real-life models with human and vehicle models in real time.

本发明实施例根据航拍摄影图像,生成室内外场景的BIM模型和倾斜摄影模型;通过B/S端方式且采用GPU渲染方式对BIM模型和倾斜摄影模型进行渲染;对渲染后的BIM模型和倾斜摄影模型进行融合,得到融合后的实景三维模型;根据实时采集的室内外移动目标的坐标点、移动轨迹和属性信息,构建移动目标的实时感知模型,将构建的移动目标的实时感知模型融合于实景三维模型中,并在前端进行显示。本发明实施例能够将倾斜摄影模型和BIM模型等多种模型融合,建立模型的各个节点与航拍影像图像之间的对应关系实现模型的批量贴图;通过B/S端且基于GPU与服务器渲染技术,将倾斜摄影数据与BIM数据模型在浏览器端进行浏览和访问,提升用户的浏览体验;建立WebSocket网络通讯,将时间、位置建立一一对应联系,平滑、流畅并且准确地实现人车模型坐标轨迹点位的动态更新。The embodiment of the present invention generates BIM models and oblique photography models of indoor and outdoor scenes based on aerial photography images; renders the BIM model and the oblique photography model through the B/S mode using GPU rendering; fuses the rendered BIM model and oblique photography model to obtain a fused real-scene three-dimensional model; constructs a real-time perception model of the mobile target based on the coordinate points, movement trajectory and attribute information of the indoor and outdoor mobile targets collected in real time, fuses the constructed real-time perception model into the real-scene three-dimensional model, and displays it on the front end. The embodiment of the present invention can fuse multiple models such as oblique photography models and BIM models, and establish the correspondence between each node of the model and the aerial images to realize batch mapping of the model; through the B/S end and based on GPU and server rendering technology, the oblique photography data and the BIM data model can be browsed and accessed on the browser side, improving the user's browsing experience; WebSocket network communication is established with a one-to-one correspondence between time and position, so that the coordinate trajectory points of the human and vehicle models are updated dynamically, smoothly and accurately.

图5为本发明实施例提供的一种多模型融合系统结构图,如图5所示,一种多模型融合系统包括贴图模块501、渲染模块502、融合模块503和显示模块504,其中:FIG5 is a structural diagram of a multi-model fusion system provided by an embodiment of the present invention. As shown in FIG5 , a multi-model fusion system includes a mapping module 501, a rendering module 502, a fusion module 503 and a display module 504, wherein:

贴图模块501,用于根据航拍摄影图像,对BIM模型进行贴图处理,生成室内外场景的BIM模型和倾斜摄影模型;A mapping module 501 is used to perform mapping processing on the BIM model according to the aerial photography image to generate a BIM model and an oblique photography model of the indoor and outdoor scenes;

渲染模块502,用于通过B/S端方式且采用GPU渲染方式对室内外场景的BIM模型和倾斜摄影模型进行渲染;A rendering module 502 is used to render the BIM model and the oblique photography model of the indoor and outdoor scenes by using a B/S terminal mode and a GPU rendering mode;

融合模块503,用于对渲染后的BIM模型和倾斜摄影模型进行融合,得到融合后的实景三维模型;A fusion module 503 is used to fuse the rendered BIM model and the oblique photography model to obtain a fused real-scene 3D model;

显示模块504,用于根据实时采集的室内外移动目标的坐标点、移动轨迹和属性信息,构建移动目标的实时感知模型,将构建的移动目标的实时感知模型融合于实景三维模型中,并在前端进行显示。The display module 504 is used to construct a real-time perception model of the mobile target based on the coordinate points, movement trajectory and attribute information of the indoor and outdoor mobile targets collected in real time, integrate the constructed real-time perception model of the mobile target into the real-scene three-dimensional model, and display it on the front end.

本发明实施例提供的多模型融合系统与前述各实施例提供的多模型融合方法相对应,多模型融合系统的相关技术特征可参考多模型融合方法的相关技术特征,在此不再重复说明。The multi-model fusion system provided in an embodiment of the present invention corresponds to the multi-model fusion method provided in the aforementioned embodiments. The relevant technical features of the multi-model fusion system can refer to the relevant technical features of the multi-model fusion method, which will not be repeated here.

请参阅图6,图6为本申请实施例提供的电子设备的实施例示意图。如图6所示,本申请实施例提供了一种电子设备,包括存储器610、处理器620及存储在存储器610上并可在处理器620上运行的计算机程序611,处理器620执行计算机程序611时实现以下步骤:根据航拍摄影图像,对BIM模型进行贴图处理,生成室内外场景的BIM模型和倾斜摄影模型;通过B/S端方式且采用GPU渲染方式对室内外场景的BIM模型和倾斜摄影模型进行渲染;对渲染后的BIM模型和倾斜摄影模型进行融合,得到融合后的实景三维模型;根据实时采集的室内外移动目标的坐标点、移动轨迹和属性信息,构建移动目标的实时感知模型,将构建的移动目标的实时感知模型融合于实景三维模型中,并在前端进行显示。Please refer to FIG. 6, which is a schematic diagram of an embodiment of an electronic device provided by an embodiment of the present application. As shown in FIG. 6, an embodiment of the present application provides an electronic device, including a memory 610, a processor 620, and a computer program 611 stored in the memory 610 and executable on the processor 620. When the processor 620 executes the computer program 611, the following steps are implemented: according to the aerial photography image, the BIM model is mapped to generate the BIM model and the oblique photography model of the indoor and outdoor scenes; the BIM model and the oblique photography model of the indoor and outdoor scenes are rendered through the B/S mode using GPU rendering; the rendered BIM model and the oblique photography model are fused to obtain a fused real-life three-dimensional model; according to the coordinate points, movement trajectories and attribute information of the indoor and outdoor mobile targets collected in real time, a real-time perception model of the mobile target is constructed, integrated into the real-life three-dimensional model, and displayed on the front end.

请参阅图7,图7为本申请实施例提供的一种计算机可读存储介质的实施例示意图。如图7所示,本实施例提供了一种计算机可读存储介质700,其上存储有计算机程序711,该计算机程序711被处理器执行时实现如下步骤:根据航拍摄影图像,对BIM模型进行贴图处理,生成室内外场景的BIM模型和倾斜摄影模型;通过B/S端方式且采用GPU渲染方式对室内外场景的BIM模型和倾斜摄影模型进行渲染;对渲染后的BIM模型和倾斜摄影模型进行融合,得到融合后的实景三维模型;根据实时采集的室内外移动目标的坐标点、移动轨迹和属性信息,构建移动目标的实时感知模型,将构建的移动目标的实时感知模型融合于实景三维模型中,并在前端进行显示。Please refer to FIG. 7, which is a schematic diagram of an embodiment of a computer-readable storage medium provided by an embodiment of the present application. As shown in FIG. 7, this embodiment provides a computer-readable storage medium 700, on which a computer program 711 is stored, and when the computer program 711 is executed by a processor, the following steps are implemented: according to the aerial photography image, the BIM model is mapped to generate the BIM model and the oblique photography model of the indoor and outdoor scenes; the BIM model and the oblique photography model of the indoor and outdoor scenes are rendered through the B/S mode using GPU rendering; the rendered BIM model and the oblique photography model are fused to obtain a fused real-life three-dimensional model; according to the coordinate points, movement trajectories and attribute information of the indoor and outdoor mobile targets collected in real time, a real-time perception model of the mobile target is constructed, integrated into the real-life three-dimensional model, and displayed on the front end.

需要说明的是,在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详细描述的部分,可以参见其它实施例的相关描述。It should be noted that in the above embodiments, the description of each embodiment has its own emphasis, and for parts that are not described in detail in a certain embodiment, reference can be made to the relevant descriptions of other embodiments.

本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。Those skilled in the art will appreciate that the embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the present application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.

本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式计算机或者其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。The present application is described with reference to the flowchart and/or block diagram of the method, device (system), and computer program product according to the embodiment of the present application. It should be understood that each process and/or box in the flowchart and/or block diagram, as well as the combination of the process and/or box in the flowchart and/or block diagram can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded computer, or other programmable data processing device to generate a machine, so that the instructions executed by the processor of the computer or other programmable data processing device generate a device for implementing the functions specified in one process or multiple processes in the flowchart and/or one box or multiple boxes in the block diagram.

这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured product including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.

这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。These computer program instructions may also be loaded onto a computer or other programmable data processing device so that a series of operational steps are executed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.

尽管已描述了本申请的优选实施例,但本领域内的技术人员一旦得知了基本创造概念,则可对这些实施例作出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请范围的所有变更和修改。Although the preferred embodiments of the present application have been described, those skilled in the art may make additional changes and modifications to these embodiments once they have learned the basic creative concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present application.

显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包括这些改动和变型在内。Obviously, those skilled in the art can make various changes and modifications to the present application without departing from the spirit and scope of the present application. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include these modifications and variations.

Claims (8)

1.一种基于智慧城市的多模型融合方法,其特征在于,包括:1. A multi-model fusion method based on smart city, characterized by comprising:
根据航拍摄影图像,对BIM模型进行贴图处理,生成室内外场景的BIM模型;According to the aerial photography images, the BIM model is mapped to generate the BIM model of indoor and outdoor scenes;
通过B/S端方式且采用GPU渲染方式对室内外场景的BIM模型和倾斜摄影模型进行渲染;Rendering the BIM model and oblique photography model of indoor and outdoor scenes through the B/S end and GPU rendering;
对渲染后的BIM模型和倾斜摄影模型进行融合,得到融合后的实景三维模型;The rendered BIM model and the oblique photography model are fused to obtain a fused real-scene 3D model;
根据实时采集的室内外移动目标的坐标点、移动轨迹和属性信息,构建移动目标的实时感知模型,将构建的移动目标的实时感知模型融合于实景三维模型中,并在前端进行显示;According to the coordinate points, movement trajectory and attribute information of indoor and outdoor moving targets collected in real time, a real-time perception model of the moving target is constructed, integrated into the real-scene 3D model, and displayed on the front end;
所述对渲染后的BIM模型和倾斜摄影模型进行融合,得到融合后的实景三维模型包括:The fusion of the rendered BIM model and the oblique photography model to obtain a fused real-scene 3D model includes:
对于BIM模型和倾斜摄影模型的重叠部分,基于Cesium三维地球框架对倾斜摄影模型进行裁剪;For the overlapping parts of the BIM model and the oblique photography model, the oblique photography model is cropped based on the Cesium 3D earth framework;
将BIM模型融合于裁剪后的倾斜摄影模型中,生成实景三维模型;The BIM model is integrated into the cropped oblique photography model to generate a real-life 3D model;
所述对于BIM模型和倾斜摄影模型的重叠部分,基于Cesium三维地球框架对倾斜摄影模型进行裁剪包括:For the overlapping part of the BIM model and the oblique photography model, the clipping of the oblique photography model based on the Cesium three-dimensional earth framework includes:
对于倾斜摄影模型,以BIM模型边界为初始多边形裁剪区域,记录所述多边形裁剪区域的每一条边的经纬度坐标值;For the oblique photography model, the BIM model boundary is used as the initial polygonal clipping area, and the longitude and latitude coordinate values of each edge of the polygonal clipping area are recorded;
以倾斜摄影模型的坐标原点为基准进行坐标值的矩阵运算纠正所述多边形裁剪区域的坐标;Taking the coordinate origin of the oblique photography model as a reference, a matrix operation on the coordinate values is performed to correct the coordinates of the polygonal clipping area;
以纠正后的多边形裁剪区域的每一条边所在面作为裁剪面,得到多个裁剪面;Taking the surface where each edge of the corrected polygonal clipping area is located as a clipping surface, a plurality of clipping surfaces are obtained;
根据任一个裁剪面的法向量和倾斜摄影模型的坐标原点到任一个裁剪面的最短距离,重新构造裁剪面,最终得到重新构造的多个裁剪面;Reconstructing each clipping surface according to its normal vector and the shortest distance from the coordinate origin of the oblique photography model to that clipping surface, finally obtaining a plurality of reconstructed clipping surfaces;
基于重新构造的多个裁剪面,对倾斜摄影模型进行裁剪。Based on the reconstructed multiple clipping surfaces, the oblique photography model is clipped.

2.根据权利要求1所述的多模型融合方法,其特征在于,所述根据航拍摄影图像,对BIM模型进行贴图处理,生成室内外场景的BIM模型包括:2. The multi-model fusion method according to claim 1, characterized in that mapping the BIM model according to the aerial photography image to generate the BIM model of the indoor and outdoor scenes comprises:
对BIM模型所用IFC格式进行解析,拆分所述BIM模型的节点;Parsing the IFC format used by the BIM model and splitting the nodes of the BIM model;
建立BIM模型的各节点与航拍摄影图像之间的对应关系;Establishing the correspondence between each node of the BIM model and the aerial photography images;
将与BIM模型的各个节点对应的航拍摄影图像导入BIM模型中,对BIM模型进行批量贴图,生成室内外场景的BIM模型。Importing the aerial photography images corresponding to each node of the BIM model into the BIM model, and batch-mapping the BIM model to generate the BIM model of indoor and outdoor scenes.

3.根据权利要求1或2所述的多模型融合方法,其特征在于,所述通过B/S端方式且采用GPU渲染方式对室内外场景的BIM模型和倾斜摄影模型进行渲染包括:3. The multi-model fusion method according to claim 1 or 2, characterized in that rendering the BIM model and the oblique photography model of the indoor and outdoor scenes by means of a B/S terminal and a GPU rendering method comprises:
在B/S端将BIM模型数据结构转换为3D Tiles数据格式,采用Cesium地图引擎对BIM模型数据进行加载渲染。On the B/S side, the BIM model data structure is converted into the 3D Tiles data format, and the Cesium map engine is used to load and render the BIM model data.

4.根据权利要求3所述的多模型融合方法,其特征在于,还包括:4. The multi-model fusion method according to claim 3, further comprising:
通过在Cesium地图引擎中设置最大屏幕空间误差参数降低屏幕出错率和维持BIM模型的精细程度;Setting the maximum screen space error parameter in the Cesium map engine to reduce the screen error rate and maintain the level of detail of the BIM model;
通过配置优先加载屏幕中央图块参数提高对BIM模型数据的加载速度。Configuring the parameter for preferentially loading the tiles at the center of the screen to improve the loading speed of BIM model data.

5.根据权利要求1所述的多模型融合方法,其特征在于,所述根据实时采集的室内外移动目标的坐标点、移动轨迹和属性信息,构建移动目标的实时感知模型包括:5. The multi-model fusion method according to claim 1, characterized in that constructing a real-time perception model of the moving target based on the coordinate points, movement trajectories and attribute information of indoor and outdoor moving targets collected in real time comprises:
根据监控装置通过建立的WebSocket实时传输通道发送的室内外移动目标的坐标点、移动轨迹和属性信息,构建移动目标的实时感知模型;Constructing a real-time perception model of the mobile target according to the coordinate points, movement trajectories and attribute information of indoor and outdoor mobile targets sent by the monitoring device through the established WebSocket real-time transmission channel;
所述将构建的移动目标的实时感知模型融合于实景三维模型中,并在前端进行显示之后还包括:After integrating the constructed real-time perception model of the moving target into the real-scene three-dimensional model and displaying it on the front end, the method further includes:
根据监控装置实时更新的移动目标的坐标点和移动轨迹,更新移动目标的实时感知模型的位置和朝向,并将更新后的移动目标的实时感知模型在前端进行显示。According to the coordinate points and movement trajectory of the moving target updated in real time by the monitoring device, the position and orientation of the real-time perception model of the moving target are updated, and the updated real-time perception model of the moving target is displayed on the front end.

6.根据权利要求5所述的多模型融合方法,其特征在于,所述移动目标的实时感知模型包括人和车模型,还包括:6. The multi-model fusion method according to claim 5, characterized in that the real-time perception model of the mobile target includes human and vehicle models, and the method further includes:
当接收到监控装置通过建立的WebSocket实时传输通道发送的室内外移动目标的坐标点、移动轨迹和属性信息时,建立各个移动目标的实时感知模型ID与时间、位置、朝向之间的对应关系;When receiving the coordinate points, movement trajectories and attribute information of indoor and outdoor mobile targets sent by the monitoring device through the established WebSocket real-time transmission channel, establishing a correspondence between the real-time perception model ID of each mobile target and the time, position and orientation;
所述根据监控装置实时更新的移动目标的坐标点和移动轨迹,更新移动目标的实时感知模型的位置和朝向包括:The updating of the position and orientation of the real-time perception model of the moving target according to the coordinate points and movement trajectory of the moving target updated in real time by the monitoring device comprises:
向更新朝向函数headingPitchRollQuaternion中输入移动目标的实时感知模型ID、时间、位置、朝向,以实现相应的移动目标的实时感知模型在对应时间的位置和朝向的更新。Inputting the real-time perception model ID, time, position and orientation of the mobile target into the orientation update function headingPitchRollQuaternion to update the position and orientation of the real-time perception model of the corresponding mobile target at the corresponding time.

7.一种多模型融合系统,其特征在于,包括:7. A multi-model fusion system, characterized by comprising:
贴图模块,用于根据航拍摄影图像,对BIM模型进行贴图处理,生成室内外场景的BIM模型;A mapping module, used to map the BIM model based on the aerial photography images to generate the BIM model of the indoor and outdoor scenes;
渲染模块,用于通过B/S端方式且采用GPU渲染方式对室内外场景的BIM模型和倾斜摄影模型进行渲染;A rendering module, used to render the BIM model and oblique photography model of indoor and outdoor scenes through the B/S end and the GPU rendering method;
融合模块,用于对渲染后的BIM模型和倾斜摄影模型进行融合,得到融合后的实景三维模型;A fusion module, used to fuse the rendered BIM model and the oblique photography model to obtain a fused real-scene 3D model;
显示模块,用于根据实时采集的室内外移动目标的坐标点、移动轨迹和属性信息,构建移动目标的实时感知模型,将构建的移动目标的实时感知模型融合于实景三维模型中,并在前端进行显示;A display module, used to construct a real-time perception model of the moving target based on the coordinate points, movement trajectory and attribute information of the indoor and outdoor moving targets collected in real time, integrate the constructed real-time perception model into the real-scene three-dimensional model, and display it on the front end;
所述对渲染后的BIM模型和倾斜摄影模型进行融合,得到融合后的实景三维模型包括:The fusion of the rendered BIM model and the oblique photography model to obtain a fused real-scene 3D model includes:
对于BIM模型和倾斜摄影模型的重叠部分,基于Cesium三维地球框架对倾斜摄影模型进行裁剪;For the overlapping parts of the BIM model and the oblique photography model, the oblique photography model is cropped based on the Cesium 3D earth framework;
将BIM模型融合于裁剪后的倾斜摄影模型中,生成实景三维模型;The BIM model is integrated into the cropped oblique photography model to generate a real-life 3D model;
所述对于BIM模型和倾斜摄影模型的重叠部分,基于Cesium三维地球框架对倾斜摄影模型进行裁剪包括:For the overlapping part of the BIM model and the oblique photography model, the clipping of the oblique photography model based on the Cesium three-dimensional earth framework includes:
对于倾斜摄影模型,以BIM模型边界为初始多边形裁剪区域,记录所述多边形裁剪区域的每一条边的经纬度坐标值;For the oblique photography model, the BIM model boundary is used as the initial polygonal clipping area, and the longitude and latitude coordinate values of each edge of the polygonal clipping area are recorded;
以倾斜摄影模型的坐标原点为基准进行坐标值的矩阵运算纠正所述多边形裁剪区域的坐标;Taking the coordinate origin of the oblique photography model as a reference, a matrix operation on the coordinate values is performed to correct the coordinates of the polygonal clipping area;
以纠正后的多边形裁剪区域的每一条边所在面作为裁剪面,得到多个裁剪面;Taking the surface where each edge of the corrected polygonal clipping area is located as a clipping surface, a plurality of clipping surfaces are obtained;
根据任一个裁剪面的法向量和倾斜摄影模型的坐标原点到任一个裁剪面的最短距离,重新构造裁剪面,最终得到重新构造的多个裁剪面;Reconstructing each clipping surface according to its normal vector and the shortest distance from the coordinate origin of the oblique photography model to that clipping surface, finally obtaining a plurality of reconstructed clipping surfaces;
基于重新构造的多个裁剪面,对倾斜摄影模型进行裁剪。Based on the reconstructed multiple clipping surfaces, the oblique photography model is clipped.

8.一种电子设备,其特征在于,包括存储器、处理器,所述处理器用于执行存储器中存储的计算机管理类程序时实现如权利要求1-6任一项所述的多模型融合方法的步骤。8. An electronic device, characterized by comprising a memory and a processor, wherein the processor implements the steps of the multi-model fusion method according to any one of claims 1 to 6 when executing a computer management program stored in the memory.
CN202011402153.0A 2020-12-04 2020-12-04 Multi-model fusion method and system based on smart city Active CN112560137B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011402153.0A CN112560137B (en) 2020-12-04 2020-12-04 Multi-model fusion method and system based on smart city

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011402153.0A CN112560137B (en) 2020-12-04 2020-12-04 Multi-model fusion method and system based on smart city

Publications (2)

Publication Number Publication Date
CN112560137A CN112560137A (en) 2021-03-26
CN112560137B true CN112560137B (en) 2024-11-01

Family

ID=75048349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011402153.0A Active CN112560137B (en) 2020-12-04 2020-12-04 Multi-model fusion method and system based on smart city

Country Status (1)

Country Link
CN (1) CN112560137B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113179420B (en) * 2021-04-26 2022-08-30 本影(上海)网络科技有限公司 City-level wide-area high-precision CIM scene server dynamic stream rendering technical method
CN113392873A (en) * 2021-05-11 2021-09-14 华建数创(上海)科技有限公司 Method for constructing multi-layer city model through multi-source data fusion
CN115330662A (en) * 2021-05-11 2022-11-11 广联达科技股份有限公司 Model fusion method, device, electronic device and readable storage medium
CN113423002B (en) * 2021-06-29 2023-05-23 上海禹创工程顾问有限公司 Fusion display method and device based on Internet of things data and BIM model
CN113660509A (en) * 2021-10-18 2021-11-16 上海飞机制造有限公司 Three-dimensional model processing system and method based on cloud rendering
CN114283251B (en) * 2021-12-28 2024-04-09 航天科工智能运筹与信息安全研究院(武汉)有限公司 Real-time access method for data of barracks and Internet of things sensing equipment based on three-dimensional scene
CN114429512A (en) * 2022-01-06 2022-05-03 中国中煤能源集团有限公司 Fusion display method and device for BIM and live-action three-dimensional model of coal preparation plant
CN114781036A (en) * 2022-04-26 2022-07-22 云知声智能科技股份有限公司 Building information model modeling and rendering method, device, equipment and medium
CN114748872B (en) * 2022-06-13 2022-09-02 深圳市乐易网络股份有限公司 Game rendering updating method based on information fusion
CN115393494B (en) * 2022-08-24 2023-10-17 北京百度网讯科技有限公司 Urban model rendering method, device, equipment and medium based on artificial intelligence
CN115361543B (en) * 2022-10-21 2023-03-24 武汉光谷信息技术股份有限公司 Heterogeneous data fusion and plug flow method and system based on ARM architecture
CN116126981A (en) * 2022-12-02 2023-05-16 合肥泽众城市智能科技有限公司 A method of using 3D visualization technology in urban security business scenarios
CN118172507B (en) * 2024-05-13 2024-08-02 国网山东省电力公司济宁市任城区供电公司 Digital twinning-based three-dimensional reconstruction method and system for fusion of transformer substation scenes

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107576311A (en) * 2017-08-23 2018-01-12 长江水利委员会长江科学院 A real-time monitoring method for reservoir inspection based on 3D GIS
CN108197325A (en) * 2018-02-06 2018-06-22 覃睿 A kind of virtual three-dimensional outdoor scene is gone sightseeing application process and system in the air

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6459438B1 (en) * 2000-02-02 2002-10-01 Ati International Srl Method and apparatus for determining clipping distance
CN107180066B (en) * 2017-01-31 2017-12-15 张军民 Three-dimensional police geographical information platform and system architecture based on three dimensions coding
CN107197200A (en) * 2017-05-22 2017-09-22 北斗羲和城市空间科技(北京)有限公司 It is a kind of to realize the method and device that monitor video is shown
CN109085966B (en) * 2018-06-15 2020-09-08 广东康云多维视觉智能科技有限公司 Three-dimensional display system and method based on cloud computing
CN109934893B (en) * 2019-03-21 2022-11-25 广联达科技股份有限公司 Method and device for displaying any cross section of geometric body and electronic equipment
CN110222137B (en) * 2019-06-11 2022-12-30 鲁东大学 Intelligent campus system based on oblique photography and augmented reality technology
CN110415343B (en) * 2019-08-05 2023-07-21 中国电建集团北京勘测设计研究院有限公司 A three-dimensional engine system for engineering BIM visualization
CN110345920A (en) * 2019-08-20 2019-10-18 南通四建集团有限公司 An automatic synchronization method of scene and model images based on Beidou GNSS and BIM
CN111260777B (en) * 2020-02-25 2023-08-04 中国电建集团华东勘测设计研究院有限公司 Building information model reconstruction method based on oblique photogrammetry technology

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107576311A (en) * 2017-08-23 2018-01-12 长江水利委员会长江科学院 A real-time monitoring method for reservoir inspection based on 3D GIS
CN108197325A (en) * 2018-02-06 2018-06-22 覃睿 A kind of virtual three-dimensional outdoor scene is gone sightseeing application process and system in the air

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Application of Integration Technology of Oblique Photogrammetry and BIM 3D Modeling; Luo Yao et al.; Journal of Geomatics; 2020-08-05; Vol. 45, No. 04; pp. 40-45, 126 *

Also Published As

Publication number Publication date
CN112560137A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN112560137B (en) Multi-model fusion method and system based on smart city
CN111008422B (en) A method and system for making a real-world map of a building
US9626790B1 (en) View-dependent textures for interactive geographic information system
US9996976B2 (en) System and method for real-time overlay of map features onto a video feed
CN104778744B (en) Extensive three-dimensional forest Visual Scene method for building up based on Lidar data
US12033271B2 (en) 3D structure engine-based computation platform
CN111932668B (en) Three-dimensional visualization method, system, medium and electronic equipment for urban landscape model
CN114972599B (en) A method for virtualizing a scene
CN103268221B (en) A kind of meteorological data body 3 D displaying method based on WEB technology and device
CN116012542B (en) A method and device for dynamic visualization of earthquake disasters
CN114140588A (en) Digital sand table creating method and device, electronic equipment and storage medium
WO2023207963A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN116982087A (en) Method, apparatus and computer program product for constructing and configuring a model of a three-dimensional space scene
US20240212282A1 (en) Image rendering method and apparatus, device, and medium
WO2025077567A1 (en) Three-dimensional model output method, apparatus and device, and computer readable storage medium
WO2024124370A1 (en) Model construction method and apparatus, storage medium, and electronic device
CN106875480B (en) Method for organizing urban three-dimensional data
CN114119800B (en) Multimedia particle display method, device, equipment and storage medium
CN118628073B (en) A natural resource remote collaborative survey method, system, device and medium thereof
CN113470181B (en) Plane construction method, device, electronic equipment and storage medium
CN118520065B (en) GIS map-based data processing method and device, electronic equipment and storage medium
CN118691487B (en) High-low precision Digital Elevation Model (DEM) fusion method and system for three-dimensional digital power grid
CN117115382B (en) Map road drawing method, device, computer equipment and storage medium
Wang et al. 3D Reconstruction and Rendering Models in Urban Architectural Design Using Kalman Filter Correction Algorithm
Simoes et al. i-Scope: A City GML Framework for Mobile Devices.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant