
WO2020034086A1 - Three-dimensional reconstruction method and apparatus for scene, and electronic device and storage medium - Google Patents

Three-dimensional reconstruction method and apparatus for scene, and electronic device and storage medium

Info

Publication number: WO2020034086A1
Application number: PCT/CN2018/100390 (CN2018100390W)
Authority: WIPO (PCT)
Prior art keywords: subspace, position information, scene, data, shooting area
Other languages: French (fr), Chinese (zh)
Inventors: 王恺, 廉士国
Original assignee: 深圳前海达闼云端智能科技有限公司
Application filed by: 深圳前海达闼云端智能科技有限公司
Priority applications: PCT/CN2018/100390 (WO2020034086A1); CN201880001285.7A (CN109155846B)
Publication: WO2020034086A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N 13/20 Image signal generators
    • H04N 13/257 Colour aspects
    • H04N 13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Definitions

  • the present application relates to the field of computer vision, and in particular, to a method, a device, an electronic device, and a storage medium for three-dimensional reconstruction of a scene.
  • the robot needs to fully understand the scene it is in when performing operations such as navigation and obstacle avoidance.
  • the three-dimensional reconstruction of the scene is one of the core technologies for the robot to fully understand the scene it is in.
  • To ensure that the reconstructed area is valid and that the reconstruction is accurate, the reconstruction work often needs to be performed manually or under human supervision. This requires that the reconstruction of the scene be completed in real time and rendered in real time for reference.
  • The inventors discovered, during research of the prior art, that during 3D reconstruction and rendering of a scene the amount of data involved constantly increases, the consumption of memory and video memory also increases, and the related computing resources are continuously occupied, so that 3D reconstruction becomes slower and slower, scene reconstruction and rendering are delayed, and real-time reconstruction and rendering cannot be achieved; moreover, once the reconstruction reaches a certain range, such as the scope of a room, reconstruction and rendering cannot continue, which limits the scope of the scene that can be reconstructed.
  • a technical problem to be solved in some embodiments of the present application is to provide a method for 3D reconstruction and rendering of a scene, so that the speed of 3D reconstruction and rendering can be improved, and the scope of the scene for 3D reconstruction and rendering can be expanded.
  • An embodiment of the present application provides a method for three-dimensional reconstruction of a scene, including: acquiring image data of a current shooting area; determining first position information of the current shooting area according to the image data; dynamically adjusting the data stored in a first memory according to the first position information and the position information of a first subspace corresponding to the previous shooting area; reconstructing a three-dimensional model of a second subspace corresponding to the current shooting area according to the adjusted data stored in the first memory, and rendering according to the three-dimensional reconstruction data of the second subspace; wherein the first subspace and the second subspace are each one of N subspaces obtained by dividing the scene, and N is an integer greater than 0.
  • An embodiment of the present application further provides a scene scanning device, including: an acquisition module, a first position information determination module, an adjustment module, a three-dimensional model reconstruction module, and a three-dimensional model rendering module. The acquisition module is configured to acquire image data of a current shooting area; the first position information determination module is configured to determine the first position information of the current shooting area according to the image data; the adjustment module is configured to dynamically adjust the data stored in a first memory according to the first position information and the position information of a first subspace corresponding to the previous shooting area; the three-dimensional model reconstruction module is configured to reconstruct a three-dimensional model of a second subspace corresponding to the current shooting area according to the adjusted data stored in the first memory; and the three-dimensional model rendering module is configured to render according to the three-dimensional reconstruction data of the second subspace, wherein the first subspace and the second subspace are each one of N subspaces obtained by dividing the scene, and N is an integer greater than 0.
  • An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the three-dimensional reconstruction method of the scene described above.
  • An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the foregoing three-dimensional reconstruction method of a scene.
  • In some embodiments of the present application, the data in the first memory is dynamically adjusted according to the first position information of the current shooting area and the position information of the first subspace corresponding to the previous shooting area. This ensures that the first memory has enough space to perform calculations for the current shooting area and for the image data of the next shooting area, so that no calculation delay is caused by an excessive amount of data and the reconstruction calculation and rendering of the scene are not interrupted because the scope of the scene is large; the method is therefore suitable for the reconstruction and rendering of scenes of various sizes.
  • Because the scene is divided into subspaces and is reconstructed and rendered subspace by subspace, each subspace can be reconstructed and rendered independently, which avoids the problem of slow 3D reconstruction and rendering of the scene caused by an excessive amount of data being reconstructed and rendered each time.
  • FIG. 1 is a schematic flowchart of a three-dimensional reconstruction method of a scene in the first embodiment of the present application;
  • FIG. 2 is a schematic diagram of scene division in the first embodiment of the present application;
  • FIG. 3 is a schematic diagram of calculating the distance between subspace A and subspace D in the first embodiment of the present application;
  • FIG. 4 is a schematic flowchart of a three-dimensional reconstruction method of a scene in the second embodiment of the present application;
  • FIG. 5 is a schematic flowchart of a three-dimensional reconstruction method of a scene in the third embodiment of the present application;
  • FIG. 6 is a schematic diagram of adjacent subspaces in a three-dimensional reconstruction method of a scene in the third embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of a three-dimensional reconstruction device for a scene in the fourth embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of an electronic device in the fifth embodiment of the present application.
  • the first embodiment of the present application relates to a method for three-dimensional reconstruction of a scene.
  • the method for three-dimensional reconstruction of a scene may be applied to an electronic device or a cloud.
  • the electronic device may be a smart robot, a driverless car, or the like.
  • the cloud communicates with electronic devices to provide the terminal with scan results of the scene.
  • This embodiment is described by using an electronic device to perform a three-dimensional reconstruction method of the scene as an example.
  • For a process of performing the three-dimensional reconstruction method of the scene in the cloud refer to the content of the embodiment of the present application.
  • The specific flow of the three-dimensional reconstruction method of this scene is shown in FIG. 1.
  • the scene in this embodiment takes a large-scale scene as an example, such as several rooms, a floor, a gymnasium, and the like.
  • Step 101: Acquire the image data of the current shooting area.
  • Specifically, the terminal can collect image data of the current shooting area through one or more sensors, for example a color (red green blue, RGB) camera and a depth camera. The RGB camera and the depth camera need to be aligned before capturing image data, so that the color and depth (red green blue depth, RGBD) data of the current shooting area can be collected.
  • Before acquiring the image data of the current shooting area, the electronic device may first divide the scene in which the current shooting area is located into multiple subspaces. The process of dividing the subspaces is as follows: obtain the volume data of the scene in which the current shooting area is located; according to the volume data, divide the scene into N subspaces, where N is an integer greater than 0.
  • the volume of the scene in which the current shooting area is located can be obtained by manual input, or the volume data can be obtained through the cloud.
  • the manner of obtaining the volume data of the scene is not limited.
  • The number of subspaces can be determined in advance, or the scene can be divided into an arbitrary number of subspaces, and the size and shape of each subspace can be the same. Of course, after the scene is divided into N subspaces, the size of each subspace can also be dynamically adjusted according to the data within it, and after the adjustment the sizes and shapes of the subspaces can differ.
  • It can be understood that if each subspace is the same size and is a cuboid, then each subspace is adjacent to at most 6 surrounding subspaces.
  • In a specific implementation, the scene can be divided in combination with an octree data structure. The division process is as follows: according to the volume data of the scene in which the current shooting area is located and the preset maximum recursion depth of the octree, the scene is divided into N subspaces, and the N subspaces correspond to the child nodes at each level of recursion depth.
  • the maximum recursion depth of the octree can be set in advance according to the obtained volume data of the scene.
  • the scene is divided into spaces using the data structure of the octree, and each of the divided subspaces corresponds to a node in the octree.
  • the following uses a specific example to illustrate the division of the scene's subspace and the process corresponding to the nodes in the octree:
  • Taking a room as the scene, as shown in FIG. 2, the scene is represented by a cube S and the preset recursion depth is 2. The division according to the preset recursion depth proceeds as follows: the cube S is divided into 8 subspaces; in FIG. 2, the parent node of the octree is A, and the first-level nodes B0 to B7 correspond to the sub-cubes S0 to S7. S0 to S7 are then divided in the same manner as the first level; for example, sub-cube S0 can be divided into 8 subspaces (S01 to S08), with S01 to S08 corresponding to the second-level child nodes C0 to C7, and the subspaces corresponding to the remaining sub-cubes (S1 to S7) can be divided in the same way. In this example, the volume of the subspaces corresponding to the child nodes at each level of the octree is the same; of course, in practical applications, the volumes may differ.
  • By associating the child nodes of the octree with the divided subspaces, the data in the subspaces adjacent to the current shooting area can be obtained through the correspondence between nodes and subspaces during the subspace-by-subspace three-dimensional reconstruction of the scene.
  • It should be noted that, in the process of dividing the scene into subspaces, each time a subspace is divided, its corresponding spatial position is marked; after the subspace division of the scene is completed, the position information and size of every subspace can be determined. A minimal code sketch of such a division is given below.
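  • The sketch below illustrates, in Python, one way a cubic scene could be divided with an octree of a preset maximum recursion depth, marking each leaf subspace with its position and size. The class and function names, and the choice of a 10 m room, are illustrative assumptions; the patent does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class OctreeNode:
    # Axis-aligned cube: minimum corner (origin) and edge length (size).
    origin: Tuple[float, float, float]
    size: float
    depth: int
    children: List["OctreeNode"] = field(default_factory=list)

def divide_scene(origin, size, max_depth, depth=0):
    """Recursively divide a cubic scene; the leaves are the N subspaces.

    Each node records its spatial position (origin) and size, so the
    position information of every subspace is known once division ends.
    """
    node = OctreeNode(origin=origin, size=size, depth=depth)
    if depth == max_depth:
        return node  # leaf node: one subspace
    half = size / 2.0
    ox, oy, oz = origin
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                child = divide_scene((ox + dx * half, oy + dy * half, oz + dz * half),
                                     half, max_depth, depth + 1)
                node.children.append(child)
    return node

def leaf_subspaces(node):
    """Collect all leaf subspaces (the N subspaces of the scene)."""
    if not node.children:
        return [node]
    leaves = []
    for child in node.children:
        leaves.extend(leaf_subspaces(child))
    return leaves

# Example: a 10 m cubic room divided with a preset recursion depth of 2
# yields 8**2 = 64 subspaces (cf. the example around FIG. 2).
root = divide_scene(origin=(0.0, 0.0, 0.0), size=10.0, max_depth=2)
print(len(leaf_subspaces(root)))  # 64
```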
  • Step 102: Determine the first position information of the current shooting area according to the image data.
  • In a specific implementation, the electronic device constructs the point cloud or grid data corresponding to the current shooting area according to the image data, obtains the position information of the point cloud or grid data, and determines the first position information of the current shooting area according to the position information of the point cloud or grid data.
  • Specifically, based on the RGBD data in the image data, the electronic device can calculate the point cloud data or grid data corresponding to the image data through a matrix transformation formula. Since the point cloud contains multiple points, the position at the center of the point cloud can be used as the first position information of the point cloud data; similarly, the center position of the grid can be used as the first position information of the grid data. A sketch of this computation is given below.
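  • The matrix transformation itself is not specified in the text. The sketch below assumes an ideal pinhole camera with known intrinsics (fx, fy, cx, cy) and a depth image in metres, back-projects the depth into a camera-space point cloud, and uses the centroid as the center of the point cloud; the intrinsic values and the choice of the centroid are assumptions for illustration only.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, in metres) into camera-space points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[z.reshape(-1) > 0]  # drop invalid (zero-depth) pixels

def first_position_info(points):
    """Use the centre of the point cloud as the first position information."""
    return points.mean(axis=0)

# Example with a synthetic 480 x 640 depth image in which every pixel is 2 m away.
depth = np.full((480, 640), 2.0)
pts = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(first_position_info(pts))  # roughly [0, 0, 2]
```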
  • Step 103: Dynamically adjust the data stored in the first memory according to the first position information and the position information of the first subspace corresponding to the previous shooting area.
  • the electronic device determines an adjustment mode according to the first position information and the position information of the first subspace corresponding to the previous shooting area; according to the adjustment mode, the data stored in the first memory is adjusted.
  • the adjustment mode is a first adjustment mode, a second adjustment mode, or a third adjustment mode.
  • The first adjustment mode is to save the data in the first memory elsewhere and delete it, read the data of the second subspace corresponding to the current shooting area from the second memory, and add, in the first memory, the added data determined from the image data. The second adjustment mode is to read the data of the second subspace corresponding to the current shooting area from the second memory and add, in the first memory, the added data determined from the image data. The third adjustment mode is to add, in the first memory, the added data determined from the image data.
  • the added data may be point cloud or grid data corresponding to the current shooting area.
  • Those skilled in the art can understand that other adjustment modes are also possible, which are not listed here one by one.
  • It should be noted that the first subspace and the second subspace are each one of the N subspaces obtained by dividing the scene, where N is an integer greater than 0.
  • Specifically, the second subspace corresponding to the current shooting area is determined according to the first position information and the position information of the first subspace corresponding to the previous shooting area, or according to the first position information and the position information of all subspaces; the positional relationship between the first subspace and the second subspace is determined according to the position information of the first subspace and the position information of the second subspace; and the adjustment mode is determined according to that positional relationship. The two ways of determining the second subspace corresponding to the current shooting area are described below.
  • Method 1: Determine whether the first position information is within the range of the position information of the first subspace; if so, use the first subspace as the second subspace corresponding to the current shooting area; if not, determine the second subspace corresponding to the current shooting area according to the position information of the other subspaces.
  • the first subspace is the subspace corresponding to the previous shooting area.
  • the position information of the first subspace includes the positions of the vertices of the first subspace.
  • The range formed by the vertices of the first subspace is taken as the range of the position information of the first subspace. It is determined whether the first position information is within this range; if so, the current shooting area is located in the first subspace, and the first subspace is used as the second subspace corresponding to the current shooting area, that is, the first subspace corresponding to the previous shooting area and the second subspace corresponding to the current shooting area are the same subspace.
  • If the first position information is not within the range of the position information of the first subspace, it indicates that the second subspace corresponding to the current shooting area and the first subspace corresponding to the previous shooting area are not the same subspace. In this case, it can be determined, for each subspace in the scene other than the first subspace, whether the range of its position information contains the first position information, and the subspace containing the first position information is selected as the second subspace.
  • Method 2: Determine whether the first position information is included in the range of the position information of each subspace, and determine the second subspace corresponding to the current shooting area according to the determination result.
  • This method is similar to Method 1; the only difference is that in Method 2 the subspace containing the first position information is searched for according to the position information of all the subspaces, and that subspace is used as the second subspace. Both methods reduce to a point-in-range test, sketched below.
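  • The sketch below illustrates the point-in-range test and Method 1. The `origin` and `size` fields follow the hypothetical subspace structure of the earlier octree sketch and are assumptions, not the patent's own data layout.

```python
def contains(subspace, position):
    """True if `position` lies inside the range spanned by the subspace's vertices."""
    (ox, oy, oz), s = subspace.origin, subspace.size
    x, y, z = position
    return ox <= x <= ox + s and oy <= y <= oy + s and oz <= z <= oz + s

def find_second_subspace(first_position, first_subspace, all_subspaces):
    """Method 1: check the previous (first) subspace before scanning the rest."""
    if contains(first_subspace, first_position):
        return first_subspace          # same subspace as the previous shooting area
    for sub in all_subspaces:
        if sub is not first_subspace and contains(sub, first_position):
            return sub                 # the subspace whose range holds the position
    return None                        # the position falls outside the divided scene
```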
  • the method for determining the positional relationship between the first subspace and the second subspace based on the position information of the first subspace and the position information of the second subspace is described below as an example.
  • Specifically, the distance between the first subspace and the second subspace is calculated according to the position information of the first subspace and the position information of the second subspace. If the calculated distance is greater than a preset distance threshold, the positional relationship between the first subspace and the second subspace is determined to be non-adjacent; if the calculated distance is less than the preset distance threshold and is not zero, the positional relationship between the first subspace and the second subspace is determined to be adjacent; if the calculated distance is zero, it is determined that the first subspace and the second subspace are in the same position.
  • the distance between the geometric center point of the first subspace and the geometric center point of the second subspace is used as the distance between the first subspace and the second subspace.
  • The preset distance threshold can be set according to the actual application; it is related to the number of reconstruction points in the point cloud data (or the number of meshes in the grid data), the size of the subspaces, and the size of the first memory. For example, if the capacity of the first memory is 1 GB, which is relatively small, the preset distance threshold can be set smaller.
  • This embodiment specifically introduces a method for calculating the distance between the first subspace and the second subspace by using the geodesic distance.
  • Specifically, a connection graph of all the subspaces is established, in which the geometric center point of each subspace is a connection point. The distance between the first subspace and the second subspace is the shortest distance between the two corresponding connection points in the connection graph, that is, the number of edges on the shortest path connecting the two connection points.
  • For example, suppose there are 4 subspaces A, B, C and D, where A is the first subspace and D is the second subspace; A and B are adjacent, B and C are adjacent, C and D are adjacent, and A and C are adjacent. As shown in FIG. 3, the connection graph contains 4 connection points A, B, C and D. The distance between the first subspace and the second subspace is the shortest distance between connection point A and connection point D: the shortest path is A-C-D, and the number of edges on this shortest path is 2. This shortest distance is the geodesic distance; a sketch of the computation follows.
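  • The geodesic distance is the number of edges on the shortest path between two connection points in the connection graph. A minimal breadth-first-search sketch for the A/B/C/D example follows; representing the connection graph as an adjacency dictionary is an assumption, since the text describes the graph only conceptually.

```python
from collections import deque

def geodesic_distance(graph, start, goal):
    """Shortest number of edges between two connection points (breadth-first search)."""
    if start == goal:
        return 0
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph[node]:
            if neighbour == goal:
                return dist + 1
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, dist + 1))
    return float("inf")  # the two subspaces are not connected

# Connection graph of the example: AB, BC, CD and AC are adjacent.
graph = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}
print(geodesic_distance(graph, "A", "D"))  # 2, via the shortest path A-C-D
```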
  • the method for determining the adjustment mode according to the positional relationship between the first subspace and the second subspace is described below as an example.
  • If the positional relationship between the first subspace and the second subspace is non-adjacent, the adjustment mode is determined to be the first adjustment mode; if the positional relationship between the first subspace and the second subspace is adjacent, the adjustment mode is determined to be the second adjustment mode; if the positional relationship between the first subspace and the second subspace is the same position, the adjustment mode is determined to be the third adjustment mode.
  • If the first subspace and the second subspace are not in the same position, it indicates that the first subspace and the second subspace in which the current shooting area is located are not the same subspace, and the data stored in the first memory needs to be adjusted.
  • If it is determined that the first subspace and the second subspace are adjacent, the second memory is searched for the data of the second subspace, that data is read into the first memory, and the point cloud or grid data is added to the data of the second subspace in the first memory. If it is determined that the first subspace and the second subspace are not adjacent, the data in the first subspace is first saved to the second memory together with the position information of the first subspace, the data of the first subspace is then deleted from the first memory, the queried data of the second subspace is loaded, and the point cloud or grid data is added to the data of the second subspace in the first memory. A sketch of this adjustment logic follows.
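  • Putting these pieces together, the sketch below shows one way the positional relationship could select among the three adjustment modes and how each mode would touch the first memory (the working memory) and the second memory (the store for inactive subspaces). Modelling the memories as dictionaries, and all the names used, are illustrative assumptions rather than the patent's own data layout.

```python
def choose_adjustment_mode(distance, threshold):
    """Map the distance between the two subspaces to one of the three modes."""
    if distance == 0:
        return "third"    # same subspace: only add the new data
    if distance < threshold:
        return "second"   # adjacent: load the second subspace, then add
    return "first"        # non-adjacent: swap the first subspace out first

def adjust_first_memory(mode, first_memory, second_memory,
                        first_id, second_id, added_data):
    """Apply the chosen mode. Both memories are modelled as dicts: id -> data list."""
    if mode == "first":
        # Save the first subspace's data (with its position information kept
        # alongside, not shown) to the second memory, then drop it from the
        # first memory and load the queried second subspace.
        second_memory[first_id] = first_memory.pop(first_id, [])
        first_memory[second_id] = list(second_memory.get(second_id, []))
    elif mode == "second":
        # Adjacent: read the second subspace's data from the second memory.
        first_memory[second_id] = list(second_memory.get(second_id, []))
    # In every mode the point cloud / grid data derived from the new image
    # data is appended to the second subspace's data in the first memory.
    first_memory.setdefault(second_id, []).extend(added_data)
```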
  • It should be noted that the subspaces corresponding to the nodes in the octree can be updated. Specifically, when new data is added to the second subspace, the second subspace is divided again according to the principle of the octree data structure, and the correspondence between the octree nodes and the newly divided subspaces is adjusted.
  • Dividing the second subspace based on the added data can reduce the volume of the subspace and the amount of data within each divided subspace, and thus further matches the subspaces to the computing capacity of the first memory.
  • The correspondence between the octree nodes and the subspaces is adjusted so that, in the subsequent 3D reconstruction and rendering of the scene, the data in adjacent nodes can be quickly queried through the octree nodes.
  • Step 104: Perform a three-dimensional model reconstruction of the second subspace corresponding to the current shooting area according to the data stored in the adjusted first memory.
  • a three-dimensional model of the second subspace is constructed according to point cloud data or grid data in the second subspace.
  • Step 105: Render according to the 3D reconstruction data of the second subspace.
  • The 3D reconstruction data of the second subspace may be the 3D grid data of the second subspace, the 3D point cloud data of the second subspace, or both the 3D grid data and the 3D point cloud data of the second subspace. The first memory in this embodiment includes a memory for performing the three-dimensional reconstruction and a video memory for the rendering.
  • In this embodiment, the data in the first memory is dynamically adjusted according to the first position information of the current shooting area and the position information of the first subspace corresponding to the previous shooting area. This ensures that the first memory has enough space to perform calculations for the current shooting area and to perform three-dimensional reconstruction of the image data of the next shooting area, without the calculation delay that would be caused by an excessive amount of data. At the same time, the reconstruction calculation and rendering of the scene are not interrupted because the scope of the scene is large, so the method is suitable for the reconstruction and rendering of scenes of various sizes.
  • In addition, because the scene is divided into subspaces and is reconstructed and rendered subspace by subspace, each subspace can be reconstructed and rendered independently, which avoids the problem of slow 3D reconstruction and rendering of the scene caused by an excessive amount of data being reconstructed and rendered each time.
  • the second embodiment of the present application relates to a method for three-dimensional reconstruction of a scene.
  • the second embodiment is a further improvement on the first embodiment.
  • The main improvement is that, in this embodiment, after the three-dimensional reconstruction and rendering of all the subspaces are completed, all the subspace data in the scene are stitched together.
  • This embodiment includes steps 401 to 407. Steps 401 to 405 are roughly the same as steps 101 to 105 in the first embodiment and are not detailed here again; the following mainly introduces the differences:
  • Step 406: Detect whether the data of all the subspaces in the scene is contained in the second memory; if so, execute step 407; otherwise, return to step 401.
  • Specifically, the number of subspaces into which the scene is divided can be used: it is determined whether the number of subspaces contained in the second memory is the same as the number of subspaces into which the scene is divided. If they are the same, it is determined that the second memory contains the data of all the subspaces in the scene; otherwise, it is determined that the data of all the subspaces in the scene is not contained in the second memory. It can be understood that there may be other detection methods, for example, determining whether the second memory includes the position information of all the subspaces by using the position information of each subspace into which the scene is divided; these are not listed here one by one.
  • Step 407: According to the position information of each subspace, stitch the data of all the subspaces in the second memory to form the 3D reconstruction data of the scene, and render the 3D reconstruction data of the scene.
  • Specifically, the second memory may be a read-only memory. The data of all the subspaces, after splicing, constitutes the point cloud or grid data of the scene and forms the 3D reconstruction data of the scene; by rendering the 3D reconstruction data of the scene, a 3D model of the scene can be obtained. It can be understood that the stitching is performed in the first memory. A sketch of this completeness check and stitching is given below.
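  • Because every subspace carries its own position information, the detection of step 406 and the stitching of step 407 can be pictured as below. The sketch assumes that each subspace stores its points relative to its own origin; whether the stored coordinates are local or global is not stated in the text, so this is an illustrative choice.

```python
import numpy as np

def all_subspaces_present(second_memory, n_subspaces):
    """Step 406: all N subspaces of the scene have data in the second memory."""
    return len(second_memory) == n_subspaces

def stitch_scene(second_memory, subspace_origins):
    """Step 407: concatenate per-subspace points into one scene-level point cloud."""
    parts = []
    for sub_id, points in second_memory.items():
        if not len(points):
            continue
        origin = np.asarray(subspace_origins[sub_id])
        parts.append(np.asarray(points, dtype=float) + origin)  # local -> scene frame
    return np.concatenate(parts, axis=0)

# Example: two subspaces, each holding a single locally stored point.
second_memory = {"S0": [(0.1, 0.2, 0.3)], "S1": [(0.4, 0.5, 0.6)]}
origins = {"S0": (0.0, 0.0, 0.0), "S1": (5.0, 0.0, 0.0)}
if all_subspaces_present(second_memory, n_subspaces=2):
    print(stitch_scene(second_memory, origins))
```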
  • In the three-dimensional rendering method of the scene provided by this embodiment, because the data of each subspace is stored in the second memory and each subspace is relatively independent, the data of the subspaces can be fused with simple processing, with a small amount of calculation and at high speed; through stitching, the entire scene can be quickly rendered, which is suitable for 3D reconstruction and rendering of scenes of various sizes.
  • the third embodiment of the present application relates to a three-dimensional rendering method of a scene.
  • the third embodiment is a further improvement on the second embodiment.
  • The main improvement lies in that, in this embodiment, after the three-dimensional reconstruction of the second subspace, the volume of the second subspace is adjusted according to the amount of point cloud data or grid data in the second subspace. The specific process is shown in FIG. 5.
  • This embodiment includes steps 501 to 508. Steps 501 to 504 and steps 506 to 508 are roughly the same as steps 401 to 404 and steps 405 to 407 in the second embodiment, respectively, and are not detailed here again; the following mainly introduces the differences:
  • Step 505: Adjust the volume of the second subspace according to the amount of point cloud data or grid data in the second subspace.
  • Specifically, if the amount of point cloud data or grid data in the second subspace is greater than a first preset threshold, the second subspace is divided into at least one subspace. The manner of dividing the second subspace is the same as that in the first embodiment: for example, the second subspace is divided into 8 subspaces, and the subspace in which the first position information of the current shooting area is located is then used as the new second subspace.
  • If the amount of point cloud data or grid data in the second subspace is less than a second preset threshold, the second subspace is merged with an adjacent subspace. As shown in FIG. 6, subspace A is the second subspace, and the point cloud data or grid data in the second subspace and in subspace B are merged to form a new second subspace. A sketch of this split-or-merge adjustment is given after the note on thresholds below.
  • the first preset threshold and the second preset threshold are set according to the computing capability of the first memory in practical applications.
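  • Step 505 can be pictured as a split-or-merge decision driven by the two thresholds. The sketch below is an illustrative reading of that step: the subspace structure, the octant split, and the choice of which adjacent subspace to merge with are assumptions not fixed by the text.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Subspace:
    origin: Tuple[float, float, float]   # minimum corner
    size: float                          # edge length
    points: List[Tuple[float, float, float]] = field(default_factory=list)

def contains(sub, p):
    return all(o <= c <= o + sub.size for o, c in zip(sub.origin, p))

def split_into_octants(sub):
    """Re-divide a subspace into 8 children and hand each point to its octant."""
    half = sub.size / 2.0
    ox, oy, oz = sub.origin
    children = [Subspace((ox + dx * half, oy + dy * half, oz + dz * half), half)
                for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
    for p in sub.points:
        next(c for c in children if contains(c, p)).points.append(p)
    return children

def adjust_subspace_volume(second, first_position, neighbours,
                           split_threshold, merge_threshold):
    """Step 505 sketch: split an over-full subspace, merge an under-full one."""
    n = len(second.points)
    if n > split_threshold:
        # Keep the child that contains the first position information of the
        # current shooting area as the new, smaller second subspace.
        for child in split_into_octants(second):
            if contains(child, first_position):
                return child
    if n < merge_threshold and neighbours:
        # Merge with an adjacent subspace (e.g. subspace B in FIG. 6);
        # recomputing the merged bounds is omitted in this sketch.
        merged = neighbours[0]
        merged.points.extend(second.points)
        return merged
    return second  # volume unchanged
```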
  • The method for 3D reconstruction of a scene provided by this embodiment can automatically adjust the volume of a subspace according to the amount of data in the subspace, thereby ensuring the speed of 3D reconstruction and rendering of the subspace while also avoiding the waste of computing resources caused by too small an amount of data.
  • The fourth embodiment of the present application relates to a three-dimensional reconstruction device 70 for a scene, including: an acquisition module 701, a first position information determination module 702, an adjustment module 703, a three-dimensional model reconstruction module 704, and a three-dimensional model rendering module 705; the specific structure is shown in FIG. 7.
  • The acquisition module 701 is configured to acquire image data of a current shooting area; the first position information determination module 702 is configured to determine the first position information of the current shooting area according to the image data; the adjustment module 703 is configured to dynamically adjust the data stored in the first memory according to the first position information and the position information of the first subspace corresponding to the previous shooting area; the three-dimensional model reconstruction module 704 is configured to reconstruct a three-dimensional model of the second subspace corresponding to the current shooting area according to the data stored in the adjusted first memory; and the three-dimensional model rendering module 705 is configured to render according to the 3D reconstruction data of the second subspace; wherein the first subspace and the second subspace are each one of the N subspaces obtained by dividing the scene, and N is an integer greater than 0.
  • This embodiment is an embodiment of a virtual device corresponding to the three-dimensional reconstruction method of the foregoing scene.
  • the technical details in the foregoing method embodiments are still applicable in this embodiment, and are not described herein again.
  • The fifth embodiment of the present application relates to an electronic device whose structure is shown in FIG. 8. It includes: at least one processor 801; and a memory 802 communicatively connected to the at least one processor 801. The memory 802 stores instructions executable by the at least one processor 801, and the instructions are executed by the at least one processor 801 to enable the at least one processor 801 to perform the three-dimensional reconstruction method of the scene described above.
  • In this embodiment, the processor is a central processing unit (Central Processing Unit, CPU) and the memory is a readable and writable memory (Random Access Memory, RAM). The processor and the memory can be connected through a bus or in other ways; FIG. 8 takes connection via a bus as an example.
  • the memory can be used to store non-volatile software programs, non-volatile computer executable programs, and modules.
  • The processor executes various functional applications and data processing of the device by running the non-volatile software programs, instructions, and modules stored in the memory, thereby implementing the three-dimensional reconstruction method of the foregoing scene.
  • the memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function; the storage data area may store a list of options and the like.
  • the memory may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory may optionally include a memory remotely set with respect to the processor, and these remote memories may be connected to an external device through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • One or more modules are stored in the memory and, when executed by the one or more processors, perform the three-dimensional reconstruction method of a scene in any of the foregoing method embodiments.
  • The above-mentioned product can execute the three-dimensional reconstruction method of the scene provided by the embodiments of the present application, and has the functional modules corresponding to the execution of the method and its beneficial effects. For technical details not described in this embodiment, refer to the three-dimensional reconstruction method of the scene provided by the embodiments of the present application.
  • the sixth embodiment of the present application relates to a computer-readable storage medium.
  • the readable storage medium is a computer-readable storage medium.
  • The computer-readable storage medium stores computer instructions that enable a computer to execute the three-dimensional reconstruction method of a scene described in the foregoing embodiments of the present application.
  • The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A three-dimensional reconstruction method and apparatus for a scene, and an electronic device and a storage medium. The method is applied to an electronic device or a cloud, and comprises: acquiring image data of a currently photographed region (101); according to the image data, determining first position information of the currently photographed region (102); according to the first position information and position information of a first sub-space corresponding to a previously photographed region, dynamically adjusting data stored in a first memory (103); according to the adjusted data stored in the first memory, reconstructing a three-dimensional model of a second sub-space corresponding to the currently photographed region (104); and performing rendering according to the three-dimensional reconstruction data of the second sub-space, wherein the first sub-space and second sub-space are respectively one of N sub-spaces obtained by dividing the scene, and N is an integer greater than zero (105). The three-dimensional reconstruction method for a scene can increase the speed of three-dimensional reconstruction and rendering, and enlarge the scene scope of three-dimensional reconstruction and rendering.

Description

Three-dimensional reconstruction method, device, electronic device and storage medium for a scene

Technical field

The present application relates to the field of computer vision, and in particular, to a method, a device, an electronic device, and a storage medium for three-dimensional reconstruction of a scene.

Background

A robot needs to fully understand the scene it is in when performing operations such as navigation and obstacle avoidance, and three-dimensional reconstruction of the scene is one of the core technologies by which the robot gains that understanding. To ensure that the reconstructed area is valid and that the reconstruction is accurate, the reconstruction work often needs to be performed manually or under human supervision, which requires that the reconstruction of the scene be completed in real time and rendered in real time for reference.
Technical problem

The inventors discovered, during research of the prior art, that during 3D reconstruction and rendering of a scene the amount of data involved constantly increases, the consumption of memory and video memory also increases, and the related computing resources are continuously occupied, so that 3D reconstruction becomes slower and slower, scene reconstruction and rendering are delayed, and real-time reconstruction and rendering cannot be achieved; moreover, once the reconstruction reaches a certain range, such as the scope of a room, reconstruction and rendering cannot continue, which limits the scope of the scene that can be reconstructed.

It can be seen that how to increase the speed of 3D reconstruction and rendering, and how to expand the scope of the scenes that can be reconstructed and rendered, are problems that need to be solved.
Technical solutions

A technical problem to be solved by some embodiments of the present application is to provide a method for 3D reconstruction and rendering of a scene, so that the speed of 3D reconstruction and rendering can be improved and the scope of the scene that can be reconstructed and rendered in 3D can be expanded.

An embodiment of the present application provides a method for three-dimensional reconstruction of a scene, including: acquiring image data of a current shooting area; determining first position information of the current shooting area according to the image data; dynamically adjusting the data stored in a first memory according to the first position information and the position information of a first subspace corresponding to the previous shooting area; reconstructing a three-dimensional model of a second subspace corresponding to the current shooting area according to the adjusted data stored in the first memory, and rendering according to the three-dimensional reconstruction data of the second subspace; wherein the first subspace and the second subspace are each one of N subspaces obtained by dividing the scene, and N is an integer greater than 0.

An embodiment of the present application further provides a scene scanning device, including: an acquisition module, a first position information determination module, an adjustment module, a three-dimensional model reconstruction module, and a three-dimensional model rendering module. The acquisition module is configured to acquire image data of a current shooting area; the first position information determination module is configured to determine the first position information of the current shooting area according to the image data; the adjustment module is configured to dynamically adjust the data stored in a first memory according to the first position information and the position information of a first subspace corresponding to the previous shooting area; the three-dimensional model reconstruction module is configured to reconstruct a three-dimensional model of a second subspace corresponding to the current shooting area according to the adjusted data stored in the first memory; and the three-dimensional model rendering module is configured to render according to the three-dimensional reconstruction data of the second subspace, wherein the first subspace and the second subspace are each one of N subspaces obtained by dividing the scene, and N is an integer greater than 0.

An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the three-dimensional reconstruction method of the scene described above.

An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the foregoing three-dimensional reconstruction method of a scene.
Beneficial effects

Compared with the prior art, in some embodiments of the present application the data in the first memory is dynamically adjusted according to the first position information of the current shooting area and the position information of the first subspace corresponding to the previous shooting area. This ensures that the first memory has enough space to perform calculations for the current shooting area and for the image data of the next shooting area, so that no calculation delay is caused by an excessive amount of data and the reconstruction calculation and rendering of the scene are not interrupted because the scope of the scene is large; the approach is therefore suitable for the reconstruction and rendering of scenes of various sizes. In addition, because the scene is divided into subspaces and is reconstructed and rendered subspace by subspace, each subspace can be reconstructed and rendered independently, which avoids the problem of slow 3D reconstruction and rendering of the scene caused by an excessive amount of data being reconstructed and rendered each time.
Brief description of the drawings

One or more embodiments are illustrated by the figures in the accompanying drawings. These illustrations do not constitute a limitation on the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated the figures are not drawn to scale.

FIG. 1 is a schematic flowchart of a three-dimensional reconstruction method of a scene in the first embodiment of the present application;

FIG. 2 is a schematic diagram of scene division in the first embodiment of the present application;

FIG. 3 is a schematic diagram of calculating the distance between subspace A and subspace D in the first embodiment of the present application;

FIG. 4 is a schematic flowchart of a three-dimensional reconstruction method of a scene in the second embodiment of the present application;

FIG. 5 is a schematic flowchart of a three-dimensional reconstruction method of a scene in the third embodiment of the present application;

FIG. 6 is a schematic diagram of adjacent subspaces in a three-dimensional reconstruction method of a scene in the third embodiment of the present application;

FIG. 7 is a schematic structural diagram of a three-dimensional reconstruction device for a scene in the fourth embodiment of the present application;

FIG. 8 is a schematic structural diagram of an electronic device in the fifth embodiment of the present application.
Embodiments of the invention

In order to make the purpose, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not used to limit it. A person of ordinary skill in the art can understand that many technical details are provided in the embodiments of the present application in order to help the reader better understand the application; however, the technical solution claimed in this application can be implemented even without these technical details and with various changes and modifications based on the following embodiments.

The first embodiment of the present application relates to a method for three-dimensional reconstruction of a scene. The method may be applied to an electronic device or a cloud; the electronic device may be a smart robot, a driverless car, or the like. The cloud communicates with the electronic device to provide the terminal with the scan results of the scene. This embodiment is described by taking an electronic device performing the three-dimensional reconstruction method of the scene as an example; for the process of performing the method in the cloud, refer to the content of this embodiment. The specific flow of the three-dimensional reconstruction method of the scene is shown in FIG. 1.

The scene in this embodiment takes a large-scale scene as an example, such as several rooms, a floor of a building, or a gymnasium.

Step 101: Acquire the image data of the current shooting area.

Specifically, the terminal can collect image data of the current shooting area through one or more sensors, for example a color (red green blue, RGB) camera and a depth camera. The RGB camera and the depth camera need to be aligned before capturing image data, so that the color and depth (red green blue depth, RGBD) data of the current shooting area can be collected.
It should be noted that, before acquiring the image data of the current shooting area, the electronic device may first divide the scene in which the current shooting area is located into multiple subspaces. The process of dividing the subspaces is as follows: obtain the volume data of the scene in which the current shooting area is located; according to the volume data, divide the scene into N subspaces, where N is an integer greater than 0.

Specifically, the volume of the scene in which the current shooting area is located can be obtained by manual input, or the volume data can be obtained through the cloud; this embodiment does not limit the manner of obtaining the volume data of the scene. The number of subspaces can be determined in advance, or the scene can be divided into an arbitrary number of subspaces, and the size and shape of each subspace can be the same. Of course, after the scene is divided into N subspaces, the size of each subspace can also be dynamically adjusted according to the data within it, and after the adjustment the sizes and shapes of the subspaces can differ.

It can be understood that if each subspace is the same size and is a cuboid, then each subspace is adjacent to at most 6 surrounding subspaces.

In a specific implementation, the scene can be divided in combination with an octree data structure. The division process is as follows: according to the volume data of the scene in which the current shooting area is located and the preset maximum recursion depth of the octree, the scene is divided into N subspaces, and the N subspaces correspond to the child nodes at each level of recursion depth.

Specifically, the maximum recursion depth of the octree can be set in advance according to the obtained volume data of the scene. The scene is spatially divided using the octree data structure, and each divided subspace corresponds to a node in the octree. The following uses a specific example to illustrate the division of the scene into subspaces and the correspondence with the nodes of the octree.

Taking a room as the scene, as shown in FIG. 2, the scene is represented by a cube S and the preset recursion depth is 2. The division according to the preset recursion depth proceeds as follows: the cube S is divided into 8 subspaces; in FIG. 2, the parent node of the octree is A, and the first-level nodes B0 to B7 correspond to the sub-cubes S0 to S7. S0 to S7 are then divided in the same manner as the first level; for example, sub-cube S0 can be divided into 8 subspaces (S01 to S08), with S01 to S08 corresponding to the second-level child nodes C0 to C7, and the subspaces corresponding to the remaining sub-cubes (S1 to S7) can be divided in the same way. In this example, the volume of the subspaces corresponding to the child nodes at each level of the octree is the same; of course, in practical applications, the volumes may differ.

By associating the child nodes of the octree with the divided subspaces, the data in the subspaces adjacent to the current shooting area can be obtained through the correspondence between nodes and subspaces during the subspace-by-subspace three-dimensional reconstruction of the scene.

It should be noted that, in the process of dividing the scene into subspaces, each time a subspace is divided, its corresponding spatial position is marked; after the subspace division of the scene is completed, the position information and size of every subspace can be determined.
步骤102 :根据图像数据,确定当前拍摄区域的第一位置信息。             Step 102             : Determine the first position information of the current shooting area based on the image data.          
一个具体的实现中,电子设备根据图像数据,构建当前拍摄区域对应的点云或网格数据;获取点云或网格数据的位置信息,并根据该点云或网格数据的位置信息,确定当前拍摄区域的第一位置信息。             In a specific implementation, the electronic device constructs the point cloud or grid data corresponding to the current shooting area according to the image data; obtains the position information of the point cloud or grid data, and determines based on the position information of the point cloud or grid data. The first position information of the current shooting area.          
具体的说,电子设备可以根据图像数据中的RGBD 数据,通过矩阵变换公式,计算该图像数据对应的点云数据或者网格数据。由于点云数据包含多个点,可以以点云的中间的位置信息作为该点云数据的第一位置信息,同理可以将网格的中心位置作为该网格数据的第一位置信息。             Specifically, the electronic device can be based on the RGBD in the image data             For the data, the point cloud data or grid data corresponding to the image data is calculated through a matrix transformation formula. Since the point cloud data contains multiple points, the position information in the middle of the point cloud can be used as the first position information of the point cloud data, and the center position of the grid can be used as the first position information of the grid data.          
Step 103: Dynamically adjust the data stored in the first memory according to the first position information and the position information of the first subspace corresponding to the previous shooting area.

In a specific implementation, the electronic device determines an adjustment mode according to the first position information and the position information of the first subspace corresponding to the previous shooting area, and then adjusts the data stored in the first memory according to that adjustment mode. The adjustment mode is a first adjustment mode, a second adjustment mode or a third adjustment mode. In the first adjustment mode, the data in the first memory is saved elsewhere and deleted, the data of the second subspace corresponding to the current shooting area is read from the second memory, and the added data determined from the image data is added to the first memory. In the second adjustment mode, the data of the second subspace corresponding to the current shooting area is read from the second memory, and the added data determined from the image data is added to the first memory. In the third adjustment mode, only the added data determined from the image data is added to the first memory. The added data may be the point cloud or mesh data corresponding to the current shooting area.

Those skilled in the art will understand that other adjustment modes are also possible; they are not enumerated here one by one.

It should be noted that the first subspace and the second subspace are each one of the N subspaces obtained by dividing the scene, where N is an integer greater than 0.
Specifically, the second subspace corresponding to the current shooting area is determined either from the first position information and the position information of the first subspace corresponding to the previous shooting area, or from the first position information and the position information of all the subspaces; the positional relationship between the first subspace and the second subspace is then determined from the position information of the two subspaces; and the adjustment mode is determined from that positional relationship.

The two ways of determining the second subspace corresponding to the current shooting area are described below.

Method 1: Determine whether the first position information lies within the range covered by the position information of the first subspace. If it does, take the first subspace as the second subspace corresponding to the current shooting area; if it does not, determine the second subspace corresponding to the current shooting area from the position information of the subspaces other than the first subspace.

Specifically, the first subspace is the subspace corresponding to the previous shooting area, and its position information includes the positions of its corner points; the range enclosed by these corner points is taken as the range covered by the position information of the first subspace. If the first position information lies within this range, the current shooting area is located in the first subspace, and the first subspace is taken as the second subspace corresponding to the current shooting area; that is, the first subspace corresponding to the previous shooting area and the second subspace corresponding to the current shooting area are the same subspace. If the first position information does not lie within this range, the second subspace corresponding to the current shooting area is not the same subspace as the first subspace; in that case, it can be checked for every other subspace of the scene whether its range contains the first position information, and the subspace that contains the first position information is selected as the second subspace.

Method 2: Check for every subspace whether the range covered by its position information contains the first position information, and determine the second subspace corresponding to the current shooting area from the result.

Specifically, this method is similar to Method 1; the only difference is that Method 2 searches directly over the position information of all subspaces for the subspace containing the first position information and takes that subspace as the second subspace.
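A possible reading of Method 1 as code (reusing the OctreeNode fields from the earlier sketch; the function names are illustrative, not from the patent) is an axis-aligned containment test applied first to the previous subspace and then to the remaining ones:

```python
def contains(subspace, position, eps=0.0):
    """Axis-aligned containment test: is `position` inside the subspace's bounds?"""
    lo = subspace.origin
    hi = subspace.origin + subspace.size
    return all(lo[i] - eps <= position[i] <= hi[i] + eps for i in range(3))

def find_second_subspace(position, first_subspace, all_subspaces):
    """Method 1: check the previous subspace first, then fall back to the others."""
    if contains(first_subspace, position):
        return first_subspace
    for s in all_subspaces:
        if s is not first_subspace and contains(s, position):
            return s
    return None  # the position falls outside every known subspace
```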
An example of determining the positional relationship between the first subspace and the second subspace from their position information is given below.

In a specific implementation, after the second subspace has been determined, the distance between the first subspace and the second subspace is calculated from the position information of the two subspaces. If the distance is greater than a preset distance threshold, the positional relationship between the first subspace and the second subspace is determined to be non-adjacent; if the distance is smaller than the preset distance threshold and is not zero, the positional relationship is determined to be adjacent; and if the distance is zero, the first subspace and the second subspace are determined to be at the same position.

In this embodiment, the distance between the geometric center point of the first subspace and the geometric center point of the second subspace is used as the distance between the two subspaces. The preset distance threshold can be set according to the actual application; it is related to the number of reconstruction points in the point cloud data (or the number of cells in the mesh data) within a subspace, to the size of the subspaces, and to the capacity of the first memory. For example, if the capacity of the first memory is only 1 GB, the preset distance threshold can be set smaller.

There are many ways to calculate the distance between the first subspace and the second subspace; this embodiment describes one of them, which computes the distance as a geodesic distance.

Specifically, the geometric center points of adjacent subspaces are connected by straight lines to form a connection graph over all subspaces; the geometric center point of each subspace is a connection point in this graph, and the distance between the first subspace and the second subspace is the shortest distance between the two corresponding connection points in the graph (that is, the number of edges on the shortest path between the two connection points).

For example, suppose there are four subspaces A, B, C and D, where A is the first subspace and D is the second subspace, and A is adjacent to B, B to C, C to D, and A to C. Connecting the geometric centers of adjacent subspaces with straight lines yields the connection graph of all subspaces shown in Figure 3, in which the hollow circles represent the connection points (Figure 3 contains four connection points: A, B, C and D). The distance between the first subspace and the second subspace is then the shortest distance between connection point A and connection point D (the shortest path is A-C-D, i.e. the number of edges on the shortest path between A and D is 2); this shortest distance is the geodesic distance.
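The geodesic distance described above is simply a breadth-first shortest-path count over the connection graph; a small sketch (with the adjacency encoded as a plain dictionary, which is an implementation choice rather than something stated in the patent) reproduces the A-C-D example:

```python
from collections import deque

def geodesic_distance(adjacency, start, goal):
    """Shortest path length (in edges) between two subspaces in the connection graph.

    adjacency: dict mapping a subspace id to the ids of its adjacent subspaces.
    Returns None if the two subspaces are not connected.
    """
    if start == goal:
        return 0
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor == goal:
                return dist + 1
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

# The example from the text: A-B, B-C, C-D and A-C adjacent -> distance(A, D) == 2
adjacency = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
print(geodesic_distance(adjacency, "A", "D"))  # 2, via the path A-C-D
```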
An example of determining the adjustment mode from the positional relationship between the first subspace and the second subspace is given below.

In a specific implementation, if the positional relationship between the first subspace and the second subspace is determined to be non-adjacent, the adjustment mode is determined to be the first adjustment mode; if the positional relationship is determined to be adjacent, the adjustment mode is determined to be the second adjustment mode; and if the first subspace and the second subspace are determined to be at the same position, the adjustment mode is determined to be the third adjustment mode.

Specifically, if the positional relationship indicates that the first subspace and the second subspace are not at the same position, the second subspace in which the current shooting area lies is not the same subspace as the first subspace; in order to preserve the computing capacity available to the first memory, the data stored in the first memory is adjusted.

Specifically, if the first subspace and the second subspace are determined to be adjacent, the data of the first subspace in the first memory is left untouched; the data of the second subspace is looked up in the second memory according to the position information of the second subspace, read into the first memory, and the point cloud or mesh data is added in the first memory to the data contained in the second subspace. If the first subspace and the second subspace are determined to be non-adjacent, the data of the first subspace is first saved to the second memory together with the position information of the first subspace, the data of the first subspace is then deleted from the first memory, the retrieved data of the second subspace is loaded, and the point cloud or mesh data is added in the first memory to the data contained in the second subspace.
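One way to express this decision and the resulting memory adjustment in code is sketched below; the dictionaries standing in for the first and second memories, the key attribute on subspaces, and the handling of a distance exactly equal to the threshold are all assumptions made for the illustration:

```python
from enum import Enum

class AdjustMode(Enum):
    FIRST = 1   # save & delete the old subspace, load the new subspace, add new data
    SECOND = 2  # load the new subspace, add new data
    THIRD = 3   # only add new data

def choose_mode(distance, threshold):
    """Map the inter-subspace distance to one of the three adjustment modes."""
    if distance == 0:
        return AdjustMode.THIRD
    if distance < threshold:
        return AdjustMode.SECOND
    return AdjustMode.FIRST

def adjust_first_memory(mode, first_mem, second_mem, first_sub, second_sub, new_data):
    """Apply the chosen adjustment mode to the working (first) memory."""
    if mode is AdjustMode.FIRST:
        second_mem[first_sub.key] = first_mem.pop(first_sub.key)   # save & delete
        first_mem[second_sub.key] = second_mem.get(second_sub.key, [])
    elif mode is AdjustMode.SECOND:
        first_mem[second_sub.key] = second_mem.get(second_sub.key, [])
    first_mem.setdefault(second_sub.key, []).extend(new_data)      # add the new data
```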
It should be noted that, after the added data (point cloud data or mesh data) has been added to the first memory, the subspaces corresponding to the nodes of the octree can be updated. Specifically, when new data has been added to the second subspace, the second subspace is divided again according to the principle of the octree data structure, and the correspondence between the octree nodes and the newly divided subspaces is adjusted.

Dividing the second subspace on the basis of the added data reduces the volume of the subspaces and the amount of data in each divided subspace, which further tunes the computing capacity the first memory needs for a subspace. In addition, adjusting the correspondence of the octree nodes makes it possible, during the subsequent three-dimensional reconstruction and rendering of the scene, to quickly look up the data in neighboring nodes through the octree nodes.
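A sketch of this re-subdivision step, reusing OctreeNode and contains from the earlier sketches (the per-leaf point budget is a hypothetical parameter, not one named in the patent):

```python
def insert_points(node, points, max_points_per_leaf, max_depth):
    """Add points to a leaf subspace and re-subdivide it when it holds too much data."""
    node.data.extend(points)
    if len(node.data) > max_points_per_leaf and node.depth < max_depth:
        node.subdivide(max_depth=node.depth + 1)        # split this leaf one level deeper
        buffered, node.data = node.data, []
        for p in buffered:                              # hand each point to the child it falls in
            for child in node.children:
                if contains(child, p):
                    child.data.append(p)
                    break
```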
Step 104: Reconstruct a three-dimensional model of the second subspace corresponding to the current shooting area from the adjusted data stored in the first memory.

Specifically, once the data has been added, the three-dimensional model of the second subspace is constructed from the point cloud data or mesh data in the second subspace.

Step 105: Render according to the three-dimensional reconstruction data of the second subspace.

It should be noted that the three-dimensional reconstruction data of the second subspace may be the three-dimensional mesh data of the second subspace, its three-dimensional point cloud data, or both. In this embodiment, the first memory includes the main memory used for three-dimensional reconstruction and the video memory used for rendering.

Compared with the prior art, in some embodiments of the present application the data in the first memory is dynamically adjusted according to the first position information of the current shooting area and the position information of the first subspace corresponding to the previous shooting area. This ensures that the first memory has enough space to perform the computation for the current shooting area and enough space to reconstruct the image data of the next shooting area in three dimensions, so that computation is not delayed by an excessive amount of data and the reconstruction computation and rendering do not stall because the scene is large; the approach is therefore suitable for reconstructing and rendering scenes of any scale. In addition, because the scene is divided into subspaces and is reconstructed and rendered subspace by subspace, each subspace can be reconstructed and rendered independently, which avoids the slow reconstruction and rendering that would result from processing too much data at once.
The second embodiment of the present application relates to a method for three-dimensional reconstruction of a scene. The second embodiment is a further improvement of the first embodiment; the main improvement is that, in this embodiment, after the three-dimensional reconstruction and rendering of all subspaces has been completed, the data of all subspaces of the scene are stitched together.

This embodiment includes steps 401 to 407. Steps 401 to 405 are substantially the same as steps 101 to 105 of the first embodiment and are not described again here; the differences are described below.

Step 406: Detect whether the second memory contains the data of all subspaces of the scene; if it does, execute step 407, otherwise return to step 401.

Specifically, the number of subspaces into which the scene is divided can be used: if the number of subspaces contained in the second memory equals the number of subspaces into which the scene is divided, it is determined that the second memory contains the data of all subspaces of the scene; otherwise, it is determined that it does not. It will be appreciated that other detection methods are possible, for example checking, from the position information of each subspace of the scene, whether the second memory contains the position information of all subspaces; these are not enumerated here one by one.

Step 407: Stitch the data of all subspaces in the second memory together according to the position information of each subspace to form the three-dimensional reconstruction data of the scene, and render the three-dimensional reconstruction data of the scene.

Specifically, the second memory may be a read-only memory. After stitching, the data of all subspaces form the point cloud or mesh data of the scene, i.e. the three-dimensional reconstruction data of the scene, and rendering this data yields the three-dimensional model of the scene. It will be appreciated that the stitching is performed in the first memory.
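A minimal sketch of the stitching step; whether the stored blocks are kept in subspace-local or world coordinates is not specified in the text, so the sketch exposes that choice as a flag:

```python
import numpy as np

def stitch_scene(second_memory, subspace_origins, local_coords=False):
    """Concatenate the per-subspace point blocks stored in the second memory into one
    scene-level point cloud. If the blocks are stored in subspace-local coordinates,
    each block is shifted by its subspace origin; otherwise they are concatenated as-is."""
    blocks = []
    for key, points in second_memory.items():
        pts = np.asarray(points, dtype=float)
        if local_coords:
            pts = pts + np.asarray(subspace_origins[key], dtype=float)
        blocks.append(pts)
    return np.concatenate(blocks, axis=0) if blocks else np.empty((0, 3))
```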
Compared with the prior art, in the three-dimensional rendering method provided by this embodiment the data of every subspace is stored in the second memory and each subspace is relatively independent, so the data within each subspace can be fused simply, with little computation and at high speed; by stitching, the whole scene can be rendered quickly, which makes the method suitable for the three-dimensional reconstruction and rendering of scenes of any scale.

The third embodiment of the present application relates to a three-dimensional rendering method for a scene. The third embodiment is a further improvement of the second embodiment; the main improvement is that, in this embodiment, before rendering according to the three-dimensional reconstruction data of the second subspace, the volume of the second subspace is adjusted according to the amount of point cloud or mesh data in the second subspace. The specific flow is shown in Figure 5.
This embodiment includes steps 501 to 508. Steps 501 to 504 and steps 506 to 508 are substantially the same as steps 401 to 404 and steps 405 to 407 of the second embodiment and are not described again here; the differences are described below.

Step 505: Adjust the volume of the second subspace according to the amount of point cloud or mesh data in the second subspace.

In a specific implementation, it is determined whether the point cloud or mesh data in the second subspace exceeds a first preset value; if it does, the second subspace is divided into at least one subspace.

It should be noted that the second subspace is divided in the same way as in the first embodiment. For example, the second subspace is divided into 8 subspaces, and the subspace in which the first position information of the current shooting area lies is then taken as the new second subspace.

It is worth mentioning that when the second subspace contains too much point cloud data, adjusting the volume of the second subspace speeds up the computation on the second subspace in the first memory.

In another specific implementation, it is determined whether the point cloud or mesh data in the second subspace and in the subspaces adjacent to it are all smaller than a second preset value; if so, the adjacent subspaces are merged with the second subspace.

For example, subspace A is the second subspace and is adjacent to subspace B. When it is detected that the point cloud or mesh data in the second subspace and in subspace B are both smaller than the second preset threshold, the second subspace and subspace B are merged to form a new second subspace.

The first preset threshold and the second preset threshold are set according to the computing capacity of the first memory in the actual application.

It should be noted that two adjacent subspaces partially overlap. To give a specific example, as shown in Figure 6, subspace A and subspace AB are adjacent, and subspace AB and subspace B are adjacent; subspace AB partially overlaps subspace A and also partially overlaps subspace B. Because subspace AB contains part of subspace A as well as part of subspace B, the continuity of the subspace data is guaranteed. It will be appreciated that when several adjacent subspaces are merged, redundant data appears in the overlapping parts of the adjacent subspaces and can simply be deleted; for example, as shown in Figure 6, if subspace A and subspace B are merged, the data within subspace AB can be deleted.
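The split-or-merge policy of step 505 might look like the following sketch, which reuses OctreeNode.subdivide and contains from the earlier sketches; the threshold parameters are hypothetical, and updating the bounds of the merged subspace as well as de-duplicating the overlap region of Figure 6 are deliberately left out:

```python
def balance_subspace(sub, neighbors, split_threshold, merge_threshold, max_depth):
    """Split an overfull second subspace, or merge it with underfull neighbors."""
    if len(sub.data) > split_threshold and sub.depth < max_depth:
        sub.subdivide(max_depth=sub.depth + 1)          # split one level deeper
        buffered, sub.data = sub.data, []
        for p in buffered:                              # hand each point to the child it falls in
            for child in sub.children:
                if contains(child, p):
                    child.data.append(p)
                    break
    elif len(sub.data) < merge_threshold and all(len(n.data) < merge_threshold for n in neighbors):
        for n in neighbors:                             # pull the neighbors' data into this subspace;
            sub.data.extend(n.data)                     # overlapping points would be de-duplicated
            n.data = []                                 # in a full implementation (cf. Figure 6)
    return sub
```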
Compared with the prior art, the scene reconstruction method provided by this embodiment can automatically adjust the volume of the subspaces according to the amount of data they contain, which guarantees the speed of the three-dimensional reconstruction and rendering of the subspaces and at the same time avoids wasting computing resources when the amount of data is small.
The fourth embodiment of the present application relates to a three-dimensional scene reconstruction apparatus 70, comprising an acquisition module 701, a first position information determination module 702, an adjustment module 703, a three-dimensional model reconstruction module 704 and a three-dimensional model rendering module 705; its structure is shown in Figure 7.

The acquisition module 701 is configured to acquire image data of the current shooting area. The first position information determination module 702 is configured to determine the first position information of the current shooting area from the image data. The adjustment module 703 is configured to dynamically adjust the data stored in the first memory according to the first position information and the position information of the first subspace corresponding to the previous shooting area. The three-dimensional model reconstruction module 704 is configured to reconstruct a three-dimensional model of the second subspace corresponding to the current shooting area from the adjusted data stored in the first memory. The three-dimensional model rendering module 705 is configured to render according to the three-dimensional reconstruction data of the second subspace. The first subspace and the second subspace are each one of the N subspaces obtained by dividing the scene, where N is an integer greater than 0.
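Purely as an illustration of how the five modules might map onto code (all names are invented; the patent defines modules, not classes or methods), a skeleton could look like this:

```python
class SceneReconstructor:
    """Skeleton of the apparatus in the fourth embodiment: one method per module."""
    def __init__(self, first_memory, second_memory, subspaces):
        self.first_memory = first_memory      # working memory used for reconstruction/rendering
        self.second_memory = second_memory    # store for subspaces not currently being worked on
        self.subspaces = subspaces            # the N subspaces of the scene

    def acquire(self, frame):                          # acquisition module 701
        return frame                                   # image data of the current shooting area

    def locate(self, points):                          # first position information module 702
        return points.mean(axis=0)                     # e.g. the point-cloud centroid

    def adjust(self, position, previous_subspace):     # adjustment module 703
        raise NotImplementedError                      # choose and apply an adjustment mode

    def reconstruct(self, second_subspace):            # 3D model reconstruction module 704
        raise NotImplementedError

    def render(self, reconstruction_data):             # 3D model rendering module 705
        raise NotImplementedError
```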
This embodiment is a virtual-apparatus embodiment corresponding to the scene reconstruction method described above; the technical details of the method embodiments above still apply here and are not repeated.

It should be noted that the apparatus embodiment described above is merely illustrative and does not limit the scope of protection of the present application; in practical applications, those skilled in the art may select some or all of the modules according to actual needs to achieve the purpose of the solution of this embodiment, and no limitation is imposed here.
The fifth embodiment of the present application relates to an electronic device whose structure is shown in Figure 8. It comprises at least one processor 801 and a memory 802 communicatively connected to the at least one processor 801. The memory 802 stores instructions executable by the at least one processor 801, and the instructions are executed by the at least one processor 801 so that the at least one processor 801 can perform the three-dimensional reconstruction method of a scene described above.

In this embodiment, the processor is exemplified by a central processing unit (CPU) and the memory by a readable and writable random access memory (RAM). The processor and the memory may be connected by a bus or in other ways; in Figure 8, connection by a bus is taken as an example. As a non-volatile computer-readable storage medium, the memory can be used to store non-volatile software programs, non-volatile computer-executable programs and modules. The processor runs the non-volatile software programs, instructions and modules stored in the memory and thereby executes the various functional applications and data processing of the device, i.e. implements the three-dimensional reconstruction method of a scene described above.

The memory may include a program storage area and a data storage area, where the program storage area can store the operating system and the application programs required for at least one function, and the data storage area can store option lists and the like. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, for example at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some embodiments, the memory may optionally include memory located remotely from the processor, and these remote memories may be connected to an external device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
One or more modules are stored in the memory and, when executed by one or more processors, perform the three-dimensional reconstruction method of a scene of any of the above method embodiments.

The above product can execute the three-dimensional reconstruction method of a scene provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to the execution of the method; for technical details not described in detail in this embodiment, reference may be made to the three-dimensional reconstruction method of a scene provided by the embodiments of the present application.

The sixth embodiment of the present application relates to a computer-readable storage medium that stores computer instructions enabling a computer to execute the three-dimensional reconstruction method of a scene of any one of the first to third method embodiments of the present application.

That is, those skilled in the art will understand that all or some of the steps of the methods of the above embodiments can be implemented by a program instructing the relevant hardware; the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Those of ordinary skill in the art will understand that the above embodiments are specific embodiments for implementing the present application, and that in practical applications various changes in form and detail can be made to them without departing from the spirit and scope of the present application.

Claims (16)

1. A method for three-dimensional reconstruction of a scene, comprising:
acquiring image data of a current shooting area;
determining first position information of the current shooting area according to the image data;
dynamically adjusting data stored in a first memory according to the first position information and position information of a first subspace corresponding to a previous shooting area;
performing three-dimensional model reconstruction on a second subspace corresponding to the current shooting area according to the adjusted data stored in the first memory, and rendering according to three-dimensional reconstruction data of the second subspace;
wherein the first subspace and the second subspace are each one of N subspaces obtained by dividing the scene, and N is an integer greater than 0.
2. The method for three-dimensional reconstruction of a scene according to claim 1, wherein dynamically adjusting the data stored in the first memory according to the first position information and the position information of the first subspace corresponding to the previous shooting area comprises:
determining an adjustment mode according to the first position information and the position information of the first subspace corresponding to the previous shooting area;
adjusting the data stored in the first memory according to the adjustment mode;
wherein the adjustment mode is a first adjustment mode, a second adjustment mode or a third adjustment mode;
the first adjustment mode is to save elsewhere and delete the data in the first memory, read data of the second subspace corresponding to the current shooting area from a second memory, and add, in the first memory, added data determined from the image data;
the second adjustment mode is to read the data of the second subspace corresponding to the current shooting area from the second memory and add, in the first memory, the added data determined from the image data;
the third adjustment mode is to add, in the first memory, the added data determined from the image data.
3. The method for three-dimensional reconstruction of a scene according to claim 1 or 2, wherein before acquiring the image data of the current shooting area, the method further comprises:
acquiring volume data of the scene in which the current shooting area is located;
dividing the scene into the N subspaces according to the volume data.
4. The method for three-dimensional reconstruction of a scene according to claim 3, wherein dividing the scene into the N subspaces according to the volume data comprises:
dividing the scene in which the current shooting area is located into the N subspaces according to a preset maximum recursion depth of an octree and the volume data, the N subspaces corresponding respectively to the child nodes at each level of recursion depth.
5. The method for three-dimensional reconstruction of a scene according to claim 2, wherein determining the first position information of the current shooting area according to the image data comprises:
constructing point cloud or mesh data corresponding to the current shooting area according to the image data;
obtaining position information of the point cloud or mesh data, and determining the first position information of the current shooting area according to the position information of the point cloud or mesh data.
6. The method for three-dimensional reconstruction of a scene according to claim 2, wherein determining the adjustment mode according to the first position information and the position information of the first subspace corresponding to the previous shooting area comprises:
determining the second subspace corresponding to the current shooting area according to the first position information and the position information of the first subspace corresponding to the previous shooting area, or according to the first position information and position information of all the subspaces;
determining a positional relationship between the first subspace and the second subspace according to the position information of the first subspace and the position information of the second subspace;
determining the adjustment mode according to the positional relationship.
7. The method for three-dimensional reconstruction of a scene according to claim 6, wherein determining the second subspace corresponding to the current shooting area according to the first position information and the position information of the first subspace corresponding to the previous shooting area, or according to the first position information and the position information of all the subspaces, comprises:
determining whether the first position information lies within the range of the position information of the first subspace, and if so, taking the first subspace as the second subspace corresponding to the current shooting area, or if not, determining the second subspace corresponding to the current shooting area according to the position information of subspaces other than the first subspace;
or,
determining, for each subspace, whether the range of its position information contains the first position information, and determining the second subspace corresponding to the current shooting area according to the determination result.
8. The method for three-dimensional reconstruction of a scene according to claim 6 or 7, wherein determining the positional relationship between the first subspace and the second subspace according to the position information of the first subspace and the position information of the second subspace comprises:
calculating a distance between the first subspace and the second subspace according to the position information of the first subspace and the position information of the second subspace;
if the distance is determined to be greater than a preset distance threshold, determining that the positional relationship between the first subspace and the second subspace is non-adjacent;
if the distance is determined to be smaller than the preset distance threshold and the distance is not zero, determining that the positional relationship between the first subspace and the second subspace is adjacent;
if the distance is determined to be zero, determining that the positional relationship between the first subspace and the second subspace is the same position.
9. The method for three-dimensional reconstruction of a scene according to claim 8, wherein determining the adjustment mode according to the positional relationship comprises:
if the positional relationship is determined to be non-adjacent, determining that the adjustment mode is the first adjustment mode;
if the positional relationship is determined to be adjacent, determining that the adjustment mode is the second adjustment mode;
if the positional relationship is determined to be the same position, determining that the adjustment mode is the third adjustment mode.
10. The method for three-dimensional reconstruction of a scene according to claim 5, wherein the added data is the point cloud or mesh data corresponding to the current shooting area.
11. The method for three-dimensional reconstruction of a scene according to claim 2, wherein after rendering according to the reconstructed three-dimensional reconstruction data, the method further comprises:
detecting whether the second memory contains data of all subspaces of the scene;
if so, stitching the data of all subspaces in the second memory according to the position information of each subspace to form three-dimensional reconstruction data of the scene, and rendering the three-dimensional reconstruction data of the scene.
12. The method for three-dimensional reconstruction of a scene according to any one of claims 5 to 11, wherein before rendering according to the three-dimensional reconstruction data of the second subspace, the method further comprises:
determining whether the point cloud or mesh data in the second subspace exceeds a first preset value, and if so, dividing the second subspace into at least one subspace;
determining whether the point cloud or mesh data in the second subspace and in a subspace adjacent to the second subspace are both smaller than a second preset value, and if so, merging the subspace adjacent to the second subspace with the second subspace.
13. The method for three-dimensional reconstruction of a scene according to any one of claims 1 to 12, wherein two adjacent subspaces have a partially overlapping space.
14. An apparatus for three-dimensional reconstruction of a scene, comprising an acquisition module, a first position information determination module, an adjustment module, a three-dimensional model reconstruction module and a three-dimensional model rendering module;
the acquisition module is configured to acquire image data of a current shooting area;
the first position information determination module is configured to determine first position information of the current shooting area according to the image data;
the adjustment module is configured to dynamically adjust data stored in a first memory according to the first position information and position information of a first subspace corresponding to a previous shooting area;
the three-dimensional model reconstruction module is configured to perform three-dimensional model reconstruction on a second subspace corresponding to the current shooting area according to the adjusted data stored in the first memory;
the three-dimensional model rendering module is configured to render according to three-dimensional reconstruction data of the second subspace;
wherein the first subspace and the second subspace are each one of N subspaces obtained by dividing the scene, and N is an integer greater than 0.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the method for three-dimensional reconstruction of a scene according to any one of claims 1 to 13.
16. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method for three-dimensional reconstruction of a scene according to any one of claims 1 to 13 is implemented.

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/100390 WO2020034086A1 (en) 2018-08-14 2018-08-14 Three-dimensional reconstruction method and apparatus for scene, and electronic device and storage medium
CN201880001285.7A CN109155846B (en) 2018-08-14 2018-08-14 Three-dimensional reconstruction method and device of scene, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/100390 WO2020034086A1 (en) 2018-08-14 2018-08-14 Three-dimensional reconstruction method and apparatus for scene, and electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2020034086A1 true WO2020034086A1 (en) 2020-02-20

Family

ID=64806275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/100390 WO2020034086A1 (en) 2018-08-14 2018-08-14 Three-dimensional reconstruction method and apparatus for scene, and electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN109155846B (en)
WO (1) WO2020034086A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910483B (en) * 2019-11-29 2021-05-14 广州极飞科技股份有限公司 Three-dimensional reconstruction method and device and electronic equipment
CN112540616B (en) * 2020-12-11 2021-07-16 北京赛目科技有限公司 Laser point cloud generation method and device based on unmanned driving
CN113362449B (en) * 2021-06-01 2023-01-17 聚好看科技股份有限公司 Three-dimensional reconstruction method, device and system
CN113313805B (en) * 2021-06-23 2024-06-25 合肥量圳建筑科技有限公司 Three-dimensional scene data storage method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110193859A1 (en) * 2010-02-09 2011-08-11 Samsung Electronics Co., Ltd Apparatus and method for generating octree based 3D map
CN104504760A (en) * 2014-12-09 2015-04-08 北京畅游天下网络技术有限公司 Method and system for updating three-dimensional image in real time
CN104616345A (en) * 2014-12-12 2015-05-13 浙江大学 Octree forest compression based three-dimensional voxel access method
CN106157354A (en) * 2015-05-06 2016-11-23 腾讯科技(深圳)有限公司 A kind of three-dimensional scenic changing method and system
CN108335353A (en) * 2018-02-23 2018-07-27 清华-伯克利深圳学院筹备办公室 Three-dimensional rebuilding method, device and system, server, the medium of dynamic scene

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8350850B2 (en) * 2008-03-31 2013-01-08 Microsoft Corporation Using photo collections for three dimensional modeling
US9483703B2 (en) * 2013-05-14 2016-11-01 University Of Southern California Online coupled camera pose estimation and dense reconstruction from video
CN104867174B (en) * 2015-05-08 2018-02-23 腾讯科技(深圳)有限公司 A kind of three-dimensional map rendering indication method and system

Also Published As

Publication number Publication date
CN109155846A (en) 2019-01-04
CN109155846B (en) 2020-12-18


Legal Events

121  EP: The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 18930297; Country of ref document: EP; Kind code of ref document: A1.

NENP  Non-entry into the national phase. Ref country code: DE.

32PN  EP: Public notification in the EP bulletin as the address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.06.2021).

122  EP: PCT application non-entry in European phase. Ref document number: 18930297; Country of ref document: EP; Kind code of ref document: A1.