CN108566545A - Method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain camera - Google Patents
Method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain (spherical panoramic) camera
- Publication number: CN108566545A
- Application number: CN201810180677.6A
- Authority
- CN
- China
- Prior art keywords
- mobile terminal
- ball curtain camera
- three-dimensional modeling
- Prior art date
- Legal status: Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Abstract
The invention discloses a method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain camera, the system comprising a mobile terminal and a ball curtain camera in communication with the mobile terminal. The method comprises the following steps: shooting a video stream of the current location point with the camera of the mobile terminal; positioning the mobile terminal from the acquired video stream of the current location point, thereby obtaining the location information of the current position; triggering the ball curtain camera to take a photo; and performing three-dimensional modeling based on a sparse point cloud. The advantageous effects of the present invention are: positioning with the mobile terminal is more accurate and no distortion is produced during SLAM positioning; the energy consumption of the ball curtain camera is reduced to the greatest extent; and the three-dimensional model of the scene is more accurate.
Description
Technical field
The present invention relates to the field of three-dimensional imaging and modeling, and in particular to a method for performing three-dimensional modeling of a large scene with a ball curtain (spherical panoramic) camera.
Background technology
In the course of three-dimensional modeling with a ball curtain camera, SLAM technology is involved, so the ball curtain camera (usually binocular or multi-lens) has to shoot a video stream continuously and the amount of data to be processed is large. This places a heavy burden on the hardware, generates considerable heat, and exhausts the battery in roughly a few minutes to a dozen minutes.
Secondly, if the ball curtain camera is used directly for spatial positioning, SLAM positioning must be performed on the frame images of the video stream shot by the ball curtain camera. This is computationally expensive, occupies a large amount of CPU resources, and greatly increases power consumption. In addition, with this positioning method the frame images of the video stream shot by the ball curtain camera have to be stitched before SLAM positioning, which introduces distortion; and the ball curtain camera has to return data to the processor during SLAM positioning, and the time difference caused by this data return delays the live preview.
Invention content
In view of the deficiencies of the prior art, the present invention intends to provide a method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain camera. By introducing the mobile terminal, spatial positioning is transferred to the mobile terminal, so that the ball curtain camera only needs to shoot photos at the different location points, from which the three-dimensional model is built or modified.
To achieve the goals above, the technical solution adopted by the present invention is as follows:
A method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain camera, comprising a mobile terminal and a ball curtain camera in communication with the mobile terminal; the method comprises the following steps:
S1: shooting a video stream of the current location point with the camera of the mobile terminal;
S2: positioning the mobile terminal from the acquired video stream of the current location point, thereby obtaining the location information of the current position;
S3: triggering the ball curtain camera to take a photo;
S4: running a three-dimensional reconstruction offline algorithm and a sparse reconstruction on the photos shot by the ball curtain camera together with the location information of the mobile terminal;
S5: obtaining, from the reconstruction, the precise position information of the ball curtain camera and a sparse point cloud;
S6: performing three-dimensional modeling based on the sparse point cloud.
It should be noted that the method further includes S3.1: the positioning performed by the mobile terminal on the acquired video stream of the current location point is SLAM positioning.
It should be noted that the location information of the mobile terminal after SLAM positioning is taken as the location information of the ball curtain camera.
The advantageous effects of the present invention are:
1. Positioning with the mobile terminal is more accurate, and no distortion is produced when SLAM positioning is performed on the video stream acquired by the mobile terminal. If SLAM positioning were performed on the frame images of the video stream shot by the ball curtain camera, not only would the computation be heavy and occupy a large amount of CPU resources, the frame images would also have to be stitched before SLAM, which produces distortion.
2. The ball curtain camera is only called upon when acquiring data (taking a photo) or when a parameter is changed; the rest of the time it stays in a half-sleep state, which saves the energy consumption of the ball curtain camera to the greatest extent.
3. The three-dimensional model of the scene is more accurate, because the camera of the mobile terminal is placed together with the processor of the mobile terminal and the other electronic components of the mobile terminal can assist as well, so the whole data acquisition is processed in a more timely manner, and the time-difference problem caused by the ball curtain camera having to return data to the processor after positioning does not arise.
4. After SLAM positioning is performed on the video stream acquired by the camera of the mobile terminal, the frames of that video stream can be used to refine the details of the structural model: the photos shot by the ball curtain camera are processed into a sparse point cloud for three-dimensional modeling and structured texturing is then performed, and when the optimal photo is chosen, the frames of the video stream shot by the mobile terminal at the current location are also placed in the candidate pool, so that optimal-photo selection is applied to the frames of the video stream shot by the mobile terminal as well.
Description of the drawings
Fig. 1 is an implementation schematic diagram of the present invention;
Fig. 2 is an implementation schematic diagram of the present invention;
Fig. 3 is an implementation schematic diagram of the present invention;
Fig. 4 is an implementation schematic diagram of the present invention;
Fig. 5 is an implementation schematic diagram of the present invention;
Fig. 6 is a reference drawing of an embodiment of the present invention.
Wherein: 1 - mobile phone; 2 - ball curtain camera.
Detailed description of the embodiments
The invention will be further described below. It should be noted that the following embodiments are based on the technical solution and give detailed implementations and specific operating procedures, but the protection scope of the present invention is not limited to these embodiments.
The present invention is a method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain camera, comprising a mobile terminal and a ball curtain camera in communication with the mobile terminal; the method comprises the following steps:
S1: shooting a video stream of the current location point with the camera of the mobile terminal;
S2: positioning the mobile terminal from the acquired video stream of the current location point, thereby obtaining the location information of the current position;
S3: triggering the ball curtain camera to take a photo;
S4: running a three-dimensional reconstruction offline algorithm and a sparse reconstruction on the photos shot by the ball curtain camera together with the location information of the mobile terminal;
S5: obtaining, from the reconstruction, the precise position information of the ball curtain camera and a sparse point cloud;
S6: performing three-dimensional modeling based on the sparse point cloud.
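For illustration only, the following sketch shows how steps S1 to S6 could be orchestrated in Python; every callable passed into the function (the SLAM tracker, the photo trigger, the offline reconstruction, the modeller) is a hypothetical placeholder rather than an interface defined by the patent.

```python
# Minimal sketch of the S1-S6 flow; all injected callables are hypothetical stand-ins.

def capture_and_model(location_points, phone_frames_at, slam_track,
                      take_photo, offline_sfm, model_from_cloud):
    shots = []
    for point in location_points:
        pose = None
        for frame in phone_frames_at(point):      # S1: phone keeps shooting video at this point
            pose = slam_track(frame)              # S2: SLAM pose of the phone = pose of the ball camera
        photo = take_photo()                      # S3: wake the ball curtain camera only to shoot
        shots.append((photo, pose))
    poses, sparse_cloud = offline_sfm(shots)      # S4/S5: offline reconstruction + sparse point cloud
    return model_from_cloud(sparse_cloud, poses)  # S6: structured model from the sparse cloud
```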
It should be noted that the mobile terminal referred to in the present invention includes, but is not limited to, terminal devices with a camera such as mobile phones and tablet computers.
It should be noted that the method further includes S3.1: the positioning performed by the mobile terminal on the acquired video stream of the current location point is SLAM positioning.
It should be noted that the location information of the mobile terminal after SLAM positioning is taken as the location information of the ball curtain camera.
It should be further noted that the SLAM positioning performed on the video stream shot by the mobile terminal is monocular VSLAM. Feature points are extracted from the video stream captured by the mobile terminal, these feature points are triangulated, and the three-dimensional spatial position of the mobile terminal is recovered (two-dimensional coordinates are converted into three-dimensional coordinates).
Specifically:
The VSLAM positioning flow comprises three modules: the front end, the back end, and loop closure detection.
1. Front end
The mobile terminal acquires data frames and feature points are extracted from each frame; the camera position is then computed between frames by multi-view geometry (a sketch of this step is given after this list).
2. Back end
The previously computed positions are optimized, and the whole trajectory is optimized with a least-squares formulation.
3. Loop closure detection
Features of the scenes already visited are stored, and the newly extracted features are matched against the previously stored features, i.e. a similarity detection process. For a scene that has been visited before, the similarity between the two will be very high, so it is determined that this place has been visited before, and the earlier position is corrected using the new feature points.
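Module 1 (the front end) can be illustrated with the following OpenCV sketch, which assumes ORB features and a known intrinsic matrix K; the patent does not prescribe a particular feature detector, so these choices are illustrative only. The back end and the loop-closure check are sketched further below.

```python
import cv2
import numpy as np

def front_end_relative_pose(img_prev, img_curr, K):
    """Front end: extract features in two phone frames and estimate the relative camera
    motion with multi-view geometry (essential matrix). ORB is only an example detector."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # rotation and (scale-free) translation between the two frames
```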
In step S6, three-dimensional modeling is performed based on the sparse point cloud. Compared with the dense point cloud obtained by conventional laser scanning, with which modeling is usually performed, our preliminary calculation shows that the number of points obtained at the current camera position is only about one ten-thousandth of the number of points of a conventional laser scan. Because the number of points is small, the computation is fast; yet in practical applications some 200 sparse points are enough to work out the structure of a room, and the three-dimensional modeling yields a structured model.
Embodiment
As an embodiment of the present invention, a mobile phone 1 is used as the mobile terminal. In the present invention, the mobile phone 1 and the ball curtain camera 2 can be fixed at relatively close positions, i.e. the two are combined into one piece of equipment; of course this equipment is not a combination in the strict sense, but the two are mounted on the same frame (as shown in Fig. 6), so that the position of the mobile phone 1 is the position of the ball curtain camera 2. The camera of the mobile phone 1 is then kept in the state of shooting a video stream, the rig is placed at a location point in a space, the ball curtain camera 2 is triggered to take a photo, the frame is then moved to the next location point, and the above operations are repeated until the photographing of the entire space is completed.
It should be noted that while the ball curtain camera 2 is taking photos, the mobile phone 1 is always in the state of shooting a video stream, so that SLAM positioning can be performed for every spatial position point. The communication between the mobile phone 1 (mobile terminal) and the ball curtain camera 2 is wireless, for example WIFI as in the prior art.
It should be further noted that the software used by the present invention includes an APP (application program) on the mobile phone 1 and a cloud server service. The ball curtain camera 2 is connected to the mobile phone 1 through WIFI and transmits the ball curtain photos it takes in real time to the APP (application program) of the mobile phone 1; after the photographing of the entire scene is finished, the photos are uploaded to the cloud server for three-dimensional modeling.
The camera of the mobile phone 1 is generally a monocular camera. The monocular video stream shot by the mobile phone 1 can establish the relative position of the mobile phone 1 in space, and the spatial position of the mobile phone 1 is calculated from the video stream of the mobile phone 1. During shooting, the trigger points are user-defined, i.e. the user decides when to let the ball curtain camera 2 start shooting; when the ball curtain camera 2 shoots, if the density of the arrangement of trigger points (how far apart successive shots are taken) is moderate, the resulting model browses better and has better transition effects.
During the whole shooting process, the video stream of the mobile phone 1 is running all the time; the mobile phone 1 is not used to take photos but only to shoot the video stream for positioning. The ball curtain camera 2 is only called upon when acquiring data (taking a photo) or when a parameter is changed; the rest of the time it stays in a half-sleep state that merely keeps the WIFI transmission alive. This division of labour between the mobile phone 1 and the ball curtain camera 2 keeps the energy consumption of the ball curtain camera 2 low.
In this embodiment, the steps of monocular VSLAM are as follows:
Step 1: Sensor information reading. In visual SLAM this is mainly the reading and preprocessing of camera image information; in the monocular SLAM of the mobile phone 1 it is mainly the processing of the video stream acquired by the camera of the mobile phone 1.
Step 2: Visual odometry, also known as the front end. Its task is to estimate the motion of the camera between adjacent images as well as the general outline and appearance of the local map.
Step 3: Back-end optimization, also known as the back end. Its task is to receive the camera poses measured by visual odometry at different moments and the information from loop closure detection, optimize them, and obtain a globally consistent trajectory and map.
Step 4: Mapping. Its task is to build a map corresponding to the mission requirements, based on the trajectory estimated after back-end optimization.
The monocular VSLAM can also use multi-view aggregation: triangulation can be performed between two frames, or based on multiple frames of the video stream. Combining the two yields a consistent trajectory, which is then further optimized. The data source is the video stream shot by the camera of the mobile phone 1; using the computing resources of the mobile phone 1, the VSLAM algorithm obtains the trajectory walked through the large scene.
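The triangulation between frames mentioned above could be sketched with OpenCV as follows; the projection matrices are assumed to be built from the poses estimated by the front end.

```python
import cv2
import numpy as np

def triangulate_two_frames(K, R1, t1, R2, t2, pts1, pts2):
    """Triangulate matched pixel coordinates (Nx2 arrays) from two phone frames into 3D points.
    R*, t* are the world-to-camera rotations/translations estimated by VSLAM."""
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])   # 3x4 projection matrix of frame 1
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])   # 3x4 projection matrix of frame 2
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
    return (pts4d[:3] / pts4d[3]).T              # homogeneous -> Euclidean 3D points
```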
In the present embodiment, the three-dimensional reconstruction offline algorithm is the SFM (structure from motion) algorithm. In other embodiments, other three-dimensional reconstruction offline algorithms may be used.
In step S4, three-dimensional modeling from the photos shot by the ball curtain camera further includes the following steps:
S41: identifying and matching the feature points of at least one group of photos obtained by the ball curtain camera;
S42: automatic closed-loop detection based on the three-dimensional digital modeling of the ball curtain camera;
S43: after the detection, performing digital modeling;
S44: texturing the structural model.
It should be noted that, within one group of photos or a video stream, feature points (pixels on the picture) are extracted from each single photo with SIFT descriptors, the neighbourhood of each feature point is analysed at the same time, and the feature point is controlled according to its neighbourhood.
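The SIFT extraction and matching described in the preceding note can be sketched with OpenCV as follows; the ratio test used to keep only distinctive neighbours is a common convention assumed here, not something specified by the patent.

```python
import cv2

def sift_match(photo_a, photo_b, ratio=0.75):
    """Extract SIFT keypoints/descriptors from two ball curtain photos and keep
    matches whose best neighbour is clearly better than the second best."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(photo_a, None)
    kp2, des2 = sift.detectAndCompute(photo_b, None)
    knn = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kp1, kp2, good
```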
It should be noted that the closed-loop detection is as follows: the currently computed ball curtain camera position is compared with the past ball curtain camera positions to detect whether any of them are close; if the distance between the two is detected to be within a certain threshold, the ball curtain camera is considered to have returned to a place it passed before, and closed-loop detection is started at this point.
It should be further noted that the closed-loop detection of the present invention is a non-time-series detection based on spatial information.
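The spatial closed-loop check described above reduces to comparing the current camera position with all stored past positions against a distance threshold; a minimal sketch follows (the threshold value is an arbitrary placeholder, since the patent does not specify one).

```python
import numpy as np

def detect_loop(current_position, past_positions, threshold=0.5):
    """Return the index of a previously visited camera position lying within
    `threshold` of the current one, or None if no loop closure is detected."""
    for i, past in enumerate(past_positions):
        if np.linalg.norm(np.asarray(current_position) - np.asarray(past)) < threshold:
            return i   # the camera has returned to a place it passed before
    return None
```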
It should be further noted that step S43 specifically comprises:
S43.1: preliminarily computing the ball curtain camera positions and a sparse point cloud containing some noise points, and filtering out the noise points by distance and re-projection;
S43.2: marking the sparse point cloud, i.e. labelling all the points correspondingly;
S43.3: drawing a virtual line from each sparse point, as a starting point, to the corresponding ball curtain camera; the union of the spaces traversed by these virtual lines forms a visible space;
S43.4: extracting the space enclosed by the rays;
S43.5: closing the space using a graph-theory shortest-path method.
It should be noted that the sparse point cloud is what each ball curtain camera can see, obtained after filtering. Step S43.3 can also be understood as follows: with each sparse point as a starting point, a virtual line is drawn to the corresponding ball curtain camera, and the union of the spaces traversed by these virtual lines forms a visible space; a sketch of this idea is given below.
It should be further noted that filtering refers to the following: after the three-dimensional coordinate position corresponding to a certain point of the two-dimensional picture has been confirmed, this three-dimensional coordinate point is projected back onto the original ball curtain photo to re-confirm whether it is still that point. The reason is that a point of the two-dimensional picture has a one-to-one relationship with the position of the corresponding point in the three-dimensional world; therefore, after the three-dimensional coordinate point of a certain point of the two-dimensional picture has been confirmed, the three-dimensional coordinate point is projected back and it is verified whether the two-dimensional coordinate point is still at its original position. This determines whether the pixel is noise and whether it needs to be filtered out (illustrated in the sketch below).
It should be noted that an optimal picture coming from some ball curtain camera is determined among the photos; the frames of the video stream shot by the mobile terminal at the current location are also placed into the candidate pool.
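The re-projection check described in the filtering note can be sketched as follows for an equirectangular ball curtain photo; the equirectangular projection model and the pixel tolerance are assumptions made for illustration, since the patent does not spell them out.

```python
import numpy as np

def reproject_to_equirect(point_3d, cam_pos, cam_rot, width, height):
    """Project a 3D point back onto an equirectangular panorama (assumed projection model)."""
    d = cam_rot @ (np.asarray(point_3d, float) - np.asarray(cam_pos, float))
    d /= np.linalg.norm(d)
    lon, lat = np.arctan2(d[0], d[2]), np.arcsin(d[1])
    return ((lon / (2 * np.pi) + 0.5) * width, (lat / np.pi + 0.5) * height)

def is_noise(point_3d, observed_px, cam_pos, cam_rot, width, height, tol=3.0):
    """A 3D point whose reprojection drifts away from the pixel it came from is treated as noise."""
    u, v = reproject_to_equirect(point_3d, cam_pos, cam_rot, width, height)
    return np.hypot(u - observed_px[0], v - observed_px[1]) > tol
```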
It should be noted that when multiple ball curtain cameras all see a certain target and capture it in their pictures, the optimal one among them is chosen for texturing.
It should be noted that the optimal picture is the one, among the photos obtained by the ball curtain cameras and the frames of the video stream shot by the mobile terminal at the current location, in which the target occupies the most pixels; the corresponding ball curtain camera is then optimal.
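Selecting the optimal picture as the one in which the target occupies the most pixels amounts to a simple argmax over the candidate views (ball curtain photos plus the mobile terminal's video frames); count_target_pixels below is a hypothetical helper, for example the size of the target's mask in that view.

```python
def pick_optimal_view(candidate_views, count_target_pixels):
    """Return the candidate view (ball curtain photo or phone video frame) in which
    the target occupies the most pixels; count_target_pixels is an assumed helper."""
    return max(candidate_views, key=count_target_pixels)

# Example with a hypothetical mask attribute on each view object:
# best = pick_optimal_view(views, lambda v: int(v.target_mask.sum()))
```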
It should be further noted that the colour of the picture taken by the corresponding camera is computed with the formula:
V1 = normalize(CameraMatrix_i * V0)
In the formula: V0 is the homogeneous coordinate (x, y, z, 1) of any space point to be sampled, and all points of a model need to be rasterized; V1 is the new position coordinate of V0 transformed into camera space, normalized onto the unit sphere by vector normalization; Tx and Ty are the texture coordinates (x, y) corresponding to V0, expressed in the OPENGL texture coordinate system; aspect_i is the aspect ratio of the i-th sampled panoramic picture; CameraMatrix_i is the transformation matrix of the i-th sampled panoramic picture, which moves the camera position to the origin and resets the direction the camera faces.
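The colour lookup implied by V1 = normalize(CameraMatrix_i * V0) can be sketched as follows: the point is transformed into the panorama's camera space, normalized onto the unit sphere, and converted to texture coordinates (Tx, Ty). The longitude/latitude mapping used below is an assumption, since the patent only names the OpenGL texture coordinate system.

```python
import numpy as np

def sample_panorama_color(v0, camera_matrix_i, panorama_i):
    """v0: homogeneous point (x, y, z, 1); camera_matrix_i: 4x4 transform moving the i-th
    panorama's camera to the origin; panorama_i: HxWx3 image of that panorama."""
    v1 = camera_matrix_i @ np.asarray(v0, float)       # V1 = CameraMatrix_i * V0
    v1 = v1[:3] / np.linalg.norm(v1[:3])               # normalize onto the unit sphere
    tx = np.arctan2(v1[0], v1[2]) / (2 * np.pi) + 0.5  # Tx (assumed equirectangular mapping)
    ty = np.arcsin(np.clip(v1[1], -1.0, 1.0)) / np.pi + 0.5  # Ty
    h, w = panorama_i.shape[:2]
    return panorama_i[int(ty * (h - 1)), int(tx * (w - 1))]
```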
Based on the foregoing, it should be noted that closed-loop detection is a dynamic process that is carried out continuously during the shooting of the ball curtain photos.
Further, as shown in Fig. 1, feature points are automatically extracted from a ball curtain photo (source photo), shown mainly as the points on the picture;
Further, as shown in Fig. 2, the extracted feature points are matched; it should be noted that in practical operation the feature points of all photos shot of a given scene are matched;
Further, as shown in Fig. 3, further processing based on Fig. 2 yields the three-dimensional spatial position of each feature point of the two-dimensional pictures as well as the camera positions, forming sparse points (in the picture, the smaller points are the sparse point cloud and the larger ones are the camera positions);
Further, as shown in Fig. 4, the point cloud obtained after the processing of Fig. 3 is used for structured modeling;
Further, as shown in Fig. 5, after modeling, automated texturing based on the space structure of Fig. 4 forms a virtual space model identical to the real world.
The invention has been further described above. It should be noted that the present embodiment is based on the technical solution and gives a detailed implementation and specific operating procedures, but the protection scope of the present invention is not limited to this embodiment.
Claims (3)
1. A method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain camera, characterized by comprising a mobile terminal and a ball curtain camera in communication with the mobile terminal; the method comprises the following steps:
S1: shooting a video stream of the current location point with the camera of the mobile terminal;
S2: positioning the mobile terminal from the acquired video stream of the current location point, thereby obtaining the location information of the current position;
S3: triggering the ball curtain camera to take a photo;
S4: running a three-dimensional reconstruction offline algorithm and a sparse reconstruction on the photos shot by the ball curtain camera together with the location information of the mobile terminal;
S5: obtaining, from the reconstruction, the precise position information of the ball curtain camera and a sparse point cloud;
S6: performing three-dimensional modeling based on the sparse point cloud.
2. The method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain camera according to claim 1, characterized by further comprising S3.1: the positioning performed by the mobile terminal on the acquired video stream of the current location point is SLAM positioning.
3. The method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain camera according to claim 2, characterized in that the location information of the mobile terminal after SLAM positioning is the location information of the ball curtain camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810180677.6A CN108566545A (en) | 2018-03-05 | 2018-03-05 | Method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain camera
Publications (1)
Publication Number | Publication Date |
---|---|
CN108566545A (en) | 2018-09-21
Family
ID=63531404
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810180677.6A (Pending) CN108566545A (en) | Method for three-dimensional modeling of a large scene using a mobile terminal and a ball curtain camera
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108566545A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104180814A (en) * | 2013-05-22 | 2014-12-03 | 北京百度网讯科技有限公司 | Navigation method in live-action function on mobile terminal, and electronic map client |
US9773313B1 (en) * | 2014-01-03 | 2017-09-26 | Google Inc. | Image registration with device data |
CN104077809A (en) * | 2014-06-24 | 2014-10-01 | 上海交通大学 | Visual SLAM method based on structural lines |
CN105203092A (en) * | 2014-06-30 | 2015-12-30 | 联想(北京)有限公司 | Information processing method and device and electronic equipment |
CN205336407U (en) * | 2015-12-14 | 2016-06-22 | 青岛市勘察测绘研究院 | Streetscape collection system based on control of android cell -phone OTG |
CN106251399A (en) * | 2016-08-30 | 2016-12-21 | 广州市绯影信息科技有限公司 | A kind of outdoor scene three-dimensional rebuilding method based on lsd slam |
CN106444042A (en) * | 2016-11-29 | 2017-02-22 | 北京知境科技有限公司 | Dual-purpose display equipment for augmented reality and virtual reality, and wearable equipment |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110363806A (en) * | 2019-05-29 | 2019-10-22 | 中德(珠海)人工智能研究院有限公司 | A method of three-dimensional space modeling is carried out using black light projection feature |
CN110378995A (en) * | 2019-05-29 | 2019-10-25 | 中德(珠海)人工智能研究院有限公司 | A method of three-dimensional space modeling is carried out using projection feature |
CN110363806B (en) * | 2019-05-29 | 2021-12-31 | 中德(珠海)人工智能研究院有限公司 | Method for three-dimensional space modeling by using invisible light projection characteristics |
CN112308972A (en) * | 2020-10-20 | 2021-02-02 | 北京卓越电力建设有限公司 | A Reconstruction Method of Large-scale Cable Tunnel Environment Model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180921 |