CN107071375B - SLAM method based on 3D scanning - Google Patents
SLAM method based on 3D scanning
Info
- Publication number
- CN107071375B (application CN201710062775.5A)
- Authority
- CN
- China
- Prior art keywords
- projector
- coordinates
- vertical line
- rectangular area
- horizontal line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/363—Image reproducers using image projection screens
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3141—Constructional details thereof
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a SLAM method based on 3D scanning. First, matrix operations allow a regular rectangular frame to be displayed onto an irregular target area. Then, picture mapping enables display on a very large screen. Finally, triangulating the rectangular area to be displayed permits adjustment finer than one pixel, so that the projectors achieve a seamless, non-overlapping display with a strong sense of a single picture and almost no color difference.
Description
Technical field
The invention belongs to the field of projection display, and in particular relates to the design of a SLAM method based on 3D scanning.
Background technology
A traditional display system usually uses only one display device, so the displayed content is limited and the coverage is small. Splitting a larger picture across multiple projectors solves this problem better: each projector shows its own part of the picture, and the projectors are arranged at intervals according to the needs of the overall picture. Depending on how the projectors are placed, ring screens, three-sided CAVE systems and the like can be built. However, because of the way projectors display, overlapping regions often appear between projectors and cause display anomalies such as ghosting and bright bands. The overlap problem can be addressed either by purchasing more professional, more controllable hardware or by removing the overlap in software. The hardware approach is too expensive, demands highly skilled manual operation, and still struggles to be completely free of overlap. A pure software approach can adjust the brightness of the overlapping region to be nearly the same as the non-overlapping region, but the ghosting problem remains and cutting the picture introduces noticeable latency.
In recent years, intelligent robotics has developed rapidly worldwide. In most scenarios, a robot faces a basic difficulty: localization and map building, i.e. SLAM. Closely related to SLAM are sensors, which fall into two broad classes, laser sensors and visual sensors; the latter are widely used because they are inexpensive. Visual SLAM is broadly divided into three categories: monocular, binocular (or multi-camera), and RGB-D. A SLAM system is divided into four modules (excluding sensor data reading): visual odometry (VO), back-end optimization, mapping, and loop closure detection. The visual odometry module estimates the relative motion of the robot between two moments, i.e. it estimates a transformation matrix in three-dimensional Euclidean space. Methods for solving this matrix can be divided into feature-based methods and direct methods that do not use features. Feature-based methods first extract features from the images (e.g. Harris corners, SIFT, SURF, ORB), then match the features of the two images and compute the camera transformation matrix. Direct methods put all pixels of the image into a pose estimation equation and solve for the inter-frame relative motion using algorithms such as iterative closest point (ICP); examples include SVO and LSD-SLAM.
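For background only, the feature-based visual odometry pipeline described above can be sketched in a few lines of OpenCV. This is an illustrative sketch, not part of the claimed method; the image pair and the 3x3 intrinsic matrix K are assumed inputs.

```python
import cv2
import numpy as np

def estimate_relative_pose(img1, img2, K):
    """Feature-based VO step: ORB features, brute-force matching, then
    recovery of the relative rotation R and unit-scale translation t
    between two frames. K is the 3x3 camera intrinsic matrix."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC, then cheirality check to pick R, t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```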
Summary of the invention
The purpose of the invention is to solve the problem in the prior art that, when multiple projectors are used to split and display a larger picture, overlapping regions often appear between the projectors and cause display anomalies. To this end, a SLAM method based on 3D scanning is proposed.
The technical solution of the invention is a SLAM method based on 3D scanning, comprising the following steps:
S1: Using a line-drawing program, draw one group of uniformly spaced horizontal and vertical lines in the projected picture of each projector in the display system, so that the horizontal and vertical lines fill the entire projector screen. Using a ToF camera, capture one frame of RGB and depth information for each projector's picture; then divide all projectors into several equal parts and capture one frame of RGB and depth information for the whole picture of each part;
S2: Taking the information of the first projector as the reference, manually establish the association between the other projectors and the first projector, i.e. place a group of markers on the display area and manually bind identical points of different projectors' pictures on the RGB information images;
S3: Take the first projector as the global space and, through the calibration points, transform the spaces of the other projectors into the global space of the first projector;
S4: In the global space, transform the projection regions of all projectors into one plane space by a matrix transformation or the pinhole imaging technique;
S5: Let the number of projectors be N. Cut the picture to be displayed into rectangular areas Ti of equal size arranged in r rows and c columns, with N = r × c and i = 1, 2, 3, ..., N. Define the region determined by the entire computer desktop as the uv space; each rectangular area Ti represents the content area that the corresponding projector needs to display, and all projectors' display content shares the same uv coordinate system;
S6: According to the horizontal and vertical lines displayed by the projectors in step S1, manually calibrate one group of horizontal and vertical lines, set their xy coordinates in the projector, and bind the xy coordinates and uv coordinates of every horizontal and vertical line;
S7: Manually calibrate the four boundary lines (top, bottom, left and right) of the final display area, and bind the xy coordinates and uv coordinates of all vertices on the four boundaries;
S8: Manually calibrate the four vertices of the rectangular area displayed by each projector, and associate the xy coordinates of the four vertices with their uv coordinates;
S9: Cut the rectangular area displayed by each projector with the horizontal and vertical lines, and generate an xy-to-uv coordinate correspondence table;
S10: Cut the final display area enclosed by the four boundaries into identical rectangles of a certain size, then cut each rectangle along a diagonal into two small triangles. The small triangles may be further cut by the rectangular areas of the projectors into smaller triangles, until no small triangle can be cut by a rectangular area any more. By looking up the projector to which each small triangle belongs and using the xy-to-uv coordinate correspondence table of that projector, find the xy coordinates of the three vertices of each triangle.
Further, step S3 is specifically as follows:
Many physically coincident points can be calibrated between the two projectors captured in the two shots. Let the point set of the first shot be Y and the point set of the second shot be X; then there exists a matrix M such that Y = X*M, where X and Y are n*4 matrices and M is a 4*4 matrix. To find M, the least squares solution M = (X^T*X)^(-1)*X^T*Y is used. After the M matrix is found, the points of every other projector can be transformed into the space of the first projector through Yi = M*Xi, where X^T denotes the transpose of X, (X^T*X)^(-1) denotes the inverse of (X^T*X), i = 1, 2, 3, ..., N, and N is the number of projectors.
Further, the matrix transformation in step S4 is specifically as follows:
Let the matrix R denote the rotation transformation and the matrix T denote the translation transformation; through the transformation Zi = R*Yi + T, all projectors are transformed into the same z-plane space.
Further, step S9 is specifically as follows:
Cut the rectangular area displayed by each projector with the horizontal and vertical lines. Among the produced intersection points, take as one group 6 points formed by 2 horizontal lines and 3 vertical lines. Draw the two diagonals in each of the two rectangles formed by the 6 points, producing 2 intersection points in total; the extension of the line connecting these 2 intersection points intersects the 3 vertical lines at 3 further points. The 3 newly produced intersection points, together with the original 6, again form groups of 6 points that intersect with the horizontal and vertical lines. The xy coordinates and uv coordinates of the newly produced intersection points are filled in using the xy and uv coordinates of the endpoints of the horizontal and vertical lines. The cutting is repeated until every pixel in the rectangular area has xy coordinates and uv coordinates, and finally an xy-to-uv coordinate correspondence table is generated.
The beneficial effects of the invention are as follows: by relying on the fusion technology of the hardware itself, the invention makes each sub-picture after splitting display very smoothly, without any stutter. First, matrix operations allow a regular rectangular frame to be displayed onto an irregular target area. Then, picture mapping enables display on a very large screen. Finally, triangulating the rectangular area to be displayed permits adjustment finer than one pixel, so that the projectors achieve a seamless, non-overlapping display with a strong sense of a single picture and almost no color difference.
Description of the drawings
Fig. 1 is a flowchart of the SLAM method based on 3D scanning provided by the invention.
Fig. 2 is a schematic diagram of the mapping conversion in the embodiment of the invention.
Fig. 3 is a schematic diagram of cutting a rectangular area in the embodiment of the invention.
Fig. 4 is a schematic diagram of a triangle being cut by a rectangular area in the embodiment of the invention.
Detailed description of the embodiments
The embodiments of the invention are further described below with reference to the accompanying drawings.
The invention provides a SLAM method based on 3D scanning which, as shown in Fig. 1, comprises the following steps:
S1: The display system contains multiple projectors. Using a simple line-drawing program, draw one group of uniformly spaced horizontal and vertical lines (e.g. at an interval of 64 pixels) in the projected picture of each projector in the display system, so that the lines fill the entire projector screen. Using a ToF camera, capture one frame of RGB and depth information XYZ for each projector's picture in the display system; then divide all projectors into several equal parts and capture one frame of RGB and depth information XYZ for the whole picture of each part. There are many physically coincident points between the captured pictures. Each projector and each part has its own independent coordinate space.
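As an illustration of the line-drawing program in step S1, the following sketch draws a grid of horizontal and vertical lines at the 64-pixel interval given in the text; the 1920x1080 resolution and the use of OpenCV are assumptions made for the example only.

```python
import numpy as np
import cv2

def draw_calibration_grid(width, height, step=64):
    """Grid of uniform horizontal and vertical lines filling a projector
    picture of size width x height (step S1)."""
    img = np.zeros((height, width, 3), dtype=np.uint8)
    for x in range(0, width, step):
        cv2.line(img, (x, 0), (x, height - 1), (255, 255, 255), 1)
    for y in range(0, height, step):
        cv2.line(img, (0, y), (width - 1, y), (255, 255, 255), 1)
    return img

grid = draw_calibration_grid(1920, 1080)   # shown full-screen on each projector
```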
S2: Taking the information of the first projector as the reference, manually establish the association between the other projectors and the first projector, i.e. place a group of markers on the display area in advance and manually bind identical points of different projectors' pictures on the RGB information images. This manual intervention reduces data error and improves controllability.
S3: What is obtained in step S1 is an independent local space for each projector and each part; these need to be transformed into one unified space. Take the first projector as the global space, whose coordinate system is the XYZ coordinate system, and through the calibration points transform the spaces of the other projectors into the global space of the first projector using the following algorithm:
Many physically coincident points can be calibrated between the two projectors captured in the two shots. Let the point set of the first shot be Y and the point set of the second shot be X; then there exists a matrix M such that Y = X*M, where X and Y are n*4 matrices and M is a 4*4 matrix. To find M, the least squares solution M = (X^T*X)^(-1)*X^T*Y is used. After the M matrix is found, the points of every other projector can be transformed into the global space of the first projector through Yi = M*Xi, where X^T denotes the transpose of X, (X^T*X)^(-1) denotes the inverse of (X^T*X), i = 1, 2, 3, ..., N, and N is the number of projectors.
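A minimal numerical sketch of this least-squares step, using NumPy and the row-vector convention Y = X*M for homogeneous points; the synthetic data at the bottom is for illustration only and is not part of the method.

```python
import numpy as np

def fit_transform(X, Y):
    """Least-squares estimate of the 4x4 matrix M with Y ≈ X @ M, where X and
    Y are (n, 4) arrays of homogeneous coordinates (x, y, z, 1) of the same
    physical points seen in the two captures (step S3)."""
    # Normal-equation form M = (X^T X)^(-1) X^T Y from the text;
    # np.linalg.lstsq(X, Y, rcond=None) is the numerically safer equivalent.
    return np.linalg.inv(X.T @ X) @ X.T @ Y

# synthetic check: a small translation between the two captures
rng = np.random.default_rng(0)
X = np.hstack([rng.uniform(-1, 1, (20, 3)), np.ones((20, 1))])  # second shot
M_true = np.eye(4)
M_true[3, :3] = [0.10, -0.20, 0.05]     # translation row for row-vector points
Y = X @ M_true                          # first shot
M = fit_transform(X, Y)
X_in_global = X @ M                     # maps the points into the global space
```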
S4: In the global space obtained in step S3, transform the projection regions of all projectors into one plane space, called the Pxy space, by a matrix transformation or the pinhole imaging technique. The matrix transformation is specifically: through Zi = R*Yi + T, transform all projectors into the same z-plane space by a rotation (denoted by the matrix R) and a translation (denoted by the matrix T). In a general projection system there is no translation between projection planes, i.e. T = 0 and Zi = R*Yi, so only the 4*4 = 16 parameters of the R matrix need to be determined. R can be obtained by the same method used to find the M matrix, or calculated by manually calibrating 3 coordinate axes. Through the above conversion, the rectangular region to be projected can be projected onto a rectangular display area, or onto a ring screen, as shown in Fig. 2.
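The sketch below applies the step-S4 transformation to points stored as homogeneous row vectors. Building R from three manually calibrated axis directions is shown as one possible reading of "calibrating 3 coordinate axes"; the patent does not spell out this construction, so it is an assumption.

```python
import numpy as np

def rotation_from_axes(ex, ey, ez):
    """4x4 R whose upper-left 3x3 block has the three manually calibrated,
    orthonormal axis directions as rows (an assumed interpretation)."""
    R = np.eye(4)
    R[:3, :3] = np.stack([ex, ey, ez])
    return R

def to_plane_space(Y, R, T=None):
    """Zi = R*Yi (+ T) for (n, 4) homogeneous row-vector points Y;
    T is omitted (T = 0) in the usual case described in step S4."""
    Z = Y @ R.T
    if T is not None:
        Z = Z + T
    return Z
```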
S5: Let the number of projectors be N. Cut the picture to be projected (e.g. the computer desktop) into rectangular areas Ti of equal size arranged in r rows and c columns, with N = r × c and i = 1, 2, 3, ..., N. Define the region determined by the entire computer desktop as the uv space; each rectangular area Ti represents the content area that the corresponding projector needs to display, and all projectors' display content shares the same uv coordinate system. The space that determines the display position corresponding to the uv coordinate system is called the xy space, and the xy coordinate system of each projector is independent of the others. The uv coordinate system of each projector is already determined; what must be found now is the corresponding xy coordinate system.
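To make the uv layout of step S5 concrete, the sketch below splits a desktop of W x H pixels into r rows and c columns of equal rectangles Ti. Using desktop pixels as uv units follows the proportional formula given later in step S7; the 2 x 3 example is an assumption.

```python
def tile_uv(r, c, W, H):
    """uv rectangles (u0, v0, u1, v1) of the regions Ti, row-major,
    for a desktop of W x H pixels split into r rows and c columns."""
    return [(col * W / c, row * H / r, (col + 1) * W / c, (row + 1) * H / r)
            for row in range(r) for col in range(c)]

tiles = tile_uv(2, 3, 1920, 1080)   # e.g. N = 6 projectors in a 2 x 3 wall
```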
S6: According to the horizontal and vertical lines displayed by the projectors in step S1, manually calibrate one group of horizontal and vertical lines and set their xy coordinates in the projector (since the lines uniformly cover the entire projector picture and the size of the projector is fixed, the xy coordinates of every line can be obtained). The two endpoints of every horizontal and vertical line carry xy coordinates; through the transformations of steps S3 and S4, the conversion from xy coordinates to uv coordinates can be completed, so the xy coordinates and uv coordinates of (the endpoints of) every horizontal and vertical line are bound. Here each projector screen is divided by the horizontal and vertical lines; the xy coordinates of the lines can be adjusted manually, and this manual intervention reduces data error and improves controllability.
S7: In the same way, manually calibrate the four boundary lines (top, bottom, left and right) of the final display area, and bind the xy coordinates and uv coordinates of all vertices on the four boundaries. The four boundary lines determine the size of the final display region: this region (denoted R1) ≤ the region covered jointly by all projectors (denoted R2) ≤ the rectangular area of the picture to be projected for display (denoted R3). In this way, irregular boundaries in R2 caused by projector hardware characteristics can be cut away, leaving a regular rectangular region. R3 now needs to be mapped into R1; then the display content of any point P in the final display area is expressed in uv coordinates, where u = (distance from P to the left boundary) × (width of the picture to be displayed) / (width of the boundary), and v = (distance from P to the top boundary) × (height of the picture to be displayed) / (height of the boundary). What must now be determined is the xy coordinate of point P in each projector.
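The proportional formula for the uv coordinates of point P can be written directly. The sketch below assumes R1 is an axis-aligned rectangle described by its top-left corner and size, which is an assumption beyond the formula itself.

```python
def uv_of_point(p, boundary, picture_size):
    """uv coordinates of a display-area point P, per step S7.

    p: (x, y) point inside the final display area R1.
    boundary: (left, top, width, height) of R1 in the plane (Pxy) space.
    picture_size: (W, H) of the picture to be displayed (e.g. the desktop).
    """
    left, top, bw, bh = boundary
    W, H = picture_size
    u = (p[0] - left) * W / bw   # distance to left boundary * W / boundary width
    v = (p[1] - top) * H / bh    # distance to top boundary * H / boundary height
    return u, v
```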
S8: In the same way, manually calibrate the four vertices of the rectangular area displayed by each projector and associate the xy coordinates of the four vertices with their uv coordinates (the xy coordinates of each vertex can be determined by displaying all pixels of the entire screen and then picking out a few points). The size of this rectangular area ≤ the display screen size of the projector. The four manually calibrated vertices of neighboring projectors physically coincide. Adjusting the xy coordinates of the four vertices of each projector's rectangular area means that the projector can display only part of its region, which satisfies the requirement of removing the overlapping parts; at the same time, since the precision of the xy coordinates is finer than 1 pixel, there is almost no gap between projectors while the overlap is removed.
S9: Cut the rectangular area displayed by each projector with the horizontal and vertical lines, as shown in Fig. 3. Among the produced intersection points, take as one group 6 points formed by 2 horizontal lines (H1, H2) and 3 vertical lines (V1, V2, V3) (they must form rectangles, e.g. points 0, 1, 2, 3, 4, 5). In each of the two rectangles draw its two diagonals, producing 2 intersection points in total; the extension of the line connecting these 2 points intersects the 3 vertical lines (V1, V2, V3) at 3 further points (e.g. 6, 7, 8). The 3 newly produced intersection points, together with the original 6, again form groups of 6 intersection points (0, 6, 3, 1, 7, 4) that intersect with the horizontal and vertical lines. The xy coordinates and uv coordinates of the newly produced points are filled in using the xy and uv coordinates of the endpoints of the horizontal and vertical lines. The cutting is repeated until every pixel in the rectangular area has xy and uv coordinates, and finally an xy-to-uv coordinate correspondence table is generated. In this way each projector has a correspondence table between part of the xy coordinates and the global uv coordinates.
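A sketch of one subdivision step in the spirit of step S9: the diagonals of a calibrated quadrilateral are intersected in xy space, and the uv coordinates of the new point are filled in by interpolating along the diagonal. The patent only states that the new coordinates are "filled in" from the line endpoints, so the linear interpolation used here is an assumption.

```python
import numpy as np

def seg_intersect(p1, p2, p3, p4):
    """Intersection of lines p1-p2 and p3-p4 in 2D; returns the point and
    the parameter t of the intersection along p1-p2."""
    d1, d2 = np.subtract(p2, p1), np.subtract(p4, p3)
    A = np.array([d1, -d2], dtype=float).T
    t, s = np.linalg.solve(A, np.subtract(p3, p1))
    return np.asarray(p1, dtype=float) + t * d1, t

def subdivide(quad):
    """quad: four corner dicts with 'xy' and 'uv', ordered a-b-c-d.
    Intersect the diagonals a-c and b-d in xy space and assign the new
    point a uv by applying the same parameter t along the uv diagonal."""
    a, b, c, d = quad
    xy, t = seg_intersect(a['xy'], c['xy'], b['xy'], d['xy'])
    uv = (1 - t) * np.asarray(a['uv'], dtype=float) + t * np.asarray(c['uv'], dtype=float)
    return {'xy': xy, 'uv': uv}
```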
S10: Cut the rectangular frame R enclosed by the four boundaries into identical rectangles of a certain size (e.g. 64*64), and cut each rectangle along a diagonal into two small triangles. The three vertices of each small triangle have their own uv coordinates (the size of R is known, the small triangles are cut uniformly with a fixed size, so the uv coordinates can be obtained by simple division). A small triangle may be further cut by the rectangular area of a projector, forming smaller triangles. The uv coordinates of the new small triangles formed during cutting are determined jointly from the uv coordinates of the previous small triangle and the four vertices of the projector's rectangular area. Then, by looking up the projector to which each small triangle belongs and using the xy-to-uv coordinate correspondence table of that projector, the xy coordinates of the three vertices of each triangle can be found. As shown in Fig. 4, the principle of a small triangle being cut by a rectangular area is: the two small triangles T1 and T2 are cut into 6 new small triangles T1, T2, T3, T4, T5, T6, and the cutting continues until no small triangle can be cut by a rectangular area any more.
Through step S10, the final picture is divided into a number of small triangles whose vertex xy coordinates are known, and a triangle list is generated. The list is sent to the relevant graphics card through a graphics-card interface that supports fusion technology for subsequent processing, and the 3D picture is obtained.
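As an illustration of step S10, the sketch below builds the initial triangle list from the 64x64 grid over the final display area; the further cutting by each projector's rectangle and the uv-to-xy lookup through the correspondence table are omitted, and the vertex format is an assumption.

```python
def triangle_list(width, height, cell=64):
    """Initial triangle list for a width x height display area: split it into
    cell x cell rectangles and each rectangle into two triangles along a
    diagonal. Each triangle is a list of three (u, v) vertices."""
    tris = []
    for y in range(0, height, cell):
        for x in range(0, width, cell):
            x1, y1 = min(x + cell, width), min(y + cell, height)
            tris.append([(x, y), (x1, y), (x1, y1)])   # upper-right triangle
            tris.append([(x, y), (x1, y1), (x, y1)])   # lower-left triangle
    return tris
```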
Those of ordinary skill in the art will understand that the embodiments described here are intended to help the reader understand the principle of the invention, and it should be understood that the scope of protection of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can, based on the technical teaching disclosed by the invention, make various other specific variations and combinations that do not depart from the essence of the invention, and these variations and combinations still fall within the scope of protection of the invention.
Claims (4)
1. A SLAM method based on 3D scanning, characterized by comprising the following steps:
S1: using a line-drawing program, drawing one group of uniformly spaced horizontal and vertical lines in the projected picture of each projector in the display system, so that the horizontal and vertical lines fill the entire projector screen; using a ToF camera, capturing one frame of RGB and depth information for each projector's picture; then dividing all projectors into several equal parts and capturing one frame of RGB and depth information for the whole picture of each part;
S2: taking the information of the first projector as the reference, manually establishing the association between the other projectors and the first projector, i.e. placing a group of markers on the display area and manually binding identical points of different projectors' pictures on the RGB information images;
S3: taking the first projector as the global space and, through the calibration points, transforming the spaces of the other projectors into the global space of the first projector;
S4: in the global space, transforming the projection regions of all projectors into one plane space by a matrix transformation or the pinhole imaging technique;
S5: letting the number of projectors be N, cutting the picture to be displayed into rectangular areas Ti of equal size arranged in r rows and c columns, with N = r × c and i = 1, 2, 3, ..., N; defining the region determined by the entire computer desktop as the uv space, each rectangular area Ti representing the content area that the corresponding projector needs to display, and all projectors' display content sharing the same uv coordinate system;
S6: according to the horizontal and vertical lines displayed by the projectors in step S1, manually calibrating one group of horizontal and vertical lines, setting their xy coordinates in the projector, and binding the xy coordinates and uv coordinates of every horizontal and vertical line;
S7: manually calibrating the four boundary lines (top, bottom, left and right) of the final display area, and binding the xy coordinates and uv coordinates of all vertices on the four boundaries;
S8: manually calibrating the four vertices of the rectangular area displayed by each projector, and associating the xy coordinates of the four vertices with their uv coordinates;
S9: cutting the rectangular area displayed by each projector with the horizontal and vertical lines, and generating an xy-to-uv coordinate correspondence table;
S10: cutting the final display area enclosed by the four boundaries into identical rectangles of a certain size, then cutting each rectangle along a diagonal into two small triangles, the small triangles being further cut by the rectangular areas of the projectors into smaller triangles until no small triangle can be cut by a rectangular area any more; by looking up the projector to which each small triangle belongs and using the xy-to-uv coordinate correspondence table of that projector, finding the xy coordinates of the three vertices of each triangle.
2. The SLAM method based on 3D scanning according to claim 1, characterized in that step S3 is specifically: many physically coincident points can be calibrated between the two projectors captured in the two shots; let the point set of the first shot be Y and the point set of the second shot be X, then there exists a matrix M such that Y = X*M, where X and Y are n*4 matrices and M is a 4*4 matrix; to find M, the least squares solution M = (X^T*X)^(-1)*X^T*Y is used; after the M matrix is found, the points of every other projector can be transformed into the space of the first projector through Yi = M*Xi, where X^T denotes the transpose of X, (X^T*X)^(-1) denotes the inverse of (X^T*X), i = 1, 2, 3, ..., N, and N is the number of projectors.
3. The SLAM method based on 3D scanning according to claim 2, characterized in that the matrix transformation in step S4 is specifically: let the matrix R denote the rotation transformation and the matrix T denote the translation transformation; through the transformation Zi = R*Yi + T, all projectors are transformed into the same z-plane space.
4. The SLAM method based on 3D scanning according to claim 1, characterized in that step S9 is specifically: cut the rectangular area displayed by each projector with the horizontal and vertical lines; among the produced intersection points, take as one group 6 points formed by 2 horizontal lines and 3 vertical lines; draw the two diagonals in each of the two rectangles formed by the 6 points, producing 2 intersection points in total; the extension of the line connecting these 2 intersection points intersects the 3 vertical lines at 3 further points; the 3 newly produced intersection points, together with the original 6, again form groups of 6 points that intersect with the horizontal and vertical lines; the xy coordinates and uv coordinates of the newly produced intersection points are filled in using the xy and uv coordinates of the endpoints of the horizontal and vertical lines; the cutting is repeated until every pixel in the rectangular area has xy coordinates and uv coordinates, and finally an xy-to-uv coordinate correspondence table is generated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710062775.5A CN107071375B (en) | 2017-01-24 | 2017-01-24 | A kind of Slam methods based on 3D scannings |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107071375A CN107071375A (en) | 2017-08-18 |
CN107071375B (en) | 2018-09-04
Family
ID=59598631
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710062775.5A Active CN107071375B (en) | 2017-01-24 | 2017-01-24 | A kind of Slam methods based on 3D scannings |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107071375B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102611822A (en) * | 2012-03-14 | 2012-07-25 | 海信集团有限公司 | Projector and projection image rectifying method thereof |
CN104966063A (en) * | 2015-06-17 | 2015-10-07 | 中国矿业大学 | Mine multi-camera video fusion method based on GPU and CPU cooperative computing |
JP2015537228A (en) * | 2012-12-14 | 2015-12-24 | ファロ テクノロジーズ インコーポレーテッド | Apparatus for optically scanning and measuring the surrounding environment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140240469A1 (en) * | 2013-02-28 | 2014-08-28 | Motorola Mobility Llc | Electronic Device with Multiview Image Capture and Depth Sensing |
- 2017-01-24: CN CN201710062775.5A patent/CN107071375B/en (Active)
Non-Patent Citations (2)
Title |
---|
Zhang Yi, Du Fanyu, Luo Yuan, Xiong Yan. "A SLAM map creation method fusing laser and depth vision sensors" (一种融合激光和深度视觉传感器的SLAM地图创建方法). Application Research of Computers (计算机应用研究), Vol. 33, No. 10, 2016-10-31, full text. *
Tomasz Kornuta, Maciej Stefańczyk. "Utilization of textured stereovision for registration of". 2016 21st International Conference on Methods and Models in Automation and Robotics (MMAR), 2016. *
Also Published As
Publication number | Publication date |
---|---|
CN107071375A (en) | 2017-08-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| PE01 | Entry into force of the registration of the contract for pledge of patent right | Denomination of invention: SLAM method based on 3D scanning; Effective date of registration: 20190911; Granted publication date: 20180904; Pledgee: Chengdu Qingyang District Xingcheng Microfinance Co.,Ltd.; Pledgor: ILUMINTEL Inc.; Registration number: Y2019510000020 |
| PP01 | Preservation of patent right | Effective date of registration: 20230511; Granted publication date: 20180904 |