CN106170085A - A mirror-free stereo engine interaction method - Google Patents
A mirror-free stereo engine interaction method
- Publication number
- CN106170085A CN106170085A CN201610623012.9A CN201610623012A CN106170085A CN 106170085 A CN106170085 A CN 106170085A CN 201610623012 A CN201610623012 A CN 201610623012A CN 106170085 A CN106170085 A CN 106170085A
- Authority
- CN
- China
- Prior art keywords
- mirror
- free
- pictures
- text
- dimensional display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/172—Processing image signals image signals comprising non-image signal components, e.g. headers or format information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a mirror-free stereo engine interaction method. According to the characteristics of mirror-free stereo display, multiple stereoscopic cameras are created in a Unity3D scene, interactive content such as models, text and pictures is created, and three-dimensional interaction with that content is implemented through script code. A Shader samples and fuses the camera outputs to obtain a mirror-free stereo composite image; stereo display and image-text interaction responses are then obtained on a mirror-free stereo display terminal through input methods such as mouse, keyboard and gesture recognition.
Description
Technical Field
The invention relates to the technical field of mirror-free stereo display, in particular to a mirror-free stereo engine interaction method.
Background
Unity3D is a comprehensive game development tool from Unity Technologies that lets developers easily create interactive content for multiple platforms, such as three-dimensional video games, architectural visualizations and real-time three-dimensional animations; it is a fully integrated professional game engine. The engine is distinguished by strong cross-platform development support, rich 3D rendering effects, and free-form, flexible human-machine interaction. Unity is similar to software built around an interactive graphical development environment, such as Director, Blender, Virtools or Torque Game Builder; its editor runs under Windows and Mac OS X and can publish games to Windows, Mac, Wii, iPhone, Windows Phone 8, Android and other platforms. The differences between platforms (screen size, input methods, hardware capability and so on) cause developers great trouble, and the cross-platform nature of Unity3D spares them the work of porting between platforms, thereby saving development time. The Unity Web Player plug-in can also be used to publish web games, with browsing supported on both Mac and Windows; the web player is additionally supported as a Mac widget.
Glasses-free three-dimensional display is a novel image display technology: without wearing any auxiliary equipment (such as 3D glasses or helmets), viewers obtain a highly realistic visual experience through optical technologies such as lenticular lenses combined with video sources prepared by special algorithms. Mainstream glasses-free three-dimensional display is realized mainly with slit gratings and lenticular gratings. A slit grating consists of light-transmitting and light-blocking strips and separates the images of different viewpoints in space by blocking light, while a lenticular grating deflects light in space through the refraction of cylindrical lenses, achieving an effect similar to that of a slit grating. Glasses-free stereo display technology, based on the principle of binocular vision, has important applications in fields such as education, exhibition, science, audio-video, and mobile terminals.
At present, to obtain a better stereoscopic effect, most glasses-free stereoscopic display technologies synthesize several images with a certain parallax; the viewer sees two different images at different viewing positions, and the cerebral cortex fuses them into a stereoscopic image.
At the present stage, glasses-free stereoscopic display content consists mainly of glasses-free stereoscopic video, but such video suffers from long production cycles, high production cost and lack of interactivity, and therefore cannot drive the development of glasses-free stereo technology well. Image-text interaction systems are widely used in 2D display and offer good presentation, expressiveness and interactivity, but they lack highlights and visual impact, which likewise limits their development.
Disclosure of Invention
The invention aims to provide a mirror-free three-dimensional engine interaction method that effectively solves the problems of long production cycle, high production cost and lack of interactivity of mirror-free three-dimensional display content, as well as the insufficient visual impact and lack of highlights of image-text interaction systems, thereby greatly expanding the application fields of both and significantly promoting their development.
The invention can be realized by the following technical scheme:
the invention discloses a mirror-free stereo engine interaction method, which comprises the following steps:
a. creating characters and pictures needing interaction;
b. generating characters into character pictures with transparent channels;
c. importing the pictures needing interaction and the character pictures with transparent channels into a Unity3D engine, and adjusting the formats of the pictures and the character pictures into a two-dimensional interface format;
d. creating a plurality of Sprite components in the Unity3D engine, and assigning the pictures needing interaction and the text pictures with transparent channels to different Sprite components;
e. compiling script codes to realize interactive operation of pictures and text pictures;
f. establishing a plurality of virtual cameras in the same scene, adjusting the angles of the cameras, displaying pictures and character pictures, simultaneously placing the virtual cameras according to certain structural requirements, and adding a rendering map on each camera body;
g. creating a zero plane, and compiling scripts to realize that all virtual cameras focus on the zero plane;
h. calculating a view sub-pixel mapping matrix, compiling a corresponding Shader, sampling a map rendered by each camera, and adding a plurality of sampled parallax images to obtain a final composite image;
i. creating a mirror-free stereoscopic display part in the scene of the Unity3D engine, namely a camera for acquiring a composite image and a mirror-free stereoscopic display panel respectively, and outputting the obtained final image to the mirror-free stereoscopic display panel;
j. compiling the scenes by using a Unity3D engine, and publishing the scenes as an executable file of a PC (personal computer) end;
the PC end is connected to the mirror-free three-dimensional display terminal, the PC end image is output to the mirror-free three-dimensional display terminal, the executable file is operated, and the mirror-free three-dimensional display terminal obtains correct three-dimensional display;
and l, performing image-text interactive operation on the mirror-free three-dimensional display terminal to obtain image-text interactive response of correct three-dimensional display, and realizing the mirror-free three-dimensional image-text interactive method based on the Unity3D engine.
Further, in step b, the text is generated into a text picture with a transparent channel, the text is displayed in the Unity3D engine and is interactively operated, and the specific interactive operation mode is as follows: and clicking the character picture to respond to the corresponding interactive action.
Further, in step c, the text picture with the transparent channel is imported into the Unity3D engine, and the format of the text picture is adjusted into a two-dimensional interface format.
The invention discloses a mirror-free stereo engine interaction method, which has the following beneficial effects: the mirror-free stereo engine interaction method can greatly expand the application fields of the image-text interaction system and the mirror-free stereo display technology, and can realize mirror-free stereo display by slightly modifying the existing image-text interaction system, thereby greatly reducing the manufacturing cost of the mirror-free stereo display content, transplanting the rich and free interaction function of the image-text interaction system into the mirror-free stereo display technology, and well promoting the development of the mirror-free stereo display technology.
Detailed Description
In order to make the technical solution of the present invention better understood by those skilled in the art, the following provides a detailed description of the product of the present invention with reference to the examples.
The invention discloses a mirror-free stereo engine interaction method, which comprises the following steps:
a. creating characters and pictures needing interaction; the core of the mirror-free three-dimensional image-text interaction method is the interaction operation of pictures and characters, wherein the pictures are common 2D pictures.
b. Generating characters into character pictures with transparent channels; the characters are displayed and interacted with in the Unity3D engine, and the characters need to be generated into character pictures with transparent channels. The specific interactive operation mode is as follows: and clicking the character picture to respond to the corresponding interactive action.
c. Importing the pictures needing interaction and the text pictures with transparent channels into the Unity3D engine, and adjusting their formats to the two-dimensional interface format. To display the text correctly in the Unity3D engine, the text picture's format must be adjusted to the two-dimensional interface format, so that the background of the text picture can be made transparent and only the characters are displayed in the Unity3D scene.
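The effect of the transparent channel can be sketched as plain alpha compositing, which is essentially what the engine does when it draws a text picture over the scene (a minimal sketch; the pixel values below are made up):

```python
def blend(fg, bg, alpha):
    """Alpha-composite one foreground pixel over a background pixel.

    fg and bg are (R, G, B) tuples; alpha in [0, 1] comes from the text
    picture's transparent channel: 1 on the glyph, 0 on the background.
    """
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

scene_pixel = (30, 60, 90)
glyph_pixel = (255, 255, 255)

# A glyph pixel (alpha = 1) fully covers the scene pixel, while a
# transparent background pixel of the text picture (alpha = 0) leaves
# the scene untouched, so only the characters remain visible.
print(blend(glyph_pixel, scene_pixel, 1.0))  # (255, 255, 255)
print(blend(glyph_pixel, scene_pixel, 0.0))  # (30, 60, 90)
```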
d. Creating a plurality of Sprite components in the Unity3D engine, and assigning the pictures needing interaction and the text pictures with transparent channels to different Sprite components. Pictures and text cannot be dragged into the Unity3D scene directly; a Sprite component must serve as the carrier, and the pictures and text are assigned to Sprite components to be displayed correctly. Each Sprite component corresponds to one picture, and since a plurality of text pictures and pictures are operated interactively, a plurality of Sprite components must be created as carriers.
e. Compiling related script codes to realize interactive operation of pictures and text pictures; the interactive operation of the pictures and the character pictures is realized through the codes, and different pictures are clicked to respond to different interactive events;
f. Establishing a plurality of virtual cameras in the same scene, adjusting their angles to display the pictures and text pictures, placing the cameras according to certain structural requirements, and adding a render texture to each camera. The virtual stereo cameras are arranged so as to shoot the same image-text scene synchronously and obtain multiple parallax images, providing slightly different views to the viewer's left and right eyes and thus producing stereo vision. A Render Texture is assigned as the target texture of each camera; a Render Texture is a special type of texture that is created and updated in real time at run time. To use one, create a new Render Texture and assign a camera to render into it; the result can then be used like a regular texture.
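One possible camera layout for this step can be sketched as a linear, equally spaced arrangement (an illustrative assumption; the patent only requires "certain structural requirements", and the function name and spacing value here are made up):

```python
def camera_offsets(n_views, spacing):
    """Horizontal positions for n_views virtual cameras, centred on the
    scene axis, with adjacent cameras `spacing` apart. Each camera then
    renders one parallax image of the same image-text scene."""
    return [(i - (n_views - 1) / 2) * spacing for i in range(n_views)]

offsets = camera_offsets(8, 0.065)   # e.g. 8 views, 65 mm between cameras
print(offsets[0], offsets[-1])       # symmetric about the scene axis
```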
g. Creating a zero plane, and writing scripts so that all virtual cameras focus on the zero plane. With the cameras focused on the zero plane, objects between the zero plane and the cameras appear to pop out of the screen, while objects beyond the zero plane appear to recede into it, achieving a real spatial three-dimensional effect;
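The pop-out and recede behaviour around the zero plane can be quantified with the textbook off-axis stereo parallax model (not taken from the patent; the numeric values are illustrative):

```python
def screen_parallax(z, z0, baseline, focal):
    """Parallax (in image-plane units) of a point at depth z for a camera
    pair with the given baseline and focal length, converged on the zero
    plane at depth z0 (off-axis model). Negative parallax: the point
    appears in front of the screen (pops out); positive: behind the
    screen; zero: exactly on the screen plane."""
    return focal * baseline * (1 / z0 - 1 / z)

z0 = 5.0                                        # zero-plane distance (made up)
print(screen_parallax(3.0, z0, 0.065, 0.05))    # < 0: between camera and zero plane
print(screen_parallax(5.0, z0, 0.065, 0.05))    # = 0: on the zero plane
print(screen_parallax(9.0, z0, 0.065, 0.05))    # > 0: beyond the zero plane
```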
h. calculating a view sub-pixel mapping matrix, compiling a corresponding Shader, sampling a map rendered by each camera, and adding a plurality of sampled parallax images to obtain a final composite image;
First, it is determined which parallax image each RGB sub-pixel on the 2D display screen should take its value from. The multi-view sub-pixel mapping matrix is given by van Berkel's formula:

N(k, l) = ((k + k_off - 3 l tan α) mod X) · N_total / X

where X is the number of RGB sub-pixels covered in the horizontal direction by one raster period, (k, l) is the coordinate position of the RGB sub-pixel, α is the tilt angle between the raster axis and the vertical axis of the LCD screen, k_off is the horizontal displacement between the upper-left edge of the 2D display screen and the edge point of the raster unit, and N_total is the total number of viewpoints, that is, the number of parallax images participating in synthesis. With this formula it can be calculated, for each sub-pixel on the 2D display screen, which parallax image and which coordinate position its grey value should be taken from. The viewpoint mapping matrix is then written into a Shader, the Shader performs sub-pixel sampling on the maps rendered by the cameras, and the sampled parallax images are superimposed to obtain the final composite image.

A Shader (shading program) runs on the GPU and performs lighting, shading and texture rendering on three-dimensional objects. There are two basic types: vertex shaders and fragment shaders. A vertex shader processes and transforms the vertex positions of a mesh rendered on screen, and its output is passed to the next stage of the pipeline. After the geometry has been rasterized by the hardware, the fragment shader runs; it performs various tests on each fragment, and fragments that pass are written to the rendered output frame and become visible pixels on the display screen. As GPU hardware performance has increased, shader programming has progressed from the initial fixed rendering pipeline to the programmable rendering pipeline.
The fixed rendering pipeline is the model supported by older graphics cards; its shading is based on per-vertex lighting, which is faster to compute but visually coarser than per-pixel lighting. The fixed pipeline is the standard geometry and lighting pipeline, controlling world and view projection transformations, fixed lighting and texture blending, and essentially all graphics cards can run it. The programmable rendering pipeline instead makes the vertex and pixel stages of the rendering pipeline programmable.
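The sub-pixel interleaving of step h can be sketched on the CPU as follows, assuming van Berkel's form of the mapping, which matches the symbols described in step h (the period, view count and image sizes below are toy values; a real implementation would evaluate this per fragment inside the Shader):

```python
def view_index(k, l, x_period, n_views, tan_alpha, k_off=0.0):
    """Select the parallax image for the RGB sub-pixel at horizontal
    sub-pixel index k, row l: x_period is the number of sub-pixels one
    raster period covers horizontally, tan_alpha the tangent of the
    raster tilt angle, k_off the horizontal offset of the raster
    relative to the screen's upper-left edge."""
    return int(((k + k_off - 3 * l * tan_alpha) % x_period) * n_views / x_period)

def composite(views, width, height, x_period, tan_alpha, k_off=0.0):
    """Interleave the parallax images into the final composite frame,
    sub-pixel by sub-pixel (views[v][l][k] is a grey value; there are
    3 RGB sub-pixels per pixel column)."""
    n_views = len(views)
    frame = [[0] * (3 * width) for _ in range(height)]
    for l in range(height):
        for k in range(3 * width):
            v = view_index(k, l, x_period, n_views, tan_alpha, k_off)
            frame[l][k] = views[v][l][k]
    return frame

# Toy setup: 4 views, a raster period of 6 sub-pixels, no tilt; each
# "view" is a 2x2-pixel image filled with its own view number.
views = [[[v] * 6 for _ in range(2)] for v in range(4)]
print(composite(views, 2, 2, 6.0, 0.0)[0])   # [0, 0, 1, 2, 2, 3]
```

With a non-zero tilt the selected view also shifts from row to row, which is what produces the characteristic slanted interleaving pattern of lenticular panels.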
i. Creating the mirror-free stereoscopic display part in the Unity3D scene, namely a camera for acquiring the composite image and a mirror-free stereoscopic display panel, and outputting the final image to the panel. A new display panel is created as the carrier of the composite image, the final composite image is assigned to the panel, and the newly created camera acquires the image on the mirror-free stereoscopic display panel and presents it in the game window of the Unity3D engine for related debugging work.
j. Compiling the scenes by using a Unity3D engine, and publishing the scenes as an executable file of a PC (personal computer) end; the manufactured mirror-free three-dimensional image-text interaction scene is published as an executable file so as to obtain a correct three-dimensional effect at a mirror-free three-dimensional display terminal, and since the mirror-free three-dimensional display terminal is different from other 2D display terminals, the correct three-dimensional effect can be displayed only by matching each pixel point in the scene with the pixel point at the same position of the mirror-free three-dimensional display terminal.
k. Connecting the PC end to the mirror-free three-dimensional display terminal, outputting the PC-end image to the terminal, and running the executable file so that the terminal obtains correct three-dimensional display. Outputting the PC picture to the mirror-free three-dimensional display terminal and running the executable file there makes the program match the terminal's pixels exactly and yields correct mirror-free three-dimensional display.
And l, performing image-text interactive operation on the mirror-free three-dimensional display terminal to obtain image-text interactive response of correct three-dimensional display, and realizing the mirror-free three-dimensional image-text interactive method based on the Unity3D engine. And performing mirror-free stereo image-text interaction operation on a mirror-free stereo display terminal to obtain correct image-text interaction response and correct mirror-free stereo effect display, thereby realizing a mirror-free stereo image-text interaction method based on a Unity3D engine.
The foregoing is merely a preferred embodiment of the invention and is not intended to limit the invention in any manner. Those of ordinary skill in the art may readily implement the invention with reference to the foregoing specification and claims, and may use the disclosed conception and specific embodiments as a basis for designing or modifying other structures serving the same purposes. Any changes, modifications and equivalent variations of the above embodiments made according to the essential technique of the present invention remain within the protection scope of the technical solution of the present invention.
Claims (3)
1. A mirror-free stereo engine interaction method is characterized by comprising the following steps:
a. creating characters and pictures needing interaction;
b. generating the characters into character pictures with transparent channels;
c. importing the pictures needing interaction and the character pictures with transparent channels into a Unity3D engine, and adjusting the format of the pictures to a two-dimensional interface format;
d. creating a plurality of Sprite components in a Unity3D engine, and assigning pictures needing interaction and text pictures with transparent channels to different Sprite components;
e. compiling script codes to realize the interactive operation of the pictures and the text pictures;
f. establishing a plurality of virtual cameras in the same scene, adjusting the angles of the cameras, displaying pictures and character pictures, simultaneously placing the virtual cameras according to certain structural requirements, and adding a rendering map on each camera body;
g. creating a zero plane, and compiling scripts to realize that all virtual cameras focus on the zero plane;
h. calculating a view sub-pixel mapping matrix, compiling a corresponding Shader, sampling a mapping rendered by each camera, and adding a plurality of parallax images subjected to sampling processing to obtain a final composite image;
i. creating a mirror-free three-dimensional display part in the scene of the Unity3D engine, namely a camera for acquiring a composite image and a mirror-free three-dimensional display panel respectively, and outputting the obtained final image to the mirror-free three-dimensional display panel;
j. compiling the scenes by using a Unity3D engine, and publishing the scenes into an executable file of a PC (personal computer) end;
k. the PC end is connected to the mirror-free three-dimensional display terminal, the PC end image is output to the mirror-free three-dimensional display terminal, the executable file is operated, and the mirror-free three-dimensional display terminal obtains correct three-dimensional display;
and l, performing image-text interactive operation on the mirror-free three-dimensional display terminal to obtain image-text interactive response of correct three-dimensional display, and realizing the mirror-free three-dimensional image-text interactive method based on the Unity3D engine.
2. The mirror-free stereo engine interaction method of claim 1, wherein: in step b, the text is generated into a text picture with a transparent channel, the text is displayed in the Unity3D engine and is interactively operated, and the specific interactive operation mode is as follows: and clicking the character picture to respond to the corresponding interactive action.
3. The mirror-free stereo engine interaction method of claim 1, wherein: in step c, the text picture with the transparent channel is led into a Unity3D engine, and the format of the text picture is adjusted into a two-dimensional interface format.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610623012.9A CN106170085A (en) | 2016-08-02 | 2016-08-02 | A mirror-free stereo engine interaction method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610623012.9A CN106170085A (en) | 2016-08-02 | 2016-08-02 | A mirror-free stereo engine interaction method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN106170085A true CN106170085A (en) | 2016-11-30 |
Family
ID=58065641
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610623012.9A Pending CN106170085A (en) | A mirror-free stereo engine interaction method | 2016-08-02 | 2016-08-02 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106170085A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108986193A (en) * | 2018-07-10 | 2018-12-11 | 武汉国遥新天地信息技术有限公司 | A three-dimensional text outline drawing method |
| CN113259651A (en) * | 2021-07-07 | 2021-08-13 | 江西科骏实业有限公司 | Stereoscopic display method, apparatus, medium, and computer program product |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1996367A (en) * | 2006-12-28 | 2007-07-11 | 于慧 | 360 degree automatic analog simulation device system and method for implementing same |
| CN102253715A (en) * | 2011-07-20 | 2011-11-23 | 康佳集团股份有限公司 | Intelligent image-text human-computer interaction method and intelligent image-text human-computer interaction system of mobile terminal |
| KR20140079099A (en) * | 2012-12-18 | 2014-06-26 | 홍병기 | Mobile platform unity player |
| CN103957400A (en) * | 2014-05-09 | 2014-07-30 | 北京乐成光视科技发展有限公司 | Naked eye 3D display system based on Unity3D game engine |
| CN204156999U (en) * | 2014-05-09 | 2015-02-11 | 北京乐成光视科技发展有限公司 | A naked-eye 3D display system based on the Unity3D game engine |
| CN104599319A (en) * | 2014-12-26 | 2015-05-06 | 福建工程学院 | Real-time generation method of 3D scene |
| CN105007477A (en) * | 2015-07-06 | 2015-10-28 | 四川长虹电器股份有限公司 | Method for realizing naked eye 3D display based on Unity3D engine |
| CN105282536A (en) * | 2015-10-27 | 2016-01-27 | 成都斯斐德科技有限公司 | Naked-eye 3D picture-text interaction method based on Unity3D engine |
-
2016
- 2016-08-02 CN CN201610623012.9A patent/CN106170085A/en active Pending
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1996367A (en) * | 2006-12-28 | 2007-07-11 | 于慧 | 360 degree automatic analog simulation device system and method for implementing same |
| CN102253715A (en) * | 2011-07-20 | 2011-11-23 | 康佳集团股份有限公司 | Intelligent image-text human-computer interaction method and intelligent image-text human-computer interaction system of mobile terminal |
| KR20140079099A (en) * | 2012-12-18 | 2014-06-26 | 홍병기 | Mobile platform unity player |
| CN103957400A (en) * | 2014-05-09 | 2014-07-30 | 北京乐成光视科技发展有限公司 | Naked eye 3D display system based on Unity3D game engine |
| CN204156999U (en) * | 2014-05-09 | 2015-02-11 | 北京乐成光视科技发展有限公司 | A naked-eye 3D display system based on the Unity3D game engine |
| CN104599319A (en) * | 2014-12-26 | 2015-05-06 | 福建工程学院 | Real-time generation method of 3D scene |
| CN105007477A (en) * | 2015-07-06 | 2015-10-28 | 四川长虹电器股份有限公司 | Method for realizing naked eye 3D display based on Unity3D engine |
| CN105282536A (en) * | 2015-10-27 | 2016-01-27 | 成都斯斐德科技有限公司 | Naked-eye 3D picture-text interaction method based on Unity3D engine |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108986193A (en) * | 2018-07-10 | 2018-12-11 | 武汉国遥新天地信息技术有限公司 | A three-dimensional text outline drawing method |
| CN113259651A (en) * | 2021-07-07 | 2021-08-13 | 江西科骏实业有限公司 | Stereoscopic display method, apparatus, medium, and computer program product |
| CN113259651B (en) * | 2021-07-07 | 2021-10-15 | 江西科骏实业有限公司 | Stereoscopic display method, apparatus, medium, and computer program product |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN105282536A (en) | Naked-eye 3D picture-text interaction method based on Unity3D engine | |
| CN103337095B (en) | The tridimensional virtual display methods of the three-dimensional geographical entity of a kind of real space | |
| US10719912B2 (en) | Scaling and feature retention in graphical elements defined based on functions | |
| CN115552451A (en) | Multi-layer reprojection techniques for augmented reality | |
| US10628995B2 (en) | Anti-aliasing of graphical elements defined based on functions | |
| CN103957400A (en) | Naked eye 3D display system based on Unity3D game engine | |
| CN104820497B (en) | A kind of 3D interactive display systems based on augmented reality | |
| CN106527857A (en) | Virtual reality-based panoramic video interaction method | |
| CN106504339A (en) | Historical relic 3D methods of exhibiting based on virtual reality | |
| CN105447898A (en) | Method and device for displaying 2D application interface in virtual real device | |
| US20130293547A1 (en) | Graphics rendering technique for autostereoscopic three dimensional display | |
| US10540918B2 (en) | Multi-window smart content rendering and optimizing method and projection method based on cave system | |
| CN102243768B (en) | A three-dimensional virtual scene rendering method | |
| JP6553184B2 (en) | Digital video rendering | |
| CN104702936A (en) | Virtual reality interaction method based on glasses-free 3D display | |
| CN105007477A (en) | Method for realizing naked eye 3D display based on Unity3D engine | |
| CN117176936B (en) | A freely expandable stereoscopic digital sandbox system and light field rendering method | |
| CN204156999U | A naked-eye 3D display system based on the Unity3D game engine | |
| CN109493409B (en) | Virtual three-dimensional scene stereo picture drawing method based on left-right eye space multiplexing | |
| CN105578172B | Naked-eye 3D image display method based on the Unity3D engine | |
| EP4386682A1 (en) | Image rendering method and related device thereof | |
| CN113345113B (en) | Content presentation method based on CAVE system | |
| CN106170085A | A mirror-free stereo engine interaction method | |
| CN106547557A | A multi-screen interaction method based on virtual reality and naked-eye 3D | |
| WO2025261175A1 (en) | Display method, display device, helmet mounted display, and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20161130 |