CN112584125B - Three-dimensional image display device and display method thereof - Google Patents
Publication number: CN112584125B (application number CN201910937968.XA)
- Authority: CN (China)
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- H—ELECTRICITY › H04—ELECTRIC COMMUNICATION TECHNIQUE › H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers › H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals › H04N13/106—Processing image signals
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals › H04N13/194—Transmission of image signals
Abstract
The application relates to the field of stereoscopic images and discloses a three-dimensional image display device comprising: a multi-view autostereoscopic display screen comprising m×n composite pixels and thus defining a display resolution of m×n; a video signal interface for receiving video frames of a 3D video signal, the video frames comprising two images with a resolution of m×n or a composite image with a resolution of 2m×n or m×2n; and at least one 3D video processing unit. Each composite pixel comprises a plurality of composite sub-pixels, and each composite sub-pixel is composed of i same-color sub-pixels corresponding to i viewpoints, where i ≥ 3. The at least one 3D video processing unit is configured to render the sub-pixels of each composite sub-pixel based on the two images or the composite image. The application also discloses a three-dimensional (3D) image display method. The device and method of the present application can achieve an excellent display effect with a small amount of rendering computation.
Description
Technical Field
The present application relates to naked-eye stereoscopic display technology, for example, to a three-dimensional image display device and a display method thereof.
Background
Stereoscopic imaging is one of the hot technologies in the video industry and has driven the shift from flat-panel display to stereoscopic display. Stereoscopic display technology is a key link in the stereoscopic image industry and falls mainly into two categories: glasses-type stereoscopic display and naked-eye stereoscopic display. Naked-eye stereoscopic display is a technology in which viewers can watch stereoscopic pictures without wearing glasses. Compared with glasses-type stereoscopic display, naked-eye stereoscopic display is an autostereoscopic display technology and reduces the constraints placed on viewers.
Naked-eye stereoscopic display is based on viewpoints. Recently, multi-viewpoint naked-eye stereoscopic displays have been proposed, in which a sequence of parallax images (frames) is formed at different positions in space, so that stereoscopic image pairs with a parallax relationship enter a person's left and right eyes respectively, giving the viewer a stereoscopic impression. In a conventional multi-view autostereoscopic (3D) display with, for example, N viewpoints, the viewpoints in space are each served by a set of independent pixels on the display panel.
In the construction of a conventional autostereoscopic display, however, the 3D display effect is obtained by placing a grating on one or both sides of a 2D display panel, and the transmission and display of 3D images or video are still based on the 2D display panel. This creates a dilemma: reduced resolution and a sharp increase in rendering computation.
Since the total resolution of the 2D display panel is fixed, the effective resolution drops drastically; for example, the column resolution may fall to 1/N of the original. The pixel arrangement of a multi-view display can likewise reduce the resolution in both the horizontal and vertical directions by a factor of two.
To maintain a high-definition display, a high-definition N-view 3D display device requires, for example, N times the resolution of a 2D display device, so the transmission bandwidth occupied between the terminal and the display is also multiplied by N, making the signal transmission volume excessive. Moreover, pixel-level rendering of such N-fold high-resolution images can severely occupy the computational resources of the terminal or of the display itself, causing a significant performance degradation.
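To make the bandwidth argument concrete, a small back-of-the-envelope sketch follows; the resolution, view count, and bytes-per-pixel are hypothetical example figures, not values taken from the patent:

```python
# Illustrative bandwidth arithmetic: sending pixel-level data for an N-view
# display multiplies the per-frame payload by N, while sending two m x n
# images (as in the scheme of this application) keeps it at 2x the 2D payload.
def frame_bytes(width, height, bytes_per_pixel=3):
    """Uncompressed size of one frame in bytes."""
    return width * height * bytes_per_pixel

m, n, views = 1920, 1080, 6            # hypothetical example figures
payload_2d = frame_bytes(m, n)         # one 2D frame
payload_n_view = views * payload_2d    # naive per-view transmission
payload_two_images = 2 * payload_2d    # two m x n images per 3D video frame
```

Under these example figures the naive per-view scheme costs N times the 2D payload, while the two-image scheme stays at a fixed factor of two regardless of the number of viewpoints.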
Furthermore, since the transmission and display of 3D images or video are based on a 2D display panel, multiple format adjustments and image or video display adaptations may be required. This may further increase the rendering computation on the one hand, and affect the display of the 3D images or video on the other.
This background is for ease of understanding only and is not to be construed as an admission of prior art.
Disclosure of Invention
The following presents a simplified summary of some embodiments in order to provide a basic understanding of some aspects of the disclosed embodiments. It is not intended to identify key or critical elements or to delineate the scope of the invention, but serves as a prelude to the more detailed description that follows.
Embodiments of the present disclosure are intended to provide a three-dimensional image display apparatus and a three-dimensional image display method, which aim to overcome or alleviate at least some of the problems mentioned above.
In one aspect, there is provided a three-dimensional image display device including: a multi-view autostereoscopic display screen comprising m×n composite pixels and thus defining a display resolution of m×n; a video signal interface for receiving video frames of a 3D video signal, wherein the video frames comprise two images with a resolution of m×n or a composite image with a resolution of 2m×n or m×2n; and at least one 3D video processing unit; wherein each composite pixel comprises a plurality of composite sub-pixels, each composite sub-pixel being composed of i same-color sub-pixels corresponding to i viewpoints, where i ≥ 3; and wherein the at least one 3D video processing unit is configured to render at least one sub-pixel in each composite sub-pixel based on one of the two images and at least another sub-pixel in each composite sub-pixel based on the other of the two images, or is configured to render at least two sub-pixels in each composite sub-pixel based on the composite image.
In the disclosed embodiments, the resolution of the multi-view autostereoscopic display screen is defined in terms of composite pixels (composite sub-pixels), so that transmission and display take the resolution defined by composite pixels into consideration. This effectively reduces the amount of transmission and rendering computation while providing an excellent display effect. In contrast, conventional 3D displays still handle the transmission and display of 3D images or video on the basis of a 2D display panel, and thus face reduced resolution and increased computation when performing multi-view autostereoscopic display, as well as problems of multiple format adjustments and image or video display adaptations.
In one embodiment, the three-dimensional image display device further comprises a processor, a memory, and an external interface, the video signal interface being an internal interface and configured to communicatively connect the processor and the at least one 3D video processing unit.
In one embodiment, the processor includes a register configured to receive information related to the m×n display resolution, independent of the i viewpoints. In the disclosed embodiments, 3D video transmission over the video signal interface serving as the internal interface between the processor and the 3D video processing unit does not need to take the multi-viewpoint factor into account, which simplifies the transmission and processing of data.
In an embodiment, the memory is configured to store a compressed 3D video signal, or the external interface is configured to receive a compressed 3D video signal; the three-dimensional image display device further comprises a codec configured to decompress and decode the compressed 3D video signal and to transmit the decompressed 3D video signal to the at least one 3D video processing unit via the video signal interface.
In one embodiment, the internal interface is selected from at least one of a MIPI interface, a mini-MIPI interface, an LVDS interface, a mini-LVDS interface, or a DisplayPort interface.
In one embodiment, at least two 3D video processing units are provided, each 3D video processing unit being configured to be assigned a plurality of rows or columns of composite pixels or composite sub-pixels, respectively.
In one embodiment, the at least one 3D video processing unit is an FPGA or ASIC chip, or an FPGA or ASIC chipset.
In one embodiment, the three-dimensional image display device further includes a formatter for preprocessing the video frames of the 3D video signal so that the two images have a resolution of m×n or the composite image has a resolution of 2m×n or m×2n.
In one embodiment, the two images are: left-eye and right-eye parallax images in side-by-side format; a rendered color image and a depth image in side-by-side format; left-eye and right-eye parallax images in top-bottom format; or a rendered color image and a depth image in top-bottom format.
In one embodiment, the composite image is: a left-and-right interleaved composite image of left-eye and right-eye parallax images; an up-and-down interleaved composite image of a rendered color image and a depth image; a checkerboard-format composite image of left-eye and right-eye parallax images; or a checkerboard-format composite image of a rendered color image and a depth image.
In one embodiment, each composite subpixel comprises a single row or column of multiple subpixels.
In one embodiment, each composite subpixel comprises a plurality of subpixels in an array.
In one embodiment, the plurality of composite sub-pixels includes a red composite sub-pixel, a green composite sub-pixel, and a blue composite sub-pixel; or each composite pixel is composed of a red composite sub-pixel, a green composite sub-pixel, and a blue composite sub-pixel.
In one embodiment, the three-dimensional image display device further comprises eye tracking means or an eye tracking data interface for acquiring real-time eye tracking data.
In another aspect, a three-dimensional (3D) image display method is provided for a multi-view autostereoscopic display screen comprising m×n composite pixels and thus defining a display resolution of m×n, each composite pixel comprising a plurality of composite sub-pixels, and each composite sub-pixel being composed of i same-color sub-pixels corresponding to i viewpoints, where i ≥ 3.
In some embodiments, a three-dimensional image display method includes:
transmitting a video frame of a 3D video signal, the video frame comprising two images with a resolution of m×n or a composite image with a resolution of 2m×n or m×2n; and
rendering at least one sub-pixel in each composite sub-pixel based on one of the two images and at least another sub-pixel in each composite sub-pixel based on the other of the two images; or alternatively
rendering at least two sub-pixels in each composite sub-pixel based on the composite image.
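The steps above can be sketched as follows. The claim does not fix which viewpoints each of the two images feeds; as a hedged, illustrative choice (not a mapping taken from the patent), this sketch assigns the first half of the i viewpoint sub-pixels to the left-eye image and the rest to the right-eye image:

```python
import numpy as np

def render_composite_subpixels(left_img, right_img, i_views):
    """Fill the i same-color sub-pixels of every composite sub-pixel.

    left_img, right_img: (n, m, 3) arrays with one value per composite
    sub-pixel (rows x columns x R/G/B). Returns an (n, m, 3, i_views) array
    holding one value per viewpoint sub-pixel. The half-and-half viewpoint
    assignment below is an illustrative assumption, not from the patent.
    """
    n, m, c = left_img.shape
    out = np.empty((n, m, c, i_views), dtype=left_img.dtype)
    half = i_views // 2
    out[..., :half] = left_img[..., None]   # viewpoints V1..V(i/2)
    out[..., half:] = right_img[..., None]  # viewpoints V(i/2+1)..Vi
    return out
```

Note that the rendering stage works at the m×n composite-pixel resolution of the incoming images; the viewpoint dimension only appears on the panel side, which is what keeps the transmitted payload independent of i.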
In the technical solution of the disclosed embodiments, since the display resolution of the multi-view autostereoscopic display screen matches the resolution of the video frames of the 3D video signal, transmitting the video frames of the 3D video signal occupies no additional transmission bandwidth; and since the display resolution of the screen matches the resolution of the images generated from the video frames, no format adjustment is required for the generated images.
In one embodiment, the three-dimensional image display method further includes: before transmitting the video frames of the 3D video signal, transmitting information related to the m×n display resolution, independent of the i viewpoints.
In one embodiment, the 3D video signal is a decompressed 3D video signal.
In one embodiment, the three-dimensional image display method further includes:
before transmitting the video frames of the 3D video signal, reading a stored compressed 3D video signal or receiving a compressed 3D video signal; and
decompressing and decoding the compressed 3D video signal into a decompressed 3D video signal.
In one embodiment, the three-dimensional image display method further includes: the video frames of the 3D video signal are preprocessed so that the two images have an mxn resolution or the composite image has a 2 mxn or mx2 n resolution.
In another aspect, there is provided a three-dimensional image display device comprising a processor and a memory storing program instructions, the three-dimensional image display device further comprising a multi-view autostereoscopic display screen comprising m×n composite pixels and thus defining a display resolution of m×n, each composite pixel comprising a plurality of composite sub-pixels, and each composite sub-pixel being composed of i same-color sub-pixels corresponding to i viewpoints, where i ≥ 3; the processor is configured to perform the above three-dimensional image display method when executing the program instructions.
In one embodiment, the three-dimensional image display device is a smart television, a smart cellular phone, a tablet computer, a personal computer, or a wearable device.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:
Fig. 1A to 1C are schematic structural diagrams of a three-dimensional image display device according to embodiments of the present disclosure;
Fig. 2 is a schematic hardware configuration diagram of a three-dimensional image display device according to an embodiment of the present disclosure;
Fig. 3 is a software architecture diagram of a three-dimensional image display device according to an embodiment of the present disclosure;
Fig. 4A to 4C are schematic diagrams of composite pixels according to embodiments of the present disclosure;
Fig. 5A to 5E are schematic diagrams of the formats and contents of images contained in video frames of a 3D video signal according to embodiments of the present disclosure;
Fig. 6 is a schematic diagram of an arrangement with at least two 3D video processing units according to an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of the steps of a three-dimensional (3D) image display method according to an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of the steps of a three-dimensional (3D) image display method according to an embodiment of the present disclosure;
Fig. 9 is a schematic diagram of the steps of a three-dimensional (3D) image display method according to an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of the steps of a three-dimensional (3D) image display method according to an embodiment of the present disclosure;
Fig. 11 is a schematic diagram of the steps of a three-dimensional (3D) image display method according to an embodiment of the present disclosure; and
Fig. 12 is a schematic structural diagram of a three-dimensional image display device according to an embodiment of the present disclosure.
Detailed Description
So that the features and technical content of the disclosed embodiments can be understood in more detail, the embodiments of the disclosure, briefly summarized above, are described more particularly below with reference to the appended drawings, which are not intended to limit the disclosed embodiments.
Herein, "autostereoscopic (naked-eye 3D) display" refers to a technology in which a viewer can observe stereoscopic images on a flat display screen without wearing glasses, including but not limited to "parallax barrier", "lenticular lens", and "directional backlight" technologies.
In this context, "multi-view" has its conventional meaning in the art, meaning that different images displayed by different pixels or sub-pixels of a display screen can be viewed at different locations (viewpoints) in space. Herein, multi-view shall mean at least 3 views.
In this context, "grating" has a broad interpretation in the art, including but not limited to "parallax barrier" gratings and lens-based gratings such as "lenticular" gratings.
In this context, "lens" or "lenticular" has the meaning conventional in the art, including, for example, cylindrical lenses and spherical lenses.
In this context, a conventional "pixel" refers to the smallest display unit of a 2D display in terms of its resolution. A conventional "sub-pixel" refers to a single-color unit within a pixel. Thus, a single pixel comprises a set of sub-pixels, such as RGB (red-green-blue), RGBW (red-green-blue-white), RYYB (red-yellow-yellow-blue), or RGBYC (red-green-blue-yellow-cyan). However, the definition of a pixel herein does not require that its sub-pixels be arranged in close proximity; other components, such as sub-pixels of other pixels, may be located between the sub-pixels of the same "pixel".
In some embodiments herein, a "composite pixel", such as a "super pixel", when applied to multi-view technology in the field of autostereoscopic displays, refers to the smallest display unit when an autostereoscopic display provides a multi-view display; this does not exclude that a single composite pixel for multi-view technology may comprise, or appear as, multiple pixels of a 2D display. Herein, unless specifically described as a composite pixel, a 3D pixel, or a "super pixel" for "3D display" or "multi-view" applications, a pixel refers to the smallest display unit in 2D display. A "super pixel" means a composite pixel that provides a 3D display with at least 12 viewpoints. Likewise, a "composite sub-pixel" or "super sub-pixel" of a multi-view naked-eye 3D display refers to a single-color composite sub-pixel appearing in the composite pixel when the autostereoscopic display provides a multi-view display.
In some embodiments of the disclosure, a three-dimensional image display device 100 is provided, including: a multi-view autostereoscopic display screen 110 comprising m×n composite pixels CP and thus defining a display resolution of m×n; a video signal interface 140 for receiving video frames of a 3D video signal, wherein the video frames contain two images with a resolution of m×n or a composite image with a resolution of 2m×n or m×2n; and at least one 3D video processing unit 130.
In some embodiments, each composite pixel CP includes a plurality of composite sub-pixels CSP, each composite sub-pixel being composed of i same-color sub-pixels corresponding to i viewpoints, where i ≥ 3.
In some embodiments, the at least one 3D video processing unit 130 is configured to render at least one sub-pixel in each composite sub-pixel based on one of the two images and at least another sub-pixel in each composite sub-pixel based on the other of the two images.
In some further embodiments, the at least one 3D video processing unit 130 is configured to render at least two sub-pixels in each composite sub-pixel based on the composite image.
Fig. 1A illustrates a schematic structure of a three-dimensional image display device 100 provided in one embodiment of the present disclosure. Referring to fig. 1A, in one embodiment of the present disclosure, a three-dimensional image display device 100 is provided, which may include a multi-view autostereoscopic display screen 110, at least one 3D video processing unit 130, and a video signal interface 140 for receiving video frames of a 3D video signal.
The multi-view autostereoscopic display screen 110 may include a display panel and a grating (not labeled) overlaying the display panel. In the embodiment shown in fig. 1A, the multi-view autostereoscopic display screen 110 comprises m columns and n rows of composite pixels, thus defining an m×n display resolution.
In some embodiments, each composite pixel includes a plurality of composite sub-pixels, each composite sub-pixel being composed of i same-color sub-pixels corresponding to i viewpoints, i ≥ 3. In the embodiment shown in fig. 1A, i = 6, but other values of i are conceivable. Accordingly, the multi-view autostereoscopic display in the illustrated embodiment has i = 6 viewpoints (V1-V6), but more or fewer viewpoints are contemplated.
Referring to fig. 1A and 4A in combination, in the illustrated embodiment, each composite pixel includes three composite sub-pixels, and each composite sub-pixel is composed of 6 same-color sub-pixels corresponding to the 6 viewpoints (i = 6). The three composite sub-pixels correspond to the three colors red (R), green (G), and blue (B), respectively. That is, the three composite sub-pixels of each composite pixel have 6 red, 6 green, and 6 blue sub-pixels, respectively.
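For illustration, a minimal addressing sketch of this layout follows; the row-major ordering and the constants are assumptions made for the example, not taken from the patent:

```python
# Addressing sketch for the Fig. 1A / 4A arrangement: each composite pixel
# holds three composite sub-pixels (R, G, B), and each composite sub-pixel
# is a single row of i = 6 same-color viewpoint sub-pixels.
I_VIEWS = 6
COLORS = ("R", "G", "B")

def subpixel_id(row, col, color, view, m=1920):
    """Flat index of one physical sub-pixel on the panel.

    Composite pixels are assumed laid out row-major across an m-column
    screen; within one composite pixel the three composite sub-pixels are
    stacked, and within a composite sub-pixel the i viewpoint sub-pixels
    sit side by side (the Fig. 4A single-row arrangement).
    """
    c = COLORS.index(color)
    return ((row * m + col) * len(COLORS) + c) * I_VIEWS + view
```

With this ordering, the 6 red sub-pixels of the first composite pixel occupy indices 0-5, its green sub-pixels 6-11, and so on; the layouts of figs. 4B and 4C would change only the physical position, not the logical addressing.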
In the embodiment shown in fig. 1A and 4A, the composite subpixels 410, 420, 430 in the composite pixel 400 are arranged in parallel. Each composite subpixel 410, 420, 430 includes subpixels 411, 421, 431 in a single row. It is conceivable that the composite sub-pixels in the composite pixel are arranged differently or that the sub-pixels in the composite sub-pixel are arranged differently.
As shown in fig. 4B, each composite subpixel 440, 450, 460 includes subpixels 441, 451, 461 in a single column.
As shown in fig. 4C, the three composite sub-pixels 470, 480, 490 in the composite pixel 400 are arranged, for example, in a "delta" shape. In the embodiment shown in fig. 4C, the sub-pixels 471, 481, 491 in each composite sub-pixel 470, 480, 490 can be arranged in a (3×2) array.
In some embodiments, such as shown in fig. 1A-1C, the three-dimensional image display device 100 may be provided with a single 3D video processing unit 130. A single 3D video processing unit 130 simultaneously processes the rendering of each composite subpixel of each composite pixel of the autostereoscopic display screen 110.
In other embodiments, such as shown in FIG. 6, the three-dimensional image display device 100 may be provided with at least two 3D video processing units 130 that process the rendering of each composite subpixel of each composite pixel of the autostereoscopic display screen 110 in parallel, in series, or in a combination of series and parallel.
Those skilled in the art will appreciate that the at least two 3D video processing units described above may have other ways of distributing and processing multiple rows and columns of composite pixels or composite sub-pixels of the autostereoscopic display screen 110 in parallel, and are within the scope of the embodiments of the present disclosure.
In some embodiments, the at least one 3D video processing unit 130 may also optionally include a buffer 131 to buffer received video frames.
In some embodiments, the at least one 3D video processing unit is an FPGA or ASIC chip, or an FPGA or ASIC chipset.
With continued reference to fig. 1A, the three-dimensional image display device 100 may further include a processor 101 communicatively coupled to the at least one 3D video processing unit 130 via a video signal interface 140. In some embodiments shown herein, the processor 101 is included in or as a processor unit of a computer or smart terminal, such as a mobile terminal. It is contemplated that in some embodiments, the processor 101 may be disposed external to the three-dimensional image display device, e.g., the three-dimensional image display device may be a multi-view autostereoscopic display with a 3D video processing unit, e.g., a non-intelligent autostereoscopic television.
For simplicity, the exemplary embodiments of the three-dimensional image display device hereinafter include a processor inside. Further, the video signal interface 140 is configured as an internal interface connecting the processor 101 and the 3D video processing unit 130; this structure can be understood more clearly with reference to the three-dimensional image display device 200 implemented as a mobile terminal, shown in figs. 2 and 3. In some embodiments of the present disclosure, the video signal interface 140, as an internal interface of the three-dimensional image display device 200, may be a MIPI, mini-MIPI, LVDS, mini-LVDS, or DisplayPort interface. In some embodiments, as shown in fig. 1A, the processor 101 of the three-dimensional image display device 100 may further include a register 122, which may be used to register instructions, data, and addresses.
In some embodiments, the three-dimensional image display apparatus 100 may further include a human eye tracking device or a human eye tracking data interface for acquiring real-time human eye tracking data, so that the 3D video processing unit 130 may render corresponding sub-pixels in a composite pixel (composite sub-pixel) based on the human eye tracking data. For example, in the embodiment shown in fig. 1B, the three-dimensional image display apparatus 100 further includes an eye tracking device 150 communicatively connected to the 3D video processing unit 130, so that the 3D video processing unit 130 can directly receive the eye tracking data. In the embodiment shown in fig. 1C, the eye tracking device (not shown) may be directly connected to the processor 101, for example, while the 3D video processing unit 130 obtains eye tracking data from the processor 101 via the eye tracking data interface 151. In other embodiments, the eye tracking device may be coupled to both the processor and the 3D video processing unit, which may enable the 3D video processing unit 130 to obtain eye tracking data directly from the eye tracking device, on the one hand, and may enable other information obtained by the eye tracking device to be processed by the processor, on the other hand.
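As a hedged sketch of how such eye-tracking data could be used, the 3D video processing unit might render only the sub-pixels of the viewpoints nearest each tracked eye. The one-dimensional viewpoint geometry and the function names below are assumptions for illustration, not details from the patent:

```python
def viewpoints_to_render(left_eye_x, right_eye_x, view_positions):
    """Return the indices of the viewpoints closest to each tracked eye.

    view_positions: spatial x-coordinate of each of the i viewpoints
    (V1..Vi), in the same coordinate system as the eye positions.
    Reducing rendering to these two viewpoints is an illustrative use of
    the eye-tracking data, not a procedure specified in the patent.
    """
    def nearest(x):
        return min(range(len(view_positions)),
                   key=lambda k: abs(view_positions[k] - x))
    return nearest(left_eye_x), nearest(right_eye_x)
```

For example, with six viewpoints at positions 0, 10, 20, 30, 40, 50 and eyes tracked at 12 and 43, only the sub-pixels of V2 and V5 would need updating.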
With reference to fig. 1A-C and fig. 5A-E, 3D video signal transmission and display within a three-dimensional image display device of some embodiments of the present disclosure is described. In the illustrated embodiment, the display screen 110 may define 6 viewpoints V1-V6, at each of which (spatial locations) the viewer's eyes may see the display of a corresponding one of the composite subpixels of each composite pixel in the display panel of the multi-viewpoint autostereoscopic display screen 110. The two different pictures seen by the eyes of the viewer at different viewpoints form parallax, and a stereoscopic picture is synthesized in the brain.
In some embodiments of the present disclosure, the 3D video processing unit 130 receives video frames, e.g., of a decompressed 3D video signal, from the processor 101 through the video signal interface 140, e.g., configured as an internal interface. Each video frame may contain, or consist of, two images with a resolution of m×n, or a composite image with a resolution of 2m×n or m×2n.
In some embodiments, the two images or the composite image may include different types of images and may be in various arrangements.
As shown in fig. 5A, a video frame of the 3D video signal contains, or consists of, two images 501, 502 with m×n resolution in a side-by-side format. In some embodiments, the two images may be a left-eye parallax image and a right-eye parallax image, respectively. In some embodiments, the two images may be a rendered color image and a depth image, respectively.
As shown in fig. 5B, a video frame of the 3D video signal contains, or consists of, two images 503, 504 with m×n resolution in a top-bottom format. In some embodiments, the two images may be a left-eye parallax image and a right-eye parallax image, respectively. In some embodiments, the two images may be a rendered color image and a depth image, respectively.
As shown in fig. 5C, a video frame of the 3D video signal contains a composite image 505 with a resolution of 2m×n in a left-right interleaved format. In some embodiments, the composite image may be a left-right interleaved composite of left-eye and right-eye parallax images, or a left-right interleaved composite of a rendered color image and a depth image.
As shown in fig. 5D, a video frame of the 3D video signal contains a composite image 506 with a resolution of m×2n in a top-bottom interleaved format. In some embodiments, the composite image may be a top-bottom interleaved composite of left-eye and right-eye parallax images. In some embodiments, the composite image may be a top-bottom interleaved composite of a rendered color image and a depth image.
As shown in fig. 5E, a video frame of the 3D video signal contains a composite image 507 with a resolution of 2m×n in a checkerboard format. In some embodiments, the composite image may be a checkerboard composite of left-eye and right-eye parallax images. In some embodiments, the composite image may be a checkerboard composite of a rendered color image and a depth image.
It will be appreciated by those skilled in the art that the embodiments shown in the figures are merely illustrative, and that the two images or the composite image contained in a video frame of the 3D video signal may include other types of images and may take other arrangements, which also fall within the scope of the embodiments of the present disclosure.
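The composite-image arrangements of figs. 5C-5E can be made concrete with a small sketch that splits a composite frame back into its two m×n constituent images. An image is modeled here as a list of n pixel rows; the function names and this data layout are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of undoing the composite formats of figs. 5C-5E.
# Each function takes a composite frame (a list of pixel rows) and returns
# the two constituent m x n images.

def split_left_right_interleaved(frame):
    """Fig. 5C: 2m x n frame with columns A B A B ... -> (img_a, img_b)."""
    img_a = [row[0::2] for row in frame]
    img_b = [row[1::2] for row in frame]
    return img_a, img_b

def split_top_bottom_interleaved(frame):
    """Fig. 5D: m x 2n frame with rows A B A B ... -> (img_a, img_b)."""
    return frame[0::2], frame[1::2]

def split_checkerboard(frame):
    """Fig. 5E: 2m x n checkerboard frame -> (img_a, img_b).

    Image A occupies even columns on even rows and odd columns on odd rows;
    image B occupies the complementary squares.
    """
    img_a = [row[(r % 2)::2] for r, row in enumerate(frame)]
    img_b = [row[((r + 1) % 2)::2] for r, row in enumerate(frame)]
    return img_a, img_b
```

For the side-by-side (fig. 5A) and top-bottom (fig. 5B) formats the split is simply a half-width or half-height slice, so it is omitted here.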
In some embodiments, the m×n resolution may be a resolution at or above Full High Definition (FHD), including but not limited to 1920×1080, 1920×1200, 2048×1280, 2560×1440, 3840×2160, and the like.
In some embodiments, upon receiving a video frame containing two images, the at least one 3D video processing unit 130 renders at least one sub-pixel in each composite sub-pixel based on one of the two images, and renders at least another sub-pixel in each composite sub-pixel based on the other of the two images. Similarly, in some embodiments, upon receiving a video frame containing a composite image, the at least one 3D video processing unit renders at least two sub-pixels in each composite sub-pixel based on the composite image. For example, at least one sub-pixel is rendered from the first image (portion) of the composite image and at least another sub-pixel is rendered from the second image (portion).
In some embodiments, this rendering is, for example, dynamic rendering based on eye tracking data.
By way of explanation and not limitation, in the embodiments of the present disclosure the resolution of each of the two images contained in the video frame data received by the 3D video processing unit 130 through the video signal interface 140 configured as an internal interface (or half of the resolution of the composite image) corresponds to the composite pixels divided by viewpoint (which include the composite sub-pixels divided by viewpoint). On the one hand, since viewpoint information is decoupled from the transmission process, naked-eye 3D display with a small processing load and no loss of resolution can be achieved; on the other hand, since the composite pixels (composite sub-pixels) correspond to the viewpoint arrangement, rendering on the display screen can be performed in a "point-to-point" manner, greatly reducing the amount of computation. In contrast, the transmission and display of images or video on conventional autostereoscopic displays are still based on 2D display panels, which not only raises the problems of reduced resolution and rapidly increasing rendering computation, but may also raise problems of repeated format adjustment and image or video display adaptation.
In some embodiments, the register 122 of the processor 101 may be configured to receive information about the display requirements of the multi-view autostereoscopic display screen 110, typically information related to the m×n resolution of the screen that is independent of its i viewpoints, so that the processor 101 transmits to the multi-view autostereoscopic display screen 110 video frames of a 3D video signal that meet its display requirements. The information may be, for example, a data packet sent to initially establish video transmission.
Thus, when transmitting video frames of a 3D video signal, the processor 101 does not need to consider information related to the i viewpoints (i ≥ 3) of the multi-view autostereoscopic display screen 110. Instead, the processor 101 can transmit video frames of the 3D video signal that meet the requirements of the multi-view autostereoscopic display screen 110 by means of the information related to its m×n resolution received by the register 122.
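The division of labor described above, where the processor knows only m×n and never i, can be sketched briefly. The class and function names here are illustrative assumptions standing in for register 122 and the frame-size calculation; they are not the patent's interfaces.

```python
# Minimal sketch: the processor only needs the m x n display requirement
# held in register 122; the viewpoint count i never affects the size of
# the transmitted video frame.

class DisplayRegister:
    """Stands in for register 122: holds the display requirement m x n."""
    def __init__(self, m: int, n: int):
        self.m, self.n = m, n

def frame_size_side_by_side(reg: DisplayRegister) -> tuple:
    """Size of a fig. 5A-style frame: two m x n images side by side."""
    return (2 * reg.m, reg.n)

def frame_size_top_bottom(reg: DisplayRegister) -> tuple:
    """Size of a fig. 5B-style frame: two m x n images stacked vertically."""
    return (reg.m, 2 * reg.n)

# Whether the screen defines 6 viewpoints or 60, the frame sizes are the same.
reg = DisplayRegister(1920, 1080)
```

Mapping the transmitted pixels onto the i viewpoints remains entirely the job of the 3D video processing unit, which is exactly the decoupling the two paragraphs above describe.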
In some embodiments, the three-dimensional image display device 100 may further include a codec configured to decompress and decode the compressed 3D video signal and transmit the decompressed 3D video signal to the at least one 3D video processing unit 130 via the video signal interface 140.
In some embodiments, the processor 101 of the three-dimensional image display device 100 reads video frames of the 3D video signal from the memory, or receives them from outside the three-dimensional image display device 100, for example through an external interface, and then transmits the read or received video frames to the at least one 3D video processing unit 130 via the video signal interface 140.
In some embodiments, the three-dimensional image display device 100 further comprises a formatter (not shown), for example integrated in the processor 101 and configured as a codec or as part of the GPU, for preprocessing video frames of the 3D video signal so that the two images contained therein have m×n resolution, or the composite image contained therein has 2m×n or m×2n resolution.
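One simple possibility for the formatter's preprocessing step is resampling an incoming image to the target resolution. The patent does not specify a resampling method, so the nearest-neighbour sketch below is purely illustrative, with images again modeled as lists of pixel rows.

```python
# Illustrative sketch of the formatter: rescale an image to the m x n
# resolution the display expects. Nearest-neighbour resampling is shown
# only as one simple possibility; a real formatter could use any filter.

def resize_nearest(img, m, n):
    """Resample `img` (a list of rows) to width m and height n by nearest neighbour."""
    src_h, src_w = len(img), len(img[0])
    return [
        [img[r * src_h // n][c * src_w // m] for c in range(m)]
        for r in range(n)
    ]
```

Applied to each of the two images of a video frame (or to each half of a composite image), this yields frames matching the m×n requirement described above.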
As previously described, some embodiments of the present disclosure provide a three-dimensional image display device that includes a processor. In some embodiments, the three-dimensional image display device may be configured as a smart cellular phone, a tablet computer, a smart television, a wearable device, an in-vehicle device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like.
By way of example, fig. 2 shows a schematic hardware architecture of a three-dimensional image display device 200 implemented as a mobile terminal, such as a smart cellular phone or tablet computer. The three-dimensional image display apparatus 200 may include a processor 201, an external storage interface 202, an (internal) memory 203, a Universal Serial Bus (USB) interface 204, a charge management module 205, a power management module 206, a battery 207, a mobile communication module 208, a wireless communication module 210, antennas 209, 211, an audio module 212, a speaker 213, a receiver 214, a microphone 215, an earphone interface 216, keys 217, a motor 218, an indicator 219, a Subscriber Identity Module (SIM) card interface 220, the multi-view autostereoscopic display screen 110, the 3D video processing unit 130, the video signal interface 140, an image capturing unit 221, the eye tracking device 150, a sensor module 230, and the like. The sensor module 230 may include a proximity light sensor 2301, an ambient light sensor 2302, a pressure sensor 2303, a barometric pressure sensor 2304, a magnetic sensor 2305, a gravity sensor 2306, a gyroscope sensor 2307, an acceleration sensor 2308, a distance sensor 2309, a temperature sensor 2310, a fingerprint sensor 2311, a touch sensor 2312, a bone conduction sensor 2313, and the like.
It is to be understood that the structure illustrated in the embodiments of the present disclosure does not constitute a specific limitation on the three-dimensional image display apparatus 200. In other embodiments of the present disclosure, the three-dimensional image display device 200 may include more or fewer components than illustrated, certain components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 201 may include one or more processing units. For example, the processor 201 may include an application processor (AP), a modem processor, a baseband processor, a graphics processing unit (GPU) 223, an image signal processor (ISP), a controller, a memory, a video codec 224, a digital signal processor (DSP), a neural-network processing unit (NPU), etc., or a combination thereof. The different processing units may be separate devices or may be integrated in one or more processors.
A cache may also be provided in the processor 201 for holding instructions or data that the processor 201 has just used or used cyclically. When the processor 201 needs to reuse such instructions or data, they can be called directly from the cache.
In some embodiments, the processor 201 may include one or more interfaces. The interfaces may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a Universal Asynchronous Receiver Transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a General Purpose Input Output (GPIO) interface, a Subscriber Identity Module (SIM) interface, a Universal Serial Bus (USB) interface, and the like.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, the processor 201 may contain multiple sets of I2C buses. The processor 201 may be communicatively coupled to the touch sensor 2312, the charger, the flash, the camera unit 221, the eye tracking device 150, etc. via different I2C bus interfaces, respectively.
Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is used to connect the processor 201 with the wireless communication module 210.
In the embodiment shown in fig. 2, a MIPI interface may be used to connect the processor 201 with the multi-view autostereoscopic display screen 110. Furthermore, the MIPI interface may also be used to connect peripheral devices such as camera unit 221, eye tracking device 150, etc.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, GPIO interfaces may be used to connect the processor 201 with the camera unit 221, the multi-view autostereoscopic display screen 110, the wireless communication module 210, the audio module 212, the sensor module 230, and the like.
The USB interface 204 is an interface conforming to the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 204 may be used to connect a charger to charge the three-dimensional image display device 200, or to transfer data between the three-dimensional image display device 200 and a peripheral device. It can also be used to connect headphones and play audio through them.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present disclosure is only illustrative and not limiting to the structure of the three-dimensional image display device 200.
The wireless communication function of the three-dimensional image display device 200 may be realized by antennas 209, 211, a mobile communication module 208, a wireless communication module 210, a modem processor, a baseband processor, or the like.
The antennas 209, 211 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the three-dimensional image display device 200 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas.
The mobile communication module 208 may provide a solution including wireless communication of 2G/3G/4G/5G or the like applied to the three-dimensional image display device 200. The mobile communication module 208 may include at least one filter, switch, power amplifier, low Noise Amplifier (LNA), and the like. The mobile communication module 208 may receive electromagnetic waves from the antenna 209, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 208 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves to radiate the electromagnetic waves through the antenna 209. In some embodiments, at least some of the functional modules of the mobile communication module 208 may be provided in the processor 201. In some embodiments, at least some of the functional modules of the mobile communication module 208 may be provided in the same device as at least some of the modules of the processor 201.
The wireless communication module 210 may provide solutions for wireless communication applied to the three-dimensional image display device 200, including Wireless Local Area Network (WLAN), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 210 may be one or more devices integrating at least one communication processing module. The wireless communication module 210 receives electromagnetic waves via the antenna 211, performs frequency modulation and filtering on the electromagnetic wave signal, and transmits the processed signal to the processor 201. The wireless communication module 210 may also receive a signal to be transmitted from the processor 201, frequency-modulate and amplify it, and convert it into electromagnetic waves radiated through the antenna 211.
In some embodiments, the antenna 209 and the mobile communication module 208 of the three-dimensional image display device 200 are coupled, and the antenna 211 and the wireless communication module 210 are coupled, so that the three-dimensional image display device 200 can communicate with a network and other devices through wireless communication technology. The wireless communication technology may include at least one of global system for mobile communications (GSM), general Packet Radio Service (GPRS), code Division Multiple Access (CDMA), wideband Code Division Multiple Access (WCDMA), time division code division multiple access (TD-SCDMA), long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, or IR technology, among others. The GNSS may include at least one of a global satellite positioning system (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a Quasi Zenith Satellite System (QZSS), or a Satellite Based Augmentation System (SBAS).
In some embodiments, the external interface for receiving the 3D video signal may include the USB interface 204, the mobile communication module 208, the wireless communication module 210, or a combination thereof. Other interfaces capable of receiving a 3D video signal, beyond those described above, are also conceivable.
The memory 203 may be used to store computer-executable program code, which includes instructions. The processor 201 implements the various functional applications and data processing of the three-dimensional image display device 200 by executing the instructions stored in the memory 203. The memory 203 may include a program storage area and a data storage area. The program storage area may store the operating system and the application programs required for at least one function (such as a sound playback function or an image playback function). The data storage area may store data (such as audio data or a phonebook) created during use of the three-dimensional image display device 200. In addition, the memory 203 may include high-speed random access memory, and may also include nonvolatile memory, such as at least one magnetic disk storage device, flash memory device, or universal flash storage (UFS).
The external memory interface 202 may be used to connect an external memory card, such as a Micro SD card, to realize expansion of the memory capability of the three-dimensional image display device 200. The external memory card communicates with the processor 201 via an external memory interface 202 to implement data storage functions.
In some embodiments, the memory of the three-dimensional image display device may include (internal) memory 203, an external memory card to which external memory interface 202 is connected, or a combination thereof. In other embodiments of the present disclosure, the video signal interface may also employ different internal interfacing manners or combinations of the foregoing embodiments.
In an embodiment of the present disclosure, the image capturing unit 221 may capture an image or video.
In some embodiments, the three-dimensional image display apparatus 200 implements display functions through the video signal interface 140, the 3D video processing unit 130, the multi-view autostereoscopic display screen 110, and an application processor, etc.
In some embodiments, the three-dimensional image display device 200 may include a GPU, for example, within the processor 201 for processing 3D video images, as well as processing 2D video images.
In some embodiments, the three-dimensional image display device 200 further includes a video codec 224 for compressing or decompressing digital video.
In some embodiments, the video signal interface 140 is used to output video frames of a 3D video signal, e.g., a decompressed 3D video signal, processed by the GPU or the codec 224, or both, to the 3D video processing unit 130.
In some embodiments, the GPU or codec 224 is integrated with a formatter.
The multi-view autostereoscopic display screen 110 is used to display three-dimensional (3D) images, videos, and the like. The multi-view autostereoscopic display screen 110 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum-dot light-emitting diode (QLED), or the like.
In some embodiments, the eye tracking device 150 is communicatively coupled to the 3D video processing unit 130, so that the 3D video processing unit 130 can render the corresponding sub-pixels in the composite sub-pixels of each composite pixel based on eye tracking data. In some embodiments, the eye tracking device 150 may also be connected to the processor 201, for example in a bypass connection to the processor 201.
The three-dimensional image display device 200 may implement audio functions, such as music playback and recording, through the audio module 212, the speaker 213, the receiver 214, the microphone 215, the earphone interface 216, the application processor, and the like. The audio module 212 is used to convert digital audio information into an analog audio signal for output, and to convert an analog audio input into a digital audio signal. The audio module 212 may also be used to encode and decode audio signals. In some embodiments, the audio module 212, or some of its functional modules, may be disposed in the processor 201. The speaker 213 converts an audio electrical signal into a sound signal. The three-dimensional image display device 200 can play music or hands-free calls through the speaker 213. The receiver 214, also called an "earpiece", converts an audio electrical signal into a sound signal. When the three-dimensional image display device 200 receives a call or a voice message, the voice can be heard by bringing the receiver 214 close to the ear. The microphone 215 converts sound signals into electrical signals. The earphone interface 216 is used to connect wired earphones; it may be the USB interface 204, a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The keys 217 include a power on key, a volume key, etc. The key 217 may be a mechanical key. Or may be a touch key. The three-dimensional image display apparatus 200 may receive key inputs, generating key signal inputs related to user settings and function controls of the three-dimensional image display apparatus 200.
The motor 218 may generate a vibration alert. The motor 218 may be used for incoming call vibration alerting as well as for touch vibration feedback.
The SIM card interface 220 is used to connect a SIM card. In some embodiments, the three-dimensional image display device 200 employs an eSIM, i.e., an embedded SIM card.
The pressure sensor 2303 is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 2303 may be disposed on the multi-view autostereoscopic display screen 110, which is within the scope of embodiments of the present disclosure.
The air pressure sensor 2304 is used to measure air pressure. In some embodiments, the three-dimensional image display device 200 calculates altitude from the barometric pressure value measured by the barometric pressure sensor 2304, aiding in positioning and navigation.
The magnetic sensor 2305 includes a hall sensor.
The gravity sensor 2306 is a sensor that converts motion or gravity into an electrical signal, and is mainly used for measuring parameters such as an inclination angle, an inertial force, an impact, and vibration.
The gyro sensor 2307 may be used to determine the motion pose of the three-dimensional image display device 200.
The acceleration sensor 2308 may detect the magnitude of acceleration of the three-dimensional image display device 200 in various directions (typically three axes).
A distance sensor 2309 may be used to measure distance.
A temperature sensor 2310 may be used to detect temperature.
The fingerprint sensor 2311 is used for capturing a fingerprint. The three-dimensional image display device 200 can realize fingerprint unlocking, access application locking, fingerprint photographing, fingerprint incoming call answering and the like by utilizing the collected fingerprint characteristics.
The touch sensor 2312 may be disposed in the multi-view autostereoscopic display screen 110; together, the touch sensor 2312 and the multi-view autostereoscopic display screen 110 form a touch-sensitive display, also referred to as a "touch screen".
The bone conduction sensor 2313 may acquire a vibration signal.
The charge management module 205 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 205 may receive a charging input of a wired charger through the USB interface 204. In some wireless charging embodiments, the charging management module 205 may receive wireless charging input through a wireless charging coil of the three-dimensional image display device 200.
The power management module 206 is used for connecting the battery 207, and the charge management module 205 and the processor 201. The power management module 206 receives input from at least one of the battery 207 or the charge management module 205, and provides power to the processor 201, the memory 203, the external memory, the multi-view autostereoscopic display 110, the camera unit 221, the wireless communication module 210, and the like. In other embodiments, the power management module 206 and the charge management module 205 may be disposed in the same device.
The software system of the three-dimensional image display device 200 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of the present disclosure illustrate the software structure of the three-dimensional image display device 200 by taking an Android system with a layered architecture as an example. It is contemplated that embodiments of the present disclosure may also be implemented in other software or operating systems.
Fig. 3 is a software configuration diagram of the three-dimensional image display device 200 according to an embodiment of the present disclosure. The layered architecture divides the software into several layers, which communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, an application layer 310, a framework layer 320, a core class library and runtime 330, and a kernel layer 340.
The application layer 310 may include a series of application packages. As shown in fig. 3, the application package may include bluetooth, WLAN, navigation, music, camera, calendar, talk, video, gallery, map, short message, etc. applications. The 3D video display method according to the embodiments of the present disclosure may be implemented in a video application, for example.
Framework layer 320 provides an Application Programming Interface (API) and programming framework for application programs of the application layer. The framework layer includes some predefined functions. For example, in some embodiments of the present disclosure, functions or algorithms that identify the acquired 3D video images, algorithms that process the images, and the like may be included at the framework layer.
As shown in FIG. 3, the framework layer 320 may include a resource manager, a phone manager, a content manager, a notification manager, a window manager, a view system, an installation package manager, and so forth.
The Android Runtime includes a core library and virtual machines. The Android Runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions to be called by the Java language, and the other part is the Android core library.
The application layer and the framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the framework layer as binary files. The virtual machine performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The core class library may include a plurality of functional modules. For example: three-dimensional graphics processing libraries (e.g., openGL ES), surface managers, image processing libraries, media libraries, graphics engines (e.g., SGL), and the like.
The kernel layer 340 is a layer between hardware and software. The kernel layer contains at least a camera driver, an audio-video interface, a call interface, a Wi-Fi interface, sensor drivers, a power management interface, and a GPS interface.
Here, an embodiment of 3D video transmission and display in a three-dimensional image display apparatus is described by taking as an example a three-dimensional image display apparatus implemented as a mobile terminal with the structure shown in figs. 2 and 3; it is contemplated that other embodiments may include more or fewer features, or variations of these features.
In some embodiments, the three-dimensional image display device 200, e.g., a mobile terminal such as a smart cellular phone or tablet computer, receives a compressed 3D video signal from a network, such as a cellular network, a WLAN, or Bluetooth, for example by means of the mobile communication module 208 and antenna 209, or the wireless communication module 210 and antenna 211, acting as an external interface. The compressed 3D video signal is image-processed, e.g., by the GPU 223, and decoded and decompressed by the codec 224. The decompressed 3D video signal is then sent to the at least one 3D video processing unit 130, e.g., via the video signal interface 140 acting as an internal interface, such as a MIPI or mini-MIPI interface; the video frames of the decompressed 3D video signal comprise the two images or the composite image of the embodiments of the present disclosure. The 3D video processing unit 130 then renders the sub-pixels in the composite sub-pixels of the display screen accordingly, thereby implementing 3D video playback.
In other embodiments, the three-dimensional image display device 200 reads a compressed 3D video signal stored in the (internal) memory 203, or, through the external memory interface 202, in an external memory card, and implements 3D video playback through corresponding processing, transmission, and rendering.
In some embodiments, the playing of the 3D video is implemented in a video application in the android application layer 310.
Embodiments of the present disclosure may also provide a three-dimensional (3D) image display method for the multi-view autostereoscopic display screen 110 according to embodiments of the present disclosure.
Referring to fig. 7, in some embodiments, a three-dimensional (3D) image display method includes:
S701: transmitting video frames of a 3D video signal, the video frames comprising two images with a resolution of m×n;
S702: rendering at least one sub-pixel in the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 based on one of the two images;
S703: rendering at least another sub-pixel in the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 based on the other of the two images.
Referring to fig. 8, in some embodiments, a three-dimensional (3D) image display method includes:
S801: transmitting video frames of a 3D video signal, the video frames comprising a composite image with a resolution of 2m×n or m×2n;
S802: rendering at least two sub-pixels in the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 based on the composite image.
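Steps S801-S802 can be sketched end to end, assuming a left-right interleaved composite frame (fig. 5C) and rendering only the two viewpoints of one tracked viewer. The function name, data layout, and viewpoint handling are assumptions made for illustration, not the claimed method itself.

```python
# Hedged sketch of the fig. 8 method: take a 2m x n left-right interleaved
# composite frame (a list of pixel rows), split it into its two constituent
# images, and render two sub-pixels of every composite pixel from them.

def display_composite_frame(frame, left_vp, right_vp, num_viewpoints=6):
    """Return, per pixel, a list of num_viewpoints sub-pixel values."""
    left = [row[0::2] for row in frame]    # columns L R L R ... -> left image
    right = [row[1::2] for row in frame]   #                      -> right image
    out = []
    for lrow, rrow in zip(left, right):
        out.append([
            # Sub-pixel for the left-eye viewpoint comes from the left image,
            # for the right-eye viewpoint from the right image; others stay dark.
            [lv if vp == left_vp else (rv if vp == right_vp else 0)
             for vp in range(1, num_viewpoints + 1)]
            for lv, rv in zip(lrow, rrow)
        ])
    return out
```

The same structure applies to the m×2n top-bottom composite of fig. 5D, with the split done on rows instead of columns.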
Referring to fig. 9, in some embodiments, a three-dimensional (3D) image display method includes:
S901: transmitting information related to the m×n display resolution of the multi-view autostereoscopic display screen 110, independent of its i viewpoints;
S902: transmitting video frames of a 3D video signal, the video frames comprising two images with a resolution of m×n;
S903: rendering at least one sub-pixel in the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 based on one of the two images;
S904: rendering at least another sub-pixel in the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 based on the other of the two images.
In some embodiments, the video frame of the 3D video signal includes a composite image having a resolution of 2m×n or m×2n, so that after the video frame is transmitted, at least two of the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 are rendered based on the composite image.
Referring to fig. 10, in some embodiments, a three-dimensional (3D) image display method includes:
S1001: transmitting information related to the m×n display resolution of the multi-view autostereoscopic display screen 110, irrespective of its i viewpoints;
S1002: reading a stored compressed 3D video signal, or receiving a compressed 3D video signal from outside the three-dimensional image display device;
S1003: decompressing and decoding the compressed 3D video signal into a decompressed 3D video signal;
S1004: transmitting video frames of the decompressed 3D video signal, the video frames comprising two images having a resolution of m×n;
S1005: rendering at least one of the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 based on one of the two images;
S1006: rendering at least another of the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 based on the other of the two images.
In some embodiments, the video frame of the 3D video signal includes a composite image having a resolution of 2m×n or m×2n, so that after the video frame is transmitted, at least two of the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 are rendered based on the composite image.
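The decompress-then-render flow of fig. 10 can be outlined as below. This is a hedged sketch: the two m×n images of each decompressed frame are assumed here to arrive packed side by side in one buffer, and `render` stands in for the 3D video processing unit as a caller-supplied callable, not a named API from the disclosure.

```python
import numpy as np

def play_decompressed(frames, m, n, render):
    """For each decompressed video frame (steps S1004-S1006): split out
    the two m x n images and hand them to a renderer that lights the
    composite sub-pixels of the display."""
    for frame in frames:
        left, right = frame[:, :n], frame[:, n:]  # recover the two m x n images
        assert left.shape == (m, n) and right.shape == (m, n)
        render(left, right)

# usage: one frame whose left half is zeros and right half is ones
seen = []
frame = np.hstack([np.zeros((2, 2)), np.ones((2, 2))])
play_decompressed([frame], 2, 2, lambda l, r: seen.append((l, r)))
```

The decompression and decoding of step S1003 would sit upstream of this loop, producing the `frames` sequence.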
Referring to fig. 11, in some embodiments, a three-dimensional (3D) image display method includes:
S1101: transmitting information related to the m×n display resolution of the multi-view autostereoscopic display screen 110, irrespective of its i viewpoints;
S1102: preprocessing video frames of a 3D video signal such that the video frames contain two images having a resolution of m×n;
S1103: transmitting video frames of the 3D video signal;
S1104: rendering at least one of the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 based on one of the two images;
S1105: rendering at least another of the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 based on the other of the two images.
In some embodiments, the video frame of the 3D video signal includes a composite image having a resolution of 2m×n or m×2n, so that after the video frame is transmitted, at least two of the composite sub-pixels of the composite pixels of the multi-view autostereoscopic display screen 110 are rendered based on the composite image.
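The preprocessing of step S1102 (the role played by the formatter in the device embodiments) can be sketched as a simple resample to the display resolution. The nearest-neighbour method below is an assumption for the example; the actual formatter may use any resampling or rearrangement.

```python
import numpy as np

def preprocess(img, m, n):
    """Nearest-neighbour resampling of an arbitrary-resolution image to
    the m x n display resolution, so that the video frame contains two
    m x n images (step S1102)."""
    h, w = img.shape
    rows = np.arange(m) * h // m   # source row for each target row
    cols = np.arange(n) * w // n   # source column for each target column
    return img[rows[:, None], cols]

# usage: downscale an 8 x 6 image to the 4 x 3 display resolution
small = preprocess(np.ones((8, 6)), 4, 3)
```

Each of the two constituent images would be passed through this step before the frame is transmitted over the video signal interface.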
In some embodiments, the three-dimensional image display device 200 may include an eye tracking apparatus or an eye tracking data interface, so as to obtain or read real-time eye tracking data of a viewer and thereby enable dynamic rendering of the multi-view autostereoscopic display screen 110.
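One way such eye-tracking-driven dynamic rendering could work is sketched below: only the sub-pixels for the viewpoints nearest each eye are lit, and the rest stay dark. Mapping eye positions to viewpoint indices is assumed to happen upstream and is not shown; this is an illustration, not the disclosed implementation.

```python
import numpy as np

def dynamic_render(left, right, i, left_vp, right_vp):
    """Light only the sub-pixels for the viewpoint seen by each tracked
    eye, leaving the other i-2 viewpoints of every composite sub-pixel
    dark."""
    m, n = left.shape
    sub = np.zeros((m, n, i), dtype=left.dtype)
    sub[:, :, left_vp] = left      # viewpoint seen by the left eye
    sub[:, :, right_vp] = right    # viewpoint seen by the right eye
    return sub

# usage: eyes tracked at viewpoints 2 and 3 of a 6-viewpoint screen
out6 = dynamic_render(np.full((2, 2), 5.0), np.full((2, 2), 7.0), 6, 2, 3)
```

Rendering only two of the i viewpoints per frame is one plausible motivation for dynamic rendering: it reduces the calculation load as the viewer moves.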
The presently disclosed embodiments provide a three-dimensional image display apparatus 1200, referring to fig. 12, including:
a processor 1210 and a memory 1211, and optionally a communication interface 1212 and a bus 1213. The processor 1210, the communication interface 1212, and the memory 1211 communicate with each other via the bus 1213. The communication interface 1212 may be used for information transfer. The processor 1210 may call logic instructions in the memory 1211 to perform the three-dimensional image display method of the above-described embodiments.
Further, the above logic instructions in the memory 1211 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, may be stored in a computer-readable storage medium.
The memory 1211 is a computer-readable storage medium that can be used to store software programs, computer-executable programs, and the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 1210 performs functional applications and data processing by executing the program instructions/modules stored in the memory 1211, that is, it implements the three-dimensional image display method of the above-described method embodiments.
The memory 1211 may include a program storage area and a data storage area; the program storage area may store an operating system and application programs required for at least one function, and the data storage area may store data created according to the use of the terminal device, among others. Further, the memory 1211 may include a high-speed random access memory, and may also include a nonvolatile memory.
Embodiments of the present disclosure provide an article of manufacture, such as a smart television, smart cellular phone, tablet, personal computer, or wearable device, configured as or incorporating the three-dimensional image display device described above.
The present disclosure provides a computer-readable storage medium storing computer-executable instructions configured to perform the above three-dimensional image display method.
The disclosed embodiments provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described three-dimensional image display method.
The computer readable storage medium may be a transitory computer readable storage medium or a non-transitory computer readable storage medium.
The aspects of the disclosed embodiments may be embodied in a software product stored on a storage medium, the software product including one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the disclosed embodiments. The aforementioned storage medium may be a non-transitory storage medium capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk, or it may be a transitory storage medium.
The above description and the drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may involve structural, logical, electrical, process, and other changes; the embodiments represent only possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of others. The scope of the embodiments of the present disclosure encompasses the full scope of the claims, as well as all available equivalents of the claims. Although the terms "first," "second," etc. may be used in the present application to describe various elements, these elements should not be limited by these terms, which are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without changing the meaning of the description, so long as all occurrences of the "first element" are renamed consistently and all occurrences of the "second element" are renamed consistently. The first element and the second element are both elements, but may not be the same element. Moreover, the terminology used in the present application is for the purpose of describing embodiments only and is not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Furthermore, when used in this disclosure, the terms "comprises," "comprising," and the like are intended to specify the presence of at least one of the stated features, integers, steps, operations, elements, or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, or apparatus that includes that element. Herein, each embodiment may be described with emphasis on its differences from the other embodiments, and for identical or similar parts the various embodiments may be referred to one another. For the methods, products, etc. disclosed in the embodiments, where they correspond to the method sections disclosed herein, the description of the method sections may be consulted for the relevant details.
Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may use different methods for each particular application to achieve the described functionality, but such implementation is not to be considered as beyond the scope of the embodiments of the present disclosure. It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working procedures of the above-described systems, apparatuses and units may refer to the corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the embodiments disclosed herein, the disclosed methods and articles of manufacture (including but not limited to devices and apparatuses) may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of units may be merely a logical functional division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between components shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms. The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to implement a given embodiment. In addition, the functional units in the embodiments of the present disclosure may all be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than that disclosed in the description, and sometimes no specific order exists between different operations or steps. For example, two consecutive operations or steps may actually be performed substantially in parallel, they may sometimes be performed in reverse order, which may be dependent on the functions involved. Each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Claims (20)
1. A three-dimensional image display device, characterized by comprising:
a multi-view autostereoscopic display screen comprising m×n composite pixels and thus defining a display resolution of m×n;
a video signal interface for receiving video frames of a 3D video signal, wherein the video frames of the 3D video signal comprise two images with a resolution of m×n or comprise a composite image with a resolution of 2m×n or m×2n;
at least one 3D video processing unit;
wherein each composite pixel comprises a plurality of composite sub-pixels, each composite sub-pixel being composed of i same-color sub-pixels corresponding to i viewpoints, wherein i ≥ 3;
wherein the at least one 3D video processing unit is configured to render at least one of the composite sub-pixels based on one of the two images and at least another one of the composite sub-pixels based on the other of the two images; or the at least one 3D video processing unit is configured to render at least two of the respective composite sub-pixels based on the composite image.
2. The three-dimensional image display device of claim 1, further comprising a processor, a memory, and an external interface, the video signal interface being an internal interface and configured to communicatively couple the processor and the at least one 3D video processing unit.
3. The three-dimensional image display device according to claim 2, wherein the processor comprises a register configured to receive information related to the m×n display resolution irrespective of the i viewpoints.
4. A three-dimensional image display device according to claim 2 or 3, wherein the memory is configured to store a compressed 3D video signal or the external interface is configured to receive a compressed 3D video signal; wherein the three-dimensional image display device further comprises a codec configured to decompress and decode the compressed 3D video signal and to transmit the decompressed 3D video signal to the at least one 3D video processing unit via the video signal interface.
5. The three-dimensional image display device according to claim 2, wherein the internal interface is selected from at least one of a MIPI interface, a mini-MIPI interface, an LVDS interface, a mini-LVDS interface, or a Display Port interface.
6. The three-dimensional image display device according to claim 2, wherein at least two 3D video processing units are provided, each 3D video processing unit being configured to be allocated a plurality of rows or columns of composite pixels or composite sub-pixels, respectively.
7. The three-dimensional image display device of claim 1, wherein the at least one 3D video processing unit is an FPGA or ASIC chip, or an FPGA or ASIC chipset.
8. The three-dimensional image display device according to claim 1, further comprising a formatter for preprocessing video frames of the 3D video signal such that the two images have an m×n resolution or the composite image has a 2m×n or m×2n resolution.
9. The three-dimensional image display device according to claim 1, wherein the two images are a left-eye parallax image and a right-eye parallax image in a side-by-side format, a rendered color image and a depth image in a side-by-side format, a left-eye parallax image and a right-eye parallax image in a top-and-bottom format, or a rendered color image and a depth image in a top-and-bottom format.
10. The three-dimensional image display device according to claim 1, wherein the composite image is a left-right interleaved left-eye and right-eye parallax composite image, a top-bottom interleaved rendered color and depth composite image, a left-eye and right-eye parallax composite image in a checkerboard format, or a rendered color and depth composite image in a checkerboard format.
11. The three-dimensional image display device of claim 1, wherein each composite subpixel comprises a single row or column of a plurality of subpixels.
12. The three-dimensional image display device of claim 1, wherein each composite subpixel comprises a plurality of subpixels in an array.
13. The three-dimensional image display device of claim 1, wherein the plurality of composite subpixels comprise a red composite subpixel, a green composite subpixel, and a blue composite subpixel.
14. The three-dimensional image display device according to claim 1, further comprising eye tracking means or an eye tracking data interface for acquiring real-time eye tracking data.
15. A three-dimensional image display method, wherein a multi-view autostereoscopic display screen comprises m×n composite pixels and thus defines a display resolution of m×n, each composite pixel comprising a plurality of composite sub-pixels, each composite sub-pixel being composed of i same-color sub-pixels corresponding to i viewpoints, wherein i ≥ 3,
the method comprising:
transmitting a video frame of a 3D video signal, the video frame comprising two images having a resolution of m×n or comprising a composite image having a resolution of 2m×n or m×2n; and
rendering at least one of the composite sub-pixels based on one of the two images and rendering at least another of the composite sub-pixels based on the other of the two images; or
rendering at least two of the composite sub-pixels based on the composite image.
16. The method as recited in claim 15, further comprising:
transmitting, before transmitting a video frame of the 3D video signal, information related to the m×n display resolution irrespective of the i viewpoints.
17. The method according to claim 15 or 16, wherein the 3D video signal is a decompressed 3D video signal;
the method further comprises the steps of:
reading a stored compressed 3D video signal or receiving a compressed 3D video signal from outside the three-dimensional image display device before transmitting the video frame of the 3D video signal; and
decompressing and decoding the compressed 3D video signal into the decompressed 3D video signal.
18. The method according to claim 15 or 16, further comprising: preprocessing the video frames of the 3D video signal such that the two images have an m×n resolution or the composite image has a 2m×n or m×2n resolution.
19. A three-dimensional image display device comprising a processor and a memory storing program instructions, wherein the three-dimensional image display device further comprises a multi-view autostereoscopic display screen comprising m×n composite pixels and thus defining a display resolution of m×n, each composite pixel comprising a plurality of composite sub-pixels, each composite sub-pixel being composed of i same-color sub-pixels corresponding to i viewpoints, wherein i ≥ 3, the processor being configured to perform the method of any one of claims 15 to 18 when executing the program instructions.
20. The three-dimensional image display device of claim 19, wherein the three-dimensional image display device is a smart television, a smart cellular phone, a tablet computer, a personal computer, or a wearable device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910937968.XA CN112584125B (en) | 2019-09-30 | 2019-09-30 | Three-dimensional image display device and display method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112584125A CN112584125A (en) | 2021-03-30 |
CN112584125B true CN112584125B (en) | 2024-07-30 |
Family
ID=75116127
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910937968.XA Active CN112584125B (en) | 2019-09-30 | 2019-09-30 | Three-dimensional image display device and display method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112584125B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113012621A (en) * | 2021-05-25 | 2021-06-22 | 北京芯海视界三维科技有限公司 | Time schedule controller and display device |
CN113010020A (en) * | 2021-05-25 | 2021-06-22 | 北京芯海视界三维科技有限公司 | Time schedule controller and display device |
CN113012636A (en) * | 2021-05-25 | 2021-06-22 | 北京芯海视界三维科技有限公司 | Time schedule controller and display device |
CN113556527B (en) * | 2021-07-07 | 2023-08-29 | 上海谙赋信息科技有限公司 | Intelligent 3D display system of advertisement propaganda design effect diagram |
CN115047645B (en) * | 2022-05-20 | 2024-04-19 | 北京芯海视界三维科技有限公司 | Display screen and display device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN203025421U (en) * | 2012-12-26 | 2013-06-26 | 黑龙江省四维影像数码科技有限公司 | Free stereoscopic display screen with vertical lenticular grating |
CN105282539A (en) * | 2014-07-18 | 2016-01-27 | 三星电子株式会社 | Curved multi-view image display apparatus and control method thereof |
CN110072099A (en) * | 2019-03-21 | 2019-07-30 | 朱晨乐 | A kind of naked eye 3D video pixel arrangement architecture and aligning method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101629479B1 (en) * | 2009-11-04 | 2016-06-10 | 삼성전자주식회사 | High density multi-view display system and method based on the active sub-pixel rendering |
CN105681777B (en) * | 2016-01-20 | 2017-09-05 | 深圳创维-Rgb电子有限公司 | A kind of bore hole 3D display methods and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||