EP3330772B1 - Display apparatus and method of displaying using projectors - Google Patents
- Publication number: EP3330772B1 (application EP17203680.8A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- context
- projection
- focus
- projector
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- H04N13/363 — Image reproducers using image projection screens
- H04N13/383 — Image reproducers using viewer tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
- H04N9/3147 — Multi-projection systems
- H04N9/317 — Convergence or focusing systems
- H04N9/3188 — Scale or resolution adjustment
- H04N9/3191 — Testing of projection devices
- H04N9/3194 — Testing of projection devices including sensor feedback
- G02B27/0093 — Optical systems with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
- G02B27/0172 — Head-mounted head-up displays characterised by optical features
- G02B2027/0147 — Head-up displays comprising a device modifying the resolution of the displayed image
- G02B2027/0187 — Display position adjusting means slaved to motion of at least a part of the body of the user, e.g. head, eye
- G03B21/56 — Projection screens
- G06F3/013 — Eye tracking input arrangements
- G06F3/04815 — Interaction with a metaphor-based environment or interaction object displayed as three-dimensional
- G06F2203/04805 — Virtual magnifying lens
- G06T15/06 — Ray-tracing
Description
- the present disclosure relates generally to representation of visual information; and more specifically, to display apparatuses comprising context image projectors or context displays, and focus image projectors. Furthermore, the present disclosure also relates to methods of displaying, via the aforementioned display apparatuses.
- technologies such as virtual reality, augmented reality and so forth present a simulated environment (often known as a 'virtual world') to a user of a device.
- the simulated environment is presented by rendering images constituting the simulated environment on displays in the device.
- Examples of such devices include head mounted virtual reality devices, virtual reality glasses, augmented reality headset, and so forth.
- Such devices are adapted to present to the user, a feeling of immersion in the simulated environment using contemporary techniques such as stereoscopy.
- a field of view of such devices is typically about 100°, which is much smaller than the field of view of humans, which is typically about 180°.
- Such existing devices have certain limitations.
- conventional displays used in such devices are of small size. Specifically, a pixel density offered by such displays is about 15 pixels per degree, whereas the fovea of the human eye resolves about 60 pixels per degree. Consequently, due to this low pixel density, such displays are unable to imitate the visual acuity of human eyes.
- displays offering high pixel density are dimensionally too large to be accommodated in such devices.
- conventional displays such as focus plus context screens used in such devices include a high resolution display embedded into a low resolution display. However, the position of the high resolution display within such focus plus context screens is often fixed at a particular position. Further, images rendered on such focus plus context screens appear discontinuous at the edges of the high and low resolution displays. Consequently, such existing devices are not sufficiently well developed and are limited in their ability to mimic the human visual system.
- EP 618471 A2 discloses a display apparatus, which detects the visual axis of an observer and accordingly adjusts the position of a high resolution image within a low resolution image displayed to the observer.
- the detection of the visual axis is done by sending invisible infrared light to the eye of the observer, which is reflected and measured with a sensor.
- a half mirror which combines the low and high resolution image is inclined or tilted in two directions to follow the visual axis of the eye, to display the high resolution image always in coincidence with the visual axis direction.
- US 4479784 A discloses an image generation system for a flight simulation system for trainees, whereby a foveal view is projected onto a screen by a projection system. Another peripheral view is projected on the same screen with a lower resolution.
- US 2005/185281 A1 discloses a display apparatus, which detects a fixation point of a viewer's eyes on an image of a screen.
- a high resolution image (fovea display) is created by a LCD and projected onto that portion of the screen, where the fixation point lies. This is accomplished by illuminating the LCD from one of different possible positions, i.e. different light sources LEDs.
- the present disclosure seeks to provide a display apparatus.
- the present disclosure also seeks to provide a method of displaying, via a display apparatus comprising at least one context image projector or at least one context display, and at least one focus image projector.
- the present disclosure seeks to provide a solution to the existing problem of pixel density and physical size tradeoffs, and image discontinuities within conventional displays used in devices for implementing simulated environments.
- An aim of the present disclosure is to provide a solution that overcomes at least partially the problems encountered in prior art, and provides a display apparatus that closely mimics the human visual system.
- an embodiment of the present disclosure provides a display apparatus according to claim 1.
- an embodiment of the present disclosure provides a method of displaying according to claim 11.
- Embodiments of the present disclosure substantially eliminate or at least partially address the aforementioned problems in the prior art, and enable implementation of active foveation within a display apparatus used in devices for implementing simulated environments, to mimic the human visual system.
- an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent.
- a non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
- an embodiment of the present disclosure provides a display apparatus according to claim 1.
- an embodiment of the present disclosure provides a method of displaying, via a display apparatus comprising at least one context image projector or at least one context display, at least one focus image projector, at least one projection surface, an image steering unit comprising at least one first actuator for moving the focus image projector with respect to the at least one projection surface, means for detecting a gaze direction, and a processor coupled in communication with the image steering unit and the means for detecting the gaze direction, the method comprising the combination of features according to claim 11.
- the present disclosure provides a display apparatus and a method of displaying via the display apparatus using projectors.
- the display apparatus described herein is not limited in operation by size of displays (or screens) adapted to facilitate rendering of the context image and/or the focus image thereon. Therefore, the display apparatus may be easily implemented in small-sized devices such as virtual reality devices. Further, the display apparatus simulates active foveation of the human visual system by detecting gaze direction of the eyes of the user of the device. Furthermore, the displayed images using the described display apparatus are continuous due to proper optimisation of optical paths of projections of focus and context images. Specifically, optical paths of the projections of focus and context images may be optimised separately using two or more projectors. Therefore, the described display apparatus is operable to closely imitate gaze contingency similar to the human visual system.
- the method of displaying using the described display apparatus is easy to implement, and possesses robust active foveation capability. Further, the display apparatus is inexpensive, and easy to manufacture.
- the display apparatus comprises at least one context image projector or at least one context display for rendering a context image, and at least one focus image projector for rendering a focus image. Further, an angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees and an angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees.
- An arrangement is made to combine the projection of the rendered focus image with the projection of the rendered context image to create a visual scene.
- the visual scene may correspond to a scene within a simulated environment to be presented to a user of a device, such as a head-mounted virtual reality device, virtual reality glasses, augmented reality headset, and so forth. More specifically, the visual scene may be projected onto eyes of the user.
- the device may comprise the display apparatus.
- the angular width of a projection of the rendered context image may be for example from 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160 or 170 degrees up to 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210 or 220 degrees.
- the angular width of a projection of the rendered focus image may be for example from 5, 10, 15, 20, 25, 30, 35, 40, 45 or 50 degrees up to 15, 20, 25, 30, 35, 40, 45, 50, 55 or 60 degrees.
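The angular-width constraints above (context: 40 to 220 degrees, focus: 5 to 60 degrees) can be captured in a small configuration sketch. This is a minimal illustration in Python; the class and field names are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ProjectionConfig:
    """Hypothetical configuration holding the angular widths (in degrees)
    of the context and focus projections, with the ranges stated above."""
    context_width_deg: float
    focus_width_deg: float

    def __post_init__(self):
        # Ranges taken directly from the disclosure.
        if not 40 <= self.context_width_deg <= 220:
            raise ValueError("context width must lie in [40, 220] degrees")
        if not 5 <= self.focus_width_deg <= 60:
            raise ValueError("focus width must lie in [5, 60] degrees")
        # The focus projection is always narrower than the context projection.
        if self.focus_width_deg >= self.context_width_deg:
            raise ValueError("focus projection must be narrower than context")

cfg = ProjectionConfig(context_width_deg=180, focus_width_deg=30)
```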
- the arrangement of the at least one context image projector or the at least one context display and the at least one focus image projector facilitates the proper combination of the projection of the rendered focus image with the projection of the rendered context image. If the aforementioned combination is less than optimal, the visual scene created may appear distorted.
- the context image relates to a wide image to be rendered and projected via the display apparatus, within the aforementioned angular width, to cope with saccades associated with movement of the eyes of the user.
- the focus image relates to an image, to be rendered and projected via the display apparatus, within the aforementioned angular width to cope with microsaccades associated with movement of the eyes of the user.
- the focus image is dimensionally smaller than the context image.
- the context and focus images collectively constitute the visual scene upon combination of projections thereof.
- the term 'context display' used herein relates to a display (or screen) adapted to facilitate rendering of the context image thereon.
- the at least one context display may be adapted to receive a projection of the context image thereon.
- the context display may be selected from the group consisting of: a Liquid Crystal Display (LCD), a Light Emitting Diode (LED)-based display, an Organic LED (OLED)-based display, a micro OLED-based display, and a Liquid Crystal on Silicon (LCoS)-based display.
- the term 'context image projector' used herein relates to an optical device for rendering the context image at a display (or screen) associated therewith.
- the context image projector may be selected from the group consisting of: a Liquid Crystal Display (LCD)-based projector, a Light Emitting Diode (LED)-based projector, an Organic LED (OLED)-based projector, a Liquid Crystal on Silicon (LCoS)-based projector, a Digital Light Processing (DLP)-based projector, and a laser projector.
- the at least one context image projector may be used to project separate context images for the left and right eyes of the user. It may be understood that the separate context images collectively constitute the context image.
- the at least one context image projector may comprise at least two context image projectors, at least one of the at least two context image projectors being arranged to be used for a left eye of a user, and at least one of the at least two context image projectors being arranged to be used for a right eye of the user.
- the at least two context image projectors may be used such that at least one context image projector may be dedicatedly (or wholly) used to render the context image for one eye of the user.
- the at least two context image projectors allow separate optimization of optical paths of the separate context images (for example, a context image for the left eye of the user and a context image for the right eye of the user) constituting the context image.
- the at least one context image projector may be arranged to be used for left and right eyes of the user on a shared-basis.
- one context image projector may be used to render the context image on the display (or screen) associated therewith, on a shared basis.
- the one context image projector may project separate context images (for the left and right eyes of the user) collectively constituting the context image on the display (or screen) associated therewith.
- the context image may be rendered either on the context display or at the display (or screen) associated with the at least one context image projector.
- the term 'focus image projector' used herein relates to an optical device for projecting the focus image at a display (or screen) associated therewith.
- the focus image projector may be selected from the group consisting of: a Liquid Crystal Display (LCD)-based projector, a Light Emitting Diode (LED)-based projector, an Organic LED (OLED)-based projector, a Liquid Crystal on Silicon (LCoS)-based projector, a Digital Light Processing (DLP)-based projector, and a laser projector.
- the display (or screen) associated with the at least one context image projector and the display (or screen) associated with the at least one focus image projector may be same (or shared therebetween). Specifically, in such embodiment, both the at least one context image projector and the at least one focus image projector may render the context image and the focus image respectively, at a common/shared display (or screen).
- the at least one focus image projector may comprise at least two focus image projectors, at least one of the at least two focus image projectors being arranged to be used for a left eye of a user, and at least one of the at least two focus image projectors being arranged to be used for a right eye of the user.
- the at least two focus image projectors may be used such that at least one focus image projector may be dedicatedly (or wholly) used to render the focus image for one eye of the user.
- the at least two focus image projectors allow separate optimization of optical paths of the separate focus images (for example, a focus image for the left eye of the user and a focus image for the right eye of the user) constituting the focus image.
- the at least one focus image projector may be arranged to be used for both eyes of the user.
- the laser projector may be operated such that the separate focus images for both eyes of the user may be projected substantially simultaneously.
- one laser projector may be used as the at least one focus image projector to project separate focus images (for each of the left eye of the user and the right eye of the user) substantially simultaneously.
- the display apparatus further comprises at least one projection surface, an image steering unit, means for detecting a gaze direction, and a processor coupled in communication with the image steering unit and the means for detecting the gaze direction.
- the processor may be hardware, software, firmware or a combination of these, configured to control operation of the display apparatus. Specifically, the processor may control operation of the display apparatus to process and display (or project) the visual scene onto the eyes of the user. In an instance wherein the display apparatus is used within the device associated with the user, the processor may or may not be external to the device.
- the processor may also be coupled in communication with a memory unit.
- the memory unit may be hardware, software, firmware or a combination of these, suitable for storing an image of the visual scene and/or the context and focus images to be processed and displayed by the processor.
- the memory unit may be used within the device or may be remotely located.
- the means for detecting a gaze direction may relate to specialized equipment for measuring a direction of gaze of the eyes of the user and movement of the eyes, such as eye trackers. Specifically, an accurate detection of the gaze direction may allow the display apparatus to closely implement gaze contingency thereon. Further, the means for detecting the gaze direction, may or may not be placed in contact with the eyes. Examples of the means for detecting a gaze direction include contact lenses with motion sensors, cameras monitoring position of pupil of the eye, and so forth.
- the processor is configured to receive an input image, and use the detected gaze direction to determine a region of visual accuracy of the input image.
- the term 'input image' used herein relates to the image of the visual scene to be displayed via the display apparatus.
- the input image may be displayed to the eyes of the user.
- the input image may be received from an image sensor coupled to the device associated with the user.
- for example, the image sensor may be an image sensor of a pass-through digital camera.
- the input image may be received from the memory unit coupled in communication with the processor.
- the memory unit may be configured to store the input image in a suitable format including, but not limited to, Moving Picture Experts Group (MPEG), Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Portable Network Graphics (PNG), Graphics Interchange Format (GIF), and Bitmap file format (BMP).
- the input image may optionally be a computer generated image.
- after receiving the input image, the processor uses the detected gaze direction to determine a region of visual accuracy of the input image.
- the region of visual accuracy relates to a region of the input image whereat the detected gaze direction of the eye may be focused.
- the region of visual accuracy may be a region of interest (or a fixation point) within the input image, and may be projected onto fovea of the eye.
- the region of visual accuracy may be the region of focus within the input image. Therefore, it may be evident that the region of visual accuracy relates to a region resolved to a much greater detail as compared to other regions of the input image, when the input image is viewed by a human visual system.
- the processor is configured to process the input image to generate the context image and the focus image, the context image having a first resolution and the focus image having a second resolution.
- the second resolution is higher than the first resolution.
- the focus image substantially corresponds to the region of visual accuracy of the input image.
- the context image corresponds to a low-resolution representation of the input image. Therefore, the context image includes the region of visual accuracy of the input image along with remaining region of the input image. Specifically, size of the context image is larger than size of the focus image since the focus image corresponds to only a portion of the context image whereat the detected gaze direction of the eye may be focused.
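As a rough sketch of this processing step, the context image can be produced by downsampling the input image, and the focus image by cropping the input image at full resolution around the gaze point. The function below is illustrative only: it uses plain Python lists as a toy image format, all names are assumptions, and it simplifies border handling (the crop is only clamped at the top-left edge).

```python
def generate_context_and_focus(input_image, gaze_center, focus_size, scale):
    """Split an input image (2-D list of pixel values) into a low-resolution
    context image (first resolution) and a high-resolution focus crop
    (second resolution) around the region of visual accuracy."""
    # Context image: naive downsampling by 'scale' of the whole input image.
    context = [row[::scale] for row in input_image[::scale]]
    # Focus image: full-resolution crop centred on the gaze point given by
    # the detected gaze direction.
    cy, cx = gaze_center
    half = focus_size // 2
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    focus = [row[x0:x0 + focus_size]
             for row in input_image[y0:y0 + focus_size]]
    return context, focus
```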
- the first and second resolutions may be understood in terms of angular resolution. Specifically, pixels per degree indicative of the second resolution is higher than pixels per degree indicative of the first resolution.
- the fovea of the eye of the user corresponds to 2 degrees of visual field and receives the projection of the focus image of angular cross-section width equal to 114 pixels, indicative of 57 pixels per degree. Therefore, an angular pixel size corresponding to the focus image would equal 2/114, or approximately 0.017 degrees.
- the retina of the eye corresponds to 180 degrees of visual field and receives the projection of the context image of angular cross-section width equal to 2700 pixels, indicative of 15 pixels per degree. Therefore, an angular pixel size corresponding to the context image would equal 180/2700, or approximately 0.067 degrees.
- the angular pixel size corresponding to the context image is clearly much larger than the angular pixel size corresponding to the focus image.
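The arithmetic in the example above can be checked directly. The figures (2 degrees over 114 pixels for the focus projection, 180 degrees over 2700 pixels for the context projection) come from the passage itself:

```python
# Angular pixel size = field of view (degrees) / pixel count across it.
focus_angular_pixel_size = 2 / 114        # ~0.0175 degrees per pixel
context_angular_pixel_size = 180 / 2700   # ~0.0667 degrees per pixel

# Pixel density is the reciprocal view: pixels per degree.
focus_ppd = 114 / 2       # 57 pixels per degree (second resolution)
context_ppd = 2700 / 180  # 15 pixels per degree (first resolution)
```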
- a perceived angular resolution indicated by a total number of pixels may be greater for the context image as compared to the focus image since the focus image corresponds to only a part of the context image, wherein the part corresponds to the region of visual accuracy of the input image.
- a region of the context image that substantially corresponds to the region of visual accuracy of the input image is masked.
- the masking may be performed by the processor to hide (or obscure) the region of the context image corresponding to the region of visual accuracy of the input image.
- pixels of the context image corresponding to the region of visual accuracy of the input image may be dimmed (or blackened) for masking.
- the focus image substantially corresponds to the region of visual accuracy of the input image, and the second resolution is higher than the first resolution.
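A minimal sketch of the masking step, assuming a toy 2-D list image and an axis-aligned rectangular region (both assumptions; the disclosure does not prescribe a representation):

```python
def mask_context_region(context_image, region, dim_factor=0.0):
    """Dim (or, with dim_factor=0.0, blacken) the pixels of the context
    image that fall inside the region corresponding to the region of visual
    accuracy, so the high-resolution focus projection can be overlaid there
    without distortion. 'region' is (top, left, height, width) in
    context-image coordinates; all names are illustrative."""
    top, left, height, width = region
    masked = [row[:] for row in context_image]  # copy; leave input untouched
    for y in range(top, top + height):
        for x in range(left, left + width):
            masked[y][x] = masked[y][x] * dim_factor
    return masked
```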
- after processing the input image, the processor is configured to render the context image at the at least one context display or at the at least one projection surface via the at least one context image projector. Further, the processor is configured to render the focus image at the at least one projection surface via the at least one focus image projector. It is to be understood that either the at least one context display, or the at least one projection surface and the at least one context image projector, may be used to render the context image, at a given time.
- the term 'projection surface' used herein relates to a display (or screen) adapted to facilitate rendering of the context image and the focus image thereon.
- the at least one projection surface may have transmittance and reflectance specifications suitable for optically rendering the context and focus images thereon.
- the at least one projection surface may be a non-transparent (or opaque) surface.
- the at least one projection surface may be a semi-transparent surface.
- the at least one projection surface may be implemented by way of at least one of: a polarizer, a retarder.
- the at least one projection surface may be arranged to allow the projection of the rendered context image to pass through substantially and to reflect the projection of the rendered focus image substantially.
- the context image may be projected onto the at least one projection surface from a back side thereof and the focus image may be projected onto the at least one projection surface from a front side thereof.
- the at least one projection surface may be arranged to allow the projection of the rendered focus image to pass through substantially and to reflect the projection of the rendered context image substantially.
- the focus image may be projected onto the at least one projection surface from the back side thereof and the context image may be projected onto the at least one projection surface from the front side thereof.
- the at least one projection surface may be arranged to allow the projections of both the rendered context and focus images to pass through substantially.
- both the context and focus images may be projected onto the at least one projection surface from the back side thereof.
- the at least one projection surface may be arranged to reflect the projections of both the rendered context and focus images substantially.
- both the context and focus images may be projected onto the at least one projection surface from the front side thereof.
- the at least one projection surface may comprise at least two projection surfaces, at least one of the at least two projection surfaces being arranged to be used for a left eye of the user, and at least one of the at least two projection surfaces being arranged to be used for a right eye of the user.
- at least one of the at least two projection surfaces may be used for rendering the context and focus images for a left eye of the user.
- at least one of the at least two projection surfaces may be used for rendering the context and focus images for a right eye of the user.
- at least one of the at least two projection surfaces may be semi-transparent to transmit projections of the context image and/or the focus image therethrough.
- the at least one projection surface is implemented as a part of the at least one context display.
- the context image may be rendered by the processor at the at least one context display without use of the at least one context image projector.
- the at least one context display may also be adapted to facilitate rendering of the focus image thereon.
- the processor is configured to control the image steering unit to adjust a location of a projection of the rendered focus image on the at least one projection surface, such that the projection of the rendered focus image substantially overlaps the projection of the masked region of the rendered context image on the at least one projection surface. Furthermore, the processor is configured to perform rendering the context image, rendering the focus image, and controlling the image steering unit substantially simultaneously. Specifically, the combined projections of the rendered context and focus images may constitute a projection of the input image. The context and focus images are rendered substantially simultaneously in order to avoid time lag during combination of projections thereof.
- the angular width of the projection of the rendered context image is larger than the angular width of the projection of the rendered focus image. This may be attributed to the fact that the rendered focus image is typically projected on and around the fovea of the eye, whereas the rendered context image is projected on a retina of the eye, of which the fovea is just a small part. Specifically, a combination of the rendered context and focus images constitutes the input image and may be projected onto the eye to project the input image thereon.
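As a rough illustration of these relative angular widths, the following sketch (a hypothetical helper `focus_region`, assuming a simple linear mapping of degrees to pixels rather than real projector optics) computes the pixel bounding box that a narrow focus projection would cover within the wider context image:

```python
def focus_region(image_w, image_h, context_deg, gaze_x, gaze_y, focus_deg=30):
    """Pixel bounding box of the focus projection, centred on the gaze
    point, inside an input image spanning `context_deg` degrees.
    Assumes a linear degrees-to-pixels mapping (a simplification)."""
    px_per_deg = image_w / context_deg       # horizontal pixels per degree
    half = int(focus_deg * px_per_deg / 2)   # half-width of the focus box
    left = max(0, gaze_x - half)
    top = max(0, gaze_y - half)
    right = min(image_w, gaze_x + half)
    bottom = min(image_h, gaze_y + half)
    return left, top, right, bottom

# A 1920-pixel-wide input spanning a 100-degree context projection,
# with the gaze at the centre and a 30-degree focus projection:
print(focus_region(1920, 1080, 100, 960, 540))  # (672, 252, 1248, 828)
```

The focus box covers only a small fraction of the input image, consistent with the focus projection serving the fovea while the context projection fills the rest of the retina.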
- the term 'image steering unit' used herein relates to equipment (such as optical elements, electromechanical components, and so forth) for controlling the projection of the rendered focus image on the at least one projection surface.
- the image steering unit includes at least one element/component.
- the image steering unit is operable to control the projection of the rendered context image on the at least one projection surface.
- the image steering unit substantially overlaps the projection of the rendered focus image with the projection of the masked region of the rendered context image to avoid distortion of the region of visual accuracy of the input image.
- the region of visual accuracy of the input image is represented within both the rendered context image of low resolution and the rendered focus image of high resolution.
- the overlap (or superimposition) of projections of low and high-resolution images of a same region results in distortion of appearance of the same region.
- the rendered focus image of high resolution may contain more information pertaining to the region of visual accuracy of the input image, as compared to the rendered context image of low resolution. Therefore, the region of the context image that substantially corresponds to the region of visual accuracy of the input image is masked, in order to project the rendered high-resolution focus image without distortion.
- the processor is configured to mask the region of the context image corresponding to the region of visual accuracy of the input image such that transitional area seams (or edges) between the region of visual accuracy of the displayed input image and the remaining region of the displayed input image are minimized.
- the region of visual accuracy of the displayed input image corresponds to the projection of the focus image (and the masked region of the context image) whereas the remaining region of the displayed input image corresponds to the projection of the context image.
- the masking should be performed as a gradual gradation in order to minimize the transitional area seams upon superimposition of the context and focus images so that the displayed input image appears continuous.
- the processor may significantly dim pixels of the context image corresponding to the region of visual accuracy of the input image, and gradually reduce the amount of dimming of the pixels with increase in distance thereof from the region of visual accuracy of the input image. If alignment and appearance of the superimposed (or overlaid) projections of the rendered context and focus images are improper and/or have discontinuities, then the displayed input image would also be improper.
- masking the region of the context image that substantially corresponds to the region of visual accuracy of the input image may be performed using linear transparency mask blend of inverse values between the context image and the focus image at the transition area, stealth (or camouflage) patterns containing shapes naturally difficult for detection by the eyes of the user, and so forth.
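One way to realise such a gradual gradation is a radial weight that is zero over the region of visual accuracy and ramps linearly to one across a transition band. The sketch below is hypothetical (the radii and the helper name are illustrative, not from the patent) and follows the linear transparency mask blend idea:

```python
import math

def mask_weight(px, py, gaze_x, gaze_y, inner_r, outer_r):
    """Dimming factor for a context-image pixel: 0 (fully masked) inside
    the region of visual accuracy, rising linearly to 1 (unmasked) at
    `outer_r`, so the seam between focus and context images is gradual."""
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d <= inner_r:
        return 0.0
    if d >= outer_r:
        return 1.0
    return (d - inner_r) / (outer_r - inner_r)

# Blending one pixel: the displayed value is w*context + (1-w)*focus,
# so pixels near the gaze point show only the focus image.
w = mask_weight(15, 0, 0, 0, 10, 20)
print(w)  # 0.5, halfway across the transition band
```

Because the focus image uses the inverse of this weight, the superimposed projections sum smoothly across the transition area rather than meeting at a hard edge.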
- the image steering unit comprises at least one first actuator for moving the focus image projector with respect to the at least one projection surface, wherein the processor is configured to control the at least one first actuator to adjust the location of the projection of the rendered focus image on the at least one projection surface.
- the at least one first actuator moves the focus image projector when the gaze direction of the eye shifts from one direction to another.
- upon such a shift, the existing arrangement of the focus image projector and the at least one projection surface may no longer project the rendered focus image on and around the fovea of the eye.
- the processor controls the at least one first actuator to move the focus image projector with respect to the at least one projection surface, to adjust the location of the projection of the rendered focus image on the at least one projection surface such that the rendered focus image may be projected on and around the fovea of the eye even on occurrence of shift in the gaze direction. More specifically, the processor controls the at least one first actuator by generating an actuation signal (such as an electric current, hydraulic pressure, and so forth).
- the at least one first actuator may move the focus image projector closer or away from the at least one projection surface. In another example, the at least one first actuator may move the focus image projector laterally with respect to the at least one projection surface. In yet another example, the at least one first actuator may tilt and/or rotate the focus image projector with respect to the at least one projection surface.
- the image steering unit comprises at least one optical element that is positioned on an optical path between the at least one projection surface and the at least one focus image projector and at least one second actuator for moving the at least one optical element with respect to the at least one focus image projector.
- the at least one optical element is selected from the group consisting of a lens, a prism, a mirror, a beam splitter, and an optical waveguide.
- the processor is configured to control the at least one second actuator to adjust the location of the projection of the rendered focus image on the at least one projection surface.
- the at least one optical element may change the optical path of the projection of the rendered focus image on the at least one projection surface in order to facilitate projection of the rendered focus image on and around the fovea of the eye even on occurrence of shift in the gaze direction.
- the processor controls the at least one second actuator by generating an actuation signal (such as an electric current, hydraulic pressure, and so forth).
- a prism may be positioned on an optical path between a projection surface and a focus image projector.
- the optical path of the projection of the rendered focus image may change on passing through the prism to adjust the location of the projection of the rendered focus image on the projection surface.
- the prism may be moved transversally and/or laterally, be rotated, be tilted, and so forth, by a second actuator in order to facilitate projection of the rendered focus image on and around the fovea of the eye even on occurrence of shift in the gaze direction.
- the at least one optical element that is positioned on an optical path between the at least one projection surface and the at least one focus image projector may be an optical waveguide.
- the optical waveguide may be arranged to allow the projection of the focus image to pass therethrough, and to adjust the location of the projection of the rendered focus image on the at least one projection surface. Therefore, the optical waveguide may be semi-transparent.
- the optical waveguide may further comprise optical elements therein such as microprisms, mirrors, diffractive optics, and so forth.
- the image steering unit comprises at least one third actuator for moving the at least one projection surface.
- the processor is configured to control the at least one third actuator to adjust the location of the projection of the rendered focus image on the at least one projection surface.
- the at least one third actuator may move the at least one projection surface in order to facilitate projection of the rendered focus image on and around the fovea of the eye even on occurrence of shift in the gaze direction.
- the processor controls the at least one third actuator by generating an actuation signal (such as an electric current, hydraulic pressure, and so forth).
- the at least one third actuator may move the at least one projection surface closer or away from the at least one focus image projector. In another example, the at least one third actuator may move the at least one projection surface laterally with respect to the at least one focus image projector. In yet another example, the at least one third actuator may tilt and/or rotate the at least one projection surface.
- the display element may comprise at least one focusing lens that is positioned on an optical path between the at least one projection surface and the at least one focus image projector, and at least one fourth actuator for moving the at least one focusing lens with respect to the at least one focus image projector.
- the processor is configured to control the at least one fourth actuator to adjust a focus of the projection of the rendered focus image.
- the at least one focusing lens may utilize specialized properties thereof to adjust the focus of the projection of the rendered focus image by changing the optical path thereof. More specifically, the focus of the projection of the rendered focus image may be adjusted to accommodate for diopter tuning, astigmatism correction, and so forth.
- the processor may control the at least one fourth actuator by generating an actuation signal (such as an electric current, hydraulic pressure, and so forth).
- the display apparatus may comprise the at least one focusing lens that is positioned on an optical path between the at least one first optical element and the at least one focus display, wherein the processor is configured to control at least one active optical characteristic of the at least one focusing lens by applying a control signal to the at least one focusing lens.
- the active optical characteristics of the at least one focusing lens may include, but are not limited to, focal length, and optical power.
- the control signal may be electrical signal, hydraulic pressure, and so forth.
- the at least one focusing lens may be a Liquid Crystal lens (LC lens), and so forth.
- the at least one focusing lens may be positioned on an optical path between the at least one first optical element and the at least one context display.
- the processor may implement image processing functions for the at least one projection surface.
- the image processing functions may be implemented prior to rendering the context image and the focus image at the at least one projection surface. More specifically, implementation of such image processing functions may optimize the quality of the rendered context and focus images. Therefore, the image processing functions may be selected by taking into account the properties of the at least one projection surface and the properties of the input image.
- image processing functions for the at least one projection surface may comprise at least one function for optimizing perceived context image and/or the focus image quality, the at least one function selected from the group comprising low pass filtering, colour processing, and gamma correction.
- the image processing functions for the at least one projection surface may further comprise edge processing to minimize perceived distortion on a boundary of combined projections of the rendered context and focus images.
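By way of illustration, two of the named functions, low pass filtering and gamma correction, could be sketched on a single row of normalised pixel intensities as follows (a minimal sketch under simplifying assumptions; real implementations would operate on full two-dimensional colour images):

```python
def gamma_correct(pixels, gamma=2.2):
    """Apply inverse-gamma encoding to linear intensities in [0, 1]."""
    return [p ** (1.0 / gamma) for p in pixels]

def box_lowpass(pixels):
    """3-tap box filter: a crude low-pass that softens sharp detail,
    e.g. before rendering the lower-resolution context image."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

print(box_lowpass([0, 0, 3, 0, 0]))  # [0.0, 1.0, 1.0, 1.0, 0.0]
```

A similar one-dimensional pass applied along the boundary of the combined projections would correspond to the edge processing mentioned above.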
- the present description also relates to the method as described above.
- the various embodiments and variants disclosed above apply mutatis mutandis to the method. More specifically, the location of the projection of the rendered focus image can be adjusted by controlling at least one first actuator of the image steering unit to move the focus image projector with respect to the at least one projection surface.
- the location of the projection of the rendered focus image is adjusted by controlling at least one second actuator of the image steering unit to move at least one optical element of the image steering unit with respect to the at least one focus image projector, wherein the at least one optical element is positioned on an optical path between the at least one projection surface and the at least one focus image projector, and/or the location of the projection of the rendered focus image is adjusted by controlling at least one third actuator of the image steering unit to move the at least one projection surface.
- the method may also comprise adjusting a focus of the projection of the rendered focus image by controlling at least one fourth actuator of the display apparatus to move at least one focusing lens of the display apparatus with respect to the at least one focus image projector, wherein the at least one focusing lens is positioned on an optical path between the at least one projection surface and the at least one focus image projector.
- the display apparatus 100 includes at least one context image projector or at least one context display 102 for rendering a context image, and at least one focus image projector 104 for rendering a focus image.
- An arrangement is made to combine the projection of the rendered focus image with the projection of the rendered context image to create a visual scene.
- the display apparatus 200 includes at least one projection surface 202, at least one context image projector or at least one context display 204, at least one focus image projector 206, an image steering unit 208, means for detecting a gaze direction 210, and a processor 212.
- the processor 212 is coupled in communication with the image steering unit 208 and the means for detecting the gaze direction 210. Further, the processor 212 is also coupled to the at least one projection surface 202, the at least one context image projector or at least one context display 204, and the at least one focus image projector 206.
- the display apparatus 300 comprises at least one projection surface (depicted as a projection surface 302 ), at least one context image projector (depicted as a context image projector 304 ), at least one focus image projector (depicted as a focus image projector 306), means for detecting a gaze direction (not shown), a processor (not shown), and an image steering unit comprising at least one first actuator (not shown), at least one optical element (depicted as an optical element 308) and at least one second actuator (not shown).
- the optical element 308 is selected from a group consisting of a lens, a prism, a mirror, a beam splitter, and an optical waveguide.
- the processor of the display apparatus 300 is configured to render a context image 310 at the projection surface 302 via the context image projector 304, and to render a focus image 312 at the projection surface 302 via the focus image projector 306. Further, the processor of the display apparatus 300 is configured to control the second actuator (not shown) to adjust a location of a projection of the rendered focus image 312 on the projection surface 302. As shown, both the context image 310 and the focus image 312 are projected from a same side of the projection surface 302.
- the display apparatus 400 comprises at least one projection surface (depicted as a projection surface 402 ), at least one context image projector (depicted as a context image projector 404), at least one focus image projector (depicted as a focus image projector 406), means for detecting a gaze direction (not shown), a processor (not shown), and an image steering unit (not shown).
- the processor of the display apparatus 400 is configured to render a context image 408 at the projection surface 402 via the context image projector 404, and to render a focus image 410 at the projection surface 402 via the focus image projector 406.
- the context image 408 is projected from a front side of the projection surface 402 and the focus image 410 is projected from a back side of the projection surface 402.
- the display apparatus 500 comprises at least one projection surface comprising at least two projection surfaces (depicted as projection surfaces 502A and 502B ), at least one context image projector (depicted as a context image projector 504 ), at least one focus image projector comprising at least two focus image projectors (depicted as two focus image projectors 506A and 506B ), means for detecting a gaze direction (not shown), a processor (not shown), and an image steering unit (not shown).
- the projection surface 502A of the at least two projection surfaces is arranged to be used for a left eye of a user, and the projection surface 502B of the at least two projection surfaces is arranged to be used for a right eye of the user.
- the focus image projector 506A of the at least two focus image projectors is arranged to be used for the left eye of a user, and the focus image projector 506B of the at least two focus image projectors is arranged to be used for the right eye of the user.
- the processor of the display apparatus 500 is configured to render a context image (depicted as two context images 508A and 508B ) at the two projection surfaces 502A and 502B respectively, via the context image projector 504.
- the context image 508A is used for the left eye of the user, and the context image 508B is used for the right eye of the user.
- the processor of the display apparatus 500 is configured to render a focus image (depicted as two focus images 510A and 510B ) at the two projection surfaces 502A and 502B via the two focus image projectors 506A and 506B respectively.
- the focus image 510A is used for the left eye of the user
- the focus image 510B is used for the right eye of the user.
- both the context images 508A and 508B and the focus images 510A and 510B are projected from a same side of the at least one projection surface.
- the display apparatus 600 comprises at least one projection surface implemented as a part of at least one context display (depicted as a context display 602 ), at least one focus image projector comprising at least two focus image projectors (depicted as two focus image projectors 604A and 604B ), means for detecting a gaze direction (not shown), a processor (not shown), and an image steering unit (not shown).
- the processor of the display apparatus 600 is configured to render a context image 606 at the context display 602.
- the processor of the display apparatus 600 is configured to render a focus image (depicted as two focus images 608A and 608B ) at the at least one projection surface implemented as a part of the context display 602 via the two focus image projectors 604A and 604B respectively.
- the focus image 608A is used for the left eye of the user
- the focus image 608B is used for the right eye of the user.
- both the focus images 608A and 608B are projected from a same side of the at least one projection surface implemented as a part of the context display 602.
- the display apparatus 700 comprises at least one projection surface 702 implemented as a part of at least one context display, and at least one focus image projector 704. Further, the display apparatus 700 comprises an image steering unit comprising at least one optical element 706 that is positioned on an optical path between the at least one projection surface 702 and the at least one focus image projector 704. As shown, the at least one optical element 706 is an optical waveguide. Further, a processor of the display apparatus 700 is configured to control at least one second actuator (not shown) to adjust a location of the projection of the rendered focus image on the at least one projection surface 702. The at least one optical element 706 (or the depicted optical waveguide) further comprises optical elements 708 therein such as microprisms, mirrors, diffractive optics, and so forth.
- a context image is rendered via the at least one context image projector or the at least one context display, wherein an angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees.
- a focus image is rendered via the at least one focus image projector, wherein an angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees.
- an arrangement is made for the projection of the rendered focus image to be combined with the projection of the rendered context image to create a visual scene.
- the location of the projection of the rendered focus image may be adjusted by controlling at least one first actuator of the image steering unit to move the focus image projector with respect to the at least one projection surface.
- the location of the projection of the rendered focus image may be adjusted by controlling at least one second actuator of the image steering unit to move at least one optical element of the image steering unit with respect to the at least one focus image projector, wherein the at least one optical element is positioned on an optical path between the at least one projection surface and the at least one focus image projector.
- the location of the projection of the rendered focus image may be adjusted by controlling at least one third actuator of the image steering unit to move the at least one projection surface.
- the method 800 may comprise adjusting a focus of the projection of the rendered focus image by controlling at least one fourth actuator of the display apparatus to move at least one focusing lens of the display apparatus with respect to the at least one focus image projector, wherein the at least one focusing lens is positioned on an optical path between the at least one projection surface and the at least one focus image projector.
Description
- The present disclosure relates generally to representation of visual information; and more specifically, to display apparatuses comprising context image projectors or context displays, and focus image projectors. Furthermore, the present disclosure also relates to methods of displaying, via the aforementioned display apparatuses.
- In recent times, there have been rapid advancements in technologies for simulating virtual environments for applications such as gaming, education, military training, healthcare surgery training, and so forth. Specifically, technologies such as virtual reality, augmented reality and so forth present the simulated environment (often known as 'virtual world') to a user of a device. The simulated environment is presented by rendering images constituting the simulated environment on displays in the device. Examples of such devices include head-mounted virtual reality devices, virtual reality glasses, augmented reality headsets, and so forth. Such devices are adapted to present to the user a feeling of immersion in the simulated environment using contemporary techniques such as stereoscopy. However, a field of view of such devices is typically about 100°, which is much narrower than the human field of view of about 180°.
- Further, such existing devices have certain limitations. In an example, conventional displays used in such devices are of small size. Specifically, a pixel density offered by such displays is about 15 pixels per degree, whereas the fovea of the human eye resolves about 60 pixels per degree. Consequently, due to low pixel density, such displays are unable to imitate the visual acuity of human eyes. Further, displays offering high pixel density are dimensionally too large to be accommodated in such devices. In another example, conventional displays such as focus plus context screens used in such devices include a high resolution display embedded into a low resolution display. However, the position of the high resolution display within such focus plus context screens is often fixed. Further, images rendered on such focus plus context screens appear discontinuous at the edges of the high and low resolution displays. Consequently, such existing devices are not sufficiently well developed and are limited in their ability to mimic the human visual system.
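The pixel-density gap described above can be checked with simple arithmetic; the figures below are the approximate values quoted in the text, and the helper function is purely illustrative:

```python
def pixels_needed(ppd, fov_deg):
    """Horizontal pixel count needed to sustain `ppd` pixels per degree
    across a field of view of `fov_deg` degrees."""
    return ppd * fov_deg

print(pixels_needed(60, 100))  # 6000 px to match foveal acuity over 100 degrees
print(pixels_needed(15, 100))  # 1500 px, roughly what a 15-ppd display delivers
```

The fourfold shortfall per axis is why a single conventional display can match either the field of view or the foveal acuity, but not both, motivating the separate context and focus projections of the present disclosure.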
EP 618471 A2 -
US 4479784 A discloses an image generation system for a flight simulation system for trainees, whereby a foveal view is projected onto a screen by a projection system. Another peripheral view is projected on the same screen with a lower resolution. -
US 2005/185281 A1 discloses a display apparatus, which detects a fixation point of a viewer's eyes on an image of a screen. A high resolution image (fovea display) is created by an LCD and projected onto that portion of the screen where the fixation point lies. This is accomplished by illuminating the LCD from one of several possible positions, i.e. different light sources (LEDs).
- Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks associated with conventional displays used in devices for implementing simulated environments.
- The present disclosure seeks to provide a display apparatus. The present disclosure also seeks to provide a method of displaying, via a display apparatus comprising at least one context image projector or at least one context display, and at least one focus image projector. The present disclosure seeks to provide a solution to the existing problem of pixel density and physical size tradeoffs, and image discontinuities within conventional displays used in devices for implementing simulated environments. An aim of the present disclosure is to provide a solution that overcomes at least partially the problems encountered in prior art, and provides a display apparatus that closely mimics the human visual system.
- In one aspect, an embodiment of the present disclosure provides a display apparatus according to claim 1.
- In another aspect, an embodiment of the present disclosure provides a method of displaying according to claim 11.
- Embodiments of the present disclosure substantially eliminate or at least partially address the aforementioned problems in the prior art, and enable implementation of active foveation within a display apparatus used in devices for implementing simulated environments, to mimic the human visual system.
- Additional aspects, advantages, features and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative embodiments construed in conjunction with the appended claims that follow.
- It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
- The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
- Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
FIG. 1 is a block diagram depicting a sub-part of an exemplary architecture of a display apparatus, in accordance with an embodiment of the present disclosure; -
FIG. 2 is a block diagram of an exemplary architecture of a display apparatus, in accordance with an embodiment of the present disclosure; -
FIGs. 3-7 are exemplary implementations of the display apparatus, in accordance with various embodiments of the present disclosure; and -
FIG. 8 illustrates steps of a method of displaying via a display apparatus, in accordance with an embodiment of the present disclosure.
- In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
- The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.
- In one aspect, an embodiment of the present disclosure provides a display apparatus according to claim 1.
- In another aspect, an embodiment of the present disclosure provides a method of displaying, via a display apparatus comprising at least one context image projector or at least one context display, at least one focus image projector, at least one projection surface, an image steering unit comprising at least one first actuator for moving the focus image projector with respect to the at least one projection surface, means for detecting a gaze direction, and a processor coupled in communication with the image steering unit and the means for detecting the gaze direction, the method comprising the combination of features according to claim 11.
- The present disclosure provides a display apparatus and a method of displaying via the display apparatus using projectors. The display apparatus described herein is not limited in operation by size of displays (or screens) adapted to facilitate rendering of the context image and/or the focus image thereon. Therefore, the display apparatus may be easily implemented in small-sized devices such as virtual reality devices. Further, the display apparatus simulates active foveation of the human visual system by detecting gaze direction of the eyes of the user of the device. Furthermore, the displayed images using the described display apparatus are continuous due to proper optimisation of optical paths of projections of focus and context images. Specifically, optical paths of the projections of focus and context images may be optimised separately using two or more projectors. Therefore, the described display apparatus is operable to closely imitate gaze contingency similar to the human visual system. The method of displaying using the described display apparatus is easy to implement, and possesses robust active foveation capability. Further, the display apparatus is inexpensive, and easy to manufacture.
- The display apparatus comprises at least one context image projector or at least one context display for rendering a context image, and at least one focus image projector for rendering a focus image. Further, an angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees and an angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees. An arrangement is made to combine the projection of the rendered focus image with the projection of the rendered context image to create a visual scene. Specifically, the visual scene may correspond to a scene within a simulated environment to be presented to a user of a device, such as a head-mounted virtual reality device, virtual reality glasses, augmented reality headset, and so forth. More specifically, the visual scene may be projected onto eyes of the user. In such instance, the device may comprise the display apparatus.
- According to an embodiment, the angular width of a projection of the rendered context image may be, for example, from 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160 or 170 degrees up to 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210 or 220 degrees. According to another embodiment, the angular width of a projection of the rendered focus image may be, for example, from 5, 10, 15, 20, 25, 30, 35, 40, 45 or 50 degrees up to 15, 20, 25, 30, 35, 40, 45, 50, 55 or 60 degrees.
- The arrangement of the at least one context image projector or the at least one context display and the at least one focus image projector facilitates the proper combination of the projection of the rendered focus image with the projection of the rendered context image. If the aforementioned combination is less than optimal, the visual scene created may appear distorted.
- In an embodiment, the context image relates to a wide image to be rendered and projected via the display apparatus, within the aforementioned angular width, to cope with saccades associated with movement of the eyes of the user. In another embodiment, the focus image relates to an image, to be rendered and projected via the display apparatus, within the aforementioned angular width to cope with microsaccades associated with movement of the eyes of the user. Specifically, the focus image is dimensionally smaller than the context image. Further, the context and focus images collectively constitute the visual scene upon combination of projections thereof.
- In an embodiment, the term 'context display' used herein relates to a display (or screen) adapted to facilitate rendering of the context image thereon. Specifically, the at least one context display may be adapted to receive a projection of the context image thereon. According to an embodiment, the context display may be selected from the group consisting of: a Liquid Crystal Display (LCD), a Light Emitting Diode (LED)-based display, an Organic LED (OLED)-based display, a micro OLED-based display, and a Liquid Crystal on Silicon (LCoS)-based display.
- In another embodiment, the term 'context image projector' used herein relates to an optical device for rendering the context image at a display (or screen) associated therewith. According to an embodiment, the context image projector may be selected from the group consisting of: a Liquid Crystal Display (LCD)-based projector, a Light Emitting Diode (LED)-based projector, an Organic LED (OLED)-based projector, a Liquid Crystal on Silicon (LCoS)-based projector, a Digital Light Processing (DLP)-based projector, and a laser projector.
- In an embodiment, the at least one context image projector may be used to project separate context images for the left and right eyes of the user. It may be understood that the separate context images collectively constitute the context image. According to an embodiment, the at least one context image projector may comprise at least two context image projectors, at least one of the at least two context image projectors being arranged to be used for a left eye of a user, and at least one of the at least two context image projectors being arranged to be used for a right eye of the user. Specifically, the at least two context image projectors may be used such that at least one context image projector may be dedicatedly (or wholly) used to render the context image for one eye of the user. The at least two context image projectors allow separate optimization of optical paths of the separate context images (for example, a context image for the left eye of the user and a context image for the right eye of the user) constituting the context image.
- In another embodiment, the at least one context image projector may be arranged to be used for left and right eyes of the user on a shared-basis.
- For example, one context image projector may be used to render the context image on the display (or screen) associated therewith, on a shared basis. In such example, the one context image projector may project separate context images (for the left and right eyes of the user) collectively constituting the context image on the display (or screen) associated therewith.
- It is to be understood that at a given time, only one of the at least one context display and the at least one context image projector is used for rendering the context image. Specifically, at a given time, the context image may be rendered either on the context display or at the display (or screen) associated with the at least one context image projector.
- According to an embodiment, the term 'focus image projector' used herein relates to an optical device for projecting the focus image at a display (or screen) associated therewith. According to an embodiment, the focus image projector may be selected from the group consisting of: a Liquid Crystal Display (LCD)-based projector, a Light Emitting Diode (LED)-based projector, an Organic LED (OLED)-based projector, a Liquid Crystal on Silicon (LCoS)-based projector, a Digital Light Processing (DLP)-based projector, and a laser projector.
- In an embodiment, the display (or screen) associated with the at least one context image projector and the display (or screen) associated with the at least one focus image projector may be same (or shared therebetween). Specifically, in such embodiment, both the at least one context image projector and the at least one focus image projector may render the context image and the focus image respectively, at a common/shared display (or screen).
- In an embodiment of the present disclosure, the at least one focus image projector may comprise at least two focus image projectors, at least one of the at least two focus image projectors being arranged to be used for a left eye of a user, and at least one of the at least two focus image projectors being arranged to be used for a right eye of the user. Specifically, the at least two focus image projectors may be used such that at least one focus image projector may be dedicatedly (or wholly) used to render the focus image for one eye of the user. The at least two focus image projectors allow separate optimization of optical paths of the separate focus images (for example, a focus image for the left eye of the user and a focus image for the right eye of the user) constituting the focus image.
- Optionally, if the at least one focus image projector is a laser projector, the at least one focus image projector may be arranged to be used for both eyes of the user. Specifically, the laser projector may be operated such that the separate focus images for both eyes of the user may be projected substantially simultaneously. For example, one laser projector may be used as the at least one focus image projector to project separate focus images (for each of the left eye and the right eye of the user) substantially simultaneously.
- The display apparatus further comprises at least one projection surface, an image steering unit, means for detecting a gaze direction, and a processor coupled in communication with the image steering unit and the means for detecting the gaze direction.
- In an embodiment, the processor may be hardware, software, firmware or a combination of these, configured to control operation of the display apparatus. Specifically, the processor may control operation of the display apparatus to process and display (or project) the visual scene onto the eyes of the user. In an instance wherein the display apparatus is used within the device associated with the user, the processor may or may not be external to the device.
- Optionally, the processor may also be coupled in communication with a memory unit. In an embodiment, the memory unit may be hardware, software, firmware or a combination of these, suitable for storing an image of the visual scene and/or the context and focus images to be processed and displayed by the processor. In such embodiment, the memory unit may be used within the device or may be remotely located.
- In an embodiment, the means for detecting a gaze direction may relate to specialized equipment for measuring a direction of gaze of the eyes of the user and movement of the eyes, such as eye trackers. Specifically, an accurate detection of the gaze direction may allow the display apparatus to closely implement gaze contingency thereon. Further, the means for detecting the gaze direction, may or may not be placed in contact with the eyes. Examples of the means for detecting a gaze direction include contact lenses with motion sensors, cameras monitoring position of pupil of the eye, and so forth.
- The processor is configured to receive an input image, and use the detected gaze direction to determine a region of visual accuracy of the input image. In an embodiment, the term 'input image' used herein relates to the image of the visual scene to be displayed via the display apparatus. For example, the input image may be displayed to the eyes of the user. In an embodiment, the input image may be received from an image sensor coupled to the device associated with the user. Specifically, the image sensor (such as image sensor of a pass-through digital camera) may capture an image of a real-world environment as the input image to be projected onto the eyes. In another embodiment, the input image may be received from the memory unit coupled in communication with the processor. Specifically, the memory unit may be configured to store the input image in a suitable format including, but not limited to, Moving Picture Experts Group (MPEG), Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Portable Network Graphics (PNG), Graphics Interchange Format (GIF), and Bitmap file format (BMP). In such embodiment, the input image may optionally be a computer generated image.
- After receiving the input image, the processor thus uses the detected gaze direction to determine a region of visual accuracy of the input image. In an embodiment, the region of visual accuracy relates to a region of the input image whereat the detected gaze direction of the eye may be focused. Specifically, the region of visual accuracy may be a region of interest (or a fixation point) within the input image, and may be projected onto fovea of the eye. Further, the region of visual accuracy may be the region of focus within the input image. Therefore, it may be evident that the region of visual accuracy relates to a region resolved to a much greater detail as compared to other regions of the input image, when the input image is viewed by a human visual system.
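The mapping from a detected gaze direction to a region of visual accuracy of the input image can be sketched as follows. This Python sketch is purely illustrative and is not part of the disclosed apparatus: the equiangular mapping (a fixed number of pixels per degree), the gaze angles in degrees, the default field-of-view and region sizes, and the function name are all assumptions.

```python
def region_of_visual_accuracy(gaze_yaw_deg, gaze_pitch_deg,
                              image_w, image_h,
                              fov_h_deg=180.0, fov_v_deg=180.0,
                              region_deg=5.0):
    """Map a detected gaze direction to a rectangular pixel region of
    the input image (illustrative equiangular mapping)."""
    px_per_deg_x = image_w / fov_h_deg
    px_per_deg_y = image_h / fov_v_deg
    # Centre of the region: image centre offset by the gaze angles.
    cx = image_w / 2 + gaze_yaw_deg * px_per_deg_x
    cy = image_h / 2 + gaze_pitch_deg * px_per_deg_y
    half_w = region_deg * px_per_deg_x / 2
    half_h = region_deg * px_per_deg_y / 2
    # Clamp the region to the image bounds.
    x0 = max(0, int(cx - half_w))
    y0 = max(0, int(cy - half_h))
    x1 = min(image_w, int(cx + half_w))
    y1 = min(image_h, int(cy + half_h))
    return x0, y0, x1, y1
```

For a central gaze on a 2700-pixel-wide, 180-degree input image, this yields a 5-degree (75-pixel) square region centred on the image.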
- Further, after determining the region of visual accuracy of the input image, the processor is configured to process the input image to generate the context image and the focus image, the context image having a first resolution and the focus image having a second resolution. The second resolution is higher than the first resolution. The focus image substantially corresponds to the region of visual accuracy of the input image. Further, the context image corresponds to a low-resolution representation of the input image. Therefore, the context image includes the region of visual accuracy of the input image along with the remaining region of the input image. Specifically, the size of the context image is larger than the size of the focus image, since the focus image corresponds to only a portion of the context image whereat the detected gaze direction of the eye may be focused.
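The generation of the context and focus images can be sketched as follows. The list-of-lists image representation, the downsampling factor of 4, and the function name are illustrative assumptions, not details of the disclosure; the context image is a subsampled (first-resolution) copy of the whole input image, while the focus image is a full-resolution (second-resolution) crop of the region of visual accuracy.

```python
def generate_context_and_focus(image, region):
    """Split an input image (2-D list of pixel values) into a
    low-resolution context image and a high-resolution focus image.
    'region' is (x0, y0, x1, y1) in input-image pixel coordinates."""
    x0, y0, x1, y1 = region
    factor = 4  # illustrative: first resolution = second resolution / 4
    # Context image: subsample the whole input image (first resolution).
    context = [row[::factor] for row in image[::factor]]
    # Focus image: crop the region of visual accuracy at full
    # (second) resolution.
    focus = [row[x0:x1] for row in image[y0:y1]]
    return context, focus
```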
- In an embodiment, the first and second resolutions may be understood in terms of angular resolution. Specifically, the pixels per degree indicative of the second resolution are higher than the pixels per degree indicative of the first resolution. In an example, the fovea of the eye of the user corresponds to 2 degrees of visual field and receives the projection of the focus image with an angular cross-section width equal to 114 pixels, indicative of 57 pixels per degree. Therefore, an angular pixel size corresponding to the focus image would equal 2/114, or approximately 0.017 degrees. Further in such example, the retina of the eye corresponds to 180 degrees of visual field and receives the projection of the context image with an angular cross-section width equal to 2700 pixels, indicative of 15 pixels per degree. Therefore, an angular pixel size corresponding to the context image would equal 180/2700, or approximately 0.067 degrees. As calculated, the angular pixel size corresponding to the context image is clearly much larger than the angular pixel size corresponding to the focus image. However, a perceived angular resolution indicated by a total number of pixels may be greater for the context image as compared to the focus image, since the focus image corresponds to only a part of the context image, wherein the part corresponds to the region of visual accuracy of the input image.
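The arithmetic in the example above can be checked directly; the variable names below are ours, but the numbers (2 degrees / 114 pixels for the fovea, 180 degrees / 2700 pixels for the retina) are taken from the example.

```python
# Worked numbers from the angular-resolution example above.
focus_fov_deg, focus_px = 2, 114         # fovea: 2 degrees, 114 pixels
context_fov_deg, context_px = 180, 2700  # retina: 180 degrees, 2700 pixels

focus_ppd = focus_px / focus_fov_deg         # 57 pixels per degree
context_ppd = context_px / context_fov_deg   # 15 pixels per degree

focus_angular_px = focus_fov_deg / focus_px        # ~0.017 degrees
context_angular_px = context_fov_deg / context_px  # ~0.067 degrees
```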
- Along with the generation of the context image and the focus image, a region of the context image that substantially corresponds to the region of visual accuracy of the input image is masked. Specifically, the masking may be performed by the processor to hide (or obscure) the region of the context image corresponding to the region of visual accuracy of the input image. For example, pixels of the context image corresponding to the region of visual accuracy of the input image may be dimmed (or blackened) for masking. Further, the focus image substantially corresponds to the region of visual accuracy of the input image, and the second resolution is higher than the first resolution.
- After processing the input image, the processor is configured to render the context image at the at least one context display or at the at least one projection surface via the at least one context image projector. Further, the processor is configured to render the focus image at the at least one projection surface via the at least one focus image projector. It is to be understood that either the at least one context display, or the at least one projection surface and the at least one context image projector, may be used to render the context image, at a given time.
- According to an embodiment, the term 'projection surface' used herein relates to a display (or screen) adapted to facilitate rendering of the context image and the focus image thereon. Specifically, the at least one projection surface may have transmittance and reflectance specifications suitable for optically rendering the context and focus images thereon. In an example, the at least one projection surface may be a non-transparent (or opaque) surface. In another example, the at least one projection surface may be a semi-transparent surface. Optionally, the at least one projection surface may be implemented by way of at least one of: a polarizer, a retarder.
- In an embodiment, the at least one projection surface may be arranged to allow the projection of the rendered context image to pass through substantially and to reflect the projection of the rendered focus image substantially. In such embodiment, the context image may be projected onto the at least one projection surface from a back side thereof and the focus image may be projected onto the at least one projection surface from a front side thereof. In an alternate embodiment, the at least one projection surface may be arranged to allow the projection of the rendered focus image to pass through substantially and to reflect the projection of the rendered context image substantially. In such embodiment, the focus image may be projected onto the at least one projection surface from the back side thereof and the context image may be projected onto the at least one projection surface from the front side thereof.
- According to an embodiment, the at least one projection surface may be arranged to allow the projections of both the rendered context and focus images to pass through substantially. In such embodiment, both the context and focus images may be projected onto the at least one projection surface from the back side thereof. According to another embodiment, the at least one projection surface may be arranged to reflect the projections of both the rendered context and focus images substantially. In such embodiment, both the context and focus images may be projected onto the at least one projection surface from the front side thereof.
- According to an embodiment of the present disclosure, the at least one projection surface may comprise at least two projection surfaces, at least one of the at least two projection surfaces being arranged to be used for a left eye of the user, and at least one of the at least two projection surfaces being arranged to be used for a right eye of the user. Specifically, at least one of the at least two projection surfaces may be used for rendering the context and focus images for a left eye of the user. Similarly, at least one of the at least two projection surfaces may be used for rendering the context and focus images for a right eye of the user. Optionally, at least one of the at least two projection surfaces may be semi-transparent to transmit projections of the context image and/or the focus image therethrough.
- In an embodiment, the at least one projection surface is implemented as a part of the at least one context display. In such embodiment, the context image may be rendered by the processor at the at least one context display without use of the at least one context image projector. Further, in such embodiment, the at least one context display may also be adapted to facilitate rendering of the focus image thereon.
- After rendering the context and focus images, the processor is configured to control the image steering unit to adjust a location of a projection of the rendered focus image on the at least one projection surface, such that the projection of the rendered focus image substantially overlaps the projection of the masked region of the rendered context image on the at least one projection surface. Furthermore, the processor is configured to perform rendering the context image, rendering the focus image, and controlling the image steering unit substantially simultaneously. Specifically, the combined projections of the rendered context and focus images may constitute a projection of the input image. The context and focus images are rendered substantially simultaneously in order to avoid time lag during combination of projections thereof.
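In image space, the adjustment described above amounts to overlaying the focus image onto the masked context image at the steered location. The following sketch is illustrative only: the function name and the assumption that both images are 2-D lists already expressed at a common scale are ours, and a real implementation would operate optically via the image steering unit rather than on pixel arrays.

```python
def composite(context_masked, focus, location):
    """Overlay the focus image onto the masked context image so that
    the focus image covers the masked region (location = (x0, y0))."""
    x0, y0 = location
    out = [row[:] for row in context_masked]  # copy the context image
    for dy, row in enumerate(focus):
        for dx, px in enumerate(row):
            out[y0 + dy][x0 + dx] = px  # focus replaces masked pixels
    return out
```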
- The angular width of the projection of the rendered context image is larger than the angular width of the projection of the rendered focus image. This may be attributed to the fact that the rendered focus image is typically projected on and around the fovea of the eye, whereas the rendered context image is projected on a retina of the eye, of which the fovea is just a small part. Specifically, a combination of the rendered context and focus images constitute the input image and may be projected onto the eye to project the input image thereon.
- The term 'image steering unit' used herein relates to equipment (such as optical elements, electromechanical components, and so forth) for controlling the projection of the rendered focus image on the at least one projection surface. Specifically, the image steering unit includes at least one element/component. The image steering unit is operable to adjust the location of the projection of the rendered focus image on the at least one projection surface.
- In the aforementioned embodiment, the image steering unit substantially overlaps the projection of the rendered focus image with the projection of the masked region of the rendered context image to avoid distortion of the region of visual accuracy of the input image. Specifically, the region of visual accuracy of the input image is represented within both, the rendered context image of low resolution and the rendered focus image of high resolution. The overlap (or superimposition) of projections of low and high-resolution images of a same region results in distortion of appearance of the same region. Further, the rendered focus image of high resolution may contain more information pertaining to the region of visual accuracy of the input image, as compared to the rendered context image of low resolution. Therefore, the region of the context image that substantially corresponds to the region of visual accuracy of the input image is masked, in order to project the rendered high-resolution focus image without distortion.
- The processor is configured to mask the region of the context image corresponding to the region of visual accuracy of the input image such that transitional area seams (or edges) between the region of visual accuracy of the displayed input image and the remaining region of the displayed input image are minimized. It is to be understood that the region of visual accuracy of the displayed input image corresponds to the projection of the focus image (and the masked region of the context image), whereas the remaining region of the displayed input image corresponds to the projection of the context image. Specifically, the masking should be performed as a gradual gradation in order to minimize the transitional area seams upon superimposition of the context and focus images, so that the displayed input image appears continuous. For example, the processor may significantly dim pixels of the context image corresponding to the region of visual accuracy of the input image, and gradually reduce the amount of dimming of the pixels with increase in distance thereof from the region of visual accuracy of the input image. If alignment and appearance of the superimposed (or overlaid) projections of the rendered context and focus images are improper and/or have discontinuities, then the displayed input image would also be improper.
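The gradual gradation described above can be sketched as a distance-based dimming mask: pixels inside the region of visual accuracy are fully dimmed, and the dimming fades out over a few pixels around it. The feather width, the Chebyshev distance metric, and the function name are illustrative assumptions.

```python
def mask_context(context, region, feather=3):
    """Dim context-image pixels inside region (x0, y0, x1, y1) and
    fade the dimming out over 'feather' pixels to avoid hard seams."""
    x0, y0, x1, y1 = region
    out = []
    for y, row in enumerate(context):
        new_row = []
        for x, px in enumerate(row):
            # Distance from the pixel to the masked region (0 inside).
            dx = max(x0 - x, 0, x - (x1 - 1))
            dy = max(y0 - y, 0, y - (y1 - 1))
            d = max(dx, dy)
            if d >= feather:
                new_row.append(px)           # far away: untouched
            else:
                weight = d / feather         # 0 inside -> fully dimmed
                new_row.append(px * weight)  # gradual gradation
        out.append(new_row)
    return out
```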
- Optionally, masking the region of the context image that substantially corresponds to the region of visual accuracy of the input image may be performed using a linear transparency mask blend of inverse values between the context image and the focus image at the transition area, using stealth (or camouflage) patterns containing shapes naturally difficult for detection by the eyes of the user, and so forth.
- The image steering unit comprises at least one first actuator for moving the focus image projector with respect to the at least one projection surface, wherein the processor is configured to control the at least one first actuator to adjust the location of the projection of the rendered focus image on the at least one projection surface. Specifically, when the gaze direction of the eye shifts from one direction to another, the existing arrangement of the focus image projector and the at least one projection surface may no longer project the rendered focus image on and around the fovea of the eye. Therefore, the processor controls the at least one first actuator to move the focus image projector with respect to the at least one projection surface, to adjust the location of the projection of the rendered focus image on the at least one projection surface such that the rendered focus image may be projected on and around the fovea of the eye even on occurrence of a shift in the gaze direction. More specifically, the processor controls the at least one first actuator by generating an actuation signal (such as an electric current, hydraulic pressure, and so forth).
- In an example, the at least one first actuator may move the focus image projector closer or away from the at least one projection surface. In another example, the at least one first actuator may move the focus image projector laterally with respect to the at least one projection surface. In yet another example, the at least one first actuator may tilt and/or rotate the focus image projector with respect to the at least one projection surface.
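As a toy illustration of how the processor might translate a detected gaze shift into a lateral actuation command for the first actuator, consider a linear calibration. The gain value (millimetres of projector travel per degree of gaze shift) and the function name are purely hypothetical; a real system would use a calibrated, possibly non-linear, mapping.

```python
def first_actuator_offset(gaze_shift_deg, gain_mm_per_deg=0.5):
    """Convert a (horizontal, vertical) gaze shift in degrees into a
    lateral displacement (mm) of the focus image projector relative
    to the projection surface. The linear gain is hypothetical."""
    dx_deg, dy_deg = gaze_shift_deg
    return dx_deg * gain_mm_per_deg, dy_deg * gain_mm_per_deg
```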
- The image steering unit comprises at least one optical element that is positioned on an optical path between the at least one projection surface and the at least one focus image projector and at least one second actuator for moving the at least one optical element with respect to the at least one focus image projector. The at least one optical element is selected from the group consisting of a lens, a prism, a mirror, a beam splitter, and an optical waveguide. The processor is configured to control the at least one second actuator to adjust the location of the projection of the rendered focus image on the at least one projection surface. Specifically, the at least one optical element may change the optical path of the projection of the rendered focus image on the at least one projection surface in order to facilitate projection of the rendered focus image on and around the fovea of the eye even on occurrence of shift in the gaze direction. More specifically, the processor controls the at least one second actuator by generating an actuation signal (such as an electric current, hydraulic pressure, and so forth).
- For example, a prism may be positioned on an optical path between a projection surface and a focus image projector. Specifically, the optical path of the projection of the rendered focus image may change on passing through the prism to adjust the location of the projection of the rendered focus image on the projection surface. Further, the prism may be moved transversally and/or laterally, be rotated, be tilted, and so forth, by a second actuator in order to facilitate projection of the rendered focus image on and around the fovea of the eye even on occurrence of shift in the gaze direction.
- For example, the at least one optical element that is positioned on an optical path between the at least one projection surface and the at least one focus image projector, may be an optical waveguide. Specifically, the optical waveguide may be arranged to allow the projection of the focus image to pass therethrough, and to adjust the location of the projection of the rendered focus image on the at least one projection surface. Therefore, the optical waveguide may be semi-transparent. In an embodiment, the optical waveguide may further comprise optical elements therein such as microprisms, mirrors, diffractive optics, and so forth.
- The image steering unit comprises at least one third actuator for moving the at least one projection surface, wherein the processor is configured to control the at least one third actuator to adjust the location of the projection of the rendered focus image on the at least one projection surface. Specifically, the at least one third actuator may move the at least one projection surface in order to facilitate projection of the rendered focus image on and around the fovea of the eye even on occurrence of shift in the gaze direction. More specifically, the processor controls the at least one third actuator by generating an actuation signal (such as an electric current, hydraulic pressure, and so forth).
- In an example, the at least one third actuator may move the at least one projection surface closer or away from the at least one focus image projector. In another example, the at least one third actuator may move the at least one projection surface laterally with respect to the at least one focus image projector. In yet another example, the at least one third actuator may tilt and/or rotate the at least one projection surface.
- According to an embodiment, the display apparatus may comprise at least one focusing lens that is positioned on an optical path between the at least one projection surface and the at least one focus image projector, and at least one fourth actuator for moving the at least one focusing lens with respect to the at least one focus image projector. Further, in such embodiment, the processor is configured to control the at least one fourth actuator to adjust a focus of the projection of the rendered focus image. Specifically, the at least one focusing lens may utilize specialized properties thereof to adjust the focus of the projection of the rendered focus image by changing the optical path thereof. More specifically, the focus of the projection of the rendered focus image may be adjusted to accommodate for diopter tuning, astigmatism correction, and so forth. Further, the processor may control the at least one fourth actuator by generating an actuation signal (such as an electric current, hydraulic pressure, and so forth).
- According to another embodiment, the display apparatus may comprise the at least one focusing lens positioned on an optical path between the at least one optical element and the at least one focus image projector, wherein the processor is configured to control at least one active optical characteristic of the at least one focusing lens by applying a control signal to the at least one focusing lens. Specifically, the active optical characteristics of the at least one focusing lens may include, but are not limited to, focal length and optical power. Further, in such embodiment, the control signal may be an electrical signal, hydraulic pressure, and so forth.
- In an embodiment, the at least one focusing lens may be a Liquid Crystal lens (LC lens), and so forth. Optionally, the at least one focusing lens may be positioned on an optical path between the at least one optical element and the at least one context display.
- In an embodiment, the processor may implement image processing functions for the at least one projection surface. Specifically, the image processing functions may be implemented prior to rendering the context image and the focus image at the at least one projection surface. More specifically, implementation of such image processing functions may optimize quality of the rendered context and focus images. Therefore, the image processing function may be selected by taking into account properties of the at least one projection surface and the properties of the input image.
- According to an embodiment, image processing functions for the at least one projection surface may comprise at least one function for optimizing perceived context image and/or the focus image quality, the at least one function selected from the group comprising low pass filtering, colour processing, and gamma correction. In an embodiment, the image processing functions for the at least one projection surface may further comprise edge processing to minimize perceived distortion on a boundary of combined projections of the rendered context and focus images.
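The low pass filtering and gamma correction functions mentioned above can be sketched in Python as follows, operating on a row of normalised pixel values in [0, 1]. The 3-tap moving-average filter, the clamped edge handling, and the gamma value are illustrative assumptions, not specifications of the apparatus.

```python
def gamma_correct(row, gamma=2.2):
    """Apply gamma correction to a row of normalised pixel values."""
    return [px ** (1.0 / gamma) for px in row]

def low_pass(row):
    """Simple 3-tap moving-average low-pass filter (edges clamped)."""
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
            for i in range(n)]
```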
- The present description also relates to the method as described above. The various embodiments and variants disclosed above apply mutatis mutandis to the method. More specifically, the location of the projection of the rendered focus image can be adjusted by controlling at least one first actuator of the image steering unit to move the focus image projector with respect to the at least one projection surface.
- According to another embodiment, the location of the projection of the rendered focus image is adjusted by controlling at least one second actuator of the image steering unit to move at least one optical element of the image steering unit with respect to the at least one focus image projector, wherein the at least one optical element is positioned on an optical path between the at least one projection surface and the at least one focus image projector, and/or the location of the projection of the rendered focus image is adjusted by controlling at least one third actuator of the image steering unit to move the at least one projection surface.
- The method may also comprise adjusting a focus of the projection of the rendered focus image by controlling at least one fourth actuator of the display apparatus to move at least one focusing lens of the display apparatus with respect to the at least one focus image projector, wherein the at least one focusing lens is positioned on an optical path between the at least one projection surface and the at least one focus image projector.
- Referring to
FIG. 1, illustrated is a block diagram depicting a sub-part of an exemplary architecture of a display apparatus 100, in accordance with an embodiment of the present disclosure. The display apparatus 100 includes at least one context image projector or at least one context display 102 for rendering a context image, and at least one focus image projector 104 for rendering a focus image. An arrangement is made to combine the projection of the rendered focus image with the projection of the rendered context image to create a visual scene. - Referring to
FIG. 2, illustrated is a block diagram of an exemplary architecture of a display apparatus 200, in accordance with an embodiment of the present disclosure. The display apparatus 200 includes at least one projection surface 202, at least one context image projector or at least one context display 204, at least one focus image projector 206, an image steering unit 208, means for detecting a gaze direction 210, and a processor 212. The processor 212 is coupled in communication with the image steering unit 208 and the means for detecting the gaze direction 210. Further, the processor 212 is also coupled to the at least one projection surface 202, the at least one context image projector or at least one context display 204, and the at least one focus image projector 206. - Referring to
FIG. 3, illustrated is an exemplary implementation of a display apparatus 300, in accordance with an embodiment of the present disclosure. As shown, the display apparatus 300 comprises at least one projection surface (depicted as a projection surface 302), at least one context image projector (depicted as a context image projector 304), at least one focus image projector (depicted as a focus image projector 306), means for detecting a gaze direction (not shown), a processor (not shown), and an image steering unit comprising at least one first actuator (not shown), at least one optical element (depicted as an optical element 308) and at least one second actuator (not shown). For example, the optical element 308 is selected from a group consisting of a lens, a prism, a mirror, a beam splitter, and an optical waveguide. The processor of the display apparatus 300 is configured to render a context image 310 at the projection surface 302 via the context image projector 304, and to render a focus image 312 at the projection surface 302 via the focus image projector 306. Further, the processor of the display apparatus 300 is configured to control the second actuator (not shown) to adjust a location of a projection of the rendered focus image 312 on the projection surface 302. As shown, both the context image 310 and the focus image 312 are projected from a same side of the projection surface 302. - Referring to
FIG. 4, illustrated is an exemplary implementation of a display apparatus 400, in accordance with another embodiment of the present disclosure. As shown, the display apparatus 400 comprises at least one projection surface (depicted as a projection surface 402), at least one context image projector (depicted as a context image projector 404), at least one focus image projector (depicted as a focus image projector 406), means for detecting a gaze direction (not shown), a processor (not shown), and an image steering unit (not shown). The processor of the display apparatus 400 is configured to render a context image 408 at the projection surface 402 via the context image projector 404, and to render a focus image 410 at the projection surface 402 via the focus image projector 406. As shown, the context image 408 is projected from a front side of the projection surface 402 and the focus image 410 is projected from a back side of the projection surface 402. - Referring to
FIG. 5, illustrated is an exemplary implementation of a display apparatus 500, in accordance with another embodiment of the present disclosure. As shown, the display apparatus 500 comprises at least one projection surface comprising at least two projection surfaces (depicted as projection surfaces 502A and 502B), at least one context image projector (depicted as a context image projector 504), and at least one focus image projector comprising at least two focus image projectors (depicted as two focus image projectors 506A and 506B). The projection surface 502A of the at least two projection surfaces is arranged to be used for a left eye of a user, and the projection surface 502B of the at least two projection surfaces is arranged to be used for a right eye of the user. Furthermore, the focus image projector 506A of the at least two focus image projectors is arranged to be used for the left eye of the user, and the focus image projector 506B of the at least two focus image projectors is arranged to be used for the right eye of the user. The processor of the display apparatus 500 is configured to render a context image (depicted as two context images 508A and 508B) at the projection surfaces 502A and 502B via the context image projector 504. In such instance, the context image 508A is used for the left eye of the user, and the context image 508B is used for the right eye of the user. Further, the processor of the display apparatus 500 is configured to render a focus image (depicted as two focus images 510A and 510B) at the projection surfaces 502A and 502B via the focus image projectors 506A and 506B, respectively. In such instance, the focus image 510A is used for the left eye of the user, and the focus image 510B is used for the right eye of the user. As shown, both the context images 508A and 508B and the focus images 510A and 510B are projected from a same side of the projection surfaces 502A and 502B. - Referring to
FIG. 6, illustrated is an exemplary implementation of a display apparatus 600, in accordance with another embodiment of the present disclosure. As shown, the display apparatus 600 comprises at least one projection surface implemented as a part of at least one context display (depicted as a context display 602), and at least one focus image projector comprising at least two focus image projectors (depicted as two focus image projectors 604A and 604B). The processor of the display apparatus 600 is configured to render a context image 606 at the context display 602. Further, the processor of the display apparatus 600 is configured to render a focus image (depicted as two focus images 608A and 608B) at the context display 602 via the two focus image projectors 604A and 604B, respectively. In such instance, the focus image 608A is used for the left eye of the user, and the focus image 608B is used for the right eye of the user. As shown, both the focus images 608A and 608B are projected from a same side of the context display 602. - Referring to
FIG. 7, illustrated is an exemplary implementation of a display apparatus 700, in accordance with another embodiment of the present disclosure. As shown, the display apparatus 700 comprises at least one projection surface 702 implemented as a part of at least one context display, and at least one focus image projector 704. Further, the display apparatus 700 comprises an image steering unit comprising at least one optical element 706 that is positioned on an optical path between the at least one projection surface 702 and the at least one focus image projector 704. As shown, the at least one optical element 706 is an optical waveguide. Further, a processor of the display apparatus 700 is configured to control at least one second actuator (not shown) to adjust a location of the projection of the rendered focus image on the at least one projection surface 702. The at least one optical element 706 (or the depicted optical waveguide) further comprises optical elements 708 therein, such as microprisms, mirrors, diffractive optics, and so forth. - Referring to
FIG. 8, illustrated are steps of a method 800 of displaying via a display apparatus (such as the display apparatus 100 of FIG. 1), in accordance with an embodiment of the present disclosure. At step 802, a context image is rendered via the at least one context image projector or the at least one context display, wherein an angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees. At step 804, a focus image is rendered via the at least one focus image projector, wherein an angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees. At step 806, an arrangement is made for the projection of the rendered focus image to be combined with the projection of the rendered context image to create a visual scene. - The
steps 802 to 806 are only illustrative, and other alternatives can also be provided where one or more steps are added, one or more steps are removed, or one or more steps are provided in a different sequence without departing from the scope of the claims herein. In an example, in the method 800, the location of the projection of the rendered focus image may be adjusted by controlling at least one first actuator of the image steering unit to move the focus image projector with respect to the at least one projection surface. In another example, in the method 800, the location of the projection of the rendered focus image may be adjusted by controlling at least one second actuator of the image steering unit to move at least one optical element of the image steering unit with respect to the at least one focus image projector, wherein the at least one optical element is positioned on an optical path between the at least one projection surface and the at least one focus image projector. In yet another example, in the method 800, the location of the projection of the rendered focus image may be adjusted by controlling at least one third actuator of the image steering unit to move the at least one projection surface. Optionally, the method 800 may comprise adjusting a focus of the projection of the rendered focus image by controlling at least one fourth actuator of the display apparatus to move at least one focusing lens of the display apparatus with respect to the at least one focus image projector, wherein the at least one focusing lens is positioned on an optical path between the at least one projection surface and the at least one focus image projector. - Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims.
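The generation of the context and focus images rendered in steps 802 and 804 of the method described above can be sketched as follows. This is a minimal illustration only, assuming a square region of visual accuracy centred on the gaze point; the function name, region size, and downsampling factor are hypothetical, not taken from the disclosure:

```python
import numpy as np

def split_input_image(image: np.ndarray, gaze_xy, focus_size: int = 64,
                      downsample: int = 4):
    """From a single input image and a gaze point (x, y), produce a
    low-resolution context image whose region of visual accuracy is
    masked out, and a full-resolution focus image covering that region."""
    h, w = image.shape[:2]
    x, y = gaze_xy
    # Clamp the focus window so it stays inside the input image.
    x0 = int(np.clip(x - focus_size // 2, 0, w - focus_size))
    y0 = int(np.clip(y - focus_size // 2, 0, h - focus_size))
    focus = image[y0:y0 + focus_size, x0:x0 + focus_size].copy()
    context = image.astype(float).copy()
    context[y0:y0 + focus_size, x0:x0 + focus_size] = 0.0  # masked region
    context = context[::downsample, ::downsample]          # first (lower) resolution
    return context, focus
```

The masked, downsampled context image corresponds to the first resolution, while the cropped focus image retains the second, higher resolution; projecting the focus image so that it overlaps the masked region then reconstructs the visual scene.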
Expressions such as "including", "comprising", "incorporating", "have", and "is", used to describe and claim the present disclosure, are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural.
Claims (12)
- A display apparatus (100, 200, 300, 400, 500, 600, 700) comprising:
- at least one context image projector (304, 404, 504) or at least one context display (102, 204, 602) for rendering a context image (310, 408, 508, 606), wherein an angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees;
- at least one focus image projector (104, 206, 306, 406, 506, 604, 704) for rendering a focus image (312, 410, 510, 608), wherein an angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees;
- at least one projection surface (202, 302, 402, 502, 702);
- an image steering unit (208) comprising:
- at least one first actuator for moving the at least one focus image projector with respect to the at least one projection surface;
- at least one optical element (308, 706, 708) that is positioned on an optical path between the at least one projection surface (202, 302, 402, 502, 702) and the at least one focus image projector (104, 206, 306, 406, 506, 604, 704), and at least one second actuator for moving the at least one optical element with respect to the at least one focus image projector, the at least one optical element being selected from the group consisting of a lens, a prism, a mirror, a beam splitter, and an optical waveguide; and
- at least one third actuator for moving the at least one projection surface (202, 302, 402, 502, 702);
- means for detecting a gaze direction (210); and
- a processor (212) coupled in communication with the image steering unit and the means for detecting the gaze direction, wherein the processor is configured to:
(a) receive an input image, and use the detected gaze direction to determine a region of visual accuracy of the input image;
(b) process the input image to generate the context image and the focus image, the context image having a first resolution and the focus image having a second resolution, wherein:
- a region of the context image that substantially corresponds to the region of visual accuracy of the input image is masked,
- the focus image substantially corresponds to the region of visual accuracy of the input image, and
- the second resolution is higher than the first resolution;
(c) render the context image at the at least one context display or at the at least one projection surface via the at least one context image projector;
(d) render the focus image at the at least one projection surface via the at least one focus image projector; and
(e) control the image steering unit to adjust a location of the projection of the rendered focus image on the at least one projection surface, such that the projection of the rendered focus image substantially overlaps the projection of the masked region of the rendered context image on the at least one projection surface, and control the at least one first actuator, the at least one second actuator and the at least one third actuator to adjust the location of the projection of the rendered focus image on the at least one projection surface,
wherein the processor is configured to perform (c), (d) and (e) substantially simultaneously, and wherein an arrangement is made to combine the projection of the rendered focus image with the projection of the rendered context image to create a visual scene.
- The display apparatus (100, 200, 300, 400, 500, 600, 700) of claim 1, wherein the at least one focus image projector (104, 206, 306, 406, 506, 604, 704) comprises at least two focus image projectors, at least one of the at least two focus image projectors being arranged to be used for a left eye of a user, and at least one of the at least two focus image projectors being arranged to be used for a right eye of the user.
- The display apparatus (100, 200, 300, 400, 500, 600, 700) of claim 1 or 2, wherein the at least one context image projector (304, 404, 504) comprises at least two context image projectors, at least one of the at least two context image projectors being arranged to be used for a left eye of a user, and at least one of the at least two context image projectors being arranged to be used for a right eye of the user, or the at least one context image projector is arranged to be used for left and right eyes of a user on a shared basis.
- The display apparatus (100, 200, 300, 400, 500, 600, 700) of any of claims 1 to 3, wherein the at least one projection surface (202, 302, 402, 502, 702) comprises at least two projection surfaces, at least one of the at least two projection surfaces being arranged to be used for a left eye of a user, and at least one of the at least two projection surfaces being arranged to be used for a right eye of the user.
- The display apparatus (100, 200, 300, 400, 500, 600, 700) of claim 1, 2, or 4, wherein the at least one projection surface (202, 302, 402, 502, 702) is implemented as a part of the at least one context display (102, 204, 602).
- The display apparatus (100, 200, 300, 400, 500, 600, 700) of any of the preceding claims, wherein the at least one projection surface (202, 302, 402, 502, 702) is implemented by way of at least one of: a polarizer, a retarder.
- The display apparatus (100, 200, 300, 400, 500, 600, 700) of any of the preceding claims, wherein the display apparatus comprises:
- at least one focusing lens that is positioned on an optical path between the at least one projection surface (202, 302, 402, 502, 702) and the at least one focus image projector (104, 206, 306, 406, 506, 604, 704); and
- at least one fourth actuator for moving the at least one focusing lens with respect to the at least one focus image projector,
wherein the processor (212) is configured to control the at least one fourth actuator to adjust a focus of the projection of the rendered focus image (312, 410, 510, 608).
- The display apparatus (100, 200, 300, 400, 500, 600, 700) of any of claims 2 to 6, wherein the display apparatus comprises:
- at least one focusing lens that is positioned on an optical path between the at least one first optical element (308, 706, 708) and the at least one focus display,
wherein the processor (212) is configured to control at least one active optical characteristic, including focal length and optical power, of the at least one focusing lens by applying a control signal to the at least one focusing lens.
- The display apparatus (100, 200, 300, 400, 500, 600, 700) of any of the preceding claims, wherein the context display (102, 204, 602) is selected from the group consisting of: a Liquid Crystal Display, a Light Emitting Diode-based display, an Organic Light Emitting Diode-based display, a micro Organic Light Emitting Diode-based display, and a Liquid Crystal on Silicon-based display.
- The display apparatus (100, 200, 300, 400, 500, 600, 700) of any of the preceding claims, wherein the context image projector (304, 404, 504) and/or the focus image projector (104, 206, 306, 406, 506, 604, 704) are independently selected from the group consisting of: a Liquid Crystal Display-based projector, a Light Emitting Diode-based projector, an Organic Light Emitting Diode-based projector, a Liquid Crystal on Silicon-based projector, a Digital Light Processing-based projector, and a laser projector.
- A method of displaying, via a display apparatus (100, 200, 300, 400, 500, 600, 700) comprising at least one context image projector (304, 404, 504) or at least one context display (102, 204, 602), at least one focus image projector (104, 206, 306, 406, 506, 604, 704), at least one projection surface (202, 302, 402, 502, 702), an image steering unit (208) comprising at least one first actuator for moving the focus image projector with respect to the at least one projection surface, at least one optical element (308, 706, 708) on an optical path between the at least one projection surface and the at least one focus image projector (104, 206, 306, 406, 506, 604, 704), the at least one optical element (308, 706, 708) being selected from the group consisting of a lens, a prism, a mirror, a beam splitter, and an optical waveguide, and at least one second actuator for adjusting a location of the at least one optical element, and at least one third actuator for moving the at least one projection surface (202, 302, 402, 502, 702), means for detecting a gaze direction (210), and a processor (212) coupled in communication with the image steering unit and the means for detecting the gaze direction, the method comprising:
(i) rendering a context image (310, 408, 508, 606) via the at least one context image projector or the at least one context display, wherein an angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees;
(ii) rendering a focus image (312, 410, 510, 608) via the at least one focus image projector, wherein an angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees;
(iii) arranging for the projection of the rendered focus image to be combined with the projection of the rendered context image to create a visual scene;
(iv) detecting a gaze direction, and using the detected gaze direction to determine a region of visual accuracy of an input image;
(v) processing the input image to generate the context image and the focus image, the context image having a first resolution and the focus image having a second resolution, the second resolution being higher than the first resolution, wherein the processing comprises:
- masking a region of the context image that substantially corresponds to the region of visual accuracy of the input image; and
- generating the focus image to substantially correspond to the region of visual accuracy of the input image; and
(vi) controlling the image steering unit to adjust a location of the projection of the rendered focus image on the at least one projection surface, such that the projection of the rendered focus image substantially overlaps the projection of the masked region of the rendered context image on the at least one projection surface, and controlling the at least one first actuator, the at least one second actuator and the at least one third actuator to adjust the location of the projection of the rendered focus image on the at least one projection surface,
wherein (i), (ii) and (vi) are performed substantially simultaneously.
- The method of claim 11, further comprising adjusting a focus of the projection of the rendered focus image (312, 410, 510, 608) by controlling at least one fourth actuator of the display apparatus (100, 200, 300, 400, 500, 600, 700) to move at least one focusing lens of the display apparatus with respect to the at least one focus image projector (104, 206, 306, 406, 506, 604, 704), wherein the at least one focusing lens is positioned on an optical path between the at least one projection surface (202, 302, 402, 502, 702) and the at least one focus image projector.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/366,497 US9711114B1 (en) | 2016-12-01 | 2016-12-01 | Display apparatus and method of displaying using projectors |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3330772A1 EP3330772A1 (en) | 2018-06-06 |
EP3330772B1 true EP3330772B1 (en) | 2020-06-03 |
Family
ID=59297841
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP17203680.8A Active EP3330772B1 (en) | 2016-12-01 | 2017-11-27 | Display apparatus and method of displaying using projectors |
Country Status (7)
Country | Link |
---|---|
US (1) | US9711114B1 (en) |
EP (1) | EP3330772B1 (en) |
JP (1) | JP6423945B2 (en) |
KR (1) | KR101977903B1 (en) |
CN (1) | CN107959837B (en) |
HK (1) | HK1247014B (en) |
SG (1) | SG10201709781TA (en) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3602181B1 (en) | 2017-03-27 | 2025-01-15 | Avegant Corp. | Steerable foveal display |
US10602033B2 (en) * | 2017-05-02 | 2020-03-24 | Varjo Technologies Oy | Display apparatus and method using image renderers and optical combiners |
US10495895B2 (en) * | 2017-06-14 | 2019-12-03 | Varjo Technologies Oy | Display apparatus and method of displaying using polarizers |
US10371998B2 (en) | 2017-06-29 | 2019-08-06 | Varjo Technologies Oy | Display apparatus and method of displaying using polarizers and optical combiners |
KR102461253B1 (en) * | 2017-07-24 | 2022-10-31 | 삼성전자주식회사 | Projection display apparatus including eye tracker |
US10578869B2 (en) * | 2017-07-24 | 2020-03-03 | Mentor Acquisition One, Llc | See-through computer display systems with adjustable zoom cameras |
US10155166B1 (en) * | 2017-09-08 | 2018-12-18 | Sony Interactive Entertainment Inc. | Spatially and user aware second screen projection from a companion robot or device |
US10614734B2 (en) * | 2018-03-06 | 2020-04-07 | Varjo Technologies Oy | Display apparatus and method of displaying using controllable scanning mirror |
CN113196132A (en) | 2018-12-07 | 2021-07-30 | 阿维甘特公司 | Steerable positioning element |
CA3125739A1 (en) | 2019-01-07 | 2020-07-16 | Avegant Corp. | Control system and rendering pipeline |
US20200225734A1 (en) * | 2019-01-11 | 2020-07-16 | Varjo Technologies Oy | Display apparatus and method of indicating level of immersion using visual indicator |
US10764567B2 (en) * | 2019-01-22 | 2020-09-01 | Varjo Technologies Oy | Display apparatus and method of displaying |
US10725304B1 (en) * | 2019-03-28 | 2020-07-28 | Facebook Technologies, Llc | Compensatory image during swift-eye movement |
US10466489B1 (en) | 2019-03-29 | 2019-11-05 | Razmik Ghazaryan | Methods and apparatus for a variable-resolution screen |
US11586049B2 (en) | 2019-03-29 | 2023-02-21 | Avegant Corp. | Steerable hybrid display using a waveguide |
US10554940B1 (en) | 2019-03-29 | 2020-02-04 | Razmik Ghazaryan | Method and apparatus for a variable-resolution screen |
US11284053B2 (en) | 2019-03-29 | 2022-03-22 | Razmik Ghazaryan | Head-mounted display and projection screen |
JPWO2021049473A1 (en) | 2019-09-09 | 2021-03-18 | ||
CN110764342A (en) * | 2019-11-19 | 2020-02-07 | 四川长虹电器股份有限公司 | Intelligent projection display device and adjustment method of projection picture thereof |
CN114930226A (en) | 2020-01-06 | 2022-08-19 | 阿维甘特公司 | Head-mounted system with color specific modulation |
CN111294579B (en) * | 2020-02-19 | 2021-12-28 | 母国标 | Method and system for aligning projection picture with curtain center |
FR3107611B1 (en) | 2020-02-26 | 2022-03-04 | Aledia | Multiple resolution display screen and method of making |
KR20220081162A (en) | 2020-12-08 | 2022-06-15 | 삼성전자주식회사 | Foveated display apparatus |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11109278A (en) * | 1997-10-03 | 1999-04-23 | Minolta Co Ltd | Video display device |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US931683A (en) | 1908-06-06 | 1909-08-17 | George S Cox And Brother | Haircloth-loom. |
US4479784A (en) * | 1981-03-03 | 1984-10-30 | The Singer Company | Eye line-of-sight responsive wide angle visual system |
JP2741063B2 (en) * | 1989-05-02 | 1998-04-15 | 株式会社エイ・ティ・アール通信システム研究所 | Wide-field display device |
JPH04156584A (en) * | 1990-10-19 | 1992-05-29 | Nec Kagoshima Ltd | Display device |
JPH0777665A (en) * | 1993-03-29 | 1995-03-20 | Canon Inc | Image display device and image photographing device for the same |
US5394203A (en) * | 1994-06-06 | 1995-02-28 | Delco Electronics Corporation | Retracting head up display with image position adjustment |
JPH08313843A (en) * | 1995-05-16 | 1996-11-29 | Agency Of Ind Science & Technol | Wide visual field and high resolution video presentation device in line of sight followup system |
US5956180A (en) * | 1996-12-31 | 1999-09-21 | Bass; Robert | Optical viewing system for asynchronous overlaid images |
US7872635B2 (en) | 2003-05-15 | 2011-01-18 | Optimetrics, Inc. | Foveated display eye-tracking system and method |
US7488072B2 (en) * | 2003-12-04 | 2009-02-10 | New York University | Eye tracked foveal display by controlled illumination |
JP2005202221A (en) * | 2004-01-16 | 2005-07-28 | Toshiba Corp | Display device |
US7973834B2 (en) | 2007-09-24 | 2011-07-05 | Jianwen Yang | Electro-optical foveated imaging and tracking system |
JP2010153983A (en) * | 2008-12-24 | 2010-07-08 | Panasonic Electric Works Co Ltd | Projection type video image display apparatus, and method therein |
HU0900478D0 (en) * | 2009-07-31 | 2009-09-28 | Holografika Hologrameloeallito | Method and apparatus for displaying 3d images |
US8941559B2 (en) * | 2010-09-21 | 2015-01-27 | Microsoft Corporation | Opacity filter for display device |
US9292973B2 (en) * | 2010-11-08 | 2016-03-22 | Microsoft Technology Licensing, Llc | Automatic variable virtual focus for augmented reality displays |
US9025252B2 (en) * | 2011-08-30 | 2015-05-05 | Microsoft Technology Licensing, Llc | Adjustment of a mixed reality display for inter-pupillary distance alignment |
US8638498B2 (en) * | 2012-01-04 | 2014-01-28 | David D. Bohn | Eyebox adjustment for interpupillary distance |
JP2014059432A (en) * | 2012-09-18 | 2014-04-03 | Seiko Epson Corp | Image display system |
CN107209390A (en) * | 2015-02-12 | 2017-09-26 | 谷歌公司 | The display of combination high-resolution narrow and intermediate-resolution wide field are shown |
WO2016187352A1 (en) * | 2015-05-18 | 2016-11-24 | Daqri, Llc | Threat identification system |
-
2016
- 2016-12-01 US US15/366,497 patent/US9711114B1/en active Active
-
2017
- 2017-11-27 CN CN201711209418.3A patent/CN107959837B/en active Active
- 2017-11-27 SG SG10201709781TA patent/SG10201709781TA/en unknown
- 2017-11-27 KR KR1020170159285A patent/KR101977903B1/en active Active
- 2017-11-27 EP EP17203680.8A patent/EP3330772B1/en active Active
- 2017-11-27 JP JP2017226385A patent/JP6423945B2/en active Active
-
2018
- 2018-05-17 HK HK18106453.7A patent/HK1247014B/en unknown
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11109278A (en) * | 1997-10-03 | 1999-04-23 | Minolta Co Ltd | Video display device |
Non-Patent Citations (1)
Title |
---|
"Displays : Fundamentals and Applications", 5 July 2011, CRC PRESS, ISBN: 978-1-56881-439-1, article OLIVER BIMBER ET AL: "Near-Eye Displays", pages: 439 - 504, XP055572703 * |
Also Published As
Publication number | Publication date |
---|---|
EP3330772A1 (en) | 2018-06-06 |
HK1247014B (en) | 2019-09-13 |
KR20180062947A (en) | 2018-06-11 |
CN107959837B (en) | 2018-11-16 |
JP2018109746A (en) | 2018-07-12 |
CN107959837A (en) | 2018-04-24 |
KR101977903B1 (en) | 2019-05-13 |
US9711114B1 (en) | 2017-07-18 |
JP6423945B2 (en) | 2018-11-14 |
SG10201709781TA (en) | 2018-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3330772B1 (en) | Display apparatus and method of displaying using projectors | |
EP3330771B1 (en) | Display apparatus and method of displaying using focus and context displays | |
Itoh et al. | Towards indistinguishable augmented reality: A survey on optical see-through head-mounted displays | |
EP3548955B1 (en) | Display apparatus and method of displaying using image renderers and optical combiners | |
WO2018100241A1 (en) | Gaze-tracking system and method of tracking user's gaze | |
US10602033B2 (en) | Display apparatus and method using image renderers and optical combiners | |
WO2018100240A1 (en) | Display apparatus image renderers and optical combiners | |
WO2018100239A1 (en) | Imaging system and method of producing images for display apparatus | |
WO2018100242A1 (en) | Display apparatus and method of displaying using optical combiners and context and focus image renderers | |
US20190004350A1 (en) | Display apparatus and method of displaying using polarizers and optical combiners | |
CN109997067B (en) | Display apparatus and method using portable electronic device | |
US20200285055A1 (en) | Direct retina projection apparatus and method | |
EP3548956B1 (en) | Imaging system and method of producing context and focus images | |
EP3330773B1 (en) | Display apparatus and method of displaying using context display and projectors |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | Original code: 0009012 |
| | STAA | Information on the status of an EP patent application or granted EP patent | Status: the application has been published |
| | AK | Designated contracting states | Kind code of ref document: A1; designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | AX | Request for extension of the European patent | Extension state: BA ME |
| | STAA | Information on the status of an EP patent application or granted EP patent | Status: request for examination was made |
| 20181204 | 17P | Request for examination filed | |
| | RBV | Designated contracting states (corrected) | Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | STAA | Information on the status of an EP patent application or granted EP patent | Status: examination is in progress |
| 20190405 | 17Q | First examination report despatched | |
| | GRAP | Despatch of communication of intention to grant a patent | Original code: EPIDOSNIGR1 |
| | STAA | Information on the status of an EP patent application or granted EP patent | Status: grant of patent is intended |
| 20191115 | INTG | Intention to grant announced | |
| | GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the EPO deleted | Original code: EPIDOSDIGR1 |
| | STAA | Information on the status of an EP patent application or granted EP patent | Status: examination is in progress |
| | GRAS | Grant fee paid | Original code: EPIDOSNIGR3 |
| | STAA | Information on the status of an EP patent application or granted EP patent | Status: grant of patent is intended |
| | INTC | Intention to grant announced (deleted) | |
| | GRAP | Despatch of communication of intention to grant a patent | Original code: EPIDOSNIGR1 |
| | RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: VARJO TECHNOLOGIES OY |
| 20200318 | INTG | Intention to grant announced | |
| | RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: VARJO TECHNOLOGIES OY |
| | GRAA | (Expected) grant | Original code: 0009210 |
| | STAA | Information on the status of an EP patent application or granted EP patent | Status: the patent has been granted |
| 20200210 | 111Z | Information provided on other rights and legal means of execution | AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | AK | Designated contracting states | Kind code of ref document: B1; designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | REG | Reference to a national code | GB: FG4D |
| 20200615 | REG | Reference to a national code | CH: EP; AT: REF, ref document 1277605, kind code T |
| | REG | Reference to a national code | DE: R096, ref document 602017017602 |
| | REG | Reference to a national code | LT: MG4D |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: LT, FI, SE (effective 20200603); NO (20200903); GR (20200904) |
| 20200603 | REG | Reference to a national code | NL: MP |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: RS, HR, LV (effective 20200603); BG (20200903) |
| 20200603 | REG | Reference to a national code | AT: MK05, ref document 1277605, kind code T |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: AL, NL (effective 20200603) |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: IT, ES, CZ, RO, SM, EE, AT (effective 20200603); PT (20201006) |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: SK, PL (effective 20200603); IS (20201003) |
| | REG | Reference to a national code | DE: R097, ref document 602017017602 |
| | PLBE | No opposition filed within time limit | Original code: 0009261 |
| | STAA | Information on the status of an EP patent application or granted EP patent | Status: no opposition filed within time limit |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: DK (effective 20200603) |
| 20210304 | 26N | No opposition filed | |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: SI (effective 20200603) |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: MC (effective 20200603) |
| | REG | Reference to a national code | CH: PL |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of non-payment of due fees: LU (effective 20201127) |
| 20201130 | REG | Reference to a national code | BE: MM |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of non-payment of due fees: LI, CH (effective 20201130) |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of non-payment of due fees: IE (effective 20201127) |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: TR, MT, CY (effective 20200603) |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of failure to submit a translation of the description or to pay the fee within the prescribed time limit: MK (effective 20200603) |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to EPO] | Lapse because of non-payment of due fees: BE (effective 20201130) |
| | PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | DE: payment date 20241121, year of fee payment 8 |
| | PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | GB: payment date 20241120, year of fee payment 8 |
| | PGFP | Annual fee paid to national office [announced via postgrant information from national office to EPO] | FR: payment date 20241128, year of fee payment 8 |