WO2024181153A1 - Virtual image display device, virtual image display method, program, and mobile object
- Publication number
- WO2024181153A1 (PCT/JP2024/005318)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
Definitions
- This disclosure relates to a virtual image display device, a virtual image display method, a program, and a moving object.
- Conventionally, a virtual image display device such as that described in Patent Document 1 is known.
- a virtual image display device according to one embodiment of the present disclosure is a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image. The device includes: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light whose propagation direction is regulated by the optical element toward an eye of the user and display a virtual image of the parallax image in a field of view of the user; a camera configured to image the eye of the user; and a controller. The controller detects the position of the user's eye and the position of the user's gaze point based on the imaging data output from the camera, and controls the parallax image based on the eye position and the position of the gaze point.
- a virtual image display method according to one embodiment of the present disclosure is executed by a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the device including: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light, the propagation direction of which is defined by the optical element, toward an eye of the user and display a virtual image of the parallax image in a field of view of the user; a camera configured to capture an image of the eye of the user; and a controller. The method includes: capturing an image of the user's eye; detecting the position of the user's eye and the position of the user's gaze point based on imaging data of the user's eye; and controlling the parallax image based on the eye position and the position of the gaze point.
- a program according to one embodiment of the present disclosure is executed by a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the device including: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light, the propagation direction of which is defined by the optical element, toward an eye of the user and display a virtual image of the parallax image in the field of view of the user; a camera configured to capture an image of the eye of the user; and a controller.
- the program causes the controller to: make the camera capture an image of the user's eyes; detect the position of the user's eyes and the position of the user's point of gaze based on the image data of the user's eyes; and control the parallax image based on the position of the eyes and the position of the point of gaze.
- a moving object according to one embodiment of the present disclosure includes the above-described virtual image display device.
- FIG. 1 is a diagram illustrating an example of a virtual image display device mounted on a moving object.
- FIG. 2 is a diagram showing a schematic configuration of the virtual image display device shown in FIG. 1.
- FIG. 3 is a diagram showing an example of the display panel shown in FIG. 2 as viewed from the depth direction.
- FIG. 4 is a diagram showing an example of the optical element shown in FIG. 2 as viewed from the depth direction.
- FIG. 5 is a diagram for explaining the relationship between the virtual image shown in FIG. 1 and the user's eyes.
- FIG. 6 is a diagram showing an example of sub-pixels observed by the left and right eyes of a user.
- FIG. 7 is a diagram showing another example of sub-pixels observed by the left and right eyes of a user.
- FIG. 8 is a diagram for explaining the relationship between the display position of a virtual image, the convergence angle, and the amount of parallax.
- FIG. 9 is a diagram for explaining an example of control of a virtual image in the virtual image display device shown in FIG. 1.
- FIG. 10 is a diagram for explaining another example of control of a virtual image in the virtual image display device shown in FIG. 1.
- FIG. 11 is a flowchart illustrating an operation of the virtual image display device.
- FIG. 12 is a flowchart illustrating an operation of the virtual image display device.
- FIG. 13 is a flowchart illustrating an operation of the virtual image display device.
- Patent Document 1 describes a virtual image display device that, when an obstacle is present in front of the vehicle, changes the display form of the virtual image depending on the distance between the vehicle and the obstacle, thereby preventing the virtual image from being displayed as if it is penetrating the obstacle.
- a virtual image display device 1 is mounted on a moving body 10, and allows a user (driver or operator) 12 of the moving body 10 to visually recognize a virtual image V including various information related to the moving body 10.
- the virtual image display device 1 is also called a head-up display.
- a head-up display is also called a HUD (Head Up Display).
- the various information included in the virtual image V may include, for example, the speed of the moving body 10, the engine rotation speed, the remaining amount of fuel such as gasoline, the total travel distance of the moving body 10, and the temperature of the engine cooling water.
- Part of the configuration of the virtual image display device 1 may be shared with other devices and parts provided in the moving body 10.
- The moving body in this disclosure may include, for example, vehicles, ships, and aircraft.
- Vehicles may include, for example, automobiles, industrial vehicles, railroad vehicles, residential vehicles, and fixed-wing aircraft that run on runways.
- Automobiles may include, for example, passenger cars, trucks, buses, motorcycles, and trolley buses.
- Industrial vehicles may include, for example, industrial vehicles for agriculture and construction.
- Industrial vehicles may include, for example, forklifts and golf carts.
- Industrial vehicles for agriculture may include, for example, tractors, cultivators, transplanters, binders, combines, and lawnmowers.
- Industrial vehicles for construction may include, for example, bulldozers, scrapers, excavators, cranes, dump trucks, and road rollers. Vehicles may include those that run by human power.
- Vehicle classification is not limited to the above examples.
- automobiles may include industrial vehicles that can run on roads.
- the same vehicle may be included in multiple classifications.
- Ships may include, for example, marine jets, boats, and tankers.
- the aircraft may include, for example, fixed-wing aircraft and rotary-wing aircraft.
- the moving body 10 mounting the virtual image display device 1 is a vehicle, but the moving body 10 is not limited to a vehicle and may be any of the moving bodies described above.
- the virtual image display device 1 includes a display unit 2, an optical element 5, an optical system 6, a detection unit 7, and a controller 8.
- the detection unit 7 is also referred to as a camera or an in-car camera.
- the detection unit 7 is configured to capture an image of the eye 13 of the user 12 of the virtual image display device 1.
- the user 12 may be the driver of the moving body 10.
- the detection unit 7 may capture an image of the left eye 13L and the right eye 13R of the user 12.
- the detection unit 7 may or may not capture an image of the entire face of the user 12.
- the left eye 13L and the right eye 13R may be collectively referred to as the eye 13.
- the detection unit 7 may be attached to the rearview mirror of the vehicle 10.
- the detection unit 7 may be attached to, for example, a cluster in the instrument panel.
- the detection unit 7 may be attached to a center panel.
- the detection unit 7 may be attached to a support portion of the steering wheel disposed at the center of the steering wheel.
- the detection unit 7 may be attached on the dashboard.
- the detection unit 7 does not have to be disposed in a position that directly captures an image of the eye 13 of the user 12.
- the detection unit 7 may be disposed inside the dashboard and detect the eye 13 of the user 12 reflected on the windshield.
- the detection unit 7 may be disposed inside the dashboard and detect an image reflected on the windshield.
- the detection unit 7 is configured to obtain an image of the subject and generate an image of the subject.
- the detection unit 7 includes an imaging element.
- the imaging element may include, for example, a CCD (Charge-Coupled Device) imaging element or a CMOS (Complementary Metal-Oxide-Semiconductor) imaging element.
- the detection unit 7 is positioned so that the face of the user 12 is located on the subject side.
- the detection unit 7 may be configured to capture images of the left eye 13L and the right eye 13R using two or more imaging devices.
- the detection unit 7 may be configured to detect the positions of the left eye 13L and the right eye 13R as coordinates in three-dimensional space.
- the detection unit 7 may not include a camera, but may be connected to an external camera.
- the detection unit 7 may include an input terminal that inputs a signal from the external camera.
- the external camera may be directly connected to the input terminal.
- the external camera may be indirectly connected to the input terminal via a shared network.
- the detection unit 7 that does not include a camera may include an input terminal configured to receive a video signal from the camera.
- the detection unit 7 that does not include a camera may be configured to detect the left eye and right eye from the video signal input to the input terminal.
- the detection unit 7 is configured to output imaging data generated by imaging the eye 13 to the controller 8.
- the detection unit 7 may be configured to output the imaging data to the controller 8 via a communication network such as a wired or wireless communication network or a CAN (Controller Area Network).
- the display unit 2 includes a display panel 3.
- the display panel 3 may be a transmissive display panel.
- the display unit 2 may include an illuminator 4.
- the transmissive display panel may include a liquid crystal panel.
- the transmissive display panel may have a known liquid crystal panel configuration.
- various liquid crystal panels such as IPS (In-Plane Switching) type, FFS (Fringe Field Switching) type, VA (Vertical Alignment) type, and ECB (Electrically Controlled Birefringence) type may be used.
- transmissive display panels also include MEMS (Micro Electro Mechanical Systems) shutter type display panels.
- the illuminator 4 is disposed on the side of the display panel 3 opposite the display surface 3a, and illuminates the display panel 3 in a planar manner.
- the illuminator 4 may include a light source, a light guide plate, a diffusion plate, a diffusion sheet, etc.
- the illuminator 4 emits illumination light from the light source, and homogenizes the illumination light in the planar direction of the display panel 3 by the light guide plate, the diffusion plate, the diffusion sheet, etc.
- the illuminator 4 emits the homogenized light toward the display panel 3.
- the display unit 2 includes a transmissive display panel 3 and an illuminator 4, but the display unit 2 is not limited to a transmissive display panel and may include a self-luminous display panel. If the display panel 3 is a self-luminous display panel, the display unit 2 does not need to include the illuminator 4.
- a self-luminous display panel may include a plurality of self-luminous elements. As the self-luminous elements, various self-luminous elements such as LEDs (Light Emitting Diodes), organic EL (Electro Luminescence), and inorganic EL may be used.
- the display panel 3 has a plurality of partitioned regions on an active area 31 formed in a planar shape.
- the active area 31 is configured to display a parallax image.
- the parallax image includes a left eye image (also called a first image) and a right eye image (also called a second image) having parallax with respect to the left eye image.
- Each of the plurality of partitioned regions is an area partitioned in a first direction and a second direction perpendicular to the first direction.
- the direction perpendicular to the first direction and the second direction is called a third direction.
- the first direction may be called a horizontal direction.
- the second direction may be called a vertical direction.
- the third direction may be called a depth direction.
- the first direction, the second direction, and the third direction are not limited to these.
- the first direction is represented as the x-axis direction.
- the second direction is represented as the y-axis direction.
- the third direction is represented as the z-axis direction.
- the active area 31 includes multiple subpixels arranged in a grid pattern along the first and second directions.
- Each subpixel corresponds to one of the colors R (Red), G (Green), or B (Blue), and a set of three subpixels R, G, and B can form one pixel.
- One pixel can be called one picture element.
- the horizontal direction is, for example, the direction in which multiple subpixels that form one pixel are lined up.
- the vertical direction is, for example, the direction in which multiple subpixels of the same color are lined up.
- the subpixels arranged in the active area 31 constitute a plurality of subpixel groups Pg under the control of the controller 8.
- the subpixel groups Pg are arranged repeatedly in the first direction.
- the subpixel groups Pg can be arranged at the same position in the second direction, or can be arranged shifted in the second direction.
- the subpixel groups Pg can be arranged repeatedly adjacent to each other in the second direction at positions shifted by one subpixel in the first direction.
- the subpixel groups Pg include a plurality of subpixels in a predetermined row and column.
- the active area 31 is provided with a plurality of subpixel groups Pg, each including 12 subpixels P1 to P12 arranged consecutively, one in the second direction and twelve in the first direction. In the example shown in FIG. 3, some subpixel groups Pg are labeled with symbols.
- the multiple subpixel groups Pg are the smallest units that the controller 8 controls to display an image.
- the optical element 5 may be composed of a parallax barrier or a lenticular lens.
- the optical element 5 is a parallax barrier.
- the parallax barrier 5 is formed along a plane that follows the active area 31, and is separated from the active area 31 by a predetermined distance (gap) g.
- the parallax barrier 5 may be located on the opposite side of the illuminator 4 with respect to the display panel 3.
- the parallax barrier 5 may be located on the illuminator 4 side of the display panel 3.
- the parallax barrier 5 determines the propagation direction of image light emitted from the subpixels for each of the light-transmitting regions 51, which are strip-shaped regions extending in a predetermined direction in the plane.
- the parallax barrier 5 has a plurality of attenuation regions 52 that attenuate the image light.
- the attenuation regions 52 define a light-transmitting region 51 between two adjacent attenuation regions 52.
- the light-transmitting regions 51 have a higher light transmittance than the attenuation regions 52.
- the light transmittance of the light-transmitting regions 51 may be 10 times or more, 100 times or more, or 1000 times or more that of the attenuation regions 52.
- the attenuation regions 52 have a lower light transmittance than the light-transmitting regions 51.
- the attenuation regions 52 may block the image light.
- the parallax barrier 5 may be made of a film or a plate-like member.
- the multiple light-attenuating regions 52 are made of the film or plate-like member.
- the multiple light-transmitting regions 51 may be openings provided in the film or plate-like member.
- the film may be made of resin or other materials.
- the plate-like member may be made of resin, metal, or other materials.
- the parallax barrier 5 is not limited to a film or plate-like member and may be made of other types of members.
- the parallax barrier 5 may be made of a base material having light-blocking properties.
- the parallax barrier 5 may be made of a base material containing a light-blocking additive.
- the parallax barrier 5 may be of a fixed type or an active barrier type.
- the parallax barrier 5 as an active barrier type can also be configured with a liquid crystal shutter.
- the liquid crystal shutter can control the light transmittance according to the applied voltage.
- the liquid crystal shutter is configured with multiple pixels, and can control the light transmittance of each pixel.
- the multiple light-transmitting regions 51 and multiple light-attenuating regions 52 of the liquid crystal shutter correspond to the multiple pixels of the liquid crystal shutter.
- the boundary between the multiple light-transmitting regions 51 and multiple light-attenuating regions 52 can be stepped to correspond to the shape of the multiple pixels.
- the parallax barrier 5 defines the propagation direction of the image light emitted from the multiple sub-pixels, thereby determining the area on the active area 31 that is visible to the eyes of the user.
- the area in the active area 31 that emits the image light that propagates to the user's left eye is called the left visible area (first visible area) 31aL.
- the area in the active area 31 that emits the image light that propagates to the user's right eye is called the right visible area (second visible area) 31aR.
- the barrier pitch Bp, which is the arrangement interval of the light-transmitting regions 51 in the first direction, and the gap g between the active area 31 and the parallax barrier 5 are defined so that the following equations (1) and (2) hold, using the optimum viewing distance d and the interocular distance E.
- E : d = (n × Hp / b) : g …(1)
- d : Bp = (d + g) : (2 × n × Hp / b) …(2)
- the optimal viewing distance d is the distance between the left and right eyes of the user and the parallax barrier 5.
- the direction of the line passing through the left and right eyes is horizontal.
- the interocular distance E is the distance between the left and right eyes of the user.
- the interocular distance E may be, for example, 61.1 mm to 64.4 mm, values calculated in research by the National Institute of Advanced Industrial Science and Technology.
- Hp is the horizontal length of a subpixel, as shown in Figure 3.
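For illustration, the following Python sketch solves equations (1) and (2) for the gap g and the barrier pitch Bp. This is not part of the disclosure: the function name and the sample panel values are hypothetical, and n and b are simply taken as the design constants that appear in the term n × Hp / b.

```python
def barrier_geometry(E, d, Hp, n, b):
    """Solve equations (1) and (2) for the gap g and the barrier pitch Bp.

    (1) E : d  = (n*Hp/b) : g        =>  g  = d * (n*Hp/b) / E
    (2) d : Bp = (d+g) : (2*n*Hp/b)  =>  Bp = d * (2*n*Hp/b) / (d + g)

    All lengths share one unit (e.g., millimeters).
    """
    image_pitch = n * Hp / b              # the term n*Hp/b from the equations
    g = d * image_pitch / E               # gap between active area and barrier
    Bp = d * (2 * image_pitch) / (d + g)  # arrangement interval of regions 51
    return g, Bp

# Hypothetical example: interocular distance taken from the AIST range
# cited above; the viewing distance and subpixel width are made up.
g, Bp = barrier_geometry(E=62.8, d=750.0, Hp=0.05, n=6, b=1)
print(f"g = {g:.4f} mm, Bp = {Bp:.4f} mm")
```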
- the optical system 6 may include a first optical member 6a and a second optical member 6b.
- the first optical member 6a is configured to reflect a part of the image light emitted from the active area 31 toward the second optical member 6b.
- the second optical member 6b is configured to make a part of the image light reflected by the first optical member 6a reach the eye of the user.
- the optical system 6 may function as a magnifying optical system that magnifies the image displayed on the active area 31. At least one of the first optical member 6a and the second optical member 6b may have optical power.
- the first optical member 6a may be a concave mirror having optical power.
- the windshield of the moving body 10 may also serve as the second optical member 6b.
- the number of optical members constituting the optical system 6 is not limited to two, and may be one, or may be three or more.
- the first optical member 6a and the second optical member 6b may include an optically functional surface that is a spherical shape, an aspherical shape, or a free-form shape.
- a portion of the image light emitted from the active area 31 of the display panel 3 passes through the multiple light-transmitting regions 51 and reaches the second optical member 6b via the first optical member 6a.
- a portion of the image light that reaches the second optical member 6b is reflected by the second optical member 6b and reaches the user's eye. This allows the user's eye to view a virtual image V of the image displayed in the active area 31 in front of the second optical member 6b.
- the surface on which the virtual image V is projected is called the virtual image surface S.
- the forward direction is the direction of the second optical member 6b as seen by the user.
- the forward direction is the direction in which the moving body 10 normally moves.
- the left visible area (first visible area) 31aL shown in FIG. 5 is an area of the virtual image surface S viewed by the left eye 13L of the user 12 as a result of image light that has passed through multiple light-transmitting areas of the parallax barrier 5 reaching the left eye 13L of the user 12, as described above.
- the right visible area (second visible area) 31aR is an area of the virtual image surface S viewed by the right eye 13R of the user 12 as a result of image light that has passed through multiple light-transmitting areas of the parallax barrier 5 reaching the right eye 13R of the user 12, as described above.
- when the aperture ratio of the parallax barrier 5 is 50%, the arrangement of the multiple subpixels of the virtual image V as seen by the left eye 13L and the right eye 13R of the user 12 is as shown in FIG. 6.
- at an aperture ratio of 50%, the multiple light-transmitting regions and the multiple attenuation regions of the parallax barrier 5 have the same width in the interocular direction (x-axis direction).
- the dashed line indicates the virtual image of the boundary between the multiple light-transmitting regions and the multiple light-attenuating regions of the parallax barrier 5.
- the left visible region 31aL visible from the left eye 13L and the right visible region 31aR visible from the right eye 13R are regions extending diagonally in the x and y directions located between the two-dot dashed lines.
- the right visible region 31aR is not visible from the left eye 13L.
- the left visible region 31aL is not visible from the right eye 13R.
- the left visible region 31aL includes the virtual images of the entirety of the subpixels P2 to P5 arranged in the active area and of most of the subpixels P1 and P6.
- the left eye 13L of the user 12 has difficulty viewing the virtual images of the multiple subpixels P7 to P12 arranged in the active area.
- the right visible region 31aR includes the virtual images of the entirety of the subpixels P8 to P11 arranged in the active area and of most of the subpixels P7 and P12.
- the right eye 13R of the user 12 has difficulty viewing the virtual images of the multiple subpixels P1 to P6 arranged in the active area.
- the controller 8 can display a left eye image in the multiple subpixels P1 to P6.
- the controller 8 can display a right eye image in the multiple subpixels P7 to P12.
- in this case, the left eye 13L of the user 12 mainly sees the virtual image of the left eye image VL in the left visible region 31aL, and the right eye 13R mainly sees the virtual image of the right eye image VR in the right visible region 31aR. Because the left eye image VL and the right eye image VR have parallax with respect to each other, the user 12 sees them as a stereoscopic image.
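The subpixel-to-eye mapping just described can be sketched minimally as follows, assuming the 12-subpixel group P1 to P12 with P1 to P6 carrying the left eye image and P7 to P12 the right eye image; the function name is hypothetical.

```python
def eye_for_subpixel(p: int) -> str:
    """Return which eye's image a subpixel P1..P12 displays, following
    the assignment described above (P1-P6 left, P7-P12 right)."""
    if not 1 <= p <= 12:
        raise ValueError("subpixel index must be 1..12")
    return "left" if p <= 6 else "right"

# One subpixel group Pg rendered as (subpixel, image) pairs:
group = [(f"P{p}", eye_for_subpixel(p)) for p in range(1, 13)]
print(group)
```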
- when the virtual image V is located at the appropriate viewing distance d, the virtual image V is positioned so as to intersect the virtual image surface S.
- the left eye image VL and the right eye image VR are displayed at approximately the same position as the virtual image V viewed by the user 12.
- the convergence angle at which the left eye 13L and the right eye 13R view a point on the virtual image V located at the appropriate viewing distance d is represented by the convergence angle θ0.
- the left eye image VL and the right eye image VR are displayed at positions that differ by the amount of parallax D1 on the virtual image surface S.
- the left eye image VL is an image of the virtual image V viewed from the left side at a smaller angle than when viewed from the appropriate viewing distance d.
- the right eye image VR is an image of the virtual image V viewed from the right side at a smaller angle than when viewed from the appropriate viewing distance d.
- the user 12 perceives the virtual image V as being present at the position where the direction in which the left eye 13L views the left eye image VL intersects with the direction in which the right eye 13R views the right eye image VR.
- the convergence angle at which the user 12 views a point on the virtual image V is represented by the convergence angle θ1.
- the convergence angle θ1 is smaller than the convergence angle θ0 at which a point located at the appropriate viewing distance d is viewed.
- the left eye image VL and the right eye image VR are displayed at positions that differ by the amount of parallax D2 on the virtual image surface S.
- the left eye image VL is an image of the virtual image V viewed from the left side at a larger angle than when viewed from the appropriate viewing distance d.
- the right eye image VR is an image of the virtual image V viewed from the right side at a larger angle than when viewed from the appropriate viewing distance d.
- the user 12 perceives the virtual image V as being present at the position where the direction in which the left eye 13L views the left eye image VL intersects with the direction in which the right eye 13R views the right eye image VR.
- the convergence angle at which the user 12 views a point on the virtual image V is represented by the convergence angle θ2.
- the convergence angle θ2 is greater than the convergence angle θ0 at which a point located at the appropriate viewing distance d is viewed.
- when the parallax amount D between the left eye image VL and the right eye image VR on the virtual image surface S is increased, the display position of the virtual image V moves away from the eye 13 (i.e., the display distance of the virtual image V increases).
- when the parallax amount D is decreased, the display position of the virtual image V moves closer to the eye 13 (i.e., the display distance of the virtual image V decreases).
- the parallax amount D can take positive or negative values.
- the display position may be defined with the right direction in FIG. 8 as the positive direction.
- the origin R of the display position may be set arbitrarily, but may be, for example, within the virtual image surface S of the virtual image V located at the suitable viewing distance d.
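The relationship between the parallax amount D and the perceived distance can be made concrete with a simple pinhole model: for eyes separated by E viewing a virtual image surface at the suitable viewing distance d, similar triangles give D = E × (1 − d/L) for a point perceived at distance L. The sketch below, including its sign convention (D > 0 for points farther than d), is an assumption consistent with the behavior described above, not a formula taken from the disclosure; the default values are illustrative.

```python
import math

E_MM = 62.8         # interocular distance, from the AIST range cited above
D_SUIT_MM = 7500.0  # suitable viewing distance d; illustrative value

def display_distance(parallax_mm, E=E_MM, d=D_SUIT_MM):
    """Perceived distance L of the virtual image for parallax amount D.
    From D = E * (1 - d/L):  L = d / (1 - D/E).  D = 0 gives L = d;
    D > 0 moves the image farther, D < 0 closer, matching the text."""
    return d / (1.0 - parallax_mm / E)

def convergence_angle(L, E=E_MM):
    """Convergence angle (radians) subtended by the eyes at distance L;
    theta0 corresponds to L = d, theta1 to L > d, theta2 to L < d."""
    return 2.0 * math.atan(E / (2.0 * L))

print(convergence_angle(display_distance(0.0)))   # theta0
print(convergence_angle(display_distance(10.0)))  # farther point: smaller angle
```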
- the controller 8 is connected to each component of the virtual image display device 1 and controls each component.
- the controller 8 is configured as a processor, for example.
- the controller 8 may include one or more processors.
- the processor may include a general-purpose processor that loads a specific program to execute a specific function, and a dedicated processor specialized for a specific process.
- the dedicated processor may include an application specific integrated circuit (ASIC: Application Specific Integrated Circuit).
- the processor may include a programmable logic device (PLD: Programmable Logic Device).
- the PLD may include an FPGA (Field-Programmable Gate Array).
- the controller 8 may be either a SoC (System-on-a-Chip) in which one or more processors work together, or a SiP (System In a Package).
- the controller 8 includes a memory unit 81, and may store various information, programs for operating each component of the virtual image display device 1, etc. in the memory unit 81.
- the memory unit 81 may be configured of, for example, a semiconductor memory, etc.
- the memory unit 81 may function as a work memory for the controller 8.
- the controller 8 detects the position of the eye 13 of the user 12 and the position of the gaze point of the user 12 (hereinafter referred to as gaze point P) based on the imaging data output from the in-vehicle camera 7.
- the position of the eye 13 may be the midpoint of the line segment connecting the positions of the left eye 13L and the right eye 13R, or may be the position of the dominant eye of the user 12.
- the dominant eye of the user 12 may be set in advance by the user 12, or may be input to the controller 8 from outside.
- the gaze point P is the position of the intersection of the lines of sight of the left eye 13L and the right eye 13R in a virtual plane containing those lines of sight.
- the line of sight is also called the visual axis.
- the line of sight may be regarded as a virtual straight line passing through the center of the pupil, which opens in the iris and faces outward, and the fovea centralis, the region of highest visual acuity, in each of the left eye 13L and the right eye 13R.
- the controller 8 controls the parallax image to be displayed on the display panel 3 based on the position of the eye 13 and the position of the gaze point P. This allows the gaze point P of the user 12 to be approximately aligned with the horizontal and vertical display positions of the virtual image V. As a result, the risk of the user 12 experiencing motion sickness or being unable to intuitively understand the surrounding situation can be reduced.
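One way to obtain the gaze point P and the gaze distance L1 from the camera data is to intersect the two line-of-sight rays; since the rays are generally skew in 3D, the midpoint of their shortest connecting segment can serve as P. This is a minimal sketch under that assumption, not the detection algorithm of the disclosure.

```python
import numpy as np

def gaze_point_and_distance(eye_l, dir_l, eye_r, dir_r):
    """Estimate the gaze point P as the midpoint of the shortest segment
    between the left and right line-of-sight rays, and return (P, L1),
    where L1 is measured from the midpoint of the two eye positions."""
    p1, d1 = np.asarray(eye_l, float), np.asarray(dir_l, float)
    p2, d2 = np.asarray(eye_r, float), np.asarray(dir_r, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # ~0 when the rays are parallel
    if abs(denom) < 1e-9:
        raise ValueError("lines of sight are (nearly) parallel")
    t = (b * e - c * d) / denom    # parameter along the left ray
    s = (a * e - b * d) / denom    # parameter along the right ray
    P = 0.5 * ((p1 + t * d1) + (p2 + s * d2))
    eye_mid = 0.5 * (p1 + p2)      # position of the eye 13 as defined above
    return P, float(np.linalg.norm(P - eye_mid))
```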
- the controller 8 may control the display form of the parallax image based on the position of the eye 13 and the position of the gaze point P.
- the display form of the parallax image may include the parallax amount, brightness, color, transparency, size, shape, etc. of the parallax image.
- the controller 8 may hide the virtual image V or display the virtual image V as a frame based on the position of the eye 13 and the position of the gaze point P. Displaying the virtual image V as a frame is also referred to as displaying the virtual image V as a frame only, and means that information represented by letters or numbers is hidden in the virtual image V consisting of letters, numbers, and figures.
- the controller 8 may calculate a gaze distance (hereinafter also referred to as a first distance) L1 between the position of the eye 13 and the position of the gaze point P.
- the controller 8 may control the parallax image based on the first distance L1. In this case, it is possible to adjust the display position of the virtual image V according to the gaze distance L1. As a result, it is possible to further reduce the risk that the user 12 will suffer from visually-induced motion sickness or that the user 12 will be unable to intuitively understand the surrounding situation.
- the controller 8 may calculate a display distance (hereinafter also referred to as the second distance) L2 between the position of the eye 13 and the display position of the virtual image V.
- the controller 8 may compare the first distance L1 with the second distance L2, and if the second distance L2 differs from the first distance L1, control the amount of parallax of the parallax image so that the second distance L2 matches the first distance L1, as shown in FIG. 9. In this case, it is possible to adjust the display distance L2 of the virtual image V according to the gaze distance L1. As a result, the risk that the user 12 will experience visually-induced motion sickness or will be unable to intuitively understand the surrounding situation can be further reduced.
- when changing the parallax amount D from a first parallax amount to a second parallax amount, the controller 8 may change it from the first parallax amount to the second parallax amount all at once, or may change it stepwise.
- when the parallax amount D is changed all at once, the time during which the gaze distance L1 and the display distance L2 do not match can be shortened.
- when the parallax amount D is changed stepwise, the discomfort felt by the user 12 due to the change in the display distance L2 can be reduced.
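The all-at-once versus stepwise change can be pictured as follows; this sketch assumes simple linear interpolation over a fixed number of steps, which the disclosure does not specify.

```python
def parallax_schedule(d_first, d_second, steps):
    """Intermediate parallax amounts from the first to the second value.
    steps=1 reproduces the all-at-once change; larger values give the
    stepwise change that reduces the user's discomfort."""
    return [d_first + (d_second - d_first) * k / steps
            for k in range(1, steps + 1)]

print(parallax_schedule(2.0, 8.0, 1))  # [8.0] - all at once
print(parallax_schedule(2.0, 8.0, 3))  # [4.0, 6.0, 8.0] - stepwise
```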
- the controller 8 may determine whether the eye 13 of the user 12 is included in the eye box 14 (see FIG. 1).
- the eye box 14 is the range of eye positions from which the user 12 observes a virtual image, and is an area in real space in which the eye 13 of the user 12 is assumed to be present, taking into account, for example, the physique, posture, and changes in posture of the user 12.
- the shape of the eye box 14 is arbitrary.
- the eye box 14 may be a planar area or a three-dimensional area. If the eye 13 of the user 12 is not included in the eye box 14, the controller 8 does not need to change the display form of the parallax image. In this case, the processing load on the controller 8 and the display unit 2 can be reduced.
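The containment test for the eye box 14 reduces to a point-in-region check; the sketch below assumes an axis-aligned cuboid for simplicity, although the text notes that the shape of the eye box is arbitrary.

```python
def eye_in_eyebox(eye, box_min, box_max):
    """True if the 3D eye position lies inside an axis-aligned eye box.
    If False, the controller can skip changing the parallax image,
    reducing the processing load as described above."""
    return all(lo <= v <= hi for v, lo, hi in zip(eye, box_min, box_max))

print(eye_in_eyebox((0.1, 1.2, 0.5), (-0.2, 1.0, 0.3), (0.2, 1.4, 0.7)))
```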
- the virtual image display device 1 includes a sensor device 11.
- the sensor device 11 is configured to acquire at least one of information (situation) of the moving body 10 and environmental information (ambient environment) surrounding the moving body 10.
- the sensor device 11 may include an exterior camera 11a, a GPS (Global Positioning System) receiver, communication equipment, a laser radar, a millimeter wave radar, sensor equipment, a car navigation device, a drive recorder, etc.
- the exterior camera 11a can capture images of the scenery around the moving body 10, particularly the environment ahead of the moving body 10, and obtain information ahead of the moving body 10 (e.g., information about other moving bodies, pedestrians, animals, roadside installations such as guardrails, buildings, etc., that are in front of the moving body 10).
- the exterior camera 11a may be configured to include, for example, a CCD imaging element or a CMOS imaging element.
- the exterior camera 11a may be located at the front end of the moving body 10, inside the passenger compartment of the moving body 10, or in the engine room.
- the exterior camera 11a may have a lens with a wide angle of view, for example, a wide-angle lens, a fisheye lens, etc.
- the GPS receiver can obtain information regarding the position (latitude and longitude) and speed of the moving body 10.
- the communication device may include an external communication device capable of communicating with an external communication network.
- the communication device can obtain information regarding the position and speed of the moving body 10, the weather around the moving body 10, etc. from the external communication network via the external communication device.
- the laser radar and millimeter wave radar can obtain information ahead of the moving body 10.
- the sensor device may include a luminance meter, an ultrasonic sensor, etc.
- the sensor device may include an ECU (Electronic Control Unit) of the moving body 10, etc.
- the sensor device can obtain information ahead of the moving body 10, information regarding the luminance around the moving body 10, information regarding whether the headlights equipped on the moving body 10 are turned on, etc.
- the car navigation device can obtain information regarding the position and speed of the moving body 10.
- the drive recorder can obtain information ahead of the moving body 10 and information regarding the speed of the moving body 10.
- the controller 8 includes an information acquisition unit 8a, an information analysis unit 8b, a display determination unit 8c, and a display instruction unit 8d.
- the information acquisition unit 8a acquires information on the moving body 10 and information on the surroundings of the moving body 10 from the sensor device 11, and outputs the information on the moving body 10 and information on the surroundings of the moving body 10 to the information analysis unit 8b.
- the information acquisition unit 8a may acquire information on the moving body 10 and information on the surroundings of the moving body 10 at predetermined time intervals (for example, 0.008 seconds to 1 second) and output the information to the information analysis unit 8b.
- the information on the moving body 10 may include information on the position and speed of the moving body 10, and information on whether the headlights are on or not.
- the information on the moving body 10 may include information on the user 12, such as the position of the eye 13 of the user 12, the gaze point P, the viewpoint, the line of sight, and the gaze distance L1.
- the information on the surroundings of the moving body 10 may include information ahead of the moving body 10, information on the brightness around the moving body 10, and information on the weather around the moving body 10.
- the information analysis unit 8b analyzes information about the moving body 10 and information about the surroundings of the moving body 10, and judges the status of the moving body 10 and the status around the moving body 10.
- the information analysis unit 8b outputs the status of the moving body 10 and the status around the moving body 10 to the display determination unit 8c.
- the information analysis unit 8b may judge the status of the moving body 10 and the status around the moving body 10 at a predetermined time interval (e.g., 0.008 seconds to 1 second), and output it to the display determination unit 8c.
- the information analysis unit 8b may be configured to store the previously judged status of the moving body 10 and the status around the moving body 10.
- when the information analysis unit 8b newly judges the status of the moving body 10 and the status around the moving body 10, it may be configured to compare the newly judged statuses with the previously judged statuses.
- the display determination unit 8c sets and stores the display form (parallax amount D, size, shape, brightness, etc. of the parallax image) to be displayed on the display unit 2 based on the status of the moving body 10 and the status around the moving body 10, and outputs it to the display instruction unit 8d.
- when the display determination unit 8c acquires the display form of the parallax image from the information analysis unit 8b, it may discard the display form of the parallax image previously acquired and stored in the display determination unit 8c.
- the display instruction unit 8d instructs the display unit 2 to display the parallax image in the display form set by the display determination unit 8c.
- the situation around the moving body 10 may include the presence or absence of an obstacle O.
- the obstacle O may be an object that is present in front of the moving body 10 at a distance shorter than the display distance calculated from the parallax amount D.
- the obstacle O may include, for example, other moving bodies that are present around the moving body 10, particularly in front of the moving body 10, roadside installations such as guardrails, buildings, etc. If an obstacle O is present, the information analysis unit 8b may detect the relative position and relative speed of the obstacle O with respect to the moving body 10.
- the situation around the moving body 10 may include the background luminance in the field of view of the user 12, particularly the background luminance in the line of sight of the user 12.
- the controller 8 controls the parallax image (i.e., the virtual image V) as described below.
- the controller 8 detects the position of the eye 13 of the user 12 and the position of the gaze point P at every predetermined time interval ⁇ t, and calculates the gaze distance L1.
- the time interval ⁇ t may be, for example, 0.008 seconds to 1 second.
- the controller 8 compares the gaze distance L1 at the current time with the gaze distance L1 at a time ⁇ t before the current time, and calculates the amount of change in the gaze distance L1.
- the controller 8 compares the amount of change in the gaze distance L1 with a threshold (also called the first threshold) T1, and if the amount of change in the gaze distance L1 is equal to or greater than the first threshold T1, the controller 8 may change the parallax amount D of the parallax image so that the display distance L2 matches the gaze distance L1, as shown in FIG. 9. If the amount of change in the gaze distance L1 is less than the first threshold T1, the controller 8 does not need to change the parallax amount of the parallax image. This makes it possible to reduce the processing load on the controller 8 and the display unit 2 while suppressing visually induced motion sickness of the user 12.
- the first threshold T1 may be, for example, a distance in the range of 5 m or more and less than 10 m, 10 m or more and less than 15 m, 15 m or more and less than 20 m, or 20 m or more, or may be some other distance.
- the controller 8 may change the parallax amount D of the parallax image all at once so that the display distance L2 matches the viewing distance L1, or may change the parallax amount D of the parallax image in stages.
- the parallax amount D of the parallax image is changed all at once, the time during which the viewing distance L1 and the display distance L2 do not match can be shortened.
- the parallax amount D of the parallax image is changed in stages, the discomfort felt by the user 12 due to the change in the display distance L2 can be reduced.
- when the controller 8 changes the parallax amount D in stages, it may change the parallax amount D so that the display distance L2 matches the gaze distance L1 over about 1 to 2 seconds, or over about 3 seconds.
- the controller 8 may change the parallax amount D so that the display distance L2 matches the gaze distance L1 in a shorter time the faster the speed of the moving body 10 is.
- when the controller 8 changes the parallax amount D in stages, it may also change the size, color, shape, etc. of the virtual image V viewed by the user 12 in stages.
- the controller 8 may control the display unit 2 not to display the parallax image when the change in the gaze distance L1 is equal to or greater than the threshold (also referred to as the second threshold) T2 and the speed of the moving body 10 is equal to or greater than the threshold (also referred to as the third threshold) T3.
- the virtual image display device 1 may turn off the virtual image V when the change in the gaze distance L1 is equal to or greater than the second threshold T2 and the speed of the moving body 10 is equal to or greater than the third threshold T3.
- the attention of the user 12 can be directed to the obstacle O, thereby improving the safety of driving.
- the second threshold T2 may be, for example, 30 m, but is not limited thereto.
- the second threshold T2 may be a distance in the range of 5 m or more and less than 10 m, 10 m or more and less than 15 m, 15 m or more and less than 20 m, or 20 m or more, or may be another distance.
- the second threshold T2 may be the same as the first threshold T1 or may be different from the first threshold T1.
- the third threshold T3 may be a speed of 80 km/h or more, but is not limited to this.
- the third threshold T3 may also be, for example, a speed of 60 km/h or more and less than 80 km/h, 50 km/h or more and less than 60 km/h, 40 km/h or more and less than 50 km/h, 30 km/h or more and less than 40 km/h, 10 km/h or more and less than 30 km/h, or less than 10 km/h.
- the controller 8 may control the display unit 2 to display the virtual image V as a frame when the change in the gaze distance L1 is equal to or greater than the second threshold T2 and the speed of the moving body 10 is equal to or greater than the third threshold T3. In this case, too, for example, when an obstacle O appears ahead of the moving body 10, the attention of the user 12 can be directed to the obstacle O, thereby improving driving safety.
- instead of controlling the display unit 2 to hide the virtual image V or to display it as a frame only, the controller 8 may reduce the brightness of the virtual image V or increase the transparency of the virtual image V. In this case, it is possible to improve driving safety while continuing to provide the user 12 with various information related to the moving body 10.
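Gathering the threshold logic above into one place, a decision routine might look like the following. The enum values and the default thresholds (T1 = 10 m, T2 = 30 m, T3 = 80 km/h, matching the examples in the text) are illustrative assumptions, and the choice among hiding, frame-only display, and dimming is left as a parameter since the text allows all three.

```python
from enum import Enum, auto

class DisplayForm(Enum):
    UNCHANGED = auto()   # change below T1: leave the parallax image as is
    MATCH_GAZE = auto()  # adjust parallax so L2 matches L1
    HIDDEN = auto()      # hide the virtual image V
    FRAME_ONLY = auto()  # show V as a frame only
    DIMMED = auto()      # reduce brightness / raise transparency

def decide_display_form(delta_L1_m, speed_kmh,
                        T1=10.0, T2=30.0, T3=80.0,
                        prefer=DisplayForm.HIDDEN):
    """Choose a display form from the change in gaze distance L1 and the
    vehicle speed; 'prefer' selects among the alternatives allowed for
    the large-change, high-speed case."""
    if delta_L1_m >= T2 and speed_kmh >= T3:
        return prefer                      # HIDDEN, FRAME_ONLY, or DIMMED
    if delta_L1_m >= T1:
        return DisplayForm.MATCH_GAZE
    return DisplayForm.UNCHANGED

print(decide_display_form(35.0, 90.0))  # DisplayForm.HIDDEN
```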
- when the change in the gaze distance L1 is equal to or greater than a fourth threshold T4 and the decrease in the background luminance in the line of sight of the user 12 is equal to or greater than a fifth threshold T5, the controller 8 may change the parallax amount so that the display distance L2 coincides with the gaze distance L1 and increase the luminance of the virtual image V. This can suppress visually induced motion sickness of the user 12. Also, as shown in FIG. 10, when the gaze point P of the user 12 moves from a bright place BP to a dark place DP, the visibility of the virtual image V can be increased, and as a result, the time during which the user 12 has difficulty viewing the virtual image V can be shortened.
- the controller 8 may determine the increase amount of the luminance of the virtual image V based on the speed of the moving object 10.
- the controller 8 may increase the increase amount of the luminance of the virtual image V as the speed of the moving object 10 increases.
- the fourth threshold T4 may be, for example, a distance in the range of 5 m or more and less than 10 m, 10 m or more and less than 15 m, 15 m or more and less than 20 m, or 20 m or more, or may be another distance.
- the fourth threshold T4 may be the same as the first threshold T1 or the second threshold T2, or may be different from the first threshold T1 and the second threshold T2.
- the fifth threshold T5 may be, for example, a luminance in the range of 1000 cd/m² or more and less than 2000 cd/m², 2000 cd/m² or more and less than 3000 cd/m², or 3000 cd/m² or more, or may be another luminance.
- when the change in the gaze distance L1 is equal to or greater than a sixth threshold T6 and the increase in the background luminance in the line of sight of the user 12 is equal to or greater than a seventh threshold T7, the controller 8 may change the parallax amount so that the display distance L2 coincides with the gaze distance L1 and may reduce the luminance of the virtual image V. In this case, it is possible to suppress visually induced motion sickness of the user 12, for example when the gaze point P of the user 12 moves from a dark place DP to a bright place BP (see FIG. 10).
- the controller 8 may determine the amount of reduction in the luminance of the virtual image V based on the speed of the moving object 10.
- the controller 8 may increase the amount of reduction in the luminance of the virtual image V as the speed of the moving object 10 increases.
- the sixth threshold T6 may be, for example, a distance in the range of 5 m or more and less than 10 m, 10 m or more and less than 15 m, 15 m or more and less than 20 m, or 20 m or more, or may be another distance.
- the sixth threshold T6 may be the same as the fourth threshold T4, or may be different from the fourth threshold T4.
- the seventh threshold T7 may be, for example, a luminance in the range of 1000 cd/m² or more and less than 2000 cd/m², 2000 cd/m² or more and less than 3000 cd/m², or 3000 cd/m² or more, or may be another luminance.
- the seventh threshold T7 may be the same as the fifth threshold T5, or may be different from the fifth threshold T5.
- the memory unit 81 of the controller 8 may store a table showing the correspondence between the speed of the moving body 10 and the amount of change (decrease or increase) in the background luminance in the line-of-sight direction of the user 12, on the one hand, and the amount of change (increase or decrease) in the luminance of the virtual image V, on the other.
- the controller 8 may change the luminance of the virtual image V based on the table stored in the memory unit 81.
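Such a table can be stored as a simple banded lookup keyed by speed and by the change in background luminance. The band edges and all numeric entries below are placeholders, since the disclosure gives no concrete values.

```python
import bisect

# Band upper bounds and a placeholder table: rows are speed bands (km/h),
# columns are bands of the decrease in background luminance (cd/m^2),
# entries are luminance increases to apply to the virtual image V.
SPEED_EDGES = [30.0, 60.0, 100.0]          # <30, 30-60, 60-100, >=100
DELTA_BG_EDGES = [1000.0, 2000.0, 3000.0]  # cd/m^2 decrease bands
TABLE = [
    [10, 20, 30, 40],   # slowest speed band
    [15, 30, 45, 60],
    [20, 40, 60, 80],
    [25, 50, 75, 100],  # fastest speed band: larger increases, as in the text
]

def luminance_increase(speed_kmh, bg_decrease):
    """Look up the luminance change for V from the speed of the moving
    body and the decrease in background luminance along the line of sight."""
    row = bisect.bisect_right(SPEED_EDGES, speed_kmh)
    col = bisect.bisect_right(DELTA_BG_EDGES, bg_decrease)
    return TABLE[row][col]

print(luminance_increase(70.0, 1500.0))  # -> 40 (placeholder value)
```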
- the controller 8 may control the virtual image V based on the lighting state of the headlights of the moving body 10.
- the lighting state of the headlights includes whether the headlights are on or off and the light distribution state of the headlights (high beam or low beam).
- the controller 8 may control the brightness of the virtual image V based on the lighting state of the headlights. For example, when the headlights are on, the controller 8 may increase the brightness of the virtual image V, in which case the visibility of the virtual image V can be improved.
- the controller 8 may control the brightness of the virtual image V based on the lighting state of the headlights and the background brightness in the line of sight of the user 12. In this case, it is possible to further improve the visibility of the virtual image V.
- in the above description, the controller 8 hides the virtual image V, displays it as a frame only, or changes the display position, size, color, brightness, and transparency of the virtual image V based on the gaze distance L1, the speed of the moving body 10, and the background brightness in the line of sight of the user 12; however, the control of the virtual image V by the controller 8 is not limited to this.
- the controller 8 may turn the virtual image V, which is a stereoscopic image (three-dimensional image), into a planar image (two-dimensional image), based on information about the moving body 10 and information about the surroundings of the moving body 10.
- the control of the display unit 2 by the controller 8 will be described below with reference to FIGS. 11 to 13.
- Figures 11 to 13 are flowcharts showing the control of the display unit 2 by the controller 8.
- in the flowcharts, step is abbreviated as [S], [YES] indicates an affirmative result of a judgment, and [NO] indicates a negative result.
- the flowchart in FIG. 11 starts when the moving body 10 starts.
- [S1] information on the moving body 10 and information on the surroundings of the moving body 10 are acquired from the sensor device 11.
- [S2] the information acquired from the sensor device 11 is analyzed to determine the status of the moving body 10 and the status of the surroundings of the moving body 10.
- [S5] it is determined whether the gaze distance L1 of the user 12 has been determined. If the gaze distance L1 has been determined [Yes], proceed to [S6]. If the gaze distance L1 has not been determined [No], proceed to [S7], and after determining the gaze distance L1 in [S7], proceed to [S6].
- in this comparison, the threshold value may be the smallest of the threshold values (e.g., the first threshold T1, the second threshold T2, and the fourth threshold T4) that are set for the change in the gaze distance L1. If the change in the gaze distance L1 is greater than or equal to this threshold [Yes], the process proceeds to [S10]; if not [No], the flowchart ends.
- in [S10], the change in the gaze distance L1 is compared with the thresholds set for it (e.g., the first threshold T1, the second threshold T2, and the fourth threshold T4), a change pattern for the parallax image (i.e., the virtual image V) is selected based on the comparison result, and the process proceeds to [S11] of the flowchart shown in FIG. 12.
- the change pattern may be either a change pattern according to information about the moving body 10 or a change pattern according to information ahead of the moving body 10.
- [S9] it is determined whether there is a change in the situation at the gaze point of the user 12, and a change pattern for the parallax image (i.e., virtual image V) is selected.
- the change in the situation at the gaze point may be, for example, a change in the background luminance in the line of sight of the user 12. If there is a change in the situation at the gaze point [Yes], proceed to [S11] in the flowchart shown in Figure 12, and if there is no change in the situation at the gaze point [No], end the flowchart.
- the speed of the moving body 10 is compared with a threshold value set for the speed of the moving body 10 (e.g., the third threshold T3), a change pattern for the parallax image is selected based on the comparison result, and the process proceeds to [S14].
- in [S14], the parameters of the parallax image (virtual image V) to be changed and the amounts of change of the parameters are determined based on the change patterns selected in [S10], [S12], and [S13], and the process proceeds to [S15] of the flowchart shown in FIG. 13.
- the parameters of the parallax image (virtual image V) may include the amount of parallax D, the time for changing the amount of parallax D, and parameters related to the brightness, color, transparency, size, and shape of the virtual image V.
- the time for changing the amount of parallax D is the time from the start to the end of changing the amount of parallax D.
- the parameters of the parallax image (virtual image V) may include a flag for hiding the virtual image V, and a flag for displaying the virtual image V in a frame.
- one or more objects in the parallax image for which the parallax amount D is to be changed are selected. Then, in [S17], the amount of change in the parallax amount D is calculated based on the gaze distance L1 of the user 12. In [S18], the parallax amount D stored in the display determination section 8c is rewritten.
- the display unit 2 is instructed to change the parallax image displayed on the display panel 3 based on the parallax image parameters and the amount of change of the parameters determined in [S14].
- a process (delay process) is performed to change the parallax amount D in stages rather than all at once.
- it is determined whether or not all changes to the parallax image parameters have been completed. If all changes have been completed [Yes], the flowchart ends; if not all changes have been completed [No], the process returns to [S15]. A minimal code sketch of this decision flow is given below.
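To make the flow above concrete, here is a minimal, self-contained Python sketch. The threshold values, pattern names, and parameter fields are illustrative assumptions; the patent describes the flowcharts but provides no source code.

```python
# Illustrative sketch of the decision steps of FIGS. 11-13. Step labels in
# comments follow the description above; every concrete value is an
# assumption for demonstration only.

T1, T2, T4 = 0.5, 1.0, 2.0   # hypothetical thresholds for the change in L1 [m]

def select_change_pattern(change_in_l1: float) -> str:
    # [S10]: compare the change in gaze distance L1 with the thresholds
    # and select a change pattern for the parallax image (virtual image V).
    if change_in_l1 < min(T1, T2, T4):
        return "none"                      # below the smallest threshold: end
    if change_in_l1 >= T4:
        return "hide_or_frame"             # e.g., hide V or frame-only display
    return "move_display_distance"

def decide_parameters(pattern: str) -> dict:
    # [S14]: decide which parameters of the parallax image to change and by
    # how much (parallax amount D, change time, brightness, and so on).
    if pattern == "hide_or_frame":
        return {"hide": True}
    if pattern == "move_display_distance":
        return {"delta_D": 0.01, "change_time_s": 0.5}
    return {}

print(decide_parameters(select_change_pattern(1.2)))
# -> {'delta_D': 0.01, 'change_time_s': 0.5}
```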
- the above-described embodiment is not limited to implementation as the virtual image display device 1.
- the above-described embodiment may be implemented as a virtual image display method using the virtual image display device 1, or may be implemented as a program for controlling the virtual image display device 1.
- each component can be rearranged so as not to cause logical inconsistencies, and multiple components can be combined into one or divided.
- references such as “first” and “second” in this disclosure are identifiers for distinguishing the configurations.
- Configurations distinguished by descriptions such as “first” and “second” in this disclosure may have their numbers exchanged.
- the first optical member may exchange identifiers “first” and “second” with the second optical member.
- the exchange of identifiers is performed simultaneously.
- the configurations remain distinguished even after the exchange of identifiers.
- Identifiers may be deleted.
- a configuration from which an identifier has been deleted is distinguished by a reference sign. The mere use of identifiers such as “first” and “second” in this disclosure should not be used as a basis for interpreting the order of the configurations or for assuming the existence of an identifier with a smaller number.
- the x-axis, y-axis, and z-axis are provided for convenience of explanation and may be interchanged.
- the configurations according to this disclosure have been described using an orthogonal coordinate system consisting of the x-axis, y-axis, and z-axis.
- the positional relationship of each configuration according to this disclosure is not limited to being orthogonal.
- This disclosure can be implemented in the following configurations (1) to (13).
- (1) A virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, comprising: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light, the propagation direction of which is defined by the optical element, toward an eye of the user and display a virtual image of the parallax image in a field of view of the user; a camera configured to image the eye of the user; and a controller, wherein the controller is configured to detect the position of the user's eye and the position of the user's gaze point based on imaging data output from the camera, and to control the parallax image based on the position of the eye and the position of the gaze point.
- (5) The virtual image display device according to any one of the above configurations (1) to (4), further comprising a sensor device configured to acquire a status of the moving body, wherein the controller is configured to control the parallax image based on the status of the moving body acquired by the sensor device.
- (6) The virtual image display device according to the above configuration (5), wherein the sensor device is configured to acquire a speed of the moving body, and the controller is configured to control the parallax image based on the speed of the moving body.
- (7) The virtual image display device according to the above configuration (5), wherein the sensor device is configured to acquire a lighting state of a headlight of the moving body, and the controller is configured to control the parallax image based on the lighting state of the headlight.
- (8) The virtual image display device according to any one of the above configurations (1) to (4), further comprising a sensor device configured to acquire a surrounding environment of the moving body, wherein the controller is configured to control the parallax image based on the surrounding environment of the moving body acquired by the sensor device.
- (9) The virtual image display device according to the above configuration (8), wherein the sensor device is configured to acquire a background luminance of the user's field of view, and the controller is configured to control the parallax image based on the background luminance in the user's field of view.
- A virtual image display method executed by a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the virtual image display device including: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light, the propagation direction of which is defined by the optical element, toward the user's eye and display a virtual image of the parallax image in the user's field of view; a camera configured to capture an image of the user's eye; and a controller, the virtual image display method comprising: capturing an image of the user's eye; detecting the position of the user's eye and the position of the user's gaze point based on imaging data of the user's eye; and controlling the parallax image based on the position of the eye and the position of the gaze point.
- A program executed by a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the virtual image display device including: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light, the propagation direction of which is defined by the optical element, toward the user's eye and display a virtual image of the parallax image in the user's field of view; a camera configured to capture an image of the user's eye; and a controller, the program causing the controller to: cause the camera to capture an image of the user's eye; detect the position of the user's eye and the position of the user's gaze point based on imaging data of the user's eye; and control the parallax image based on the position of the eye and the position of the gaze point.
- 1 Virtual image display device, 2 Display unit, 3 Display panel, 3a Display surface, 4 Illuminator, 5 Optical element (parallax barrier), 6 Optical system, 6a First optical member, 6b Second optical member, 7 Detection unit (in-vehicle camera), 8 Controller, 8a Information acquisition section, 8b Information analysis section, 8c Display determination section, 8d Display instruction section, 10 Moving body, 11 Sensor device, 12 User, 13 Eye, 13L Left eye (first eye), 13R Right eye (second eye), 14 Eye box, 31 Active area, 31aL Left visible area, 31aR Right visible area, 51 Light-transmitting area, 52 Light-reducing area, O Obstacle, P Point of gaze, S Virtual image surface, V Virtual image, VL Left eye image, VR Right eye image, d Suitable viewing distance, g Gap
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
A virtual image display device according to the present disclosure includes a display unit, an optical element, an optical system, a camera, and a controller. The display unit displays a parallax image. The optical element defines the propagation direction of image light of the parallax image. The optical system propagates the image light, the propagation direction of which is defined by the optical element, toward the eyes of a user, and displays a virtual image of the parallax image in the field of view of the user. The camera captures images of the eyes of the user. The controller detects the positions of the eyes of the user and the position of a gaze point of the user on the basis of captured image data output from the camera, and controls the parallax image on the basis of the positions of the eyes and the position of the gaze point.
Description
This disclosure relates to a virtual image display device, a virtual image display method, a program, and a moving object.
A virtual image display device, such as that described in Patent Document 1, is known.
A virtual image display device according to an embodiment of the present disclosure is a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, comprising:
a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image;
an optical element configured to define a propagation direction of image light of the parallax image;
an optical system configured to propagate image light whose propagation direction is regulated by the optical element toward an eye of the user and display a virtual image of the parallax image in a field of view of the user;
a camera configured to image the eye of the user; and
a controller, wherein the controller is configured to detect the position of the user's eyes and the position of the user's gaze point based on the imaging data output from the camera, and to control the parallax image based on the position of the eyes and the position of the gaze point.
A virtual image display method according to an embodiment of the present disclosure is executed by a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the virtual image display device including: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light, the propagation direction of which is defined by the optical element, toward an eye of the user and display a virtual image of the parallax image in a field of view of the user; a camera configured to capture an image of the eye of the user; and a controller. The virtual image display method includes: capturing an image of the user's eye; detecting the position of the user's eye and the position of the user's gaze point based on imaging data of the user's eye; and controlling the parallax image based on the position of the eye and the position of the gaze point.
A program according to an embodiment of the present disclosure is executed by a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the virtual image display device including: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light, the propagation direction of which is defined by the optical element, toward an eye of the user and display a virtual image of the parallax image in the field of view of the user; a camera configured to capture an image of the eye of the user; and a controller. The program causes the controller to cause the camera to capture an image of the user's eye, to detect the position of the user's eye and the position of the user's gaze point based on the imaging data of the user's eye, and to control the parallax image based on the position of the eye and the position of the gaze point.
A moving object according to one embodiment of the present disclosure includes the above-described virtual image display device.
The objects, features and advantages of the present disclosure will become more apparent from the following detailed description and drawings.
FIG. 1 is a diagram illustrating an example of a virtual image display device mounted on a moving body.
FIG. 2 is a diagram showing a schematic configuration of the virtual image display device shown in FIG. 1.
FIG. 3 is a diagram showing an example of the display panel shown in FIG. 2 as viewed from the depth direction.
FIG. 4 is a diagram showing an example of the optical element shown in FIG. 2 as viewed from the depth direction.
FIG. 5 is a diagram for explaining the relationship between the virtual image shown in FIG. 1 and the user's eyes.
FIG. 6 is a diagram showing an example of sub-pixels observed by the left and right eyes of a user.
FIG. 7 is a diagram showing another example of sub-pixels observed by the left and right eyes of a user.
FIG. 8 is a diagram for explaining the relationship between the display position of a virtual image and the convergence angle and the amount of parallax.
FIG. 9 is a diagram for explaining an example of the control of a virtual image in the virtual image display device shown in FIG. 1.
FIG. 10 is a diagram for explaining another example of the control of a virtual image in the virtual image display device shown in FIG. 1.
FIGS. 11 to 13 are flowcharts illustrating the operation of the virtual image display device.
Conventionally, there is known a head-up display type virtual image display device mounted on a vehicle that displays a virtual image by superimposing it on the background of the user's field of vision. With such a virtual image display device, when an obstacle is present in front of the vehicle, the virtual image may be displayed as if it is penetrating the obstacle, which may cause the user to feel uncomfortable viewing the virtual image. Patent Document 1 describes a virtual image display device that, when an obstacle is present in front of the vehicle, changes the display form of the virtual image depending on the distance between the vehicle and the obstacle, thereby preventing the virtual image from being displayed as if it is penetrating the obstacle.
In the conventional virtual image display device described in Patent Document 1, if an obstacle suddenly comes between the user's eyes and the position of the virtual image while the user is gazing at the virtual image, the gaze distance can change significantly, which may cause the user to suffer from visually induced motion sickness or prevent the user from intuitively understanding the surrounding situation.
Below, an embodiment of the present disclosure will be described in detail with reference to the drawings. Note that the drawings used in the following description are schematic. The dimensional ratios and the like in the drawings do not necessarily correspond to the actual ones. In this specification, for the sake of convenience, a Cartesian coordinate system xyz is defined in some of the drawings.
As shown in FIG. 1, a virtual image display device 1 according to an embodiment of the present disclosure is mounted on a moving body 10, and allows a user (driver or operator) 12 of the moving body 10 to visually recognize a virtual image V including various information related to the moving body 10. The virtual image display device 1 is also called a head-up display (HUD). When the moving body 10 is a vehicle, the various information included in the virtual image V may include, for example, the speed of the moving body 10, the engine rotation speed, the remaining amount of fuel such as gasoline, the total travel distance of the moving body 10, the section travel distance of the moving body 10, and the water temperature of the engine cooling water system. Part of the configuration of the virtual image display device 1 may be shared with other devices and parts provided in the moving body 10.
"Mobile object" in this disclosure may include, for example, vehicles, ships, and aircraft. Vehicles may include, for example, automobiles, industrial vehicles, railroad vehicles, residential vehicles, and fixed-wing aircraft that run on runways. Automobiles may include, for example, passenger cars, trucks, buses, motorcycles, and trolley buses. Industrial vehicles may include, for example, industrial vehicles for agriculture and construction. Industrial vehicles may include, for example, forklifts and golf carts. Industrial vehicles for agriculture may include, for example, tractors, cultivators, transplanters, binders, combines, and lawnmowers. Industrial vehicles for construction may include, for example, bulldozers, scrapers, excavators, cranes, dump trucks, and road rollers. Vehicles may include those that run by human power. Vehicle classification is not limited to the above examples. For example, automobiles may include industrial vehicles that can run on roads. The same vehicle may be included in multiple classifications. Ships may include, for example, marine jets, boats, and tankers. The aircraft may include, for example, fixed-wing aircraft and rotary-wing aircraft. In the following, a case will be described in which the moving body 10 mounting the virtual image display device 1 is a vehicle, but the moving body 10 is not limited to a vehicle and may be any of the moving bodies described above.
As shown in FIGS. 1 and 2, the virtual image display device 1 includes a display unit 2, an optical element 5, an optical system 6, a detection unit 7, and a controller 8.
The detection unit 7 is also referred to as a camera or an in-car camera. The detection unit 7 is configured to capture an image of the eye 13 of the user 12 of the virtual image display device 1. The user 12 may be the driver of the moving body 10. The detection unit 7 may capture an image of the left eye 13L and the right eye 13R of the user 12. The detection unit 7 may or may not capture an image of the entire face of the user 12. In this specification, when there is no particular distinction between the left eye 13L and the right eye 13R of the user 12, the left eye 13L and the right eye 13R may be collectively referred to as the eye 13.
The detection unit 7 may be attached to the rearview mirror of the moving body 10. The detection unit 7 may be attached to, for example, a cluster in the instrument panel. The detection unit 7 may be attached to a center panel. The detection unit 7 may be attached to a support portion of the steering wheel disposed at the center of the steering wheel. The detection unit 7 may be attached on the dashboard. The detection unit 7 does not have to be disposed in a position that directly captures an image of the eye 13 of the user 12. For example, the detection unit 7 may be disposed inside the dashboard and detect the eye 13 of the user 12 reflected on the windshield, or may detect an image reflected on the windshield.
The detection unit 7 is configured to obtain an image of the subject and generate an image of the subject. The detection unit 7 includes an imaging element. The imaging element may include, for example, a CCD (Charged Coupled Device) imaging element or a CMOS (Complementary Metal Oxide Semiconductor) imaging element. The detection unit 7 is positioned so that the face of the user 12 is located on the subject side. The detection unit 7 may be configured to capture images of the first eye 13L and the second eye 13R using two or more imaging devices. The detection unit 7 may be configured to detect the positions of the first eye 13L and the second eye 13R as coordinates in three-dimensional space.
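Since the detection unit 7 may use two or more imaging devices to obtain the eye positions as coordinates in three-dimensional space, a minimal triangulation sketch is given below under an idealized rectified-stereo model. The focal length, baseline, and pixel coordinates are hypothetical; the patent does not specify a particular method.

```python
# Idealized rectified stereo: two identical cameras with focal length
# f [px], displaced by baseline b [m] along x. Given the pixel column of
# the same eye in both images, depth follows from the disparity. This is
# a standard textbook model, not a method stated in the patent.

def triangulate_eye(xl: float, xr: float, y: float, f: float, b: float):
    disparity = xl - xr                 # [px]; larger disparity = closer eye
    z = f * b / disparity               # depth [m]
    return (xl * z / f, y * z / f, z)   # (x, y, z) in the left-camera frame

# Example: f = 1200 px, baseline 0.12 m, disparity 180 px -> z = 0.8 m.
print(triangulate_eye(xl=400.0, xr=220.0, y=-30.0, f=1200.0, b=0.12))
```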
The detection unit 7 may not include a camera, but may be connected to an external camera. The detection unit 7 may include an input terminal that inputs a signal from the external camera. The external camera may be directly connected to the input terminal. The external camera may be indirectly connected to the input terminal via a shared network. The detection unit 7 that does not include a camera may include an input terminal configured to receive a video signal from the camera. The detection unit 7 that does not include a camera may be configured to detect the left eye and right eye from the video signal input to the input terminal.
The detection unit 7 is configured to output imaging data generated by imaging the eye 13 to the controller 8. The detection unit 7 may be configured to output the imaging data to the controller 8 via a communication network such as a wired or wireless communication network or a CAN (Controller Area Network).
The display unit 2 includes a display panel 3. The display panel 3 may be a transmissive display panel. When the display panel 3 is a transmissive display panel, the display unit 2 may include an illuminator 4.
The transmissive display panel may include a liquid crystal panel. The transmissive display panel may have a known liquid crystal panel configuration. As known liquid crystal panels, various liquid crystal panels such as IPS (In-Plane Switching) type, FFS (Fringe Field Switching) type, VA (Vertical Alignment) type, and ECB (Electrically Controlled Birefringence) type may be used. In addition to liquid crystal panels, the transmissive display panel includes MEMS (Micro Electro Mechanical Systems) shutter type display panels.
The illuminator 4 is disposed on the side of the display panel 3 opposite the display surface 3a, and illuminates the display panel 3 in a planar manner. The illuminator 4 may include a light source, a light guide plate, a diffusion plate, a diffusion sheet, etc. The illuminator 4 emits illumination light from the light source, and homogenizes the illumination light in the planar direction of the display panel 3 by the light guide plate, the diffusion plate, the diffusion sheet, etc. The illuminator 4 emits the homogenized light toward the display panel 3.
The following describes an example in which the display unit 2 includes a transmissive display panel 3 and an illuminator 4, but the display unit 2 is not limited to a transmissive display panel and may include a self-luminous display panel. If the display panel 3 is a self-luminous display panel, the display unit 2 does not need to include the illuminator 4. A self-luminous display panel may include a plurality of self-luminous elements. As the self-luminous elements, various self-luminous elements such as LEDs (Light Emitting Diodes), organic EL (Electro Luminescence), and inorganic EL may be used.
The display panel 3 has a plurality of partitioned regions on an active area 31 formed in a planar shape. The active area 31 is configured to display a parallax image. The parallax image includes a left eye image (also called a first image) and a right eye image (also called a second image) having parallax with respect to the left eye image. Each of the plurality of partitioned regions is an area partitioned in a first direction and a second direction perpendicular to the first direction. The direction perpendicular to the first direction and the second direction is called a third direction. The first direction may be called a horizontal direction. The second direction may be called a vertical direction. The third direction may be called a depth direction. The first direction, the second direction, and the third direction are not limited to these. In FIG. 3, the first direction is represented as the x-axis direction. The second direction is represented as the y-axis direction. The third direction is represented as the z-axis direction.
Each of the multiple partitioned regions corresponds to one subpixel. The active area 31 includes multiple subpixels arranged in a grid pattern along the first and second directions.
Each subpixel corresponds to one of the colors R (Red), G (Green), or B (Blue), and a set of three subpixels R, G, and B can form one pixel. One pixel can be called one picture element. The horizontal direction is, for example, the direction in which multiple subpixels that form one pixel are lined up. The vertical direction is, for example, the direction in which multiple subpixels of the same color are lined up.
The subpixels arranged in the active area 31 constitute a plurality of subpixel groups Pg under the control of the controller 8. The subpixel groups Pg are arranged repeatedly in the first direction. The subpixel groups Pg can be arranged at the same position in the second direction, or can be arranged shifted in the second direction. For example, the subpixel groups Pg can be arranged repeatedly adjacent to each other in the second direction at positions shifted by one subpixel in the first direction. The subpixel groups Pg include a plurality of subpixels in a predetermined row and column. The subpixel groups Pg include b subpixels (b rows) in the second direction and 2×n subpixels (2×n columns) in the first direction, (2×n×b) subpixels P1 to PN (N=2×n×b) arranged consecutively. In the example shown in FIG. 3, n=6, b=1. The active area 31 is provided with a plurality of subpixel groups Pg including 12 subpixels P1 to P12 arranged consecutively, 1 in the second direction and 12 in the first direction. In the example shown in FIG. 3, some subpixel groups Pg are labeled with symbols.
The multiple subpixel groups Pg are the smallest units that the controller 8 controls to display an image. Each subpixel included in the multiple subpixel groups Pg is identified by identification information P1 to PN (N = 2 x n x b). The multiple subpixels P1 to PN (N = 2 x n x b) having the same identification information in all subpixel groups Pg are configured to be controlled simultaneously by the controller 8. For example, when switching the image displayed in subpixel P1 from a left eye image to a right eye image, the controller 8 can simultaneously switch the image displayed in the multiple subpixels P1 in all subpixel groups Pg from a left eye image to a right eye image.
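As a concrete illustration of this group-wise control, the following sketch uses the n = 6, b = 1 arrangement of FIG. 3; the index-to-image assignment is an assumption for demonstration.

```python
# With n = 6 and b = 1, each subpixel group Pg holds N = 2*n*b = 12
# subpixels P1..P12. Subpixels with the same identification number are
# switched simultaneously in every group, so one table suffices.

n, b = 6, 1
N = 2 * n * b
assignment = {i: ("L" if i <= n else "R") for i in range(1, N + 1)}

def switch_subpixel(index: int) -> None:
    # Flip P<index> between the left- and right-eye image in ALL groups
    # at once, as the controller 8 is described as doing above.
    assignment[index] = "R" if assignment[index] == "L" else "L"

switch_subpixel(1)        # P1: left-eye image -> right-eye image everywhere
print(assignment)
```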
The optical element 5 may be composed of a parallax barrier or a lenticular lens. Below, an example will be described in which the optical element 5 is a parallax barrier. As shown in FIG. 5, the parallax barrier 5 is formed by a plane that follows the active area 31, and is separated from the active area 31 by a predetermined distance (gap) g. The parallax barrier 5 may be located on the opposite side of the illuminator 4 with respect to the display panel 3. The parallax barrier 5 may be located on the illuminator 4 side of the display panel 3.
As shown in FIG. 4, the parallax barrier 5 defines the propagation direction of image light emitted from the subpixels for each of the light-transmitting regions 51, which are strip-shaped regions extending in a predetermined direction in the plane. The parallax barrier 5 has a plurality of light-reducing regions 52 that attenuate the image light. The light-reducing regions 52 define a light-transmitting region 51 between two adjacent light-reducing regions 52. The light-transmitting regions 51 have a higher light transmittance than the light-reducing regions 52. The light transmittance of the light-transmitting regions 51 may be 10 times or more, 100 times or more, or 1000 times or more that of the light-reducing regions 52. The light-reducing regions 52 have a lower light transmittance than the light-transmitting regions 51, and may block the image light.
The parallax barrier 5 may be made of a film or a plate-like member. In this case, the multiple light-attenuating regions 52 are made of the film or plate-like member. The multiple light-transmitting regions 51 may be openings provided in the film or plate-like member. The film may be made of resin or other materials. The plate-like member may be made of resin, metal, or other materials. The parallax barrier 5 is not limited to a film or plate-like member and may be made of other types of members. The parallax barrier 5 may be made of a base material having light-blocking properties. The parallax barrier 5 may be made of a base material containing a light-blocking additive. The parallax barrier 5 may be of a fixed type or an active barrier type.
The parallax barrier 5 as an active barrier type can also be configured with a liquid crystal shutter. The liquid crystal shutter can control the light transmittance according to the applied voltage. The liquid crystal shutter is configured with multiple pixels, and can control the light transmittance of each pixel. The multiple light-transmitting regions 51 and multiple light-attenuating regions 52 of the liquid crystal shutter correspond to the multiple pixels of the liquid crystal shutter. When the parallax barrier 5 is configured with a liquid crystal shutter, the boundary between the multiple light-transmitting regions 51 and multiple light-attenuating regions 52 can be stepped to correspond to the shape of the multiple pixels.
The parallax barrier 5 determines the light direction of the image light emitted from multiple sub-pixels, thereby determining the area on the active area 31 that is visible to the user's eyes. The area in the active area 31 that emits the image light that propagates to the user's left eye is called the left visible area (first visible area) 31aL. The area in the active area 31 that emits the image light that propagates to the user's right eye is called the right visible area (second visible area) 31aR.
The barrier pitch Bp, which is the arrangement interval of the light-transmitting regions 51 in the first direction, and the gap g between the active area 31 and the parallax barrier 5 are defined so that the following equations (1) and (2) hold, using the suitable viewing distance d and the interocular distance E.

E : d = (n × Hp / b) : g … (1)
d : Bp = (d + g) : (2 × n × Hp / b) … (2)
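A worked numeric example of equations (1) and (2) may help; all parameter values below are assumptions for illustration and are not taken from the patent.

```python
# Solve equations (1) and (2) for the gap g and the barrier pitch Bp.
# From (1): g  = d * (n * Hp / b) / E
# From (2): Bp = d * (2 * n * Hp / b) / (d + g)
# All numeric values are illustrative assumptions.

E  = 62e-3       # interocular distance [m]
d  = 1.0         # suitable viewing distance [m]
n, b = 6, 1
Hp = 0.05e-3     # horizontal length of a subpixel [m]

g  = d * (n * Hp / b) / E
Bp = d * (2 * n * Hp / b) / (d + g)

print(f"g  = {g * 1e3:.3f} mm")    # ~4.839 mm
print(f"Bp = {Bp * 1e3:.4f} mm")   # ~0.5971 mm, slightly under 2*n*Hp
```

Note that Bp comes out slightly smaller than 2 × n × Hp, which is what makes the left-eye and right-eye viewing zones converge at the suitable viewing distance d.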
The suitable viewing distance d is the distance between each of the user's left and right eyes and the parallax barrier 5. The direction of the straight line passing through the left eye and the right eye (interocular direction) is the horizontal direction. The interocular distance E is the distance between the user's left eye and right eye. The interocular distance E may be, for example, 61.1 mm to 64.4 mm, a value calculated in research by the National Institute of Advanced Industrial Science and Technology. Hp is the horizontal length of a subpixel, as shown in FIG. 3.
The optical system 6 may include a first optical member 6a and a second optical member 6b. The first optical member 6a is configured to reflect (a part of) the image light emitted from the active area 31 toward the second optical member 6b. The second optical member 6b is configured to make a part of the image light reflected by the first optical member 6a reach (enter) the user's eye. The optical system 6 may function as a magnifying optical system that magnifies the image displayed on the active area 31. At least one of the first optical member 6a and the second optical member 6b may have optical power. The first optical member 6a may be a concave mirror having optical power. The windshield of the moving body 10 may also serve as the second optical member 6b. The number of optical members constituting the optical system 6 is not limited to two, and may be one, or may be three or more. The first optical member 6a and the second optical member 6b may include an optically functional surface that is a spherical shape, an aspherical shape, or a free-form shape.
A portion of the image light emitted from the active area 31 of the display panel 3 passes through the multiple light-transmitting regions 51 and reaches the second optical member 6b via the first optical member 6a. A portion of the image light that reaches the second optical member 6b is reflected by the second optical member 6b and reaches the user's eye. This allows the user's eye to view a virtual image V of the image displayed in the active area 31 in front of the second optical member 6b. The surface on which the virtual image V is projected is called the virtual image surface S. In this specification, the forward direction is the direction of the second optical member 6b as seen by the user. The forward direction is the direction in which the moving body 10 normally moves.
The left visible area (first visible area) 31aL shown in FIG. 5 is an area of the virtual image surface S viewed by the left eye 13L of the user 12 as a result of image light that has passed through multiple light-transmitting areas of the parallax barrier 5 reaching the left eye 13L of the user 12, as described above. The right visible area (second visible area) 31aR is an area of the virtual image surface S viewed by the right eye 13R of the user 12 as a result of image light that has passed through multiple light-transmitting areas of the parallax barrier 5 reaching the right eye 13R of the user 12, as described above.
For example, when the aperture ratio of the parallax barrier 5 is 50%, the arrangement of the multiple subpixels of the virtual image V as seen by the left eye 13L and the right eye 13R of the user 12 is as shown in FIG. 6. When the aperture ratio is 50%, the multiple light-transmitting regions and the multiple light-reducing regions of the parallax barrier 5 have equal widths in the interocular direction (x-axis direction). In FIG. 6, the dash-dot lines indicate the virtual images of the boundaries between the light-transmitting regions and the light-reducing regions of the parallax barrier 5. The left visible region 31aL visible from the left eye 13L and the right visible region 31aR visible from the right eye 13R are regions extending diagonally in the x and y directions, each located between the two-dot chain lines. The right visible region 31aR is not visible from the left eye 13L. The left visible region 31aL is not visible from the right eye 13R.
In the example of FIG. 7, the left visible region 31aL includes the entirety of the multiple subpixels P2 to P5 arranged in the active area and virtual images of most of the multiple subpixels P1 and P6. The left eye 13L of the user 12 has difficulty viewing the virtual images of the multiple subpixels P7 to P12 arranged in the active area. The right visible region 31aR includes the entirety of the multiple subpixels P8 to P11 arranged in the active area and virtual images of most of the multiple subpixels P7 and P12. The right eye 13R of the user 12 has difficulty viewing the virtual images of the multiple subpixels P1 to P6 arranged in the active area. The controller 8 can display a left eye image in the multiple subpixels P1 to P6. The controller 8 can display a right eye image in the multiple subpixels P7 to P12. By doing this, the left eye 13L of the user 12 mainly sees the virtual image of the left eye image VL in the left visible region 31aL, and the right eye 13R mainly sees the virtual image of the right eye image VR in the right visible region 31aR. Because the left eye image VL and the right eye image VR are parallax images having parallax with respect to each other, the user 12 sees the left eye image VL and the right eye image VR as a stereoscopic image.
As shown in the center of Figure 8, when the virtual image V is located at the appropriate viewing distance d, the virtual image V is located so as to intersect with the virtual image surface S. The left eye image VL and the right eye image VR are displayed at approximately the same position as the virtual image V viewed by the user 12. The convergence angle at which the left eye 13L and the right eye 13R view a point on the virtual image V located at the appropriate viewing distance d is represented by the convergence angle Θ0.
As shown on the right side of FIG. 8, when the virtual image V is located on the far side of the appropriate viewing distance d, the left eye image VL and the right eye image VR are displayed at positions that differ by the amount of parallax D1 on the virtual image surface S. The left eye image VL is an image of the virtual image V viewed from the left side at a smaller angle than when viewed from the appropriate viewing distance d. The right eye image VR is an image of the virtual image V viewed from the right side at a smaller angle than when viewed from the appropriate viewing distance d. The user 12 perceives the virtual image V as being present at the position where the direction in which the left eye 13L views the left eye image VL intersects with the direction in which the right eye 13R views the right eye image VR. The convergence angle at which the user 12 views a point on the virtual image V is represented by the convergence angle Θ1. The convergence angle Θ1 is smaller than the convergence angle Θ0 at which a point located at the appropriate viewing distance d is viewed.
As shown on the left side of FIG. 8, when the virtual image V is located on the near side of the appropriate viewing distance d, the left eye image VL and the right eye image VR are displayed at positions that differ by the amount of parallax D2 on the virtual image surface S. The left eye image VL is an image of the virtual image V viewed from the left side at a larger angle than when viewed from the appropriate viewing distance d. The right eye image VR is an image of the virtual image V viewed from the right side at a larger angle than when viewed from the appropriate viewing distance d. The user 12 perceives the virtual image V as being present at the position where the direction in which the left eye 13L views the left eye image VL intersects with the direction in which the right eye 13R views the right eye image VR. The convergence angle at which the user 12 views a point on the virtual image V is represented by the convergence angle Θ2. The convergence angle Θ2 is greater than the convergence angle Θ0 at which a point located at the appropriate viewing distance d is viewed.
As shown in FIG. 8, when the parallax amount D between the left eye image VL and the right eye image VR on the virtual image surface S is increased, the display position of the virtual image V moves away from the eye 13 (i.e., the display distance of the virtual image V increases). When the parallax amount D is decreased, the display position of the virtual image V moves closer to the eye 13 (i.e., the display distance of the virtual image V decreases). The parallax amount D can take positive or negative values. For example, when the display position (in the horizontal direction) of the left eye image VL on the virtual image surface S is DL and the display position (in the horizontal direction) of the right eye image VR is DR, the parallax amount D may be defined as D = DR - DL. The display position may be defined with the right direction in FIG. 8 as the positive direction. The origin R of the display position may be set arbitrarily; for example, it may be set within the virtual image surface S of the virtual image V located at the suitable viewing distance d.
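The qualitative relation above can be made concrete under a simple two-pinhole eye model, which is an illustrative assumption rather than a formula stated in the patent: with the eyes separated by the interocular distance E and the virtual image surface S at the suitable viewing distance d, a point perceived at distance L from the eyes corresponds to a parallax amount D = E × (1 − d/L) on S, so that D = 0 at L = d, D > 0 (image farther) for L > d, and D < 0 (image nearer) for L < d.

```python
# Numeric check of D = E * (1 - d / L) by intersecting the two sight rays
# with the virtual image surface S at depth d (all values assumed).

E, d = 62e-3, 7.5                       # interocular distance, suitable distance [m]

def parallax_from_rays(L: float) -> float:
    # Left eye at x = -E/2, right eye at +E/2; perceived point at (0, L).
    # Each ray crosses the plane z = d; D = DR - DL.
    DL = -E / 2 * (1 - d / L)
    DR = +E / 2 * (1 - d / L)
    return DR - DL

for L in (5.0, 7.5, 15.0):
    assert abs(parallax_from_rays(L) - E * (1 - d / L)) < 1e-12
    print(f"L = {L:5.1f} m -> D = {parallax_from_rays(L) * 1e3:+.1f} mm")
```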
The controller 8 is connected to each component of the virtual image display device 1 and controls each component. The controller 8 is configured as a processor, for example. The controller 8 may include one or more processors. The processor may include a general-purpose processor that loads a specific program to execute a specific function, and a dedicated processor specialized for a specific process. The dedicated processor may include an application specific integrated circuit (ASIC: Application Specific Integrated Circuit). The processor may include a programmable logic device (PLD: Programmable Logic Device). The PLD may include an FPGA (Field-Programmable Gate Array). The controller 8 may be either a SoC (System-on-a-Chip) in which one or more processors work together, or a SiP (System In a Package). The controller 8 includes a memory unit 81, and may store various information, programs for operating each component of the virtual image display device 1, etc. in the memory unit 81. The memory unit 81 may be configured of, for example, a semiconductor memory, etc. The memory unit 81 may function as a work memory for the controller 8.
The controller 8 detects the position of the eye 13 of the user 12 and the position of the gaze point of the user 12 (hereinafter referred to as gaze point P) based on the imaging data output from the in-vehicle camera 7. The position of the eye 13 may be the midpoint of the line segment connecting the position of the left eye 13L and the position of the right eye 13R, or may be the position of the dominant eye of the user 12. The dominant eye of the user 12 may be set in advance by the user 12, or may be input to the controller 8 from outside. The gaze point P is the position of the intersection of the line of sight of the left eye 13L and the line of sight of the right eye 13R in a virtual plane containing both lines of sight. The line of sight is also called the visual axis. The line of sight may be a virtual straight line passing, for each of the left eye 13L and the right eye 13R, through the center of the crystalline lens, which opens outward from the iris, and the fovea, where visual acuity is highest.
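Because two measured sight lines rarely intersect exactly, the gaze point P is commonly approximated by the midpoint of the shortest segment between the two lines. The following NumPy sketch uses illustrative eye positions and gaze directions; the patent does not specify the computation.

```python
# Estimate the gaze point P as the pseudo-intersection of the two sight
# lines (midpoint of the shortest segment between them), then the gaze
# distance L1. Eye positions and gaze directions are illustrative.

import numpy as np

def gaze_point(pL, dL, pR, dR):
    pL, dL, pR, dR = map(np.asarray, (pL, dL, pR, dR))
    a, b, c = dL @ dL, dL @ dR, dR @ dR
    w = pL - pR
    dd, e = dL @ w, dR @ w
    denom = a * c - b * b                  # > 0 unless the lines are parallel
    t = (b * e - c * dd) / denom
    s = (a * e - b * dd) / denom
    return ((pL + t * dL) + (pR + s * dR)) / 2.0

P = gaze_point([-0.031, 0, 0], [0.0031, 0, 1],    # left eye, converging
               [ 0.031, 0, 0], [-0.0031, 0, 1])   # right eye
eyes_mid = np.array([0.0, 0.0, 0.0])
L1 = np.linalg.norm(P - eyes_mid)                 # gaze distance L1
print(P, L1)                                      # -> about (0, 0, 10), 10 m
```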
The controller 8 controls the parallax image to be displayed on the display panel 3 based on the position of the eye 13 and the position of the gaze point P. This allows the gaze point P of the user 12 to be approximately aligned with the horizontal and vertical display positions of the virtual image V. As a result, the risk of the user 12 experiencing visually induced motion sickness or being unable to intuitively understand the surrounding situation can be reduced. The controller 8 may control the display form of the parallax image based on the position of the eye 13 and the position of the gaze point P. The display form of the parallax image may include the parallax amount, brightness, color, transparency, size, and shape of the parallax image. The controller 8 may also hide the virtual image V or switch it to a frame display based on the position of the eye 13 and the position of the gaze point P. Displaying the virtual image V as a frame, also referred to as frame-only display, means that in a virtual image V composed of letters, numbers, and figures, the information represented by the letters or numbers is hidden.
The controller 8 may calculate a gaze distance (hereinafter also referred to as a first distance) L1 between the position of the eye 13 and the position of the gaze point P. The controller 8 may control the parallax image based on the first distance L1. In this case, it is possible to adjust the display position of the virtual image V according to the gaze distance L1. As a result, it is possible to further reduce the risk that the user 12 will suffer from visually-induced motion sickness or that the user 12 will be unable to intuitively understand the surrounding situation.
The controller 8 may calculate a display distance (hereinafter also referred to as the second distance) L2 between the position of the eye 13 and the display position of the virtual image V. The controller 8 may compare the first distance L1 with the second distance L2, and if the second distance L2 differs from the first distance L1, control the amount of parallax of the parallax image so that the second distance L2 matches the first distance L1, as shown in FIG. 9. In this case, it is possible to adjust the display distance L2 of the virtual image V according to the gaze distance L1. As a result, the risk that the user 12 will experience visually-induced motion sickness or will be unable to intuitively understand the surrounding situation can be further reduced.
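A sketch of this comparison, reusing the illustrative geometric helpers from the earlier sketch (L2 = d·E/(E − D) and its inverse); the tolerance value and all names are assumptions.

```python
E, d = 0.064, 3.0   # illustrative interocular and suitable viewing distances


def display_distance(D: float) -> float:
    return d * E / (E - D)


def parallax_for_distance(L2: float) -> float:
    return E * (1.0 - d / L2)


def update_parallax(current_D: float, L1: float, tol: float = 0.05) -> float:
    """Return a parallax amount whose display distance L2 matches the gaze
    distance L1, leaving D unchanged when the two already agree."""
    if abs(display_distance(current_D) - L1) <= tol:
        return current_D            # L2 already matches L1
    return parallax_for_distance(L1)


print(update_parallax(0.0, 10.0))   # re-aim V from 3 m out to 10 m (~0.0448)
```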
When changing the parallax amount D of the parallax image from the first parallax amount to the second parallax amount, the controller 8 may change it from the first parallax amount to the second parallax amount all at once, or may change it stepwise from the first parallax amount to the second parallax amount. When changing the parallax amount D all at once, the time during which the gaze distance L1 and the display distance L2 do not match can be shortened. When changing the parallax amount D stepwise, the discomfort felt by the user 12 due to the change in the display distance L2 can be reduced.
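A sketch of the two transition styles, assuming a fixed frame rate: a duration of zero reproduces the all-at-once change, while a duration of one to a few seconds gives the stepwise change. The names and values are illustrative.

```python
def parallax_steps(d_from: float, d_to: float, duration_s: float,
                   fps: float = 60.0):
    """Yield per-frame parallax amounts for the transition; duration_s = 0
    reproduces the all-at-once change, longer durations the stepwise one."""
    n = max(1, round(duration_s * fps))
    for i in range(1, n + 1):
        yield d_from + (d_to - d_from) * i / n


print(list(parallax_steps(0.0, 0.0448, 0.0)))        # all at once: one step
print(len(list(parallax_steps(0.0, 0.0448, 2.0))))   # ~2 s at 60 fps: 120 steps
```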
The controller 8 may determine whether the eye 13 of the user 12 is included in the eye box 14 (see FIG. 1). The eye box 14 is the range of eye positions from which the user 12 observes a virtual image, and is an area in real space in which the eye 13 of the user 12 is assumed to be present, taking into account, for example, the physique, posture, and changes in posture of the user 12. The shape of the eye box 14 is arbitrary. The eye box 14 may be a planar area or a three-dimensional area. If the eye 13 of the user 12 is not included in the eye box 14, the controller 8 does not need to change the display form of the parallax image. In this case, the processing load on the controller 8 and the display unit 2 can be reduced.
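A sketch of the eye box test, assuming an axis-aligned box as one possible shape (the disclosure leaves the shape arbitrary); coordinates and names are illustrative.

```python
from dataclasses import dataclass


@dataclass
class EyeBox:
    """Axis-aligned box as one possible eye box shape."""
    x: tuple
    y: tuple
    z: tuple   # (min, max) per axis, in metres

    def contains(self, eye) -> bool:
        return all(lo <= v <= hi
                   for v, (lo, hi) in zip(eye, (self.x, self.y, self.z)))


box = EyeBox(x=(-0.15, 0.15), y=(1.0, 1.4), z=(2.0, 2.6))
if not box.contains((0.0, 1.2, 2.3)):
    pass   # eye outside the eye box: skip changing the display form
```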
As shown in FIG. 2, the virtual image display device 1 includes a sensor device 11. The sensor device 11 is configured to acquire at least one of information (situation) of the moving body 10 and environmental information (ambient environment) surrounding the moving body 10. The sensor device 11 may include an exterior camera 11a, a GPS (Global Positioning System) receiver, communication equipment, a laser radar, a millimeter wave radar, sensor equipment, a car navigation device, a drive recorder, etc.
The exterior camera 11a can capture images of the scenery around the moving body 10, particularly the environment ahead of the moving body 10, and obtain information ahead of the moving body 10 (e.g., information about other moving bodies, pedestrians, animals, roadside installations such as guardrails, buildings, etc., that are in front of the moving body 10). The exterior camera 11a may be configured to include, for example, a CCD imaging element or a CMOS imaging element. The exterior camera 11a may be located at the front end of the moving body 10, inside the passenger compartment of the moving body 10, or in the engine room. The exterior camera 11a may have a lens with a wide angle of view, for example, a wide-angle lens, a fisheye lens, etc.
The GPS receiver can obtain information regarding the position (latitude and longitude) and speed of the moving body 10. The communication device may include an external communication device capable of communicating with an external communication network. The communication device can obtain information regarding the position and speed of the moving body 10, the weather around the moving body 10, etc. from the external communication network via the external communication device. The laser radar and millimeter wave radar can obtain information ahead of the moving body 10. The sensor device may include a luminance meter, an ultrasonic sensor, etc. The sensor device may include an ECU (Electronic Control Unit) of the moving body 10, etc. The sensor device can obtain information ahead of the moving body 10, information regarding the luminance around the moving body 10, information regarding whether the headlights equipped on the moving body 10 are turned on, etc. The car navigation device can obtain information regarding the position and speed of the moving body 10. The drive recorder can obtain information ahead of the moving body 10 and information regarding the speed of the moving body 10.
As shown in FIG. 2, the controller 8 includes an information acquisition unit 8a, an information analysis unit 8b, a display determination unit 8c, and a display instruction unit 8d.
The information acquisition unit 8a acquires information on the moving body 10 and information on the surroundings of the moving body 10 from the sensor device 11, and outputs the information on the moving body 10 and information on the surroundings of the moving body 10 to the information analysis unit 8b. The information acquisition unit 8a may acquire information on the moving body 10 and information on the surroundings of the moving body 10 at predetermined time intervals (for example, 0.008 seconds to 1 second) and output the information to the information analysis unit 8b. The information on the moving body 10 may include information on the position and speed of the moving body 10, and information on whether the headlights are on or not. The information on the moving body 10 may include information on the user 12, such as the position of the eye 13 of the user 12, the gaze point P, the viewpoint, the line of sight, and the gaze distance L1. The information on the surroundings of the moving body 10 may include information ahead of the moving body 10, information on the brightness around the moving body 10, and information on the weather around the moving body 10.
The information analysis unit 8b analyzes information about the moving body 10 and information about the surroundings of the moving body 10, and judges the status of the moving body 10 and the status around the moving body 10. The information analysis unit 8b outputs the status of the moving body 10 and the status around the moving body 10 to the display determination unit 8c. The information analysis unit 8b may judge the status of the moving body 10 and the status around the moving body 10 at a predetermined time interval (e.g., 0.008 seconds to 1 second), and output it to the display determination unit 8c. The information analysis unit 8b may be configured to store the previously judged status of the moving body 10 and the status around the moving body 10. When the information analysis unit 8b newly judges the status of the moving body 10 and the status around the moving body 10, it may be configured to compare the newly judged status of the moving body 10 and the status around the moving body 10 with the previously judged status of the moving body 10 and the status around the moving body 10. The display determination unit 8c sets and stores the display form (parallax amount D, size, shape, brightness, etc. of the parallax image) to be displayed on the display unit 2 based on the status of the moving body 10 and the status around the moving body 10, and outputs it to the display instruction unit 8d. When the display determination unit 8c acquires the display form of the parallax image from the information analysis unit 8b, the display determination unit 8c may discard the display form of the parallax image acquired previously and stored in the display determination unit 8c. The display instruction unit 8d instructs the display unit 2 to display the parallax image in the display form set by the display determination unit 8c.
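One possible decomposition mirroring units 8a to 8d, sketched as four functions wired into a sampling loop; the sensor names, data fields, and decision logic are placeholders, not the disclosed processing.

```python
def acquire(sensors):                       # cf. information acquisition unit 8a
    return {name: read() for name, read in sensors.items()}


def analyze(info, prev):                    # cf. information analysis unit 8b
    status = {"speed": info["speed"], "obstacle": info["obstacle"]}
    status["changed"] = (status != prev)    # compare with the stored judgment
    return status


def determine(status):                      # cf. display determination unit 8c
    return {"visible": not status["obstacle"], "luminance": 1.0}


def instruct(display_form):                 # cf. display instruction unit 8d
    print("display:", display_form)


sensors = {"speed": lambda: 50.0, "obstacle": lambda: False}
prev = None
for _ in range(3):                          # fixed interval, e.g. 0.008-1 s
    status = analyze(acquire(sensors), prev)
    instruct(determine(status))
    prev = {"speed": status["speed"], "obstacle": status["obstacle"]}
```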
The situation around the moving body 10 may include the presence or absence of an obstacle O. The obstacle O may be an object that is present in front of the moving body 10 and whose display distance calculated from the parallax amount D is longer than the distance to the object. The obstacle O may include, for example, other moving bodies that are present around the moving body 10, particularly in front of the moving body 10, roadside installations such as guardrails, buildings, etc. If an obstacle O is present, the information analysis unit 8b may detect the relative position and relative speed of the obstacle O with respect to the moving body 10. The situation around the moving body 10 may include the background luminance in the field of view of the user 12, particularly the background luminance in the line of sight of the user 12.
The following describes how the controller 8 controls the parallax image (i.e., the virtual image V).
The controller 8 detects the position of the eye 13 of the user 12 and the position of the gaze point P at every predetermined time interval Δt, and calculates the gaze distance L1. The time interval Δt may be, for example, 0.008 seconds to 1 second. The controller 8 compares the gaze distance L1 at the current time with the gaze distance L1 at a time Δt before the current time, and calculates the amount of change in the gaze distance L1. The controller 8 compares the amount of change in the gaze distance L1 with a threshold (also called the first threshold) T1, and if the amount of change in the gaze distance L1 is equal to or greater than the first threshold T1, the controller 8 may change the parallax amount D of the parallax image so that the display distance L2 matches the gaze distance L1, as shown in FIG. 9. If the amount of change in the gaze distance L1 is less than the first threshold T1, the controller 8 does not need to change the parallax amount of the parallax image. This makes it possible to reduce the processing load on the controller 8 and the display unit 2 while suppressing visually induced motion sickness of the user 12. The first threshold T1 may be, for example, a distance in the range of 5 m or more and less than 10 m, 10 m or more and less than 15 m, 15 m or more and less than 20 m, or 20 m or more, or may be some other distance.
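A sketch of the Δt sampling and first-threshold test; the concrete values are picked from the ranges stated above and are otherwise arbitrary, and `set_display_distance` stands in for the parallax update described above.

```python
T1 = 10.0     # first threshold, one value from the 5-20 m+ ranges above
prev_L1 = None


def on_sample(L1: float, set_display_distance) -> None:
    """Called once per interval dt with the freshly measured gaze distance."""
    global prev_L1
    if prev_L1 is not None and abs(L1 - prev_L1) >= T1:
        set_display_distance(L1)   # change D so that L2 matches L1
    # below T1 the parallax is left untouched, saving processing load
    prev_L1 = L1


on_sample(3.0, print)    # first sample: no previous value, nothing happens
on_sample(15.0, print)   # jump of 12 m >= T1: re-aim the display distance
```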
The controller 8 may change the parallax amount D of the parallax image all at once so that the display distance L2 matches the gaze distance L1, or may change the parallax amount D of the parallax image in stages. When the parallax amount D is changed all at once, the time during which the gaze distance L1 and the display distance L2 do not match can be shortened. When the parallax amount D is changed in stages, the discomfort felt by the user 12 due to the change in the display distance L2 can be reduced.
When the controller 8 changes the parallax amount D in stages, it may change the parallax amount D in stages so that the display distance L2 matches the gaze distance L1 over about 1 to 2 seconds, or it may change the parallax amount D in stages so that the display distance L2 matches the gaze distance L1 over about 3 seconds. The controller 8 may change the parallax amount D so that the display distance L2 matches the gaze distance L1 in a shorter time the faster the speed of the moving body 10 is. Furthermore, when the controller 8 changes the parallax amount D in stages, it may change the size, color, shape, etc. of the virtual image V viewed by the user 12 in stages.
The controller 8 may control the display unit 2 not to display the parallax image when the amount of change in the gaze distance L1 is equal to or greater than a threshold (also referred to as the second threshold) T2 and the speed of the moving body 10 is equal to or greater than a threshold (also referred to as the third threshold) T3. In other words, the virtual image display device 1 may turn off the virtual image V when the amount of change in the gaze distance L1 is equal to or greater than the second threshold T2 and the speed of the moving body 10 is equal to or greater than the third threshold T3. In this case, as shown in FIG. 9, when an obstacle O appears in front of the moving body 10, the attention of the user 12 can be directed to the obstacle O, thereby improving driving safety. The second threshold T2 may be, for example, 30 m, but is not limited to this. The second threshold T2 may be a distance in the range of 5 m or more and less than 10 m, 10 m or more and less than 15 m, 15 m or more and less than 20 m, or 20 m or more, or may be some other distance. The second threshold T2 may be the same as or different from the first threshold T1. The third threshold T3 may be a speed of 80 km/h or more, but is not limited to this. The third threshold T3 may be, for example, a speed of 60 km/h or more and less than 80 km/h, 50 km/h or more and less than 60 km/h, 40 km/h or more and less than 50 km/h, 30 km/h or more and less than 40 km/h, 10 km/h or more and less than 30 km/h, or less than 10 km/h.
The controller 8 may control the display unit 2 to display the virtual image V as a frame when the change in the gaze distance L1 is equal to or greater than the second threshold T2 and the speed of the moving body 10 is equal to or greater than the third threshold T3. In this case, too, for example, when an obstacle O appears ahead of the moving body 10, the attention of the user 12 can be directed to the obstacle O, thereby improving driving safety.
When the amount of change in the gaze distance L1 is equal to or greater than the second threshold T2 and the speed of the moving body 10 is equal to or greater than the third threshold T3, the controller 8 may, instead of hiding the virtual image V or switching it to a frame display, control the display unit 2 to reduce the brightness of the virtual image V or increase the transparency of the virtual image V. In this case, driving safety can be improved while the user 12 continues to be provided with various information related to the moving body 10.
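A sketch of the second/third-threshold decision covering the three reactions described above (hiding, frame-only display, or dimming); which reaction to apply is a design choice, and the threshold values are example values from the text.

```python
def safety_action(dL1: float, speed: float, mode: str = "hide",
                  T2: float = 30.0, T3: float = 80.0) -> str:
    """dL1: change in gaze distance L1 (m); speed in km/h.

    T2 and T3 use example values from the text; which of the three
    reactions to apply is a design choice.
    """
    if dL1 >= T2 and speed >= T3:
        return {"hide": "turn off virtual image V",
                "frame": "frame-only display",
                "dim": "lower luminance / raise transparency"}[mode]
    return "keep current display"


print(safety_action(35.0, 100.0))              # gaze jump at speed -> hide
print(safety_action(35.0, 100.0, mode="dim"))  # softer alternative
```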
When the amount of change in the gaze distance L1 is equal to or greater than a threshold (also referred to as the fourth threshold) T4 and the amount of decrease in the background luminance in the line of sight of the user 12 is equal to or greater than a threshold (also referred to as the fifth threshold) T5, the controller 8 may change the parallax amount so that the display distance L2 matches the gaze distance L1 and may increase the luminance of the virtual image V. This can suppress visually induced motion sickness of the user 12. In addition, as shown in FIG. 10, when the gaze point P of the user 12 moves from a bright place BP to a dark place DP, the visibility of the virtual image V can be increased, and as a result, the time during which the user 12 has difficulty viewing the virtual image V can be shortened. The controller 8 may determine the amount of increase in the luminance of the virtual image V based on the speed of the moving body 10, and may increase that amount as the speed of the moving body 10 increases. The fourth threshold T4 may be, for example, a distance in the range of 5 m or more and less than 10 m, 10 m or more and less than 15 m, 15 m or more and less than 20 m, or 20 m or more, or may be some other distance. The fourth threshold T4 may be the same as or different from the first threshold T1 and the second threshold T2. The fifth threshold T5 may be, for example, a luminance in the range of 1000 cd/m² or more and less than 2000 cd/m², 2000 cd/m² or more and less than 3000 cd/m², or 3000 cd/m² or more, or may be some other luminance.
When the amount of change in the gaze distance L1 is equal to or greater than a threshold (also referred to as the sixth threshold) T6 and the amount of increase in the background luminance in the line of sight of the user 12 is equal to or greater than a threshold (also referred to as the seventh threshold) T7, the controller 8 may change the parallax amount so that the display distance L2 matches the gaze distance L1 and may reduce the luminance of the virtual image V. In this case, visually induced motion sickness of the user 12 can be suppressed. In addition, when the gaze point P of the user 12 moves from a dark place DP to a bright place BP (see FIG. 10), the virtual image V can be prevented from dazzling the user 12, and as a result, the time during which the user 12 has difficulty viewing the virtual image V can be shortened. The controller 8 may determine the amount of reduction in the luminance of the virtual image V based on the speed of the moving body 10, and may increase that amount as the speed of the moving body 10 increases. The sixth threshold T6 may be, for example, a distance in the range of 5 m or more and less than 10 m, 10 m or more and less than 15 m, 15 m or more and less than 20 m, or 20 m or more, or may be some other distance. The sixth threshold T6 may be the same as or different from the fourth threshold T4. The seventh threshold T7 may be, for example, a luminance in the range of 1000 cd/m² or more and less than 2000 cd/m², 2000 cd/m² or more and less than 3000 cd/m², or 3000 cd/m² or more, or may be some other luminance. The seventh threshold T7 may be the same as or different from the fifth threshold T5.
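A combined sketch of the fourth- to seventh-threshold behavior; the threshold values, the speed-proportional gain, and the callback names are illustrative assumptions.

```python
def adjust_for_background(dL1, dB, speed, set_distance, set_luminance,
                          T4=10.0, T5=2000.0, T6=10.0, T7=2000.0, k=1e-4):
    """dL1: change in gaze distance L1 (m); dB: change in background
    luminance along the line of sight (cd/m^2, negative = darker)."""
    if dL1 >= T4 and -dB >= T5:          # bright place BP -> dark place DP
        set_distance("match L2 to L1")
        set_luminance(+k * speed * -dB)  # raise V, more at higher speed
    elif dL1 >= T6 and dB >= T7:         # dark place DP -> bright place BP
        set_distance("match L2 to L1")
        set_luminance(-k * speed * dB)   # lower V, more at higher speed


adjust_for_background(12.0, -2500.0, 60.0, print, print)
```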
The memory unit 81 of the controller 8 may store a table showing the correspondence between, on the one hand, the speed of the moving body 10 and the amount of change (decrease or increase) in the background luminance in the line of sight of the user 12 and, on the other hand, the amount of change (increase or decrease) in the luminance of the virtual image V. The controller 8 may change the luminance of the virtual image V based on the table stored in the memory unit 81.
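A sketch of such a table lookup, with speed bands, luminance-change bands, and delta values as placeholder entries; the disclosure does not specify the table's contents.

```python
import bisect

SPEED_EDGES = [30.0, 60.0, 80.0]               # km/h band edges (placeholders)
DB_EDGES = [-3000.0, -2000.0, 2000.0, 3000.0]  # cd/m^2 band edges (placeholders)

# Luminance delta per (speed band, luminance-change band); darker backgrounds
# (left columns) raise V, brighter ones (right columns) lower it.
TABLE = [
    [+0.3, +0.2, 0.0, -0.2, -0.3],
    [+0.4, +0.3, 0.0, -0.3, -0.4],
    [+0.5, +0.4, 0.0, -0.4, -0.5],
    [+0.6, +0.5, 0.0, -0.5, -0.6],
]


def luminance_delta(speed: float, dB: float) -> float:
    return TABLE[bisect.bisect(SPEED_EDGES, speed)][bisect.bisect(DB_EDGES, dB)]


print(luminance_delta(90.0, -2500.0))   # fast + much darker -> +0.5
```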
The controller 8 may control the virtual image V based on the lighting state of the headlights of the moving body 10. The lighting state of the headlights includes whether the headlights are on or off and the light distribution state of the headlights (high beam or low beam). The controller 8 may control the brightness of the virtual image V based on the lighting state of the headlights. For example, when the headlights are on, the controller 8 may increase the brightness of the virtual image V, in which case the visibility of the virtual image V can be improved. The controller 8 may control the brightness of the virtual image V based on the lighting state of the headlights and the background brightness in the line of sight of the user 12. In this case, it is possible to further improve the visibility of the virtual image V.
In the above, a case has been described in which the controller 8 hides or frames the virtual image V, or changes the display position, size, color, brightness, and transparency of the virtual image V, based on the gaze distance L1, the speed of the moving body 10, and the background brightness in the line of sight of the user 12, but the control of the virtual image V by the controller 8 is not limited to this. For example, the controller 8 may turn the virtual image V, which is a stereoscopic image (three-dimensional image), into a planar image (two-dimensional image), based on information about the moving body 10 and information about the surroundings of the moving body 10.
The control of the display unit 2 by the controller 8 will be described below. Figures 11 to 13 are flowcharts showing the control of the display unit 2 by the controller 8. In the flowcharts, "step" is abbreviated as [S], and within the charts, "positive" in the judgment control is represented by [YES] and "negative" is represented by [NO].
The flowchart in FIG. 11 [starts] when the moving body 10 starts. In [S1], information on the moving body 10 and information on the surroundings of the moving body 10 are acquired from the sensor device 11. Next, in [S2], the information acquired from the sensor device 11 is analyzed to determine the status of the moving body 10 and the status of the surroundings of the moving body 10.
Next, in [S3], it is determined whether or not the position of the eye 13 is within the eye box 14. If the position of the eye 13 is within the eye box 14 [Yes], the process proceeds to [S4], and if the position of the eye 13 is not within the eye box 14 [No], the flow chart ends.
In [S4], it is determined whether or not there has been a change in the line of sight of the user 12. If there has been a change in the line of sight of the user 12 [Yes], proceed to [S5], and if there has been no change in the line of sight of the user 12 [No], end the flowchart.
In [S5], it is determined whether the gaze distance L1 of the user 12 has been determined. If the gaze distance L1 has been determined [Yes], proceed to [S6]. If the gaze distance L1 has not been determined [No], proceed to [S7], and after determining the gaze distance L1 in [S7], proceed to [S6].
In [S6], it is determined whether there has been a change in the gaze distance L1 of the user 12. If there has been a change in the gaze distance L1 [Yes], proceed to [S8], and if there has been no change in the gaze distance L1 [No], proceed to [S9].
In [S8], it is determined whether the change in gaze distance L1 is greater than or equal to a threshold value. The threshold value may be the smallest of the threshold values (e.g., the first threshold value T1, the second threshold value T2, and the fourth threshold value T4) that are set for the change in gaze distance L1. If the change in gaze distance L1 is greater than or equal to the threshold value [Yes], proceed to [S10], and if the change in gaze distance L1 is not greater than or equal to the threshold value [No], end the flowchart.
In [S10], the change in gaze distance L1 is compared with thresholds (e.g., first threshold T1, second threshold T2, and fourth threshold T4) set for the change in gaze distance L1, and a change pattern for the parallax image (i.e., virtual image V) is selected based on the comparison result, and the process proceeds to [S11] of the flowchart shown in FIG. 12. The change pattern may be either a change pattern according to information about the moving body 10 or a change pattern according to information ahead of the moving body 10.
If there is no change in the gaze distance L1 in [S6] [No], then in [S9] it is determined whether there is a change in the situation at the gaze point of the user 12, and a change pattern for the parallax image (i.e., virtual image V) is selected. The change in the situation at the gaze point may be, for example, a change in the background luminance in the line of sight of the user 12. If there is a change in the situation at the gaze point [Yes], proceed to [S11] in the flowchart shown in Figure 12, and if there is no change in the situation at the gaze point [No], end the flowchart.
Next, in [S11], it is determined whether the selected change pattern involves changing the parallax image according to the speed of the moving body 10. If the parallax image is to be changed according to the speed of the moving body 10 [Yes], the process proceeds to [S12]; if not [No], the process proceeds to [S13].
In [S12], the speed of the moving body 10 is compared with a threshold value (e.g., third threshold value T3) set for the speed of the moving body 10, and a change pattern for the parallax image is selected based on the comparison result, and the process proceeds to [S14].
If the change pattern does not involve changing the parallax image according to the speed of the moving body 10 in [S11] [No], then in [S13] it is determined whether the parallax image is to be changed according to the situation ahead of the moving body 10. If the parallax image is to be changed according to the situation ahead of the moving body 10 [Yes], a change pattern corresponding to the situation ahead of the moving body 10 is selected and the process proceeds to [S14]; if not [No], the process proceeds directly to [S14].
In [S14], the parameters of the parallax image (virtual image V) to be changed and the amounts by which those parameters are to be changed are determined based on the change patterns selected in [S10], [S12], and [S13], and the process proceeds to [S15] of the flowchart shown in FIG. 13. The parameters of the parallax image (virtual image V) may include the parallax amount D, the change time of the parallax amount D, and parameters relating to the brightness, color, transparency, size, and shape of the virtual image V. The change time of the parallax amount D is the time from the start to the end of the change in the parallax amount D. The parameters of the parallax image (virtual image V) may include a flag for hiding the virtual image V and a flag for switching the virtual image V to a frame display.
In [S15], it is determined whether the parameters of the parallax image to be changed include the parallax amount D. If the parameters of the parallax image to be changed include the parallax amount D [Yes], proceed to [S16], and if the parameters of the parallax image to be changed do not include the parallax amount D [No], proceed to [S19].
In [S16], one or more objects (e.g., a speedometer, a tachometer, a direction indicator, etc.) included in the parallax image are selected for which the parallax amount D is to be changed. Then, in [S17], the amount of change in the parallax amount D is calculated based on the gaze distance L1 of the user 12. In [S18], the parallax amount D stored in the display determination unit 8c is rewritten.
In [S19], the display unit 2 is instructed to change the parallax image displayed on the display panel 3 based on the parallax image parameters and the amount of change of the parameters determined in [S14]. Next, if the parallax amount D is to be changed in stages, in [S20] a process (delay process) is performed to change the parallax amount D in stages rather than all at once. In [S21], it is determined whether or not all changes to the parallax image parameters have been completed. If all changes to the parallax image parameters have been completed [Yes], the flow chart is terminated, and if all changes to the parallax image parameters have not been completed [No], the process returns to [S15].
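A compact sketch of the [S15]-[S21] loop, assuming the chosen changes arrive as (current, target) pairs and that only the parallax amount D is staged through the delay path; `set_param` stands in for the instruction to the display unit 2, and all names are illustrative.

```python
def apply_parameter_changes(params: dict, set_param, n_steps: int = 1) -> None:
    """params maps parameter names to (current, target) pairs chosen in
    [S14]; only the parallax amount D takes the staged/delayed path."""
    for name, (old, new) in params.items():           # [S15]/[S21] loop
        steps = n_steps if name == "D" else 1         # [S16]-[S18] vs. direct
        for i in range(1, steps + 1):
            set_param(name, old + (new - old) * i / steps)   # [S19]/[S20]


changes = {"D": (0.0, 0.0448), "luminance": (1.0, 1.3)}
apply_parameter_changes(changes,
                        lambda k, v: print(k, round(v, 4)), n_steps=4)
```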
The above-described embodiment is not limited to implementation as the virtual image display device 1. For example, the above-described embodiment may be implemented as a virtual image display method using the virtual image display device 1, or may be implemented as a program for controlling the virtual image display device 1.
According to the embodiments of the present disclosure, the occurrence of visually induced motion sickness in the user can be suppressed. Furthermore, according to the embodiments of the present disclosure, the user can be enabled to intuitively understand the surrounding situation.
The configuration of the present disclosure is not limited to the embodiments described above, but can be modified or changed in many ways. For example, the functions contained in each component can be rearranged so as not to cause logical inconsistencies, and multiple components can be combined into one or divided.
The figures illustrating the configurations related to this disclosure are schematic. The dimensional ratios and other details in the drawings do not necessarily correspond to the actual ones.
In this disclosure, descriptions such as "first" and "second" are identifiers for distinguishing the configuration. Configurations distinguished by descriptions such as "first" and "second" in this disclosure may have their numbers exchanged. For example, the first optical member may exchange identifiers "first" and "second" with the second optical member. The exchange of identifiers is performed simultaneously. The configurations remain distinguished even after the exchange of identifiers. Identifiers may be deleted. A configuration from which an identifier has been deleted is distinguished by a symbol. Descriptions of identifiers such as "first" and "second" in this disclosure alone should not be used to interpret the order of the configurations or to justify the existence of identifiers with smaller numbers.
In this disclosure, the x-axis, y-axis, and z-axis are provided for convenience of explanation and may be interchanged. The configurations according to this disclosure have been described using an orthogonal coordinate system consisting of the x-axis, y-axis, and z-axis. The positional relationship of each configuration according to this disclosure is not limited to being orthogonal.
This disclosure can be implemented in the following configurations (1) to (13).
(1) A virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, comprising:
a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image;
an optical element configured to define a propagation direction of image light of the parallax image;
an optical system configured to propagate the image light whose propagation direction is defined by the optical element toward an eye of the user and display a virtual image of the parallax image in a field of view of the user;
a camera configured to image the eye of the user; and
a controller,
wherein the controller is configured to detect the position of the user's eye and the position of the user's gaze point based on imaging data output from the camera, and to control the parallax image based on the position of the eye and the position of the gaze point.
(2) The virtual image display device according to the above configuration (1), wherein the controller is configured to control the parallax image based on a first distance, which is the distance between the position of the eye and the position of the gaze point.
(3) The virtual image display device according to the above configuration (2), wherein the controller is configured to control the amount of parallax of the parallax image, when a second distance, which is the distance from the position of the eye to the display position of the virtual image, differs from the first distance, so that the second distance coincides with the first distance.
(4) The virtual image display device according to the above configuration (3), wherein the controller is configured to control the amount of parallax in stages.
(5) The virtual image display device according to any one of the above configurations (1) to (4), including a sensor device configured to acquire the status of the moving body, wherein the controller is configured to control the parallax image based on the status of the moving body acquired by the sensor device.
(6) The virtual image display device according to the above configuration (5), wherein the sensor device is configured to acquire the speed of the moving body, and the controller is configured to control the parallax image based on the speed of the moving body.
(7) The virtual image display device according to the above configuration (5) or (6), wherein the sensor device is configured to acquire the lighting state of a headlight of the moving body, and the controller is configured to control the parallax image based on the lighting state of the headlight.
(8) The virtual image display device according to any one of the above configurations (1) to (4), including a sensor device configured to acquire the surrounding environment of the moving body, wherein the controller is configured to control the parallax image based on the surrounding environment of the moving body acquired by the sensor device.
(9) The virtual image display device according to the above configuration (8), wherein the sensor device is configured to acquire the background luminance of the user's field of view, and the controller is configured to control the parallax image based on the background luminance of the user's field of view.
(10) The virtual image display device according to any one of the above configurations (1) to (9), wherein the controller is configured not to change the parallax image when the position of the eye is not included in an eye box.
(11) A virtual image display method executed by a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the virtual image display device including: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light whose propagation direction is defined by the optical element toward the user's eye and display a virtual image of the parallax image in the user's field of view; a camera configured to image the user's eye; and a controller, the method comprising:
imaging the user's eye;
detecting the position of the user's eye and the position of the user's gaze point based on imaging data of the user's eye; and
controlling the parallax image based on the position of the eye and the position of the gaze point.
(12) A program executed by a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the virtual image display device including: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light whose propagation direction is defined by the optical element toward the user's eye and display a virtual image of the parallax image in the user's field of view; a camera configured to image the user's eye; and a controller, the program causing the controller to cause the camera to image the user's eye, detect the position of the user's eye and the position of the user's gaze point based on imaging data of the user's eye, and control the parallax image based on the position of the eye and the position of the gaze point.
(13) A moving body including the virtual image display device according to any one of the above configurations (1) to (10).
1 Virtual image display device
2 Display unit
3 Display panel
3a Display surface
4 Illuminator
5 Optical element (parallax barrier)
6 Optical system
6a First optical member
6b Second optical member
7 Detection unit (in-vehicle camera)
8 Controller
8a Information acquisition unit
8b Information analysis unit
8c Display determination unit
8d Display instruction unit
10 Moving body
11 Sensor device
12 User
13 Eye
13L Left eye (first eye)
13R Right eye (second eye)
14 Eye box
31 Active area
31aL Left visible area
31aR Right visible area
51 Light-transmitting area
52 Light-reducing area
O Obstacle
P Gaze point
S Virtual image surface
V Virtual image
VL Left eye image
VR Right eye image
d Suitable viewing distance
g Gap
Claims (13)
- A virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the virtual image display device comprising:
a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image;
an optical element configured to define a propagation direction of image light of the parallax image;
an optical system configured to propagate the image light, the propagation direction of which is defined by the optical element, toward an eye of the user and display a virtual image of the parallax image in a field of view of the user;
a camera configured to image the eye of the user; and
a controller,
wherein the controller is configured to:
detect a position of the user's eye and a position of the user's point of gaze based on imaging data output from the camera; and
control the parallax image based on the position of the eye and the position of the point of gaze.
- The virtual image display device according to claim 1, wherein the controller is configured to control the parallax image based on a first distance, the first distance being a distance between the position of the eye and the position of the point of gaze.
- The virtual image display device according to claim 2, wherein, when a second distance, the second distance being a distance from the position of the eye to a display position of the virtual image, differs from the first distance, the controller is configured to control an amount of parallax of the parallax image so that the second distance coincides with the first distance.
- The virtual image display device according to claim 3, wherein the controller is configured to control the amount of parallax in stages.
- The virtual image display device according to any one of claims 1 to 4, comprising a sensor device configured to acquire a status of the moving body, wherein the controller is configured to control the parallax image based on the status of the moving body acquired by the sensor device.
- The virtual image display device according to claim 5, wherein the sensor device is configured to acquire a speed of the moving body, and the controller is configured to control the parallax image based on the speed of the moving body.
- The virtual image display device according to claim 5 or 6, wherein the sensor device is configured to acquire a lighting state of a headlamp of the moving body, and the controller is configured to control the parallax image based on the lighting state of the headlamp.
- The virtual image display device according to any one of claims 1 to 4, comprising a sensor device configured to acquire a surrounding environment of the moving body, wherein the controller is configured to control the parallax image based on the surrounding environment of the moving body acquired by the sensor device.
- The virtual image display device according to claim 8, wherein the sensor device is configured to acquire a background luminance of the user's field of view, and the controller is configured to control the parallax image based on the background luminance of the user's field of view.
- The virtual image display device according to any one of claims 1 to 9, wherein the controller is configured not to change the parallax image when the position of the eye is not included in an eye box.
- A virtual image display method executed by a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the virtual image display device including: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light, the propagation direction of which is defined by the optical element, toward an eye of the user and display a virtual image of the parallax image in a field of view of the user; a camera configured to capture an image of the eye of the user; and a controller, the virtual image display method comprising:
capturing an image of the user's eye;
detecting a position of the user's eye and a position of the user's point of gaze based on imaging data of the user's eye; and
controlling the parallax image based on the position of the eye and the position of the point of gaze.
- A program executed by a virtual image display device that allows a user of a moving body to visually recognize a stereoscopic image, the virtual image display device including: a display unit configured to display a parallax image including a first image and a second image having parallax with respect to the first image; an optical element configured to define a propagation direction of image light of the parallax image; an optical system configured to propagate the image light, the propagation direction of which is defined by the optical element, toward an eye of the user and display a virtual image of the parallax image in a field of view of the user; a camera configured to capture an image of the eye of the user; and a controller,
the program causing the controller to: cause the camera to capture an image of the user's eye; detect a position of the user's eye and a position of the user's point of gaze based on imaging data of the user's eye; and control the parallax image based on the position of the eye and the position of the point of gaze.
- A moving body including the virtual image display device according to any one of claims 1 to 10.
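Claims 4 and 10 constrain how such a parallax update may be applied: in stages, and only while the detected eye position stays inside the eye box 14. The following is a minimal sketch of that gating; the per-frame step size and the externally supplied `eye_in_eyebox` flag are assumptions of this sketch, not claimed values.

```python
def apply_parallax_update(
    current_p: float,         # parallax amount currently displayed
    target_p: float,          # parallax matching the first distance (see the equation above)
    eye_in_eyebox: bool,      # whether the detected eye position lies inside eye box 14
    max_step: float = 0.005,  # assumed per-frame step in metres, not a value from the claims
) -> float:
    """Stepwise parallax control (claim 4) gated by the eye-box check
    (claim 10): leave the parallax image unchanged while the eye position
    is outside the eye box, otherwise move toward the target in stages so
    the perceived depth changes gradually rather than jumping."""
    if not eye_in_eyebox:
        return current_p
    delta = target_p - current_p
    if abs(delta) <= max_step:
        return target_p
    return current_p + (max_step if delta > 0 else -max_step)
```

Claims 5 to 9 would further modulate the step or the target from the sensor device's outputs (vehicle speed, headlamp lighting state, background luminance); those policies are left abstract here because the claims do not fix them.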
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
JP2023030398 | 2023-02-28 | |
JP2023-030398 | 2023-02-28 | |
Publications (1)
Publication Number | Publication Date
---|---
WO2024181153A1 (en) | 2024-09-06
Family
ID=92590434
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
PCT/JP2024/005318 WO2024181153A1 (en) | 2023-02-28 | 2024-02-15 | Virtual image display device, virtual image display method, program, and mobile object
Country Status (1)
Country | Link
---|---
WO (1) | WO2024181153A1 (en)
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
JP2005301144A * | 2004-04-15 | 2005-10-27 | Denso Corp | Virtual image display device and program
WO2017060966A1 * | 2015-10-06 | 2017-04-13 | Hitachi Maxell, Ltd. | Head-up display
JP2019083385A * | 2017-10-30 | 2019-05-30 | Nippon Seiki Co., Ltd. | Head-up display unit
JP2021154988A * | 2020-03-30 | 2021-10-07 | Toyota Central R&D Labs., Inc. | In-vehicle display device, method for controlling the in-vehicle display device, and computer program
JP2022083609A * | 2020-11-25 | 2022-06-06 | Nippon Seiki Co., Ltd. | Display control device, head-up display device, and image display control method
Similar Documents
Publication | Title
---|---
JP7332448B2 | Head-up display system and moving body
JP2020076810A | 3D display device, 3D display system, head-up display, and moving body
CN113039784A | Display device, three-dimensional display device, head-up display, and vehicle
WO2021106690A1 | Head-up display system and mobile unit
CN114728587A | Head-up display, head-up display system, and moving body
WO2020090626A1 | Image display device, image display system, and moving body
WO2022019154A1 | Three-dimensional display device
WO2019225400A1 | Image display device, image display system, head-up display, and mobile object
US12313847B2 | Head-up display, head-up display system, and movable body
US20230004003A1 | Head-up display system and movable body
WO2024181153A1 | Virtual image display device, virtual image display method, program, and mobile object
CN113614619A | Image display module, image display system, moving object, image display method, and image display program
US11874464B2 | Head-up display, head-up display system, moving object, and method of designing head-up display
JP7441333B2 | 3D display device, image display system and moving object
JP7346587B2 | Head-up display, head-up display system and mobile object
WO2021065427A1 | Camera, head-up display system, and mobile body
JP7495584B1 | Virtual image display device, mobile body, driving method for virtual image display device, and program
JP7498053B2 | Camera systems and driver assistance systems
JP7332449B2 | Head-up display module, head-up display system and moving body
US20230171393A1 | Image display system
JP2022066080A | Display control device, head-up display apparatus and image display control method
JP2022021866A | Camera system and image display system
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24763636; Country of ref document: EP; Kind code of ref document: A1