
GB2547512B - Warning a vehicle occupant before an intense movement - Google Patents


Info

Publication number
GB2547512B
GB2547512B
Authority
GB
United Kingdom
Prior art keywords
occupant
vehicle
warning
sdrs
video
Prior art date
Legal status
Expired - Fee Related
Application number
GB1621125.2A
Other versions
GB201621125D0 (en)
GB2547512A (en)
Inventor
Thieberger-Navon Tal
Thieberger Gil
M Frank Ari
Current Assignee
Active Knowledge Ltd
Original Assignee
Active Knowledge Ltd
Priority date
Filing date
Publication date
Application filed by Active Knowledge Ltd filed Critical Active Knowledge Ltd
Publication of GB201621125D0
Publication of GB2547512A
Application granted
Publication of GB2547512B


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/02Occupant safety arrangements or fittings, e.g. crash pads
    • B60R21/04Padded linings for the vehicle interior ; Energy absorbing structures associated with padded or non-padded linings
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/34Protecting non-occupants of a vehicle, e.g. pedestrians
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/34Protecting non-occupants of a vehicle, e.g. pedestrians
    • B60R21/36Protecting non-occupants of a vehicle, e.g. pedestrians using airbags
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0097Predicting future conditions
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00Optical elements other than lenses
    • G02B5/08Mirrors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/12Mirror assemblies combined with other articles, e.g. clocks
    • B60R2001/1238Mirror assemblies combined with other articles, e.g. clocks with vanity mirrors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/34Protecting non-occupants of a vehicle, e.g. pedestrians
    • B60R2021/346Protecting non-occupants of a vehicle, e.g. pedestrians means outside vehicle body
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/20Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
    • B60R2300/202Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used displaying a blind spot scene on the vehicle part responsible for the blind spot
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/20Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
    • B60R2300/207Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used using multi-purpose displays, e.g. camera image and navigation or video on same display
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/304Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using merged images, e.g. merging camera image with stored images
    • B60R2300/305Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using merged images, e.g. merging camera image with stored images merging camera image with lines or icons
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/8006Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying scenes of vehicle interior, e.g. for monitoring passengers or cargo
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/802Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for monitoring and displaying vehicle exterior blind spot views
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143Alarm means
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0141Head-up displays characterised by optical features characterised by the informative content of the display
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0183Adaptation to parameters characterising the motion of the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Description

WARNING A VEHICLE OCCUPANT BEFORE AN INTENSE MOVEMENT
BACKGROUND
[0001] When traveling in a vehicle, the occupant may be engaged in various work- and entertainment-related activities. As a result, the occupant may not be aware of the driving conditions, which may lead to undesired consequences in certain cases when the occupant is engaged in certain activities such as drinking a beverage, applying makeup, or using various tools. For example, if an unexpected driving event occurs, such as hitting a speed bump, making a sharp turn, or braking hard, this may startle the occupant or cause the occupant to lose stability, which can lead to the occupant spilling a hot beverage or hurting himself/herself. Thus, there is a need for a way to make the occupant aware of certain unexpected driving events, in order to avoid accidents when conducting various activities in an autonomous vehicle.
SUMMARY
[0002] While traveling in a vehicle, an occupant of the vehicle may not always be aware of the environment outside and/or of what actions the vehicle is about to take (e.g., braking, turning, or hitting a speed bump). Thus, if such an event occurs without the occupant being aware that it is about to happen, this may cause the occupant to be surprised, disturbed, distressed, and even physically thrown off balance (in a case where the event involves a significant change in the balance of the physical forces on the occupant). This type of event is typically referred to herein as a Sudden Decrease in Ride Smoothness (SDRS) event. Some examples of SDRS events include at least one of the following events: hitting a speed bump, driving over a pothole, climbing on the curb, making a sharp turn, braking hard, an unusual acceleration (e.g., 0-100 km/h in less than 6 seconds), and starting to drive after a full stop.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The embodiments are herein described by way of example only, with reference to the accompanying drawings. No attempt is made to show structural details of the embodiments in more detail than is necessary for a fundamental understanding of the embodiments. In the drawings:
[0004] FIG. 1 is a schematic illustration of components of a system configured to combine video see-through (VST) with video-unrelated-to-the-VST (VUR);
[0005] FIG. 2 illustrates an HMD tracking module that measures the position of the HMD relative to the compartment;
[0006] FIG. 3 illustrates a vehicle in which an occupant wears an HMD;
[0007] FIG. 4 illustrates an occupant wearing an HMD and viewing a large VUR and a smaller VST;
[0008] FIG. 5a illustrates how the VST moves to the upper left when the occupant looks to the bottom right;
[0009] FIG. 5b illustrates how the VST moves to the bottom right when the occupant looks to the upper left;
[0010] FIG. 6 illustrates HMD-video that includes both a non-transparent VST and video that shows the hands of the occupant and the interior of the compartment;
[0011] FIG. 7 illustrates HMD-video that includes both a partially transparent VST and video that shows the hands of the occupant and the interior of the compartment;
[0012] FIG. 8 illustrates HMD-video that includes a VST and partially transparent video that shows the hands of the occupant and the interior of the compartment;
[0013] FIG. 9a illustrates HMD-video that includes a VUR in full FOV, a first window comprising compartment-video (CV) and a second smaller window comprising the VST;
[0014] FIG. 9b illustrates HMD-video that includes VUR in full FOV, a first window comprising the CV and a second partially transparent smaller window comprising the VST;
[0015] FIG. 10a illustrates HMD-video that includes VUR in full FOV, a first window comprising VST and a second smaller window comprising a zoom-out of the CV;
[0016] FIG. 10b illustrates HMD-video that includes VUR and a partially transparent CV;
[0017] FIG. 11a illustrates a FOV of a vehicle occupant when the occupant wears an HMD that presents HMD-video;
[0018] FIG. 11b illustrates a FOV of a vehicle occupant when the vehicle occupant does not wear an HMD that presents the video, such as when watching an autostereoscopic display;
[0019] FIG. 11c illustrates a FOV of a 3D camera that is able to capture sharp images from different focal lengths;
[0020] FIG. 12 is a schematic illustration of components of a system configured to enable an HMD to cooperate with a window light shading module;
[0021] FIG. 13a illustrates a first mode for a shading module where an occupant sees the outside environment through the optical see-through component;
[0022] FIG. 13b illustrates a second mode for a shading module where the occupant sees the outside environment through a VST;
[0023] FIG. 14 illustrates a VST over a curtain;
[0024] FIG. 15 illustrates a light shading module that is unfurled from the inside of the compartment;
[0025] FIG. 16 illustrates a light shading module that is unfurled from the outside of the compartment;
[0026] FIG. 17 is a schematic illustration of components of a video system that may be used to increase awareness of an occupant of a vehicle regarding an imminent SDRS;
[0027] FIG. 18a illustrates presenting VUR to an occupant when there is no indication that an SDRS event is imminent;
[0028] FIG. 18b illustrates presenting VST responsive to receiving an indication that an SDRS event is imminent (a pothole);
[0029] FIG. 18c illustrates presenting VST responsive to receiving an indication that an SDRS event is imminent (a sharp turn);
[0030] FIG. 19a illustrates presenting VUR and VST when there is no indication that an SDRS event is imminent;
[0031] FIG. 19b illustrates presenting a larger VST responsive to receiving an indication that an SDRS event is imminent (a road bump);
[0032] FIG. 19c illustrates presenting a partially transparent VST responsive to receiving an indication that an SDRS event is imminent;
[0033] FIG. 20a illustrates a smart glasses shading module when there is no indication that an SDRS event is imminent;
[0034] FIG. 20b illustrates the smart glasses shading module when there is an indication that an SDRS event is imminent;
[0035] FIG. 21 and FIG. 22 illustrate a cross section of a vehicle with a user interface to warn an occupant engaged in a dangerous activity;
[0036] FIG. 23 is a schematic illustration of an embodiment of a safety system for an autonomous vehicle;
[0037] FIG. 24 illustrates an embodiment of an autonomous vehicle in which a driving controller installed in the vehicle may be utilized by an occupant of the vehicle engaged in gaming activity;
[0038] FIG. 25 is a schematic illustration of components of an autonomous vehicle that includes a computer, a window, and a camera; and
[0039] FIG. 26a and FIG. 26b are schematic illustrations of computers able to realize one or more of the embodiments discussed herein.
DETAILED DESCRIPTION
[0040] The following are definitions of various terms that may be used to describe one or more of the embodiments in this disclosure.
[0041] The terms “autonomous on-road vehicle” and “autonomous on-road manned vehicle” refer to cars and motorcycles designed to drive on public roadways utilizing automated driving of level 3 and above according to SAE International® standard J3016 “Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems”. For example, the autonomous on-road vehicle may be a level 3 vehicle, in which, within known, limited environments, drivers can safely turn their attention away from driving tasks; the autonomous on-road vehicle may be a level 4 vehicle, in which the automated system can control the vehicle in all but a few environments; and/or the autonomous on-road vehicle may be a level 5 vehicle, in which no human intervention is required and the automatic system can drive to any location where it is legal to drive. Herein, the terms “autonomous on-road vehicle” and “self-driving on-road vehicle” are equivalent terms that refer to the same. The term “autonomous on-road vehicle” does not include trains, airplanes, boats, and armored fighting vehicles.
[0042] An autonomous on-road vehicle utilizes an autonomous-driving control system to drive the vehicle. The disclosed embodiments may use any suitable known and/or to-be-invented autonomous-driving control systems. The following three publications describe various autonomous-driving control systems that may be utilized with the disclosed embodiments: (i) Paden, Brian, et al. "A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles." arXiv preprint arXiv:1604.07446 (2016); (ii) Surden, Harry, and Mary-Anne Williams. "Technological Opacity, Predictability, and Self-Driving Cars." (March 14, 2016); and (iii) Gonzalez, David, et al. "A Review of Motion Planning Techniques for Automated Vehicles." IEEE Transactions on Intelligent Transportation Systems 17.4 (2016): 1135-1145.
[0043] Autonomous-driving control systems usually utilize algorithms such as machine learning, pattern recognition, neural networks, machine vision, artificial intelligence, and/or probabilistic logic to calculate on the fly the probability of an imminent collision, or to calculate on the fly values that are indicative of the probability of an imminent collision (from which it is possible to estimate the probability of an imminent collision). The algorithms usually receive as inputs the trajectory of the vehicle, measured locations of at least one nearby vehicle, information about the road, and/or information about environmental conditions. Calculating the probability of an imminent collision is well known in the art, also for human-driven vehicles, such as the anticipatory collision system disclosed in US patent num. 8,041,483 to Breed.
[0044] In order to calculate whether a Sudden Decrease in Ride Smoothness (SDRS) event is imminent, the autonomous-driving control system may compare parameters describing the state of the vehicle at time t1 with parameters describing the state of the vehicle at time t2 that is shortly after t1. If the change in one or more of the parameters reaches a threshold (such as deceleration above a certain value, change of height in the road above a certain value, and/or an angular acceleration above a certain value) then it may be determined that an SDRS event is imminent.
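The comparison described in the preceding paragraph can be pictured as a simple threshold test over two successive vehicle-state samples. The following is a minimal sketch of that idea, not an implementation from the patent; the parameter names and all threshold values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """Vehicle-state parameters sampled by the autonomous-driving control system."""
    speed_mps: float       # forward speed, m/s
    road_height_m: float   # estimated road-surface height ahead of the vehicle, m
    yaw_rate_rps: float    # yaw rate, rad/s

def sdrs_imminent(s1: VehicleState, s2: VehicleState, dt: float,
                  max_decel: float = 4.0,      # m/s^2, illustrative
                  max_dheight: float = 0.05,   # m, illustrative
                  max_ang_accel: float = 1.5   # rad/s^2, illustrative
                  ) -> bool:
    """Return True if the change between the state at t1 and the state at
    t2 = t1 + dt reaches a threshold, indicating an imminent SDRS event."""
    decel = (s1.speed_mps - s2.speed_mps) / dt                # deceleration
    d_height = abs(s2.road_height_m - s1.road_height_m)      # change of road height
    ang_accel = abs(s2.yaw_rate_rps - s1.yaw_rate_rps) / dt  # angular acceleration
    return decel > max_decel or d_height > max_dheight or ang_accel > max_ang_accel
```

In practice the thresholds would be tuned per vehicle and per parameter; the point of the sketch is only the "compare state at t1 and t2 against thresholds" structure.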
[0045] An “occupant” of a vehicle, as the term is used herein, refers to a person who is in the vehicle when it drives. The term “occupant” refers to a typical person having a typical shape, such as a 170 cm tall human (herein “cm” refers to centimeters). An occupant may be a driver, having some responsibilities and/or control regarding the driving of the vehicle (e.g., in a vehicle that is not completely autonomous), or may be a passenger. When an embodiment refers to “the occupant of the vehicle”, it may refer to one of the occupants of the vehicle. Stating that a vehicle has an “occupant” should not be interpreted to mean that the vehicle necessarily accommodates only one occupant at a time, unless that is explicitly stated, such as stating that the vehicle is “designed for a single occupant”.
[0046] Herein, a “seat” may be any structure designed to hold an occupant travelling in the vehicle (e.g., in a sitting and/or reclining position). A “front seat” is a seat that positions an occupant it holds no farther from the front of the vehicle than any other occupants of the vehicle are positioned. Herein, sitting in a seat also refers to sitting on a seat. Sitting in a seat is to be interpreted in this disclosure as occupying the space corresponding to the seat, even if the occupant does so by assuming a posture that does not necessarily correspond to sitting. For example, in some vehicles the occupant may be reclined or lying down, and in other vehicles the occupant may be more upright, such as when leaning into the seat in a half-standing, half-seated position similar to leaning into a Locus Seat by Focal® Upright LLC.
[0047] The interchangeable terms “environment outside the vehicle” and “outside environment” refer to the environment outside the vehicle, which includes objects that are not inside the vehicle compartment, such as other vehicles, roads, pedestrians, trees, buildings, mountains, the sky, and outer space.
[0048] A sensor “mounted to the vehicle” may be connected to any relevant part of the vehicle, whether inside the vehicle, outside the vehicle, to the front, back, top, bottom, and/or to the side of the vehicle. A sensor, as used herein, may also refer to a camera.
[0049] The term “camera” refers herein to an image-capturing device that takes images of an environment. For example, the camera may be based on at least one of the following sensors: a CCD sensor, a CMOS sensor, a near infrared (NIR) sensor, an infrared (IR) sensor, and a camera based on active illumination such as a LiDAR. The term “video” refers to a series of images that may be provided at a fixed rate, variable rates, a fixed resolution, and/or dynamic resolutions. The use of a singular “camera” should be interpreted herein as “one or more cameras”. Thus, when embodiments herein are described as including a camera that captures video and/or images of the outside in order to generate a representation of the outside, the representation may in fact be generated based on images and/or video taken using multiple cameras.
[0050] Various embodiments described herein involve providing an occupant of the vehicle with a representation of the outside environment, generated by a computer and/or processor based on video taken by a camera. In some embodiments, video from a single camera (e.g., which may be positioned on the exterior of the vehicle at eye level) may be sent for presentation to the occupant by the processor and/or computer following little, if any, processing. In other embodiments, video from a single camera or multiple cameras is processed in various ways, by the computer and/or processor, in order to generate the representation of the outside environment that is presented to the occupant.
[0051] Methods and systems for stitching live video streams from multiple cameras, stitching live video streams with database objects and/or other video sources, transforming a video stream or a stitched video stream from one point of view to another point of view (such as for generating a representation of the outside environment for an occupant at eye level, or for generating a compartment view for a person standing outside the compartment), tracking the position of an HMD relative to a compartment, and presenting rendered images that are perfectly aligned with the outside world are all known in the art of computer graphics, video stitching, image registration, and real-time 360° imaging systems. The following publications are just a few examples of reviews and references that describe various ways to perform the video stitching, registration, tracking, and transformations, which may be utilized by the embodiments disclosed herein: (i) Wang, Xiaogang. "Intelligent multi-camera video surveillance: A review." Pattern Recognition Letters 34.1 (2013): 3-19. (ii) Szeliski, Richard. "Image alignment and stitching: A tutorial." Foundations and Trends® in Computer Graphics and Vision 2.1 (2006): 1-104. (iii) Tanimoto, Masayuki. "FTV: Free-viewpoint television." Signal Processing: Image Communication 27.6 (2012): 555-570. (iv) Ernst, Johannes M., Hans-Ullrich Doehler, and Sven Schmerwitz. "A concept for a virtual flight deck shown on an HMD." SPIE Defense + Security. International Society for Optics and Photonics, 2016. (v) Doehler, H-U., Sven Schmerwitz, and Thomas Lueken. "Visual-conformal display format for helicopter guidance." SPIE Defense + Security. International Society for Optics and Photonics, 2014. (vi) Sanders-Reed, John N., Ken Bernier, and Jeff Güell. "Enhanced and synthetic vision system (ESVS) flight demonstration." SPIE Defense and Security Symposium. International Society for Optics and Photonics, 2008. And (vii) Bailey, Randall E., Kevin J. Shelton, and J. J. Arthur III. "Head-worn displays for NextGen." SPIE Defense, Security, and Sensing. International Society for Optics and Photonics, 2011.
[0052] A video that provides a “representation of the outside environment” refers to a video that enables the average occupant, who is familiar with the outside environment, to recognize the location of the vehicle in the outside environment from watching the video. In one example, the average occupant is a healthy 30-year-old human who is familiar with the outside environment, and the threshold for recognizing a video as a “representation of the outside environment” is at least 20 correct recognitions of the outside environment out of 30 tests.
[0053] Herein, sentences such as “VST that represents a view of the outside environment from the point of view of the occupant”, or “VST representation of the outside environment, which could have been seen from the point of view of the occupant” refer to a video representing at least a portion of the outside environment, with a deviation of less than ±20 degrees from the occupant’s point of view of the outside environment, and zoom in the range of 30% to 300% (assuming the occupant’s unaided view is at 100% zoom level).
[0054] The VST may be generated based on at least one of the following resources: a video of the outside environment that is taken in real time, a video of the outside environment that was taken in the past and is played/processed according to the trajectory of the vehicle, a database of the outside environment that is utilized for rendering the VST according to the trajectory of the vehicle, and/or a video that is rendered as a function of locations of physical objects identified in the outside environment using detection and ranging systems such as RADAR and/or LIDAR.
[0055] Moreover, the term “video see-through (VST)” covers both direct representations of the outside environment, such as a video of the outside environment, and/or enriched video of the outside environment, such as captured video and/or rendered video of the outside environment presented together with one or more layers of virtual objects, as long as more than 20% of the average vehicle occupants, who are familiar with the outside environment, would be able to determine their location in the outside environment, while the vehicle travels, without using a map, and with a margin of error below 200 meters. However, it is noted that showing a map that indicates the location of the vehicle on the driving path (such as from the start to the destination) is not considered herein as equivalent to the VST, unless the map includes all of the following properties: the map shows images of the path, the images of the path capture at least 5 degrees of the occupant's FOV at eye level, and the images of the path reflect the dynamics of the vehicle and change in a similar manner to a video taken by a camera mounted to the vehicle and directed to the outside environment.
[0056] Herein, “field of view (FOV) of the occupant to the outside environment” refers to the part of the outside environment that is visible to the occupant of a vehicle at a particular position and orientation in space. In one example, in order for an occupant-tracking module to calculate the FOV to the outside environment of an occupant sitting in a vehicle compartment, the occupant-tracking module determines the position and orientation of the occupant’s head. In another example, in order for an occupant-tracking module to calculate the FOV of an occupant sitting in a vehicle compartment, the occupant-tracking module utilizes an eye tracker.
[0057] It is noted that sentences such as “a three-dimensional (3D) video see-through (VST) that represents a view of the outside environment, which could have been seen from the point of view of the occupant had the FOV not been obstructed by at least a portion of the nontransparent element” cover also just one or more portions of the FOV, and are to be interpreted as “a three-dimensional (3D) video see-through (VST) that represents a view of at least a portion of the outside environment, which could have been seen from the point of view of the occupant had at least some of the FOV not been obstructed by at least a portion of the nontransparent element”.
[0058] The term “display” refers herein to any device that provides a human user with visual images (e.g., text, pictures, and/or video). The images provided by the display may be two-dimensional or three-dimensional images. Some non-limiting examples of displays that may be used in embodiments described in this disclosure include: (i) screens and/or video displays of various devices (e.g., televisions, computer monitors, tablets, smartphones, or smartwatches), (ii) headset- or helmet-mounted displays such as augmented-reality systems (e.g., HoloLens®), virtual-reality systems (e.g., Oculus Rift®, HTC® Vive®, or Samsung GearVR®), and mixed-reality systems (e.g., Magic Leap®), and (iii) image projection systems that project images on an occupant’s retina, such as Virtual Retinal Displays (VRD), which create images by projecting low-power light directly onto the retina, and/or light-field technologies that project light rays directly into the eye.
[0059] Various embodiments may include a reference to elements located at eye level. The “eye level” height is determined according to an average adult occupant for whom the vehicle was designed, who sits straight and looks to the horizon. Sentences in the form of “an element located at eye level of an occupant who sits in a vehicle” refer to the element and not to the occupant. The occupant is used in such sentences in the context of “eye level”, and thus claims containing such sentences do not require the existence of the occupant in order to construct the claim.
[0060] Sentences such as “SAEDP located at eye level”, “stiff element located at eye level”, and “crumple zone located at eye level” refer to elements that are located at eye level, but may also extend to other levels, such as from sternum level to roof level, from floor level to eye level, and/or from floor level to roof level. For example, an SAEDP located at eye level can extend from sternum level to above the occupant’s head, such that at least a portion of the SAEDP is located at the eye level.
[0061] Herein, “normal driving” refers to typical driving conditions, which persist most of the time the autonomous vehicle is in forward motion. During normal driving, the probability of a collision is expected to be below a threshold. When the threshold is reached, at least one of the following activities may be taken: deployment of safety devices that are not usually deployed (e.g., inflating airbags), taking evasive action to avoid a collision, and warning occupants of the vehicle about an imminent event that may cause a Sudden Decrease in Ride Smoothness (SDRS).
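As a minimal illustration of the threshold rule described above (the function name, the action labels, and the threshold value are assumptions chosen for the sketch, not values from the patent):

```python
def actions_on_collision_risk(p_collision: float, threshold: float = 0.01) -> list:
    """During normal driving p_collision stays below the threshold and no special
    action is taken; once the threshold is reached, one or more of the mitigating
    activities named in the text may be triggered."""
    if p_collision < threshold:
        return []                          # normal driving: no special action
    return ["deploy_safety_devices",       # e.g., inflate airbags
            "take_evasive_action",
            "warn_occupants_of_imminent_sdrs"]
```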
[0062] A Shock-Absorbing Energy Dissipation Padding (SAEDP) is an element that may be used to cushion the impact of a body during a collision or during SDRS events. Various types of SAEDPs may be used in embodiments described herein, such as passive materials, airbags, and pneumatic pads.
[0063] Some examples of passive materials that may be used for the SAEDP in one or more of the disclosed embodiments include one or more of the following materials: CONFOR® foam by Trelleborg Applied Technology, Styrofoam® by The Dow Chemical Company®, Micro-Lattice Materials and/or Metallic Microlattices (such as by HRL Laboratories in collaboration with researchers at the University of California and Caltech), non-Newtonian energy absorbing materials (such as D3O® by D3O lab, and DEFLEXION™ by Dow Corning®), Sorbothane® by Sorbothane Incorporated, padding that includes compression cells and/or shock absorbers of the Xenith® LLC type (such as described in US patent num. 8,950,735 and US patent application num. 20100186150), and materials that include rubber, such as a sponge rubber.
[0064] The term “stiff element”, together with any material mounted between an SAEDP and the outside environment, refers to a material having stiffness and impact resistance equal to or greater than that of glazing materials for use in motor vehicles as defined in the following two standards: (i) “American National Standard for Safety Glazing Materials for Glazing Motor Vehicles and Motor Vehicle Equipment Operating on Land Highways - Safety Standard” ANSI/SAE Z26.1-1996, and (ii) the Society of Automotive Engineers (SAE) Recommended Practice J673, revised April 1993, “Automotive Safety Glasses” (SAE J673, rev. April 93). The term “stiff element” in the context of low-speed vehicles, together with any material mounted between the SAEDP and the outside environment, refers to a material having stiffness and impact resistance equal to or greater than that of glazing materials for use in low-speed motor vehicles as defined in Federal Motor Vehicle Safety Standard 205 - Glazing Materials (FMVSS 205), from 49 CFR Ch. V (10-1-04 Edition). The stiff element may be transparent (such as automotive laminated glass, or automotive tempered glass) or nontransparent (such as fiber-reinforced polymer, carbon fiber reinforced polymer, steel, or aluminum).
[0065] Herein, a nontransparent element is defined as an element having Visible Light Transmittance (VLT) between 0% and 20%, which does not enable the occupant to recognize what lies on the other side of it. For example, a thick ground glass usually allows light to pass through it but does not let the occupant recognize the objects on the other side of it, unlike plain tint glass that usually lets the occupant recognize the objects on the other side of it, even when it features VLT below 10%. The term “nontransparent” includes an opaque element having VLT of essentially 0% and includes a translucent element having VLT below 20%. VLT is defined as the amount of incident visible light that passes through the nontransparent element, where incident light is defined as the light that strikes the nontransparent element. VLT is also known as Luminous Transmittance of a lens, a light diffuser, or the like, and is defined herein as the ratio of the total transmitted light to the total incident light. The common clear vehicle windshield has a VLT of approximately 85%, although US Federal Motor Vehicle Safety Standard No. 205 allows the VLT to be as low as 70%.
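Written as a formula, the VLT definition above is simply the ratio of transmitted to incident visible light (the notation is a restatement for clarity, not additional patent text):

```latex
\mathrm{VLT} \;=\; \frac{\text{total transmitted visible light}}{\text{total incident visible light}} \times 100\%,
\qquad \text{nontransparent element: } 0\% \le \mathrm{VLT} \le 20\%.
```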
[0066] Sentences such as “video unrelated to the VST (VUR)” mean that an average occupant would not recognize the video as a representation of the outside environment. In some embodiments, the content of the VUR does not change as a function of the position of the occupant’s head, which means that the point of view from which the occupant watches the VUR does not change essentially when the occupant’s head moves. Herein, stabilization effects, image focusing, dynamic resolution, color corrections, and insignificant changes to less than 10% of the frame as a function of the position of the occupant’s head are still considered as content that does not change as a function of the position of the occupant’s head. Examples of such content (common in the year 2016) include cinema movies, broadcast TV shows, standard web browsers, and Microsoft Office® applications (such as Word, Excel and PowerPoint®).
[0067] Herein, a “crumple zone” refers to a structure designed to slow down inertia and absorb energy from impact during a traffic collision by controlled deformation. The controlled deformation absorbs some of the impact within the outer parts of the vehicle, rather than being directly transferred to the occupants, while also preventing intrusion into and/or deformation of the compartment. A crumple zone may be achieved by various configurations, such as one or more of the following exemplary configurations: (i) by controlled weakening of sacrificial outer parts of the vehicle, while strengthening and increasing the stiffness of the inner parts of the vehicle, such as by using more reinforcing beams and/or higher-strength steels for the compartment; (ii) by mounting composite fiber honeycomb or carbon fiber honeycomb outside the compartment; (iii) by mounting an energy-absorbing foam outside the compartment; and/or (iv) by mounting an impact attenuator that dissipates impact.
[0068] It has become more and more common for vehicle occupants to engage in various work- or entertainment-related activities. The activities typically involve various forms of displays which the occupants can view, e.g., instead of looking out of the vehicle. This can be a productive or entertaining way to pass the time spent traveling. And while the occupants may be mostly engaged in their work or entertainment, at times they might want to view the outside environment.
[0069] Traditionally, a vehicle occupant views the outside of the vehicle through physical windows. Most on-road vehicles, including autonomous and non-autonomous vehicles, include as part of the vehicle body one or more windows, such as a windshield, side windows, or a rear window. The purpose of these windows is to offer vehicle occupants a view of the outside world. However, this feature comes at a cost; there are several drawbacks to using windows in vehicles.
[0070] Vehicle windows are typically made of glass or other transparent stiff materials. This makes most windows heavy and often expensive to manufacture. In addition, windows are typically poor thermal insulators, which can greatly increase the energy demands of a vehicle’s climate control systems, especially when the sun beats down. Furthermore, in the case of a collision, windows may put a vehicle’s occupants at risk, such as being hit by external objects due to intrusion of foreign objects, being thrown out of the vehicle, or being struck by parts of the vehicle they are traveling in.
[0071] Thus, there is a need for vehicles that can offer an advantage offered by windows (e.g., a view of the outside) but do not suffer from at least some of the shortcomings of vehicle windows, such as the increased safety risk that windows often pose.
[0072] In order to enable an occupant of a vehicle to view the outside environment without needing to look out of a physical window, some aspects of this disclosure involve systems that combine video see-through (VST) with video-unrelated-to-the-VST (VUR).
[0073] In one embodiment, a system configured to combine video see-through (VST) with video-unrelated-to-the-VST (VUR) includes a head-mounted display (HMD), a camera, an HMD tracking module and a computer. The HMD is configured to be worn by an occupant of a compartment of a moving vehicle and to present an HMD-video to the occupant. The camera, which is mounted to the vehicle, is configured to take video of the outside environment (Vout). The HMD tracking module is configured to calculate the position of the HMD relative to the compartment, based on measurements of a sensor. The computer is configured to receive a location of a video see-through window (VSTW) in relation to the compartment, and to calculate, based on the position of the HMD relative to the compartment, a window-location for the VSTW on the HMD-video. Additionally, the computer is further configured to generate, based on the window-location and the Vout, the VST that represents a view of the outside environment from the point of view of the occupant. The computer is also configured to generate the HMD-video based on combining the VUR with the VST in the window-location. It is to be noted that the content of the VUR is unrelated to the video taken by the camera.
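One way to picture the window-location calculation in this embodiment is as a change of coordinate frames: the VSTW is fixed in the compartment frame, the HMD tracking module supplies the HMD pose in that same frame, and the window-location is the projection of the VSTW into HMD display coordinates. The sketch below is only an illustration of that idea; the matrix conventions, the pinhole-style projection, and the function name are assumptions rather than anything specified in the patent.

```python
import numpy as np

def window_location_on_hmd(vstw_corners_comp: np.ndarray,
                           T_comp_to_hmd: np.ndarray,
                           K_display: np.ndarray) -> np.ndarray:
    """Project the VSTW, given as 3D corner points in the compartment frame,
    into 2D HMD-display coordinates.

    vstw_corners_comp: (N, 3) corners of the VSTW in the compartment frame.
    T_comp_to_hmd:     (4, 4) transform from the compartment frame to the HMD
                       frame (derived from the HMD tracking module output).
    K_display:         (3, 3) pinhole-style projection for the HMD display.
    """
    n = vstw_corners_comp.shape[0]
    homog = np.hstack([vstw_corners_comp, np.ones((n, 1))])   # (N, 4) homogeneous points
    corners_hmd = (T_comp_to_hmd @ homog.T).T[:, :3]          # (N, 3) in the HMD frame
    pixels = (K_display @ corners_hmd.T).T                    # perspective projection
    return pixels[:, :2] / pixels[:, 2:3]                     # normalize to 2D pixels
```

Because the VSTW is expressed in the compartment frame, the projected window-location automatically shifts on the display as the occupant's head moves, which matches the "imitate a physical window" behavior described later for FIG. 4.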
[0074] In one embodiment, a system configured to combine video see-through (VST) with video-unrelated-to-the-VST (VUR) includes at least the following components: a head-mounted display (HMD), such as HMD 15, a camera (e.g., camera 12), an HMD tracking module 27, and a computer 13. FIG. 1 provides a schematic illustration of at least some of the relationships between the components mentioned above.
[0075] The HMD 15 is configured to be worn by an occupant of a compartment of a moving vehicle and to present an HMD-video 16 to the occupant. In one embodiment, the HMD 15 is an augmented-reality (AR) HMD. In another embodiment, the HMD 15 is a virtual-reality (VR) HMD. Optionally, in this embodiment, the system further comprises a video camera mounted to the VR HMD, and the VST video comprises video of the compartment received from the video camera mounted to the VR HMD. In yet another embodiment, the HMD 15 is a mixed-reality HMD. The term “Mixed Reality” (MR), as used herein, involves a system that is able to combine real-world data with virtual data. Mixed Reality encompasses Augmented Reality and encompasses Virtual Reality that does not immerse its user 100% of the time in the virtual world. Examples of mixed-reality HMDs include, but are not limited to, the Microsoft HoloLens® HMD and the MagicLeap® HMD.
[0076] The camera 12, which is mounted to the vehicle, is configured to take video of the outside environment (Vout). Optionally, the data captured by the camera comprises 3D data. For example, the camera may be based on at least one of the following sensors: a CCD sensor, a CMOS sensor, a near infrared (NIR) sensor, an infrared (IR) sensor, and a camera based on active illumination such as a LiDAR.
[0077] The HMD tracking module 27 is configured to calculate the position of the HMD 15 relative to the compartment, based on measurements of a sensor. In different embodiments, the HMD tracking module 27 may have different configurations.
[0078] In one embodiment, the sensor comprises first and second Inertial Measurement Units (IMUs). In this embodiment, the first IMU is physically coupled to the HMD 15 and is configured to measure a position of the HMD 15, and the second IMU is physically coupled to the compartment and is configured to measure a position of the compartment. The HMD tracking module 27 is configured to calculate the position of the HMD 15 in relation to the compartment based on the measurements of the first and second IMUs.
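A minimal way to express the relative-position computation in this two-IMU embodiment: if each IMU (after integration) yields a pose in a common world frame, the HMD pose relative to the compartment is the compartment pose inverted and composed with the HMD pose. The 4x4 homogeneous-matrix representation and the function name below are assumptions chosen for illustration; in practice IMU drift would be handled by filtering, which the sketch omits.

```python
import numpy as np

def hmd_pose_in_compartment(T_world_hmd: np.ndarray,
                            T_world_comp: np.ndarray) -> np.ndarray:
    """Relative pose of the HMD expressed in the compartment frame, given the
    poses of the HMD and of the compartment in a common world frame (e.g., as
    integrated from the two IMUs). Both inputs are 4x4 homogeneous transforms."""
    return np.linalg.inv(T_world_comp) @ T_world_hmd
```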
[0079] In another embodiment, the sensor comprises an Inertial Measurement Unit (IMU) and a location measurement system. In this embodiment, the IMU is physically coupled to the HMD 15 and is configured to measure an orientation of the HMD 15. The location measurement system is physically coupled to the compartment and is configured to measure a location of the HMD in relation to the compartment. The HMD tracking module 27 is configured to calculate the position of the HMD 15 in relation to the compartment based on the measurements of the IMU and the location measurement system. Optionally, the location measurement system measures the location of the HMD 15 in relation to the compartment based on at least one of the following inputs: a video received from a camera that captures the HMD 15, a video received from a stereo vision system, measurements of magnetic fields inside the compartment, wireless triangulation measurements, acoustic positioning measurements, and measurements of an indoor positioning system.
[0080] FIG. 2 illustrates one embodiment in which the HMD tracking module 27 is physically coupled to the compartment and is configured to measure the position of the HMD relative to the compartment. The HMD tracking module 27 may utilize a passive camera system, an active camera system that captures reflections of a transmitted grid, and/or a real-time locating system based on microwaves and/or radio waves.
[0081] The computer 13 is configured to receive a location of a video see-through window (VSTW) in relation to the compartment, and to calculate, based on the position of the HMD relative to the compartment, a window-location for the VSTW on the HMD-video. The computer 13 is also configured to generate, based on the window-location and the Vout, the VST that represents a view of the outside environment from the point of view of the occupant. Optionally, the VST is rendered as 3D video content. Additionally, the computer 13 is further configured to generate the HMD-video 16 based on combining the VUR with the VST in the window-location. The computer 13 may use various computer graphics functions and/or libraries known in the art to generate the VST, transform the VST to the occupant’s point of view, render the 3D video content, and/or combine the VUR with the VST.
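The combining step can be sketched as a per-frame composite: place the VST frame, optionally with partial transparency, into the VUR frame at the computed window-location. A minimal sketch follows, assuming RGB frames as NumPy arrays and a window that fits entirely inside the VUR frame; the alpha parameter and function name are illustrative assumptions, not the patent's terminology.

```python
import numpy as np

def compose_hmd_frame(vur_frame: np.ndarray,
                      vst_frame: np.ndarray,
                      window_xy: tuple,
                      alpha: float = 1.0) -> np.ndarray:
    """Overlay the VST frame onto the VUR frame at the window-location.
    alpha = 1.0 gives a non-transparent VST; 0 < alpha < 1 gives a partially
    transparent VST, as in some of the illustrated embodiments."""
    out = vur_frame.copy()
    x, y = window_xy                       # top-left pixel of the window-location
    h, w = vst_frame.shape[:2]
    region = out[y:y + h, x:x + w]         # VUR pixels under the window
    blended = alpha * vst_frame + (1.0 - alpha) * region
    out[y:y + h, x:x + w] = blended.astype(out.dtype)
    return out
```

The same blending step can be applied a second time for the compartment-video window (CVW) described in the paragraphs below, since it is just another layer composited at its own location.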
[0082] In one embodiment, the content of the VUR does not change when the occupant moves the head, and the content of the VUR is unrelated to the video taken by the camera. Additionally, the content of the VUR is generated based on data from more than 2 seconds before the HMD-video 16 is displayed to the occupant. Some examples of the VUR include a video stream of at least one of the following types of content: a recorded television show, a computer game, an e-mail, and a virtual computer desktop.
[0083] FIG. 3 illustrates one embodiment in which the occupant 14 wears an HMD 15. The HMD 15 provides video to the occupant 14 through the display of the HMD 15. The vehicle includes a camera 12 that takes video of the outside environment 11a, which is processed in a manner suitable for the location of the occupant. The HMD 15 presents this video to the occupant as a VSTW, and the position of the VSTW is calculated in relation to the compartment of the vehicle and moves with the compartment. While the vehicle is in motion, the VSTW changes its content to represent the outside environment 11a of the vehicle, whereas the video-unrelated-to-the-VST does not change when the occupant moves his head. The computer is configured to receive a location of a VSTW in relation to the compartment, and to calculate, based on the position of the occupant’s head, a window-location for the VSTW on the video.
[0084] FIG. 4 illustrates one embodiment in which the occupant 44 wears HMD 45 and views a large VUR 40 and a smaller VST 41a. The VUR 40 does not change when the head of the occupant 44 moves. The VSTW presents video of the street based on video taken by the camera that is mounted to the vehicle. The location of the video-see-through window in relation to the compartment does not change when the occupant 44 moves his/her head, in order to imitate a physical window that does not change its position relative to the compartment when the occupant’s head moves.
[0085] FIG. 5a illustrates how, in one embodiment, the VST moves to the upper left when the occupant 44 looks to the bottom right. FIG. 5b illustrates how the VST moves to the bottom right when the occupant 44 looks to the upper left, while the VUR moves with the head. In both cases, the VUR moves with the head while the location of the VST changes according to the movement of the head relative to the compartment as measured by the HMD tracking module 27.
[0086] In some embodiments, the content of the VUR may be augmented-reality content, mixed-reality content, and/or virtual-reality content rendered to correspond to the occupant’s viewing direction. In this embodiment, the VUR is unrelated to the video taken by the camera. In one example, the VUR may include a video description of a virtual world in which the occupant may be playing in a game (e.g., represented by an avatar). Optionally, in this example, most of the features of the virtual world are different from the view of the outside of the vehicle (as seen from the occupant’s viewing direction). For example, the occupant may be driving in a city, while the virtual world displays woods, a meadow, or outer space. In another example, the VUR may include augmented-reality content overlaid above a view of the inside of the compartment.
[0087] In addition to the components described above, in some embodiments, the system may include a second camera that is mounted to the HMD and is configured to take video of the compartment (Vcomp). In this embodiment, the computer is further configured to generate a compartment-video (CV), based on Vcomp and a location of a compartment-video window (CVW) in relation to the HMD-video (e.g., HMD-video 16), and to generate the HMD-video also based on the CV in the CVW, such that the HMD-video combines the VUR with the VST in the window-location with the CV in the CVW. There are various ways in which the CVW may be incorporated into the HMD-video. Some examples of these approaches are illustrated in the following figures.
[0088] FIG. 6 illustrates HMD-video that includes both a non-transparent VST 55 in the window-location and a CV 56 that shows the hands of the occupant and the interior of the compartment in the CVW. FIG. 7 illustrates HMD-video that includes both a partially transparent VST 57 in the window-location and the CV 56 that shows the hands of the occupant and the interior of the compartment in the CVW. FIG. 8 illustrates HMD-video that includes a VST 58 and a partially transparent CV 59. The figure illustrates that the occupant sees the outside environment in full field-of-view (FOV), while on top of it there is a partially transparent image (illustrated as a dotted image) of the compartment and the hands of the occupant, in order to help the occupant not to hit things in the compartment.
[0089] FIG. 9a illustrates HMD-video that includes a VUR 70 in full FOV, a first window comprising the CV 71 in the CVW and a second smaller window comprising the VST 72 in the window-location.
[0090] FIG. 9b illustrates HMD-video that includes VUR 70 in full FOV, a first window comprising the CV 71 in the CVW and a second partially transparent smaller window comprising the VST 73 in the window-location.
[0091] FIG. 10a illustrates HMD-video that includes VUR 70 in full FOV, a first window comprising VST 75 in the window-location and a second smaller window comprising a zoom-out of the CV 76 in the CVW. Optionally, the cabin view in the zoom-out is smaller than reality, and may enable the occupant to orient in the cabin. Optionally, the occupant may move the CVW, as illustrated in FIG. 10a, where the zoom-out of the CV in the CVW is somewhat above its location in reality.
[0092] FIG. 10b illustrates HMD-video that includes VUR 70 and a partially transparent CV 72. Here a first occupant sees the VUR in full field-of-view (FOV), and on top of it there is a partially transparent image of the compartment and a second occupant who sits to the left of the first occupant, which may help the first occupant not to hit the second occupant.
[0093] There may be various ways in which the system determines the location and/or size of the VSTW. In one embodiment, the VSTW is pinned to at least one of the following locations: a specific physical location and a location of an object in the compartment, such that the location of the VSTW in relation to the compartment does not change when the occupant moves his/her head with the HMD 15 as part of watching the HMD-video 16 and without commanding the VSTW to move in relation to the compartment.
[0094] In another embodiment, the system includes a user interface configured to receive a command from the occupant to move and/or resize the VSTW in relation to the compartment. In one example, the command is issued through a voice command (e.g., saying “move VST to the bottom”). In another example, the command may be issued by making a gesture, which is detected by a gesture control module in the compartment and/or on a device of the occupant (e.g., as part of the HMD). Optionally, in this embodiment, the computer is further configured to: update the window-location based on the command from the occupant, and generate an updated VST based on the updated window-location and the video taken by the camera. In this embodiment, the VST and the updated VST present different VSTW locations and/or dimensions in relation to the compartment. Optionally, the HMD is configured not to present any part of the VST to the occupant when the window-location is not in the field of view presented to the occupant through the HMD.
[0095] In yet another embodiment, the system may further include a video analyzer configured to identify an Object Of Interest (OOI) in the outside environment. For example, the OOI may be a certain landmark (e.g., a building), a certain object (e.g., a store or a certain model of automobile), or a person. In this embodiment, the computer is further configured to receive, from the video analyzer, an indication of the position of the OOI, and to track the OOI by adjusting the window-location according to the movements of the vehicle, such that the OOI is visible via the VST. Optionally, the HMD is configured not to present any part of the VST to the occupant when the window-location is not in the field of view presented to the occupant through the HMD.
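One possible reading of this OOI-tracking behavior is to recenter the window-location on the projection of the OOI into display coordinates and to suppress the VST when that projection falls outside the presented field of view. The sketch below illustrates that reading only; the projection model, function name, and inputs are assumptions for illustration and not the patent's specified method.

```python
import numpy as np

def track_ooi_window(ooi_position_comp: np.ndarray,
                     T_comp_to_hmd: np.ndarray,
                     K_display: np.ndarray,
                     display_size: tuple):
    """Return a window-location (pixel center) that keeps the Object Of Interest
    visible via the VST, or None if the OOI falls outside the field of view
    presented through the HMD (in which case no VST is presented)."""
    p = T_comp_to_hmd @ np.append(ooi_position_comp, 1.0)  # OOI in the HMD frame
    if p[2] <= 0:                       # OOI is behind the viewing direction
        return None
    u, v, w = K_display @ p[:3]         # perspective projection
    u, v = u / w, v / w
    width, height = display_size
    if 0 <= u < width and 0 <= v < height:
        return (u, v)
    return None                         # outside the presented FOV: hide the VST
```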
[0096] The VST that represents the view of the outside environment from the point of view of the occupant, in some embodiments, does not necessarily match the video taken by the cameras. In one embodiment, the VST may utilize image enhancement techniques to compensate for outside lighting conditions, to give an occupant an experience similar to looking out through a conventional vehicle window but without the view being distorted by raindrops or dirt on the window, or to improve the visual impression of the outside environment, e.g., by showing background images that are different from those retrievable from the outside environment. Additionally or alternatively, the VST may mimic the outside environment, alter the outside environment, and/or be completely different from what can be seen in the outside environment. The VST may focus on providing visual information that makes the traveling more fun. The vehicle may provide different styles of the outside environment to different occupants in the vehicle, such that a first VST provided to a first occupant mimics the outside environment, while a second VST provided to a second occupant alters the outside environment and/or is completely different from the outside environment, optionally for comfort enhancement and/or entertainment.
[0097] In some embodiments, the VST is informative, and helps at least some of the occupants determine the location of the vehicle in the environment. In one embodiment, at least some of those occupants could not determine their location without the VST. In one example, less than 20% of average vehicle occupants who are familiar with the outside environment are able to determine their real location in the outside environment by watching the VUR, without using a map, with a margin of error of less than 100 meters, and while the vehicle travels; whereas more than 20% of the average vehicle occupants who are familiar with the outside environment are able to determine their real location in the outside environment by watching the VST, without using a map, with a margin of error of less than 100 meters, and while the vehicle travels.
[0098] FIG. 11a illustrates a FOV in the context of presented video and terminology used herein. The vehicle occupant 200 wears an HMD 201 that presents HMD-video (such as HMD-video 16). The HMD-video may be presented at a single focal plane, or at multiple focal planes, depending on the characteristics of the HMD 201 (when the occupant focuses on a certain focal plane, his/her point of gaze is said to be on that focal plane). In addition, the presented objects may be two-dimensional (2D) virtual objects and/or three-dimensional (3D) virtual objects, which may also be referred to as holographic objects. Element 204 represents the location of a nontransparent element physically coupled to the vehicle compartment. In one example, the HMD 201 is a holographic HMD, such as Microsoft HoloLens®, which can present content displayed on a series of focal planes that are separated by some distance. The virtual objects may be presented before the nontransparent element (e.g., polygons 202, 203), essentially on the nontransparent element 204, and/or beyond the nontransparent element (e.g., polygons 205, 206). As a result, the occupant's gaze distance may be shorter than the distance to the nontransparent element (e.g., the distance to polygons 202, 203), essentially equal to the distance to the nontransparent element 204, and/or longer than the distance to the nontransparent element (e.g., the distance to polygons 205, 206). Polygon 207 represents a portion of the presented video at eye level of the vehicle occupant, which in one example is within ±7 degrees from the horizontal line of sight. Although the figure illustrates overlapping FOVs of polygons 202, 203, 204, and 205, the HMD may show different objects, capturing different FOVs, at different focal planes. It is noted that the embodiments using a multi-focal-plane HMD are not limited to displaying content on a plane. For example, the HMD may project an image throughout a portion of, or all of, a display volume. Further, a single object, such as a vehicle, could occupy multiple volumes of space.
[0099] According to the terminology used herein, the nontransparent element 204 is said to be located on a FOV overlapping the FOV of polygons 205 and 203 because polygons 203, 204, and 205 share the same FOV. The FOV of polygon 206 is contained in the FOV of polygon 204, and the FOV of polygon 207 intersects the FOV of polygon 204. The FOV of polygon 203 is before the nontransparent element 204 and therefore may hide the nontransparent element 204 partially or entirely, especially when utilizing a multi-focal-plane HMD.
[0100] FIG. 11b illustrates a FOV in the context of the presented video, where the vehicle occupant 210 does not wear an HMD that presents the video, such as when watching an autostereoscopic display. The autostereoscopic display is physically located on plane 214, and the presented video may be presented at a single focal plane, or at multiple focal planes, depending on the characteristics of the autostereoscopic display. In one example, the autostereoscopic display is a holographic display, such as a SeeReal Technologies® holographic display, where the presented video may present virtual objects before the focal plane of the autostereoscopic display (e.g., planes 212, 213), essentially on the focal plane of the autostereoscopic display 214, and/or beyond the focal plane of the autostereoscopic display (e.g., planes 215, 216). As a result, the occupant's gaze distance may be shorter than the distance to the autostereoscopic display (e.g., planes 212, 213), essentially equal to the distance to the autostereoscopic display 214, and/or longer than the distance to the autostereoscopic display (e.g., planes 215, 216). The term "autostereoscopic" includes technologies such as automultiscopic, glasses-free 3D, glassesless 3D, parallax barrier, integral photography, lenticular arrays, compressive light field displays, holographic displays based on eye tracking, color filter pattern autostereoscopic displays, volumetric displays that reconstruct the light field, integral imaging that uses a fly's-eye lens array, and/or High-Rank 3D (HR3D).
[0101] FIG. 11c illustrates the FOV of a 3D camera that is able to capture sharp images at different focal lengths.
[0102] In some embodiments, the vehicle and/or the HMD utilize at least one Inertial Measurement Unit (IMU), and the system utilizes an Inertial Navigation System (INS) to compensate for imperfections in the IMU measurements. An INS typically has one or more secondary navigation sensors that provide direct measurements of the linear velocity, position, and/or orientation of the vehicle. These secondary navigation sensors could be anything from stereo vision systems, to GPS receivers, to digital magnetic compasses (DMCs), or any other type of sensor that could be used to measure linear velocity, position, and/or orientation. In one example, the information from these secondary navigation sensors is incorporated into the INS using an Extended Kalman Filter (EKF). The EKF produces corrections that are used to adjust the initial estimations of linear velocity, position, and orientation that are calculated from the imperfect IMU measurements. Adding secondary navigation sensors to an INS can increase its ability to produce accurate estimations of the linear velocity, position, and orientation of the vehicle over long periods of time.
[0103] In one embodiment, the system utilizes domain-specific assumptions in order to reduce drift of an INS used to calculate the HMD's spatial position in relation to the compartment. More specifically, the following methods may be used to reduce or correct drift. Such methods generally fall into the categories of using sensor fusion and/or domain-specific assumptions.
[0104] (i) Sensor fusion refers to processes in which signals from two or more types of sensors are used to update and/or maintain the state of a system. In the case of an INS, the state generally includes the orientation, velocity, and displacement of the device measured in a global frame of reference. A sensor fusion algorithm may maintain this state using IMU accelerometer and gyroscope signals together with signals from additional sensors or sensor systems. There are many techniques for performing sensor fusion, such as the Kalman filter and the particle filter.
[0105] One example of periodically correcting drift is to use position data from a triangulation positioning system relative to the compartment. Such systems try to combine the drift-free nature of positions obtained from the triangulation positioning system with the high sampling frequency of the accelerometers and gyroscopes of the IMU. Roughly speaking, the accelerometer and gyroscope signals are used to 'fill in the gaps' between successive updates from the triangulation positioning system.
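As an illustration of this 'fill in the gaps' approach, the following is a minimal sketch, assuming a hypothetical function name and a simple blending factor in place of a full Kalman filter; it is not the implementation used by the system.

```python
import numpy as np

def fuse_position(pos_est, vel_est, accel, dt, fix=None, alpha=0.05):
    """Blend high-rate IMU dead reckoning with occasional drift-free position fixes.

    pos_est, vel_est, accel -- 3-vectors (m, m/s, m/s^2) in the compartment frame
    dt    -- IMU sampling interval in seconds
    fix   -- position from the triangulation positioning system, or None between updates
    alpha -- assumed blending factor controlling how strongly a fix pulls the estimate
    """
    # Dead reckoning: integrate acceleration into velocity, and velocity into position.
    vel_est = vel_est + np.asarray(accel, dtype=float) * dt
    pos_est = pos_est + vel_est * dt

    # When a slower but drift-free triangulation fix arrives, nudge the estimate
    # toward it rather than replacing it, which keeps the rendered view smooth.
    if fix is not None:
        pos_est = (1.0 - alpha) * pos_est + alpha * np.asarray(fix, dtype=float)

    return pos_est, vel_est
```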
[0106] Another example of reducing the drift is using a vector magnetometer that measuresmagnetic field strength in a given direction. The IMU may contain three orthogonalmagnetometers in addition to the orthogonal gyroscopes and accelerometers. The magnetometersmeasure the strength and direction of the local magnetic field, allowing the north direction to befound.
[0107] (ii) In some embodiments, it is possible to make domain-specific assumptions about the movements of the occupant and/or the vehicle. Such assumptions can be used to minimize drift. One example in which domain-specific assumptions may be exploited is the assumption that when the vehicle accelerates or decelerates significantly, the HMD accelerates or decelerates essentially the same as the vehicle, allowing HMD drift in velocity to be periodically corrected based on a more accurate velocity received from the autonomous-driving control system of the vehicle. Another example in which domain-specific assumptions may be exploited is the assumption that when the vehicle accelerates or decelerates significantly, the HMDs of two occupants traveling in the same vehicle accelerate or decelerate essentially the same, allowing HMD drifts to be periodically corrected based on comparing the readings of the two HMDs. Still another example in which domain-specific assumptions are exploited is the assumption that the possible movement of an HMD of a belted occupant is most of the time limited to a portion of the compartment, allowing HMD drifts to be periodically corrected based on identifying when the HMD moves beyond that portion of the compartment.
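As an illustration of the first assumption above, the following sketch snaps the HMD's drifting velocity estimate to the velocity reported by the autonomous-driving control system whenever the vehicle accelerates or decelerates strongly; the function name and the 3 m/s² threshold are assumptions for illustration.

```python
ACCEL_THRESHOLD = 3.0  # m/s^2; assumed threshold for a "significant" acceleration

def correct_hmd_velocity(hmd_vel, vehicle_vel, vehicle_accel_magnitude):
    """During strong vehicle acceleration the HMD is assumed to move essentially
    with the vehicle, so the drifting INS velocity can be reset to the vehicle's."""
    if vehicle_accel_magnitude >= ACCEL_THRESHOLD:
        return list(vehicle_vel)   # adopt the more accurate vehicle velocity
    return hmd_vel                 # otherwise keep the INS estimate
```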
[0108] In one example, it may be desirable to adjust the position at which a virtual object is displayed in response to relative motion between the vehicle and the HMD, so that the virtual object appears stationary. However, the HMD's IMU may indicate that the HMD is moving even when the detected motion is a motion of the vehicle carrying the HMD. In order to distinguish between motion of the HMD caused by the vehicle and motion of the HMD relative to the vehicle, non-HMD sensor data may be obtained by the HMD from sensors such as an IMU located in the vehicle and/or the GPS system of the vehicle, and the motion of the vehicle may be subtracted from the motion of the HMD in order to obtain a representation of the motion of the HMD relative to the vehicle. By differentiating movements of the HMD caused by the occupant's motion from movements caused by the vehicle's motion, the rendering of the virtual object may be adjusted for the relative motion between the HMD and the vehicle.
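A minimal sketch of the subtraction described above, assuming the HMD and the in-vehicle IMU report angular rates and accelerations in a shared frame of reference; the names are illustrative only.

```python
import numpy as np

def hmd_motion_relative_to_vehicle(hmd_gyro, vehicle_gyro, hmd_accel, vehicle_accel):
    """Subtract vehicle motion (from an in-vehicle IMU and/or GPS) from HMD motion,
    so the renderer reacts only to the occupant's own head movement."""
    rel_angular_rate = np.asarray(hmd_gyro, dtype=float) - np.asarray(vehicle_gyro, dtype=float)
    rel_acceleration = np.asarray(hmd_accel, dtype=float) - np.asarray(vehicle_accel, dtype=float)
    return rel_angular_rate, rel_acceleration
```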
[0109] Using the nontransparent element, instead of a transparent glass window that provides the same FOV to the outside environment, may provide various benefits, such as: (i) reduced manufacturing cost of the vehicle compared to a similar vehicle having, instead of the nontransparent element, a transparent glass window that provides the same FOV to the outside environment as provided by the 3D display device; (ii) reduced weight of the vehicle compared to a similar vehicle having, instead of the nontransparent element, a transparent glass window that provides the same FOV to the outside environment as provided by the 3D display device and provides the same safety level; (iii) a better aerodynamic shape and lower drag for the vehicle, which results in improved energy consumption; and (iv) improved privacy for the occupant, as a result of not enabling an unauthorized person standing near the vehicle to see the occupant directly.
[0110] The term “real-depth VST window (VSTW)” is defined herein as an imaging displaythat shows a 3D image of an outside environment located beyond a wall that interrupts theoccupant’s unaided view of the outside environment. The real-depth VSTW has the followingcharacteristics: (i) the 3D image corresponds to a FOV to the outside environment beyond thewall, as would have essentially been seen by the occupant had the wall been removed; (ii) theoutside environment is captured by a camera, and the rendering of the 3D image is based onimages taken by the camera; and (iii) while looking via the imaging display, the occupant’s pointof gaze (where one is looking) is most of the time beyond the wall that interrupts the occupant’sunaided view of the outside environment.
[0111] A possible test to determine whether “(i) the 3D image corresponds to a FOV to theoutside environment beyond the wall, as would have essentially been seen by the occupant hadthe wall been removed” is whether an imaginary user standing beyond the wall, watching boththe real-depth VSTW and the outside environment, would recognize that at least 20% of thecontours of objects in the 3D image correspond to the contours of the objects seen on the outsideenvironment. Differences between the colors of the corresponding objects in the 3D image andthe outside environment usually do not affect the criterion of the 20% corresponding contours, aslong as the color difference does not affect the perception of the type of object. For example,different skin colors to corresponding people in the 3D image and the outside environment donot violate the criterion of the 20% corresponding contours. As another example, differences inthe weight and/or height of corresponding objects in the 3D image and the outside environmentdo not violate the criterion of the 20% corresponding contours as long as the imaginary userunderstands that the objects correspond to the same person.
[0112] Sentences such as “from the FOV of the occupant” are to be interpreted as no more than20 degrees angular deviation from the field of view of the occupant to the outside environment.Zoom in/out does not affect the FOV as long as the average occupant would still recognize therendered environment as the 3D VST. For example, zoom in of up to x4, which maintains nomore than 20 degrees angular deviation from the FOV of the occupant to the outside environment, is still considered “from the FOV of the occupant”. Reasonable lateral deviationessentially does not affect the FOV as long as the average occupant would still recognize therendered environment as the 3D VST. For example, displaying to the occupant the outsideenvironment from the FOV of a camera located on the roof of the occupant’s vehicle, is stillconsidered as showing the outside environment from the occupant’s FOV.
[0113] A possible test to determine whether "(ii) the outside environment is measured by a camera, and the images taken by the camera are used to render the 3D image" is whether the real-depth VSTW would display a different 3D VST when it does not receive the images taken by the camera. For example, assuming the camera is a 3D video camera and the 3D image is a manipulation of the images taken by the 3D video camera, then, when the real-depth VSTW does not receive the images, it cannot show the changes that are taking place in the outside environment. As another example, assuming the 3D image is mainly rendered from cached data stored in a database, and the camera is used to provide the setup of objects that behave in an unknown way, such as trajectories of nearby vehicles on the road, or a gesture of a person walking beyond the wall; then, when the output of the camera is used to render the 3D image, the real-depth VSTW would represent the unknown trajectory of the nearby vehicles or the unknown gesture of the person, while when the output of the camera is not used to render the 3D image, the real-depth VSTW would not represent the unknown trajectory of the nearby vehicles or the unknown gesture of the person, merely because the renderer does not have that data.
[0114] A possible test to determine whether "(iii) the occupant's point of gaze (where one is looking) is most of the time beyond the wall that interrupts the occupant's unaided view of the outside environment" includes the following steps: (a) use an eye tracker to determine the point of gaze in a representative scenario, (b) measure the distance to the wall, and (c) determine whether the average gaze distance is longer than the distance to the wall.
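The following sketch expresses steps (a)-(c) as a single check, assuming an eye tracker that reports gaze-distance samples in meters; the function and variable names are hypothetical.

```python
def gaze_mostly_beyond_wall(gaze_distances_m, wall_distance_m):
    """Step (a): gaze_distances_m is a list of gaze-distance samples collected by an
    eye tracker over a representative scenario. Step (b): wall_distance_m is the
    measured distance to the wall. Step (c): the criterion holds when the average
    gaze distance is longer than the distance to the wall."""
    average_gaze = sum(gaze_distances_m) / len(gaze_distances_m)
    return average_gaze > wall_distance_m
```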
[0115] It has become more and more common for vehicle occupants to engage in various work- or entertainment-related activities. These activities typically involve viewing various forms of displays, e.g., instead of looking out of the vehicle, and can offer a productive or entertaining way to pass the time spent traveling. The quality of the viewing experience can be influenced by the amount of ambient light that penetrates the vehicle.
[0116] US patent application num. 20150261219 describes an autonomous mode controllerconfigured to control the operation of shaded vehicle windows. However, the operation isunrelated to watching video.
[0117] Thus, there is a need to be able to control ambient light levels in a vehicle in a way thatrelates to consumption of video content while in the vehicle.
[0118] Some aspects of this disclosure involve a system that utilizes window shading of a vehicle window in order to improve the quality of video viewed by an occupant of the vehicle who wears a head-mounted display (HMD).
[0119] In one example, an autonomous on-road vehicle includes a system configured to enable an HMD to cooperate with a window light shading module. This example involves a light shading module, a camera, a processor, and the HMD. The light shading module is integrated with a vehicle window and is configured to be in at least first and second states. In the first state the Visible Light Transmittance (VLT) of the vehicle window is above 10% of ambient light entering through the window, in the second state the VLT of the vehicle window is below 50% of ambient light entering through the window, and the VLT of the vehicle window in the first state is higher than the VLT of the vehicle window in the second state. The camera is physically coupled to the vehicle and configured to take video of the outside environment. The processor is configured to generate, based on the video, a video see-through (VST) that represents the outside environment from a point of view of an occupant looking to the outside environment through at least a portion of the vehicle window. The HMD comprises an optical see-through component and a display component; the HMD is configured to operate according to a first mode of operation when the occupant looks in the direction of the vehicle window and the light shading module is in the first state, and to operate according to a second mode of operation when the occupant looks in the direction of the vehicle window and the light shading module is in the second state. The total intensity of the VST light, emitted by the display component and reaching the occupant's eyes, is higher in the second mode than in the first mode.
[0120] In one example, a system configured to enable a head-mounted display (HMD) tocooperate with a window light shading module of an autonomous on-road vehicle includes atleast the following elements: the HMD 62, a light shading module 61, a camera (such as camera12), and a processor 18. FIG. 12 is a schematic illustration of at least some of the relationshipsbetween the system elements mentioned above.
[0121] The light shading module 61 is integrated with a vehicle window and is configured to bein at least first and second states. Optionally, the light shading module 61 covers more than halfof the front windshield in the second state. In one example, in the first state, the Visible LightTransmittance (VLT) of the vehicle window is above 10% of ambient light entering through thewindow, and in the second state, the VLT of the vehicle window is below 50% of ambient lightentering through the window. Additionally, the VLT of the vehicle window in the first state ishigher than the VLT of the vehicle window in the second state. In another example, in the firststate the VLT of the vehicle window is above 70% of ambient light entering through the window, and in the second state, the VLT of the vehicle window is below 30% of ambient lightentering through the window.
[0122] Herein, “ambient light” in the context of a vehicle refers to visible light that is notcontrolled by the vehicle, such as light arriving from: the sun, lights of other vehicles, street/roadlighting, and various reflections from elements such as windows.
[0123] In some examples, utilizing the light shading module 61 may improve the quality ofimages viewed via the HMD 62 when the light shading module 61 is in the second state.Optionally, the perceived contrast of the optical see-through component is better when the lightshading module is in the second state compared to when the light shading module 61 is in thefirst state.
[0124] Various types of light shading modules may be utilized. In one example, the lightshading module 61 is a movable physical element configured to reduce the intensity of theambient light entering into the vehicle compartment through the vehicle window. Optionally, thelight shading module is unfurled from the inside of the compartment in order to block at least50% of the ambient light intensity. Optionally, the light shading module is unfurled from theoutside of the compartment in order to block at least 50% of the ambient light intensity. FIG. 13aillustrates a first mode where the occupant sees the outside environment through the optical see-through component. This figure illustrates the view that the occupant sees when looking outsidethrough the window. FIG. 13b illustrates a second mode where the occupant sees the outsideenvironment through the VST. In this example, the outside environment is a bit different, andthere is also a virtual Superman floating near the tree.
[0125] In another example, the light shading module 61 may be a curtain. FIG. 14 illustrates aVST over a curtain. FIG. 15 illustrates a light shading module that is unfurled from the inside ofthe compartment. FIG. 16 illustrates a light shading module that is unfurled from the outside ofthe compartment.
[0126] And in yet another example, the vehicle window is made of a material that is able toserve as the light shading module 61 by changing its transparency properties.
[0127] The camera is physically coupled to the vehicle, and configured to take video of the outside environment. For example, the camera may be based on at least one of the following sensors: a CCD sensor, a CMOS sensor, a near-infrared (NIR) sensor, an infrared (IR) sensor, and a camera based on active illumination, such as a LiDAR.
[0128] The processor is configured to generate, based on the video, a video see-through (VST 19) that represents the outside environment from a point of view of an occupant looking to the outside environment through at least a portion of the vehicle window. Optionally, the processor is further configured not to generate the VST 19 when the HMD 62 operates in the first mode.
[0129] The HMD 62 comprises an optical see-through component and a display component. Optionally, the HMD 62 is configured to operate according to a first mode of operation when the occupant looks in the direction of the vehicle window and the light shading module 61 is in the first state, and to operate according to a second mode of operation when the occupant looks in the direction of the vehicle window and the light shading module 61 is in the second state. The total intensity of the VST light, emitted by the display component and reaching the occupant's eyes, is higher in the second mode than in the first mode.
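The following is a minimal sketch of how the choice between the two modes of operation might be expressed; the function name, the state labels, and the intensity values are illustrative assumptions rather than part of the disclosure.

```python
def select_hmd_mode(occupant_looks_toward_window, shading_state):
    """Return (mode, vst_intensity) for the HMD.

    shading_state  -- 'first' (higher VLT) or 'second' (lower VLT)
    vst_intensity  -- illustrative relative intensity of the VST light emitted by the
                      display component; it is higher in the second mode than in the first.
    """
    if occupant_looks_toward_window and shading_state == 'second':
        return 'second_mode', 0.9   # ambient light is mostly blocked, so the VST dominates
    if occupant_looks_toward_window and shading_state == 'first':
        return 'first_mode', 0.2    # the optical see-through view dominates
    # When the occupant is not looking toward the window, the VST is optionally not generated.
    return 'first_mode', 0.0
```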
[0130] In one example, in the first mode, intensity of light that reaches the occupant’s eyes viathe optical see-through component is higher than intensity of light from the VST that is emittedby the display component and reaches the occupant’s eyes. And in the second mode, the intensityof light from the environment that reaches the occupant’s eyes via the optical see-throughcomponent is lower than the intensity of light from the VST that is emitted by the displaycomponent and reaches the occupant’s eyes. In one example, the total intensity of VST light,emitted by the display component and reaching the occupant’s eyes, is at least 50% higher in thesecond mode than in the first mode. In some cases, the display component may be based on adigital display that produces the virtual image (such as in Oculus rift®), direct retina illumination(such as in Magic Leap®), or other methods that are capable of producing the virtual image.
[0131] In one example, the system described above optionally includes an occupant trackingmodule configured to calculate the point of view of the occupant based on measurements of asensor. Optionally, the occupant tracking module is the HMD tracking module 27. Optionally, inthis example, the processor is further configured to render the VST based on data received fromthe occupant tracking module. Optionally, the display is a three dimensional (3D) displayconfigured to show the occupant the VST, such that point of gaze of the occupant, while lookingvia the 3D display device, is most of the time beyond the location of the light shading module61.
[0132] When traveling in a vehicle, there are various work- and entertainment-related activities to engage an occupant of the vehicle. Many of these activities typically involve viewing content on displays. And while most of the time the occupant may be engaged in content presented on a display, there are times in which a lack of awareness of the driving environment can lead to undesired consequences. For example, if an unexpected driving event occurs, such as hitting a speed bump, making a sharp turn, or hard braking, it may startle the occupant. Thus, there is a need for a way to make the occupant aware of certain unexpected driving events, in order to make the driving experience less distressful when such events occur.
[0133] In some embodiments, an occupant of a vehicle may have the opportunity to view videosee-through (VST), which is video generated based on video of the environment outside thevehicle. VST can often replace the need to look out of a window (if the vehicle has windows).Some examples of scenarios in which VST may be available in a vehicle include a windowlessvehicle, a vehicle with shaded windows having VLT below 30%, and/or when the occupantwears a VR headset. While traveling in such a vehicle, the occupant may benefit from gaining aview to the outside environment when an unexpected driving event occurs. By being made awareof the event, the occupant is less likely to be surprised, disturbed, and/or distressed by the event.
[0134] While traveling in a vehicle, an occupant of the vehicle may not always be aware of the environment outside and/or of what actions the vehicle is about to take (e.g., braking, turning, or hitting a speed bump). Thus, if such an event occurs without the occupant being aware that it is about to happen, it may cause the occupant to be surprised, disturbed, distressed, and even physically thrown off balance (in a case where the event involves a significant change in the balance of the physical forces on the occupant). This type of event is typically referred to herein as a Sudden Decrease in Ride Smoothness (SDRS) event. Some examples of SDRS events include at least one of the following events: hitting a speed bump, driving over a pothole, climbing on the curb, making a sharp turn, hard braking, an unusual acceleration (e.g., 0-100 km/h in less than 6 seconds), and starting to drive after a full stop.
[0135] In some embodiments, an SDRS event takes place at least 2 minutes after starting to travel and is not directly related to the act of starting to travel. Additionally, the SDRS event takes place at least 2 minutes before arriving at the destination and is not directly related to the act of arriving at the destination. In one example, a sentence such as "an SDRS event is imminent" refers to an SDRS event that is: (i) related to traveling in the vehicle, and (ii) expected to happen in less than 30 seconds, less than 20 seconds, less than 10 seconds, or less than 5 seconds. In another example, a sentence such as "an SDRS event is imminent" may refer to an event that starts at that instant, or is about to start within less than one second.
[0136] The following is a description of a video system that may be used to increase awarenessof an occupant of a vehicle regarding an imminent SDRS. FIG. 17 illustrates one example of avideo system for an autonomous on-road vehicle, which includes at least an autonomous-drivingcontrol system 65, a camera (such as camera 12), a processor (such as processor 18), and a videomodule 66.
[0137] The autonomous-driving control system 65 is configured to generate, based ontrajectory of the vehicle and information about the road, an indication indicative of whether aSudden Decrease in Ride Smoothness (SDRS) event is imminent. Optionally, the autonomous- driving control system 65 receives at least some of the information about the road from at leastone of the following sources: sensors mounted to the vehicle, sensors mounted to nearbyvehicles, an autonomous-driving control system 65 used to drive a nearby vehicle, and adatabase comprising descriptions of obstacles in the road that are expected to cause intensemovement of the vehicle. In one example, the database comprising the descriptions of theobstacles includes one or more of the following types of data: locations of speed bumps,locations of potholes, locations of stop signs, and locations of sharp turns in the road.
[0138] In one example, the autonomous-driving control system 65 is configured to generate theindication indicative of whether an SDRS event is imminent based on at least one of thefollowing configurations: (i) the autonomous-driving control system 65 receives from a cameraimages of the road, and calculates the indication based on the vehicle trajectory and imageanalysis of the images, (ii) the autonomous-driving control system 65 receives from a radarreflections of electromagnetic waves from the road, and calculates the indication based on thevehicle trajectory and signal processing of the reflections, and (iii) the autonomous-drivingcontrol system 65 receives a notification from a detailed road map, and calculates the indicationbased on the vehicle trajectory and the notification.
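The following is a minimal sketch of configuration (iii) above, in which the planned trajectory is checked against a database of known road obstacles; the data structures, the 50-meter horizon, and the 2-meter matching radius are assumptions for illustration.

```python
def sdrs_imminent(planned_trajectory, obstacle_db, horizon_m=50.0):
    """Indicate whether an SDRS event is imminent along the planned trajectory.

    planned_trajectory -- ordered list of (x, y) waypoints the vehicle will pass
    obstacle_db        -- list of dicts such as {'type': 'speed_bump', 'x': ..., 'y': ...}
    Returns True when a known obstacle lies within horizon_m meters along the path."""
    if len(planned_trajectory) < 2:
        return False
    travelled = 0.0
    prev = planned_trajectory[0]
    for point in planned_trajectory[1:]:
        travelled += ((point[0] - prev[0]) ** 2 + (point[1] - prev[1]) ** 2) ** 0.5
        if travelled > horizon_m:
            break
        for obstacle in obstacle_db:
            # An obstacle within ~2 m of an upcoming waypoint makes an SDRS event imminent.
            if ((obstacle['x'] - point[0]) ** 2 + (obstacle['y'] - point[1]) ** 2) ** 0.5 < 2.0:
                return True
        prev = point
    return False
```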
[0139] The camera, which is mounted to the vehicle, is configured to take video of theenvironment outside the vehicle. Optionally, the data captured by the camera comprises 3D data.The processor is configured to generate a video see-through (VST) based on the video taken bythe camera.
[0140] The video module 66 is configured to select a first mode of presentation, in whichvideo-unrelated-to-the-VST (VUR) is presented on the foveal vision region of the occupant, ateye level, responsive to the indication not indicating that an SDRS event is imminent. The videomodule 66 is further configured to select a second mode of presentation, in which the VST ispresented on the foveal vision region of the occupant, at eye level, responsive to the indicationindicating that an SDRS event is imminent. Optionally, the VST captures more than 50% of thefoveal vision region of the occupant in the second mode of presentation. Optionally, presentingvideo on the foveal vision region comprises presenting images with at least 50% transparency.Herein, “foveal vision” refers to an angle of about 5° of the sharpest field of vision.
[0141] In one example, in the first mode of presentation, the VUR is presented on the foveal vision region of the occupant with opacity A, and the VST is presented on the foveal vision region of the occupant with opacity B, where A>B>0. Optionally, a normalized opacity parameter takes a value from 0.0 to 1.0, and the lower the value, the more transparent the video is. In this example, in the second mode of presentation, the VUR is presented on the foveal vision region of the occupant with opacity A', and the VST is presented on the foveal vision region of the occupant with opacity B', where B'>B and B'>A'. In some examples, one or more of the following values may be true: A'>0, B=0, and A'=0. Herein, "partially transparent" refers to opacity below one and above zero.
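A sketch of the opacity bookkeeping described above, with illustrative values chosen only to satisfy A>B>0 in the first mode and B'>B, B'>A' in the second mode; the function name is hypothetical.

```python
def foveal_opacities(sdrs_event_imminent):
    """Return (vur_opacity, vst_opacity) for the foveal vision region,
    using normalized opacities in the range 0.0 to 1.0."""
    if not sdrs_event_imminent:
        A, B = 1.0, 0.1          # first mode: VUR dominates (A > B > 0)
        return A, B
    A_prime, B_prime = 0.3, 0.8  # second mode: VST dominates (B' > B and B' > A')
    return A_prime, B_prime
```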
[0142] Having the VST presented when an SDRS event is imminent can make the occupant aware of, and prepared for, the SDRS event. Thus, the occupant is less likely to be startled, distressed, and/or physically thrown off balance by the SDRS event. In one example, the SDRS event involves hitting a speed bump while the occupant views a movie. About 5 seconds prior to hitting the speed bump, a partially transparent window displaying the VST, in which the speed bump is highlighted (e.g., flashing red), is presented on the foveal vision region of the occupant for a couple of seconds (e.g., by being presented in the center of the movie). This way, upon hitting the speed bump, the occupant is not startled by the event. In another example, the autonomous-driving control system 65 determines that hard braking is required, e.g., in order to avoid a collision with a vehicle ahead that slowed unexpectedly. In this example, the occupant may be working on a virtual desktop, and within 100 milliseconds of when the determination is made that the vehicle is about to rapidly decelerate (hard braking), the VST depicting the rear of the vehicle ahead is displayed in the center of the virtual desktop. This way, the occupant is immediately made aware of why the vehicle is braking, and this notification may prompt the occupant to assume a more appropriate posture for the braking.
[0143] Some illustrations of the utilization of the different modes of operation are given in the following figures. FIG. 18a illustrates presenting the VUR responsive to not receiving from the autonomous-driving control system 65 an indication that an SDRS event is imminent. This figure has two parts: the left part shows the vehicle driving over a clean road, and the right part shows the VUR. FIG. 18b illustrates presenting the VST responsive to receiving from the autonomous-driving control system 65 an indication that an SDRS event is imminent. The figure has two parts: the left part shows the vehicle about to drive over a pothole, and the right part shows a small window showing the pothole over the VUR (optionally to warn the occupant). FIG. 18c illustrates presenting the VST responsive to receiving from the autonomous-driving control system 65 an indication that an SDRS event is imminent. The figure has two parts: the left part shows the vehicle about to enter a sharp turn, and the right part shows a small window showing the sharp turn over the VUR (optionally to warn the occupant).
[0144] Traditional vehicles typically have a front windshield that offers occupants of the vehicle a frontal view of the outside environment. However, in some cases, this frontal view may be provided using the VST. For example, the vehicle may include a nontransparent element, which is coupled to the vehicle and obstructs at least 30 degrees out of the frontal horizontal unaided FOV to the outside environment of an occupant at eye level. In one example of a standard vehicle, such as a Toyota® Camry® model 2015, the frontal horizontal unaided FOV extends from the left door through the windshield to the right door.
[0145] The use of the nontransparent element improves the safety of the occupant during acollision compared to a similar vehicle having the same total weight and comprising atransparent glass window instead of the nontransparent element. The nontransparent elementmay be coupled to the vehicle in various configurations. In one example, the nontransparentelement is physically coupled to the vehicle at an angle, relative to the occupant, that is coveredby the field of view of the VST, and the nontransparent element features visible lighttransmittance (VLT) below 10% of ambient light.
[0146] Various types of displays may be utilized to present the occupant with video (e.g., theVST and/or the VUR). In one example, the video is presented to the occupant on a screencoupled to the vehicle compartment. In one example, the screen coupled to the vehiclecompartment utilizes parallax barrier technology. A parallax barrier is a device located in frontof an image source, such as a liquid crystal display, to allow it to show a stereoscopic image ormultiscopic image without the need for the viewer to wear 3D glasses. The parallax barrierincludes a layer of material with a series of precision slits, allowing each eye to see a differentset of pixels, thus creating a sense of depth through parallax. In another example, the occupantwears a head-mounted display (HMD), and the HMD is used to present the video to theoccupant. Optionally, the HMD is a VR headset, and as a result of presenting the VST, theoccupant does not need to remove the VR headset in order to see the cause of the SDRS event.
[0147] In some cases, the video module 66 may be selective regarding indications of which SDRS events may prompt it to operate in the second mode of operation. For example, if the occupant is engaged in a game, the video module 66 may refrain from presenting the VST in the foveal vision region if the vehicle is about to make a sharp turn. However, it may optionally still present the VST in the foveal vision region if the SDRS event involves something that may be more forcefully felt by the occupant, such as extreme evasive maneuvering performed to avoid a collision.
[0148] In some cases, the video module 66 may determine whether to show the VST responsive to an SDRS event (in the second mode of operation) based on the level of concentration of the occupant. For example, if the occupant is deeply engaged in a certain activity (e.g., work or playing a game) above a threshold, the video module 66 may refrain from operating in the second mode for certain SDRS events that would cause the video module 66 to operate in the second mode were the occupant engaged in the certain activity below the threshold. In one example, the engagement level may be based on the occupant's level of concentration, as measured by a wearable sensor (such as an EEG headset or a smartwatch) or a sensor physically coupled to the compartment (such as an eye tracker, a thermal camera, or a movement sensor embedded in the seat).
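A sketch of this gating logic; the engagement scale, the threshold, and the set of 'forceful' event types are assumptions for illustration.

```python
FORCEFUL_EVENTS = {'evasive_maneuver', 'hard_braking'}  # always shown, even when deeply engaged

def should_show_vst_in_fovea(sdrs_event_type, engagement_level, threshold=0.7):
    """Decide whether the video module operates in the second mode for this event.

    engagement_level -- 0.0..1.0 estimate from a wearable or compartment-coupled sensor."""
    if sdrs_event_type is None:
        return False
    if engagement_level > threshold:
        # A deeply engaged occupant is only interrupted for more forceful events.
        return sdrs_event_type in FORCEFUL_EVENTS
    return True
```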
[0149] Presenting an occupant of a vehicle with video see-through (VST) of the outside environment from a point of view of the occupant can help the occupant be prepared for various events that may be considered to cause a Sudden Decrease in Ride Smoothness (SDRS events). Some examples of SDRS events include the following events: hitting a speed bump, driving over a pothole, climbing on the curb, making a sharp turn, hard braking, an unusual acceleration (e.g., 0-100 km/h in less than 6 seconds), and starting to drive after a full stop.
[0150] In order for the occupant to become aware of an imminent SDRS event, the VST needsto be presented in an attention-grabbing way. For example, when an SDRS event is imminent,the VST that describes the environment is brought to the center of the occupant’s attention bydisplaying it at eye level and/or increasing the size of the VST (compared to other times when anSDRS event is not imminent).
[0151] The following is a description of a video system that may be used to increase awarenessof an occupant of a vehicle regarding an imminent SDRS by making VST more prominent for animminent SDRS. In one example, a video system for an autonomous on-road vehicle includes atleast the autonomous-driving control system 65, a camera, and a processor. In this example, theoccupant is engaged, at least part of the time, in entertainment- or work-related activities, whichinvolve presentation of video-unrelated-to-the-VST (VUR) to the occupant, for example, on ascreen coupled to the compartment of the vehicle or a HMD worn by the occupant. Someexamples of such content (common in the year 2016) include cinema movies, broadcast TVshows, standard web browsers, and Microsoft Office® applications (such as Word, Excel andPowerPoint®).
[0152] The camera, which is mounted to the vehicle, is configured to take video of theenvironment outside the vehicle. The processor is configured to generate, based on the videotaken by the camera, a video see-through (VST) of outside environment from a point of view ofan occupant of the vehicle. Optionally, the occupant is in a front seat of the vehicle (such that noother occupant in the vehicle is positioned ahead of the occupant). In some scenarios, dependingon whether an SDRS event is imminent, the processor is configured to present video, which mayinclude VUR and/or VST, to the occupant using different presentation modes. For example, thevideo may be presented in first or second video modes, depending on whether an SDRS event is imminent. Optionally, the VST captures in the first video mode a diagonal FOV of at least 3° ,5° , or 10° of the occupant’s FOV. Optionally, the VST is not presented in the foveal visionregion of the occupant in the first video mode, while the VST is presented in the foveal visionregion of the occupant in the second video mode.
[0153] In one example, responsive to an indication that is not indicative of an imminent SDRSevent (generated by the autonomous-driving control system 65), the processor is configured toprovide video to the occupant using the first video mode. In the first video mode, the occupant ispresented with video that comprises a video-unrelated-to-the-VST (VUR) at eye level in thedirection of forward traveling. Additionally, the video may comprise a video see-through (VST)of outside environment that is not presented at eye level in the direction of forward traveling.
[0154] Receiving an indication indicating that an SDRS event is imminent may change the way video is presented to the occupant. Optionally, this change is made without receiving a command to do so from the occupant. In one example, responsive to the indication indicating that an SDRS event is imminent, the processor is configured to provide video to the occupant using a second video mode. In the second video mode, the occupant is presented with video that comprises the VST, presented at eye level in the direction of forward traveling. Optionally, if the first video mode includes the VST, then the size of the VST window in the second video mode is larger by at least 25% relative to the size of the VST window in the first video mode. Optionally, the second video mode includes presenting the VUR in the background (e.g., the VST is overlaid above the VUR). Optionally, while providing the second video mode, responsive to an updated indication that does not indicate that an SDRS event is imminent, the processor is further configured to switch back to providing the first video mode to the occupant.
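The following sketch switches between the two video modes; the layout fields are hypothetical, and the only constraints taken from the text are that the VST window in the second video mode is at eye level and at least 25% larger.

```python
def compose_frame(sdrs_event_imminent, base_vst_size=(320, 180)):
    """Return a simple layout description for the presented video."""
    if not sdrs_event_imminent:
        # First video mode: VUR at eye level in the direction of forward traveling,
        # with an optional smaller VST window that is not at eye level.
        return {'vur': {'at_eye_level': True},
                'vst': {'size': base_vst_size, 'at_eye_level': False}}
    # Second video mode: the VST is presented at eye level, enlarged by at least 25%,
    # optionally with the VUR kept in the background.
    enlarged = (int(base_vst_size[0] * 1.25), int(base_vst_size[1] * 1.25))
    return {'vur': {'at_eye_level': True, 'in_background': True},
            'vst': {'size': enlarged, 'at_eye_level': True}}
```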
[0155] The following figures illustrate various ways in which the first and second video modesmay be utilized. FIG. 19a illustrates presenting a VUR, which is a movie showing a personskiing, responsive to not receiving from the autonomous-driving control system 65 an indicationthat an SDRS event is imminent. This figure has two parts, the left part shows the vehicle drivingover a clean road, and the right part shows the VUR with a small VST on the right. FIG. 19billustrates presenting a VST responsive to receiving from the autonomous-driving control system65 an indication that an SDRS event is imminent. This figure has two parts, the left part showsthe vehicle about to drive over a speed bump, and the right part shows the VUR but now with abig VST on the right. In this example, the big VST captures about half of the VUR and showsthe speed bump. FIG. 19c illustrates presenting a partially transparent VST responsive toreceiving from the autonomous-driving control system 65 an indication that an SDRS event isimminent. Here, the big VST (that captures about half of the VUR and shows the speed bump) is presented as partially transparent layer over the VUR in order to show the occupant both theVUR and the VST.
[0156] In one example, presenting video to the occupant in the second video mode involvespresenting the VUR behind the VST, and the size and location of the VUR in the second videomode is essentially the same as the size and location of the VUR in the first video mode.Optionally, this means that there is a difference of less than 10% in the size and location of theVURs in the first and second video modes. In another example, the VUR is presented in adiagonal FOV of at least 10 degrees, and is not based on the video taken by the camera.
[0157] In some cases, the VUR may be unrelated to the purpose of the traveling in the vehicle. For example, the VUR may include videos related to the following activities: watching cinema movies, watching TV shows, checking personal emails, playing entertainment games, and surfing social networks.
[0158] In some cases, the occupant’s field of view (FOV) to the outside environment isobstructed by a nontransparent element, and the VST represents at least a portion of theobstructed FOV. Optionally, the occupant uses a VR headset and the obstruction is due to anontransparent element belonging to the VR headset. Additionally or alternatively, theobstruction may be due to the vehicle’s compartment; in this case the nontransparent elementmay be an SAEDP, a safety beam, and/or a crumple zone at eye level, which obstruct at least 30degrees out of the frontal horizontal unaided FOV to the outside environment of the occupant ateye level.
[0159] When traveling in a vehicle, an occupant of the vehicle may not always be viewing the outside environment. For example, the occupant may be engaged in work- or entertainment-related activities. Additionally, in some vehicles, the occupant may not have a good view of the outside environment most of the time, or even all of the time. For example, the vehicle may have very few windows (if any), or the vehicle may have a shading mechanism that reduces the light from the outside. However, there are times when the occupant should be made aware of the outside environment, even though the occupant may not be actively driving the vehicle. For example, the occupant may be made aware of the outside environment in order to make the occupant prepared for an event that causes a Sudden Decrease in Ride Smoothness (an SDRS event). Some examples of SDRS events include the following events: hitting a speed bump, driving over a pothole, climbing on the curb, making a sharp turn, hard braking, an unusual acceleration (e.g., 0-100 km/h in less than 6 seconds), and starting to drive after a full stop.
[0160] In order for the occupant to become aware of an imminent SDRS event, in some casesthat involve a vehicle that has a shading module that controls how much ambient light is let in, when an SDRS event is imminent the vehicle may increase the amount of light that enters via awindow. This additional light can give an occupant a better view of the outside environment,which can make the occupant aware and better prepared for the SDRS.
[0161] The following is a description of a system that may be used to increase awareness of anoccupant of a vehicle regarding an imminent SDRS by enabling more ambient light to enter avehicle via a window. In one example, a shading system for a window of an autonomous on-roadvehicle includes at least the autonomous-driving control system 65, a shading module, and aprocessor.
[0162] FIG. 20a illustrates a smart glass shading module that operates according to an indication that an SDRS event is not imminent. This figure has two parts: the left part shows the vehicle driving over a clean road, and the right part shows that the smart glass window blocks most of the ambient light (illustrated in the figure by the tree outside that is invisible to the occupant). FIG. 20b illustrates the smart glass shading module operating according to an indication that an SDRS event is imminent. This figure has two parts: the left part shows the vehicle about to drive over a pothole, and the right part shows that the smart glass window does not block the ambient light (illustrated in the figure by the tree outside that is visible to the occupant).
[0163] The shading module is configured to control the amount of ambient light that enters thevehicle via the window. Optionally, the window is a front-facing window (e.g., a windshield).Optionally, the window is a side-facing window. There are various types of shading modules thatmay be utilized.
[0164] In one example, the shading module comprises a curtain. Optionally, the curtain coversmost of the area of the window. Optionally, the curtain may open and close with the aid of anelectromechanical device, such as a motor, based on commands issued by the processor.
[0165] In another example, the shading module is a movable physical element configured toreduce the intensity of the ambient light entering through the vehicle window into the vehiclecompartment. For example, the shading module may include various forms of blinds, a shutter,or a sliding element. Optionally, the shading module may be unfurled from the inside of thevehicle compartment in order to block more than 70% of the ambient light intensity. Optionally,the shading module may be unfurled from the outside of the vehicle compartment in order toblock more than 70% of the ambient light intensity.
[0166] In yet another example, the shading module comprises a smart glass able to change its light transmission level. Optionally, the smart glass is a vehicle-window smart glass that comprises a suspended particle devices (SPD) film. A smart glass window may also be known as a switchable glass, a smart window, and/or a switchable window. Smart glass is glass or glazing whose light transmission properties are altered when voltage, current, light, or heat is applied. Examples of electrically switchable smart glasses include: suspended particle devices (SPDs), electrochromic devices, transition-metal hydride electrochromic devices, modified porous nano-crystalline films, polymer dispersed liquid crystal devices, micro-blinds, and thin coatings of nanocrystals embedded in glass. Examples of non-electrical smart glasses include: mechanical smart windows, Vistamatic®, and Sunvalve.
[0167] The processor is configured to command the shading module to operate in different modes based on indications generated by the autonomous-driving control system 65. In some cases, depending on whether an SDRS event is imminent, the processor is configured to command the shading module to operate in different modes that allow different amounts of the ambient light to enter the vehicle via the window. For example, the shading module may operate in first or second modes, depending on whether an SDRS event is imminent. Optionally, in the first mode the shading module blocks more of the ambient light entering through the vehicle window than in the second mode. Optionally, the increased ambient light in the second mode can help make the occupant more aware of the outside environment, which can enable the occupant to prepare for the SDRS event.
[0168] In one example, responsive to an indication that no SDRS event is imminent, theprocessor is configured to command a shading module to operate in a first mode in which theshading module blocks more than 30% of ambient light entering through a window of thevehicle. Receiving an indication indicative that an SDRS event is imminent may change theamount of ambient light that enters the vehicle via the window. Optionally, this change is madewithout receiving a command to do so from the occupant. In one example, responsive to anindication that an SDRS event is imminent, the processor is configured to command the shadingmodule to operate in the second mode in which the shading module blocks less than 90% of theambient light entering through the vehicle window.
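A sketch of the commands the processor might issue to a shading module; the set_block_fraction call is a hypothetical API, and the block fractions are illustrative values consistent with the first mode blocking more than 30% of the ambient light and the second mode blocking less than 90%.

```python
def command_shading(shading_module, sdrs_event_imminent):
    """First mode blocks most of the ambient light; second mode lets more light in
    so the occupant can see the outside environment before the SDRS event."""
    if sdrs_event_imminent:
        shading_module.set_block_fraction(0.2)   # second mode: most ambient light enters
    else:
        shading_module.set_block_fraction(0.6)   # first mode: most ambient light is blocked
```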
[0169] When traveling in a vehicle, the occupant may be engaged in various work- and entertainment-related activities. As a result, the occupant may not be aware of the driving conditions, which may lead to undesired consequences in certain cases when the occupant is engaged in certain activities, such as drinking a beverage, applying makeup, or using various tools. For example, if an unexpected driving event occurs, such as hitting a speed bump, making a sharp turn, or hard braking, it may startle the occupant or cause the occupant to lose stability, which can lead to the occupant spilling a hot beverage or hurting himself/herself. Thus, there is a need for a way to make the occupant aware of certain unexpected driving events, in order to avoid accidents when conducting various activities in an autonomous vehicle.
[0170] While traveling in a vehicle, an occupant of the vehicle may not always be aware of the environment outside and/or of what actions the vehicle is about to take (e.g., braking, turning, or hitting a speed bump). Thus, if such an event occurs without the occupant being aware that it is about to happen, it may cause the occupant to be surprised, disturbed, distressed, and even physically thrown off balance (in a case where the event involves a significant change in the balance of the physical forces on the occupant). This type of event is typically referred to herein as a Sudden Decrease in Ride Smoothness (SDRS) event. Some examples of SDRS events include at least one of the following events: hitting a speed bump, driving over a pothole, climbing on the curb, making a sharp turn, hard braking, an unusual acceleration (e.g., 0-100 km/h in less than 6 seconds), and starting to drive after a full stop.
[0171] The aforementioned SDRS events may become harmful to the occupant when the occupant is engaged in certain activities, such as activities that involve manipulating objects that may harm the occupant if unintended body movement occurs due to the SDRS event. Thus, some aspects of this disclosure involve identifying when the occupant is engaged in a certain activity involving manipulating an object, which may become dangerous if the vehicle makes a sudden unexpected movement, such as when an SDRS event occurs. In one example, the object is a tool for applying makeup, and the certain activity comprises bringing the tool close to the eye. In another example, the object is an ear swab, and the certain activity comprises cleaning the ear with the ear swab. In yet another example, the object is selected from the group comprising the following tools: a knife, tweezers, scissors, and a syringe, and the certain activity comprises using the tool. And in still another example, the object is a cup that is at least partially filled with liquid, and the certain activity comprises drinking the liquid (e.g., drinking without a straw).
[0172] In some embodiments, an SDRS event takes place at least 2 minutes after starting to travel and is not directly related to the act of starting to travel. Additionally, the SDRS event takes place at least 2 minutes before arriving at the destination and is not directly related to the act of arriving at the destination. In one example, a sentence such as "an SDRS event is imminent" refers to an SDRS event that is: (i) related to traveling in the vehicle, and (ii) expected to happen in less than 30 seconds, less than 20 seconds, less than 10 seconds, or less than 5 seconds. In another example, a sentence such as "an SDRS event is imminent" may refer to an event that starts at that instant, or is about to start within less than one second.
[0173] Some aspects of this disclosure involve a safety system that warns an occupant of a vehicle who is engaged in a certain activity (examples of which are given above) regarding an imminent SDRS event. In one embodiment, the safety system includes a camera that takes images of the occupant and a computer that estimates, based on the images, whether the occupant is engaged in a certain activity that involves handling an object that can harm the occupant in a case of an occurrence of an SDRS event. The computer receives from an autonomous-driving control system an indication indicative of whether an SDRS event is imminent. Responsive to both receiving an indication indicative of an imminent SDRS event and estimating that the occupant is engaged in the certain activity, the computer commands a user interface to provide a first warning to the occupant shortly before the SDRS event. Responsive to receiving an indication indicative of an imminent SDRS event and not estimating that the occupant is engaged in the certain activity, the computer does not command the user interface to warn the occupant, or commands the user interface to provide a second warning to the occupant, shortly before the SDRS event. In this embodiment, the second warning is less noticeable than the first warning. Optionally, no second warning is generated. FIG. 21 and FIG. 22 illustrate a cross section of a vehicle with a user interface 242 (e.g., a speaker) that warns an occupant who is engaged in an activity that may become dangerous upon the occurrence of an SDRS event. The speaker in these figures may emit a warning (e.g., a beeping sound) at least one second before the time the SDRS event is expected to occur.
[0174] FIG. 23 is a schematic illustration of an embodiment of a safety system for an autonomous vehicle, which may be utilized to warn an occupant of a vehicle who is engaged in a certain activity that may become dangerous if an SDRS event occurs. In one embodiment, the safety system includes at least a camera 240, a computer 241, and a user interface 242.
[0175] The camera 240 is configured to take images of an occupant of the vehicle. Optionally, the camera 240 may be physically coupled to the compartment of the vehicle. Alternatively, the camera 240 may be physically coupled to a head-mounted display (HMD) that is worn by the occupant of the vehicle.
[0176] The computer 241 is configured, in one embodiment, to make an estimation, based on the images taken by the camera 240, whether the occupant is engaged in a certain activity that involves handling an object that can harm the occupant in a case of an intense movement of the vehicle, such as an SDRS event (e.g., the certain activity may be applying makeup, drinking a beverage from an open cup, or manipulating a sharp tool). Additionally, in this embodiment, the computer 241 is further configured to receive, from the autonomous-driving control system 65, an indication indicative of whether an SDRS event is imminent. The autonomous-driving control system 65 is discussed above in relation to FIG. 17. The computer 241 may be any of the computers described in this disclosure, such as the computers illustrated in FIG. 26a or FIG. 26b.
[0177] In one embodiment, the camera 240 comprises a video camera, and the computer 241 is configured to utilize an image-processing algorithm to identify the object and/or the certain activity, and to estimate whether the occupant is engaged in the certain activity. In another embodiment, the camera 240 comprises an active 3D tracking device, and the computer 241 is configured to analyze the 3D data to identify the object and/or the certain activity, and to estimate whether the occupant is engaged in the certain activity. Optionally, the active 3D tracking device is based on emitting electromagnetic waves and generating 3D images based on received reflections of the emitted electromagnetic waves. Two examples of technologies that involve this approach, which may be utilized by the camera 240 in this embodiment, include LiDAR and a combination of IR sensors and LEDs such as the systems used by Leap Motion®.
[0178] Based on the indication and the estimation described above, the computer 241 may cause the user interface 242 to warn the occupant in various ways, or refrain from warning the occupant (regarding an imminent SDRS event). Optionally, warning the occupant regarding an imminent SDRS event is done shortly before the time the SDRS event is expected to occur. Herein, “shortly before” refers to at most 30 seconds before the SDRS event. Optionally, warning the occupant regarding an imminent SDRS event is done at least one second before the SDRS event, or within some other time that may be required for the occupant to safely cease the certain activity in which the occupant is engaged at the time, and prepare for the SDRS event. In one example, responsive to both receiving an indication indicative of an imminent SDRS event and estimating that the occupant is engaged in the certain activity, the computer 241 is further configured to command the user interface 242 to provide a first warning to the occupant shortly before the SDRS event. In another example, responsive to receiving an indication indicative of an imminent SDRS event and not estimating that the occupant is engaged in the certain activity, the computer 241 is further configured to command the user interface 242 to provide a second warning to the occupant, shortly before the SDRS event. In this example, the second warning is less noticeable than the first warning. In yet another example, responsive to both receiving an indication indicative of no imminent SDRS event and estimating that the occupant is engaged in the certain activity, the computer 241 is further configured not to command the user interface 242 to warn the occupant.
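The following is a minimal Python sketch of the warning policy just described: a first, more noticeable warning when an SDRS event is imminent and the occupant is engaged in the certain activity; an optional, less noticeable second warning otherwise; and no warning when no SDRS event is expected. The names decide_warning, schedule_warning, and user_interface.emit, as well as the warning payloads, are hypothetical placeholders rather than elements of the claimed system.

    import time

    # Hypothetical warning payloads; a real system would map these to the user
    # interface 242 (e.g., a louder beep versus a softer chime).
    FIRST_WARNING = {"sound": "loud_beep", "visual": "flashing_icon"}
    SECOND_WARNING = {"sound": "soft_chime", "visual": None}

    def decide_warning(sdrs_imminent: bool, engaged_in_risky_activity: bool):
        # Returns the warning to issue shortly before the SDRS event, or None.
        if not sdrs_imminent:
            return None                 # no warning when no SDRS event is expected
        if engaged_in_risky_activity:
            return FIRST_WARNING        # more noticeable warning
        return SECOND_WARNING           # less noticeable warning (may also be None)

    def schedule_warning(user_interface, warning, seconds_until_event: float):
        # Issues the warning at most 30 seconds and, when possible, at least one
        # second before the expected SDRS event.
        if warning is None:
            return
        lead_time = max(1.0, min(seconds_until_event, 30.0))
        time.sleep(max(0.0, seconds_until_event - lead_time))
        user_interface.emit(warning)    # hypothetical user-interface call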
[0179] The user interface 242 may include, in some embodiments, an element that provides the occupant with an auditory indication (e.g., by providing a verbal warning and/or a sound effect that may draw the occupant’s attention). For example, in one embodiment, the user interface 242 may include a speaker, which may be coupled to the compartment of the vehicle or worn by the occupant (e.g., as part of earphones). Optionally, the first warning is louder than the second warning. Optionally, in this embodiment, the occupant does not drive the vehicle. In another embodiment, the user interface 242 may include an element that can provide the occupant with a visual cue, such as by projecting a certain image into the field of view of the occupant and/or creating a visual effect that may be detected by the occupant (e.g., flashing lights). Optionally, the user interface 242 includes a display that is coupled to the compartment of the vehicle or is part of a head-mounted display (HMD) worn by the occupant. Optionally, in this embodiment, the first warning comprises a more intense visual cue than the second warning (e.g., the first warning involves more intense flashing of a warning icon than the second warning involves).
[0180] Detecting whether the occupant is engaged in the certain activity with an object can be done utilizing various object detection and/or activity detection algorithms. These algorithms typically employ various image analysis algorithms known in the art. For example, some of the approaches that may be utilized to detect moving objects are described in Joshi, et al., "A survey on moving object detection and tracking in video surveillance system", International Journal of Soft Computing and Engineering 2.3 (2012): 44-48. Additionally, various examples of approaches that may be used to detect human activity are described in the following references: Aggarwal, et al., "Human activity analysis: A review", ACM Computing Surveys (CSUR) 43.3 (2011): 16; Weinland, et al., "A survey of vision-based methods for action representation, segmentation and recognition", Computer Vision and Image Understanding 115.2 (2011): 224-241; and Ramanathan, et al., "Human action recognition with video data: research and evaluation challenges", IEEE Transactions on Human-Machine Systems 44.5 (2014): 650-663.
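As an illustrative sketch only, the fragment below shows how frames from the occupant-facing camera could be fed to an activity detector of the kind surveyed in the references above. RiskyActivityClassifier is a hypothetical placeholder for any such algorithm, and OpenCV is used here merely for frame capture; none of these names come from the disclosure itself.

    import cv2  # OpenCV is used here only for frame capture

    class RiskyActivityClassifier:
        # Placeholder for any object/activity detection algorithm; a real
        # implementation would run an image-processing or learned model on the frame.
        RISKY_LABELS = {"applying_makeup", "ear_swab", "sharp_tool", "open_cup"}

        def predict(self, frame) -> str:
            return "none"  # placeholder result

    def occupant_engaged_in_risky_activity(camera_index: int = 0, frames: int = 5) -> bool:
        # Samples a few frames from the cabin camera and reports whether a risky
        # activity is detected in the majority of them.
        classifier = RiskyActivityClassifier()
        cap = cv2.VideoCapture(camera_index)
        try:
            hits = 0
            for _ in range(frames):
                ok, frame = cap.read()
                if ok and classifier.predict(frame) in RiskyActivityClassifier.RISKY_LABELS:
                    hits += 1
            return hits > frames // 2
        finally:
            cap.release()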
[0181] In one embodiment, a method for warning an occupant of an autonomous vehicle includes the following steps: In step 1, receiving images of the occupant. In step 2, estimating, based on the images, whether the occupant is engaged in a certain activity that involves handling an object that can harm the occupant in a case of a Sudden Decrease in Ride Smoothness (SDRS). In step 3, receiving, from an autonomous-driving control system, an indication indicative of whether an SDRS event is imminent. In step 4, responsive to both receiving an indication indicative of an imminent SDRS event and estimating that the occupant is engaged in the certain activity, commanding a user interface to provide a first warning to the occupant shortly before the SDRS event. And in step 5, responsive to receiving an indication indicative of an imminent SDRS event and not estimating that the occupant is engaged in the certain activity, commanding the user interface to provide a second warning to the occupant, or not commanding the user interface to warn the occupant, shortly before the SDRS event; wherein the second warning is less noticeable than the first warning.
[0182] Optionally, responsive to both receiving an indication indicative of no expected SDRS event and estimating that the occupant is engaged in the certain activity, the computer is further configured not to command the user interface to warn the occupant. Optionally, warning the occupant shortly before the SDRS event refers to warning the occupant less than 30 seconds before the expected SDRS event; and the SDRS event may result from one or more of the following: driving on a speed bump, driving over a pothole, starting to drive after a full stop, driving up the pavement, making a sharp turn, and a hard braking. Optionally, the method further includes utilizing an image processing algorithm for identifying the object and for estimating whether the occupant is engaged in the certain activity.
[0183] In one embodiment, a non-transitory computer-readable medium is used in a computer to warn an occupant of an autonomous vehicle; the computer comprises a processor, and the non-transitory computer-readable medium includes: program code for receiving images of the occupant; program code for estimating, based on the images, whether the occupant is engaged in a certain activity that involves handling an object that can harm the occupant in a case of a Sudden Decrease in Ride Smoothness (SDRS); program code for receiving, from an autonomous-driving control system, an indication indicative of whether an SDRS event is imminent; program code for commanding a user interface to provide a first warning to the occupant shortly before the SDRS event, responsive to both receiving an indication indicative of an imminent SDRS event and estimating that the occupant is engaged in the certain activity; and program code for commanding the user interface to provide a second warning to the occupant, or not commanding the user interface to warn the occupant, shortly before the SDRS event, responsive to receiving an indication indicative of an imminent SDRS event and not estimating that the occupant is engaged in the certain activity; wherein the second warning is less noticeable than the first warning.
[0184] Autonomous vehicles provide occupants with the opportunity to engage in various recreational activities while traveling in the vehicles. Some of these activities may involve playing games. Thus, there is a need to make autonomous vehicles more accommodating for such activities.
[0185] Some aspects of this disclosure involve utilization of driving controllers installed in an autonomous vehicle by an occupant engaged in gaming activity.
[0186] FIG. 24 illustrates an autonomous vehicle in which a driving controller installed in the vehicle may be utilized by an occupant of the vehicle engaged in gaming activity. The vehicle includes at least compartment 245 and computer 248. The compartment 245 is configured to carry the occupant. Additionally, the compartment 245 comprises at least one of the following vehicle driving controllers: an accelerator pedal, a brake pedal (e.g., brake pedal 246), a steering wheel (e.g., steering wheel 247), and a vehicle navigation module. It is to be noted that the computer 248 may be any of the computers described in this disclosure, such as the computers illustrated in FIG. 26a or FIG. 26b.
[0187] The computer 248 is configured to operate at least one of the vehicle driving controllers according to a driving mode or a gaming mode. In the driving mode, the computer 248 is responsive to operating at least one of the vehicle driving controllers, and as a result performs at least one of the following driving activities: accelerating the vehicle in response to operating the accelerator pedal, slowing the vehicle in response to operating the brake pedal, steering the vehicle in response to operating the steering wheel, and changing the traveling destination in response to operating the vehicle navigation module. In the gaming mode, the computer 248 is not responsive to the vehicle driving controllers and does not perform at least one of the driving activities in response to the occupant operating at least one of the vehicle driving controllers.
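A minimal Python sketch of the mode switch described above follows: in the driving mode, controller inputs are forwarded to the corresponding driving activities, while in the gaming mode the same inputs are routed to the game and the vehicle does not respond to them. The class names and the vehicle/game method calls are assumptions introduced only for illustration.

    from enum import Enum, auto

    class Mode(Enum):
        DRIVING = auto()
        GAMING = auto()

    class ControllerRouter:
        # Routes inputs from the installed driving controllers either to the
        # driving activities (driving mode) or to the game (gaming mode).
        def __init__(self, vehicle, game):
            self.vehicle = vehicle   # hypothetical object exposing steer()/brake()
            self.game = game         # hypothetical object consuming the same inputs
            self.mode = Mode.DRIVING

        def on_steering_input(self, angle: float):
            if self.mode is Mode.DRIVING:
                self.vehicle.steer(angle)          # steering the vehicle
            else:
                self.game.handle_steering(angle)   # the vehicle is not affected

        def on_brake_input(self, pressure: float):
            if self.mode is Mode.DRIVING:
                self.vehicle.brake(pressure)       # slowing the vehicle
            else:
                self.game.handle_brake(pressure)   # the vehicle is not affected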
[0188] In one example, in the driving mode, the computer 248 is responsive to voice commands by the occupant related to the driving activities, while in the gaming mode, the computer 248 is not responsive to voice commands by the occupant related to the driving activities.
[0189] Autonomous vehicles provide occupants with the opportunity to engage in various activities while traveling in the vehicles. Some of these activities may be considered private. Thus, there is a need to make autonomous vehicles capable of protecting the occupants’ privacy.
[0190] Some aspects of this disclosure involve autonomous vehicles that protect their occupants’ privacy.
[0191] FIG. 25 is a schematic illustration of components of an autonomous vehicle that includes computer 251, window 250, and the camera 240. The vehicle further includes a compartment configured to carry an occupant. The compartment includes the window 250 through which a person, located outside the compartment, can see the occupant. Additionally, the vehicle includes the camera 240, which is positioned so that it can take a video of the occupant.
[0192] The computer 251 is configured to process the video taken by the camera 240, make a determination of whether the occupant is engaged in a private activity, and operate the window 250 according to at least first and second modes based on the determination. Optionally, engaging in the private activity comprises exposing an intimate body part. Optionally, engaging in the private activity comprises an action involving scratching, grooming, sleeping, or dressing. Optionally, detecting that the occupant is engaged in the private activity is done utilizing an image analysis method, such as an approach described in Aggarwal, et al., "Human activity analysis: A review", ACM Computing Surveys (CSUR) 43.3 (2011): 16; Weinland, et al., "A survey of vision-based methods for action representation, segmentation and recognition", Computer Vision and Image Understanding 115.2 (2011): 224-241; and/or Ramanathan, et al., "Human action recognition with video data: research and evaluation challenges", IEEE Transactions on Human-Machine Systems 44.5 (2014): 650-663.
[0193] In one example, responsive to the determination indicating that the occupant is not engaged in a private activity, the computer 251 configures the window 250 to operate in the first mode, which enables the person to see the occupant up to a first privacy level. Optionally, responsive to the determination indicating that the occupant is engaged in a private activity, the computer 251 configures the window 250 to operate in the second mode, which enables the person to see the occupant up to a second privacy level. The second privacy level maintains the privacy of the occupant to a higher extent than the first privacy level.
[0194] In one example, the second privacy level does not enable the person to see the occupant and/or an intimate body part of the occupant that would otherwise be exposed (e.g., if the window 250 were operated in the first mode). In one example, the window 250 may be a transparent physical window, and the transparency of the window 250 is lower by at least 30% in the second mode compared to the first mode. In another example, the window 250 is a virtual window, and in the second privacy level, the window 250 refrains from displaying at least a part of an intimate body part captured by the camera 240.
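The following sketch illustrates one possible reading of the two privacy levels described above. The baseline transparency value, the interpretation of “lower by at least 30%” as a relative reduction, and the window.set_transparency call are all assumptions made for illustration only.

    FIRST_MODE_TRANSPARENCY = 0.8   # assumed baseline; the disclosure does not fix it
    MIN_RELATIVE_REDUCTION = 0.30   # second mode is lower by at least 30%

    def select_window_transparency(private_activity_detected: bool) -> float:
        # First privacy level: the baseline transparency.
        # Second privacy level: transparency reduced by at least 30%.
        if not private_activity_detected:
            return FIRST_MODE_TRANSPARENCY
        return FIRST_MODE_TRANSPARENCY * (1.0 - MIN_RELATIVE_REDUCTION)

    def operate_window(window, private_activity_detected: bool):
        # window.set_transparency is a hypothetical smart-window controller call.
        window.set_transparency(select_window_transparency(private_activity_detected))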
[0195]
[0196] Various embodiments described herein include a processor and/or a computer. For example, the autonomous-driving control system may be implemented using a computer, and generation of a representation of the outside environment is done using a processor or a computer. The following are some examples of various types of computers and/or processors that may be utilized in some of the embodiments described herein.
[0197] FIG. 26a and FIG. 26b are schematic illustrations of possible embodiments for computers (400, 410) that are able to realize one or more of the embodiments discussed herein. The computer (400, 410) may be implemented in various ways, such as, but not limited to, a server, a client, a personal computer, a network device, a handheld device (e.g., a smartphone), and/or any other computer form capable of executing a set of computer instructions.
[0198] The computer 400 includes one or more of the following components: processor 401, memory 402, computer readable medium 403, user interface 404, communication interface 405, and bus 406. In one example, the processor 401 may include one or more of the following components: a general-purpose processing device, a microprocessor, a central processing unit, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a special-purpose processing device, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a distributed processing entity, and/or a network processor. Continuing the example, the memory 402 may include one or more of the following memory components: CPU cache, main memory, read-only memory (ROM), dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), flash memory, static random access memory (SRAM), and/or a data storage device. The processor 401 and the one or more memory components may communicate with each other via a bus, such as bus 406.
[0199] The computer 410 includes one or more of the following components: processor 411, memory 412, and communication interface 413. In one example, the processor 411 may include one or more of the following components: a general-purpose processing device, a microprocessor, a central processing unit, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a special-purpose processing device, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a distributed processing entity, and/or a network processor. Continuing the example, the memory 412 may include one or more of the following memory components: CPU cache, main memory, read-only memory (ROM), dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), flash memory, static random access memory (SRAM), and/or a data storage device.
[0200] Still continuing the examples, the communication interface (405, 413) may include one or more components for connecting to one or more of the following: an inter-vehicle network, Ethernet, intranet, the Internet, a fiber communication network, a wired communication network, and/or a wireless communication network. Optionally, the communication interface (405, 413) is used to connect with the network 408. Additionally or alternatively, the communication interface 405 may be used to connect to other networks and/or other communication interfaces. Still continuing the example, the user interface 404 may include one or more of the following components: (i) an image generation device, such as a video display, an augmented reality system, a virtual reality system, and/or a mixed reality system, (ii) an audio generation device, such as one or more speakers, (iii) an input device, such as a keyboard, a mouse, an electronic pen, a gesture based input device that may be active or passive, and/or a brain-computer interface.
[0201] It is to be noted that when a processor (computer) is disclosed in one embodiment, the scope of the embodiment is intended to also cover the use of multiple processors (computers). Additionally, in some embodiments, a processor and/or computer disclosed in an embodiment may be part of the vehicle, while in other embodiments, the processor and/or computer may be separate from the vehicle. For example, the processor and/or computer may be in a device carried by the occupant and/or remote from the vehicle (e.g., a server).
[0202] As used herein, references to “one embodiment” (and its variations) mean that the feature being referred to may be included in at least one embodiment of the invention. Moreover, separate references to “one embodiment”, “some embodiments”, “another embodiment”, “still another embodiment”, etc., may refer to the same embodiment, may illustrate different aspects of an embodiment, and/or may refer to different embodiments.
[0203] Some embodiments may be described using the verb “indicating”, the adjective “indicative”, and/or variations thereof. Herein, sentences in the form of “X is indicative of Y” mean that X includes information correlated with Y, up to the case where X equals Y. For example, sentences in the form of “thermal measurements indicative of a physiological response” mean that the thermal measurements include information from which it is possible to infer the physiological response. Additionally, sentences in the form of “provide/receive an indication indicating whether X happened” refer herein to any indication method, including but not limited to: sending/receiving a signal when X happened and not sending/receiving a signal when X did not happen, not sending/receiving a signal when X happened and sending/receiving a signal when X did not happen, and/or sending/receiving a first signal when X happened and sending/receiving a second signal when X did not happen.
[0204] Herein, “most” of something is defined as above 51% of the something (including 100% of the something). For example, most of an ROI refers to at least 51% of the ROI. A “portion” of something refers herein to 0.1% to 100% of the something (including 100% of the something). Sentences of the form “a portion of an area” refer herein to 0.1% to 100% of the area.
[0205] As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having”, or any other variation thereof, indicate an open claim language that does not exclude additional limitations. The terms “a” and “an” are employed to describe one or more, and the singular also includes the plural unless it is obvious that it is meant otherwise.
[0206] Certain features of some of the embodiments, which may have been, for clarity, described in the context of separate embodiments, may also be provided in various combinations in a single embodiment. Conversely, various features of some of the embodiments, which may have been, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination.
[0207] Embodiments described in conjunction with specific examples are presented by way of example, and not limitation. Moreover, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the appended claims and their equivalents.

Claims (20)

WE CLAIM:
1. A safety system for an autonomous vehicle, comprising: a camera configured to take images of an occupant of the vehicle; a computer configured to estimate, based on the images, whether the occupant is engaged in a certain activity that involves handling an object that can harm the occupant in a case of a Sudden Decrease in Ride Smoothness (SDRS); the computer is further configured to receive, from an autonomous-driving control system, an indication indicative of whether an SDRS event is imminent; responsive to both receiving an indication indicative of an imminent SDRS event and estimating that the occupant is engaged in the certain activity, the computer is further configured to command a user interface to provide a first warning to the occupant shortly before the SDRS event; and responsive to receiving an indication indicative of an imminent SDRS event and not estimating that the occupant is engaged in the certain activity, the computer is further configured not to command the user interface to warn the occupant, or to command the user interface to provide a second warning to the occupant, shortly before the SDRS event; wherein the second warning is less noticeable than the first warning.
2. The safety system of claim 1, wherein the first warning is louder than the second warning, and the occupant does not drive the vehicle.
3. The safety system of claim 1, wherein the first warning comprises a more intense visual cue than the second warning.
4. The safety system of claim 1, wherein the object is a tool for applying makeup, and the certain activity comprises bringing the tool close to an eye.
5. The safety system of claim 1, wherein the object is an ear swab, and the certain activity comprises cleaning an ear with the ear swab.
6. The safety system of claim 1, wherein the object is selected from the group comprising the following tools: a knife, a tweezers, a scissors, and a syringe; and the certain activity comprises using the tool.
7. The safety system of claim 1, wherein the object is a cup that is at least partially filled with liquid, and the certain activity comprises drinking the liquid.
8. The safety system of claim 1, wherein responsive to both receiving an indication indicative of no expected SDRS event and estimating that the occupant is engaged in the certain activity, the computer is further configured not to command the user interface to warn the occupant.
9. The safety system of claim 1, wherein warning the occupant shortly before the SDRS event refers to warning the occupant less than 30 seconds before the expected SDRS event; and wherein the SDRS event may result from one or more of the following: driving on a speed bump, driving over a pothole, starting to drive after a full stop, driving up the pavement, making a sharp turn, and a hard braking.
10. The safety system of claim 9, wherein the computer is configured to command the user interface to provide the first warning at least one second before the SDRS event.
11. The safety system of claim 1, wherein the autonomous-driving control system calculates the indication based on at least one of the following sources: sensors mounted to the vehicle, sensors mounted to nearby vehicles, an autonomous-driving control system used to drive a nearby vehicle, and a database comprising descriptions of obstacles in the road that are expected to cause intense movement of the vehicle.
12. The safety system of claim 1, wherein the camera comprises a video camera, and the computer is configured to utilize an image processing algorithm to identify the object and to estimate whether the occupant is engaged in the certain activity.
13. The safety system of claim 1, wherein the camera comprises an active 3D tracking device, and the computer is configured to analyze the 3D data to identify the object and to estimate whether the occupant is engaged in the certain activity.
14. The safety system of claim 1, wherein the camera is physically coupled to the compartment.
15. The safety system of claim 1, wherein the occupant wears a head-mounted display (HMD), and the camera is physically coupled to the HMD.
16. A method for warning an occupant of an autonomous vehicle, comprising: receiving images of the occupant; estimating, based on the images, whether the occupant is engaged in a certain activity that involves handling an object that can harm the occupant in a case of a Sudden Decrease in Ride Smoothness (SDRS); receiving, from an autonomous-driving control system, an indication indicative of whether an SDRS event is imminent; responsive to both receiving an indication indicative of an imminent SDRS event and estimating that the occupant is engaged in the certain activity, commanding a user interface to provide a first warning to the occupant shortly before the SDRS event; and responsive to receiving an indication indicative of an imminent SDRS event and not estimating that the occupant is engaged in the certain activity, commanding the user interface to provide a second warning to the occupant, or not commanding the user interface to warn the occupant, shortly before the SDRS event; wherein the second warning is less noticeable than the first warning.
17. The method of claim 16, wherein responsive to both receiving an indication indicative of no expected SDRS event and estimating that the occupant is engaged in the certain activity, the computer is further configured not to command the user interface to warn the occupant.
18. The method of claim 16, wherein warning the occupant shortly before the SDRS event refers to warning the occupant less than 30 seconds before the expected SDRS event; and wherein the SDRS event may result from one or more of the following: driving on a speed bump, driving over a pothole, starting to drive after a full stop, driving up the pavement, making a sharp turn, and a hard braking.
19. The method of claim 16, further comprising utilizing an image processing algorithm for identifying the object and for estimating whether the occupant is engaged in the certain activity.
20. A non-transitory computer-readable medium for use in a computer to warn an occupant of an autonomous vehicle, the computer comprises a processor, and the non-transitory computer-readable medium comprising: program code for receiving images of the occupant; program code for estimating, based on the images, whether the occupant is engaged in a certain activity that involves handling an object that can harm the occupant in a case of a Sudden Decrease in Ride Smoothness (SDRS); program code for receiving, from an autonomous-driving control system, an indication indicative of whether an SDRS event is imminent; program code for commanding a user interface to provide a first warning to the occupant shortly before the SDRS event, responsive to both receiving an indication indicative of an imminent SDRS event and estimating that the occupant is engaged in the certain activity; and program code for commanding the user interface to provide a second warning to the occupant, or not commanding the user interface to warn the occupant, shortly before the SDRS event, responsive to receiving an indication indicative of an imminent SDRS event and not estimating that the occupant is engaged in the certain activity; wherein the second warning is less noticeable than the first warning.
GB1621125.2A 2015-12-20 2016-12-12 Warning a vehicle occupant before an intense movement Expired - Fee Related GB2547512B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562270010P 2015-12-20 2015-12-20
US201662369127P 2016-07-31 2016-07-31

Publications (3)

Publication Number Publication Date
GB201621125D0 GB201621125D0 (en) 2017-01-25
GB2547512A GB2547512A (en) 2017-08-23
GB2547512B true GB2547512B (en) 2019-09-18

Family

ID=57963520

Family Applications (4)

Application Number Title Priority Date Filing Date
GB1618138.0A Withdrawn GB2545547A (en) 2015-12-20 2016-10-27 A mirroring element used to increase perceived compartment volume of an autonomous vehicle
GB1621125.2A Expired - Fee Related GB2547512B (en) 2015-12-20 2016-12-12 Warning a vehicle occupant before an intense movement
GB1717339.4A Expired - Fee Related GB2558361B (en) 2015-12-20 2016-12-20 Autonomous vehicle having an external movable shock-absorbing energy dissipation padding
GB1621783.8A Expired - Fee Related GB2547532B (en) 2015-12-20 2016-12-20 Autonomous vehicle having an external shock-absorbing energy dissipation padding

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB1618138.0A Withdrawn GB2545547A (en) 2015-12-20 2016-10-27 A mirroring element used to increase perceived compartment volume of an autonomous vehicle

Family Applications After (2)

Application Number Title Priority Date Filing Date
GB1717339.4A Expired - Fee Related GB2558361B (en) 2015-12-20 2016-12-20 Autonomous vehicle having an external movable shock-absorbing energy dissipation padding
GB1621783.8A Expired - Fee Related GB2547532B (en) 2015-12-20 2016-12-20 Autonomous vehicle having an external shock-absorbing energy dissipation padding

Country Status (1)

Country Link
GB (4) GB2545547A (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017218444B4 (en) * 2017-10-16 2020-03-05 Audi Ag Method for operating a safety system for a seat system of a motor vehicle and safety system for a seat system of a motor vehicle
CN108995590A (en) * 2018-07-26 2018-12-14 广州小鹏汽车科技有限公司 A kind of people's vehicle interactive approach, system and device
US11221741B2 (en) * 2018-08-30 2022-01-11 Sony Corporation Display control of interactive content based on direction-of-view of occupant in vehicle
DE102019118854A1 (en) * 2019-07-11 2021-01-14 Bayerische Motoren Werke Aktiengesellschaft Head-mounted display for use in dynamic application areas

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130076212A (en) * 2011-12-28 2013-07-08 현대자동차주식회사 An indoor system in vehicle which having a function of assistance for make-up of user's face
WO2016109829A1 (en) * 2014-12-31 2016-07-07 Robert Bosch Gmbh Autonomous maneuver notification for autonomous vehicles

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6585384B2 (en) * 2001-06-29 2003-07-01 N-K Enterprises Llc Wireless remote controlled mirror
JP4160848B2 (en) * 2003-03-20 2008-10-08 本田技研工業株式会社 Collision protection device for vehicle
FR2873087B1 2004-07-16 2006-09-22 Univ Pasteur ACTIVE SAFETY DEVICE COMPRISING A DAMPING PLATE COVERING THE WINDSHIELD OF A VEHICLE IN THE EVENT OF COLLISION WITH A PEDESTRIAN
JP2007145308A (en) * 2005-11-07 2007-06-14 Toyoda Gosei Co Ltd Occupant crash protection device
US20090174774A1 (en) * 2008-01-03 2009-07-09 Kinsley Tracy L Video system for viewing vehicle back seat
US8629784B2 (en) * 2009-04-02 2014-01-14 GM Global Technology Operations LLC Peripheral salient feature enhancement on full-windshield head-up display
DE102010016113A1 (en) * 2010-03-24 2011-09-29 Krauss-Maffei Wegmann Gmbh & Co. Kg Method for training a crew member of a particular military vehicle
DE102013014210A1 (en) * 2013-08-26 2015-02-26 GM Global Technology Operations LLC Motor vehicle with multifunctional display instrument
WO2016044820A1 (en) * 2014-09-19 2016-03-24 Kothari Ankit Enhanced vehicle sun visor with a multi-functional touch screen with multiple camera views and photo video capability

Also Published As

Publication number Publication date
GB2545547A (en) 2017-06-21
GB201618138D0 (en) 2016-12-14
GB201621125D0 (en) 2017-01-25
GB2558361A (en) 2018-07-11
GB2558361B (en) 2019-09-25
GB2547532A (en) 2017-08-23
GB2547532B (en) 2019-09-25
GB2547512A (en) 2017-08-23
GB201717339D0 (en) 2017-12-06
GB201621783D0 (en) 2017-02-01

Similar Documents

Publication Publication Date Title
US10717402B2 (en) Shock-absorbing energy dissipation padding placed at eye level in an autonomous vehicle
US10710608B2 (en) Provide specific warnings to vehicle occupants before intense movements
US10059347B2 (en) Warning a vehicle occupant before an intense movement
US11970104B2 (en) Unmanned protective vehicle for protecting manned vehicles
US10717406B2 (en) Autonomous vehicle having an external shock-absorbing energy dissipation padding
JP7571829B2 (en) Information processing device, information processing method, program, and mobile object
US9756319B2 (en) Virtual see-through instrument cluster with live video
GB2547512B (en) Warning a vehicle occupant before an intense movement
CN111837175A (en) Image display system, information processing device, information processing method, program, and moving object
US20230059458A1 (en) Immersive displays
JP2014201197A (en) Head-up display apparatus

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20201212