
Article

Mitigating Cybersickness in Virtual Reality Systems through Foveated Depth-of-Field Blur

Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, 16126 Genoa, Italy
* Author to whom correspondence should be addressed.
Current address: Via Dodecaneso 35, 16146 Genoa, Italy.
Sensors 2021, 21(12), 4006; https://doi.org/10.3390/s21124006
Submission received: 6 April 2021 / Revised: 28 May 2021 / Accepted: 8 June 2021 / Published: 10 June 2021
(This article belongs to the Section Wearables)
Figure 1. Process flow of the proposed foveated DoF technique showing the intermediate outputs. Fixation is at the center of the red sphere.
Figure 2. Illustration of the circle of confusion concept. Point of fixation is at distance $D_f$. Point located at distance $D_p$ forms a circle on the retina with diameter C. A denotes the aperture and s is the posterior nodal distance.
Figure 3. An example scene along with its associated depth map.
Figure 4. Depth-of-field effects for different planes of fixation. Points of fixation (depth values are reported in red on the images) are on the vase and the front tree in the left and right images, respectively.
Figure 5. Human field-of-view for both eyes showing the foveal, near, mid, and far peripheral regions.
Figure 6. Stereoscopic view of the multi-region foveation output. The central region has no blur applied while the other two regions (highlighted in green for the sake of visualization only) have different blurs applied to them.
Figure 7. Example of an output from the foveated depth-of-field blur filter.
Figure 8. Rollercoaster track outline. The arrow indicates the direction of motion. The coordinate system follows the convention used in Unity, i.e., X: right direction; Y: up direction; Z: forward direction.
Figure 9. Instantaneous user velocity and acceleration components during each rollercoaster cycle. The coordinate system follows the convention used in Unity, i.e., X: right direction; Y: up direction; Z: forward direction. Seesaw motion: 8–32 s; spiral motion: 36–44 s and 48–64 s.
Figure 10. Rollercoaster virtual environment. (A) user view; (B) rollercoaster cart with VR camera attached; (C) top view of the clustered environment.
Figure 11. SSQ scores for the cybersickness experiment (conditions: NB—No Blur; GC—Unity Blur; FD—Ours). The questionnaire was filled in before (Pre) and after (Post) each session. Each plot shows the mean values, averaged over all the participants, and the standard deviations for the three sub-scales and the overall score.
Figure 12. Comparison of the Post–Pre difference of the SSQ scores for each condition (conditions: NB—No Blur; GC—Unity Blur; FD—Ours). The plot shows the changes in individual SSQ scores between the Pre and Post experiment conditions.
Figure 13. IPQ scores for the cybersickness experiment (conditions: NB—No Blur; GC—Unity Blur; FD—Ours). The questionnaire was filled in after each session. NB: Involvement 3.57, Experienced Realism 4.07, Spatial Presence 5.09; GC: Involvement 3.60, Experienced Realism 3.57, Spatial Presence 4.90; FD: Involvement 3.83, Experienced Realism 4.53, Spatial Presence 5.21.
Figure 14. Average heart rate fluctuations from a resting heart rate during a rollercoaster cycle. The origin on the heart rate axis represents the resting heart rate (conditions: NB—No Blur; GC—Unity Blur; FD—Ours).
Figure 15. Heatmap of the visual field for user gaze, combined for all sessions performed. The circles are centered at the center of the HMD screen and indicate the visual angle (e.g., the 10° circle represents the central 20° of visual eccentricity). The colors represent how frequently the user fixated at that particular location on the HMD screen, with white representing 0 and black representing 9358.
Figure 16. Histogram of eye angular speeds greater than 350°/s for all users during a saccade.
Figure 17. Comparison of the Post–Pre difference of the SSQ scores for each condition with respect to age groups (conditions: NB—No Blur; GC—Unity Blur; FD—Ours). The plot shows the changes in individual SSQ total scores between the Pre and Post experiment conditions for the two age groups. Old: NB 68.34, GC 47.55, FD 22.26; Young: NB 55.03, GC 37.06, FD 19.38.
Figure 18. Comparison of the Post–Pre difference of the SSQ scores for each condition with respect to gender groups (conditions: NB—No Blur; GC—Unity Blur; FD—Ours). The plot shows the changes in individual SSQ total scores between the Pre and Post experiment conditions for the two gender groups. Male: NB 60.67, GC 44.37, FD 21.63; Female: NB 59.84, GC 46.72, FD 19.39.

Abstract

Cybersickness is one of the major roadblocks to the widespread adoption of mixed reality devices. Prolonged exposure to these devices, especially virtual reality devices, can cause users to feel discomfort and nausea, spoiling the immersive experience. Incorporating spatial blur in stereoscopic 3D stimuli has been shown to reduce cybersickness. In this paper, we develop a technique, inspired by the human physiological system, to incorporate spatial blur in VR systems. The technique makes use of concepts from foveated imaging and depth-of-field. It can be applied as a post-processing step to any eye-tracker-equipped VR system to provide an artifact-free scene. We verify the usefulness of the proposed system by conducting a user study on cybersickness evaluation. We used a custom-built rollercoaster VR environment developed in Unity and an HTC Vive Pro Eye headset to interact with the user. The Simulator Sickness Questionnaire was used to measure the induced sickness, while gaze and heart rate data were recorded for quantitative analysis. The experimental analysis highlighted the effectiveness of our foveated depth-of-field effect in reducing cybersickness in virtual environments, lowering the sickness scores by approximately 66%.

1. Introduction

The introduction of modern head-mounted displays (HMDs) such as the Oculus Rift and HTC Vive has driven major advancements in the field of virtual reality (VR). These devices facilitate novel experiences for users above and beyond what is possible with traditional audiovisual displays. However, their widespread usage has been hindered by the discomfort users tend to feel after prolonged use. This discomfort, variously referred to as simulator sickness (SS), eyestrain, or visual fatigue, occurs due to the differences in the visual experience between the real world and the virtual world [1]. In VR devices, the virtual environment is presented in pin-sharp focus with the aim of allowing users to extract information from all areas of the projected images [2]. On the contrary, humans focus on objects in their surroundings by continuously altering their eye position and accommodation. Objects located at the accommodative distance form a sharp image on the retinae while other objects appear blurred [3].
Several user studies involving modern consumer-oriented HMDs have been conducted to identify factors that may influence the level of induced simulator sickness. Occluding the external environment (i.e., not displaying the real surroundings) in the virtual world can make users suffer from SS even more [4,5]. An unnatural mapping of senses and simulation errors such as tracking errors and latency can also lead to higher sickness and a lower sense of immersion [6,7]. Users who have prior experience with video games are less susceptible to cybersickness [8]. Personality traits such as neuroticism also correlate strongly with SS, mainly with respect to nausea [9]. Physiological responses to VR stimuli such as pupil dilation, blinking, saccades, and heart rate have been found to correlate significantly with cybersickness [10]. An increased blink frequency has also been reported among VR users suffering from cybersickness [11]. User studies on VR systems have also shown that users are sensitive to artifacts present inside 20° of eccentricity [12].
There are several methods to detect and measure cybersickness [13]. Questionnaires based on users' self-reported responses are the earliest assessment methods and remain the de-facto choice for VR systems [14]. Several such questionnaires exist, including the Simulator Sickness Questionnaire (SSQ) [15] and the Virtual Reality Sickness Questionnaire (VRSQ) [16]; however, the SSQ remains the most popular and most cited in the VR community. Recently, there has also been interest in assessing cybersickness through physiological signals such as heart rate, respiratory rate, and skin conductance, with promising results [17,18].
Researchers have proposed many techniques to reduce the level of induced cybersickness. Fernandes et al. [19] proposed dynamically altering the field-of-view depending on the user motion. However, such approaches limit the sense of presence in the virtual world. Alternatively, user studies have shown that incorporating spatial or defocus blur can reduce SS [20,21]. Budhiraja et al. [22] tried to address sickness caused by vection in VR. Vection is the perception of self-motion in the absence of any physical movement, often caused by secondary moving objects in the user's view. They incorporated rotation blurring, i.e., applying a Gaussian blur to the entire scene when peripheral objects undergo rotational movements. Buhler et al. [23] addressed cybersickness induced by peripheral motion by dividing the scene circularly and reducing optic flow in the peripheral section. The use of vignetting during amplified head movements, intended to counter cybersickness, had the opposite effect [24]. Saliency-based dynamic blurring only worked for high-speed scenes [25]. Moreover, a recent study demonstrated that introducing spatial blur effects in VR systems can also help with depth perception [26].
In the computer graphics field, depth-of-field (DoF) rendering is a popular approach to incorporating spatial blur. Images are blurred using information from the camera model and the corresponding depth maps. Depth-based spatial blur techniques can be classified into two main categories: object space and image space methods [27]. To generate DoF effects, object space methods operate directly on the 3D scene and are built into the rendering pipeline. On the contrary, image space methods are considered a post-processing operation since they operate on images and their corresponding depth maps. Object space methods suffer less from artifacts than image space methods. However, image space methods are preferred in VR applications since speed is of utmost importance and image space methods are much faster. To avoid artifacts, image space methods need to be tuned carefully. The most commonly encountered artifacts are intensity leakage and depth discontinuity. Intensity leakage occurs when a blurred background bleeds on top of an in-focus object. Depth discontinuity occurs when the background is in focus but the silhouette of the foreground object appears sharp. These artifacts mainly occur where there is an abrupt change in the depth map.
An alternative approach for introducing space-variant blur is foveated imaging, in which the image resolution varies across the image based on the user's fixation [28]. Foveated rendering [29,30,31,32] can reduce the computational load of VR devices by providing high acuity at the user's fixation point and reduced acuity in the peripheral regions. However, foveated rendering provides focus information decoupled from depth cues. A more natural scene can be produced by combining the two [33]. Furthermore, Maiello et al. [34] demonstrated that depth perception can be affected by foveation, and Solari et al. [35] showed that the size and the stimulated portion of the field-of-view can affect perceived visual speed.
In this paper, we develop a system that takes its inspiration from the human physiological system and the optical characteristics of lenses. The proposed system couples the output of foveated rendering and DoF blur to provide an artifact-free scene in the central region. Our system offers smooth transitions when the fixation point changes while providing real-time performance. Current spatial blur techniques applied to VR (discussed in detail in Section 2) often suffer from artifacts or fail to provide sufficient frame rates [36]. Our system provides real-time gaze-contingency for off-the-shelf HMDs that have an integrated eye tracking system using image space methods. We also present a user study we conducted on cybersickness in order to evaluate whether our technique can significantly reduce the level of induced cybersickness in virtual environments. For the user study, we assess cybersickness through the SSQ questionnaire and heart rate measurements.
The paper is organized as follows: Section 2 highlights the related works. Section 3 presents our developed system. The user study designed to evaluate the system is presented in Section 4. In Section 5, we analyze the performance of our system based on the results of the user study. In Section 6, we conclude the paper with a discussion.

2. Related Works

In this section, we present recent work on introducing space-variant blur in VR systems, such as foveated rendering and depth-of-field effects. The aim of this section is to highlight some of the problems faced by the VR community with regard to these topics, which also served as a motivation for our work.
Several attempts have been made to introduce DoF blur effects in VR systems [37,38]. These systems assume a focus distance and use the lens model to compute the circle of confusion. The amount of blur in the peripheral pixels is based on the depth difference between the point of fixation and that particular pixel. However, these systems are not gaze-contingent as they assume either a fixed focus distance or that the user is always fixated at the center of the scene. Alternatively, gaze-contingent systems have also been proposed for near-eye displays [39,40]. These systems use adjustable lenses and can potentially be used to correct hyperopia and myopia in VR systems. The major drawback of such setups is that they are hardware intensive and cannot be adapted to modern lightweight HMDs.
Space-variant resolution can be provided through log-polar mapping [41]. The image is first transformed into the cortical domain and then into the retinal domain to provide an image that has higher resolution in the center and lower resolution as the image coordinates move away from the image center. Such techniques were exploited by Meng et al. [31], who proposed a kernel-based foveated rendering approach that maps well to the current generation of GPUs. Alternatively, a phase-aligned approach to foveated rendering has also been developed [42]. Only the high acuity foveal region is aligned with the head movements while the peripheral region is instead aligned with the virtual world. Thus, only the high acuity regions require additional processing in each frame. Current foveated rendering methods use fixed parameters that are often tuned manually. Tursun et al. [32] proposed a content-aware prediction model based on luminance and contrast to compute the optimal parameters. Lin et al. [43] investigated how the size of the foveal region, or central window, influences cybersickness. They found no correlation between the amount of induced sickness and the size of the central window. However, their study highlighted that users adapt more quickly to larger foveal regions.
A common issue in most foveated rendering techniques is geometric aliasing, which appears in the form of temporal flickering and can be easily noticed by users [36]. Some solutions have recently been proposed to overcome these artifacts. Franke et al. proposed temporal foveation built into the rasterization pipeline [44]. They introduced a confidence function that decides whether to re-project pixels from the previous frame or to redraw them. Their system works relatively well on dynamic objects, which are a bottleneck for most foveated rendering algorithms. Since they do not always use a freshly rendered image as input and rely on data from previous frames to achieve high computational performance, their system does not work well with reflections and transparent objects. Alternatively, Weier et al. [45] proposed adding depth-of-field as a post-step to remove artifacts introduced by foveated rendering algorithms. Their approach showed promising visual results in their user study. However, they were unable to achieve the frame rates necessary to meet the HMD's V-Sync limit, which is needed to reduce fatigue and cope with fast eye movements [46]. The authors emphasized the need to combine DoF blur and foveated imaging to obtain optimal results.
Our proposed technique, described in detail in the next section, combines foveation and DoF effects applied to the source image using image space methods, which are much faster and support a wider range of VR development platforms. We use freshly rendered images every frame, which allows the system to accommodate a more diverse range of objects in the VR environment. By shifting to image space methods, we are able to meet the V-Sync limit of VR HMDs.

3. The Proposed Foveated Depth-of-Field Effects

The proposed spatial blur technique incorporates DoF blur and foveation effects. The processing is implemented at the shader level to ensure real-time performance. Image space methods are exploited in the linear color space. Different types of smoothing filters were considered, such as Gaussian filtering, Bokeh [47], and disc effects. However, since the system takes inspiration from the human physiological system, the Bokeh filter was preferred, as it better mimics the aperture of the human eye and can lead to a more realistic output.
The pseudocode of the foveated DoF effects is described in Algorithm 1, while the process flow of the proposed technique is shown in Figure 1. In the first shader pass, the circle of confusion diameters are computed using the raw depth values and stored in a single-channel texture object. The circle of confusion diameters are shown as grey for objects farther than the fixation plane and as purple for objects between the user and the fixation plane. Simultaneously, the image is divided into three circular sections by computing the distance of each pixel to the fixation pixel. Red pixels represent the pixels in the foveal area, while green and blue pixels represent the near and mid peripheral regions, respectively. Using the source image and the circle of confusion texture, the depth-of-field effects are computed in the second shader pass. Similarly, using the foveation mask and the source image, the foveation effects are computed in the third shader pass. In the last shader pass, the effects are combined to obtain the final output. The smoothing filters are applied at half the resolution of the source image and the resultant frames are later up-sampled. Details of the individual processes involved are described in the following subsections.
Algorithm 1: Foveated DoF effects for VR
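A minimal Unity C# sketch of this four-pass flow is given below. The shader, its property names, and the pass indices are illustrative assumptions, not the authors' actual implementation; the sketch only mirrors the structure described above (pass 0: circle of confusion and foveation mask; passes 1 and 2: DoF and foveation blurs at half resolution; pass 3: merge).

```csharp
using UnityEngine;

// Sketch of the four-pass foveated DoF pipeline. The material and all
// property names are hypothetical placeholders. Attach to the VR camera.
[RequireComponent(typeof(Camera))]
public class FoveatedDoF : MonoBehaviour
{
    public Material fovDofMaterial; // material built from the hypothetical 4-pass shader
    public Vector2 fixationUV;      // gaze point from the eye tracker, in [0,1]^2

    void OnEnable()
    {
        // Have Unity render the depth texture used for the circle of confusion.
        GetComponent<Camera>().depthTextureMode = DepthTextureMode.Depth;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        fovDofMaterial.SetVector("_Fixation", new Vector4(fixationUV.x, fixationUV.y, 0f, 0f));

        // Pass 0: circle of confusion diameters plus the circular foveation
        // mask (packed into one texture here for brevity).
        RenderTexture coc = RenderTexture.GetTemporary(src.width, src.height, 0,
                                                       RenderTextureFormat.ARGBHalf);
        Graphics.Blit(src, coc, fovDofMaterial, 0);
        fovDofMaterial.SetTexture("_CoCTex", coc);

        // Passes 1 and 2 run at half resolution; results are up-sampled on merge.
        RenderTexture dof = RenderTexture.GetTemporary(src.width / 2, src.height / 2, 0, src.format);
        RenderTexture fov = RenderTexture.GetTemporary(src.width / 2, src.height / 2, 0, src.format);
        Graphics.Blit(src, dof, fovDofMaterial, 1); // depth-of-field Bokeh blur
        Graphics.Blit(src, fov, fovDofMaterial, 2); // multi-region foveation blur

        // Pass 3: combine both effects (per pixel, the smaller sigma wins).
        fovDofMaterial.SetTexture("_DofTex", dof);
        fovDofMaterial.SetTexture("_FovTex", fov);
        Graphics.Blit(src, dst, fovDofMaterial, 3);

        RenderTexture.ReleaseTemporary(coc);
        RenderTexture.ReleaseTemporary(dof);
        RenderTexture.ReleaseTemporary(fov);
    }
}
```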

3.1. Depth-of-Field Blur

When humans visually perceive their surroundings, the retinal images contain variations in blur. This variation is due to objects being placed at different depth planes and is an important cue for depth perception. To synthesize this blur effect in VR systems, we use a depth texture object to create the depth map of the virtual scene. Depth values corresponding to each pixel on the HMD screen are computed and stored in a Z-buffer. The information inside the Z-buffer is scaled between 0.0 and 1.0 to ensure the system can be used with any HMD configuration. This depth information is used to define the parameters of the smoothing filter. An eye tracker is used to identify the fixation plane, and the amount of blur is varied based on the difference in pixel depths (i.e., on the difference in depth of the scene objects with respect to the fixation plane). Objects on the accommodative plane are kept as they are in the source image while a smoothing filter is applied to every other region.
We use the circle of confusion concept from the field of optics to model the amount of blur associated with each pixel. An illustration of the concept can be seen in Figure 2. When the lens is focused at the object placed at distance $D_f$, a circle with diameter C is imaged on the retina by the object placed at distance $D_p$. This circle is referred to as the circle of confusion. We use the formulation developed by Held et al. [48] for computing C, which is defined by (1):
$$ C = A\,s \left| \frac{1}{D_f} - \frac{1}{D_p} \right| \qquad (1) $$
where s is the distance between the retina and lens, more commonly known as the posterior nodal distance, and A is the aperture of the eye.
We use the circle of confusion to determine the blur associated with each pixel: the larger C is, the greater the blur. This implies that the blur parameter $\sigma_d$ is directly related to the size of the circle of confusion, i.e., $\sigma_d \propto C$. We adapt (1) to our system and formulate (2) to compute $\sigma_d$:
$$ \sigma_d = K \left| \frac{1}{Z_f} - \frac{1}{Z_p} \right| \qquad (2) $$
where $Z_f$ is the depth of the fixation point, $Z_p$ is the depth of the rendered pixel, and the parameter K absorbs the product $A\,s$ and the constant relating C to $\sigma_d$. The parameter K is scene- and user-dependent and has to be tuned accordingly. We tune this parameter based on the image quality index proposed by Wang and Bovik [49]. Image degradation such as contrast loss is often associated with blurred images [32]. We chose the value of K that ensures a sufficient quality index.
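As a concrete illustration, the per-pixel blur parameter of (2) reduces to a one-line computation; the following C# sketch assumes linear (eye-space) depths and an arbitrary placeholder value of K:

```csharp
using UnityEngine;

static class DofBlur
{
    // Equation (2): sigma_d = K * |1/Z_f - 1/Z_p|. Z_f is the fixation depth and
    // Z_p the depth of the rendered pixel; K is scene- and user-dependent and
    // defaults here to an arbitrary placeholder, to be tuned via the quality index.
    public static float DofSigma(float fixationDepth, float pixelDepth, float k = 2.0f)
    {
        return k * Mathf.Abs(1.0f / fixationDepth - 1.0f / pixelDepth);
    }
}
// Example: DofSigma(2f, 4f) = 2 * |0.5 - 0.25| = 0.5, and the blur vanishes
// (sigma_d = 0) for pixels lying exactly on the fixation plane.
```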
A detailed illustration of this depth-of-field effect can be seen in Figure 3 and Figure 4. Figure 3 shows the original scene along with its calculated depth map. Figure 4 shows the output for the plane of fixation at different depths. The plane of fixation in the left image is on the vase. Pixels at the vase depth plane appear sharp. The right image shows the output when the plane of fixation is on the tree. It can be seen that the chair (only partially visible as it is occluded by the vase) also forms a sharp image as it is at the same depth as the tree.

3.2. Multi-Region Foveation

The human visual field-of-view is composed of foveal and peripheral regions [50]. The divisions of the human visual system can be seen in Figure 5. The central foveal region is sharp and detailed, since the light rays entering the eye form a sharp image on the retinae, while the peripheral region lacks fidelity and appears blurred on the retinal image due to the decrease in density of the light-sensitive cells in the periphery. The peripheral region can be subdivided into three further categories, namely the near, mid, and far peripheral regions. The amount of perceived detail in each region decreases with distance from the center. The far peripheral region is only visible to one eye and does not contribute to stereoscopic vision.
We divide the overall imaged scene into three sections corresponding to the foveal, near, and mid peripheral regions. The far peripheral region is not visible in modern HMDs due to their optical limitations and thus is not considered in our system. However, the system can be adapted to include it as well by simply increasing the divisions of the rendered scene. We use circular divisions as opposed to rectangular ones since they better represent the shape of the lenses present in commercially available HMDs. The fixation point is taken as the center of the circular regions, and the regions are sketched around it. The central division defines the foveal region and is output without any further processing, while the smoothing filter is applied to the other regions. The blur parameter $\sigma_f$ associated with each pixel is determined by the location of that particular pixel in the divided scene. In our implementation, we set the mid peripheral $\sigma_{f_m}$ to double the near peripheral $\sigma_{f_n}$.
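A sketch of this region assignment is given below; the radii and the base blur value are illustrative assumptions, since the text only fixes the ratio $\sigma_{f_m} = 2\,\sigma_{f_n}$:

```csharp
using UnityEngine;

static class Foveation
{
    // Assigns the foveation blur parameter sigma_f from the pixel's distance to
    // the fixation point. Radii (in pixels) and sigmaFn are placeholder values.
    public static float SigmaF(Vector2 pixel, Vector2 fixation,
                               float fovealRadius = 180f,
                               float nearRadius = 420f,
                               float sigmaFn = 3f)
    {
        float d = Vector2.Distance(pixel, fixation);
        if (d <= fovealRadius) return 0f;      // foveal region: output unprocessed
        if (d <= nearRadius) return sigmaFn;   // near peripheral region
        return 2f * sigmaFn;                   // mid peripheral region
    }
}
```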
A detailed illustration of the effect is shown in Figure 6. The middle of the image is assumed to be the fixation point in this particular example. From the left-eye view in Figure 6, it can be seen that the circular outlines are quite distinct and cause artifacts in the view which, in this raw form, can be uncomfortable for the user.

3.3. Artifact Removal and Image Merging

From Figure 4 and Figure 6, it can be observed that some artifacts exist in the resulting images where there is an abrupt change in the blur parameter σ. To eliminate or minimize them, we use a technique proposed by Perry and Geisler [51] for blending multi-resolution images using the transfer function of the resolution map. We adapted their approach to our VR system, applying it to the transitional regions (i.e., regions with abrupt σ variations). In our system, we use the radial distances of the transitional regions from the fixation point instead of the transfer function.
We introduce the transitional region $R_t$ and define the surrounding regions as either inner $R_i$ or outer $R_o$ based on their location with respect to the fixation point. Likewise, their corresponding radii to the fixation point are defined as $r_j$ with $j = 1, 2, 3$ and $r_j < r_{j-1}$. We compute the blending function $B_j(x, y)$ by (3):
$$ B_j(x, y) = \begin{cases} 0 & d(x, y) \le r_j \\ \dfrac{d(x, y) - r_j}{r_{j-1} - r_j} & r_j < d(x, y) < r_{j-1} \\ 1 & d(x, y) \ge r_{j-1} \end{cases} \qquad (3) $$
where $d(x, y)$ is the distance between the rendered pixel coordinates and the pixel coordinates of the point of fixation.
The output of (3) approaches 1.0 as the considered pixel nears the outer region and approaches 0.0 as the pixel nears the inner region. Using the blending function, we determine the output of the smoothing filter by (4):
$$ O(x, y) = B_j(x, y)\, I_j(x, y) + \big(1 - B_j(x, y)\big)\, I_{j-1}(x, y) \qquad (4) $$
where $I_j(x, y)$ and $I_{j-1}(x, y)$ are the outputs of the smoothing filters from the $j$-th and $(j-1)$-th regions. This ensures that a percentage of each blur level is taken, based on the location of the pixel within the transitional region, to determine the final output, resulting in an artifact-free scene.
To merge the outputs of the DoF blur and foveation, we compute the pixel-wise σ for both. However, we only use the smaller of the two σ values for the smoothing filter. Figure 7 shows an example output of the foveated DoF effect. The transitions between high acuity and blurred regions are smoother and the central 20° of eccentricity is free of artifacts.
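Putting (3) and (4) together with the minimum-σ merge rule, the per-pixel compositing can be sketched as follows (a minimal C# sketch; the region radii follow the definitions above and all names are illustrative):

```csharp
using UnityEngine;

static class RegionBlend
{
    // Equation (3): blending weight for a pixel at distance d from the fixation
    // point, within the transitional band between radii rJ and rJminus1 (rJ < rJminus1).
    public static float BlendWeight(float d, float rJ, float rJminus1)
    {
        if (d <= rJ) return 0f;
        if (d >= rJminus1) return 1f;
        return (d - rJ) / (rJminus1 - rJ);
    }

    // Equation (4): mix of the smoothing filter outputs I_j and I_{j-1}.
    public static Color BlendOutputs(float b, Color iJ, Color iJminus1)
    {
        return iJ * b + iJminus1 * (1f - b);
    }

    // Merging DoF and foveation: the smaller sigma drives the smoothing filter.
    public static float MergedSigma(float sigmaD, float sigmaF)
    {
        return Mathf.Min(sigmaD, sigmaF);
    }
}
```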

4. User Study on Cybersickness

In order to analyze whether the developed foveated DoF effects could help reduce SS while using a VR device, we conducted a cybersickness study. The objective of this study was to measure the level of sickness induced and gather user data for further analysis.

4.1. Participants

We collected data from 18 volunteers (9 males and 9 females) aged from 18 to 46 years (mean 29.3 ± 7.6). The participants received no reward. All users had normal or corrected-to-normal acuity and normal stereo vision. All users except four were novice VR users.

4.2. Setup

The developed system was implemented in Unity, running on an Intel Core i7-9700K processor equipped with an NVIDIA GeForce GTX 1080 graphics card. An HTC Vive Pro Eye device, which has an integrated Tobii eye tracking system, was used for interacting with the user. The HMD has a resolution of 1440 × 1600 pixels per eye and a 110° field-of-view. A Scosche Rhythm armband monitor was used to measure the user's heart rate.

4.3. Design

A VR rollercoaster environment was designed to induce motion sickness. The rollercoaster was custom-built in Unity in order to have a system which allows us to control and manipulate the experimental parameters, such as velocities, acceleration, and duration of the experiment. The track consists of seesaw and spiral motions placed at different points (see Figure 8). Figure 9 shows the cart velocity and acceleration components over a rollercoaster cycle. Various objects and buildings were closely placed around the rollercoaster tracks to have a clustered environment. The clustered environment ensures that the user’s focus point changes rapidly and the effect of the foveated DoF blur is more prominent. Figure 10 shows the custom VR environment created for the experiment.

4.4. Procedure

We consider three conditions: one with our foveated DoF technique enabled (referred to as FD), one with Unity's post-processing stack blur enabled (referred to as GC; see the developer's website https://docs.unity3d.com/Packages/[email protected]/manual/Depth-of-Field.html (accessed on 6 April 2021)), and one with no blur present (referred to as NB). The full fidelity NB condition acts as the control group. The Unity blur GC condition only implements the depth-of-field effect, using a 7-pass shader. It also uses the Bokeh effect to introduce spatial blur in the peripheral regions. The size of the Bokeh filter in the Unity blur condition and in our foveated depth-of-field condition was kept the same to ensure comparability. The Unity blur does not explicitly support eye-tracking or VR devices, so a custom interface was developed to integrate the eye-tracking module with the Unity blur effect to provide gaze-contingency.
All users underwent the three conditions in random order, i.e., one third of the users performed the FD session first, one third performed the GC session first, and one third performed the NB session first. This was to ensure that no ordering bias was present in the experiment. Each session only had one condition active. A significant amount of time was provided between sessions to allow all users to recover from the after-effects of the previous condition. Participants were given a minimum 90-min break between sessions, and most opted to undergo the sessions on successive days. Before each session, the participants underwent an eye calibration process.
Each user session lasted 5 min. This session length was determined based on pre-testing trials, which suggested that this time-frame was sufficient to induce SS with the rollercoaster design. For quantitative evaluation, the user's positional data, gaze data, and heart rate were recorded. Heart rate data were recorded at 1 Hz while all other data were recorded at approximately 50 Hz.

4.5. Analysis

To measure SS, users filled in the Simulator Sickness Questionnaire (SSQ) [15]. The SSQ consists of 16 questions, answered on a 4-point Likert scale. The SSQ scores reflect the levels of nausea, oculomotor disturbance, and disorientation, and the overall severity of induced sickness. The questionnaire was filled in by each user immediately before (Pre) and after (Post) each session. To measure user experience across the session types, the Igroup Presence Questionnaire (IPQ) [52] was used. The IPQ consists of 14 questions, answered on a 7-point Likert scale. Each user filled in the IPQ after each session.
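For reference, the SSQ sub-scores are conventionally obtained by summing the symptom ratings belonging to each sub-scale and applying the standard weights from Kennedy et al. [15]. A sketch, which assumes the raw sub-scale sums as inputs and omits the symptom-to-subscale grouping:

```csharp
static class SsqScoring
{
    // Standard SSQ weighting (Kennedy et al. [15]): the raw sub-scale sums of
    // the 0-3 symptom ratings are scaled to Nausea, Oculomotor, Disorientation,
    // and Total scores.
    public static (float N, float O, float D, float Total) Scores(
        float nauseaRaw, float oculomotorRaw, float disorientationRaw)
    {
        float n = nauseaRaw * 9.54f;
        float o = oculomotorRaw * 7.58f;
        float d = disorientationRaw * 13.92f;
        float total = (nauseaRaw + oculomotorRaw + disorientationRaw) * 3.74f;
        return (n, o, d, total);
    }
}
```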

5. Experimental Results

Data gathered from the experimental sessions were analyzed to better understand the performance of the developed system. The data analysis is described in the following subsections.

5.1. Cybersickness and Presence Evaluation

Figure 11 and Figure 12 show the results of the SSQ questionnaire. It can be observed that our foveated DoF blur performs better than the no blur setup. A Wilcoxon rank sum test was performed to compare the results of the different conditions (see Figure 11). The comparison among the Pre states of the users across the different conditions showed no significant difference. The comparison between the Pre and Post states of the users within each condition shows a significant difference, i.e., the experimental environment caused a significant increase in the SSQ scores (see Table 1).
The differences between the Pre and Post scores (see Figure 12 and Table 2) show that the increase in the individual subscales is highest in the NB sessions, ranging between 49 and 54. The conditions with spatial blur incorporated (GC and FD) show the highest change in disorientation scores, which relate to vestibular disturbances. The amount of induced disorientation is similar in the NB and GC conditions. Although the ranges of the individual subscores differ, the results demonstrate that the three conditions produce slightly different patterns of symptomatology, i.e., NB: D ≈ O ≈ N; GC: D > O > N; FD: D > O ≈ N.
Table 3 shows a comparison between the different techniques discussed earlier and our foveated DoF effects. We use the difference in sickness scores between the no effect (full fidelity) condition and the best performing parameters for each respective technique. The reported mean SSQ total scores were used where available. One of the user studies, on peripheral visual effects [23], did not use the SSQ for sickness evaluation and used a custom questionnaire instead. It can be observed that our foveated DoF blur approach outperforms the other methods.
Figure 13 shows the results of the IPQ questionnaire. A Wilcoxon rank sum test between the samples for Unity blur and our foveated DoF against those from the no blur sessions showed no significant differences in the perceived sense of presence between the conditions.

5.2. Heart Rate Observations

Heart rate fluctuation is another parameter for observing discomfort. At the moment, there is no psychophysiological parameter that can satisfactorily measure and predict sickness [53,54]; however, measurements such as finger temperature, reaction time, and heart rate were correlated with cybersickness by Nalivaiko et al. [55]. Figure 14 shows the mean heart rate fluctuations, averaged over all the users, and the standard deviation during a rollercoaster cycle. It can be observed that our foveated DoF blur results in a stable heart rate with only a minute increase over the resting heart rate. On the contrary, the heart rate fluctuation in the no blur sessions is more abrupt. The Unity blur sessions show intermediate behavior. Spatio-temporal data of the user's movement (see Figure 9) suggest that the spiral/torsional motion has a more adverse effect than the seesaw motion (up and down movements). We computed Pearson's correlation coefficients between the heart rate fluctuations and the velocity and acceleration data. The results indicate a strong correlation (r-value: NB = 0.87; GC = 0.81; FD = 0.75). It should be noted that the plots in Figure 14 do not begin at the origin because each session contains four rollercoaster cycles and the plot shows the participants' mean heart rate: only in the first cycle do the participants have the resting heart rate, while the subsequent cycles carry an aftereffect from the previous ones.
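The correlation reported above is the standard Pearson coefficient; a minimal sketch of its computation over paired per-second samples (array names are illustrative):

```csharp
using System;

static class Stats
{
    // Pearson's r between two equal-length series, e.g., heart rate fluctuation
    // versus speed magnitude sampled at the same instants.
    public static double Pearson(double[] x, double[] y)
    {
        int n = x.Length;
        double mx = 0.0, my = 0.0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;

        double cov = 0.0, vx = 0.0, vy = 0.0;
        for (int i = 0; i < n; i++)
        {
            double dx = x[i] - mx, dy = y[i] - my;
            cov += dx * dy; vx += dx * dx; vy += dy * dy;
        }
        return cov / Math.Sqrt(vx * vy);
    }
}
```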

5.3. User Gaze Analysis

To better understand how a user behaves and interacts with a VR device, we analyzed the gaze data collected from the experimental sessions. Approximately 4% of the eye tracking data were discarded because, in some frames, the users either blinked or closed their eyes, or there was a faulty sensor reading. Figure 15 shows the combined heatmap of all users. It can be observed that the users tend to fixate mostly on the center of the scene. The users' positional and orientation data revealed that, when they had to focus on an object far from the center, they preferred to move their heads instead of just their gaze. This observation supports studies conducted by Kenny et al. [56] on first-person shooter (FPS) games, which highlighted that user gaze is mostly directed towards the center of the view (approximately 85% of the time). Consequently, it can be assumed that gaze-related user behavior in VR is similar to FPS games, verifying the assumptions made in other user studies in the absence of eye tracking [37,38].
We also analyzed the saccadic movements of the users' eyes. We computed the angular speeds of the eye from the eye tracking data. In humans, the angular speed of the eye usually varies between 200°/s and 500°/s, but can reach up to 900°/s [57]. Thus, for the analysis, we mainly considered the saccades with relatively higher speeds to determine whether the motion of the eye has any influence on the induced level of cybersickness. Table 4 reports the peak angular speed measured for each user and how many times speeds greater than 200°/s were reached. It can be noticed that, during the sessions with our blur algorithm, saccades were shorter and slower compared to the other sessions. A Kolmogorov–Smirnov test was performed on the angular speed data. The statistical analysis showed a significant difference in the distribution of the angular speed data at a 95% confidence level for the three conditions.
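Angular eye speed can be estimated from consecutive gaze direction samples; a sketch assuming normalized gaze vectors from the eye tracker at its roughly 50 Hz sampling rate (names and the threshold helper are illustrative):

```csharp
using UnityEngine;

static class SaccadeAnalysis
{
    // Angular speed (deg/s) between two consecutive normalized gaze directions
    // sampled dt seconds apart (about 0.02 s at the ~50 Hz rate used here).
    public static float AngularSpeed(Vector3 gazePrev, Vector3 gazeCurr, float dt)
    {
        return Vector3.Angle(gazePrev, gazeCurr) / dt; // Vector3.Angle is in degrees
    }

    // Flag samples exceeding the 200 deg/s threshold considered in the analysis.
    public static bool ExceedsThreshold(float speedDegPerSec)
    {
        return speedDegPerSec > 200f;
    }
}
```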
Figure 16 shows the number of occurrences of speeds higher than 350°/s. It should be noted that speeds lower than this value had a similar trend in all three conditions, so they are not shown here. The SSQ results discussed above revealed that the level of sickness in the no blur sessions is higher than with our blur system. Correspondingly, there may be a correlation between the occurrence of faster saccades and the level of induced sickness. The temporal analysis revealed that the higher peaks were observed mostly during the seesaw motion.
A possible explanation for the lower amplitudes in our system could be that the introduced blur reduces the amount of detail in the periphery; consequently, the saccades are shorter. This peripheral reduction mimics the popular approach of reducing the field-of-view to minimize cybersickness [19]. However, in our approach, the peripheral content is still visible, albeit at a lower acuity; thus, the level of presence is not compromised, unlike with the field-of-view reduction approach.

5.4. Age and Gender Variation

We also analyzed how age affects cybersickness. It is widely assumed that motion sickness is more prevalent in younger participants; however, past studies on cybersickness in VR have reached contradictory conclusions. Studies by Arns et al. [58] and Hakkinen et al. [59] revealed that younger participants suffer less from SS, whereas a meta-analysis by Saredakis et al. [60] showed the opposite. We divided the participants into two groups, young and old. The younger group comprised people aged between 18 and 26 years, while the rest formed the older group. There were 10 users in the younger group and eight in the older group. Figure 17 shows the difference in the SSQ total score for the two age groups. A Wilcoxon rank sum test was performed. In the FD condition, no statistical difference was found in the SSQ scores and heart rate distributions (p > 0.45). However, in the NB and GC conditions, the older participants suffered more from cybersickness (p < 0.05).
The participants were also sub-grouped with respect to gender. Figure 18 shows the difference in the SSQ total score for the two gender groups. A Wilcoxon rank sum test was also performed; however, no statistically significant difference was found between the two groups (p > 0.65). It should be noted that age and gender do not exclusively influence sickness: factors such as neuroticism and prior VR experience also simultaneously affect cybersickness. Wider studies on age and gender may be required to fully understand how these factors influence cybersickness, as highlighted by Chang et al. [61].

5.5. Computational Load Comparison

Using the data recorded from the cybersickness user study, we also calculated the frame processing times in order to better understand the computational overhead added by the blurring techniques. Data from the no blur sessions acted as the reference for comparison. The average processing times and their equivalent frame rates are summarized in Table 5. There is no overlap between the processing times of the three conditions within a 95% confidence interval. It can be observed that our system offers better computational performance than Unity's blur, even though the built-in Unity blur only applies the DoF effect whereas our system processes two different types of blur.

6. Conclusions

In this work, we developed a technique for incorporating biologically inspired spatial blur in VR devices with the aim of closing the gap between real-world and virtual-world experiences. Due to their limited field-of-view and near-eye displays, modern HMDs provide limited and often mismatching visual cues compared to the real world. The depth-of-field effect provides an essential cue for depth; however, none of the modern HMDs are able to effectively provide this feature. Foveated imaging is an actively researched field with the aim of reducing the computational load of VR systems by reducing the spatial resolution in the peripheral regions. The technique we developed blends the two blurring procedures to provide a more realistic virtual environment.
The developed system used a Bokeh filter as the main smoothing function. The blurring algorithm used image space methods implemented using a four-pass shader program. Pre-processing was done in the first pass. In the second pass, the DoF blur effects were computed based on the circle of confusion. The third pass divided the VR scene into different circular sections centered around the point of fixation. Each region was assigned a different blur parameter. In the last pass, the outputs from the previous two passes were merged using a blending function to obtain the final rendered scene. The developed system is gaze contingent and offers smooth transitions when the user gaze changes.
We then conducted a user study on cybersickness involving 18 participants. We compared the amount of induced sickness among three types of systems: no blur, Unity's post-processing stack depth-based blur, and our foveated depth-of-field blur. A custom-built rollercoaster virtual environment was used to conduct the study. We used the Simulator Sickness Questionnaire to measure cybersickness. For quantitative analysis, we also analyzed heart rate and user gaze measurements. Our analysis showed a statistically significant reduction in the level of induced sickness when spatial blur was included in the system: a 27% and a 66% reduction in the SSQ total score for the Unity blur and our technique, respectively, compared to the full fidelity condition (mean Post-Pre SSQ score difference: NB = 60.26; GC = 44.05; FD = 20.51). These observations were also supported by the heart rate measurements. The heart rate analysis also showed that circular/spiral motion contributes more adversely to cybersickness than linear motion.
The analysis also showed that older people generally tend to suffer more from cybersickness in immersive VR environments as compared to younger people for the no blur (mean Post-Pre SSQ score difference: old = 68.34; young = 55.03) and Unity blur (mean Post-Pre SSQ score difference: old = 47.55; young = 37.06) conditions. However, there was no statistically significant difference between the two age groups using our blur system (mean Post-Pre SSQ score difference: old = 22.26; young = 19.38). Furthermore, we found no statistical difference in the performance for gender groups.
There are obvious differences between the scenes presented in the three conditions, which may help explain why lower sickness is induced in the systems with spatial blur. The no blur condition presented the entire VR scene in sharp focus, which contradicts natural viewing. The Unity blur condition mimics how lenses work, while our technique considers depth-of-field and foveation effects together, as in natural vision, leading to a more realistic scene. Another possible explanation for the reduced sickness is optic flow. Motion in the periphery can cause sickness: the motion is detected by the visual system and hence seen, but little or no motion is sensed by the vestibular system. By reducing the amount of information in the peripheral region, the users are less exposed to this sensory conflict.
Even though we had a relatively small number of participants, our data indicate that incorporating our blur technique in virtual reality systems can have a soothing effect, potentially decreasing simulator sickness. As future work, we will further investigate the usefulness of the developed system for mitigating the vergence–accommodation conflict in virtual reality systems. We will also test our system with other virtual reality applications and possibly extend it to augmented reality devices.

Author Contributions

Conceptualization, R.H., M.C., and F.S.; methodology, R.H., M.C., and F.S.; formal analysis, R.H.; investigation, R.H.; data curation, R.H.; writing—original draft preparation, R.H.; writing—review and editing, M.C. and F.S.; supervision, M.C. and F.S.; project administration, M.C. and F.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partially supported by the Interreg Alcotra projects PRO-SOL We-Pro (n. 4298) and CLIP E-Santé (n. 4793).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not available.

Acknowledgments

The authors would like to thank all the people who voluntarily participated in the user studies.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Davis, S.; Nesbitt, K.; Nalivaiko, E. A Systematic Review of Cybersickness. In Proceedings of the 2014 Conference on Interactive Entertainment, Newcastle, NSW, Australia, 2–3 December 2014; pp. 1–9.
2. Geisler, W.S. Visual Perception and the Statistical Properties of Natural Scenes. Annu. Rev. Psychol. 2008, 59, 167–192.
3. Cohen, M.A.; Dennett, D.C.; Kanwisher, N. What is the Bandwidth of Perceptual Experience? Trends Cogn. Sci. 2016, 20, 324–335.
4. Moss, J.D.; Muth, E.R. Characteristics of Head-Mounted Displays and Their Effects on Simulator Sickness. Hum. Factors 2011, 53, 308–319.
5. Sharples, S.; Cobb, S.; Moody, A.; Wilson, J.R. Virtual reality induced symptoms and effects (VRISE): Comparison of head mounted display (HMD), desktop and projection display systems. Displays 2008, 29, 58–69.
6. Feigl, T.; Roth, D.; Gradl, S.; Wirth, M.; Latoschik, M.E.; Eskofier, B.M.; Philippsen, M.; Mutschler, C. Sick Moves! Motion Parameters as Indicators of Simulator Sickness. IEEE Trans. Vis. Comput. Graph. 2019, 25, 3146–3157.
7. Zielinski, D.J.; Rao, H.M.; Sommer, M.A.; Kopper, R. Exploring the effects of image persistence in low frame rate virtual environments. In Proceedings of the 2015 IEEE Virtual Reality (VR), Arles, France, 23–27 March 2015; pp. 19–26.
8. Häkkinen, J.; Liinasuo, M.; Takatalo, J.; Nyman, G. Visual comfort with mobile stereoscopic gaming. In Proceedings of the SPIE 6055, Stereoscopic Displays and Virtual Reality Systems XIII, San Jose, CA, USA, 16–19 January 2006; pp. 1–9.
9. Grassini, S.; Laumann, K.; Luzi, A.K. Association of Individual Factors with Simulator Sickness and Sense of Presence in Virtual Reality Mediated by Head-Mounted Displays (HMDs). Multimodal Technol. Interact. 2021, 5, 7.
10. Cebeci, B.; Celikcan, U.; Capin, T.K. A comprehensive study of the affective and physiological responses induced by dynamic virtual reality environments. Comput. Anim. Virtual Worlds 2019, 30, e1893.
11. Lopes, P.; Tian, N.; Boulic, R. Eye Thought You Were Sick! Exploring Eye Behaviors for Cybersickness Detection in VR. In Proceedings of the ACM Motion, Interaction and Games (MIG'20), North Charleston, SC, USA, 16–18 October 2020; pp. 1–10.
12. Hoffman, D.M.; Meraz, Z.; Turner, E. Sensitivity to Peripheral Artifacts in VR Display Systems. SID Symp. Dig. Tech. Pap. 2018, 49, 858–861.
13. Dużmańska, N.; Strojny, P.; Strojny, A. Can Simulator Sickness Be Avoided? A Review on Temporal Aspects of Simulator Sickness. Front. Psychol. 2018, 9, 2132.
14. Rebenitsch, L.; Owen, C. Review on cybersickness in applications and visual displays. Virtual Real. 2016, 20, 101–125.
15. Kennedy, R.S.; Lane, N.E.; Berbaum, K.S.; Lilienthal, M.G. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. Int. J. Aviat. Psychol. 1993, 3, 203–220.
16. Kim, H.K.; Park, J.; Choi, Y.; Choe, M. Virtual reality sickness questionnaire (VRSQ): Motion sickness measurement index in a virtual reality environment. Appl. Ergon. 2018, 69, 66–73.
17. Bruck, S.; Watters, P.A. The factor structure of cybersickness. Displays 2011, 32, 153–158.
18. Gavgani, A.M.; Nesbitt, K.V.; Blackmore, K.L.; Nalivaiko, E. Profiling subjective symptoms and autonomic changes associated with cybersickness. Auton. Neurosci. 2017, 203, 41–50.
19. Fernandes, A.S.; Feiner, S.K. Combating VR sickness through subtle dynamic field-of-view modification. In Proceedings of the 2016 IEEE Symposium on 3D User Interfaces (3DUI), Greenville, SC, USA, 19–20 March 2016; pp. 201–210.
20. Hoffman, D.M.; Girshick, A.R.; Akeley, K.; Banks, M.S. Vergence–accommodation conflicts hinder visual performance and cause visual fatigue. J. Vis. 2008, 8, 33:1–33:30.
21. Ang, S.; Quarles, J. GingerVR: An Open Source Repository of Cybersickness Reduction Techniques for Unity. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 22–26 March 2020; pp. 460–463.
22. Budhiraja, P.; Miller, M.; Modi, A.; Forsyth, D. Rotation Blurring: Use of Artificial Blurring to Reduce Cybersickness in Virtual Reality First Person Shooters. arXiv 2017, arXiv:1710.02599.
23. Buhler, H.; Misztal, S.; Schild, J. Reducing VR Sickness Through Peripheral Visual Effects. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Reutlingen, Germany, 18–22 March 2018; pp. 517–519.
24. Norouzi, N.; Bruder, G.; Welch, G. Assessing Vignetting as a Means to Reduce VR Sickness during Amplified Head Rotations. In Proceedings of the 5th ACM Symposium on Applied Perception (SAP'18), Vancouver, BC, Canada, 10–11 August 2018; pp. 1–8.
25. Nie, G.; Duh, H.B.; Liu, Y.; Wang, Y. Analysis on Mitigation of Visually Induced Motion Sickness by Applying Dynamical Blurring on a User's Retina. IEEE Trans. Vis. Comput. Graph. 2020, 26, 2535–2545.
26. Hussain, R.; Chessa, M.; Solari, F. Modelling Foveated Depth-of-field Blur for Improving Depth Perception in Virtual Reality. In Proceedings of the 2020 IEEE 4th International Conference on Image Processing, Applications and Systems (IPAS), Genova, Italy, 9–11 December 2020; pp. 71–76.
27. Barsky, B.A.; Kosloff, T.J. Algorithms for Rendering Depth of Field Effects in Computer Graphics. In Proceedings of the 12th WSEAS International Conference on Computers, Heraklion, Greece, 23–25 July 2008; pp. 999–1010.
28. Bastani, B.; Turner, E.; Vieri, C.; Jiang, H.; Funt, B.; Balram, N. Foveated Pipeline for AR/VR Head-Mounted Displays. Inf. Display 2017, 33, 14–35.
29. Patney, A.; Salvi, M.; Kim, J.; Kaplanyan, A.; Wyman, C.; Benty, N.; Luebke, D.; Lefohn, A. Towards Foveated Rendering for Gaze-tracked Virtual Reality. ACM Trans. Graph. 2016, 35, 1–12.
30. Swafford, N.T.; Iglesias-Guitian, J.A.; Koniaris, C.; Moon, B.; Cosker, D.; Mitchell, K. User, Metric, and Computational Evaluation of Foveated Rendering Methods. In Proceedings of the ACM Symposium on Applied Perception, Anaheim, CA, USA, 22–23 July 2016; pp. 7–14.
31. Meng, X.; Du, R.; Zwicker, M.; Varshney, A. Kernel Foveated Rendering. In Proceedings of the ACM on Computer Graphics and Interactive Techniques; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–20.
32. Tursun, O.T.; Arabadzhiyska-Koleva, E.; Wernikowski, M.; Mantiuk, R.; Seidel, H.P.; Myszkowski, K.; Didyk, P. Luminance-Contrast-Aware Foveated Rendering. ACM Trans. Graph. 2019, 38, 98:1–98:14.
33. Held, R.T.; Cooper, E.A.; O'Brien, J.F.; Banks, M.S. Blur and Disparity Are Complementary Cues to Depth. Curr. Biol. 2012, 22, 426–431.
34. Maiello, G.; Chessa, M.; Bex, B.J.; Solari, F. Near-optimal combination of disparity across a log-polar scaled visual field. PLoS Comput. Biol. 2020, 16, 1–28.
35. Solari, F.; Caramenti, M.; Chessa, M.; Pretto, P.; Bülthoff, H.H.; Bresciani, J. A Biologically-Inspired Model to Predict Perceived Visual Speed as a Function of the Stimulated Portion of the Visual Field. Front. Neural Circuits 2019, 13, 1–15.
36. Guenter, B.; Finch, M.; Drucker, S.; Tan, D.; Snyder, J. Foveated 3D Graphics. ACM Trans. Graph. 2012, 31, 1–10.
37. Hillaire, S.; Lécuyer, A.; Cozot, R.; Casiez, G. Depth-of-Field Blur Effects for First-Person Navigation in Virtual Environments. IEEE Comput. Graph. Appl. 2008, 28, 47–55.
38. Carnegie, K.; Rhee, T. Reducing Visual Discomfort with HMDs Using Dynamic Depth of Field. IEEE Comput. Graph. Appl. 2015, 35, 34–41.
39. Padmanaban, N.; Konrad, R.; Stramer, T.; Cooper, E.A.; Wetzstein, G. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays. Proc. Natl. Acad. Sci. USA 2017, 114, 2183–2188.
40. Bos, P.J.; Li, L.; Bryant, D.; Jamali, A.; Bhowmik, A.K. Simple Method to Reduce Accommodation Fatigue in Virtual Reality and Augmented Reality Displays. SID Symp. Dig. Tech. Pap. 2016, 47, 354–357.
41. Traver, V.J.; Bernardino, A. A Review of Log-polar Imaging for Visual Perception in Robotics. Robot. Auton. Syst. 2010, 58, 378–398.
42. Turner, E.; Jiang, H.; Saint-Macary, D.; Bastani, B. Phase-Aligned Foveated Rendering for Virtual Reality Headsets. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces, Los Alamitos, CA, USA, 18–22 March 2018; pp. 1–2.
43. Lin, Y.X.; Venkatakrishnan, R.; Venkatakrishnan, R.; Ebrahimi, E.; Lin, W.C.; Babu, S.V. How the Presence and Size of Static Peripheral Blur Affects Cybersickness in Virtual Reality. ACM Trans. Appl. Percept. 2020, 17, 16:1–16:18.
44. Franke, L.; Fink, L.; Martschinke, J.; Selgrad, K.; Stamminger, M. Time-Warped Foveated Rendering for Virtual Reality Headsets. Comput. Graph. Forum 2021, 40, 110–123.
45. Weier, M.; Roth, T.; Hinkenjann, A.; Slusallek, P. Foveated Depth-of-field Filtering in Head-mounted Displays. In Proceedings of the 15th ACM Symposium on Applied Perception, Vancouver, BC, Canada, 10–11 August 2018; pp. 1–14.
46. Albert, R.; Patney, A.; Luebke, D.; Kim, J. Latency requirements for foveated rendering in virtual reality. ACM Trans. Appl. Percept. 2017, 14, 1–13.
47. Merklinger, H.M. A technical view of bokeh. Photo Tech. 1997, 18, 1–5.
48. Held, R.T.; Cooper, E.A.; O'Brien, J.F.; Banks, M.S. Using Blur to Affect Perceived Distance and Size. ACM Trans. Graph. 2010, 29, 19.
49. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
50. Strasburger, H.; Rentschler, I.; Jüttner, M. Peripheral vision and pattern recognition: A review. J. Vis. 2011, 11, 13.
51. Perry, J.S.; Geisler, W.S. Gaze-contingent real-time simulation of arbitrary visual fields. In Proceedings of the SPIE 4662: Human Vision and Electronic Imaging VII, San Jose, CA, USA, 19 January 2002; pp. 57–69.
52. Regenbrecht, H.; Schubert, T. Real and Illusory Interactions Enhance Presence in Virtual Environments. Presence 2002, 11, 425–434.
53. Shupak, A.; Gordon, C.R. Motion sickness: Advances in pathogenesis, prediction, prevention, and treatment. Aviat. Space Environ. Med. 2006, 77, 1213–1223.
54. Nesbitt, K.; Davis, S.; Blackmore, K.; Nalivaiko, E. Correlating reaction time and nausea measures with traditional measures of cybersickness. Displays 2017, 48, 1–8.
55. Nalivaiko, E.; Davis, S.L.; Blackmore, K.L.; Vakulin, A.; Nesbitt, K. Cybersickness provoked by head-mounted display affects cutaneous vascular tone, heart rate and reaction time. Physiol. Behav. 2015, 151, 583–590.
56. Kenny, A.; Koesling, H.; Delaney, D. A preliminary investigation into eye gaze data in a first person shooter game. In Proceedings of the European Conference on Modelling and Simulation, Riga, Latvia, 1–4 June 2005; pp. 733–738.
57. Leigh, R.J.; Zee, D.S. The Neurology of Eye Movements; Oxford University Press: Oxford, UK, 2015.
58. Arns, L.L.; Cerney, M.M. The relationship between age and incidence of cybersickness among immersive environment users. In Proceedings of the IEEE Virtual Reality, Bonn, Germany, 12–16 March 2005; pp. 267–268.
59. Hakkinen, J.; Vuori, T.; Paakka, M. Postural stability and sickness symptoms after HMD use. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Yasmine Hammamet, Tunisia, 6–9 October 2002; pp. 147–152.
60. Saredakis, D.; Szpak, A.; Birckhead, B.; Keage, H.A.D.; Rizzo, A.; Loetscher, T. Factors Associated with Virtual Reality Sickness in Head-Mounted Displays: A Systematic Review and Meta-Analysis. Front. Hum. Neurosci. 2020, 14, 96:1–96:17.
61. Chang, E.; Kim, H.T.; Yoo, B. Virtual Reality Sickness: A Review of Causes and Measurements. Int. J. Hum. Comput. Int. 2020, 36, 1658–1682.
Figure 1. Process flow of the proposed foveated DoF technique showing the intermediate outputs. Fixation is at the center of the red sphere.
Figure 2. Illustration of the circle of confusion concept. Point of fixation is at distance D f . Point located at distance D p forms a circle on the retina with diameter C. A denotes the aperture and s is the posterior nodal distance.
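For readers who want to reproduce the geometry in Figure 2, the blur-circle diameter is commonly approximated by the thin-lens, small-angle relation below (cf. Held et al. [48]); this is a standard formulation consistent with the caption's symbols, not necessarily the paper's exact implementation:

\[ C = A \, s \, \left| \frac{1}{D_f} - \frac{1}{D_p} \right| \]

For illustrative values A = 4 mm, s = 17 mm, D_f = 0.5 m, and D_p = 2 m, this gives C = 0.004 × 0.017 × |2 − 0.5| ≈ 0.1 mm on the retina, or C/s ≈ 0.006 rad ≈ 0.34° of visual angle.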
Figure 3. An example scene along with its associated depth map.
Figure 4. Depth-of-field effects for different planes of fixation. Points of fixation (depth values are reported in red on the images) are on the vase and the front tree in the left and right images, respectively.
Figure 5. Human field-of-view for both eyes showing the foveal, near, mid, and far peripheral regions.
Figure 6. Stereoscopic view of the multi-region foveation output. The central region has no blur applied, while the other two regions (highlighted in green for the sake of visualization only) have different blurs applied to them.
Figure 7. Example of an output from the foveated depth-of-field blur filter.
Figure 8. Rollercoaster track outline. The arrow indicates the direction of motion. The coordinate system follows the convention used in Unity, i.e., X: right direction; Y: up direction; Z: forward direction.
Figure 9. Instantaneous user velocity and acceleration components during each rollercoaster cycle. The coordinate system follows the convention used in Unity, i.e., X: right direction; Y: up direction; Z: forward direction. Seesaw motion: 8–32 s; spiral motion: 36–44 s and 48–64 s.
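The velocity and acceleration traces in Figure 9 can be recovered from logged cart positions by finite differences. The paper does not publish its logging code, so the sketch below (the sampling interval dt and array layout are assumptions) is only illustrative:

    import numpy as np

    def motion_profiles(positions, dt):
        """Per-axis velocity (m/s) and acceleration (m/s^2) from an
        (N, 3) array of cart positions sampled every dt seconds
        (Unity axes: X right, Y up, Z forward)."""
        velocity = np.gradient(positions, dt, axis=0)
        acceleration = np.gradient(velocity, dt, axis=0)
        return velocity, acceleration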
Figure 10. Rollercoaster virtual environment. (A) user-view; (B) rollercoaster cart with VR camera attached; (C) top view of the clustered environment.
Figure 11. SSQ scores for the cybersickness experiment (conditions: NB—No Blur; GC—Unity Blur; FD—Ours). The questionnaire was filled in before (Pre) and after (Post) each session. Each plot shows the mean values, averaged over all the participants, and the standard deviations for the three sub-scales and the overall score.
Figure 12. Comparison of the Post-Pre difference of the SSQ scores for each condition (conditions: NB—No Blur; GC—Unity Blur; FD—Ours). The plot shows the changes in individual SSQ scores between the Pre and Post experiment conditions.
Figure 13. IPQ scores for the cybersickness experiment (conditions: NB—No Blur; GC—Unity Blur; FD—Ours). The questionnaire was filled in after each session. NB: Involvement 3.57, Experienced Realism 4.07, Spatial Presence 5.09; GC: Involvement 3.60, Experienced Realism 3.57, Spatial Presence 4.90; FD: Involvement 3.83, Experienced Realism 4.53, Spatial Presence 5.21.
Figure 14. Average heart rate fluctuation from the resting heart rate during a rollercoaster cycle. The origin on the heart rate axis represents the resting heart rate (conditions: NB—No Blur; GC—Unity Blur; FD—Ours).
Figure 15. Heatmap of the visual field for user gaze, combined over all sessions performed. The circles are centered at the center of the HMD screen and indicate the visual angle (e.g., the 10° circle represents the central 20° of visual eccentricity). The colors represent how frequently the user fixated at a particular location on the HMD screen, with white representing 0 occurrences and black representing 9358.
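A gaze heatmap such as Figure 15 can be built by binning gaze positions (in degrees of visual angle from the screen center) into a 2D histogram. The sketch below is a minimal illustration; the bin count and field extent are assumptions, not the paper's parameters:

    import numpy as np

    def gaze_heatmap(screen_xy_deg, bins=180, extent_deg=60.0):
        """2D histogram of gaze samples over the HMD screen.
        screen_xy_deg: (N, 2) gaze positions in degrees of visual
        angle from the screen center. The peak bin count reported
        for Figure 15 was 9358."""
        edges = np.linspace(-extent_deg, extent_deg, bins + 1)
        heatmap, _, _ = np.histogram2d(
            screen_xy_deg[:, 0], screen_xy_deg[:, 1], bins=(edges, edges))
        return heatmap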
Figure 16. Histogram of eye angular speeds greater than 350°/s across all users during saccades.
Figure 17. Comparison of the Post–Pre difference of the SSQ scores for each condition with respect to age groups (conditions: NB—No Blur; GC—Unity Blur; FD—Ours). The plot shows the changes in individual SSQ total scores between the Pre and Post experiment conditions for the two age groups. Old: NB 68.34, GC 47.55, FD 22.26; Young: NB 55.03, GC 37.06, FD 19.38.
Figure 18. Comparison of the Post–Pre difference of the SSQ scores for each condition with respect to gender groups (conditions: NB—No Blur; GC—Unity Blur; FD—Ours). The plot shows the changes in individual SSQ total scores between the Pre and Post experiment conditions for the two gender groups. Male: NB 60.67, GC 44.37, FD 21.63; Female: NB 59.84, GC 46.72, FD 19.39.
Table 1. The Wilcoxon rank sum test p-values between Pre and Post states for the different subcategories of the SSQ test (N—Nausea; O—Oculomotor; D—Disorientation; TS—Total Score); (conditions: NB—No Blur; GC—Unity Blur; FD—Ours).
      N          O          D          TS
NB    p = 0.001  p = 0.002  p = 0.002  p = 0.001
GC    p = 0.001  p = 0.003  p = 0.004  p = 0.001
FD    p = 0.005  p = 0.004  p = 0.004  p = 0.003
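The p-values in Table 1 compare the Pre and Post score distributions for each condition and sub-scale. As the paper does not include its analysis code, the following SciPy sketch (function and variable names are illustrative) shows how such values could be computed:

    from scipy.stats import ranksums

    def pre_post_pvalue(pre_scores, post_scores):
        """Two-sided Wilcoxon rank sum test between per-participant
        Pre and Post SSQ scores for one condition/sub-scale."""
        statistic, p_value = ranksums(pre_scores, post_scores)
        return p_value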
Table 2. The mean, standard deviation, and 95% confidence intervals of the Post-Pre difference of the SSQ scores for each condition (N—Nausea; O—Oculomotor; D—Disorientation; TS—Total Score); (conditions: NB—No Blur; GC—Unity Blur; FD—Ours).
         Mean (Standard Deviation)    95% Confidence Interval
NB—N     49.29 (5.81)                 [43.14, 55.44]
NB—O     53.48 (6.56)                 [46.27, 60.69]
NB—D     54.13 (7.83)                 [46.08, 62.19]
NB—TS    60.26 (7.16)                 [52.65, 67.85]
GC—N     30.74 (8.44)                 [26.91, 34.57]
GC—O     39.58 (11.61)                [33.65, 45.52]
GC—D     46.40 (11.88)                [40.86, 51.94]
GC—TS    44.05 (11.14)                [38.92, 49.17]
FD—N     16.96 (9.07)                 [12.97, 20.95]
FD—O     46.40 (5.09)                 [10.56, 17.79]
FD—D     25.52 (10.56)                [21.05, 29.99]
FD—TS    20.51 (7.63)                 [16.57, 24.42]
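The summary statistics in Table 2 can be reproduced from the per-participant Post-Pre differences. The paper does not state whether a normal or t-based interval was used; the sketch below assumes a normal approximation:

    import numpy as np
    from scipy import stats

    def mean_sd_ci(diff_scores, confidence=0.95):
        """Mean, sample standard deviation, and confidence interval
        of Post-Pre SSQ differences (normal approximation)."""
        diff = np.asarray(diff_scores, dtype=float)
        mean = diff.mean()
        sd = diff.std(ddof=1)
        z = stats.norm.ppf(0.5 + confidence / 2.0)
        half_width = z * sd / np.sqrt(len(diff))
        return mean, sd, (mean - half_width, mean + half_width)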
Table 3. Comparison among different techniques for reducing cybersickness. ΔS is the reduction in the mean sickness scores between the no-effect condition and the best performing condition/parameters.
Technique                          HMD                 VE/Task            ΔS
Dynamic FOV modification [19]      Oculus Rift DK2     Reach waypoints     5.6%
Rotation blurring [22]             Oculus Rift DK2     FPS shooter game   17.9%
Peripheral visual effects [23]     HTC Vive            Find objects       49.1%
FOV reduction (vignetting) [24]    HTC Vive            Follow butterfly   30.1%
Dynamic blurring (saliency) [25]   HTC Vive            Race track         35.2%
Static peripheral blur [43]        HTC Vive Pro        Maze               54.8%
Unity depth blur                   HTC Vive Pro Eye    Rollercoaster      26.9%
Foveated DoF (ours)                HTC Vive Pro Eye    Rollercoaster      66.0%
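Reading ΔS as the relative drop in mean total SSQ score with respect to the no-blur baseline reproduces the last two rows from Table 2:

\[ \Delta S = \frac{S_{\mathrm{NB}} - S_{\mathrm{cond}}}{S_{\mathrm{NB}}}, \qquad \frac{60.26 - 44.05}{60.26} \approx 26.9\%, \qquad \frac{60.26 - 20.51}{60.26} \approx 66.0\%. \]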
Table 4. Comparison of angular speed during saccadic motion for each user. Number of occurrences of speeds greater than 200°/s and the peak speed observed are shown. (conditions: NB—No Blur; GC—Unity Blur; FD—Ours).
User    NB >200°/s   NB Peak   GC >200°/s   GC Peak   FD >200°/s   FD Peak
AT       106         810°/s     89          502°/s     59          354°/s
CT       132         784°/s    108          544°/s     96          497°/s
EV        88         859°/s     99          743°/s     74          556°/s
GB       136         546°/s     90          650°/s    101          549°/s
HR       115         773°/s    125          663°/s     97          568°/s
KK        78         593°/s     71          539°/s     84          542°/s
LH       132         731°/s     93          707°/s    103          581°/s
MB        87         581°/s    116          582°/s     63          431°/s
MM       112         703°/s     95          697°/s     88          553°/s
ND       101         802°/s    107          718°/s     71          655°/s
NR        86         824°/s    119          702°/s    105          603°/s
OQ        88         595°/s     92          629°/s     95          612°/s
SA       106         697°/s    105          735°/s     94          514°/s
SR        97         710°/s     82          657°/s     68          570°/s
TB       113         688°/s     89          617°/s     87          545°/s
UG       115         591°/s     84          623°/s     89          511°/s
US        92         597°/s    111          502°/s     89          533°/s
YK        67         351°/s    142          661°/s     67          508°/s
Total   1999         859°/s   1923          743°/s   1619          655°/s
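The per-user counts and peaks in Table 4 can be derived from eye-tracker samples by thresholding the angular speed between successive gaze directions. The paper's exact saccade-detection code is not given, so the following is a hedged sketch (unit gaze vectors and a fixed sampling interval are assumptions):

    import numpy as np

    def saccade_stats(gaze_dirs, dt, threshold_deg=200.0):
        """Count samples whose angular speed exceeds threshold_deg
        (deg/s) and report the peak speed, as in Table 4.
        gaze_dirs: (N, 3) unit gaze vectors; dt: sampling interval (s)."""
        dots = np.einsum('ij,ij->i', gaze_dirs[:-1], gaze_dirs[1:])
        angles = np.degrees(np.arccos(np.clip(dots, -1.0, 1.0)))
        speed = angles / dt
        return int(np.sum(speed > threshold_deg)), float(speed.max())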
Table 5. Frame rate comparison (conditions: NB—No Blur; GC—Unity Blur; FD—Ours).
System   Average Processing Time   95% Confidence Interval   Frame Rate
NB       15.9 ms                   [15.9 ms, 15.9 ms]        63 Hz
GC       17.2 ms                   [17.1 ms, 17.3 ms]        58 Hz
FD       16.7 ms                   [16.6 ms, 16.8 ms]        60 Hz
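As a consistency check, the frame rates in Table 5 are the reciprocals of the average processing times: 1/15.9 ms ≈ 63 Hz, 1/17.2 ms ≈ 58 Hz, and 1/16.7 ms ≈ 60 Hz.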
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
