Large-scale scientific simulations typically output massive amounts of data that must be later read in for post-hoc visualization and analysis. With codes simulating complex phenomena at ever-increasing fidelity, writing data to disk during this traditional high-performance computing workflow has become a significant bottleneck. In situ workflows offer a solution to this bottleneck, whereby data is simultaneously produced and analyzed without involving disk storage. In situ analysis can increase efficiency for domain scientists who are exploring a data set or fine-tuning visualization and analysis parameters. Our work seeks to enable researchers to easily create and interactively analyze large-scale simulations through the use of Jupyter Notebooks without requiring application developers to explicitly integrate in situ libraries.
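As a rough illustration of the notebook-side interaction this enables, the sketch below shows what interactive in situ analysis from a Jupyter cell might look like. The `insitu_client` module, its `connect()` and `get_field()` calls, and the host and field names are all hypothetical stand-ins for illustration, not the paper's actual API.

```python
# Hypothetical notebook-side sketch: attach to a running simulation and
# inspect its current state interactively. "insitu_client", connect(), and
# get_field() are illustrative names, not the actual interface.
import matplotlib.pyplot as plt
from insitu_client import connect  # hypothetical helper module

sim = connect("simulation-host:9000")         # attach to the running simulation
field = sim.get_field("temperature")          # current state as a NumPy array
plt.imshow(field[:, :, field.shape[2] // 2], cmap="inferno")  # mid-plane slice
plt.colorbar(label="temperature")
plt.show()
```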
When dealing with extremely large data sets or computationally expensive rendering pipelines, local workstations may not be able to render the full data set or maintain interactive frame rates. In these cases, high-performance graphics clusters can be leveraged for distributed rendering. However, this has traditionally removed real-time feedback from the visualization system. To harness both the power of distributed rendering and the real-time nature of local rendering, we developed PxStream, a streaming framework that transfers dynamically rendered images from high-performance graphics clusters to remote machines in real time. PxStream clients can range from a standard computer with a single monitor to a cluster-driven tiled display wall. Additionally, the PxStream server supports multiple concurrent endpoints, allowing collaborators at different physical locations to simultaneously view the image stream. Initial tests demonstrate that PxStream can simultaneously stream 66 megapixel images to two locations at nearly 50 frames per second. Index Terms: Human-centered computing—Visualization; Human-centered computing—Collaborative and social computing; Theory of computation—Distributed algorithms.
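The abstract does not detail PxStream's transport or wire format. As a minimal sketch of the one-to-many streaming idea (one server, multiple concurrent endpoints), the example below uses ZeroMQ publish/subscribe; that choice is an assumption for illustration, not PxStream's actual mechanism.

```python
# Illustrative one-to-many frame streaming with ZeroMQ PUB/SUB; this is an
# assumed stand-in for PxStream's transport, not its actual implementation.
import numpy as np
import zmq

ctx = zmq.Context()

# Server side: every connected endpoint receives each published frame.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")

frame = np.zeros((4320, 7680, 3), dtype=np.uint8)          # stand-in for a rendered frame
pub.send_json({"frame_id": 0, "shape": frame.shape, "dtype": str(frame.dtype)},
              flags=zmq.SNDMORE)                            # metadata part
pub.send(frame.tobytes())                                   # pixel data part

# Client side (run on each viewing endpoint):
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://render-cluster:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"")
meta = sub.recv_json()
pixels = np.frombuffer(sub.recv(), dtype=meta["dtype"]).reshape(meta["shape"])
```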
Surround-view panoramic images and videos have become a popular form of media for interactive viewing on mobile devices and virtual reality headsets. Viewing such media provides a sense of immersion by allowing users to control their view direction and experience an entire environment. When using a virtual reality headset, the level of immersion can be improved by leveraging stereoscopic capabilities. Stereoscopic images are generated in pairs, one for the left eye and one for the right eye, providing an important depth cue for the human visual system. For computer-generated imagery, rendering proper stereo pairs is well understood for a fixed view. However, it is much more difficult to create omnidirectional stereo pairs for a surround-view projection that work well when looking in any direction. One major drawback of traditional omnidirectional stereo images is that they suffer from binocular misalignment in the peripheral vision as a user's view direction approaches the zenith or nadir (north or south pole) of the projection sphere. This paper presents a real-time geometry-based approach for omnidirectional stereo rendering that fits into the standard rendering pipeline. Our approach includes tunable parameters that enable pole merging: a reduction in the stereo effect near the poles that can minimize binocular misalignment. Results from a user study indicate that pole merging reduces visual fatigue and discomfort associated with binocular misalignment without inhibiting depth perception.
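To make the pole-merging idea concrete, the sketch below computes a per-direction stereo eye offset whose magnitude falls off to zero as the view direction approaches a pole. The coordinate conventions, parameter names, and the linear falloff are illustrative assumptions; the paper's exact formulation and shader implementation may differ.

```python
# Sketch of a pole-merged stereo eye offset (y-up coordinates). The linear
# falloff and default angles are illustrative choices, not the paper's exact
# formulation.
import numpy as np

def eye_offset(view_dir, ipd=0.065, merge_start=np.radians(60.0),
               merge_end=np.radians(85.0), right_eye=True):
    """Offset to apply to the eye position for a given unit view direction."""
    x, y, z = view_dir
    elevation = abs(np.arcsin(np.clip(y, -1.0, 1.0)))   # angle above/below the horizon
    t = (elevation - merge_start) / (merge_end - merge_start)
    scale = 1.0 - np.clip(t, 0.0, 1.0)                   # 1 away from the poles, 0 at merge_end
    horiz = np.array([x, 0.0, z])
    norm = np.linalg.norm(horiz)
    if norm < 1e-6:                                      # looking straight up or down: fully merged
        return np.zeros(3)
    right = np.array([-z, 0.0, x]) / norm                # horizontal direction to the viewer's right
    sign = 1.0 if right_eye else -1.0
    return sign * (ipd / 2.0) * scale * right

# Example: looking 70 degrees above the horizon, the eye separation is already
# reduced; at the zenith it vanishes entirely.
d = np.array([0.0, np.sin(np.radians(70.0)), -np.cos(np.radians(70.0))])
print(eye_offset(d))
```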
Ultra-high-resolution visualizations of large-scale data sets are often rendered using a remotely located graphics cluster that does not have a connected display. In such instances, rendered images must either be streamed over a network for live viewing or saved to disk for later viewing. This process introduces the additional overhead of transferring data off the GPU device. We present early work on real-time compression of rendered visualizations that aims to reduce both the device-to-host data transfer time and the I/O time for streaming or writing to disk. By using OpenGL / CUDA interop, images are compressed on the GPU prior to transferring the data to main memory. Although performing compression has a computational cost, our results show that this overhead is more than offset by the reduced data transfer and I/O times.
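The sketch below is only a stand-in for the compress-before-transfer pattern: the real system uses OpenGL / CUDA interop and a real-time image codec, whereas here a float framebuffer is simply quantized to 8 bits on the device (using CuPy, an assumed dependency) so that the host receives a quarter of the original bytes.

```python
# Compress-before-transfer pattern, sketched with CuPy. Quantizing to uint8
# stands in for the actual GPU-side codec; the point is that only the smaller
# buffer ever crosses the device-to-host boundary.
import cupy as cp

fb = cp.zeros((2160, 3840, 4), dtype=cp.float32)            # stand-in for a rendered RGBA frame on the GPU
packed = (cp.clip(fb, 0.0, 1.0) * 255.0).astype(cp.uint8)   # "compression" happens on the device
host = cp.asnumpy(packed)                                    # device-to-host copy moves 1/4 of the data
print(f"{fb.nbytes / 1e6:.0f} MB on device -> {host.nbytes / 1e6:.0f} MB transferred")
```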
Major changes:
- added support for warping and edge blending (thanks @voidcycles)
- added support for Xcode (thanks @koosha94)
- modules no longer need to be registered with a hub repository; omegalib can be told to use any GitHub repository containing an omegalib module
- added new OpenGL 3 GPU API (GpuProgram, GpuBuffer, Uniform, etc.)
- added support for using external Python distributions on Windows

Fixes:
- fixed the module dependency solver
- fixed CMake files for including omegalib in external applications
- several fixes to OpenGL core profile support
- improved packaging scripts, including support for packaging installers on OS X

Full changelog: https://github.com/uic-evl/omegalib/compare/v13.1...v15.0
Extreme scale analytics often requires distributed memory algorithms in order to process the volume of data output by high-performance simulations. Traditionally, these analysis routines post-process data saved to disk after a simulation has completed. However, concurrently executing the simulation and the analysis can yield great benefits: reduced or eliminated disk I/O, increased output frequency to improve fidelity, and ultimately shorter time-to-discovery. One such method for concurrent simulation and analysis is in transit analysis: transferring data from the resource running the simulation to a separate resource running the analysis. In transit analysis can be beneficial because the compute resources running the simulation may lack hardware needed for analysis (e.g., GPUs), and because it reduces the impact of analysis tasks on the simulation's run time. The work described in this paper compares three techniques for transferring data between distributed memory applications: 1) writing data to and reading data from a parallel file system, 2) copying data into and out of a network-accessed shared memory pool, and 3) streaming data in parallel from the processes in the simulation application to the processes in the analysis application. Our results show that using a shared memory pool and streaming data over high-bandwidth networks can both drastically increase I/O speeds and lead to quicker analysis.
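As a much-simplified illustration of the third technique (parallel streaming between the two applications), the mpi4py sketch below co-locates "simulation" and "analysis" roles in a single MPI job, with each simulation rank sending its partition directly to a partner analysis rank. In the actual study the two sides are separate applications and the transfer happens over the network; the sizes and pairing here are made up for illustration.

```python
# Minimal single-program sketch of the "streaming" transfer path: even ranks
# play the simulation, odd ranks the analysis, and each simulation rank sends
# its partition directly to its paired analysis rank over MPI.
# Run with e.g.:  mpirun -n 4 python stream_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank % 2 == 0:                                        # "simulation" rank
    data = np.full(1_000_000, rank, dtype=np.float64)    # stand-in for a timestep's field
    comm.Send(data, dest=rank + 1, tag=0)                # stream it to the paired analysis rank
else:                                                    # "analysis" rank
    buf = np.empty(1_000_000, dtype=np.float64)
    comm.Recv(buf, source=rank - 1, tag=0)
    print(f"analysis rank {rank}: mean = {buf.mean():.1f}")
```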
High-performance distributed memory applications often load or receive data in a format that differs from what the application uses. One such difference arises from how the application distributes data for parallel processing: data must be redistributed from the layout in which it was produced to the layout the application needs across its processes. In this paper, we present a large-scale distributed memory library, exposed through an easily integrated API, that automates data redistribution in MPI-enabled applications. We then present the results of two scientific computing use cases that evaluate our library. The first use case highlights how dynamic data redistribution can greatly reduce load time when reading three-dimensional medical imaging data from disk. The second use case highlights how dynamic data redistribution can facilitate in-transit analysis of computational fluid dynamics, which results in smaller data output sizes and faster time-to-discovery.
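The core operation such a library automates can be sketched with MPI's all-to-all-v exchange: each rank computes how its producer-layout block overlaps every rank's consumer-layout block and exchanges exactly those pieces. The 1-D layouts and sizes below are made up for illustration; the library described here handles this bookkeeping (and its multi-dimensional analogues) behind an API.

```python
# Sketch of dynamic data redistribution with MPI Alltoallv (mpi4py): a 1-D
# global array moves from one contiguous block decomposition to a different
# one. Layouts and sizes are illustrative only.
# Run with e.g.:  mpirun -n 4 python redistribute_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1000
old_bounds = np.linspace(0, N, size + 1).astype(int)                    # producer layout: equal blocks
new_bounds = ((np.linspace(0.0, 1.0, size + 1) ** 2) * N).astype(int)   # consumer layout: uneven blocks

my_data = np.arange(old_bounds[rank], old_bounds[rank + 1], dtype=np.float64)

# How much of my producer block goes to each rank's consumer block...
send_counts = np.array([max(0, min(old_bounds[rank + 1], new_bounds[d + 1])
                            - max(old_bounds[rank], new_bounds[d])) for d in range(size)])
# ...and how much of my consumer block comes from each rank's producer block.
recv_counts = np.array([max(0, min(old_bounds[s + 1], new_bounds[rank + 1])
                            - max(old_bounds[s], new_bounds[rank])) for s in range(size)])

recv_buf = np.empty(recv_counts.sum(), dtype=np.float64)
comm.Alltoallv([my_data, send_counts], [recv_buf, recv_counts])
# recv_buf now holds global indices new_bounds[rank] .. new_bounds[rank + 1] - 1
print(f"rank {rank}: {len(my_data)} -> {len(recv_buf)} elements")
```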
In this paper we present a novel technique for 3D micro capillary bed model reconstruction and computational fluid dynamics (CFD) calculation to simulate morphological and blood perfusion parameters. Major arterial and venous cerebral blood vessels were reconstructed from scanning electron microscope (SEM) images, and vessels whose diameters are beyond the resolution of modern imaging techniques were grown from this base structure using our novel directed interactive growth algorithm (DIGA). 3D Voronoi networks were used to represent the microvasculature capillary network that joins arterial vessels to adjacent draining veins. The resulting network is morphologically consistent with in vivo measurements of the functional unit, with accurate values for vessel density (3.17%) and surface-area-to-tissue-volume ratio (5.84%). Perfusion patterns of supply to the functional unit and systemic pressure drops match those expected in living tissue and indicate the model is a good candidate for exploring the hemodynamic phenomenon of autoregulation.
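In the spirit of the Voronoi-based microvasculature described above, the sketch below builds a capillary-like network as the edge graph of a 3-D Voronoi diagram with SciPy. The seed-point count and unit-cube domain are arbitrary, and the paper's DIGA growth step and morphological constraints are not reproduced here.

```python
# Sketch: capillary-like network from the edge graph of a 3-D Voronoi diagram.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
seeds = rng.random((200, 3))              # random seed points in a unit cube of "tissue"
vor = Voronoi(seeds)

edges = set()
for face in vor.ridge_vertices:           # each ridge is a polygonal face between two cells
    if -1 in face:                        # skip faces extending to infinity
        continue
    for a, b in zip(face, face[1:] + face[:1]):   # polygon edges become capillary segments
        edges.add((min(a, b), max(a, b)))

lengths = [np.linalg.norm(vor.vertices[a] - vor.vertices[b]) for a, b in edges]
print(f"{len(edges)} segments, mean length {np.mean(lengths):.3f}")
```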
Head mounted displays (HMDs) can provide users with an immersive virtual reality (VR) experience, but are often limited to viewing a single environment or data set at a time. In this paper, we describe a system of networked applications whereby co-located users in the real world can use a large-scale display wall to collaborate and share data with immersed users wearing HMDs. Our work focuses on the sharing of 360° surround-view panoramic images and contextual annotations. The large-scale display wall affords non-immersed users the ability to view a multitude of contextual information, while the HMDs afford users the ability to immerse themselves in a virtual scene. This asymmetric virtual reality collaboration between immersed and non-immersed individuals can lead to deeper understanding and the feeling of a shared experience. We highlight a series of use cases: two digital humanities projects that capture real locations using a 360° camera, and one scientific discovery project that uses computer-generated 360° surround-view panoramas. In all cases, groups benefit from both the immersive capabilities of HMDs and the collaborative affordances of large-scale display walls, and a unified experience is created for all users.
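As a purely hypothetical example of the kind of shared state such networked applications might exchange, the snippet below serializes a contextual annotation that a display-wall controller could broadcast to connected HMD clients. The field names, message type, and transport are assumptions for illustration, not the paper's actual protocol.

```python
# Hypothetical annotation-sync message between the display-wall application
# and HMD clients; field names and transport are illustrative assumptions.
import json

annotation = {
    "panorama_id": "garden_entrance",            # which 360-degree image the note belongs to
    "direction": {"yaw": 42.0, "pitch": -5.0},   # where on the sphere the marker sits
    "label": "Mural, south wall",
    "author": "wall-user-1",
}
message = json.dumps({"type": "annotation.add", "payload": annotation})
# A display-wall controller would broadcast `message` to every connected HMD
# client (e.g., over WebSockets), which then places the marker in the scene.
```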
Head mounted displays (HMDs) can provide users with an immersive virtual reality (VR) experience, but are often limited to viewing a single environment or data set at a time. In this position paper, we argue that co-located users in the real world can help provide additional context and steer virtual experiences. With the use of a separate canvas, such as a large-scale display wall, non-immersed users can view a multitude of contextual information. This information can be used to drive the VR user's interactions and lead to deeper understanding. We highlight two digital humanities use cases that capture real locations using a 360° camera: 1) urban art and 2) urban community gardens. In both cases, HMDs allow users to view a space and its surroundings, while non-immersed users can help with tasks such as placing overlays with auxiliary information, navigating between related spaces, and directing the VR user's actions.