John Dingliana
  • Dublin, Leinster, Ireland
Generalized cylinders are a versatile class of objects commonly constructed from a spine and cross-sections orthogonal to it. We propose a novel method for the intuitive sketch-based specification of arbitrarily complex spines, including those that loop over and under themselves. A two-dimensional sketch of the spine is first drawn using a pen and a graphics tablet. This is surrounded by a swept-sphere bounding volume representing the generalized cylinder. Any overlapping sections are automatically offset perpendicular to the sketch plane, their ordering controlled by pen pressure. The user may adjust the resulting shape by oversketching or by rotating the view and dragging points in the spine. All user input is processed by an optimization that generates a smooth, non-intersecting shape at interactive speeds.
People are very sensitive to physical events occurring around them... We know that one solid object cannot merge into another; we make decisions about the properties of objects based on the way in which they interact with each other; we judge whether objects are animate or inanimate depending on whether we perceive them as moving of their own volition, or as being "caused" to move by another object (referred to as the perception of causality [Mic68]). Many studies have shown that these perceptual mechanisms are established very early in infancy (e.g. see [BSW85])... but how accurate are they? Research in the realm of physics education has shown that most people have erroneous, yet very robust, preconceptions regarding the physical behaviour of objects [Cle82]. This is obviously not a good thing if you are trying to teach introductory mechanics, but it could be very useful if you are trying to get away with fast, yet plausible, physically-based simulations for real-time ap...
In this paper we present an automated approach for optimizing the conspicuity of features in 3D volume visualization. By iteratively adjusting the opacity transfer function, we are able to generate visualizations that satisfy a user-specified target distribution defining the relative conspicuity of particular features in the data set. Our approach exploits a metric, called Visibility-Weighted Saliency (VWS), that accounts for both view-dependent occlusion and visual saliency in defining the visibility of features in volume data. A parallel line search strategy is presented to improve the performance of the optimization mechanism. We demonstrate that the approach is able to achieve promising results in optimizing visualizations of both static and time-varying volume data.
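The iterative adjustment can be pictured as a simple feedback loop. The sketch below is illustrative only: the paper's actual optimizer uses a parallel line search, and `render_conspicuity` is an assumed callable that renders the volume with the given per-feature opacity multipliers and returns the measured conspicuity distribution.

```python
def optimize_opacity(render_conspicuity, alphas, target, iters=50, gain=0.5):
    """Illustrative feedback loop (not the paper's line search):
    nudge each feature's opacity multiplier towards the
    user-specified target conspicuity, clamping to [0, 1]."""
    a = list(alphas)
    for _ in range(iters):
        measured = render_conspicuity(a)
        a = [min(1.0, max(0.0, ai + gain * (ti - mi)))
             for ai, ti, mi in zip(a, target, measured)]
    return a
```

With a toy model where conspicuity is simply the normalised opacity, the loop converges on the target distribution within a few dozen iterations.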
We present an automated technique to optimize the clarity of features in visualizations of 3D volume datasets. By adjusting the opacity transfer function, we achieve user-specified target distributions of feature conspicuity. Unlike previous techniques, our approach accounts for both view-dependent occlusion and the visual saliency of features in volume data. We demonstrate how the automated approach is useful in particular for optimizing the visualization of time-varying volume datasets.
This paper introduces an empirical, perceptually-based method which exploits the temporal coherence in consecutive frames to reduce the CPU-GPU traffic size during real-time visualization of time-varying volume data. In this new scheme, a multi-threaded CPU mechanism simulates GPU pre-rendering functions to characterize the local behaviour of the volume. These functions exploit the temporal coherence in the data to reduce the sending of complete per-frame datasets to the GPU. These predictive computations are designed to be simple enough to run in parallel on the CPU while improving the general performance of GPU rendering. Tests provide evidence that we are able to considerably reduce the texture size transferred at each frame without losing visual quality, while maintaining performance compared to sending entire frames to the GPU. The proposed framework is designed to be scalable to client/server network-based implementations to deal with multi-user systems.
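One simple way to exploit temporal coherence in this spirit is to partition the volume into bricks and re-upload only those that changed beyond a tolerance. This is a hypothetical sketch, not the paper's predictive CPU mechanism; the brick size and tolerance are assumed parameters.

```python
import numpy as np

def changed_bricks(prev, curr, brick=8, tol=0.01):
    """Sketch: flag bricks whose mean absolute change since the
    previous frame exceeds a tolerance, so only those bricks
    need to be re-uploaded to GPU texture memory."""
    flags = []
    nx, ny, nz = prev.shape
    for x in range(0, nx, brick):
        for y in range(0, ny, brick):
            for z in range(0, nz, brick):
                d = np.abs(curr[x:x+brick, y:y+brick, z:z+brick]
                           - prev[x:x+brick, y:y+brick, z:z+brick]).mean()
                if d > tol:
                    flags.append((x, y, z))
    return flags
```

In a coherent sequence most bricks fall below the tolerance, so per-frame transfer shrinks to the changed subset.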
If the computational demands of an interactive graphics rendering application cannot be met by a single commodity Graphics Processing Unit (GPU), multiple graphics accelerators may be utilised on multi-GPU systems such as SLI [1] or Crossfire [2], or by a cluster of PCs in conjunction with a software infrastructure. Typically these PC cluster solutions allow the application programmer to use a standard OpenGL API. In this paper we describe an FPGA-based hardware architecture, which provides an interface for multiple commodity graphics accelerators. Our scalable parallel rendering architecture aims to accelerate graphics applications using a tightly coupled hybrid system of parallel commodity GPUs and reconfigurable hardware [3], while providing similar services to the above-mentioned approach. This is work in progress. So far, we have designed and manufactured the required custom hardware. Currently, we are focusing on implementing the shared-memory subsystem.
For many systems that produce physically based animations, plausibility rather than accuracy is acceptable. We consider the problem of evaluating the visual quality of animations in which physical parameters have been distorted or degraded, either unavoidably due to real-time frame-rate requirements, or intentionally for aesthetic reasons. To date, no generic means of evaluating or predicting the fidelity, either physical or visual, of the dynamic events occurring in an animation exists. As a first step towards providing such a metric, we present a set of psychophysical experiments that established some thresholds for human sensitivity to dynamic anomalies, including angular, momentum and spatio-temporal distortions applied to simple animations depicting the elastic collision of two rigid objects. In addition to finding significant acceptance thresholds for these distortions under varying conditions, we identified some interesting biases that indicate non-symmetric responses to thes...
Interactive simulation is made possible in many applications by simplifying or culling the finer details that would make real-time performance impossible. This paper examines detail simplification in the specific problem of collision handling for rigid body animation. We present an automated method for calculating consistent collision response at different levels of detail. The mechanism works closely with a system which uses a pre-computed hierarchical volume model for collision detection.
References
1. Barzel, R., Hughes, J.F., & Wood, D.N. (1996). Plausible motion simulation for computer graphics animation. Computer Animation and Simulation '96, 183-197.
2. Chenney, S. & Forsyth, D.A. (2000). Sampling plausible solutions to multi-body constraint problems. In Proceedings of SIGGRAPH 2000, 219-228.
3. Clement, J. (1982). Students' preconceptions in introductory mechanics. American Journal of Physics, 50(1), 66-71.
4. Gilden, D. & Proffitt, D. (1989). Understanding collision dynamics. Journal of Experimental Psychology: Human Perception and Performance, 15(2), 372-383.
5. Michotte, A. (1963). The perception of causality. New York: Basic Books.
6. Proffitt, D. & Gilden, D. (1989). Understanding natural dynamics. Journal of Experimental Psychology: Human Perception and Performance, 15(2), 384-393.
Time-varying volume data is used in many areas of science and engineering. However, visualizations of such data are not easy for users to visually process due to the amount of information that can be presented simultaneously. In this paper, we propose a novel visualization approach which modulates focus, emphasizing important information, by adjusting the saturation and brightness of voxels based on an importance measure derived from temporal and multivariate information. By conducting a voxel-wise analysis of a number of consecutive frames, we acquire a volatility measure for each voxel. We then use intensity, volatility and additional multivariate information to determine the opacity, saturation and brightness of the voxels. The method was tested in visualizing a multivariate hurricane data set. The results suggest that our approach can give the user a more detailed understanding of the data by presenting multiple information variables in one self-contained visualization.
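A per-voxel volatility measure of this kind can be sketched as the standard deviation of each voxel's intensity over a window of consecutive frames. This is an illustrative stand-in; the paper's exact measure and normalisation may differ.

```python
import numpy as np

def voxel_volatility(frames):
    """Sketch: per-voxel standard deviation over a window of
    consecutive frames, normalised to [0, 1] so it can directly
    modulate saturation or brightness."""
    stack = np.stack(frames)      # shape (t, x, y, z)
    vol = stack.std(axis=0)
    m = vol.max()
    return vol / m if m > 0 else vol
```

Stable voxels score near 0 and rapidly changing voxels near 1, giving a direct importance weight for the focus modulation described above.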
We investigate the use of Principal Component Analysis (PCA) for image-based volume rendering. We compute an eigenspace using training images, pre-rendered using a standard raycaster, from a spherically distributed range of camera positions. Our system is then able to synthesize novel views of the data set with minimal computation at run time. Results indicate that PCA is able to sufficiently learn the full volumetric model through a finite number of training images and generalize the computed eigenspace to produce high quality novel view images.
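The eigenspace construction and reconstruction steps can be sketched with an SVD over flattened training renders. This is a minimal sketch of the general PCA machinery, not the paper's exact pipeline; function and parameter names are assumed.

```python
import numpy as np

def fit_eigenspace(images, k):
    """Sketch: flatten training renders, subtract the mean image,
    and keep the top-k principal directions via SVD."""
    X = np.stack([im.ravel() for im in images]).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]           # mean image, k eigen-images

def reconstruct(image, mean, basis):
    """Project a view into the eigenspace and reconstruct it."""
    coeff = basis @ (image.ravel() - mean)
    return (mean + basis.T @ coeff).reshape(image.shape)
```

At run time, a novel view is synthesised by interpolating eigenspace coefficients between nearby training views rather than raycasting the full volume.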
In recent years, thanks to the increasing computational power available, real time computer animation has naturally evolved to model more complex and computationally expensive scenes. Consequently, all the problems concerning physical modelling need further research to tackle these new requirements, especially the problem of collision detection for deformable objects. Most existing solutions cannot be trivially extended, because they are strongly based on the assumption that the shape of the object is fixed. In this paper we propose a general approach to reduce the cost of collision detection between explicitly represented deformable objects, regardless of the specific geometrical and physical manner in which they are modelled.
We present a system for time-critical ray-cast direct volume rendering which can be easily integrated into existing acceleration techniques. Our system modifies the global sampling rate of the scene based on knowledge of past frame rates, and quickly and robustly converges on a user-specified frame rate while requiring no implementation overhead. We have tested our technique on a wide variety of datasets; our system quickly adapts to changes in scene complexity and transfer function, and dramatically reduces the large swings in frame rate that traditionally occur as the user navigates a complex volume dataset.
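A frame-rate-driven sampling controller of this general shape can be sketched as a damped, clamped feedback rule. This is an illustrative sketch under assumed parameter names, not the paper's exact update.

```python
def adapt_sampling_rate(rate, measured_fps, target_fps,
                        gain=0.5, lo=0.05, hi=1.0):
    """Sketch of a time-critical controller: scale the global
    sampling rate by the ratio of measured to target frame rate,
    damped by a gain and clamped to a sensible range."""
    new_rate = rate * (1.0 + gain * (measured_fps / target_fps - 1.0))
    return max(lo, min(hi, new_rate))
```

When the renderer runs below target, the sampling rate drops (fewer samples per ray, faster frames); when it runs above target, quality is restored up to the clamp.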
Volume visualization has been widely used to depict complicated 3D structures in volume data sets. However, obtaining clear visualization of the features of interest in a volume is still a major challenge. The clarity of features depends on the transfer function, the viewpoint and the spatial distribution of features in the volume data set. In this paper, we propose visibility-weighted saliency as a measure of visual saliency of features in volume rendered images, in order to assist users in choosing suitable viewpoints and designing effective transfer functions to visualize the features of interest. Visibility-weighted saliency is based on a computational measure of perceptual importance of voxels and the visibility of features in volume rendered images. The effectiveness of this scheme is demonstrated by test results on two volume data sets.
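At its core, such a measure combines per-voxel perceptual importance with view-dependent visibility. The function below is a hypothetical sketch of that combination for a single feature, not the paper's exact formulation.

```python
import numpy as np

def visibility_weighted_saliency(visibility, saliency, feature_mask):
    """Sketch: for one feature, sum per-voxel visibility weighted
    by perceptual saliency, normalised by the feature's total
    saliency, giving a score in [0, 1]."""
    v = visibility[feature_mask]
    s = saliency[feature_mask]
    total = s.sum()
    return float((v * s).sum() / total) if total > 0 else 0.0
```

A fully occluded feature scores 0 regardless of its saliency, while a fully visible one scores 1, so the measure can rank viewpoints and transfer functions.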
In this paper, we propose an acceleration scheme for mini-batch streaming PCA methods that are based on the Stochastic Gradient Approximation. Our scheme converges to the first k > 1 eigenvectors in a single data pass even when using a very small batch size. We provide empirical convergence results of our scheme based on the spiked covariance model. Our scheme does not require any prior knowledge of the data distribution and hence is well suited for streaming data scenarios. Furthermore, based on empirical evaluations using the spiked covariance model and large-scale benchmark datasets, we find that our acceleration scheme outperforms related state-of-the-art online PCA approaches including SGA, Incremental PCA and Candid Covariance-free Incremental PCA.
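For context, a baseline mini-batch stochastic-gradient PCA update (Oja-style, with QR re-orthonormalisation) can be sketched as follows. This is the unaccelerated baseline the paper improves upon, not the proposed scheme; names and hyperparameters are assumptions.

```python
import numpy as np

def minibatch_sga_pca(batches, k, lr=0.1):
    """Sketch of mini-batch SGA PCA: repeatedly push an orthonormal
    basis W along the empirical covariance-times-W direction, then
    re-orthonormalise with QR. Batches are assumed centred."""
    d = batches[0].shape[1]
    rng = np.random.default_rng(0)
    W = np.linalg.qr(rng.standard_normal((d, k)))[0]
    for X in batches:                  # X has shape (batch, d)
        G = X.T @ (X @ W) / len(X)     # stochastic covariance estimate
        W, _ = np.linalg.qr(W + lr * G)
    return W                           # (d, k) orthonormal columns
```

On data with one dominant direction of variance, a single pass over small batches aligns the first column of W with that direction.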
In this paper, we present an online adaptive PCA algorithm that is able to compute the full dimensional eigenspace per new time-step of sequential data. The algorithm is based on a one-step update rule that considers all second order correlations between previous samples and the new time-step. Our algorithm has O(n) complexity per new time-step in its deterministic mode and O(1) complexity per new time-step in its stochastic mode. We test our algorithm on a number of time-varying datasets of different physical phenomena. Explained variance curves indicate that our technique provides an excellent approximation to the original eigenspace computed using standard PCA in batch mode. In addition, our experiments show that the stochastic mode, despite its much lower computational complexity, converges to the same eigenspace computed using the deterministic mode.
We investigate the use of Principal Component Analysis (PCA) for the visualization of 3D volumetric data. For static volume datasets, we assume, as input training samples, a set of images rendered from spherically distributed viewing positions, using a state-of-the-art volume rendering technique. We compute a high-dimensional eigenspace, that we can then use to synthesize arbitrary views of the dataset with minimal computation at run-time. Visual quality is improved by subdividing the training samples using two techniques: cell-based decomposition into equally sized spatial partitions and a more generalized variant, which we refer to as band-based PCA. The latter approach is further extended for the compression of time-varying volume data directly. This is achieved by taking, as input, the full 3D volumes that comprise the time-steps of the time-varying sequence and generating an eigenspace of volumes. Results indicate that, in both cases, PCA can be used for effective compression wit...
We present an approach for creating non-photorealistic renderings of 3D scenes in real-time. We employ a hybrid system which uses both image-space and object-space techniques for creating fast and effective results. A reliable method of edge detection is presented to find all important edges within a scene. This edge detection technique is combined with a painterly renderer to render the scene using different levels of abstraction. This is used to increase the saliency of important objects and remove extraneous detail. 3D object information is used to apply an object-based segmentation technique which allows each scene object to be rendered using a single abstraction level depending on its scene importance. The abstraction techniques are implemented on the GPU which helps the system achieve interactive rates.
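The image-space half of such a pipeline typically amounts to a gradient-based edge operator run over the rendered buffers. Below is a plain Sobel sketch as an illustration of the general idea; the paper's actual detector also incorporates object-space information, and the threshold here is an assumed parameter.

```python
import numpy as np

def sobel_edges(gray, thresh=0.5):
    """Illustrative image-space edge detection: threshold the
    Sobel gradient magnitude into a boolean edge mask."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    edges = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y-1:y+2, x-1:x+2]
            g = np.hypot((patch * kx).sum(), (patch * ky).sum())
            edges[y, x] = g > thresh
    return edges
```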
In this study, we propose an efficient approach for modelling and compressing large-scale datasets. The main idea is to subdivide each sample into smaller partitions, where each partition constitutes a particular subset of attributes, and then apply PCA to each partition separately. This simple approach enjoys several key advantages over the traditional holistic scheme in terms of reduced computational cost and enhanced reconstruction quality. We study two variants of this approach, namely, cell-based PCA for image datasets, where samples are spatially divided into smaller blocks, and the more general band-based PCA, where attributes are partitioned based on their value distributions.
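The cell-based variant can be sketched as fitting one small eigenbasis per spatial block instead of one large basis for the whole image. This is an illustrative sketch with assumed names, not the paper's implementation.

```python
import numpy as np

def cell_based_pca(images, cell, k):
    """Sketch of cell-based PCA: split each image into equally
    sized cells and fit a separate k-component model (mean plus
    eigenbasis) per cell."""
    h, w = images[0].shape
    models = {}
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            X = np.stack([im[y:y+cell, x:x+cell].ravel() for im in images])
            mean = X.mean(axis=0)
            _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
            models[(y, x)] = (mean, Vt[:k])
    return models
```

Each per-cell SVD operates on a much smaller matrix than the holistic scheme, which is where the reduced computational cost comes from.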
This state-of-the-art report provides a comprehensive review of the research on client-server architectures for volume visualization. The design of such schemes capable of dealing with static and dynamic volume datasets has been an important challenge for researchers due to the need to reduce the information transmitted. Thus, compression techniques designed to facilitate such systems are a particular focus of this survey. The ever increasing complexity and widespread use of volume data in interdisciplinary fields, as well as the opportunities afforded by continuing advances in the computational power of mobile devices, are strong motivations for this review. In particular, the client-server paradigm has particular significance for medical imaging due to the practical advantages, and increased likelihood of use, of portable low-spec clients in lab and clinical settings.
Lighting has been used to enhance the emotion and appeal of characters for centuries, from paintings in the Renaissance to the modern-day digital arts. In VFX and animation studios, lighting is considered as important as modelling, shading, or rigging. Most existing work focuses either on empirical best practice developed by artists over the centuries or on lighting perception with basic shapes. In contrast, our work focuses on the effect of lighting on emotional characters. Our study presents an extensive set of novel perceptual experiments designed to investigate the effects of brightness levels (key light brightness) and the proportion of light intensity illuminating the two sides of a character’s face (key-to-fill ratio). We are particularly interested in the effect of lighting on the recognition of emotion, emotion intensity, and the overall appeal, as these are crucial factors for audience engagement. Our results have implications for artists and developers wishing to increase the app...
Dreams in High Fidelity is a painting that evolves. It was designed and rendered with the Electric Sheep screen-saver, a cyborg mind composed of 30,000 computers and people mediated by a genetic algorithm. Physically it consists of a small computer ...
We present a novel algorithm to model density-dependent behaviours in crowd simulation. Previous work has shown that density is a key factor in governing how pedestrians adapt their behaviour. This paper specifically examines, through analysis of real pedestrian data, how density affects the way agents control their rate of change of bearing angle with respect to one another. We extend existing synthetic-vision-based approaches to local collision avoidance and generate pedestrian trajectories that more faithfully represent how real people avoid each other. Our approach is capable of producing realistic human behaviours, particularly in dense, complex scenarios where the amount of time for agents to make decisions is limited.
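The regulated quantity here, the rate of change of bearing angle, can be sketched with a finite-difference estimate. A (near-)constant bearing between two agents signals a future collision, which is why synthetic-vision models track it. The sketch below is illustrative; the paper's agents derive this from rendered vision, not analytic positions.

```python
import math

def bearing_angle_rate(p_self, v_self, p_other, v_other, dt=0.1):
    """Sketch: finite-difference estimate of the rate of change of
    the bearing angle from one agent to another, measured relative
    to the observing agent's heading."""
    def bearing(p, heading, q):
        ang = (math.atan2(q[1] - p[1], q[0] - p[0])
               - math.atan2(heading[1], heading[0]))
        return math.atan2(math.sin(ang), math.cos(ang))  # wrap to (-pi, pi]
    b0 = bearing(p_self, v_self, p_other)
    p1 = (p_self[0] + v_self[0] * dt, p_self[1] + v_self[1] * dt)
    q1 = (p_other[0] + v_other[0] * dt, p_other[1] + v_other[1] * dt)
    b1 = bearing(p1, v_self, q1)
    return (b1 - b0) / dt
```

Two agents on a head-on collision course produce a bearing rate of zero; an agent adapting its behaviour must steer so the rate becomes clearly non-zero.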
We present a framework for interactive real-time visualization of three-dimensional volume data on commodity augmented reality (AR) displays. In particular, we address the problem of seamlessly blending internal anatomy data sets with real-world objects. One key challenge, particularly relevant to this scenario, is conveying the correct sense of the relative depths of virtual and real world objects. To address this issue, we exploit information captured by a depth sensor to build a mask which is used as a weighting parameter to correctly combine rendered volume images with real imagery in a depth-preserving manner. Results obtained on prototype AR hardware devices indicate improvements to relative depth perception. Furthermore, we address performance challenges and provide solutions to ensure that the framework is applicable to a range of different AR devices, many of which have limited computational and graphical rendering capabilities.
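The core of depth-preserving compositing can be sketched as a per-pixel depth test between the sensor's depth map and the rendered volume's depth. This is a hard-mask simplification; a practical pipeline, like the one described, weights the combination and handles sensor noise.

```python
import numpy as np

def composite_with_depth(virtual_rgb, virtual_depth, real_rgb, real_depth):
    """Sketch of depth-preserving AR compositing: a binary mask from
    per-pixel depth comparison lets real surfaces that are nearer
    than the rendered volume occlude it."""
    mask = (virtual_depth < real_depth)[..., None]  # virtual in front?
    return np.where(mask, virtual_rgb, real_rgb)
```

Softening this mask (rather than thresholding) is what allows internal anatomy to blend smoothly with, instead of popping in front of, real objects.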
