I try to understand how nature processes information. I study this 'natural information processing' in complex system...
The analysis of questionnaires often involves representing the high-dimensional responses in a low-dimensional space (e.g., PCA, MCA, or t-SNE). However, questionnaire data often contain categorical variables, and common statistical model assumptions rarely hold. Here we present a non-parametric approach based on Fisher information which obtains a low-dimensional embedding of a statistical manifold (SM). The SM has deep connections with parametric statistical models and the theory of phase transitions in statistical physics. First, we simulate questionnaire responses based on a non-linear SM and validate our method against other methods. Second, we apply our method to two empirical datasets containing largely categorical variables: an anthropological survey of rice farmers in Bali and a cohort study on health inequality in Amsterdam. Comparing with previous analyses and established anthropological knowledge, we conclude that our method best discriminates between different behaviours, pavi...
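Since the abstract leaves the construction implicit, here is a minimal sketch of the general idea under our own assumptions: for categorical answers, the Fisher–Rao distance between two multinomial distributions has the closed form d(p, q) = 2 arccos Σᵢ √(pᵢ qᵢ), and a matrix of such pairwise distances can be embedded in low dimension with metric MDS. The function names, smoothing constant, and toy data below are illustrative, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.manifold import MDS

def fisher_rao_distance(p, q):
    """Geodesic (Fisher-Rao) distance between two categorical pmfs."""
    bc = np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0)  # Bhattacharyya coefficient
    return 2.0 * np.arccos(bc)

def embed_responses(counts, n_components=2, alpha=0.5):
    """counts: (n_respondents, n_categories) answer-count table.
    alpha: additive smoothing so no category has zero probability."""
    smoothed = counts + alpha
    probs = smoothed / smoothed.sum(axis=1, keepdims=True)
    n = len(probs)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = fisher_rao_distance(probs[i], probs[j])
    mds = MDS(n_components=n_components, dissimilarity="precomputed",
              random_state=0)
    return mds.fit_transform(d)

# Toy usage: three respondents, four answer categories.
counts = np.array([[5, 1, 0, 0], [0, 1, 5, 0], [2, 2, 2, 2]], dtype=float)
print(embed_responses(counts))
```

The additive smoothing is only there to avoid degenerate zero-overlap response profiles; any non-parametric density estimate could take its place.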
Research environments for modern, cross-disciplinary scientific endeavors have to unite multiple users, with varying levels of expertise and roles, along with multitudes of data sources and processing units. The high level of required integration contrasts with the loosely-coupled nature of environments which are appropriate for research. The problem is to support the integration of dynamic service-based infrastructures with data sources, tools, and users in a way that preserves ubiquity, extensibility, and usability. This chapter presents a close examination of related achievements in the field and a description of the proposed approach. It shows that integrating loosely-coupled system components with formally defined vocabularies may fulfill the listed requirements. The authors demonstrate that combining formal representations of domain knowledge with techniques like data integration, semantic annotations, and shared vocabularies enables the development of systems for modern e-Science. ...
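As a loose illustration of the integration pattern the chapter argues for (not the authors' system), the sketch below registers loosely-coupled services against a small, formally defined vocabulary with subsumption links, so a client can discover services by concept rather than by hard-coded name. All vocabulary terms, class names, and endpoints are invented.

```python
class Vocabulary:
    def __init__(self):
        self.parents = {}                   # concept -> broader concept

    def add_term(self, term, parent=None):
        self.parents[term] = parent

    def subsumes(self, broad, term):
        """True if `term` equals `broad` or lies below it in the hierarchy."""
        while term is not None:
            if term == broad:
                return True
            term = self.parents.get(term)
        return False

class ServiceRegistry:
    def __init__(self, vocab):
        self.vocab = vocab
        self.annotations = []               # (concept, service endpoint)

    def annotate(self, concept, endpoint):
        self.annotations.append((concept, endpoint))

    def discover(self, concept):
        """All services whose annotation falls under the queried concept."""
        return [ep for c, ep in self.annotations
                if self.vocab.subsumes(concept, c)]

vocab = Vocabulary()
vocab.add_term("Dataset")
vocab.add_term("GeneExpressionDataset", parent="Dataset")
reg = ServiceRegistry(vocab)
reg.annotate("GeneExpressionDataset", "http://example.org/microarray-db")
print(reg.discover("Dataset"))  # found via the vocabulary's subsumption link
```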
Implicit time stepping is often difficult to parallelize. The recently proposed Minimal Residual Approximate Implicit (MRAI) schemes (2) are specially designed as a cheaper and parallelizable alternative to implicit time stepping. Several GMRES iterations are performed to solve the implicit scheme of interest approximately, and the step size is adjusted to guarantee stability. A natural way to apply the approach is to modify a given implicit scheme in which one is interested. Here, we present numerical results for two parallel implementations of MRAI schemes. One is based on the simple backward Euler scheme, and the other is the MRAI-modified multistep ODE solver LSODE. On the Cray T3E and IBM SP2 platforms, the MRAI codes exhibit the parallelism of explicit schemes. The model problem under consideration is the 3D spatially discretized heat equation. Speed-up results for the Cray T3E and IBM SP2 are reported.
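A hedged sketch of the MRAI idea as the abstract describes it: take a backward Euler step of du/dt = Au by running only a few GMRES iterations on (I − Δt·A)uₙ₊₁ = uₙ, and shrink the step size whenever the leftover residual signals trouble. The residual threshold, the halving rule, and the 1D test problem are our own choices (the paper uses the 3D heat equation).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def mrai_step(A, u, dt, k=5, resid_tol=1e-3):
    """One approximate backward Euler step using at most k GMRES iterations.
    Returns (u_new, dt_used); dt is halved until the residual is acceptable."""
    n = A.shape[0]
    I = sp.identity(n, format="csr")
    while True:
        M = (I - dt * A).tocsr()
        u_new, _ = spla.gmres(M, u, x0=u, restart=k, maxiter=1)
        resid = np.linalg.norm(M @ u_new - u) / np.linalg.norm(u)
        if resid <= resid_tol or dt < 1e-12:
            return u_new, dt
        dt *= 0.5  # approximate solve too inaccurate: reduce the step size

# 1D heat equation test problem (the paper treats the 3D analogue).
n = 100
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (n + 1) ** 2
u = np.ones(n)
u, dt_used = mrai_step(lap.tocsr(), u, dt=1e-3)
print(dt_used, np.linalg.norm(u))
```

Because the solve is deliberately truncated, stability comes from the step-size control rather than from the exactness of the implicit solve; that is what makes the scheme as parallelizable as an explicit one.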
The authors study the adaptation of an optimistic Time Warp kernel to cross-cluster computing on the Grid. Wide-area communication, the primary source of overhead, is offloaded onto dedicated routing processes. This allows the simulation processes to run at full speed and thus significantly decreases the performance gap caused by the wide-area distribution. Further improvements are obtained by employing message aggregation on the wide-area links and using a distributed global virtual time algorithm. The authors achieve many of their objectives for a cellular automaton simulation with lazy cancellation and moderate communication. High communication rates, especially with aggressive cancellation, present a challenge. This is confirmed by the experiments with synthetic loads. Even then, a satisfactory speedup can be achieved, provided that the computational grain of events is large enough.
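The following toy class (ours, not the authors' Time Warp kernel) shows the message-aggregation mechanism in its simplest form: wide-area sends are buffered by a routing process and flushed as one batch when either a size or a delay bound is reached, trading a little latency for far fewer expensive WAN messages. All limits are illustrative.

```python
import time

class AggregatingRouter:
    def __init__(self, send_batch, max_msgs=64, max_delay=0.005):
        self.send_batch = send_batch   # callback performing the actual WAN send
        self.max_msgs = max_msgs
        self.max_delay = max_delay
        self.buffer = []
        self.first_ts = None

    def send(self, msg):
        if not self.buffer:
            self.first_ts = time.monotonic()
        self.buffer.append(msg)
        if len(self.buffer) >= self.max_msgs:
            self.flush()

    def poll(self):
        """Call periodically: flush if the oldest message has waited too long."""
        if self.buffer and time.monotonic() - self.first_ts >= self.max_delay:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send_batch(self.buffer)
            self.buffer = []

# Usage: ten buffered events result in a single wide-area "send".
router = AggregatingRouter(send_batch=lambda b: print(f"WAN send: {len(b)} msgs"),
                           max_msgs=10)
for ev in range(10):
    router.send(("event", ev))
```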
After nearly forty years of laboratory development, and about ten years since the first systems became commercially available, parallel computing stands on the threshold of becoming the leading architecture of choice for high-end commercial, scientific, and technical computing. There are several reasons why parallel computers will soon be very successful: technology advances, efficient middleware, the entry of major industry players such as IBM, support from independent software vendors, a new focus on solutions, compatibility and standards, the acceptance of the client/server model, stimuli from governments, and the need for high performance and fast solution times. High-Performance Computing and Networking (HPCN) is driven by several initiatives in Europe, the United States, and Japan. In Europe, several groups, e.g. the Rubbia Advisory Committee, the European Industry Initiative Ei3, and the Teraflops Initiative, encouraged the Commission of the European Communities (CEC) to start a European HPCN programme, recognizing the economic, scientific, and social importance of HPCN technology for Europe. Members of these groups started the first HPCN conference in 1993 in Amsterdam. The next event, HPCN Europe '94 in Munich, already combined the HPCN conference with a large exhibition of HPCN hardware and software.

In this special issue, a selection of the conference papers highlights some important aspects of software tools for parallel computing as well as of real applications on parallel systems. In the first section, some new developments in parallel software tools are presented. The first tool, TAPER: A graphical programming environment for parallel systems, described by Schäfers, Scheidler, and Krämer-Fuhrmann, supports the development of industrial applications for parallel systems. It contains high-level tools for the design, configuration, mapping, visualization, and optimization of parallel applications. In Parallel application design: The simulation approach with HASTE, Pouzet, Paris, and Jorrand present estimates of the execution and cycle time of an application from the early stages of its conception. HASTE is a tool built to simulate the behavior of an application in its design form, in terms of real-time predictions for a particular target machine.

Debugging parallel programs is one of the most tedious jobs in programming scalable multiprocessor architectures. Due to the distributed resources of these machines, programming is often architecture-dependent, and most development tools still reflect this dependency even during the analysis phase of parallel programs. In their paper On-line distributed debugging on scalable multiprocessor architectures, Bemmerl and Wismüller discuss the distributed debugger DETOP, which offers a global name space and hides architectural features like the mapping of processes. In most parallel approaches today, the message-passing programming model dominates application development, despite the overhead and complexity introduced by explicitly coded synchronisation and data transfers. Pfenning, Bachem, and Minnich, in Virtual Shared Memory programming on workstation clusters, give an introduction to the virtual shared
In order to ensure successful adoption of Virtual Reality applications, usability and the context in which a system will be used have to be considered in system design and development. Both quantitative and qualitative methods are available for system development (see, e.g., Preece et al., 2002). This paper focuses on a combination of qualitative methods that have been applied in the development of a prototype of a VR system for medical diagnosis and planning for vascular disorders, the Virtual Radiology ...
2nd IEEE International Conference on e-Science and Grid Computing (2006).
The three-volume set LNCS 5101-5103 constitutes the refereed proceedings of the 8th International Conference on Computational Science, ICCS 2008, held in Krakow, Poland, in June 2008. The 167 revised papers of the main conference track, presented together with the abstracts of 7 keynote talks and the 100 revised papers from 14 workshops, were carefully reviewed and selected for inclusion in the three volumes. The main conference track was divided into approximately 20 parallel sessions addressing topics such as e-science ...
A simulation program that provides insight into the vibrational properties of resonant mass gravitational radiation antennas is developed from scratch. The requirements set for the program necessitate the use of an explicit finite element kernel. Since the computational complexity of this kernel requires significant computing power, it is tailored for execution on parallel computer systems. After validating the physical correctness of the program as well as its performance on distributed memory architectures, we present a number of "sample" simulation experiments to illustrate the simulation capabilities of the program. The development path of the code, consisting of problem definition, mathematical modeling, choosing an appropriate solution method, parallelization, physical validation, and performance validation, is argued to be typical for the design process of large-scale complex simulation codes. © 1997 American Institute of Physics.
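To make the "explicit finite element kernel" concrete, here is a minimal sketch of the standard central-difference update such a kernel performs, on a 1D fixed-free bar with a lumped mass matrix; the paper's code solves the 3D antenna problem, and every size below is illustrative.

```python
import numpy as np

n = 50                                 # nodes
h = 1.0 / (n - 1)                      # element length (unit bar, E*A = 1)
k = 1.0 / h                            # element stiffness

# Lumped (diagonal) mass and assembled tridiagonal stiffness, rho*A = 1.
m = np.full(n, h)
m[0] = m[-1] = h / 2
K = k * (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
K[-1, -1] = k                          # free end touches a single element

dt = 0.5 * h                           # safely below the CFL limit (c = 1)
u = np.zeros(n)
u[-1] = 1e-3                           # pluck the free end
u_prev = u.copy()                      # zero initial velocity

for step in range(1000):               # explicit central-difference update
    a = -(K @ u) / m
    u_next = 2.0 * u - u_prev + dt**2 * a
    u_next[0] = 0.0                    # clamped end stays fixed
    u_prev, u = u, u_next

print(u[-1])                           # free-end displacement after 1000 steps
```

The appeal for parallelization is visible even in this toy: each step is a sparse matrix-vector product plus purely local vector updates, with no global solve.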
This paper presents a study of the interactions between the random number generator used and the run-time behaviour of the parallel Time Warp simulation kernel APSIS. A different rollback-length distribution, with a far larger chance of long rollbacks, is observed when the state of the random number generator is not preserved across rollbacks. An explanation for this phenomenon is provided. An analytical model of the rollback behaviour in Time Warp is developed, for rollback length expressed in either ...
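The phenomenon is easy to reproduce in miniature. In this sketch (ours, not APSIS), a logical process checkpoints its RNG state together with its simulation state; restoring both on rollback replays the identical random sequence, while skipping the RNG restore makes re-execution diverge.

```python
import random

class LogicalProcess:
    def __init__(self, seed=42):
        self.rng = random.Random(seed)
        self.lvt = 0                       # local virtual time
        self.snapshots = {}                # lvt -> (state, rng_state)
        self.state = 0

    def checkpoint(self):
        self.snapshots[self.lvt] = (self.state, self.rng.getstate())

    def execute_event(self, dt):
        self.checkpoint()
        self.lvt += dt
        self.state += self.rng.randint(0, 9)   # the event consumes randomness

    def rollback(self, to_lvt, preserve_rng=True):
        state, rng_state = self.snapshots[to_lvt]
        self.state, self.lvt = state, to_lvt
        if preserve_rng:                   # replay the exact same random draws
            self.rng.setstate(rng_state)

lp = LogicalProcess()
for _ in range(3):
    lp.execute_event(dt=1)
before = lp.state
lp.rollback(to_lvt=0, preserve_rng=True)
for _ in range(3):
    lp.execute_event(dt=1)
print(before == lp.state)   # True: re-execution is identical
```

With preserve_rng=False the re-executed events draw fresh numbers, so the post-rollback trajectory, and hence the pattern of future rollbacks, changes.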
We present a comparison between the finite-element and the lattice-Boltzmann methods for simulating fluid flow in an SMRX static mixer reactor. The SMRX static mixer is a piece of equipment with excellent mixing performance, used in highly efficient chemical reactors for viscous systems such as polymers. The complex geometry of this mixer makes such 3D simulations nontrivial. Excellent agreement was found between the results of the two simulation methods and experimental data.
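Purely to illustrate the second of the two compared methods, here is a compact D2Q9 lattice-Boltzmann (BGK) update on a toy periodic domain; it does not attempt the 3D SMRX geometry, and the relaxation time and forcing are illustrative.

```python
import numpy as np

cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w  = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                                 # relaxation time (sets viscosity)

def equilibrium(rho, ux, uy):
    cu = 3 * (cx[:, None, None]*ux + cy[:, None, None]*uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1 + cu + 0.5*cu**2 - usq)

nx, ny = 64, 32
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))

for step in range(100):
    rho = f.sum(axis=0)
    ux = (cx[:, None, None] * f).sum(axis=0) / rho + 1e-5  # crude body force
    uy = (cy[:, None, None] * f).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau              # BGK collision
    for i in range(9):                                     # periodic streaming
        f[i] = np.roll(np.roll(f[i], cx[i], axis=0), cy[i], axis=1)

print(rho.mean())   # mean density stays at 1.0: mass is conserved
```

The real difficulty the paper addresses lies elsewhere: representing the SMRX geometry, which for the lattice-Boltzmann method amounts to marking solid nodes and applying bounce-back rules instead of the periodic streaming above.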


The Fisher–Rao metric from information geometry is related to phase transition phenomena in classical statistical mechanics. Several studies propose extending the use of information geometry to more general phase transitions in complex systems. However, it is unclear whether the Fisher–Rao metric does indeed detect these more general transitions, especially in the absence of a statistical model. In this paper we study the transitions between patterns in the Gray–Scott reaction–diffusion model using Fisher information. We describe the system by a probability density function that represents the size distribution of blobs in the patterns and compute its Fisher information with respect to the two rate parameters of the underlying model. We estimate the distribution non-parametrically, so that we do not assume any statistical model. The resulting Fisher map can be interpreted as a phase map of the different patterns: lines with high Fisher information mark boundaries between regions of parameter space where patterns with similar characteristics appear, and can be interpreted as phase transitions between complex patterns.
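A hedged numerical sketch of the quantity involved, under our own estimator choices: given density estimates p(x; F, k) of blob sizes on a grid of the two rate parameters, the Fisher metric g_ij = 4 ∫ ∂_i√p ∂_j√p dx can be approximated by central finite differences; large values (e.g. of det g) then flag boundaries between pattern regimes. The densities below are synthetic Gaussians with a jump in their mean, not Gray–Scott output.

```python
import numpy as np

def fisher_metric(sqrt_p, dF, dk, dx):
    """sqrt_p: array (nF, nk, nx) of sqrt-densities on the parameter grid.
    Returns g with shape (nF-2, nk-2, 2, 2) at interior grid points."""
    dF_sp = (sqrt_p[2:, 1:-1] - sqrt_p[:-2, 1:-1]) / (2 * dF)
    dk_sp = (sqrt_p[1:-1, 2:] - sqrt_p[1:-1, :-2]) / (2 * dk)
    g = np.empty(dF_sp.shape[:2] + (2, 2))
    g[..., 0, 0] = 4 * np.sum(dF_sp * dF_sp, axis=-1) * dx
    g[..., 0, 1] = g[..., 1, 0] = 4 * np.sum(dF_sp * dk_sp, axis=-1) * dx
    g[..., 1, 1] = 4 * np.sum(dk_sp * dk_sp, axis=-1) * dx
    return g

# Synthetic stand-in: blob-size densities whose mean jumps at F = 0.5,
# mimicking a sharp change between pattern types.
F = np.linspace(0, 1, 41)
k = np.linspace(0, 1, 41)
x = np.linspace(-5, 15, 400)
mu = np.where(F[:, None, None] < 0.5, 2.0, 8.0) + 0.0 * k[None, :, None]
p = np.exp(-0.5 * (x[None, None, :] - mu) ** 2) / np.sqrt(2 * np.pi)

g = fisher_metric(np.sqrt(p), F[1] - F[0], k[1] - k[0], x[1] - x[0])
det = g[..., 0, 0] * g[..., 1, 1] - g[..., 0, 1] ** 2
print(np.unravel_index(det.argmax(), det.shape))  # peaks along the F = 0.5 line
```

In the paper's setting the densities would come from a non-parametric estimate of blob sizes measured in simulated patterns, and the ridge of high det g traced out in the (F, k) plane plays the role of the phase boundary.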