

PySPH: A Python Framework for Smoothed Particle Hydrodynamics

Prabhu Ramachandran‡∗, Chandrashekhar Kaushik‡

Abstract—[PySPH] is a Python-based open source parallel framework for Smoothed Particle Hydrodynamics (SPH) simulations. It is distributed under a BSD license. The performance-critical parts are implemented in [Cython]. The framework provides a load balanced, parallel execution of solvers. It is designed to be easy to extend. In this paper we describe the architecture of PySPH and how it can be used.

At its core PySPH provides a particle kernel, an SPH kernel and a solver framework. Serial and parallel versions of solvers for some standard problems are also provided. The parallel solver uses [mpi4py]. We employ a simple but elegant automatic load balancing strategy for the parallelization. Currently, we are able to perform free surface simulations and some gas dynamics simulations. PySPH is still a work in progress and we will discuss our future plans for the project.

Index Terms—parallel, Cython, fluid dynamics, simulation

* Corresponding author: prabhu@aero.iitb.ac.in
‡ IIT Bombay

Copyright © 2010 Prabhu Ramachandran et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction

SPH Primer

Smoothed Particle Hydrodynamics (SPH) is a computational simulation technique. It was developed to simulate astral phenomena by [Gingold77] and [Lucy77] in 1977. Since then, it has been used in numerous other fields including fluid dynamics, gas dynamics and solid mechanics.

The central idea behind SPH is the use of integral interpolants. Consider a function f(r). It can be represented by the equation

    f(r) = \int f(r') \, \delta(r - r') \, dr'    (1)

Replacing the delta distribution with an approximate delta function, W, gives us

    f(r) = \int f(r') \, W(r - r', h) \, dr'    (2)

The above equation estimates the value of the function f at a point r in space using the weighted values of f at points near it. The weight decreases as the distance between r and r' increases. h in the above equation represents the particle interaction radius. The support of the kernel W is some small multiple of h. Outside the support, the value of W is set to zero. Compact support is computationally advantageous since it allows us to avoid an N^2 interaction among particles.

The above equation can be written in summation form as

    f(r_i) = \sum_j \frac{m_j}{\rho_j} f(r_j) \, W(r_i - r_j, h)    (3)

This equation forms the core of all SPH calculations. The index j loops over all neighboring particles. m_j is the mass of a particle and \rho_j is the density of the particle. The term m_j/\rho_j can be thought of as representing a volume element [Morris96]. Gradients and divergences encountered in the equations representing fluid motion are represented using similar summations. SPH finds widespread use in many domains. [Monaghan05] and [Morris97] give extensive details about the SPH method.

Related Work

Despite the age of SPH and its applicability to many domains, there does not seem to be much effort in developing a unified framework for SPH. [SPHysics] is a FORTRAN-based open source package for performing SPH. Its primary objective is to model free-surface flows. From the provided documentation we feel that it is not easy to set up simulations in this package. [SPH2000] is another parallel framework for SPH written in C++. This code, however, does not seem to be in active development currently. Moreover, they show exactly one example simulation with their code. Neither package has a publicly accessible source code repository. Therefore, an open source package that is easy to experiment with and extend will be a useful contribution to the community, especially when combined with the flexibility of Python [Oliphant07].

PySPH [PySPH] was created to address this need. It is an open source, parallel framework for Smoothed Particle Hydrodynamics (SPH) implemented in Python.

Choice of implementation language

We use a combination of Python and [Cython] to implement the framework. Python is a high-level, object-oriented, interpreted programming language which is easy to learn. Python code is also very readable. There are numerous packages (both scientific and otherwise) that can be used to enhance the productivity of applications. A Python-based SPH implementation can take advantage of these packages, which could enhance it in various aspects, from providing plotting facilities (2D and 3D), to generating GUIs, to running SPH simulations from the web, to parallelization. All these features can also be accessed through an interactive interpreter. [Oliphant07] discusses how Python can be used for scientific computing.
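
To make the kind of computation involved concrete, the following is a minimal, naive pure-Python sketch of the summation in equation (3) in one dimension. It is purely illustrative and is not PySPH code: the function names are ours and a Gaussian kernel is used for brevity, whereas practical SPH codes favor compactly supported kernels such as the cubic spline.

import numpy as np

def gaussian_kernel(dx, h):
    # 1D Gaussian smoothing kernel W(x, h). Real SPH kernels such as the
    # cubic spline have compact support; the Gaussian keeps this sketch short.
    q = dx / h
    return np.exp(-q * q) / (h * np.sqrt(np.pi))

def sph_interpolate(x, f, m, rho, h):
    # Naive O(N^2) evaluation of equation (3):
    #   f(x_i) = sum_j (m_j / rho_j) * f(x_j) * W(x_i - x_j, h)
    result = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(len(x)):
            result[i] += m[j] / rho[j] * f[j] * gaussian_kernel(x[i] - x[j], h)
    return result

# Example: equally spaced particles of unit density carrying f(x) = x**2.
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
rho = np.ones_like(x)
m = rho * dx                      # m_j / rho_j is then the volume element dx
approx = sph_interpolate(x, x ** 2, m, rho, h=2 * dx)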
Python, however, is an interpreted language. Thus, compute-
intensive tasks implemented in pure Python will be prohibitively
slow. To overcome this, we delegate all performance-critical tasks
to a language called Cython [Cython]. Cython makes writing C
extensions for Python nearly as simple as writing Python code
itself. A Cython module is compiled by a compiler into a C exten-
sion module. When the C code is compiled, it becomes a module
that may be imported from Python. Most of Python’s features are
available in Cython. Thus, by delegating all performance-critical
components to Cython, we are able to overcome the performance
hit due to the interpreted nature of Python and still use all of
Python’s features.
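
PySPH moves such loops into Cython. Purely as a rough illustration of what is gained by taking the inner loop out of the interpreter, the naive summation sketched earlier can also be written so that the pairwise work happens inside numpy's compiled routines; this is not how PySPH is implemented, but it conveys the flavour of the speed-up available once the per-pair Python overhead is removed.

import numpy as np

def sph_interpolate_vectorized(x, f, m, rho, h):
    # Same summation as the earlier sketch, with the double loop replaced
    # by array operations over all pairwise separations x_i - x_j.
    dx = x[:, None] - x[None, :]
    w = np.exp(-(dx / h) ** 2) / (h * np.sqrt(np.pi))   # Gaussian W(x_i - x_j, h)
    return w @ (m / rho * f)                            # row i sums (m_j/rho_j) f_j W_ij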

An overview of features
PySPH currently allows a user to set up simulations involving
incompressible fluids and free surfaces in two and three dimen-
sions. The framework supports complex geometries. However,
only a few simple shapes have been currently implemented. The
framework has been designed from the ground up to be parallel.
We use mpi4py [mpi4py] for the parallel solver. The parallel solver
is automatically load balanced.
In the following, we outline the framework, discuss the current
status and future improvements that are planned.

The Framework

Fig. 1: Outline of tasks to set up a simulation.

The whole framework was designed to enable simple simulations to be set up very easily, and yet be flexible enough to add complex features. We present a high level view of a particle-based simulation in the following.

Guiding Principle - High level view of a simulation

A simulation always involves a few key objects:

• Solver: The solver is an object that manages the entire simulation. It typically delegates its activities to other objects like integrators, component managers and arrays of particles.
• Entities: The simulation involves distinct collections of particles, each representing a particular physical entity. Each entity is a derived class of the base class EntityBase. For example, Fluid and Solid are two different classes and a user may create a collection of fluids and solids using this. This allows a user to set up a simulation with a collection of physical entities.

The high level view outlined in Figure 1 served as the guiding principle while designing various components of the framework. The various tasks shown in Figure 1 are explained below:

• Create and set up the solver: Initially, we create an appropriate solver object for the simulation. Different solvers are used for different kinds of simulations. We also set up various parameters of the solver.
• Create physical entities: In this step, we add the physical entities (made up of particles) that will take part in the simulation. Multiple sets of particles could be added, one for each physical entity involved.
• Additional operations to the solver: We may require the solver to perform additional operations (apart from the main simulation), like writing data to file, plotting the data etc. This is configured during this step.
• Start the solver: The solver iterations are started.

The outline given above is very generic. This set of steps is useful in setting up almost any particle-based simulation. Parallel simulations too should adhere to the basic outline given above. Given below is pseudo-Python code to run a simple serial simulation:

# Imports...
solver = FSFSolver(time_step=0.0001,
                   total_simulation_time=10.,
                   kernel=CubicSpline2D())

# create the two entities.
dam_wall = Solid(name='dam_wall')
dam_fluid = Fluid(name='dam_fluid')

# The particles for the wall.
rg = RectangleGenerator(...)
dam_wall.add_particles(rg.get_particles())
solver.add_entity(dam_wall)
# Particles for the left column of fluid.
rg = RectangleGenerator(...)
dam_fluid.add_particles(rg.get_particles())
solver.add_entity(dam_fluid)
# start the solver.
solver.solve()

Fig. 2: Architecture of the framework.

Architecture Overview

The architecture may be broadly split into the following:

• the particle kernel,
• the SPH kernel,
• the solver framework,
• serial and parallel solvers.

The overall architecture of the framework is shown in Figure 2. We discuss this in detail in the following sections.

Particle kernel

A fast implementation of arrays in Cython forms the foundation of the framework. Arrays are ubiquitous in the implementation, hence the implementation is made as fast as possible (close to C performance) using Cython. The base class for these arrays is called BaseArray and subclasses of these in the form of IntArray, FloatArray etc. are made available. These expose a get_npy_array method which returns a numpy array which internally uses the same C data buffer. Our arrays may be resized and are up to 4 times faster than numpy arrays when used from Cython.

The ParticleArray module uses these arrays extensively and allows us to represent collections of particles in the framework. It is also implemented in Cython to achieve maximum performance. Each ParticleArray maintains a collection of particle properties and uses the arrays to store this data. Since the arrays allow the developer to manipulate them as numpy arrays, it becomes easy to perform calculations on the particle properties, if required.

One of the central requirements of SPH is to find the nearest neighbors of a given particle. This is necessary in order to calculate the influence of each particle on the others. We do this using a nearest neighbor algorithm (Nearest Neighbor Particle Search - NNPS) which bins the domain into a collection of fixed size cells. Particles are organized into a dictionary keyed on a tuple indicative of the location of the particle. The nearest neighbor search is collectively performed by the CellManager class and the nnps modules. Both are implemented in Cython.

SPH kernel

The SPH kernel consists of the sph module, which contains classes to perform the SPH summation (as given in the equations in the introductory section) and also to represent particle interactions. This includes a variety of kernels. These are implemented so as to use the nnps and other modules discussed earlier. These are all implemented in Cython for performance.

Solver framework

Finally, bringing all the underlying modules together is the Solver framework. The framework is component based, and allows users to write components, which are subclasses of SolverComponent, with a standard interface set. The SolverComponent is the base class for all classes that perform any operation on any of the entities. Many abstractions required for a solver have been implemented, and a user can inherit from various classes to implement new formulations. The ComponentManager manages all the SolverComponents used by the solver. It is also responsible for the property requirements of each of the components involved in a calculation. Thus, if an entity is operated on by a component that requires a particular property to be available, the manager ensures that the entity is suitably set up. An Integrator class handles the actual time integration. The Integrator is also a SolverComponent. These are implemented in a combination of Python and Cython.

Solvers

New solvers are written using the various abstractions developed in the solver framework and all of them derive from the SolverBase class. Serial and parallel solvers are written using the functionality made available in the solver framework.

Parallelization

In SPH simulations, particles simply influence other particles in a small neighborhood around them. Thus, in order to perform a parallel simulation one needs to:

• partition the particles among different processors, and
• share neighboring particle information between some of the processors.

For an SPH simulation, this does require a reasonable amount of communication overhead since the particles are moving and the neighbor information keeps changing. In addition to this, we would like the load on the processors to be reasonably balanced. This is quite challenging.

Our objective was to maintain an outline similar to the serial code for setting up simulations that run in parallel. For parallelization of the framework, ideally only the CellManager needs to be aware of the parallelism. The components in the solver framework simply operate on the particle data that they are presented with. This is achievable to a good extent, except when a component requires global data, in which case the serial component may need to be subclassed and a parallel version written, which collects the global data before executing the serial version code. A good example of this is a component that needs to know the maximum speed of sound in the entire domain in order to limit the time-step, say.

The pseudo-code of a typical parallel simulation is the same as the serial example given earlier with just one change to the solver as below:

solver = ParallelFSFSolver(
    time_step=0.0001,
    total_simulation_time=10.,
    kernel=CubicSpline2D())

# Code to load particles in proc with
# rank 0.

In the above pseudo-code, the only thing that changes is the fact that we instantiate a parallel solver rather than a serial one.
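
Both the neighbor search described in the Particle kernel section and the cell-based domain decomposition described below rest on binning particles into fixed size cells keyed by an integer tuple. The following is a small, purely illustrative pure-Python sketch of that idea in two dimensions; the names are ours, and PySPH's actual CellManager and nnps modules are Cython implementations that differ in detail.

from collections import defaultdict
import numpy as np

def bin_particles(x, y, cell_size):
    # Map each particle index to the cell containing it; the key is the
    # integer (i, j) cell coordinate, as in the dictionary described above.
    cells = defaultdict(list)
    for idx in range(len(x)):
        key = (int(np.floor(x[idx] / cell_size)),
               int(np.floor(y[idx] / cell_size)))
        cells[key].append(idx)
    return cells

def neighbor_candidates(idx, x, y, cells, cell_size):
    # Candidate neighbors of particle idx: everything in its own cell and
    # the eight surrounding cells (sufficient when cell_size is chosen as
    # a small multiple of the interaction radius h).
    ci = int(np.floor(x[idx] / cell_size))
    cj = int(np.floor(y[idx] / cell_size))
    candidates = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            candidates.extend(cells.get((ci + di, cj + dj), []))
    return candidates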

Fig. 3: The parallel solvers simply use a ParallelCellManager instead of a CellManager.

Fig. 4: Initial condition of a square block of water falling towards a vessel with water.

We also ensure that the particles are all loaded only on the first processor. The ParallelCellManager manages the parallel neighbor information. It also performs automatic load balancing by distributing the particles to different processors on demand based on the number of particles on each processor.

The full details of the parallelization are beyond the scope of this article but we provide a brief outline of the general approach. More details can be obtained from [Kaushik09]. The basic idea of the parallelization involves the following key steps:

• Particles are organized into small cubical Cells. Each cell manages a set of particles. Cells are created and destroyed on demand depending on where the particles are present.
• A region consists of a set of usually (but not always) connected cells. Each region is managed by one processor.
• The domain of particles is decomposed into cells and regions and allocated to different processors.
• Cells are moved between processors in order to balance the load.

In addition, the ParallelCellManager ensures that each processor has all the necessary information such that an SPH computation may be performed on the particles it manages.

Figure 3 outlines how the parallel and serial solvers are set up internally. In both cases, solver components operate on cell managers to obtain the nearest neighbors and get the particles, the only difference being the ParallelCellManager, which manages the load distribution and communication in the parallel case.

It is important to note that the basic ideas for the parallel algorithm were implemented and tested in pure Python using mpi4py. This was done in highly fragmented time and was possible only because of the convenience of both Python and mpi4py. Mpi4py allows us to send Python objects to processors and this allowed us to focus on the algorithm without worrying about the details of MPI. The use of Python enabled rapid prototyping and its libraries made it easy to visualize the results. In roughly 1500 lines we had implemented the core ideas, added support for visualization, logging and command line options. The initial design was subsequently refined and parts of it implemented in Cython. Thus, the use of Python clearly allowed us to prototype rapidly and yet obtain good performance with Cython.

Fig. 5: Square block of water after it strikes a vessel containing water, simulated with SPH.

Current status

Figures 4 and 5 show the fluid at a particular instant when a square block of water strikes a vessel filled with water. This is a two-dimensional simulation.

Figure 6 shows a typical 3D dam-break problem being simulated on 8 processors. The fluid involved is water. The colors indicate the processor on which the particles are located.

The current capabilities of PySPH include the following:

• Fully automatic, load balanced, parallel framework.
• Fairly easy to script.
• Good performance.
• Relatively easy to extend.
• Solver for incompressible free surface flows.

Fig. 6: 3D dam-break problem simulated on 8 processors with particles colored as per processor ID, indicating a load balanced simulation.

Most importantly, we have a working framework and a reasonable design which provides good performance. However, there are several things we need to improve.

Future work
Our code is available in the form of a Mercurial repository on
Google’s project hosting site [PySPH]. However, the code is not
ready for a proper release yet because we would like to perform a
redesign of some parts of the solver framework. At the moment,
they are a little too complex. Once this is done we would like to
do the following:
• Improve the documentation.
• Reduce any compulsory dependence on VTK or
TVTK.
• Improve testing on various platforms.
• A full-fledged release.
• Support for gas-dynamics problems.
• Support for solid mechanics problems.
This should take a few more months, at which point we will make a formal release.

Conclusions
We have provided a high-level description of the current capabili-
ties and architecture of PySPH. We have also mentioned what we
believe are the future directions we would like to take. We think
we have made an important beginning and believe that PySPH
will help enable open research and computing using particle-based
methods in the future. It is important to note that Python has
been centrally important in the development of PySPH by way
of its rapid prototyping capability and access to a plethora of
libraries.

References

[Cython] http://www.cython.org
[Gingold77] R. A. Gingold and J. J. Monaghan. Smoothed particle hydrodynamics: theory and application to non-spherical stars, Mon. Not. R. astr. Soc., 181:375-389, 1977.
[Kaushik09] Chandrashekhar P. Kaushik. A Python based parallel framework for Smoothed Particle Hydrodynamics, M.Tech. dissertation, Department of Computer Science and Engineering, IIT Bombay, 2009.
[Lucy77] L. B. Lucy. A numerical approach to testing the fission hypothesis, The Astronomical Journal, 82(12):1013-1024, December 1977.
[Monaghan05] J. J. Monaghan. Smoothed particle hydrodynamics, Reports on Progress in Physics, 68(8):1703-1759, 2005.
[Morris96] J. P. Morris. Analysis of smoothed particle hydrodynamics with applications, PhD Thesis, Monash University, Australia, 1996.
[Morris97] J. P. Morris, P. J. Fox and Yi Zhu. Modeling low Reynolds number incompressible flows using SPH, Journal of Computational Physics, 136(1):214-226, 1997.
[mpi4py] http://mpi4py.scipy.org
[Oliphant07] Travis E. Oliphant. Python for scientific computing, Computing in Science and Engineering, 9:10-20, 2007.
[PySPH] http://pysph.googlecode.com
[SPH2000] S. Ganzenmuller, S. Pinkenburg and W. Rosenstiel. SPH2000: A Parallel Object-Oriented Framework for Particle Simulations with SPH, Lecture Notes in Computer Science, 3648:1275-1284, 2005.
[SPHysics] Gómez-Gesteira, M., Rogers, B.D., Dalrymple, R.A., Crespo, A.J.C. and Narayanaswamy, M. User guide for the SPHysics code 1.4, http://wiki.manchester.ac.uk/sphysics.
