Recently, a paradigm shift in computer architecture has offered computational science the prospect of a vast increase in capability at relatively little cost. The tremendous computational power of graphics processing units (GPUs) provides a great opportunity for those willing to rethink algorithms and rewrite existing simulation codes. In this introduction, we give a brief survey of GPU computing and its potential capabilities, intended for a general scientific and engineering audience. We also review some of the challenges users face in adapting the large toolbox of scientific computing to these changes in computer architecture, and what the community can expect in the near future.
ABSTRACT Considerable effort has been focused on GPU acceleration of high-order spectral element methods and discontinuous Galerkin finite element methods. However, these methods are not universally applicable, and much of the existing FEM software base employs low-order methods. In this talk, we present a formulation of FEM, using the PETSc framework from ANL, that is amenable to GPU acceleration even at very low order. In addition, using the FEniCS system for FEM, we show that the relevant kernels can be automatically generated and optimized using a symbolic manipulation system.
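The symbolic kernel generation mentioned above can be illustrated with a minimal sketch. This is not the FEniCS code path; it only shows the idea, using SymPy to derive the local stiffness kernel for the lowest-order (P1) triangle from the reference-element basis functions. The basis and area values are the standard ones for the unit reference triangle.

```python
# Illustrative sketch (not the FEniCS API): symbolically derive the P1
# local stiffness kernel on the unit reference triangle.
import sympy as sp

x, y = sp.symbols("x y")
# P1 (linear) basis functions on the reference triangle (0,0)-(1,0)-(0,1)
basis = [1 - x - y, x, y]
grads = [sp.Matrix([sp.diff(phi, x), sp.diff(phi, y)]) for phi in basis]

area = sp.Rational(1, 2)  # area of the reference triangle
# Local stiffness matrix: K_ij = area * (grad phi_i . grad phi_j),
# exact here because the gradients are constant on the element
K = sp.Matrix(3, 3, lambda i, j: area * (grads[i].T * grads[j])[0, 0])
print(K)  # each row sums to zero, as expected for a Laplacian kernel
```

In a real pipeline, the symbolic matrix would then be emitted as flat C or CUDA code, which is where the optimization opportunities for low-order kernels arise.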
We present a heterogeneous computing strategy for a hybridizable discontinuous Galerkin (HDG) geometric multigrid (GMG) solver. Parallel GMG solvers require that a combination of coarse-grain and fine-grain parallelism be utilized to improve time-to-solution performance. In this work we focus on fine-grain parallelism. We use Intel's second-generation Xeon Phi (Knights Landing) to enable acceleration. The GMG method achieves ideal convergence rates of 0.2 or less for high polynomial orders. A matrix-free (assembly-free) technique is exploited to save considerable memory and increase arithmetic intensity. HDG enables static condensation, and due to the discontinuous nature of the discretization, we developed a matrix-vector multiply routine that does not require any costly synchronizations or barriers. Our algorithm is able to attain 80% of peak bandwidth performance for higher-order polynomials; this is possible due to the data locality inherent in the HDG method. Very high performance is realized for high-order schemes, due to good arithmetic intensity, which declines as the order is reduced.
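The synchronization-free mat-vec pattern the abstract describes can be sketched as follows. This is a NumPy illustration under the assumption that, with discontinuous (element-local) degrees of freedom, each element owns a disjoint block of the output vector, so the element loop carries no write conflicts; the function and array names are illustrative, not from the solver itself.

```python
# Illustrative sketch of a matrix-free, element-local mat-vec: with
# discontinuous DOFs, each element writes a disjoint slice of y, so the
# element loop needs no synchronization or barriers.
import numpy as np

def matfree_matvec(local_ops, x, dofs_per_elem):
    """y = A @ x applied element by element, never assembling the global A.

    local_ops: (n_elem, d, d) array of dense per-element operators.
    """
    n_elem = local_ops.shape[0]
    y = np.empty_like(x)
    for e in range(n_elem):  # embarrassingly parallel: disjoint output blocks
        s = slice(e * dofs_per_elem, (e + 1) * dofs_per_elem)
        y[s] = local_ops[e] @ x[s]
    return y

rng = np.random.default_rng(0)
ops = rng.standard_normal((4, 3, 3))  # 4 elements, 3 DOFs each (toy sizes)
x = rng.standard_normal(12)
y = matfree_matvec(ops, x, 3)
```

Because only the small per-element operators (or just the data needed to apply them) are streamed through memory, arithmetic intensity stays high, which is consistent with the bandwidth figures reported above.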
PyLith is a finite element code for the solution of dynamic and quasi-static tectonic deformation problems.
Added UserFunctionDB for user-specified analytical functions. Changed GravityField::gravAcceleration() to GravityField::gravityAcc() for consistency with gravityDir(). Added methods for computing the density scale and pressure scale from other scales. Only three of the length, time, pressure, and density scales are independent.
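The dependence among the four scales can be made concrete with a small sketch. Assuming the usual dimensional relation pressure = density x velocity^2 with velocity = length / time (the only dimensionally consistent combination of these four scales), the fourth scale follows from any three; the function names below are illustrative, not the spatialdata API.

```python
# Illustrative sketch: only three of {length, time, pressure, density}
# scales are independent; the fourth follows dimensionally.

def density_from_scales(pressure_scale, length_scale, time_scale):
    """Density scale (kg/m^3) implied by pressure (Pa), length (m), time (s)."""
    velocity_scale = length_scale / time_scale
    return pressure_scale / velocity_scale**2

def pressure_from_scales(density_scale, length_scale, time_scale):
    """Pressure scale (Pa) implied by density (kg/m^3), length (m), time (s)."""
    velocity_scale = length_scale / time_scale
    return density_scale * velocity_scale**2
```

For example, a density scale of 3000 kg/m^3 with a 1 km length scale and a 1 s time scale implies a pressure scale of 3e9 Pa, and recomputing the density from that pressure recovers 3000 kg/m^3, confirming the round trip.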
Version of Firedrake used in OT-mesh paper This release is specifically created to document the version of Firedrake used in a particular set of experiments. Please do not cite this as a general source for Firedrake or any of its dependencies. Instead, refer to http://www.firedrakeproject.org/publications.html
PyLith is an open-source finite-element code for dynamic and quasistatic simulations of crustal deformation, primarily earthquakes and volcanoes.

Main page: [https://geodynamics.org/cig/software/pylith](https://geodynamics.org/cig/software/pylith)
- User Manual
- Binary packages
- Utility to build PyLith and all of its dependencies from source

PyLith Wiki: [https://wiki.geodynamics.org/software:pylith:start](https://wiki.geodynamics.org/software:pylith:start)
- Archive of online tutorials
- Hints, tips, tricks, etc.
- PyLith development plan

Submit bug reports via https://github.com/geodynamics/pylith/issues
Send all questions to: cig-short@geodynamics.org

Features
- Quasi-static (implicit) and dynamic (explicit) time stepping
- Cell types include triangles, quadrilaterals, hexahedra, and tetrahedra
- Linear elastic, linear and generalized Maxwell viscoelastic, power-law viscoelastic, and Drucker-Prager elastoplastic materials
- Infinitesimal and small-strain elasticity formulations
- Fault interfaces using cohesive cells
- Prescribed slip with multiple, potentially overlapping earthquake ruptures and aseismic creep
- Spontaneous slip with slip-weakening friction and Dieterich rate- and state-friction fault constitutive models
- Time-dependent Dirichlet (displacement/velocity) boundary conditions
- Time-dependent Neumann (traction) boundary conditions
- Time-dependent point forces
- Absorbing boundary conditions
- Gravitational body forces
- VTK and HDF5/Xdmf output of solution, fault information, and state variables
- Templates for adding your own bulk rheologies and fault constitutive models, and for interfacing with a custom seismic velocity model
- User-friendly computation of static 3-D Green's functions

Installation
Detailed installation instructions for the binary packages are in the User Manual, with detailed building instructions for a few platforms in the INSTALL file bundled with the PyLith Installer utility. We also offer a Docker image (https://wiki.geodynamics.org/software:pylith:docker) [...]
Added new examples.
- examples/3d/subduction: New suite of examples for a 3-D subduction zone. This intermediate-level suite of examples illustrates a wide range of PyLith features for quasi-static simulations.
- examples/2d/subduction: Added quasi-static spontaneous rupture earthquake cycle examples (Steps 5 and 6) for slip-weakening and rate- and state-friction. These new examples make use of ParaView Python scripts to facilitate using ParaView with PyLith.

Improved the PyLith manual:
- Added a diagram to guide users on which installation method best meets their needs.
- Added instructions for using the Windows Subsystem for Linux to install the PyLith Linux binary on systems running Windows 10.

Fixed a bug in generating Xdmf files for 2-D vector output. Converted the Xdmf generator from C++ to Python for more robust generation of Xdmf files from Python scripts.

Updated spatialdata to v1.9.10. Improved error messages when reading SimpleDB and SimpleGridDB files.

Updated the PyLith parameter viewer to v1.1.0. The application and documentation are now available online at https://geodynamics.github.io/pylith_parameters. Small fix to ensure the hierarchy path listed matches the one for PyLith.

Updated PETSc to v3.7.6. See the PETSc documentation for a summary of all of the changes.

Switched to using CentOS 6.9 for Linux binary builds to ensure compatibility with glibc 2.12 and later.