Highly flexible and reusable finite element simulations with ViennaX. (English) Zbl 1321.65205

Summary: An approach for increasing the flexibility and reusability of finite element applications for scientific computing is investigated by utilizing the ViennaX framework. Implementations are decoupled into components, allowing for extensible application setups as well as convenient changes in the simulation flow. The feasibility of our approach is shown by decoupling finite element implementations provided by the ViennaFEM and the deal.II library, respectively. A ViennaFEM elasticity problem is decomposed into separate, fine-grained components, whereas an adaptive mesh refinement example provided by the deal.II library is used to demonstrate ViennaX’s support for execution loops. Finally, we underline the high level of flexibility and reusability by outlining how the depicted applications can be transformed into a finite volume-based solution of the Poisson equation.


65Y15 Packaged methods for numerical algorithms
65M60 Finite element, Rayleigh-Ritz and Galerkin methods for initial value and initial-boundary value problems involving PDEs
65N30 Finite element, Rayleigh-Ritz and Galerkin methods for boundary value problems involving PDEs
Full Text: DOI


[1] deal.II. URL http://www.dealii.org/ (accessed 05.09.13).
[2] D.E. Bernholdt, R.C. Armstrong, B.A. Allan, Managing complexity in modern high end scientific computing through component-based software engineering, in: Proceedings of the HPCA Workshop on Productivity and Performance in High-End Computing, P-PHEC, 2004.
[3] Szyperski, C., Component software: beyond object-oriented programming, (2002), Addison-Wesley Longman Publishing Co., Inc.
[4] Cactus. URL http://cactuscode.org/ (accessed 05.09.13).
[5] ViennaX. URL http://viennax.sourceforge.net/ (accessed 05.09.13).
[6] Weinbub, J.; Rupp, K.; Selberherr, S., ViennaX: a parallel plugin execution framework for scientific computing, Eng. Comput., (2013)
[7] Bangerth, W.; Hartmann, R.; Kanschat, G., deal.II—a general-purpose object-oriented finite element library, ACM Trans. Math. Softw., 33, 4, 24:1-24:27, (2007) · Zbl 1365.65248
[8] Bangerth, W.; Burstedde, C.; Heister, T., Algorithms and data structures for massively parallel generic adaptive finite element codes, ACM Trans. Math. Softw., 38, 2, 14:1-14:28, (2011) · Zbl 1365.65247
[9] DUNE. URL http://www.dune-project.org/ (accessed 05.09.13).
[10] Dedner, A.; Klöfkorn, R.; Nolte, M., A generic interface for parallel and adaptive discretization schemes, Computing, 90, 3-4, 165-196, (2010) · Zbl 1201.65178
[11] Kirk, B. S.; Peterson, J. W.; Stogner, R. H., libMesh: a C++ library for parallel adaptive mesh refinement/coarsening simulations, Eng. Comput., 22, 3-4, 237-254, (2006)
[12] libMesh. URL http://libmesh.sourceforge.net/ (accessed 05.09.13).
[13] FEniCS. URL http://fenicsproject.org/ (accessed 05.09.13).
[14] Logg, A.; Mardal, K. A.; Wells, G., (Automated Solution of Differential Equations by the Finite Element Method, Lecture Notes in Computational Science and Engineering, vol. 84, (2012), Springer)
[15] Logg, A.; Wells, G. N., DOLFIN: automated finite element computing, ACM Trans. Math. Softw., 37, 2, 20:1-20:28, (2010) · Zbl 1364.65254
[16] OpenFOAM. URL http://www.openfoam.com/ (accessed 05.09.13).
[17] H. Jasak, A. Jemcov, Ž. Tuković, OpenFOAM: a C++ library for complex physics simulations, in: Proceedings of the International Workshop on Coupled Methods in Numerical Dynamics, 2007, pp. 47-66.
[18] FreeFem++. URL http://www.freefem.org/ff++/ (accessed 03.12.13).
[19] Hecht, F., New development in FreeFem++, J. Numer. Math., 20, 3-4, 251-265, (2012) · Zbl 1266.68090
[20] ViennaFEM. URL http://viennafem.sourceforge.net/ (accessed 05.09.13).
[21] ViennaCL. URL http://viennacl.sourceforge.net/ (accessed 05.09.13).
[22] ViennaMath. URL http://viennamath.sourceforge.net/ (accessed 05.09.13).
[23] CCA. URL http://www.cca-forum.org/ (accessed 05.09.13).
[24] R. Armstrong, D. Gannon, A. Geist, et al. Toward a common component architecture for high-performance scientific computing, in: Proceedings of the 8th IEEE International Symposium on High Performance Distributed Computing, HPDC, 1999, pp. 115-124.
[25] Bernholdt, D. E.; Allan, B. A.; Armstrong, R., A component architecture for high-performance scientific computing, Int. J. High Perform. Comput. Appl., 20, 2, 163-202, (2006)
[26] Allan, B. A.; Armstrong, R. C.; Bernholdt, D. E., The CCA core specification in a distributed memory SPMD framework, Concurr. Comput.: Pract. Exper., 14, 5, 323-345, (2002) · Zbl 1008.68528
[27] Govindaraju, M.; Head, M. R.; Chiu, K., (XCAT-C++: Design and Performance of a Distributed CCA Framework, Lecture Notes in Computer Science, vol. 3769, (2005)), 270-279
[28] Lewis, M. J.; Ferrari, A. J.; Humphrey, M. A., Support for extensibility and site autonomy in the legion grid system object model, J. Parallel Distrib. Comput., 63, 5, 525-538, (2003) · Zbl 1055.68021
[29] K. Zhang, K. Damevski, V. Venkatachalapathy, et al. SCIRun2: a CCA framework for high performance computing, in: Proceedings of the 9th International Workshop on High-Level Parallel Programming Models and Supportive Environments, HIPS, 2004, pp. 72-79. http://dx.doi.org/10.1109/HIPS.2004.1299192.
[30] Goodale, T.; Allen, G.; Lanfermann, G., The Cactus framework and toolkit, (High Performance Computing for Computational Science—VECPAR 2002, Lecture Notes in Computer Science, vol. 2565, (2003)), 197-227 · Zbl 1027.65524
[31] COOLFluiD. URL http://coolfluidsrv.vki.ac.be/trac/coolfluid/ (accessed 05.09.13).
[32] T. Quintino, A component environment for high-performance scientific computing, Ph.D. Thesis, Katholieke Universiteit Leuven, 2008.
[33] ESMF. URL http://www.earthsystemmodeling.org/ (accessed 05.09.13).
[34] Hill, C.; DeLuca, C.; Balaji, V., The architecture of the Earth system modeling framework, Comput. Sci. Eng., 6, 1, 18-28, (2004)
[35] Uintah. URL http://www.uintah.utah.edu/ (accessed 05.09.13).
[36] Berzins, M., Status of release of the Uintah computational framework, tech. rep. UUSCI-2012-001, (2012), Scientific Computing and Imaging Institute, University of Utah
[37] J. Davison de St Germain, J. McCorquodale, S.G. Parker, et al. Uintah: a massively parallel problem solving environment, in: Proceedings of the 9th IEEE International Symposium on High Performance Distributed Computing, HPDC, 2000, pp. 33-41. http://dx.doi.org/10.1109/HPDC.2000.868632.
[38] A. Miller, The task graph pattern, in: Proceedings of the 2nd Workshop on Parallel Programming Patterns, ParaPLoP, 2010, pp. 8:1-8:7. http://dx.doi.org/10.1145/1953611.1953619.
[39] Cormen, T. H.; Leiserson, C. E.; Rivest, R. L.; Stein, C., Introduction to algorithms, (2009), The MIT Press · Zbl 1187.68679
[40] ViennaGrid. URL http://viennagrid.sourceforge.net/ (accessed 05.09.13).
[41] Schöberl, J., NETGEN: an advancing front 2D/3D mesh generator based on abstract rules, Comput. Vis. Sci., 1, 1, 41-52, (1997) · Zbl 0883.68130
[42] PETSc. URL http://www.mcs.anl.gov/petsc/ (accessed 05.09.13).
[43] Colella, P.; Bell, J.; Keen, N.; Ligocki, T., Performance and scaling of locally-structured grid methods for partial differential equations, J. Phys. Conf. Ser., 78, 1, 012013, (2007)
[44] Möller, M.; Kuzmin, D., Adaptive mesh refinement for high-resolution finite element schemes, Internat. J. Numer. Methods Fluids, 52, 5, 545-569, (2006) · Zbl 1108.65115
[45] Barth, T. J., A posteriori error estimation and mesh adaptivity for finite volume and finite element methods, (Adaptive Mesh Refinement—Theory and Applications, Lecture Notes in Computational Science and Engineering, vol. 41, (2005)), 183-202 · Zbl 1065.65113
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.