waLBerla: a block-structured high-performance framework for multiphysics simulations. (English) Zbl 07288725

Summary: Programming current supercomputers efficiently is a challenging task. Multiple levels of parallelism on the core, on the compute node, and between nodes need to be exploited to make full use of the system. Heterogeneous hardware architectures with accelerators further complicate the development process. waLBerla addresses these challenges by providing the user with highly efficient building blocks for developing simulations on block-structured grids. The block-structured domain partitioning is flexible enough to handle complex geometries, while the structured grid within each block allows for highly efficient implementations of stencil-based algorithms. We present several example applications realized with waLBerla, ranging from lattice Boltzmann methods to rigid particle simulations. Most importantly, these methods can be coupled together, enabling multiphysics simulations. The framework uses meta-programming techniques to generate highly efficient code for CPUs and GPUs from a symbolic method formulation.
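The code-generation approach described in the summary (a symbolic method formulation turned into efficient stencil code) can be illustrated with a minimal, self-contained sketch. All names here are hypothetical for illustration and do not reflect the actual waLBerla/pystencils API: a stencil is represented as a mapping from neighbor offsets to coefficients, from which both a C kernel string and a reference Python implementation are derived.

```python
# Hypothetical sketch of symbolic stencil code generation; not the
# waLBerla/pystencils API. A stencil is a dict {(di, dj): coefficient}.

def stencil_to_c(stencil, name="kernel"):
    """Emit a C function applying `stencil` to the interior of a 2D grid
    stored in row-major order."""
    terms = " + ".join(
        f"{coef} * src[(i + {di}) * ny + (j + {dj})]"
        for (di, dj), coef in sorted(stencil.items())
    )
    return (
        f"void {name}(const double *src, double *dst, int nx, int ny) {{\n"
        f"    for (int i = 1; i < nx - 1; ++i)\n"
        f"        for (int j = 1; j < ny - 1; ++j)\n"
        f"            dst[i * ny + j] = {terms};\n"
        f"}}\n"
    )

def apply_stencil(stencil, src):
    """Reference implementation: apply the stencil to interior points
    of a nested-list grid, leaving the boundary unchanged."""
    nx, ny = len(src), len(src[0])
    dst = [row[:] for row in src]
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            dst[i][j] = sum(c * src[i + di][j + dj]
                            for (di, dj), c in stencil.items())
    return dst

# Four-point average: a Jacobi smoother for the 2D Laplace equation.
jacobi = {(1, 0): 0.25, (-1, 0): 0.25, (0, 1): 0.25, (0, -1): 0.25}
```

In the real framework the symbolic layer additionally performs optimizations (vectorization, common-subexpression elimination) and can target GPUs; this sketch only conveys the separation between the mathematical stencil description and the generated loop nest.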


65-XX Numerical analysis
76-XX Fluid mechanics
Full Text: DOI arXiv


[1] Keyes, D. E.; McInnes, L. C.; Woodward, C.; Gropp, W.; Myra, E.; Pernice, M.; Bell, J.; Brown, J.; Clo, A.; Connors, J., Multiphysics simulations: Challenges and opportunities, Int. J. High Perform. Comput. Appl., 27, 1, 4-83 (2013)
[2] Rüde, U.; Willcox, K.; McInnes, L.; Sterck, H., Research and education in computational science and engineering, SIAM Rev., 60, 3, 707-754 (2018)
[3] Godenschwager, C.; Schornbaum, F.; Bauer, M.; Köstler, H.; Rüde, U., A framework for hybrid parallel flow simulations with a trillion cells in complex geometries, (Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis (2013), ACM Press), 1-12
[4] Feichtinger, C.; Donath, S.; Köstler, H.; Götz, J.; Rüde, U., WaLBerla: HPC software design for computational engineering simulations, J. Comput. Sci., 2, 2, 105-112 (2011)
[5] (2019), https://www.mpi-forum.org/, accessed on 30-09-2019
[6] Risso, J. V.T.; Bauer, M.; Carvalho, P. R.; Rüde, U.; Weingaertner, D., Scalable GPU communication with code generation on stencil applications, (2019 31st International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD) (2019)), 88-95
[7] Heuveline, V.; Latt, J., The OpenLB project: An open source and object oriented implementation of lattice Boltzmann methods, Internat. J. Modern Phys. C, 18, 04, 627-634 (2007) · Zbl 1388.76293
[8] (2019), https://www.openlb.net/, accessed on 30-09-2019
[9] Lagrava, D.; Malaspinas, O.; Latt, J.; Chopard, B., Advances in multi-domain lattice Boltzmann grid refinement, J. Comput. Phys., 231, 14, 4808-4822 (2012) · Zbl 1246.76131
[10] (2019), http://www.palabos.org/, accessed on 30-09-2019
[11] Mierke, D.; Janßen, C.; Rung, T., An efficient algorithm for the calculation of sub-grid distances for higher-order LBM boundary conditions in a GPU simulation environment, Comput. Math. Appl. (2018)
[12] (2019), https://www.tuhh.de/elbe/home.html, accessed on 30-09-2019
[13] Groen, D.; Henrich, O.; Janoschek, F.; Coveney, P.; Harting, J., Lattice-Boltzmann methods in fluid dynamics: Turbulence and complex colloidal fluids, (JüLich Blue Gene/P Extreme Scaling Workshop (2011)), 17
[14] Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V., LB3D: A parallel implementation of the lattice-Boltzmann method for simulation of interacting amphiphilic fluids, Comput. Phys. Comm., 217, 149-161 (2017) · Zbl 1408.76004
[15] (2019), http://ccs.chem.ucl.ac.uk/lb3d, accessed on 30-09-2019
[16] Groen, D.; Hetherington, J.; Carver, H. B.; Nash, R. W.; Bernabeu, M. O.; Coveney, P. V., Analysing and modelling the performance of the HemeLB lattice-Boltzmann simulation environment, J. Comput. Sci., 4, 5, 412-422 (2013)
[17] (2019), https://github.com/sailfish-team/sailfish, accessed on 30-09-2019
[18] Liu, Z.; Chu, X.; Lv, X.; Meng, H.; Shi, S.; Han, W.; Xu, J.; Fu, H.; Yang, G., SunwayLB: Enabling extreme-scale lattice Boltzmann method based computing fluid dynamics simulations on Sunway TaihuLight, (2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS) (2019)), 557-566
[19] Wittmann, M.; Haag, V.; Zeiser, T.; Köstler, H.; Wellein, G., Lattice Boltzmann benchmark kernels as a testbed for performance analysis, Comput. & Fluids, 172, 582-592 (2018) · Zbl 1410.76388
[20] (2019), https://www.cfdem.com/, accessed on 30-09-2019
[21] (2019), https://www.yakuru.fr/granoo/index.html, accessed on 30-09-2019
[22] (2019), https://yade-dev.gitlab.io/trunk/, accessed on 30-09-2019
[23] (2019), https://projectchrono.org/, accessed on 30-09-2019
[24] (2019), http://www.mercurydpm.org/, accessed on 30-09-2019
[25] Preclik, T.; Rüde, U., Ultrascale simulations of non-smooth granular dynamics, Comput. Part. Mech., 2, 2, 173-196 (2015)
[26] Schruff, T.; Liang, R.; Rüde, U.; Schüttrumpf, H.; Frings, R. M., Generation of dense granular deposits for porosity analysis: assessment and application of large-scale non-smooth granular dynamics, Comput. Part. Mech., 5, 1, 1-12 (2016)
[27] Ostanin, I. A.; Zhilyaev, P.; Petrov, V.; Dumitrica, T.; Eibl, S.; Rüde, U.; Kuzkin, V. A., Toward large scale modeling of carbon nanotube systems with the mesoscopic distinct element method, Lett. Mater., 8, 3, 240-245 (2018)
[28] Ostanin, I.; Dumitrica, T.; Eibl, S.; Rüde, U., Size-independent mechanical response of ultrathin CNT films in mesoscopic distinct element method simulations, J. Appl. Mech., 1-17 (2019)
[29] Rettinger, C.; Rüde, U., A comparative study of fluid-particle coupling methods for fully resolved lattice Boltzmann simulations, Comput. & Fluids, 154, 74-89 (2017) · Zbl 1390.76759
[30] Rettinger, C.; Rüde, U., A coupled lattice Boltzmann method and discrete element method for discrete particle simulations of particulate flows, Comput. & Fluids, 172, 706-719 (2018) · Zbl 1410.76458
[31] Hötzer, J.; Jainta, M.; Steinmetz, P.; Nestler, B.; Dennstedt, A.; Genau, A.; Bauer, M.; Köstler, H.; Rüde, U., Large scale phase-field simulations of directional ternary eutectic solidification, Acta Mater., 93, 194-204 (2015)
[32] Bauer, M.; Hötzer, J.; Ernst, D.; Hammer, J.; Seitz, M.; Hierl, H.; Hönig, J.; Köstler, H.; Wellein, G.; Nestler, B.; Rüde, U., Code generation for massively parallel phase-field simulations, (Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (2019), ACM), 59:1-59:32
[33] Deiterding, R.; Wood, S. L., Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries, J. Phys. Conf. Ser., 753, 8, Article 082005 pp. (2016)
[34] (2019), https://amroc.sourceforge.net/, accessed on 30-09-2019
[35] Burstedde, C.; Wilcox, L.; Ghattas, O., P4est: Scalable algorithms for parallel adaptive mesh refinement on forests of octrees, SIAM J. Sci. Comput., 33, 3, 1103-1133 (2011) · Zbl 1230.65106
[36] Neumann, P.; Neckel, T., A dynamic mesh refinement technique for lattice Boltzmann simulations on octree-like grids, Comput. Mech., 51, 2, 237-253 (2012) · Zbl 1312.76051
[37] (2019), https://i10git.cs.fau.de/walberla/walberla/, accessed on 30-09-2019
[38] (2019), https://www.walberla.net, accessed on 30-09-2019
[39] Schornbaum, F.; Rüde, U., Massively parallel algorithms for the lattice Boltzmann method on nonuniform grids, SIAM J. Sci. Comput., 38, 2, 96-126 (2016)
[40] Schornbaum, F.; Rüde, U., Extreme-scale block-structured adaptive mesh refinement, SIAM J. Sci. Comput., 40, 3, 358-387 (2018) · Zbl 06890193
[41] Dubey, A.; Almgren, A.; Bell, J.; Berzins, M.; Brandt, S.; Bryan, G.; Colella, P.; Graves, D.; Lijewski, M.; Löffler, F.; O’Shea, B.; Schnetter, E.; Straalen, B. V.; Weide, K., A survey of high level frameworks in block-structured adaptive mesh refinement packages, J. Parallel Distrib. Comput., 74, 12, 3217-3227 (2014)
[42] Schloegel, K.; Karypis, G.; Kumar, V., Parallel static and dynamic multi-constraint graph partitioning, Concurr. Comput.: Pract. Exper., 14, 3, 219-240 (2002) · Zbl 1012.68146
[43] (2019), http://glaros.dtc.umn.edu/gkhome/views/metis/, accessed on 30-09-2019
[44] (2019), https://www.top500.org/, accessed on 30-09-2019
[45] Snir, M.; Wisniewski, R. W.; Abraham, J. A.; Adve, S. V.; Bagchi, S.; Balaji, P.; Belak, J.; Bose, P.; Cappello, F.; Carlson, B.; Chien, A. A.; Coteus, P.; DeBardeleben, N. A.; Diniz, P. C.; Engelmann, C.; Erez, M.; Fazzari, S.; Geist, A.; Gupta, R.; Johnson, F.; Krishnamoorthy, S.; Leyffer, S.; Liberty, D.; Mitra, S.; Munson, T.; Schreiber, R.; Stearley, J.; Hensbergen, E. V., Addressing failures in exascale computing, Int. J. High Perform. Comput. Appl., 28, 2, 129-173 (2014)
[46] Dongarra, J., Emerging heterogeneous technologies for high performance computing (2019), http://www.netlib.org/utk/people/JackDongarra/SLIDES/hcw-0513.pdf, accessed on 30-09-2019
[47] Huang, Kuang-Hua; Abraham, J. A., Algorithm-based fault tolerance for matrix operations, IEEE Trans. Comput., C-33, 6, 518-528 (1984) · Zbl 0557.68027
[48] Randell, B., System structure for software fault tolerance, IEEE Trans. Softw. Eng., SE-1, 2, 220-232 (1975)
[49] Huber, M.; Gmeiner, B.; Rüde, U.; Wohlmuth, B., Resilience for massively parallel multigrid solvers, SIAM J. Sci. Comput., 38, 5, S217-S239 (2016) · Zbl 1352.65626
[50] Zheng, G.; Ni, X.; Kalé, L. V., A scalable double in-memory checkpoint and restart scheme towards exascale, (IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN 2012) (2012), IEEE), 1-6
[51] Herault, T.; Robert, Y., Fault-Tolerance Techniques for High-Performance Computing (2015), Springer · Zbl 1330.68026
[52] Kohl, N.; Hötzer, J.; Schornbaum, F.; Bauer, M.; Godenschwager, C.; Köstler, H.; Nestler, B.; Rüde, U., A scalable and extensible checkpointing scheme for massively parallel simulations, Int. J. High Perform. Comput. Appl., 33, 4, 571-589 (2019)
[53] Lorensen, W. E.; Cline, H. E., Marching cubes: A high resolution 3D surface construction algorithm, (ACM SIGGRAPH Computer Graphics, Vol. 21 (1987), ACM), 163-169
[54] Bauer, M.; Hötzer, J.; Jainta, M.; Steinmetz, P.; Berghoff, M.; Schornbaum, F.; Godenschwager, C.; Köstler, H.; Nestler, B.; Rüde, U., Massively parallel phase-field simulations for ternary eutectic directional solidification, (SC’15: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (2015), IEEE), 1-12
[55] Garland, M.; Heckbert, P. S., Surface simplification using quadric error metrics, (Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (1997), ACM Press/Addison-Wesley Publishing Co.), 209-216
[56] (2019), https://www.openmesh.org/, accessed on 30-09-2019
[57] Jones, M. W., 3D distance from a point to a triangle, Tech. Rep. (1995), Department of Computer Science, University of Wales
[58] Bærentzen, J.; Aanæs, H., Signed distance computation using the angle weighted pseudonormal, IEEE Trans. Vis. Comput. Graphics, 11, 3, 243-253 (2005)
[59] Payne, B.; Toga, A., Distance field manipulation of surface models, IEEE Comput. Graph. Appl., 12, 1, 65-71 (1992)
[60] Krüger, T.; Kusumaatmaja, H.; Kuzmin, A.; Shardt, O.; Silva, G.; Viggen, E. M., The Lattice Boltzmann Method (2017), Springer
[61] d’Humières, D., Multiple-relaxation-time lattice Boltzmann models in three dimensions, Phil. Trans. R. Soc. A, 360, 1792, 437-451 (2002) · Zbl 1001.76081
[62] Ginzburg, I.; Verhaeghe, F.; d’Humières, D., Two-relaxation-time lattice Boltzmann scheme: About parametrization, velocity, pressure and mixed boundary conditions, Commun. Comput. Phys., 3, 2, 427-478 (2008)
[63] Geier, M.; Schönherr, M.; Pasquali, A.; Krafczyk, M., The cumulant lattice Boltzmann equation in three dimensions: Theory and validation, Comput. Math. Appl., 70, 4, 507-547 (2015) · Zbl 1443.76172
[64] Yu, H.; Girimaji, S. S.; Luo, L.-S., DNS and LES of decaying isotropic turbulence with and without frame rotation using lattice Boltzmann method, J. Comput. Phys., 209, 2, 599-616 (2005) · Zbl 1138.76373
[65] Bösch, F.; Chikatamarla, S. S.; Karlin, I. V., Entropic multirelaxation lattice Boltzmann models for turbulent flows, Phys. Rev. E, 92, Article 043309 pp. (2015)
[66] Junk, M.; Yang, Z., Outflow boundary conditions for the lattice Boltzmann method, Prog. Comput. Fluid Dyn., 8, 1-4, 38-48 (2008) · Zbl 1139.76045
[67] Guo, Z.; Zheng, C.; Shi, B., Discrete lattice effects on the forcing term in the lattice Boltzmann method, Phys. Rev. E, 65, Article 046308 pp. (2002) · Zbl 1244.76102
[68] Rohde, M.; Kandhai, D.; Derksen, J. J.; van den Akker, H. E.A., A generic, mass conservative local grid refinement technique for lattice-Boltzmann schemes, Internat. J. Numer. Methods Fluids, 51, 4, 439-468 (2006) · Zbl 1276.76060
[69] Schornbaum, F., Block-Structured Adaptive Mesh Refinement for Simulations on Extreme-Scale Supercomputers (2018), Friedrich-Alexander-Universität Erlangen-Nürnberg, (Ph.D. thesis) · Zbl 06890193
[70] Zeiser, T.; Wellein, G.; Nitsure, A.; Iglberger, K.; Rüde, U.; Hager, G., Introducing a parallel cache oblivious blocking approach for the lattice Boltzmann method, Prog. Comput. Fluid Dyn. Int. J., 8, 1-4, 179-188 (2008) · Zbl 1388.76320
[71] Donath, S.; Iglberger, K.; Wellein, G.; Zeiser, T.; Nitsure, A.; Rüde, U., Performance comparison of different parallel lattice Boltzmann implementations on multi-core multi-socket systems, Int. J. Comput. Sci. Eng., 4, 1, 3-11 (2008)
[72] Wellein, G.; Zeiser, T.; Hager, G.; Donath, S., On the single processor performance of simple lattice Boltzmann kernels, Comput. & Fluids, 35, 8-9, 910-919 (2006) · Zbl 1177.76335
[73] (2019), http://www.fz-juelich.de/ias/jsc/EN/Expertise/High-Q-Club/_node.html, accessed on 30-09-2019
[74] Eibl, S.; Rüde, U., A local parallel communication algorithm for polydisperse rigid body dynamics, Parallel Comput., 80, 36-48 (2018)
[75] Fattahi, E.; Waluga, C.; Wohlmuth, B.; Rüde, U., Large scale lattice Boltzmann simulation for the coupling of free and porous media flow, (Proceedings of the International Conference on High Performance Computing in Science and Engineering (2016), Springer), 1-18 · Zbl 1382.76207
[76] Fattahi, E.; Waluga, C.; Wohlmuth, B.; Rüde, U.; Manhart, M.; Helmig, R., Lattice Boltzmann methods in porous media simulations: From laminar to turbulent flow, Comput. & Fluids, 140, 247-259 (2016) · Zbl 1390.76847
[77] Rybak, I.; Schwarzmeier, C.; Eggenweiler, E.; Rüde, U., Validation and calibration of coupled porous-medium and free-flow problems using pore-scale resolved models (2019), submitted manuscript: https://arxiv.org/abs/1906.06884
[78] Gil, A.; Galache, J.; Godenschwager, C.; Rüde, U., Optimum configuration for accurate simulations of chaotic porous media with lattice Boltzmann methods considering boundary conditions, lattice spacing and domain size, Comput. Math. Appl., 73, 12, 2515-2528 (2017) · Zbl 1370.76143
[79] Eibl, S.; Rüde, U., A systematic comparison of runtime load balancing algorithms for massively parallel rigid particle dynamics, Comput. Phys. Comm., 244, 76-85 (2019)
[80] Hockney, R.; Goel, S.; Eastwood, J., Quiet high-resolution computer models of a plasma, J. Comput. Phys., 14, 2, 148-158 (1974)
[81] Allen, M. P.; Tildesley, D. J., Computer Simulation of Liquids (2017), Oxford University Press · Zbl 1372.82005
[82] Ericson, C., Real-Time Collision Detection (2004), CRC Press
[83] Erleben, K.; Sporring, J.; Henriksen, K.; Dohlman, K., Physics-Based Animation (Graphics Series) (2005), Charles River Media, Inc.: Charles River Media, Inc. Rockland, MA, USA
[84] Gilbert, E.; Johnson, D.; Keerthi, S., A fast procedure for computing the distance between complex objects in three-dimensional space, IEEE J. Robot. Autom., 4, 2, 193-203 (1988)
[85] Gilbert, E. G.; Foo, C.-P., Computing the distance between general convex objects in three-dimensional space, IEEE Trans. Robot. Autom., 6, 1, 53-61 (1990)
[86] Bergen, G. V.D., Collision Detection in Interactive 3D Environments (2003), CRC Press
[87] P.A. Cundall, A computer model for simulating progressive, large-scale movements in blocky rock systems, in: Proceedings of the Symposium of the International Society for Rock Mechanics, 1971, II-8.
[88] Cundall, P. A.; Strack, O. D.L., A discrete numerical model for granular assemblies, Géotechnique, 29, 1, 47-65 (1979)
[89] Preclik, T.; Eibl, S.; Rüde, U., The maximum dissipation principle in rigid-body dynamics with inelastic impacts, Comput. Mech., 62, 1, 1-16 (2017)
[90] Rapaport, D., Multi-million particle molecular dynamics: II. design considerations for distributed processing, Comput. Phys. Comm., 62, 2-3, 217-228 (1991)
[91] Eibl, S.; Preclik, T.; Rüde, U., JUQUEEN Extreme Scaling Workshop 2017, JSC Internal Report, 47 (2017), URL https://juser.fz-juelich.de/record/828084
[92] Ladd, A. J.C., Numerical simulations of particulate suspensions via a discretized Boltzmann equation. Part 1. Theoretical foundation, J. Fluid Mech., 271, 285-309 (1994) · Zbl 0815.76085
[93] Aidun, C. K.; Lu, Y.; Ding, E.-J., Direct analysis of particulate suspensions with inertia using the discrete Boltzmann equation, J. Fluid Mech., 373, 287-311 (1998) · Zbl 0933.76092
[94] Noble, D. R.; Torczynski, J. R., A lattice-Boltzmann method for partially saturated computational cells, Internat. J. Modern Phys. C, 09, 08, 1189-1201 (1998)
[95] Zou, Q.; He, X., On pressure and velocity boundary conditions for the lattice Boltzmann BGK model, Phys. Fluids, 9, 6, 1591-1598 (1997) · Zbl 1185.76873
[96] Peng, C.; Teng, Y.; Hwang, B.; Guo, Z.; Wang, L.-P., Implementation issues and benchmarking of lattice Boltzmann method for moving rigid particle simulations in a viscous flow, Comput. Math. Appl., 72, 2, 349-374 (2016) · Zbl 1358.76061
[97] Rettinger, C.; Rüde, U., Dynamic load balancing techniques for particulate flow simulations, Computation, 7, 1 (2019)
[98] Rettinger, C.; Godenschwager, C.; Eibl, S.; Preclik, T.; Schruff, T.; Frings, R.; Rüde, U., Fully resolved simulations of dune formation in riverbeds, (Kunkel, J. M.; Yokota, R.; Balaji, P.; Keyes, D., High Performance Computing (2017), Springer International Publishing: Springer International Publishing Cham), 3-21
[99] Huang, L. R.; Cox, E. C.; Austin, R. H.; Sturm, J. C., Continuous particle separation through deterministic lateral displacement, Science, 304, 5673, 987-990 (2004)
[100] McGrath, J.; Jimenez, M.; Bridle, H., Deterministic lateral displacement for particle separation: a review, Lab Chip, 14, 4139-4158 (2014)
[101] Kuron, M.; Stärk, P.; Burkard, C.; de Graaf, J.; Holm, C., A lattice Boltzmann model for squirmers, J. Chem. Phys., 150, 14, Article 144110 pp. (2019)
[102] Kuron, M.; Stärk, P.; Holm, C.; de Graaf, J., Hydrodynamic mobility reversal of squirmers near flat and curved surfaces, Soft Matter, 15, 5908-5920 (2019)
[103] Elgeti, J.; Winkler, R. G.; Gompper, G., Physics of microswimmers—single particle motion and collective behavior: a review, Rep. Progr. Phys., 78, 5, Article 056601 pp. (2015)
[104] Blake, J. R., A spherical envelope approach to ciliary propulsion, J. Fluid Mech., 46, 1, 199-208 (1971) · Zbl 0224.76031
[105] Lighthill, M., On the squirming motion of nearly spherical deformable bodies through liquids at very small Reynolds numbers, Comm. Pure Appl. Math., 5, 2, 109-118 (1952) · Zbl 0046.41908
[106] Schruff, T.; Schornbaum, F.; Godenschwager, C.; Rüde, U.; Frings, R. M.; Schüttrumpf, H., Numerical simulation of pore fluid flow and fine sediment infiltration into the riverbed, (11th International Conference on Hydroinformatics, CUNY Academic Works (2014))
[107] Pippig, M., PFFT: An extension of FFTW to massively parallel architectures, SIAM J. Sci. Comput., 35, 3, C213-C236 (2013) · Zbl 1275.65098
[108] Bartuschat, D.; Rüde, U., Parallel multiphysics simulations of charged particles in microfluidic flows, J. Comput. Sci., 8, 1-19 (2015)
[109] Capuani, F.; Pagonabarraga, I.; Frenkel, D., Discrete solution of the electrokinetic equations, J. Chem. Phys., 121, 2, 973-986 (2004)
[110] Rempfer, G.; Davies, G. B.; Holm, C.; de Graaf, J., Reducing spurious flow in simulations of electrokinetic phenomena, J. Chem. Phys., 145, 4, Article 044901 pp. (2016)
[111] Kuron, M.; Rempfer, G.; Schornbaum, F.; Bauer, M.; Godenschwager, C.; Holm, C.; de Graaf, J., Moving charged particles in lattice Boltzmann-based electrokinetics, J. Chem. Phys., 145, 21, Article 214102 pp. (2016)
[112] (2019), https://i10git.cs.fau.de/pycodegen/pystencils, accessed on 30-09-2019
[113] Meurer, A.; Smith, C. P.; Paprocki, M.; Čertík, O.; Kirpichev, S. B.; Rocklin, M.; Kumar, A.; Ivanov, S.; Moore, J. K.; Singh, S.; Rathnayake, T.; Vig, S.; Granger, B. E.; Muller, R. P.; Bonazzi, F.; Gupta, H.; Vats, S.; Johansson, F.; Pedregosa, F.; Curry, M. J.; Terrel, A. R.; v. Roučka, S.; Saboo, A.; Fernando, I.; Kulal, S.; Cimrman, R.; Scopatz, A., SymPy: symbolic computing in python, PeerJ Comput. Sci., 3, Article e103 pp. (2017)
[114] (2019), https://i10git.cs.fau.de/pycodegen/pystencils_walberla, accessed on 30-09-2019
[115] S. Eibl, U. Rüde, A modular and extensible software architecture for particle dynamics, in: Proceedings of the 8th International Conference on Discrete Element Methods (DEM8). URL http://arxiv.org/abs/1906.10963.
[116] (2019), https://jinja.palletsprojects.com/, accessed on 30-09-2019
[117] (2019), https://git-scm.com/, accessed on 30-09-2019
[118] (2019), https://gitlab.com/, accessed on 30-09-2019
[119] (2019), https://github.com/, accessed on 30-09-2019
[120] (2019), https://grafana.com/, accessed on 30-09-2019
[121] (2019), https://www.docker.com/, accessed on 30-09-2019
[122] (2019), https://www.paraview.org/, accessed on 30-09-2019
[123] (2019), https://wci.llnl.gov/simulation/computer-codes/visit/, accessed on 30-09-2019
[124] Bauer, M.; Schornbaum, F.; Godenschwager, C.; Markl, M.; Anderl, D.; Köstler, H.; Rüde, U., A Python extension for the massively parallel multiphysics simulation framework waLBerla, Int. J. Parallel Emergent Distrib. Syst., 31, 6, 529-542 (2016)
[125] (2019), https://www.boost.org/doc/libs/1_72_0/libs/python/, accessed on 13-12-2019
[126] (2019), https://numpy.org/, accessed on 30-09-2019
[127] Kohl, N.; Thönnes, D.; Drzisga, D.; Bartuschat, D.; Rüde, U., The HyTeG finite-element software framework for scalable multigrid solvers, Int. J. Parallel Emergent Distrib. Syst., 34, 5, 477-496 (2019)
[128] Körner, C.; Thies, M.; Hofmann, T.; Thürey, N.; Rüde, U., Lattice Boltzmann model for free surface flow for modeling foaming, J. Stat. Phys., 121, 1, 179-196 (2005) · Zbl 1108.76059
[129] Donath, S.; Feichtinger, C.; Pohl, T.; Götz, J.; Rüde, U., Localized parallel algorithm for bubble coalescence in free surface lattice-Boltzmann method, (Sips, H.; Epema, D.; Lin, H., Euro-Par 2009 Parallel Processing. Euro-Par 2009 Parallel Processing, Lecture Notes in Computer Science, vol. 5704 (2009), Springer: Springer Berlin, Heidelberg), 735-746
[130] Anderl, D.; Bogner, S.; Rauh, C.; Rüde, U.; Delgado, A., Free surface lattice Boltzmann with enhanced bubble model, Comput. Math. Appl., 67, 2, 331-339 (2014) · Zbl 1381.76274
[131] Donath, S.; Mecke, K.; Rabha, S.; Buwa, V.; Rüde, U., Verification of surface tension in the parallel free surface lattice Boltzmann method in waLBerla, Comput. & Fluids, 45, 1, 177-186 (2011) · Zbl 1430.76009
[132] Anderl, D.; Bauer, M.; Rauh, C.; Rüde, U.; Delgado, A., Numerical simulation of adsorption and bubble interaction in protein foams using a lattice Boltzmann method, Food Funct., 5, 755-763 (2014)
[133] Anderl, D.; Bauer, M.; Rauh, C.; Rüde, U.; Delgado, A., Numerical simulation of bubbles in shear flow, PAMM, 14, 1, 667-668 (2014)
[134] Ammer, R.; Markl, M.; Ljungblad, U.; Körner, C.; Rüde, U., Simulating fast electron beam melting with a parallel thermal free surface lattice Boltzmann method, Comput. Math. Appl., 67, 318-330 (2014) · Zbl 1381.76273
[135] Markl, M.; Ammer, R.; Rüde, U.; Körner, C., Numerical investigations on hatching process strategies for powder-bed-based additive manufacturing using an electron beam, Int. J. Adv. Manuf. Technol., 78, 1-4, 239-247 (2015)
[136] Bauer, M.; Hötzer, J.; Jainta, M.; Steinmetz, P.; Berghoff, M.; Schornbaum, F.; Godenschwager, C.; Köstler, H.; Nestler, B.; Rüde, U., Massively parallel phase-field simulations for ternary eutectic directional solidification, (Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (2015), ACM), 8
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.