Enhancing speed and scalability of the ParFlow simulation code. (English) Zbl 1405.65116
Summary: Regional hydrology studies are often supported by high-resolution simulations of subsurface flow that require expensive and extensive computations. Efficient use of the latest high-performance parallel computing systems has therefore become a necessity. The simulation software ParFlow has been demonstrated to meet this requirement, showing excellent solver scalability for up to 16,384 processes. In the present work, we show that the code requires further enhancements to fully exploit current petascale machines. We identify ParFlow’s approach to parallelizing the computational mesh as a central bottleneck. We propose to reorganize this subsystem using fast mesh partitioning algorithms provided by the parallel adaptive mesh refinement library p4est. We realize this in a minimally invasive manner by modifying selected parts of the code to reinterpret the existing mesh data structures. We evaluate the scaling performance of the modified version of ParFlow, demonstrating good weak and strong scaling up to 458k cores of the Juqueen supercomputer, and test an example application at large scale.
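To give a flavor of the approach, the following C sketch shows how a uniform block decomposition like ParFlow's could be described and partitioned with p4est. It is a minimal illustration, not the actual ParFlow integration code: the 8 x 8 x 4 block counts are hypothetical placeholders, and the program only creates the mesh and rebalances it across MPI ranks.

/* Sketch: describe a uniform 3D block mesh with p4est and partition it.
 * Each octree root stands in for one ParFlow subgrid.  Build against a
 * p4est installation with MPI enabled. */
#include <p8est.h>
#include <p8est_extended.h>

int main (int argc, char **argv)
{
  sc_MPI_Comm           mpicomm = sc_MPI_COMM_WORLD;
  p8est_connectivity_t *conn;
  p8est_t              *p8est;

  sc_MPI_Init (&argc, &argv);

  /* One octree root per coarse block: 8 x 8 x 4 blocks, no periodicity
     (the counts are placeholders for a real subgrid layout). */
  conn = p8est_connectivity_new_brick (8, 8, 4, 0, 0, 0);

  /* Uniform refinement level 0: every root is a single leaf. */
  p8est = p8est_new_ext (mpicomm, conn,
                         0,          /* minimum quadrants per rank */
                         0,          /* uniform refinement level */
                         1,          /* fill the level uniformly */
                         0,          /* no per-quadrant payload */
                         NULL, NULL);

  /* Redistribute the leaves evenly along p4est's space-filling curve;
     each rank then owns a contiguous run of blocks. */
  p8est_partition (p8est, 0, NULL);

  p8est_destroy (p8est);
  p8est_connectivity_destroy (conn);
  sc_MPI_Finalize ();
  return 0;
}

In the scheme described in the summary, each leaf assigned to a rank would then be reinterpreted as one of ParFlow's existing subgrids, so the compute kernels keep operating on their familiar data structures.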
MSC:
65M60 Finite element, Rayleigh-Ritz and Galerkin methods for initial value and initial-boundary value problems involving PDEs
65M50 Mesh generation, refinement, and adaptive methods for the numerical solution of initial value and initial-boundary value problems involving PDEs
65Y05 Parallel numerical computation
65Y15 Packaged methods for numerical algorithms
76S05 Flows in porous media; filtration; seepage
86A05 Hydrology, hydrography, oceanography
References:
[1] Ashby, SF; Falgout, RD, A parallel multigrid preconditioned conjugate gradient algorithm for groundwater flow simulations, Nucl. Sci. Eng., 124, 145-159, (1996)
[2] Balay, S., Brown, J., Buschelman, K., Eijkhout, V., Gropp, W.D., Kaushik, D., Knepley, M.G., McInnes, L.C., Smith, B.F., Zhang, H.: PETSc users manual. Tech. Rep. ANL-95/11 - Revision 3.3, Argonne National Laboratory (2012)
[3] Burstedde, C., Ghattas, O., Gurnis, M., Isaac, T., Stadler, G., Warburton, T., Wilcox, L.C.: Extreme-scale AMR. In: SC10: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. ACM/IEEE (2010)
[4] Burstedde, C., Holke, J., Isaac, T.: Bounds on the number of discontinuities of Morton-type space-filling curves. arXiv:1505.05055 (2017)
[5] Burstedde, C.; Wilcox, LC; Ghattas, O., p4est: scalable algorithms for parallel adaptive mesh refinement on forests of octrees, SIAM J. Sci. Comput., 33, 1103-1133, (2011) · Zbl 1230.65106
[6] Camporese, M., Paniconi, C., Putti, M., Orlandini, S.: Surface-subsurface flow modeling with path-based runoff routing, boundary condition-based coupling, and assimilation of multisource observation data. Water Resour. Res. 46(2), W02512 (2010). https://doi.org/10.1029/2008WR007536
[7] Dai, Y.; Zeng, X.; Dickinson, RE; Baker, I.; et al., The common land model, Bull. Am. Meteorol. Soc., 84, 1013, (2003)
[8] Gasper, F.; Goergen, K.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.; Kollet, S., Implementation and scaling of the fully coupled terrestrial systems modeling platform (TerrSysMP v1.0) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q), Geosci. Model Dev., 7, 2531-2543, (2014)
[9] Geimer, M.; Wolf, F.; Wylie, B.; Ábrahám, E.; Becker, D.; Mohr, B., The Scalasca performance toolset architecture, Concurrency and Computation: Practice and Experience, 22, 702-719, (2010)
[10] Hammond, GE; Lichtner, PC; Mills, RT, Evaluating the performance of parallel subsurface simulators: an illustrative example with PFLOTRAN, Water Resour. Res., 50, 208-228, (2014)
[11] Hardelauf, H.; Javaux, M.; Herbst, M.; Gottschalk, S.; Kasteel, R.; Vanderborght, J.; Vereecken, H., PARSWMS: a parallelized model for simulating three-dimensional water flow and solute transport in variably saturated soils, Vadose Zone J., 6, 255-259, (2007)
[12] Haring, RA; Ohmacht, M.; Fox, TW; Gschwind, MK; Satterfield, DL; Sugavanam, K.; Coteus, PW; Heidelberger, P.; Blumrich, MA; Wisniewski, RW; et al., The IBM Blue Gene/Q compute chip, IEEE Micro, 32, 48-60, (2012)
[13] Hindmarsh, AC; Brown, PN; Grant, KE; Lee, SL; Serban, R.; Shumaker, DE; Woodward, CS, SUNDIALS: suite of nonlinear and differential/algebraic equation solvers, ACM Trans. Math. Softw. (TOMS), 31, 363-396, (2005) · Zbl 1136.65329
[14] Hwang, HT; Park, YJ; Sudicky, E.; Forsyth, P., A parallel computational framework to solve flow and transport in integrated surface-subsurface hydrologic systems, Environ. Model Softw., 61, 39-58, (2014)
[15] Isaac, T.; Burstedde, C.; Wilcox, LC; Ghattas, O., Recursive algorithms for distributed forests of octrees, SIAM J. Sci. Comput., 37, C497-C531, (2015) · Zbl 1323.65105
[16] Jones, JE; Woodward, CS, Newton-Krylov-multigrid solvers for large-scale, highly heterogeneous, variably saturated flow problems, Adv. Water Resour., 24, 763-774, (2001)
[17] Jülich Supercomputing Centre: JUQUEEN: IBM Blue Gene/Q supercomputer system at the Jülich Supercomputing Centre. Journal of Large-Scale Research Facilities 1, A1 (2015). https://doi.org/10.17815/jlsrf-1-18
[18] Karypis, G., Kumar, V.: METIS - unstructured graph partitioning and sparse matrix ordering system. Version 2.0 (1995)
[19] Karypis, G.; Kumar, V., A parallel algorithm for multilevel graph partitioning and sparse matrix ordering, J. Parallel Distrib. Comput., 48, 71-95, (1998)
[20] Kollet, SJ; Maxwell, RM, Integrated surface-groundwater flow modeling: a free-surface overland flow boundary condition in a parallel groundwater flow model, Adv. Water Resour., 29, 945-958, (2006)
[21] Kollet, S.J., Maxwell, R.M.: Capturing the influence of groundwater dynamics on land surface processes using an integrated, distributed watershed model. Water Resour. Res. 44(2) (2008). https://doi.org/10.1029/2007WR006004
[22] Kollet, SJ; Maxwell, RM; Woodward, CS; Smith, S.; Vanderborght, J.; Vereecken, H.; Simmer, C., Proof of concept of regional scale hydrologic simulations at hydrologic resolution utilizing massively parallel computer resources, Water Resour. Res., 46, W04201, (2010)
[23] Kuznetsov, M.; Yakirevich, A.; Pachepsky, Y.; Sorek, S.; Weisbrod, N., Quasi 3D modeling of water flow in vadose zone and groundwater, J. Hydrol., 450-451, 140-149, (2012)
[24] Liu, X., Parallel modeling of three-dimensional variably saturated ground water flows with unstructured mesh using the open source finite volume platform OpenFOAM, Engineering Applications of Computational Fluid Mechanics, 7, 223-238, (2013)
[25] Maxwell, RM, A terrain-following grid transform and preconditioner for parallel, large-scale, integrated hydrologic modeling, Adv. Water Resour., 53, 109-117, (2013)
[26] Miller, CT; Dawson, CN; Farthing, MW; Hou, TY; Huang, J.; Kees, CE; Kelley, C.; Langtangen, HP, Numerical simulation of water resources problems: models, methods, and trends, Adv. Water Resour., 51, 405-437, (2013)
[27] Müller, A., Kopera, M.A., Marras, S., Wilcox, L.C., Isaac, T., Giraldo, F.X.: Strong scaling for numerical weather prediction at petascale with the atmospheric model NUMA. arXiv:1511.01561 (2015)
[28] Muskat, M.: Physical principles of oil production. IHRDC, Boston, MA (1981)
[29] OpenCFD: OpenFOAM - the open source CFD toolbox - user's guide. OpenCFD Ltd., United Kingdom. 1.4 edn (2007)
[30] Orgogozo, L.; Renon, N.; Soulaine, C.; Hénon, F.; Tomer, S.; Labat, D.; Pokrovsky, O.; Sekhar, M.; Ababou, R.; Quintard, M., An open source massively parallel solver for Richards equation: mechanistic modelling of water fluxes at the watershed scale, Comput. Phys. Commun., 185, 3358-3371, (2014) · Zbl 1360.76008
[31] Osei-Kuffuor, D.; Maxwell, R.; Woodward, C., Improved numerical solvers for implicit coupling of subsurface and overland flow, Adv. Water Resour., 74, 185-195, (2014)
[32] Performance Application Programming Interface (PAPI). http://icl.cs.utk.edu/papi/. Last accessed September 7, 2017
[33] Richards, LA, Capillary conduction of liquids through porous media, Physics, 1, 318-333, (1931) · Zbl 0003.28403
[34] Rudi, J., Malossi, A.C.I., Isaac, T., Stadler, G., Gurnis, M., Staar, P.W.J., Ineichen, Y., Bekas, C., Curioni, A., Ghattas, O.: An extreme-scale implicit solver for complex PDEs: highly heterogeneous flow in Earth's mantle. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, p. 5. ACM (2015)
[35] Saviankou, P.; Knobloch, M.; Visser, A.; Mohr, B., Cube v4: from performance report explorer to performance analysis tool, Procedia Computer Science, 51, 1343-1352, (2015)
[36] The Hypre Team: hypre - high performance preconditioners user's manual. Center for Applied Scientific Computing, Lawrence Livermore National Laboratory. Software version 2.0.9b (2012)
[37] Tompson, AFB; Ababou, R.; Gelhar, LW, Implementation of the three-dimensional turning bands random field generator, Water Resour. Res., 25, 2227-2243, (1989)
[38] Tuminaro, R.S., Heroux, M., Hutchinson, S.A., Shadid, J.N.: Official Aztec User's Guide. Sandia National Laboratories, SAND99-8801J edn (1999)
[39] Yamamoto, H.; Zhang, K.; Karasaki, K.; Marui, A.; Uehara, H.; Nishikawa, N., Numerical investigation concerning the impact of CO2 geologic storage on regional groundwater flow, Int. J. Greenhouse Gas Control, 3, 586-599, (2009)
[40] Zhang, K., Wu, Y.S., Pruess, K.: User's guide for TOUGH2-MP - a massively parallel version of the TOUGH2 code. Lawrence Berkeley National Laboratory. Report LBNL-315E (2008)