The main principle to consider when running with MPI is that EE divides the model into several subdomains, each of which is simulated on the number of cores set by the user. The number of subdomains and how they are arranged are also important; the method is explained below, though some experimentation may be needed to determine the optimal arrangement. Guiding principles for domain decomposition when running with MPI are also provided here.
It is recommended that restart files be written as infrequently as reasonably possible, as performance is reduced while EFDC+ writes these files.
Several restrictions should be noted when running a model with MPI. A model will need to be modified for:
Hydraulic Structures with a downstream receiving cell. Both the upstream and the downstream cells must be in the same subdomain
Cell Connectors. Both ends of the connectors must be in the same subdomain
Withdrawal/Return boundaries. Both the withdrawal and the return must be in the same subdomain
Flow Boundary with group redistribution of flows. All cells in the boundary group must be in the same subdomain
Model grids that do not have two extra rows/columns of cells on the North, South, East, and West sides. The MPI model requires a buffer zone of at least two rows and columns around the active model cells. Without this, the model may run but the results will be incorrect. Check this with the https://eemodelingsystem.atlassian.net/wiki/spaces/EK/pages/246580346 option.
Some other model configurations are not yet supported when running with MPI:
Computed Groundwater fluxes (ISGWIE > 0)
Hydraulic Structures with Low Chord bridge option (NQCTYP = 3 or 4 option)
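The paired-cell restrictions above can be checked programmatically before a run. The sketch below is a hypothetical pre-flight check, not part of EE or EFDC+: it assumes a simple 1 x N decomposition that splits the J (column) index, and the cell pairs are illustrative rather than read from an actual model.

```python
# Hypothetical pre-flight check for the MPI paired-cell restrictions.
# Assumes a 1 x N decomposition cut along the J (column) direction;
# j_splits holds the cumulative end column of each subdomain.

def subdomain_of(i, j, j_splits):
    """Return the subdomain index that owns cell (i, j)."""
    for k, j_end in enumerate(j_splits):
        if j < j_end:
            return k
    raise ValueError("cell lies outside the decomposed domain")

def pairs_ok(pairs, j_splits):
    """True if every paired boundary (hydraulic structure and its receiving
    cell, cell connector ends, withdrawal/return cells, grouped flow cells)
    has both cells in the same subdomain."""
    return all(
        subdomain_of(*a, j_splits) == subdomain_of(*b, j_splits)
        for a, b in pairs
    )

# Example: 89 columns split 55 + 34 (cuts at columns 55 and 89).
j_splits = [55, 89]
pairs = [((10, 20), (10, 21)),   # both cells in subdomain 0 -> OK
         ((5, 54), (5, 56))]     # straddles the cut -> violates the rule
print(pairs_ok(pairs, j_splits))  # False: the second pair crosses subdomains
```

If a check like this fails, either move the cut or rearrange the subdomains so each pair falls entirely within one subdomain.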
Once the model cell blocks have been arranged, check the MPI Domain Decomposition box as shown in Figure 1. For this option, the model will be run using MPI and the user should set the number of cores to be used as they would with OMP. The total number of cores available (based on the user’s computer hardware) is shown under Available CPU Cores.
Set the # OMP Cores Used based on the machine hardware, then enter the Total # Subdomains, which is the number of subdomains into which the overall model domain will be divided. In the example shown in Figure 1 below, two subdomains have been set. The Total # Cores Used is updated automatically, calculated as # OMP Cores Used multiplied by Total # Subdomains.
Figure 1 EFDC+ Run Options with MPI-Subdomain defined (1).
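The core count EE reports is simple multiplication, as a minimal sketch shows (the variable names are illustrative, not actual EE settings; the hardware limit of 16 cores is an assumed example):

```python
# Total # Cores Used = # OMP Cores Used x Total # Subdomains,
# as displayed on the EE Run Options dialog (names are illustrative).
omp_cores_used = 4        # OMP threads per MPI subdomain (set by the user)
total_subdomains = 2      # number of MPI subdomains, as in Figure 1
total_cores_used = omp_cores_used * total_subdomains

available_cpu_cores = 16  # assumed example of the hardware limit EE shows
assert total_cores_used <= available_cpu_cores, "more cores than available"
print(total_cores_used)   # 8
```

Keeping the product at or below the Available CPU Cores shown by EE avoids oversubscribing the machine.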
EE can automatically divide the model into the specified number of subdomains using the Run Automatic Domain Decomposition button. Cells are assigned so that each subdomain contains roughly the same number of active cells. Selecting this option runs an algorithm that divides the whole domain into the number of subdomains defined above, e.g., six subdomains from a 1 x 6 arrangement. Initially, EE always sets # Subdomains in I Direction to one. The user can change the arrangement from 1 x 6 to 6 x 1, or some other combination, using the up and down arrows next to the text box.
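The idea of balancing subdomains by active-cell count can be sketched as follows. This is an illustrative greedy 1 x N split along the J direction under that assumption, not EE's actual (unpublished here) algorithm:

```python
# Illustrative sketch: choose J-direction cut points so each subdomain
# receives roughly the same number of active cells (greedy accumulation).
# This is NOT the actual EE algorithm, only the balancing idea.

def split_points(active_per_column, n_subdomains):
    """Return cumulative end columns for each subdomain band."""
    total = sum(active_per_column)
    target = total / n_subdomains
    cuts, running, next_target = [], 0, target
    for j, count in enumerate(active_per_column, start=1):
        running += count
        # Cut once the running active-cell count reaches the next target.
        if running >= next_target and len(cuts) < n_subdomains - 1:
            cuts.append(j)
            next_target += target
    cuts.append(len(active_per_column))  # last band ends at the final column
    return cuts

# Example: 10 columns with uneven active-cell counts, split into 2 subdomains.
active = [5, 5, 5, 20, 20, 5, 5, 5, 5, 5]
print(split_points(active, 2))  # [5, 10]: columns 1-5, then 6-10
```

Note how the cut lands after column 5 rather than at the geometric midpoint, because the active cells are concentrated in columns 4 and 5.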
The user can also set the # of Subdomains in I Direction and # of Subdomains in J Direction manually by entering the numbers in the text boxes (e.g., 2 and 2) as shown in Figure 2.
Figure 2 EFDC+ Run Options with MPI-Subdomain defined (2).
The number of cells contained in each subdomain is based on the number of grid rows and columns in the model domain. Figure 3 shows an example of a model domain with 46 rows and 89 columns divided into four subdomains. In this case, EE requires the row counts to sum to 46 and the column counts to sum to 89. For example, the user can divide the 46 rows into 24 + 22 and the 89 columns into 55 + 34.
Figure 3 EFDC+ Run Options with MPI-Subdomain defined (3).
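The consistency requirement on a manual split, using the 46 x 89 grid from the Figure 3 example, can be verified with a few lines (the 2 x 2 block layout mirrors that example):

```python
# The manual split must partition the full grid: row counts sum to 46
# and column counts sum to 89 for the Figure 3 example.
rows_total, cols_total = 46, 89
row_split = [24, 22]   # rows per subdomain band
col_split = [55, 34]   # columns per subdomain band

assert sum(row_split) == rows_total, "row counts must sum to the grid rows"
assert sum(col_split) == cols_total, "column counts must sum to the grid columns"

# Grid cells in each subdomain block of the 2 x 2 arrangement:
blocks = [(r, c, r * c) for r in row_split for c in col_split]
for r, c, n in blocks:
    print(f"{r} rows x {c} cols = {n} cells")
```

Printing the block sizes (1320, 816, 1210, and 748 grid cells here) gives a quick feel for how evenly the manual split divides the domain; note these are grid cells, not active cells.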
To visualize the MPI subdomains, click the Show Cell Map button on the EE main menu. Figure 4 shows an example of four MPI subdomains.
After the settings described above have been completed, click the Run EFDC+ button to run the model with MPI, as shown in Figure 5.
Figure 4 MPI Subdomain Visualization by Cell Map.
Figure 5 Run menu sample utilizing the domain decomposition approach with MPI.