STEP 6: Hazard aggregation

This step creates a new folder called HAZARD in the working directory. It then computes the hazard curves at each point of the local finest grid, for different percentiles of the epistemic uncertainty, by combining the output of each simulation from STEP 5 with the scenario rates retrieved in STEP 4.
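As an illustration of the aggregation logic, the sketch below builds hazard curves by summing, at each grid point and each intensity threshold, the rates of all scenarios whose simulated intensity exceeds that threshold, and then takes percentiles across alternative models to represent the epistemic uncertainty. This is a minimal sketch under those assumptions, not the workflow's actual implementation; all names (hazard_curves, intensities, scenario_rates, thresholds) are hypothetical.

    import numpy as np

    def hazard_curves(intensities, scenario_rates, thresholds):
        """Annual rates of exceedance at each grid point.

        intensities    : (n_scenarios, n_points) simulated intensity per point (STEP 5 output)
        scenario_rates : (n_scenarios,) annual scenario rates (STEP 4 output)
        thresholds     : (n_thresholds,) intensity levels defining the hazard curve
        """
        # A scenario contributes its rate wherever its intensity reaches the threshold.
        exceeds = intensities[:, :, None] >= thresholds[None, None, :]
        return np.einsum('s,spt->pt', scenario_rates, exceeds)

    def epistemic_percentiles(curves_per_model, percentiles=(16, 50, 84)):
        """Hazard-curve percentiles across alternative models (epistemic uncertainty).

        curves_per_model : (n_models, n_points, n_thresholds)
        """
        return np.percentile(curves_per_model, percentiles, axis=0)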

To speed up the computation, this step is parallelized over the domain: the grid is decomposed horizontally (along the y direction) into "slices", which are processed in parallel, so that each process handles only the points within one slice. The number of slices (i.e. the number of parallel processes) is computed at runtime by the workflow, but the maximum allowed number must be declared in the JSON input file, as a trade-off between parallelism and manageability. Finally, all the partial results are recombined, as sketched below.
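A minimal sketch of the slice decomposition, assuming a Python multiprocessing implementation; the names (split_into_slices, process_slice, n_y, max_procs) are hypothetical, and the actual workflow may distribute the slices differently (e.g. through a job scheduler).

    import numpy as np
    from multiprocessing import Pool

    def split_into_slices(n_y, max_procs):
        # Use at most the maximum declared in the JSON input,
        # and never more slices than grid rows along y.
        n_slices = min(max_procs, n_y)
        # Index ranges along y, one per slice, of roughly equal size.
        return np.array_split(np.arange(n_y), n_slices)

    def process_slice(y_indices):
        # Placeholder: compute hazard curves only for the rows in this slice.
        return y_indices  # would return the partial hazard curves

    if __name__ == '__main__':
        slices = split_into_slices(n_y=500, max_procs=8)
        with Pool(processes=len(slices)) as pool:
            partial_results = pool.map(process_slice, slices)
        # Finally, the partial results are recombined into the full grid.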

The code is available in both Python and MATLAB; the preferred language can be selected in the JSON input file, depending also on the software available on the cluster in use.
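As a rough illustration, the relevant settings could be read from the JSON input as shown below; the file name (workflow_input.json) and key names (compute_language, max_processes) are hypothetical and must be replaced with the ones actually defined by the workflow.

    import json

    # Hypothetical key names: check the workflow documentation for the real ones.
    with open('workflow_input.json') as f:
        cfg = json.load(f)

    language  = cfg.get('compute_language', 'python')  # 'python' or 'matlab'
    max_procs = cfg.get('max_processes', 8)            # upper bound on parallel slices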