pelicun.tools.regional_sim
Performs regional-scale disaster impact simulations on building inventories.
Functions
- Calculate losses using a 1-to-1 mapping approach.
- Calculate losses using the Hazus Earthquake methodology.
- format_elapsed_time: Format elapsed time from a start timestamp to current time as hh:mm:ss.
- parse_id_filter: Parse a filter string into a list of unique, sorted integer IDs.
- process_and_save_chunk: Process a single chunk of buildings and save results to temporary compressed CSV files.
- process_buildings_chunk: Process a chunk of buildings through the complete regional simulation pipeline.
- regional_sim: Perform a regional-scale disaster impact simulation.
- tqdm_joblib: Context manager to patch joblib to report progress into a tqdm progress bar.
- unique_list: Return unique values in a pandas Series as a comma-separated string.
- pelicun.tools.regional_sim.format_elapsed_time(start_time: float) → str
Format elapsed time from a start timestamp to current time as hh:mm:ss.
- Parameters:
- start_time : float
Start time as a float timestamp (from time.time())
- Returns:
- str
Formatted elapsed time string in hh:mm:ss format
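For example, a minimal usage sketch (the two-second sleep is just a stand-in for real work):

    import time

    from pelicun.tools.regional_sim import format_elapsed_time

    start = time.time()
    time.sleep(2)  # stand-in for actual work
    print(format_elapsed_time(start))  # e.g. "00:00:02"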
- pelicun.tools.regional_sim.parse_id_filter(filter_str: str) → list[int]
Parse a filter string into a list of unique, sorted integer IDs.
The filter string can contain comma-separated integers and ranges (e.g., “1, 3-5, 8”).
- Parameters:
- filter_str : str
The filter string to parse.
- Returns:
- list[int]
A sorted list of unique integer IDs.
- Raises:
- ValueError
If the filter string contains non-numeric parts or invalid ranges.
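A minimal sketch based on the filter syntax described above:

    from pelicun.tools.regional_sim import parse_id_filter

    # Ranges expand to every integer they cover; duplicates are dropped and the
    # result is sorted.
    print(parse_id_filter("1, 3-5, 8"))  # [1, 3, 4, 5, 8]
    print(parse_id_filter("8, 1, 1"))    # [1, 8]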
- pelicun.tools.regional_sim.process_and_save_chunk(i: int, chunk: DataFrame, temp_dir: str, grid_points: DataFrame, grid_data: DataFrame, n_neighbors: int, sample_size_demand: int, sample_size_damage: int, dl_method: str, im_types: dict[str, str]) → None
Process a single chunk of buildings and save results to temporary compressed CSV files.
This function serves as a wrapper around process_buildings_chunk that handles the file I/O operations for parallel processing. It processes a chunk of buildings through the complete simulation pipeline and saves the results to temporary files for later aggregation.
- Parameters:
- i : int
Chunk index number used for naming output files
- chunk : pd.DataFrame
DataFrame containing building inventory data for this specific chunk
- temp_dir : str
Path to temporary directory where results will be saved
- grid_points : pd.DataFrame
DataFrame with grid point coordinates (Longitude, Latitude)
- grid_data : pd.DataFrame
DataFrame with intensity measure data for each grid point
- n_neighbors : int
Number of nearest neighbors to use for mapping event intensity from grid points to buildings
- sample_size_demand : int
Number of demand realizations available
- sample_size_damage : int
Number of damage realizations to generate
- dl_method : str
Damage and loss methodology
- im_types : dict[str, str]
Dictionary of intensity measure types and their units from the config file
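A hedged sketch of how chunks might be dispatched in parallel with joblib. The inventory chunks and grid DataFrames (bldg_chunks, grid_points, grid_data) are assumed to have been prepared elsewhere (normally regional_sim handles this), and the dl_method string and im_types mapping are illustrative placeholders, not values prescribed by this module:

    from joblib import Parallel, delayed

    from pelicun.tools.regional_sim import process_and_save_chunk

    # bldg_chunks (a list of DataFrames), grid_points, and grid_data are assumed
    # to be prepared elsewhere; the DL method and IM type/unit pairs are illustrative.
    Parallel(n_jobs=4)(
        delayed(process_and_save_chunk)(
            i,
            chunk,
            "temp_results",  # temp_dir where the per-chunk compressed CSV files land
            grid_points,
            grid_data,
            n_neighbors=4,
            sample_size_demand=100,
            sample_size_damage=100,
            dl_method="Hazus Earthquake - Buildings",
            im_types={"PGA": "g"},
        )
        for i, chunk in enumerate(bldg_chunks)
    )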
- pelicun.tools.regional_sim.process_buildings_chunk(bldg_df_chunk: DataFrame, grid_points: DataFrame, grid_data: DataFrame, n_neighbors: int, sample_size_demand: int, sample_size_damage: int, dl_method: str, im_types: dict[str, str]) → tuple[DataFrame, DataFrame, DataFrame, DataFrame]
Process a chunk of buildings through the complete regional simulation pipeline.
This function performs event-to-building mapping, building-to-archetype mapping, damage calculation, and loss calculation for a subset of buildings. It uses nearest neighbor regression to map grid-based intensity measures to building locations, then applies Pelicun assessment methods to calculate damage and losses.
- Parameters:
- bldg_df_chunk : pd.DataFrame
DataFrame containing building inventory data for this chunk; must include Longitude, Latitude, and building characteristics
- grid_points : pd.DataFrame
DataFrame with grid point coordinates (Longitude, Latitude)
- grid_data : pd.DataFrame
DataFrame with intensity measure data for each grid point
- n_neighbors : int
Number of nearest neighbors to use for mapping intensity measures to building locations
- sample_size_demand : int
Number of demand realizations available
- sample_size_damage : int
Number of damage realizations to generate
- dl_method : str
Damage and loss methodology
- im_types : dict[str, str]
Dictionary of intensity measure types and their units from the config file
- Returns:
- tuple
- demand_sample : pd.DataFrame
DataFrame with demand sample results (intensity measures)
- damage_df : pd.DataFrame
DataFrame with damage state results for each component
- repair_costs : pd.DataFrame
DataFrame with repair cost estimates
- repair_times : pd.DataFrame
DataFrame with repair time estimates
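A sketch of a direct, serial call on a single pre-loaded chunk, unpacking the four returned DataFrames. The input DataFrames are assumed to be loaded already, and the keyword values are illustrative only:

    from pelicun.tools.regional_sim import process_buildings_chunk

    # bldg_df_chunk, grid_points, and grid_data are assumed to be loaded from the
    # building inventory and gridded event files; keyword values are illustrative.
    demand_sample, damage_df, repair_costs, repair_times = process_buildings_chunk(
        bldg_df_chunk,
        grid_points,
        grid_data,
        n_neighbors=4,
        sample_size_demand=100,
        sample_size_damage=100,
        dl_method="Hazus Earthquake - Buildings",
        im_types={"PGA": "g"},
    )
    print(repair_costs.head())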
- pelicun.tools.regional_sim.regional_sim(config_file: str, num_cores: int | None = None) → None
Perform a regional-scale disaster impact simulation.
This function orchestrates the complete regional simulation workflow, including:
1. Loading hazard event data from gridded intensity measure files
2. Loading building inventory data
3. Mapping event intensity measures to building locations using nearest neighbor regression
4. Mapping buildings to damage/loss archetypes using Pelicun auto-population
5. Calculating damage states for all buildings
6. Calculating repair costs and times
7. Aggregating and saving results to CSV files
The simulation is performed in parallel chunks to handle large building inventories efficiently. Results are saved as compressed CSV files for demand samples, damage states, repair costs, and repair times.
- Parameters:
- config_file : str
Path to JSON configuration file containing simulation parameters, file paths, and analysis settings (inputRWHALE.json from SimCenter’s R2D Tool)
- num_cores : int, optional
Number of CPU cores to use for parallel processing. If None, uses all available cores minus one
- Raises:
- ValueError
If the building ID filter specified in the config file does not match any building IDs in the inventory. If required intensity measure types specified in the config are not found in the grid data files.
Notes
- Output Files:
demand_sample.csv: Intensity measure realizations for all buildings
damage_sample.csv: Damage state realizations for all building components
repair_cost_sample.csv: Repair cost estimates for all buildings
repair_time_sample.csv: Repair time estimates for all buildings
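A minimal invocation sketch; the file name follows the inputRWHALE.json convention mentioned above, but any valid configuration path works:

    from pelicun.tools.regional_sim import regional_sim

    # With num_cores left as None, the simulation uses all available cores minus one.
    regional_sim("inputRWHALE.json")

    # Or pin the number of worker processes explicitly:
    regional_sim("inputRWHALE.json", num_cores=4)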
- pelicun.tools.regional_sim.tqdm_joblib(tqdm_object: tqdm) → contextlib.Generator[None, None, None]
Context manager to patch joblib to report progress into a tqdm progress bar.
This function temporarily replaces joblib’s BatchCompletionCallBack to update the provided tqdm progress bar with batch completion information during parallel processing.
- Parameters:
- tqdm_object : tqdm
tqdm progress bar object to update with progress information
- Yields:
- None
Context manager yields control to the calling code
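A usage sketch of the pattern this context manager enables; the square function is just a stand-in for real per-task work:

    from joblib import Parallel, delayed
    from tqdm import tqdm

    from pelicun.tools.regional_sim import tqdm_joblib

    def square(x: int) -> int:
        return x * x

    # While the with-block is active, each completed joblib batch advances the bar.
    with tqdm_joblib(tqdm(total=100, desc="Chunks")):
        results = Parallel(n_jobs=2)(delayed(square)(i) for i in range(100))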
- pelicun.tools.regional_sim.unique_list(x: Series) → str
Return unique values in a pandas Series as a comma-separated string.
- Parameters:
- x : pd.Series
pandas Series containing values to extract unique elements from
- Returns:
- str
Comma-separated string of unique values, or single value if only one unique value exists
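This helper suits pandas aggregation, for example collapsing repeated rows per building; the column names below are purely illustrative:

    import pandas as pd

    from pelicun.tools.regional_sim import unique_list

    df = pd.DataFrame(
        {
            "BuildingID": [1, 1, 2, 2],
            "Archetype": ["W1", "W1", "S2", "C1"],
        }
    )

    # One row per building; distinct archetypes are joined into a single string.
    print(df.groupby("BuildingID")["Archetype"].agg(unique_list))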