pelicun.tools.regional_sim
Temporary solution that provides regional simulation capability to Pelicun.
Functions
- format_elapsed_time: Format elapsed time from a start timestamp to the current time as hh:mm:ss.
- process_and_save_chunk: Process a single chunk of buildings and save results to temporary compressed CSV files.
- process_buildings_chunk: Process a chunk of buildings through the complete regional simulation pipeline.
- regional_sim: Perform a regional-scale disaster impact simulation.
- tqdm_joblib: Context manager to patch joblib to report progress into a tqdm progress bar.
- unique_list: Return unique values in a pandas Series as a comma-separated string.
- pelicun.tools.regional_sim.format_elapsed_time(start_time: float) → str
Format elapsed time from a start timestamp to the current time as hh:mm:ss.
- Parameters:
- start_time : float
Start time as a float timestamp (from time.time())
- Returns:
- str
Formatted elapsed time string in hh:mm:ss format
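For illustration, a minimal usage sketch; the sleep stands in for an arbitrary long-running task:

    import time

    from pelicun.tools.regional_sim import format_elapsed_time

    start_time = time.time()  # record when the long-running task begins
    time.sleep(2)             # stand-in for the actual work
    print(format_elapsed_time(start_time))  # prints something like '00:00:02'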
- pelicun.tools.regional_sim.process_and_save_chunk(i: int, chunk: DataFrame, temp_dir: str, grid_points: DataFrame, grid_data: DataFrame, sample_size_demand: int, sample_size_damage: int, dl_method: str) → None
Process a single chunk of buildings and save results to temporary compressed CSV files.
This function serves as a wrapper around process_buildings_chunk that handles the file I/O operations for parallel processing. It processes a chunk of buildings through the complete simulation pipeline and saves the results to temporary files for later aggregation.
- Parameters:
- i : int
Chunk index number used for naming output files
- chunk : pd.DataFrame
DataFrame containing building inventory data for this specific chunk
- temp_dir : str
Path to temporary directory where results will be saved
- grid_points : pd.DataFrame
DataFrame with grid point coordinates (Longitude, Latitude)
- grid_data : pd.DataFrame
DataFrame with intensity measure data for each grid point
- sample_size_demand : int
Number of demand realizations available
- sample_size_damage : int
Number of damage realizations to generate
- dl_method : str
Damage and loss methodology
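For illustration, a hedged sketch of dispatching chunks in parallel with joblib; the input file names, chunk count, sample sizes, and dl_method string are placeholder assumptions, not values prescribed by the tool:

    import tempfile

    import numpy as np
    import pandas as pd
    from joblib import Parallel, delayed

    from pelicun.tools.regional_sim import process_and_save_chunk

    # placeholder inputs; column layouts must match the parameter descriptions above
    bldg_df = pd.read_csv('buildings.csv')
    grid_points = pd.read_csv('grid_points.csv')
    grid_data = pd.read_csv('grid_data.csv')

    # split the inventory into a fixed number of chunks (count is arbitrary)
    chunks = np.array_split(bldg_df, 8)

    with tempfile.TemporaryDirectory() as temp_dir:
        # each call writes its chunk's results to compressed CSVs named by index i
        Parallel(n_jobs=-1)(
            delayed(process_and_save_chunk)(
                i,
                chunk,
                temp_dir,
                grid_points,
                grid_data,
                sample_size_demand=100,   # assumed sample sizes
                sample_size_damage=1000,
                dl_method='Hazus Earthquake - Buildings',  # placeholder method name
            )
            for i, chunk in enumerate(chunks)
        )
        # aggregation of the temporary files would happen here, inside the context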
- pelicun.tools.regional_sim.process_buildings_chunk(bldg_df_chunk: DataFrame, grid_points: DataFrame, grid_data: DataFrame, sample_size_demand: int, sample_size_damage: int, dl_method: str) → tuple[DataFrame, DataFrame, DataFrame, DataFrame]
Process a chunk of buildings through the complete regional simulation pipeline.
This function performs event-to-building mapping, building-to-archetype mapping, damage calculation, and loss calculation for a subset of buildings. It uses nearest neighbor regression to map grid-based intensity measures to building locations, then applies Pelicun assessment methods to calculate damage and losses.
- Parameters:
- bldg_df_chunk : pd.DataFrame
DataFrame containing building inventory data for this chunk; must include Longitude, Latitude, and building characteristics
- grid_points : pd.DataFrame
DataFrame with grid point coordinates (Longitude, Latitude)
- grid_data : pd.DataFrame
DataFrame with intensity measure data for each grid point
- sample_size_demand : int
Number of demand realizations available
- sample_size_damage : int
Number of damage realizations to generate
- dl_method : str
Damage and loss methodology
- Returns:
- tuple
- demand_sample : pd.DataFrame
DataFrame with demand sample results (intensity measures)
- damage_df : pd.DataFrame
DataFrame with damage state results for each component
- repair_costs : pd.DataFrame
DataFrame with repair cost estimates
- repair_times : pd.DataFrame
DataFrame with repair time estimates
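For illustration, a hedged sketch of calling the function directly and unpacking its four outputs; the input files and argument values are placeholder assumptions:

    import pandas as pd

    from pelicun.tools.regional_sim import process_buildings_chunk

    # placeholder inputs; column layouts must match the parameter descriptions above
    bldg_df_chunk = pd.read_csv('buildings_chunk.csv')
    grid_points = pd.read_csv('grid_points.csv')
    grid_data = pd.read_csv('grid_data.csv')

    demand_sample, damage_df, repair_costs, repair_times = process_buildings_chunk(
        bldg_df_chunk,
        grid_points,
        grid_data,
        sample_size_demand=100,   # assumed sample sizes
        sample_size_damage=1000,
        dl_method='Hazus Earthquake - Buildings',  # placeholder method name
    )

    print(repair_costs.describe())  # summarize the repair cost estimates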
- pelicun.tools.regional_sim.regional_sim(config_file: str, num_cores: int | None = None) → None
Perform a regional-scale disaster impact simulation.
This function orchestrates the complete regional simulation workflow:
1. Loading earthquake event data from gridded intensity measure files
2. Loading building inventory data
3. Mapping event intensity measures to building locations using nearest neighbor regression
4. Mapping buildings to damage/loss archetypes using Pelicun auto-population
5. Calculating damage states for all buildings
6. Calculating repair costs and times
7. Aggregating and saving results to CSV files
The simulation is performed in parallel chunks to handle large building inventories efficiently. Results are saved as compressed CSV files for demand samples, damage states, repair costs, and repair times.
- Parameters:
- config_file : str
Path to JSON configuration file containing simulation parameters, file paths, and analysis settings (inputRWHALE.json from SimCenter’s R2D Tool)
- num_cores : int, optional
Number of CPU cores to use for parallel processing. If None, uses all available cores minus one
Notes
- Output Files:
demand_sample.csv: Intensity measure realizations for all buildings
damage_sample.csv: Damage state realizations for all building components
repair_cost_sample.csv: Repair cost estimates for all buildings
repair_time_sample.csv: Repair time estimates for all buildings
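For illustration, a minimal usage sketch; the core count is arbitrary and the configuration file name follows the parameter description above:

    from pelicun.tools.regional_sim import regional_sim

    # run the full regional workflow from an R2D-style configuration file;
    # the four output CSV files listed above are written when the run completes
    regional_sim('inputRWHALE.json', num_cores=4)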
- pelicun.tools.regional_sim.tqdm_joblib(tqdm_object: tqdm) → Generator[None, None, None]
Context manager to patch joblib to report progress into a tqdm progress bar.
This function temporarily replaces joblib’s BatchCompletionCallBack to update the provided tqdm progress bar with batch completion information during parallel processing.
- Parameters:
- tqdm_object : tqdm
tqdm progress bar object to update with progress information
- Yields:
- None
Context manager yields control to the calling code
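For illustration, the typical usage pattern for a context manager of this kind; the workload here is an arbitrary stand-in:

    from math import sqrt

    from joblib import Parallel, delayed
    from tqdm import tqdm

    from pelicun.tools.regional_sim import tqdm_joblib

    # the progress bar advances as joblib batches complete
    with tqdm_joblib(tqdm(total=100, desc='Chunks')):
        results = Parallel(n_jobs=4)(delayed(sqrt)(i) for i in range(100))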
- pelicun.tools.regional_sim.unique_list(x: Series) → str
Return unique values in a pandas Series as a comma-separated string.
- Parameters:
- x : pd.Series
pandas Series containing values to extract unique elements from
- Returns:
- str
Comma-separated string of unique values, or the single value if only one unique value exists
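For illustration, a minimal sketch; the exact separator in the returned string is an assumption:

    import pandas as pd

    from pelicun.tools.regional_sim import unique_list

    s = pd.Series(['W1', 'S1', 'W1'])
    print(unique_list(s))  # e.g. 'W1, S1' (separator formatting is assumed)

    # typical use: collapse duplicates when aggregating grouped results
    df = pd.DataFrame({'block': [1, 1, 2], 'type': ['W1', 'S1', 'W1']})
    print(df.groupby('block')['type'].agg(unique_list))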