8.1.1. pelicun.assessment
Classes and methods that control the performance assessment.
Classes
- Assessment: Assessment class.
- AssessmentBase: Base class for Assessment objects.
- DLCalculationAssessment: Base class for the assessment objects used in DL_calculation.py.
- TimeBasedAssessment: Time-based assessment.
- class pelicun.assessment.Assessment(config_options: dict[str, Any] | None = None)[source]
Assessment class.
Provides methods implementing a scenario-based assessment.
- aggregate_loss(replacement_configuration: tuple[RandomVariableRegistry, dict[str, float]] | None = None, loss_combination: dict | None = None) tuple[DataFrame, DataFrame] [source]
Aggregate losses.
- Parameters:
- replacement_configuration: tuple, optional
Tuple containing a RandomVariableRegistry and a dictionary. The RandomVariableRegistry defines the building replacement consequence RVs for the active decision variables. The dictionary defines exceedance thresholds. If the aggregated value for a decision variable (conditioned on no replacement) exceeds the threshold, replacement is triggered. This can happen for multiple decision variables in the same realization. The consequence keyword replacement is reserved to represent exclusive triggering of the replacement consequences: other consequences are ignored for those realizations where replacement is triggered. When set to None, replacement is still treated as an exclusive consequence (other consequences are set to zero when replacement is nonzero), but it is not additionally triggered by the exceedance of any thresholds. The aggregated loss sample contains an additional column indicating, for each realization, whether replacement was already present or was triggered by a threshold exceedance.
- loss_combination: dict, optional
Dictionary defining how losses for specific components should be aggregated for a given decision variable. It has the following structure: {dv: {(c1, c2): arr, …}, …}, where dv is some decision variable, (c1, c2) is a tuple defining a component pair, arr is a NxN numpy array defining a combination table, and … means that more key-value pairs with the same schema can exist in the dictionaries. The loss sample is expected to contain columns that include both c1 and c2 listed as the component. The combination is applied to all pairs of columns where the components are c1 and c2, and all of the rest of the multiindex levels match (loc, dir, uid). This means, for example, that when combining wind and flood losses, the asset model should contain both a wind and a flood component defined at the same location-direction. arr can also be an M-dimensional numpy array where each dimension has length N (NxNx…xN). This structure allows for the loss combination of M components. In this case the (c1, c2) tuple should contain M elements instead of two.
- Returns:
- tuple
DataFrame with the aggregated loss of each realization, and a second boolean DataFrame indicating which DV thresholds were exceeded in each realization, triggering replacement. If no thresholds are specified, it contains only False values.
Notes
Regardless of the value of the arguments, this method does not alter the state of the loss model, i.e., it does not modify the values of the .sample attributes.
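The following is a minimal sketch of calling aggregate_loss with its defaults. It assumes the demand, asset, damage, and loss models have already been populated (for example via calculate_damage() and calculate_loss() below); the ‘Verbose’ option key is shown only as an example of a configuration option.

```python
from pelicun.assessment import Assessment

# Create an assessment; the configuration dictionary is optional.
asmt = Assessment(config_options={'Verbose': False})

# ... demand, asset, damage, and loss calculations would run here,
# e.g., asmt.calculate_damage(...) followed by asmt.calculate_loss(...) ...

# Aggregate losses with the default replacement handling
# (no additional exceedance thresholds, no loss combination).
agg_sample, exceedance_flags = asmt.aggregate_loss()

# agg_sample: aggregated loss per realization
# exceedance_flags: boolean DataFrame of threshold exceedances (all False here)
```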
- calculate_damage(num_stories: int, demand_config: dict, demand_data_source: str | dict, cmp_data_source: str | dict[str, DataFrame], damage_data_paths: list[str | DataFrame], dmg_process: dict | None = None, scaling_specification: dict | None = None, residual_drift_configuration: dict | None = None, collapse_fragility_configuration: dict | None = None, block_batch_size: int = 1000) None [source]
Calculate damage.
- Parameters:
- num_stories: int
Number of stories of the asset. Applicable to buildings.
- demand_config: dict
A dictionary containing configuration options for the sample generation. Key options include:
- ‘SampleSize’: The number of samples to generate.
- ‘PreserveRawOrder’: Boolean indicating whether to preserve the order of the raw data. Defaults to False.
- ‘DemandCloning’: Specifies if and how demand cloning should be applied. Can be a boolean or a detailed configuration.
- demand_data_source: string or dict
If string, the demand_data_source is a file prefix (<prefix> in the following description) that identifies the following files: <prefix>_marginals.csv, <prefix>_empirical.csv, <prefix>_correlation.csv. If dict, the demand data source is a dictionary with the following optional keys: ‘marginals’, ‘empirical’, and ‘correlation’. The value under each key shall be a DataFrame.
- cmp_data_source: str or dict
The source from where to load the component model data. If it’s a string, it should be the prefix for three files: one for marginal distributions (<prefix>_marginals.csv), one for empirical data (<prefix>_empirical.csv), and one for correlation data (<prefix>_correlation.csv). If it’s a dictionary, it should have keys ‘marginals’, ‘empirical’, and ‘correlation’, with each key associated with a DataFrame containing the corresponding data.
- damage_data_paths: list of (string | DataFrame)
List of paths to data or files with damage model information. Default XY datasets can be accessed as PelicunDefault/XY. Order matters: parameters defined in earlier elements of the list take precedence over the same parameters in later data paths, so place the default datasets at the end of the list.
- dmg_process: dict, optional
Allows simulating damage processes, where damage to some component can alter the damage state of other components.
- scaling_specification: dict, optional
A dictionary defining the shift in median. Example: {‘CMP-1-1’: ‘*1.2’, ‘CMP-1-2’: ‘/1.4’}. The keys are individual components that should be present in the `capacity_sample`. The values should be strings containing an operation followed by the value formatted as a float. The operation can be ‘+’ for addition, ‘-’ for subtraction, ‘*’ for multiplication, and ‘/’ for division.
- residual_drift_configuration: dict
Dictionary containing the following key-value pairs:
- params: dict
  A dictionary containing parameters required for the estimation method, such as ‘yield_drift’, which is the drift at which yielding is expected to occur.
- method: str, optional
  The method used to estimate the RID values. Currently, only ‘FEMA P58’ is implemented. Defaults to ‘FEMA P58’.
- collapse_fragility_configuration: dict
Dictionary containing the following key-value pairs:
- label: str
  Label to use to extend the MultiIndex of the demand sample.
- value: float
  Value to add to the rows of the additional column.
- unit: str
  Unit that corresponds to the additional column.
- location: str, optional
  Optional location, defaults to 0.
- direction: str, optional
  Optional direction, defaults to 1.
- block_batch_size: int
Maximum number of components in each batch.
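A hedged sketch of a calculate_damage call follows. The file prefixes, the default dataset name under PelicunDefault/, and the component ID are placeholders chosen only to illustrate the documented argument shapes; they are assumptions, not values required by the API.

```python
from pelicun.assessment import Assessment

asmt = Assessment()

asmt.calculate_damage(
    num_stories=4,
    demand_config={'SampleSize': 1000},        # see the key options listed above
    demand_data_source='input/demand',         # expects input/demand_marginals.csv, etc.
    cmp_data_source='input/cmp',               # expects input/cmp_marginals.csv, etc.
    damage_data_paths=[
        'input/custom_damage_model.csv',              # placeholder; listed first, takes precedence
        'PelicunDefault/damage_DB_FEMA_P58_2nd.csv',  # assumed default dataset name
    ],
    scaling_specification={'CMP-1-1': '*1.2'},  # shift the median capacity of CMP-1-1 up by 20%
)
```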
- calculate_loss(decision_variables: tuple[str, ...], loss_model_data_paths: list[str | DataFrame], loss_map_path: str | DataFrame | None = None, loss_map_policy: str | None = None) None [source]
Calculate loss.
- Parameters:
- decision_variables: tuple
Defines the decision variables to be included in the loss calculations. Defaults to those supported, but fewer can be used if desired. When fewer are used, the loss parameters for those not used will not be required.
- loss_model_data_paths: list of (string | DataFrame)
List of paths to data or files with loss model information. Default XY datasets can be accessed as PelicunDefault/XY. Order matters: parameters defined in earlier elements of the list take precedence over the same parameters in later data paths, so place the default datasets at the end of the list.
- loss_map_path: str or pd.DataFrame or None
Path to a csv file or DataFrame object that maps component IDs to their loss parameter definitions.
- loss_map_policy: str or None
If None, the loss map is not modified. If set to ‘fill’, each component ID that is present in the asset model but not in the loss map is mapped to itself, except for excessiveRID, which is excluded. If set to ‘fill_all’, each component ID that is present in the asset model but not in the loss map is mapped to itself without exceptions.
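A hedged sketch of calculate_loss, assuming the damage calculation has already run. The decision variable names, the custom file path, and the default dataset name are assumptions used for illustration.

```python
from pelicun.assessment import Assessment

asmt = Assessment()
# ... demand, asset, and damage calculations would run first ...

asmt.calculate_loss(
    decision_variables=('Cost', 'Time'),       # assumed subset of supported DVs
    loss_model_data_paths=[
        'input/custom_loss_model.csv',                     # placeholder path
        'PelicunDefault/loss_repair_DB_FEMA_P58_2nd.csv',  # assumed default dataset name
    ],
    loss_map_policy='fill',  # map unmapped component IDs to themselves, excluding excessiveRID
)
```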
- class pelicun.assessment.AssessmentBase(config_options: dict[str, Any] | None = None)[source]
Base class for Assessment objects.
Assessment objects manage the models, data, and calculations in pelicun.
- property bldg_repair: LossModel
Exists for backwards compatibility.
- Returns:
- model.LossModel
The loss model.
- calc_unit_scale_factor(unit: str) float [source]
Determine unit scale factor.
Determines the scale factor from input unit to the corresponding base unit.
- Parameters:
- unit: str
Either a unit name, or a quantity and a unit name separated by a space. For example: ‘ft’ or ‘100 ft’.
- Returns:
- float
Scale factor that converts values from the given unit to the base unit
- Raises:
- KeyError
When an invalid unit is specified
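For example, the unit string may or may not include a quantity. A minimal sketch, assuming the default unit definitions shipped with pelicun (the numeric results depend on the configured base units):

```python
from pelicun.assessment import Assessment  # inherits this helper from AssessmentBase

asmt = Assessment()

f_unit = asmt.calc_unit_scale_factor('ft')      # scale factor for one foot
f_qty = asmt.calc_unit_scale_factor('100 ft')   # quantity and unit, separated by a space
```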
- get_default_data(data_name: str) DataFrame [source]
Load a default data file.
Loads a default data file by name and returns it. This method is specifically designed to access predefined CSV files from a structured directory path related to the SimCenter fragility library.
- Parameters:
- data_name: str
The name of the CSV file to be loaded, without the ‘.csv’ extension. This name is used to construct the full path to the file.
- Returns:
- pd.DataFrame
The DataFrame containing the data loaded from the specified CSV file.
- get_default_metadata(data_name: str) dict [source]
Load a default metadata file and pass it to the user.
- Parameters:
- data_name: string
Name of the JSON file to be loaded
- Returns:
- dict
Default metadata
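A brief sketch of loading a bundled dataset together with its metadata. The dataset name used here is an assumption and may differ between pelicun versions.

```python
from pelicun.assessment import Assessment  # inherits these helpers from AssessmentBase

asmt = Assessment()

# 'damage_DB_FEMA_P58_2nd' is an assumed dataset name used for illustration.
damage_db = asmt.get_default_data('damage_DB_FEMA_P58_2nd')        # returns a DataFrame
damage_meta = asmt.get_default_metadata('damage_DB_FEMA_P58_2nd')  # returns a dict
```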
- property repair: LossModel
Exists for backwards compatibility.
- Returns:
- RepairModel_DS
The damage state-driven component loss model.
- scale_factor(unit: str | None) float [source]
Get scale factor of given unit.
Returns the scale factor of a given unit. If the unit is unknown it raises an error. If the unit is None it returns 1.00.
- Parameters:
- unit: str
A unit name.
- Returns:
- float
Scale factor
- Raises:
- ValueError
If the unit is unknown.
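Unlike calc_unit_scale_factor, scale_factor accepts a plain unit name or None. A minimal sketch, assuming the default unit definitions:

```python
from pelicun.assessment import Assessment  # inherits this helper from AssessmentBase

asmt = Assessment()

asmt.scale_factor(None)   # returns 1.00 when no unit is given
asmt.scale_factor('ft')   # scale factor of 'ft' relative to the base length unit
# asmt.scale_factor('not_a_unit') would raise ValueError
```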
- class pelicun.assessment.DLCalculationAssessment(config_options: dict[str, Any] | None = None)[source]
Base class for the assessment objects used in DL_calculation.py.
- calculate_asset(num_stories: int, component_assignment_file: str | None, collapse_fragility_demand_type: str | None, component_sample_file: str | None, *, add_irreparable_damage_columns: bool) None [source]
Generate the asset model sample.
- Parameters:
- num_stories: int
Number of stories.
- component_assignment_file: str or None
Path to a component assignment file.
- collapse_fragility_demand_type: str or None
Optional demand type for the collapse fragility.
- add_irreparable_damage_columns: bool
Whether to add columns for irreparable damage.
- component_sample_file: str or None
Optional path to an existing component sample file.
- Raises:
- ValueError
With invalid combinations of arguments.
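A hedged sketch of calculate_asset. In DL_calculation.py the demand step (calculate_demand) runs before this; the component assignment path below is a placeholder.

```python
from pelicun.assessment import DLCalculationAssessment

dl_asmt = DLCalculationAssessment()
# ... dl_asmt.calculate_demand(...) would run first ...

dl_asmt.calculate_asset(
    num_stories=4,
    component_assignment_file='input/CMP_QNT.csv',  # placeholder path
    collapse_fragility_demand_type=None,
    component_sample_file=None,
    add_irreparable_damage_columns=True,            # keyword-only argument
)
```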
- calculate_damage(length_unit: str | None, component_database: str, component_database_path: str | None = None, collapse_fragility: dict | None = None, irreparable_damage: dict | None = None, damage_process_approach: str | None = None, damage_process_file_path: str | None = None, custom_model_dir: str | None = None, scaling_specification: dict | None = None, *, is_for_water_network_assessment: bool = False) None [source]
Calculate damage.
- Parameters:
- length_unit: str, optional
Unit of length to be used to add units to the demand data if needed.
- component_database: str
Name of the component database.
- component_database_path: str or None
Optional path to a component database file.
- collapse_fragility: dict or None
Collapse fragility information.
- irreparable_damage: dict or None
Information for irreparable damage.
- damage_process_approach: str or None
Approach for the damage process.
- damage_process_file_path: str or None
Optional path to a damage process file.
- custom_model_dir: str or None
Optional directory for custom models.
- scaling_specification: dict, optional
A dictionary defining the shift in median. Example: {‘CMP-1-1’: ‘1.2’, ‘CMP-1-2’: ‘/1.4’} The keys are individual components that should be present in the `capacity_sample`. The values should be strings containing an operation followed by the value formatted as a float. The operation can be ‘+’ for addition, ‘-’ for subtraction, ‘’ for multiplication, and ‘/’ for division.
- is_for_water_network_assessment: bool
Whether the assessment is for a water network.
- Raises:
- ValueError
With invalid combinations of arguments.
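A hedged sketch of the DLCalculationAssessment damage step. The database and approach labels shown are assumptions for illustration; the demand and asset steps are expected to have run first.

```python
from pelicun.assessment import DLCalculationAssessment

dl_asmt = DLCalculationAssessment()
# ... calculate_demand(...) and calculate_asset(...) would run first ...

dl_asmt.calculate_damage(
    length_unit='ft',
    component_database='FEMA P-58',             # assumed database label
    damage_process_approach='FEMA P-58',        # assumed approach label
    scaling_specification={'CMP-1-1': '*1.2'},  # optional median shift, as described above
)
```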
- calculate_demand(demand_path: Path, collapse_limits: dict[str, float] | None, length_unit: str | None, demand_calibration: dict | None, sample_size: int, demand_cloning: dict | None, residual_drift_inference: dict | None, *, coupled_demands: bool) None [source]
Calculate demands.
- Parameters:
- demand_path: Path
Path to the demand data file.
- collapse_limits: dict[str, float] or None
Optional dictionary with demand types and their respective collapse limits.
- length_unit: str, optional
Unit of length to be used to add units to the demand data if needed.
- demand_calibration: dict or None
Calibration data for the demand model.
- sample_size: int
Number of realizations.
- coupled_demands: bool
Whether to preserve the raw order of the demands.
- demand_cloning: dict or None
Demand cloning configuration.
- residual_drift_inference: dict or None
Information for residual drift inference.
- Raises:
- ValueError
When an unknown residual drift method is specified.
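A hedged sketch of calculate_demand. The file path, demand type, and collapse limit are placeholders (assumptions), not values prescribed by the API.

```python
from pathlib import Path

from pelicun.assessment import DLCalculationAssessment

dl_asmt = DLCalculationAssessment()

dl_asmt.calculate_demand(
    demand_path=Path('input/demands.csv'),  # placeholder path
    collapse_limits={'PID': 0.06},          # assumed demand type and limit
    length_unit='ft',
    demand_calibration=None,
    sample_size=1000,
    demand_cloning=None,
    residual_drift_inference=None,
    coupled_demands=False,                  # keyword-only argument
)
```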
- calculate_loss(loss_map_approach: str, occupancy_type: str, consequence_database: str, consequence_database_path: str | None = None, custom_model_dir: str | None = None, damage_process_approach: str = 'User Defined', replacement_cost_parameters: dict[str, float | str] | None = None, replacement_time_parameters: dict[str, float | str] | None = None, replacement_carbon_parameters: dict[str, float | str] | None = None, replacement_energy_parameters: dict[str, float | str] | None = None, loss_map_path: str | None = None, decision_variables: tuple[str, ...] | None = None) tuple[DataFrame, DataFrame] [source]
Calculate losses.
- Parameters:
- loss_map_approach: str
Approach for the loss map generation. Can be either ‘User Defined’ or ‘Automatic’.
- occupancy_type: str
Occupancy type.
- consequence_database: str
Name of the consequence database.
- consequence_database_path: str or None
Optional path to a consequence database file.
- custom_model_dir: str or None
Optional directory for custom models.
- damage_process_approach: str
Damage process approach. Defaults to User Defined.
- replacement_cost_parameters: dict or None
Parameters for replacement cost.
- replacement_time_parameters: dict or None
Parameters for replacement time.
- replacement_carbon_parameters: dict or None
Parameters for replacement carbon.
- replacement_energy_parameters: dict or None
Parameters for replacement energy.
- loss_map_path: str or None
Optional path to a loss map file.
- decision_variables: tuple[str] or None
Optional decision variables for the assessment.
- Returns:
- tuple
DataFrame with the aggregated loss of each realization, and a second boolean DataFrame indicating which DV thresholds were exceeded in each realization, triggering replacement. If no thresholds are specified, it contains only False values.
- Raises:
- ValueError
When an invalid loss map approach is specified.
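A hedged sketch of the DLCalculationAssessment loss step, assuming the earlier steps have run. The occupancy type, database label, and replacement-cost parameter keys are assumptions used only to illustrate the argument shapes.

```python
from pelicun.assessment import DLCalculationAssessment

dl_asmt = DLCalculationAssessment()
# ... demand, asset, and damage steps would run first ...

agg_sample, exceedance_flags = dl_asmt.calculate_loss(
    loss_map_approach='Automatic',
    occupancy_type='COM1',             # assumed occupancy label
    consequence_database='FEMA P-58',  # assumed database label
    damage_process_approach='FEMA P-58',
    replacement_cost_parameters={      # assumed parameter keys and values
        'Unit': 'USD_2011',
        'Median': 1.0e7,
    },
    decision_variables=('Cost', 'Time'),
)
```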
- load_consequence_info(consequence_database: str, consequence_database_path: str | None = None, custom_model_dir: str | None = None) tuple[DataFrame, list[str]] [source]
Load consequence information for the assessment.
- Parameters:
- consequence_database: str
Name of the consequence database.
- consequence_database_path: str or None
Optional path to a consequence database file.
- custom_model_dir: str or None
Optional directory for custom models.
- Returns:
- tuple[pd.DataFrame, list[str]]
A tuple containing:
- A DataFrame with the consequence data.
- A list of paths to the consequence databases used.
- Raises:
- ValueError
With invalid combinations of arguments.
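A brief sketch of load_consequence_info; the database label is an assumption used for illustration.

```python
from pelicun.assessment import DLCalculationAssessment

dl_asmt = DLCalculationAssessment()

conseq_df, conseq_db_paths = dl_asmt.load_consequence_info(
    consequence_database='FEMA P-58',  # assumed database label
)
```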