8.1.5.4. pelicun.model.loss_model

Loss model objects and associated methods.

Classes

LossModel(assessment[, decision_variables, ...])

Manages loss information used in assessments.

RepairModel_Base(assessment)

Base class for loss models.

RepairModel_DS(assessment)

Repair consequences for components with damage states.

RepairModel_LF(assessment)

Repair consequences for components with loss functions.

class pelicun.model.loss_model.LossModel(assessment: AssessmentBase, decision_variables: tuple[str, ...] = ('Cost', 'Time'), dv_units: dict[str, str] | None = None)[source]

Manages loss information used in assessments.

Contains a loss model for components with Damage States (DS) and one for components with Loss Functions (LF).

add_loss_map(loss_map_path: str | DataFrame | None = None, loss_map_policy: str | None = None) None[source]

Add a loss map to the loss model.

A loss map defines which loss parameter definition should be used for each component ID in the asset model.

Parameters:
loss_map_path: str or pd.DataFrame or None

Path to a CSV file or a DataFrame object that maps component IDs to their loss parameter definitions.

loss_map_policy: str or None

If None, the loss map is not modified. If set to fill, each component ID that is present in the asset model but not in the loss map is mapped to itself, with the exception of excessiveRID. If set to fill_all, each component ID that is present in the asset model but not in the loss map is mapped to itself without exceptions.

Raises:
ValueError

If both arguments are None.
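As a minimal sketch (the Repair column name and the component IDs are illustrative, not pelicun's exact schema), a loss map can be supplied as a DataFrame that maps asset-model component IDs to loss parameter definitions:

```python
import pandas as pd

# Hypothetical mapping: each asset-model component ID (index) points
# to the loss parameter definition that should be used for it.
loss_map = pd.DataFrame(
    {"Repair": ["B.10.31.001", "D.20.21.013a", "D.30.31.013i"]},
    index=["B.10.31.001", "D.20.21.013a", "D.30.31.013k"],
)

# With a LossModel instance, this could then be registered as, e.g.:
# loss_model.add_loss_map(loss_map_path=loss_map, loss_map_policy="fill")
print(loss_map)
```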

aggregate_losses(replacement_configuration: tuple[RandomVariableRegistry, dict[str, float]] | None = None, loss_combination: dict | None = None, *, future: bool = False) DataFrame | tuple[DataFrame, DataFrame][source]

Aggregate the losses produced by each component.

Parameters:
replacement_configuration: Tuple, optional

Tuple containing a RandomVariableRegistry and a dictionary. The RandomVariableRegistry defines building replacement consequence RVs for the active decision variables. The dictionary defines exceedance thresholds: if the aggregated value for a decision variable (conditioned on no replacement) exceeds its threshold, replacement is triggered. This can happen for multiple decision variables in the same realization. The consequence keyword replacement is reserved to represent exclusive triggering of the replacement consequences, meaning other consequences are ignored in realizations where replacement is triggered. When set to None, replacement is still treated as an exclusive consequence (other consequences are set to zero when replacement is nonzero), but it is not additionally triggered by the exceedance of any thresholds. The aggregated loss sample contains an additional column indicating, for each realization, whether replacement was already present or was triggered by a threshold exceedance.

loss_combination: dict, optional

Dictionary defining how losses for specific components should be aggregated for a given decision variable. It has the following structure: {dv: {(c1, c2): arr, ...}, ...}, where dv is a decision variable, (c1, c2) is a tuple defining a component pair, and arr is an NxN numpy array defining a combination table; the ellipses indicate that more key-value pairs with the same schema can exist in the dictionaries. The loss sample is expected to contain columns that list both c1 and c2 as the component. The combination is applied to all pairs of columns where the components are c1 and c2 and all remaining multiindex levels (loc, dir, uid) match. This means, for example, that when combining wind and flood losses, the asset model should contain both a wind and a flood component defined at the same location and direction. arr can also be an M-dimensional numpy array where each dimension has length N (NxNx...xN). This structure allows for the loss combination of M components; in that case, the (c1, c2) tuple should contain M elements instead of two.

future: bool, optional

Defaults to False. When set to True, it enables the updated return type.

Returns:
dataframe or tuple

DataFrame with the aggregated loss of each realization and, when future is set to True, a second boolean DataFrame indicating which DV thresholds were exceeded in each realization, triggering replacement. If no thresholds are specified, the second DataFrame contains only False values.

Notes

Regardless of the value of the arguments, this method does not alter the state of the loss model, i.e., it does not modify the values of the .sample attributes.
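The structure of the loss_combination argument can be sketched as follows (component IDs and table values are made up for illustration; pelicun interpolates real combination tables rather than indexing them this directly):

```python
import numpy as np

# Hypothetical component IDs for a wind + flood combination.
c_wind, c_flood = "WIND.1", "FLOOD.1"

# NxN combination table: entry [i, j] holds the combined loss when the
# wind loss falls in bin i and the flood loss in bin j. Taking the
# maximum of the two bin values is a made-up rule for illustration.
N = 3
arr = np.maximum.outer(np.arange(N, dtype=float), np.arange(N, dtype=float))

# {dv: {(c1, c2): arr, ...}, ...} as described above.
loss_combination = {"Cost": {(c_wind, c_flood): arr}}

# The structure generalizes: an M-dimensional (N x ... x N) array
# combines M components, with an M-element key tuple.
table = loss_combination["Cost"][(c_wind, c_flood)]
print(table.shape)  # (3, 3)
```

This dictionary would then be passed as, e.g., loss_model.aggregate_losses(loss_combination=loss_combination, future=True).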

calculate() None[source]

Calculate the loss of each component block.

Note: This method simply calculates the loss of each component block without any special treatment to replacement consequences. This can be done at a later step with the aggregate_losses method.

Raises:
ValueError

If the size of the demand sample and the damage sample don’t match.

consequence_scaling(scaling_specification: str) None[source]

Apply scale factors to losses.

Applies scale factors to the loss sample according to the given scaling specification. The scaling specification should be a path to a CSV file. It must contain a Decision Variable column specifying a decision variable in each row, and a Scale Factor column, which is required, defining the factor. Optional columns are Component, Location, and Direction. Each row acts as an independent scaling operation. If a value is missing in an optional column, the scale factor is applied to all entries of the loss sample that match the remaining column values. For example, a single row with Decision Variable set to 'Cost', Scale Factor set to 2.0, and no other columns doubles the 'Cost' DV; if Location were also set to 1, it would instead double the Cost of only those components at that location. The Location and Direction columns can contain ranges, like this: 1–3 means 1, 2, and 3. If a range is used in both Location and Direction, the factor of that row is applied once to all combinations.

Parameters:
scaling_specification: str

Path to a CSV file containing the scaling specification.

Raises:
ValueError

If required columns are missing or contain NaNs.
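A minimal scaling specification might look like the following sketch (values are illustrative; only the Decision Variable and Scale Factor columns are required):

```python
import io
import pandas as pd

# Illustrative specification: the first row doubles all 'Cost' results;
# the second applies a 1.2 factor to 'Time' at location 1 only.
spec_csv = io.StringIO(
    "Decision Variable,Component,Location,Direction,Scale Factor\n"
    "Cost,,,,2.0\n"
    "Time,,1,,1.2\n"
)
spec = pd.read_csv(spec_csv)
print(spec)

# In practice the specification lives in a file passed by path:
# loss_model.consequence_scaling("scaling_spec.csv")
```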

property decision_variables: tuple[str, ...]

Retrieve the decision variables.

Returns:
tuple

Decision variables.

load_model(data_paths: list[str | DataFrame], loss_map: str | DataFrame, decision_variables: tuple[str, ...] | None = None) None[source]

<backwards compatibility>.

load_model_parameters(data_paths: list[str | DataFrame], decision_variables: tuple[str, ...] | None = None) None[source]

Load loss model parameters.

Parameters:
data_paths: list of (string | DataFrame)

List of paths to data or files with loss model information. Default XY datasets can be accessed as PelicunDefault/XY. Order matters: parameters defined in earlier elements of the list take precedence over the same parameters in subsequent data paths, so place the default datasets at the end.

decision_variables: tuple

Defines the decision variables to be included in the loss calculations. Defaults to all supported decision variables, but fewer can be used if desired. When fewer are used, loss parameters are not required for the ones excluded.
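The precedence rule can be sketched with made-up parameter tables (the component IDs and the Theta_0 column name are illustrative): definitions from earlier sources survive, and later sources, such as the default datasets, only fill the gaps.

```python
import pandas as pd

# Made-up loss parameter tables keyed by component ID. The user-provided
# table comes first in data_paths, the default dataset last.
user_params = pd.DataFrame({"Theta_0": [10.0]}, index=["cmp.A"])
default_params = pd.DataFrame({"Theta_0": [99.0, 5.0]}, index=["cmp.A", "cmp.B"])

# Earlier sources take precedence: keep the first definition of each
# component, so the user's cmp.A value survives and defaults fill cmp.B.
merged = pd.concat([user_params, default_params])
merged = merged[~merged.index.duplicated(keep="first")]
print(merged)
```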

load_sample(filepath: str | DataFrame) None[source]

<backwards compatibility>.

Loads a sample into the ds_model.

property sample: DataFrame | None

Combines the samples of the ds_model and lf_model sub-models.

Returns:
pd.DataFrame

The combined loss sample.

save_sample(filepath: str | None = None, *, save_units: bool = False) None | DataFrame | tuple[DataFrame, Series][source]

<backwards compatibility>.

Saves the sample of the ds_model.

Returns:
tuple

The output of {loss model}.ds_model.save_sample.

class pelicun.model.loss_model.RepairModel_Base(assessment: AssessmentBase)[source]

Base class for loss models.

abstract convert_loss_parameter_units() None[source]

Convert previously loaded loss parameters to base units.

drop_unused_loss_parameters(loss_map: DataFrame) None[source]

Remove loss parameter definitions.

Applicable to component IDs not present in the loss map.

Parameters:
loss_map: str or pd.DataFrame or None

Path to a CSV file or a DataFrame object that maps component IDs to their loss parameter definitions. Components in the asset model that are omitted from the provided loss map are mapped to themselves.

get_available() set[source]

Get a set of components with available loss parameters.

Returns:
set

Set of components with available loss parameters.

load_model_parameters(data: DataFrame) None[source]

Load model parameters from a DataFrame.

Extends those already available. Parameters already defined take precedence, i.e. redefinitions of parameters are ignored.

Parameters:
data: DataFrame

Data with loss model information.

remove_incomplete_components() None[source]

Remove incomplete components.

Removes components that have incomplete loss model definitions from the loss model parameters.

class pelicun.model.loss_model.RepairModel_DS(assessment: AssessmentBase)[source]

Repair consequences for components with damage states.

calculate(dmg_quantities: DataFrame) None[source]

Calculate damage consequences.

Parameters:
dmg_quantities: DataFrame

A table with the quantity of damage experienced in each damage state of each performance group at each location and direction. The prepare_dmg_quantities method of the DamageModel can be used to obtain such a DataFrame.
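A sketch of the expected shape of dmg_quantities, with invented labels (the column level names mirror those used elsewhere in pelicun, but the exact schema should be checked against the output of prepare_dmg_quantities):

```python
import pandas as pd

# Hypothetical damage quantities: two realizations (rows) of the
# quantity of damage in damage states 1 and 2 of one performance group.
columns = pd.MultiIndex.from_tuples(
    [("cmp.A", "1", "1", "0", "1"), ("cmp.A", "1", "1", "0", "2")],
    names=["cmp", "loc", "dir", "uid", "ds"],
)
dmg_quantities = pd.DataFrame([[2.0, 0.0], [1.0, 1.0]], columns=columns)
print(dmg_quantities)
```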

convert_loss_parameter_units() None[source]

Convert previously loaded loss parameters to base units.

drop_unused_damage_states() None[source]

Remove unused columns.

Remove columns from the loss model parameters corresponding to unused damage states.

load_sample(filepath: str | DataFrame) dict[str, str][source]

Load loss sample data.

Parameters:
filepath: str or pd.DataFrame

Path to an existing sample stored in a file, or a DataFrame containing the existing sample.

Returns:
dict[str, str]

Dictionary mapping each decision variable to its assigned unit.

Raises:
ValueError

If the columns have an invalid number of levels.

save_sample(filepath: str | None = None, *, save_units: bool = False) None | DataFrame | tuple[DataFrame, Series][source]

Save or return the loss sample.

This method handles the storage of a sample of loss estimates, which can either be saved directly to a file or returned as a DataFrame for further manipulation. When saving to a file, additional information such as unit conversion factors and column units can be included. If the data is not being saved to a file, the method can return the DataFrame with or without units as specified.

Parameters:
filepath: str, optional

The path to the file where the loss sample should be saved. If not provided, the sample is not saved to disk but returned.

save_units: bool, default: False

Indicates whether to include a row with unit information in the returned DataFrame. This parameter is ignored if a file path is provided.

Returns:
None or tuple

If filepath is provided, the function returns None after saving the data. If no filepath is specified, it returns a DataFrame containing the loss sample and, if save_units is True, also a Series containing the units for each column.

class pelicun.model.loss_model.RepairModel_LF(assessment: AssessmentBase)[source]

Repair consequences for components with loss functions.

calculate(demand_sample: DataFrame, cmp_sample: dict, cmp_marginal_params: DataFrame, demand_offset: dict, nondirectional_multipliers: dict) None[source]

Calculate repair consequences.

Parameters:
demand_sample: pd.DataFrame

The sample of the demand model to be used for the inputs of the loss functions.

cmp_sample: dict

Dictionary mapping each cmp-loc-dir-uid key to the component quantity realizations in the asset model, in the form of pd.Series objects.

cmp_marginal_params: pd.DataFrame

Dataframe containing component marginal distribution parameters.

demand_offset: dict

Dictionary specifying the demand offset.

nondirectional_multipliers: dict

Dictionary specifying the nondirectional multipliers used to combine the directional demands.
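A sketch of the last two arguments with assumed keys (the demand type strings and the ALL fallback key are illustrative and should be checked against pelicun's demand conventions):

```python
# Hypothetical inputs for RepairModel_LF.calculate.
demand_offset = {"PFA": -1, "PFV": -1}     # shift the location index of these demand types
nondirectional_multipliers = {"ALL": 1.2}  # factor applied when combining directional demands
print(demand_offset, nondirectional_multipliers)
```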

convert_loss_parameter_units() None[source]

Convert previously loaded loss parameters to base units.