brails.processors.FoundationClassifier.attention_utils.radam module

class brails.processors.FoundationClassifier.attention_utils.radam.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, warmup=0)

Bases: Optimizer

step(closure=None)

Perform a single optimization step to update the parameters.

Args:
    closure (Callable): A closure that reevaluates the model and returns the loss. Optional for most optimizers.

Note

Unless otherwise specified, this function should not modify the .grad field of the parameters.
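
Example: a minimal sketch of driving this optimizer through the optional closure. The toy model, batch, and hyperparameter values are illustrative only, and the comment on warmup follows the reference RAdam implementation, which this module is assumed to vendor::

    import torch
    import torch.nn as nn

    from brails.processors.FoundationClassifier.attention_utils.radam import AdamW

    # Toy model and batch; any nn.Module trains the same way.
    model = nn.Linear(10, 2)
    inputs = torch.randn(8, 10)
    targets = torch.randint(0, 2, (8,))
    loss_fn = nn.CrossEntropyLoss()

    # warmup=500 is illustrative; in the reference implementation this
    # variant linearly ramps the learning rate over the first `warmup` steps.
    optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2, warmup=500)

    def closure():
        # Reevaluates the model and returns the loss, as step() expects.
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        return loss

    loss = optimizer.step(closure)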

class brails.processors.FoundationClassifier.attention_utils.radam.PlainRAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, degenerated_to_sgd=True)

Bases: Optimizer

step(closure=None)

Perform a single optimization step to update the parameters.

Args:
    closure (Callable): A closure that reevaluates the model and returns the loss. Optional for most optimizers.

Note

Unless otherwise specified, this function should not modify the .grad field of the parameters.
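
Example: a minimal sketch that constructs PlainRAdam and takes one step. The comment on degenerated_to_sgd reflects the reference RAdam implementation and is an assumption about this vendored copy; the model and data are illustrative only::

    import torch
    import torch.nn as nn

    from brails.processors.FoundationClassifier.attention_utils.radam import PlainRAdam

    model = nn.Linear(10, 2)

    # In the reference implementation, degenerated_to_sgd=True makes the
    # optimizer fall back to an SGD-style update during the early steps,
    # before the variance-rectification term is defined (an assumption
    # about this vendored copy).
    optimizer = PlainRAdam(model.parameters(), lr=1e-3, degenerated_to_sgd=True)

    inputs = torch.randn(8, 10)
    targets = torch.randint(0, 2, (8,))

    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()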

class brails.processors.FoundationClassifier.attention_utils.radam.RAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, degenerated_to_sgd=True)

Bases: Optimizer

step(closure=None)

Perform a single optimization step to update the parameters.

Args:
    closure (Callable): A closure that reevaluates the model and returns the loss. Optional for most optimizers.

Note

Unless otherwise specified, this function should not modify the .grad field of the parameters.
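
Example: a minimal sketch of the common pattern where gradients are computed with backward() before step() and the closure is omitted. The toy model and data are illustrative only::

    import torch
    import torch.nn as nn

    from brails.processors.FoundationClassifier.attention_utils.radam import RAdam

    model = nn.Linear(10, 2)
    inputs = torch.randn(8, 10)
    targets = torch.randint(0, 2, (8,))
    loss_fn = nn.CrossEntropyLoss()

    optimizer = RAdam(model.parameters(), lr=1e-3)

    for _ in range(3):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()   # fills .grad; step() reads but does not modify it
        optimizer.step()  # closure omitted: gradients already computed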