Loss

Loss function aggregators that create physics-informed loss functions from the list of defined objective terms and constraints.

Currently supported loss functions are PenaltyLoss, BarrierLoss, and AugmentedLagrangeLoss, each documented below.
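
All aggregators below consume the same inputs: a list of objective terms and a list of constraints built from neuromancer variables. A minimal sketch of that construction, assuming the usual neuromancer.constraint.variable import path and the .minimize() helper (both may differ between versions):

    from neuromancer.constraint import variable

    # Symbolic variables keyed by the names used in the forward-pass dictionary.
    x = variable('x')
    y = variable('y')

    # One objective term and one constraint, collected into the lists that the
    # aggregators documented below accept.
    objectives = [(x**2 + y**2).minimize(name='objective')]
    constraints = [(x + y >= 1.0)]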

class neuromancer.loss.AggregateLoss(objectives, constraints)[source]

Abstract aggregate loss class for calculating constraint, objective, and aggregate loss values.

calculate_constraints(input_dict)[source]

Calculate the values of constraints and constraint violations.

calculate_objectives(input_dict)[source]

Calculate the value of the objective function for SGD

abstract forward(input_dict)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.
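
A minimal sketch of a custom aggregator built on AggregateLoss, using the calculate_objectives and calculate_constraints methods documented above; the output key names 'objective_loss', 'penalty_loss', and 'loss' are assumptions about what those methods and downstream trainers use, and may differ between versions:

    from neuromancer.loss import AggregateLoss

    class ScaledPenaltyLoss(AggregateLoss):
        """Hypothetical aggregator: objective value plus a re-weighted constraint penalty."""

        def __init__(self, objectives, constraints, penalty_weight=10.0):
            super().__init__(objectives, constraints)
            self.penalty_weight = penalty_weight

        def forward(self, input_dict):
            # Evaluate objective terms and constraint violations, then merge the
            # resulting dictionaries with the inputs.
            output_dict = {**input_dict,
                           **self.calculate_objectives(input_dict),
                           **self.calculate_constraints(input_dict)}
            # Assumed aggregate key names; check the installed version for the
            # exact keys produced by the two calculate_* methods.
            output_dict['loss'] = (output_dict['objective_loss']
                                   + self.penalty_weight * output_dict['penalty_loss'])
            return output_dict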

class neuromancer.loss.AugmentedLagrangeLoss(objectives, constraints, train_data, inner_loop=10, sigma=2.0, mu_max=1000.0, mu_init=0.001, eta=1.0)[source]

Augmented Lagrangian method loss function.

https://en.wikipedia.org/wiki/Augmented_Lagrangian_method

forward(input_dict)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than this method, since the former takes care of running the registered hooks while the latter silently ignores them.
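
A hedged construction sketch for AugmentedLagrangeLoss. The variable/objective/constraint pattern follows the library's typical usage, and train_data is assumed to be the training DataLoader also passed to the trainer (here a toy loader over dictionaries of named tensors); exact data handling may differ between versions:

    import torch
    from neuromancer.constraint import variable
    from neuromancer.loss import AugmentedLagrangeLoss

    x = variable('x')
    y = variable('y')
    objectives = [(x**2 + y**2).minimize(name='objective')]
    constraints = [(x + y >= 1.0)]

    # Hypothetical training loader; in practice this comes from the problem's dataset.
    train_data = torch.utils.data.DataLoader(
        [{'x': torch.tensor([1.0]), 'y': torch.tensor([0.5])}], batch_size=1)

    loss = AugmentedLagrangeLoss(objectives, constraints, train_data,
                                 inner_loop=10, sigma=2.0,
                                 mu_max=1000.0, mu_init=0.001, eta=1.0)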

class neuromancer.loss.BarrierLoss(objectives, constraints, barrier='log10', upper_bound=1.0, shift=1.0, alpha=0.5)[source]

Barrier loss function.

https://en.wikipedia.org/wiki/Barrier_function

Available barrier functions are defined in the self.barriers dictionary.

References for relaxed barrier functions:

https://arxiv.org/abs/1602.01321
https://arxiv.org/abs/1904.04205v2
https://ieeexplore.ieee.org/document/7493643/

calculate_constraints(input_dict)[source]

Calculate the magnitudes of constraint violations via log barriers:

cviolation > 0 -> penalty
cviolation <= 0 -> barrier
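
A hedged construction sketch for BarrierLoss with the relaxed log10 barrier; variable and constraint construction follows the library's typical pattern, and the import path may differ between versions:

    from neuromancer.constraint import variable
    from neuromancer.loss import BarrierLoss

    x = variable('x')
    y = variable('y')
    objectives = [((1 - x)**2 + (y - x**2)**2).minimize(name='objective')]
    constraints = [(x + y <= 5.0)]

    # Satisfied constraints contribute a barrier term that grows near the boundary;
    # violated constraints fall back to a penalty.
    loss = BarrierLoss(objectives, constraints, barrier='log10',
                       upper_bound=1.0, shift=1.0, alpha=0.5)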

class neuromancer.loss.PenaltyLoss(objectives, constraints)[source]

Penalty loss function.

https://en.wikipedia.org/wiki/Penalty_method

forward(input_dict)[source]

Parameters:

input_dict – (dict {str: torch.Tensor}) Values from forward pass calculations

Returns:

(dict {str: torch.Tensor}) input_dict appended with calculated loss values
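
A minimal end-to-end sketch with PenaltyLoss: aggregate one objective and one constraint, then evaluate the loss on a dictionary of named tensors. The 'loss' output key is an assumption about what forward appends and may differ between versions:

    import torch
    from neuromancer.constraint import variable
    from neuromancer.loss import PenaltyLoss

    x = variable('x')
    y = variable('y')
    objective = (x**2 + y**2).minimize(name='objective')
    constraint = (x + y >= 1.0)

    loss = PenaltyLoss(objectives=[objective], constraints=[constraint])

    # Values for the named variables, as produced by an upstream forward pass.
    batch = {'x': torch.tensor([[0.2]]), 'y': torch.tensor([[0.3]])}
    output = loss(batch)   # input_dict appended with calculated loss values
    print(output['loss'])  # assumed aggregate of objective plus constraint penalty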

neuromancer.loss.get_loss(objectives, constraints, train_data, args)[source]
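
get_loss appears to be a convenience factory over the aggregators above; a hedged sketch assuming an argparse-style namespace whose loss attribute selects the aggregator (the attribute name and accepted values are assumptions and may differ between versions):

    from types import SimpleNamespace

    import torch
    from neuromancer.constraint import variable
    from neuromancer.loss import get_loss

    x = variable('x')
    objectives = [(x**2).minimize(name='objective')]
    constraints = [(x >= 0.0)]
    train_data = torch.utils.data.DataLoader(
        [{'x': torch.tensor([1.0])}], batch_size=1)  # hypothetical loader

    args = SimpleNamespace(loss='penalty')  # assumed selector attribute and value
    loss = get_loss(objectives, constraints, train_data, args)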