Constraint

Definition of the neuromancer.Constraint class, used in conjunction with the neuromancer.Variable class. A Constraint behaves the same as a Loss but offers intuitive syntax for definition via Variable objects.

class neuromancer.constraint.Constraint(left, right, comparator, weight=1.0, name=None)[source]

Drop-in replacement for a Loss object, constructed by composing Variable objects with the comparison infix operators ‘<’, ‘>’, ‘==’, ‘<=’, ‘>=’. The ‘*’ operator weights the loss component, and ‘^’ sets the l-norm applied to the constraint violation when computing the loss.
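As a rough pure-Python sketch of how the ‘*’ weight and ‘^’ norm enter the loss (the real Constraint operates on torch.Tensors and Variable objects; the function name below is illustrative, not part of neuromancer):

```python
# Illustrative stand-in for how weight ('*') and norm ('^') shape the
# constraint loss; the real class computes this over torch.Tensors.
def constraint_loss(violations, weight=1.0, norm=1):
    # norm=1 penalizes absolute violation, norm=2 squared violation
    penalties = [abs(v) ** norm for v in violations]
    return weight * sum(penalties) / len(penalties)

constraint_loss([0.0, 0.5, 1.0], weight=2.0, norm=1)  # -> 1.0
```

Doubling the weight doubles the contribution of this constraint to the aggregate training loss, while the norm controls how sharply larger violations are penalized.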

forward(input_dict)[source]
Parameters:

input_dict – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names

Returns:

0-dimensional torch.Tensor that can be cast as a floating point number

grad(input_dict, input_key=None)[source]

Returns the gradient of the loss with respect to the variable named by input_key.

Parameters:
  • input_dict – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names

  • input_key – (str) Name of variable in input dict to take gradient with respect to.

Returns:

(torch.Tensor)
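The semantics of grad can be checked with a finite-difference approximation on a scalar penalty (a hedged sketch; the real method differentiates torch.Tensors from input_dict via autograd):

```python
# Finite-difference illustration of d(loss)/d(input) for a scalar
# inequality penalty relu(x - 1); the real grad uses torch autograd.
def penalty(x):
    return max(x - 1.0, 0.0)

def finite_diff(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

g = finite_diff(penalty, 2.0)  # gradient of relu(x - 1) at x = 2 is 1
```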

update_name(name)[source]
property variable_names
class neuromancer.constraint.Eq(norm=1)[source]

Equality constraint penalizing difference between left and right hand side. Used for defining infix operator for the Variable class and calculating constraint violation losses for the forward pass of Constraint objects.

constraint: g(x) == b

forward pass returns:

value = g(x) - b
penalty = g(x) - b
loss = torch.mean(penalty)

forward(left, right)[source]
Parameters:
  • left – torch.Tensor

  • right – torch.Tensor

Returns:

zero dimensional torch.Tensor, torch.Tensor, torch.Tensor

class neuromancer.constraint.GT(norm=1)[source]

Greater than constraint for lower bounding the left hand side by the right hand side. Used for defining infix operator for the Variable class and calculating constraint violation losses for the forward pass of Constraint objects.

constraint: g(x) >= b

forward pass returns:

value = b - g(x)
penalty = relu(b - g(x))
loss = torch.mean(penalty)

forward(left, right)[source]
Parameters:
  • left – torch.Tensor

  • right – torch.Tensor

Returns:

zero dimensional torch.Tensor, torch.Tensor, torch.Tensor

class neuromancer.constraint.LT(norm=1)[source]

Less than constraint for upper bounding the left hand side by the right hand side. Used for defining infix operator for the Variable class and calculating constraint violation losses for the forward pass of Constraint objects.

constraint: g(x) <= b

forward pass returns:

value = g(x) - b
penalty = relu(g(x) - b)
loss = torch.mean(penalty)

forward(left, right)[source]
Parameters:
  • left – torch.Tensor

  • right – torch.Tensor

Returns:

zero dimensional torch.Tensor, torch.Tensor, torch.Tensor
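The three comparators above share one pattern: compute a signed violation, clip it where appropriate, and average. A pure-Python sketch of the documented penalty formulas (the real classes take torch.Tensors; plain floats and lists stand in here):

```python
# Pure-Python sketch of the Eq/GT/LT penalty formulas documented above.
def relu(z):
    return max(z, 0.0)

def mean(xs):
    return sum(xs) / len(xs)

def eq_loss(g, b):   # g(x) == b : penalty = g(x) - b
    return mean([gi - b for gi in g])

def gt_loss(g, b):   # g(x) >= b : penalty = relu(b - g(x))
    return mean([relu(b - gi) for gi in g])

def lt_loss(g, b):   # g(x) <= b : penalty = relu(gi - b)
    return mean([relu(gi - b) for gi in g])

lt_loss([0.5, 1.5, 2.0], 1.0)  # violations 0.0, 0.5, 1.0 -> 0.5
```

Note how the relu zeroes out satisfied inequality constraints so only violations contribute to the mean.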

class neuromancer.constraint.Loss(input_keys: List[str], loss: Callable[[...], Tensor], weight=1.0, name='loss')[source]

Drop-in replacement for a Constraint object; relies on a list of dictionary keys and a callable function to instantiate.

forward(variables: Dict[str, Tensor]) Tensor[source]
Parameters:

variables – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names

Returns:

0-dimensional torch.Tensor that can be cast as a floating point number
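A minimal pure-Python stand-in for this keys-plus-callable pattern (TinyLoss is hypothetical and not part of neuromancer; the real Loss returns a 0-dimensional torch.Tensor):

```python
# Hypothetical stand-in for Loss: a list of dict keys plus a callable.
class TinyLoss:
    def __init__(self, input_keys, loss_fn, weight=1.0, name='loss'):
        self.input_keys = input_keys
        self.loss_fn = loss_fn
        self.weight = weight
        self.name = name

    def forward(self, variables):
        # look up each key in the dict and apply the callable
        args = [variables[k] for k in self.input_keys]
        return self.weight * self.loss_fn(*args)

mse = TinyLoss(['pred', 'true'],
               lambda p, t: sum((pi - ti) ** 2 for pi, ti in zip(p, t)) / len(p))
mse.forward({'pred': [1.0, 2.0], 'true': [0.0, 2.0]})  # -> 0.5
```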

grad(variables, input_key=None)[source]

Returns the gradient of the loss with respect to the input variables.

Parameters:
  • variables

  • input_key – string

Returns:

class neuromancer.constraint.Objective(var, metric=torch.mean, weight=1.0, name=None)[source]

Drop-in replacement for a Loss object, constructed via a neuromancer Variable object. The forward pass evaluates metric as a torch function on the Variable’s values.
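A sketch of the reduction an Objective performs, with plain floats standing in for torch.Tensors (the function name is illustrative only):

```python
# Sketch of Objective's reduction: apply a metric to the variable's
# values and scale by weight (plain floats stand in for torch.Tensors).
def objective_value(values, metric=lambda v: sum(v) / len(v), weight=1.0):
    return weight * metric(values)

objective_value([1.0, 2.0, 3.0], weight=0.5)  # -> 1.0
```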

forward(input_dict)[source]
Parameters:

input_dict – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names

Returns:

(dict, {str: 0-dimensional torch.Tensor}) tensor value can be cast as a floating point number

grad(input_dict, input_key=None)[source]

Returns the gradient of the loss with respect to the input variables.

Parameters:
  • input_dict

  • input_key – string

Returns:

property variable_names
class neuromancer.constraint.Variable(input_variables=[], func=None, key=None, display_name=None, value=None)[source]

Variable is an abstraction that allows constraints and objectives to be defined with some nice syntactic sugar. When a Variable object is called with a dictionary, a PyTorch tensor is returned; when a Variable object appears in a comparison operation, a Constraint is returned. Mathematical operators return Variables that will instantiate and perform the sequence of mathematical operations. PyTorch callables invoked with Variables as inputs return Variables. Supported infix operators (variable * variable, variable * numeric): +, -, **, @, *, /, <, <=, >, >=, ==, ^
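A toy illustration of the operator-overloading pattern described above (ToyVar is hypothetical and not part of neuromancer): overriding a rich-comparison method lets a comparison produce a constraint object instead of a boolean.

```python
# Hypothetical ToyVar showing why comparisons can return constraints:
# __le__ builds a callable penalty instead of evaluating to a bool.
class ToyVar:
    def __init__(self, key):
        self.key = key

    def __call__(self, data):
        # calling with a dict retrieves the value, like an input Variable
        return data[self.key]

    def __le__(self, bound):
        # x <= bound becomes a penalty function relu(x - bound)
        return lambda data: max(data[self.key] - bound, 0.0)

x = ToyVar('x')
con = x <= 1.0       # a constraint-like callable, not a bool
con({'x': 2.5})      # -> 1.5 (violation of the upper bound)
```

The real Variable extends this idea to all the listed operators, building a computational graph instead of a single closure.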

property T
check_keys(k)[source]
property display_name
forward(datadict=None)[source]

Forward pass goes through topologically sorted nodes calculating or retrieving values.

Parameters:

datadict – (dict, {str: Tensor}) Optional dictionary for Variable graphs which take input

Returns:

(torch.Tensor) Tensor value from evaluating the variable’s computational graph.

get_value(n, datadict)[source]
grad(other)[source]
property key

Used by input Variables to retrieve Tensor values from a dictionary. Will be used as display_name if display_name is not provided to __init__.

Returns:

(str) String intended to be a key in a dict {str: Tensor}

property keys
property mT
make_graph(input_variables)[source]

This is the function that composes the graph of the Variable from constituent input variables which are in-nodes to the Variable. It first builds an empty graph then adds itself to the graph. Then it goes through the inputs and instantiates Variable objects for them if they are not already a Variable. Then it combines the graphs of all Variables by unioning the sets of nodes and edges. In the penultimate step edges are added to the graph from the inputs to the Variable being instantiated, taking care to shallow copy nodes when there is more than one edge between nodes. Finally, the graph is topologically sorted for swift evaluation of the directed acyclic graph.

Parameters:

input_variables – List of arbitrary inputs for self._func

Returns:

A topologically sorted list of Variable objects
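The topological-sort step described above can be sketched with Kahn's algorithm (a generic illustration of ordering a DAG so inputs precede dependents, not neuromancer's implementation):

```python
from collections import deque

# Kahn's algorithm: repeatedly emit nodes with no remaining in-edges.
def topo_sort(nodes, edges):
    indegree = {n: 0 for n in nodes}
    adjacent = {n: [] for n in nodes}
    for src, dst in edges:
        adjacent[src].append(dst)
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in adjacent[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    return order

topo_sort(['a', 'b', 'c'], [('a', 'c'), ('b', 'c')])  # 'c' comes last
```

Evaluating nodes in this order guarantees every in-node's value is available before the node that consumes it.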

minimize(metric=torch.mean, weight=1.0, name=None)[source]
show(figname=None)[source]

Plot and save computational graph

Parameters:

figname – (str) Name to save figure to.

unpack[source]

Creates new variables for a node that evaluates to multiple values. This is useful for unpacking the results of functions that return multiple values, such as torch.linalg.svd.

Parameters:

nret – (int) Number of return values from the torch function

Returns:

[Variable] List of Variable objects for each value returned by the torch function
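The idea can be sketched in plain Python (a hypothetical stand-in; the real unpack produces Variable objects wired into the computational graph):

```python
# Hypothetical sketch of the unpack idea: give each of a multi-output
# function's results its own lazily-evaluated node.
def unpack(fn, nret):
    # default argument i=i captures the index at definition time
    return [(lambda i=i: fn()[i]) for i in range(nret)]

first, second = unpack(lambda: ('u', 's'), 2)
first()   # -> 'u'
```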

property value
neuromancer.constraint.variable()[source]

For instantiating a trainable Variable. Returns a Variable whose trainable value is a 0-dimensional Tensor drawn from the standard normal distribution.

Parameters:

display_name – (str) for plotting graph and __repr__

Returns:

Variable with value = 0 dimensional nn.Parameter with requires_grad=True