neuromancer.modules package

Submodules

neuromancer.modules.activations module

Elementwise nonlinear tensor operations.

class neuromancer.modules.activations.APLU(nsegments=2, alpha_reg_weight=0.001, beta_reg_weight=0.001, tune_alpha=True, tune_beta=True)[source]

Bases: Module

Adaptive Piecewise Linear Units: https://arxiv.org/pdf/1412.6830.pdf

forward(x)[source]
Parameters:

x – (torch.Tensor) Arbitrary shaped tensor

Returns:

(torch.Tensor) Tensor same shape as input after elementwise application of piecewise linear activation

reg_error()[source]

L2 regularization on the parameters of the piecewise linear activation.

Returns:

(float) Regularization penalty
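
A brief usage sketch (the constructor arguments and tensor shape are illustrative, not prescriptive):

>>> import torch
>>> from neuromancer.modules.activations import APLU
>>> act = APLU(nsegments=2)
>>> y = act(torch.randn(10, 3))   # elementwise piecewise linear activation
>>> y.shape
torch.Size([10, 3])
>>> penalty = act.reg_error()     # L2 penalty on the learnable segment parameters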

class neuromancer.modules.activations.BLU(tune_alpha=False, tune_beta=True)[source]

Bases: Module

Bendable Linear Units: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8913972

forward(x)[source]
Parameters:

x – (torch.Tensor) Arbitrary shaped input tensor

Returns:

(torch.Tensor) Tensor same shape as input after bendable linear unit adaptation

class neuromancer.modules.activations.PELU(tune_alpha=True, tune_beta=True)[source]

Bases: Module

Parametric Exponential Linear Units: https://arxiv.org/pdf/1605.09332.pdf

forward(x)[source]
Parameters:

x – (torch.Tensor) Arbitrary shaped input tensor

Returns:

(torch.Tensor) Tensor same shape as input after parametric ELU activation.

class neuromancer.modules.activations.PReLU(tune_alpha=True, tune_beta=True)[source]

Bases: Module

Parametric ReLU: https://arxiv.org/pdf/1502.01852.pdf

forward(x)[source]
Parameters:

x – (torch.Tensor) Arbitrary shaped input tensor

Returns:

(torch.Tensor) Tensor same shape as input after parametric ReLU activation.

class neuromancer.modules.activations.RectifiedSoftExp(tune_alpha=True)[source]

Bases: Module

Rectified Soft Exponential activation, ported from the author’s Keras code (largely undocumented upstream): https://github.com/thelukester92/2019-blu/blob/master/python/activations/softexp.py

forward(x)[source]
Parameters:

x – (torch.Tensor) Arbitrary shaped tensor

Returns:

(torch.Tensor) Tensor same shape as input after elementwise application of soft exponential function

class neuromancer.modules.activations.SmoothedReLU(d=1.0, tune_d=True)[source]

Bases: Module

Rectified Huber Unit: a ReLU with a quadratic region on [0, d], used to make the Lyapunov function continuously differentiable: https://arxiv.org/pdf/2001.06116.pdf

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
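
For reference, a sketch of the rectified Huber nonlinearity described in the paper above (the class's exact parameterization may differ):

import torch

def smoothed_relu(x, d=1.0):
    # zero for x <= 0, quadratic on (0, d), linear with slope 1 beyond d
    return torch.where(x <= 0, torch.zeros_like(x),
                       torch.where(x < d, x ** 2 / (2 * d), x - d / 2))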

class neuromancer.modules.activations.SoftExponential(alpha=0.0, tune_alpha=True)[source]

Bases: Module

Soft exponential activation: https://arxiv.org/pdf/1602.01321.pdf

forward(x)[source]
Parameters:

x – (torch.Tensor) Arbitrary shaped tensor

Returns:

(torch.Tensor) Tensor same shape as input after elementwise application of soft exponential function

neuromancer.modules.activations.soft_exp(alpha, x)[source]

Helper function for the SoftExponential learnable activation class. Also used in neuromancer.modules.blocks.InterpolateAddMultiply.

Parameters:
  • alpha – (float) Parameter controlling the shape of the function.

  • x – (torch.Tensor) Arbitrary shaped input tensor

Returns:

(torch.Tensor) Result of the function applied elementwise to the tensor
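
For reference, the soft exponential family interpolates between logarithmic, identity, and exponential behavior; a sketch consistent with the referenced paper (the library's handling of edge cases may differ):

import torch

def soft_exp(alpha, x):
    # alpha < 0: logarithmic regime, alpha == 0: identity, alpha > 0: exponential regime
    if alpha == 0.0:
        return x
    if alpha < 0:
        return -torch.log(1 - alpha * (x + alpha)) / alpha
    return (torch.exp(alpha * x) - 1) / alpha + alpha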

neuromancer.modules.blocks module

Function approximators of various degrees of generality which implement a consistent block interface. Neural network module building blocks for neural state space models, state estimators and control policies.

class neuromancer.modules.blocks.BasisLinear(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, expand=Poly2())[source]

Bases: Block

Maps inputs to a functional basis feature expansion (e.g., a dictionary of lifting functions) and takes a linear combination of the expanded features.

block_eval(x)[source]
Parameters:

x – (torch.Tensor, shape=[batchsize, insize])

Returns:

(torch.Tensor, shape=[batchsize, outsize])

reg_error()[source]
class neuromancer.modules.blocks.BilinearTorch(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={})[source]

Bases: Block

Wraps torch.nn.Bilinear to be consistent with the blocks interface

block_eval(x)[source]
reg_error()[source]
class neuromancer.modules.blocks.Block[source]

Bases: Module, ABC

Canonical abstract class of the block function approximator

abstract block_eval(x)[source]
forward(*inputs)[source]

Handles a varying number of tensor inputs.

Parameters:

inputs – (list(torch.Tensor, shape=[batchsize, insize]) or torch.Tensor, shape=[batchsize, insize])

Returns:

(torch.Tensor, shape=[batchsize, outsize])
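
A minimal sketch of a custom block (the class name SquareBlock is illustrative): subclasses implement block_eval, while the inherited forward handles the tensor inputs:

import torch
from neuromancer.modules.blocks import Block

class SquareBlock(Block):
    # illustrative block that squares its input elementwise
    def block_eval(self, x):
        return x ** 2

block = SquareBlock()
y = block(torch.randn(8, 4))   # forward dispatches to block_eval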

class neuromancer.modules.blocks.Dropout(p=0.0, at_train=False, at_test=True)[source]

Bases: Block

block_eval(x)[source]
class neuromancer.modules.blocks.InputConvexNN(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'torch.nn.modules.activation.ReLU'>, hsizes=[64], linargs={})[source]

Bases: MLP

Input convex neural network:

\(z_1 = \sigma(W_0 x + b_0)\)

\(z_{i+1} = \sigma_i(U_i z_i + W_i x + b_i), \quad i = 1, \ldots, k-1\)

\(V = g(x) = z_k\)

Equation 11 from https://arxiv.org/abs/2001.06116

block_eval(x)[source]
Parameters:

x – (torch.Tensor, shape=[batchsize, insize])

Returns:

(torch.Tensor, shape=[batchsize, outsize])
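
A conceptual sketch of the recursion in Equation 11 (not the library's implementation; convexity in x additionally requires the \(U_i\) weights to be nonnegative and the activations to be convex and nondecreasing):

import torch
import torch.nn.functional as F

def icnn_forward(x, W, b, U):
    # W: input-to-hidden weight matrices, b: biases, U: nonnegative hidden-to-hidden weights
    z = F.relu(x @ W[0].T + b[0])
    for Wi, bi, Ui in zip(W[1:], b[1:], U):
        z = F.relu(z @ Ui.T + x @ Wi.T + bi)
    return z  # V = g(x)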

class neuromancer.modules.blocks.InteractionEmbeddingMLP(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, n_interactors=9)[source]

Bases: Module

Multi-Layer Perceptron hypernetwork in which hidden state embeddings, selected by interaction type, are concatenated to the hidden state.

forward(x, i, j)[source]
Parameters:

x – (torch.Tensor, shape=[batchsize, insize])

Returns:

(torch.Tensor, shape=[batchsize, outsize])

reg_error()[source]
class neuromancer.modules.blocks.InterpolateAddMultiply(alpha=0.0, tune_alpha=True)[source]

Bases: Module

Implementation of smooth interpolation between addition and multiplication using soft exponential activation: https://arxiv.org/pdf/1602.01321.pdf

forward(p, q)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class neuromancer.modules.blocks.Linear(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=None, hsizes=None, linargs={})[source]

Bases: Block

Linear map consistent with block interface

block_eval(x)[source]
Parameters:

x – (torch.Tensor, shape=[batchsize, insize])

Returns:

(torch.Tensor, shape=[batchsize, outsize])

reg_error()[source]
class neuromancer.modules.blocks.MLP(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={})[source]

Bases: Block

Multi-Layer Perceptron consistent with blocks interface

block_eval(x)[source]
Parameters:

x – (torch.Tensor, shape=[batchsize, insize])

Returns:

(torch.Tensor, shape=[batchsize, outsize])

reg_error()[source]
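
A typical construction, shown as a usage sketch (layer sizes and the choice of activation are illustrative):

>>> import torch
>>> from neuromancer.modules import blocks
>>> mlp = blocks.MLP(insize=5, outsize=3, hsizes=[64, 64], nonlin=torch.nn.ReLU)
>>> y = mlp(torch.randn(32, 5))
>>> y.shape
torch.Size([32, 3])
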
class neuromancer.modules.blocks.MLPDropout(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, dropout=0.0)[source]

Bases: Block

Multi-Layer Perceptron with dropout consistent with blocks interface

block_eval(x)[source]
Parameters:

x – (torch.Tensor, shape=[batchsize, insize])

Returns:

(torch.Tensor, shape=[batchsize, outsize])

reg_error()[source]
class neuromancer.modules.blocks.MLP_bounds(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, min=0.0, max=1.0, method='sigmoid_scale')[source]

Bases: MLP

Multi-Layer Perceptron with bounded outputs, consistent with the blocks interface

block_eval(x)[source]
Parameters:

x – (torch.Tensor, shape=[batchsize, insize])

Returns:

(torch.Tensor, shape=[batchsize, outsize])

bound_methods = {'relu_clamp': <function relu_clamp>, 'sigmoid_scale': <function sigmoid_scale>}
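
A usage sketch showing outputs constrained to a range (argument values are illustrative):

>>> import torch
>>> from neuromancer.modules.blocks import MLP_bounds
>>> bounded = MLP_bounds(insize=4, outsize=2, hsizes=[32], min=0.0, max=1.0, method='sigmoid_scale')
>>> y = bounded(torch.randn(16, 4))
>>> bool(((y >= 0.0) & (y <= 1.0)).all())
True
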
class neuromancer.modules.blocks.Poly2(*args)[source]

Bases: Block

Feature expansion that augments the input with pairwise multiplications of its features.

block_eval(x)[source]
Parameters:

x – (torch.Tensor, shape=[batchsize, N]) Input tensor

Returns:

(torch.Tensor, shape=[batchsize, \(\frac{N(N+1)}{2} + N\)]) Feature expanded tensor
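
For example, N = 3 input features expand to \(3 + \frac{3 \cdot 4}{2} = 9\) features; a brief usage sketch (the ordering of expanded terms is implementation dependent):

>>> import torch
>>> from neuromancer.modules.blocks import Poly2
>>> expand = Poly2()
>>> expand(torch.randn(8, 3)).shape
torch.Size([8, 9])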

class neuromancer.modules.blocks.PosDef(g, max=None, eps=0.01, d=1.0, *args)[source]

Bases: Block

Enforces positive definiteness of the Lyapunov function ICNN V = g(x); Equation 12 from https://arxiv.org/abs/2001.06116

block_eval(x)[source]
class neuromancer.modules.blocks.PytorchRNN(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[10], linargs={})[source]

Bases: Block

Wraps torch.nn.RNN to be consistent with the blocks interface; the output is a linear map of the final hidden state.

block_eval(x)[source]
Parameters:

x – (torch.Tensor, shape=[nsteps, batchsize, dim]) Input sequence is expanded for order 2 tensors

Returns:

(torch.Tensor, shape=[batchsize, outsize]) Returns linear transform of final hidden state of RNN.

reg_error()[source]
class neuromancer.modules.blocks.RNN(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[1], linargs={})[source]

Bases: Block

Wraps the neuromancer.modules.rnn.RNN class to be consistent with the blocks interface; the output is a linear map of the final hidden state.

block_eval(x, hx=None)[source]

There is some logic here so that the RNN will still get context from state in open loop simulation.

Parameters:

x – (torch.Tensor, shape=[nsteps, batchsize, dim]) Input sequence is expanded for order 2 tensors

Returns:

(torch.Tensor, shape=[batchsize, outsize]) Returns linear transform of final hidden state of RNN.

reg_error()[source]
reset()[source]
class neuromancer.modules.blocks.ResMLP(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, skip=1)[source]

Bases: MLP

Residual MLP consistent with the block interface.

block_eval(x)[source]
Parameters:

x – (torch.Tensor, shape=[batchsize, insize])

Returns:

(torch.Tensor, shape=[batchsize, outsize])

neuromancer.modules.blocks.relu_clamp(x, min, max)[source]
neuromancer.modules.blocks.set_model_dropout_mode(model, at_train=None, at_test=None)[source]

Change dropout mode; useful for enabling Monte Carlo (MC) sampling at inference time.
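
A usage sketch for Monte Carlo dropout at inference (the model and number of samples are illustrative):

>>> import torch
>>> from neuromancer.modules.blocks import MLPDropout, set_model_dropout_mode
>>> model = MLPDropout(insize=4, outsize=2, hsizes=[32], dropout=0.1)
>>> set_model_dropout_mode(model, at_train=False, at_test=True)
>>> x = torch.randn(1, 4)
>>> samples = torch.stack([model(x) for _ in range(30)])   # stochastic predictions for one input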

neuromancer.modules.blocks.sigmoid_scale(x, min, max)[source]

neuromancer.modules.functions module

Set of useful function transformations

neuromancer.modules.functions.bounds_clamp(x, xmin=None, xmax=None)[source]

Hard bounds on variable x via ReLU clamping between xmin and xmax values.

neuromancer.modules.functions.bounds_scaling(x, xmin, xmax, scaling=1.0)[source]

Hard bounds on variable x via sigmoid scaling between xmin and xmax values.
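
Sketches of the two bounding transformations consistent with the descriptions above (the library's exact scaling details may differ):

import torch

def bounds_clamp(x, xmin, xmax):
    # push x back inside [xmin, xmax] with ReLU corrections
    return x + torch.relu(xmin - x) - torch.relu(x - xmax)

def bounds_scaling(x, xmin, xmax, scaling=1.0):
    # squash x into [xmin, xmax] with a scaled sigmoid
    return (xmax - xmin) * torch.sigmoid(scaling * x) + xmin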

neuromancer.modules.rnn module

class neuromancer.modules.rnn.RNN(input_size, hsizes=(16, ), bias=False, nonlin=<class 'torch.nn.modules.activation.GELU'>, linear_map=<class 'neuromancer.slim.linear.Linear'>, linargs={})[source]

Bases: Module

forward(sequence, init_states=None)[source]
Parameters:
  • sequence – a tensor(s) of shape (seq_len, batch, input_size)

  • init_states – h_0 (num_layers, batch, hidden_size)

Returns:

  • output: (seq_len, batch, hidden_size)

  • h_n: (num_layers, batch, hidden_size)

reg_error()[source]
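
A usage sketch (sequence length, batch size, and hidden sizes are illustrative):

>>> import torch
>>> from neuromancer.modules.rnn import RNN
>>> rnn = RNN(input_size=5, hsizes=(16, 16))
>>> output, h_n = rnn(torch.randn(20, 8, 5))   # (seq_len, batch, input_size)
>>> output.shape, h_n.shape
(torch.Size([20, 8, 16]), torch.Size([2, 8, 16]))
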
class neuromancer.modules.rnn.RNNCell(input_size, hidden_size, bias=False, nonlin=<class 'torch.nn.modules.activation.GELU'>, linear_map=<class 'neuromancer.slim.linear.Linear'>, linargs={})[source]

Bases: Module

forward(input, hidden)[source]
Parameters:
  • input – (torch.Tensor, shape=[batchsize, input_size])

  • hidden – (torch.Tensor, shape=[batchsize, hidden_size])

Returns:

(torch.Tensor, shape=[batchsize, hidden_size])

reg_error()[source]

neuromancer.modules.solvers module

class neuromancer.modules.solvers.GradientProjection(constraints, input_keys, output_keys=[], decay=0.1, num_steps=1, step_size=0.01, energy_update=True, name=None)[source]

Bases: Solver

Implementation of the projected gradient method for gradient-based correction of constraint violations.

Abstract steps of the gradient projection method:

  1. compute aggregated constraint violation penalties (con_viol_energy method)

  2. compute gradients of the constraint violations w.r.t. variables in input_keys (forward method)

  3. update the variable values with the negative gradient scaled by step_size (forward method)

References

  • method: https://neos-guide.org/guide/algorithms/gradient-projection/

  • DC3 paper: https://arxiv.org/abs/2104.12225
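
A conceptual sketch of a single correction step (not the class's actual implementation; energy_fn stands in for the aggregated constraint violation penalty):

import torch

def gradient_correction_step(x, energy_fn, step_size=0.01):
    # one projected-gradient style correction of constraint violations
    # x is assumed to require gradients
    energy = energy_fn(x)                                   # aggregated violation penalty
    grad = torch.autograd.grad(energy.sum(), x, create_graph=True)[0]
    return x - step_size * grad                             # step against the violation gradient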

con_viol_energy(input_dict)[source]

Calculate the constraint violation potential energy over batches

forward(data)[source]

Forward pass of the projected gradient solver.

Parameters:

data – (dict: {str: Tensor})

Returns:

(dict: {str: Tensor})

class neuromancer.modules.solvers.IterativeSolver(constraints, input_keys, output_keys=[], num_steps=1, step_size=1.0, name=None)[source]

Bases: Module

TODO: to debug

Class for a family of iterative solvers for root-finding solutions to the problem:

\(g(x) = 0\)

General iterative solver update rules:

\(x_{k+1} = \phi(x_k)\)

\(x_{k+1} = x_k + \phi(x_k)\)

https://en.wikipedia.org/wiki/Iterative_method https://en.wikipedia.org/wiki/Root-finding_algorithms

Newton’s method:

\(x_{k+1} = x_k - J_g(x_k)^{-1} g(x_k)\)

where \(J_g(x_k)\) is the Jacobian of \(g(x_k)\) w.r.t. \(x_k\).
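
A sketch of the Newton update for a single unbatched variable (the class's batched implementation differs):

import torch

def newton_update(g, x):
    # x_{k+1} = x_k - J_g(x_k)^{-1} g(x_k)
    gx = g(x)
    J = torch.autograd.functional.jacobian(g, x)
    return x - torch.linalg.solve(J, gx)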

con_values(data)[source]

Calculate the values g(x) of the constraint expressions

forward(data)[source]

Forward pass of the Newton solver.

Parameters:

data – (dict: {str: Tensor})

Returns:

(dict: {str: Tensor})

newton_step(data, x)[source]

Calculate the Newton step for a given variable x

property num_steps
class neuromancer.modules.solvers.Solver(objectives=[], constraints=[], input_keys=[], output_keys=[], name=None)[source]

Bases: Module, ABC

Abstract class for the differentiable solver implementation

abstract forward(data)[source]

differentiable solver update to be implemented here

Parameters:

data – (dict {str: Tensor}) Input to the solver with associated input_keys

Returns:

(dict {str: Tensor}) Output of solver with associated output_keys

Module contents