neuromancer.modules package
Submodules
neuromancer.modules.activations module
Elementwise nonlinear tensor operations.
- class neuromancer.modules.activations.APLU(nsegments=2, alpha_reg_weight=0.001, beta_reg_weight=0.001, tune_alpha=True, tune_beta=True)[source]
Bases: Module
Adaptive Piecewise Linear Units: https://arxiv.org/pdf/1412.6830.pdf
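A minimal usage sketch, assuming the module is applied elementwise to a tensor like the other activations in this module; the constructor arguments follow the signature above:
    import torch
    from neuromancer.modules.activations import APLU

    # Adaptive piecewise linear unit with two learnable hinge segments,
    # applied elementwise (output shape matches the input shape).
    act = APLU(nsegments=2)
    y = act(torch.randn(4, 8))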
- class neuromancer.modules.activations.BLU(tune_alpha=False, tune_beta=True)[source]
Bases: Module
Bendable Linear Units: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8913972
- class neuromancer.modules.activations.PELU(tune_alpha=True, tune_beta=True)[source]
Bases: Module
Parametric Exponential Linear Units: https://arxiv.org/pdf/1605.09332.pdf
- class neuromancer.modules.activations.PReLU(tune_alpha=True, tune_beta=True)[source]
Bases: Module
Parametric ReLU: https://arxiv.org/pdf/1502.01852.pdf
- class neuromancer.modules.activations.RectifiedSoftExp(tune_alpha=True)[source]
Bases: Module
Mysterious unexplained implementation of Soft Exponential ported from author’s Keras code: https://github.com/thelukester92/2019-blu/blob/master/python/activations/softexp.py
- class neuromancer.modules.activations.SmoothedReLU(d=1.0, tune_d=True)[source]
Bases: Module
ReLU with a quadratic region in [0, d] (Rectified Huber Unit); used to make the Lyapunov function continuously differentiable: https://arxiv.org/pdf/2001.06116.pdf
- forward(x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
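A minimal usage sketch, assuming elementwise application as described above (zero for x <= 0, quadratic on [0, d], linear beyond d):
    import torch
    from neuromancer.modules.activations import SmoothedReLU

    # Rectified Huber Unit with quadratic region [0, 1.0]; the output
    # has the same shape as the input.
    act = SmoothedReLU(d=1.0)
    y = act(torch.linspace(-2, 2, steps=9))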
- class neuromancer.modules.activations.SoftExponential(alpha=0.0, tune_alpha=True)[source]
Bases: Module
Soft exponential activation: https://arxiv.org/pdf/1602.01321.pdf
- neuromancer.modules.activations.soft_exp(alpha, x)[source]
Helper function for the SoftExponential learnable activation class. Also used in neuromancer.operators.InterpolateAddMultiply.
- Parameters:
alpha – (float) Parameter controlling the shape of the function.
x – (torch.Tensor) Arbitrarily shaped tensor input.
- Returns:
(torch.Tensor) Result of the function applied elementwise to the tensor.
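For reference, a standalone sketch of the soft exponential as defined in the paper linked above; the library's soft_exp(alpha, x) may handle the branches differently (e.g. for tensor-valued alpha), so treat this as illustrative only:
    import torch

    def soft_exp_reference(alpha, x):
        # alpha == 0: identity; alpha > 0: exponential branch;
        # alpha < 0: logarithmic branch (per arXiv:1602.01321).
        if alpha == 0.0:
            return x
        if alpha > 0.0:
            return (torch.exp(alpha * x) - 1.0) / alpha + alpha
        return -torch.log(1.0 - alpha * (x + alpha)) / alpha

    y = soft_exp_reference(0.5, torch.linspace(-1.0, 1.0, steps=5))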
neuromancer.modules.blocks module
Function approximators of various degrees of generality which implement a consistent block interface. Neural network module building blocks for neural state space models, state estimators and control policies.
- class neuromancer.modules.blocks.BasisLinear(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, expand=Poly2())[source]
Bases: Block
For mapping inputs to functional basis feature expansion. This could implement a dictionary of lifting functions. Takes a linear combination of the expanded features.
- class neuromancer.modules.blocks.BilinearTorch(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={})[source]
Bases: Block
Wraps torch.nn.Bilinear to be consistent with the blocks interface
- class neuromancer.modules.blocks.Block[source]
Bases: Module, ABC
Canonical abstract class of the block function approximator
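A sketch of a custom block, assuming (as suggested by the block_eval methods documented below) that concrete subclasses implement block_eval(x) and are then called like any torch module:
    import torch
    import torch.nn as nn
    from neuromancer.modules.blocks import Block

    class ScaledTanh(Block):
        """Hypothetical custom block: linear map followed by tanh."""
        def __init__(self, insize, outsize):
            super().__init__()
            self.linear = nn.Linear(insize, outsize)

        def block_eval(self, x):
            # x: (torch.Tensor, shape=[batchsize, insize])
            return torch.tanh(self.linear(x))

    block = ScaledTanh(3, 2)
    y = block(torch.randn(8, 3))  # shape [8, 2]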
- class neuromancer.modules.blocks.InputConvexNN(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'torch.nn.modules.activation.ReLU'>, hsizes=[64], linargs={})[source]
Bases: MLP
Input convex neural network: \(z_1 = \sigma_0(W_0 x + b_0)\), \(z_{i+1} = \sigma_i(U_i z_i + W_i x + b_i),\ i = 1, \dots, k-1\), \(V = g(x) = z_k\)
Equation 11 from https://arxiv.org/abs/2001.06116
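A minimal construction sketch using the documented signature (nn.ReLU is the documented default nonlinearity):
    import torch
    import torch.nn as nn
    from neuromancer.modules.blocks import InputConvexNN

    # V = g(x), convex in x by construction (Equation 11 above).
    g = InputConvexNN(insize=2, outsize=1, hsizes=[64, 64], nonlin=nn.ReLU)
    V = g(torch.randn(16, 2))  # shape [16, 1]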
- class neuromancer.modules.blocks.InteractionEmbeddingMLP(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, n_interactors=9)[source]
Bases: Module
Multi-Layer Perceptron acting as a hypernetwork: hidden-state embeddings, determined by interaction type, are concatenated to the hidden state.
- class neuromancer.modules.blocks.InterpolateAddMultiply(alpha=0.0, tune_alpha=True)[source]
Bases: Module
Implementation of smooth interpolation between addition and multiplication using soft exponential activation: https://arxiv.org/pdf/1602.01321.pdf
- forward(p, q)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class neuromancer.modules.blocks.Linear(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=None, hsizes=None, linargs={})[source]
Bases: Block
Linear map consistent with block interface
- class neuromancer.modules.blocks.MLP(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={})[source]
Bases: Block
Multi-Layer Perceptron consistent with blocks interface
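A minimal usage sketch using the documented constructor arguments; nn.ReLU is swapped in for the default SoftExponential nonlinearity:
    import torch
    import torch.nn as nn
    from neuromancer.modules import blocks

    # Three-hidden-layer MLP block mapping 5 inputs to 3 outputs.
    net = blocks.MLP(insize=5, outsize=3, hsizes=[64, 64, 64], nonlin=nn.ReLU)
    y = net(torch.randn(32, 5))  # shape [32, 3]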
- class neuromancer.modules.blocks.MLPDropout(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, dropout=0.0)[source]
Bases: Block
Multi-Layer Perceptron with dropout consistent with blocks interface
- class neuromancer.modules.blocks.MLP_bounds(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, min=0.0, max=1.0, method='sigmoid_scale')[source]
Bases: MLP
Multi-Layer Perceptron with bounded outputs, consistent with the blocks interface
- block_eval(x)[source]
- Parameters:
x – (torch.Tensor, shape=[batchsize, insize])
- Returns:
(torch.Tensor, shape=[batchsize, outsize])
- bound_methods = {'relu_clamp': <function relu_clamp>, 'sigmoid_scale': <function sigmoid_scale>}
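A minimal usage sketch, assuming the min/max arguments together with the 'sigmoid_scale' bound method listed above keep the outputs inside [min, max]:
    import torch
    from neuromancer.modules.blocks import MLP_bounds

    policy = MLP_bounds(insize=4, outsize=2, hsizes=[32, 32],
                        min=0.0, max=1.0, method='sigmoid_scale')
    u = policy(torch.randn(10, 4))
    # outputs are squashed into [0, 1]
    assert u.min() >= 0.0 and u.max() <= 1.0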
- class neuromancer.modules.blocks.Poly2(*args)[source]
Bases: Block
Feature expansion of network to include pairwise multiplications of features.
- class neuromancer.modules.blocks.PosDef(g, max=None, eps=0.01, d=1.0, *args)[source]
Bases: Block
Enforce positive definiteness of a Lyapunov function parametrized by an ICNN, V = g(x). Equation 12 from https://arxiv.org/abs/2001.06116
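A composition sketch, assuming PosDef wraps an ICNN g and its forward call returns the positive-definite value (Equations 11-12 of the reference):
    import torch
    import torch.nn as nn
    from neuromancer.modules.blocks import InputConvexNN, PosDef

    g = InputConvexNN(insize=2, outsize=1, hsizes=[32, 32], nonlin=nn.ReLU)
    V = PosDef(g, eps=0.01, d=1.0)   # Lyapunov-style candidate V(x)
    v = V(torch.randn(16, 2))        # values intended to be positive definite in x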
- class neuromancer.modules.blocks.PytorchRNN(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[10], linargs={})[source]
Bases: Block
Wraps the torch.nn.RNN class, consistent with the blocks interface, to give an output which is a linear map from the final hidden state.
- class neuromancer.modules.blocks.RNN(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[1], linargs={})[source]
Bases: Block
Wraps the rnn.RNN class, consistent with the blocks interface, to give an output which is a linear map from the final hidden state.
- block_eval(x, hx=None)[source]
There is some logic here so that the RNN will still get context from state in open loop simulation.
- Parameters:
x – (torch.Tensor, shape=[nsteps, batchsize, dim]) Input sequence is expanded for order 2 tensors
- Returns:
(torch.Tensor, shape=[batchsize, outsize]) Returns linear transform of final hidden state of RNN.
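A minimal usage sketch following the documented shapes (input [nsteps, batchsize, dim], output [batchsize, outsize]):
    import torch
    from neuromancer.modules import blocks

    rnn = blocks.RNN(insize=6, outsize=3, hsizes=[16])
    x = torch.randn(20, 8, 6)   # nsteps=20, batchsize=8, dim=6
    y = rnn(x)                  # linear map of the final hidden state, shape [8, 3]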
- class neuromancer.modules.blocks.ResMLP(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, skip=1)[source]
Bases: MLP
Residual MLP consistent with the block interface.
neuromancer.modules.functions module
Set of useful function transformations
neuromancer.modules.rnn module
- class neuromancer.modules.rnn.RNN(input_size, hsizes=(16, ), bias=False, nonlin=<class 'torch.nn.modules.activation.GELU'>, linear_map=<class 'neuromancer.slim.linear.Linear'>, linargs={})[source]
Bases: Module
- class neuromancer.modules.rnn.RNNCell(input_size, hidden_size, bias=False, nonlin=<class 'torch.nn.modules.activation.GELU'>, linear_map=<class 'neuromancer.slim.linear.Linear'>, linargs={})[source]
Bases: Module
neuromancer.modules.solvers module
- class neuromancer.modules.solvers.GradientProjection(constraints, input_keys, output_keys=[], decay=0.1, num_steps=1, step_size=0.01, energy_update=True, name=None)[source]
Bases: Solver
Implementation of the projected gradient method for gradient-based corrections of constraint violations. Abstract steps of the gradient projection method (see the sketch after the references below):
1. compute aggregated constraint-violation penalties (con_viol_energy method)
2. compute the gradient of the constraint violations w.r.t. the variables in input_keys (forward method)
3. update the variable values with the negative gradient scaled by step_size (forward method)
References
method: https://neos-guide.org/guide/algorithms/gradient-projection/
DC3 paper: https://arxiv.org/abs/2104.12225
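The three abstract steps above can be sketched in plain PyTorch for a single decision variable and one inequality constraint; the names below (h, x, step_size) are illustrative and are not part of the GradientProjection API:
    import torch

    def h(x):
        # example inequality constraint h(x) <= 0
        return x.sum(dim=-1, keepdim=True) - 1.0

    x = torch.randn(8, 3, requires_grad=True)
    step_size, num_steps = 0.01, 5
    for _ in range(num_steps):
        # 1. aggregated constraint-violation penalty (cf. con_viol_energy)
        energy = torch.relu(h(x)).pow(2).sum()
        # 2. gradient of the violation energy w.r.t. the variable
        grad = torch.autograd.grad(energy, x)[0]
        # 3. negative-gradient update scaled by step_size
        x = (x - step_size * grad).detach().requires_grad_(True)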
- class neuromancer.modules.solvers.IterativeSolver(constraints, input_keys, output_keys=[], num_steps=1, step_size=1.0, name=None)[source]
Bases: Module
TODO: to debug
- Class for a family of iterative solvers for root-finding solutions to the problem:
\(g(x) = 0\)
General iterative solver update rules: \(x_{k+1} = \phi(x_k)\) or \(x_{k+1} = x_k + \phi(x_k)\)
https://en.wikipedia.org/wiki/Iterative_method
https://en.wikipedia.org/wiki/Root-finding_algorithms
Newton's method: \(x_{k+1} = x_k - J_g(x_k)^{-1} g(x_k)\), where \(J_g(x_k)\) is the Jacobian of \(g(x_k)\) w.r.t. \(x_k\) (see the sketch at the end of this section).
- forward(data)[source]
Forward pass of the Newton solver.
- Parameters:
data – (dict: {str: Tensor})
- Returns:
(dict: {str: Tensor})
- property num_steps
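For illustration, a standalone Newton iteration on a small residual g; this sketches the update rule quoted above and is not the IterativeSolver API:
    import torch

    def g(x):
        # residual whose root satisfies x0**2 + x1 = 2 and x0 = x1
        return torch.stack([x[0] ** 2 + x[1] - 2.0, x[0] - x[1]])

    x = torch.tensor([1.5, 0.5])
    for _ in range(10):
        J = torch.autograd.functional.jacobian(g, x)   # J_g(x_k)
        x = x - torch.linalg.solve(J, g(x))            # Newton update
    # x converges to the root [1.0, 1.0]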