neuromancer.modules.blocks module
Function approximators of various degrees of generality that implement a consistent block interface. These neural network building blocks are used in neural state space models, state estimators, and control policies.
- class neuromancer.modules.blocks.BasisLinear(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, expand=Poly2())[source]
Bases: Block
Maps inputs through a functional basis feature expansion (for example, a dictionary of lifting functions) and takes a linear combination of the expanded features.
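A minimal usage sketch, assuming the block infers the expanded feature size from the expand callable so only insize and outsize need to be given:

```python
import torch
from neuromancer.modules import blocks

# Hedged usage sketch: expand 3 input features with the default Poly2 basis
# and map the expanded features to 2 outputs via a linear combination.
basis = blocks.BasisLinear(insize=3, outsize=2)
x = torch.randn(10, 3)   # batch of 10 samples
y = basis(x)             # expected shape: (10, 2)
```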
- class neuromancer.modules.blocks.BilinearTorch(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={})[source]
Bases: Block
Wraps torch.nn.Bilinear to be consistent with the blocks interface.
- class neuromancer.modules.blocks.Block[source]
Bases: Module, ABC
Canonical abstract class of the block function approximator
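A hedged sketch of the interface contract, assuming (as the method listings below suggest) that subclasses implement block_eval and that calling the instance dispatches to it through the base class forward():

```python
import torch
from neuromancer.modules.blocks import Block

# Hypothetical custom block: implement block_eval; calling the instance
# goes through Block.forward, which dispatches to block_eval.
class Scale(Block):
    def __init__(self, factor=2.0):
        super().__init__()
        self.factor = factor

    def block_eval(self, x):
        return self.factor * x

block = Scale()
y = block(torch.randn(5, 3))   # same shape as the input, scaled by 2
```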
- class neuromancer.modules.blocks.InputConvexNN(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'torch.nn.modules.activation.ReLU'>, hsizes=[64], linargs={})[source]
Bases: MLP
Input convex neural network:
z_1 = sig(W_0(x) + b_0)
z_{i+1} = sig_i(U_i(z_i) + W_i(x) + b_i),  i = 1, …, k-1
V = g(x) = z_k
Equation 11 from https://arxiv.org/abs/2001.06116
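A minimal usage sketch with the documented constructor (parameter names as listed above):

```python
import torch
from neuromancer.modules import blocks

# Hedged usage sketch: a scalar-valued input-convex network g(x).
icnn = blocks.InputConvexNN(insize=2, outsize=1, hsizes=[32, 32])
x = torch.randn(16, 2)
V = icnn(x)              # expected shape: (16, 1), convex in x by construction
```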
- class neuromancer.modules.blocks.InteractionEmbeddingMLP(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, n_interactors=9)[source]
Bases: Module
Multi-Layer Perceptron acting as a hypernetwork: hidden-state embeddings, selected by interaction type, are concatenated to the hidden state.
- class neuromancer.modules.blocks.InterpolateAddMultiply(alpha=0.0, tune_alpha=True)[source]
Bases: Module
Implementation of smooth interpolation between addition and multiplication using soft exponential activation: https://arxiv.org/pdf/1602.01321.pdf
- forward(p, q)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
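A conceptual sketch of the interpolation property from the referenced paper (not necessarily the module's exact internals): with the soft exponential f_alpha, the expression f_alpha(f_-alpha(p) + f_-alpha(q)) recovers p + q at alpha = 0 and p * q at alpha = 1.

```python
import torch

def soft_exp(alpha, x):
    # Soft exponential activation (Godfrey & Gashler, arXiv:1602.01321).
    if alpha == 0.0:
        return x
    if alpha < 0.0:
        return -torch.log(1 - alpha * (x + alpha)) / alpha
    return (torch.exp(alpha * x) - 1) / alpha + alpha

def interpolate_add_multiply(p, q, alpha):
    # alpha = 0 gives p + q; alpha = 1 gives p * q (for positive p, q).
    return soft_exp(alpha, soft_exp(-alpha, p) + soft_exp(-alpha, q))

p, q = torch.tensor(2.0), torch.tensor(3.0)
print(interpolate_add_multiply(p, q, 0.0))   # tensor(5.) -> addition
print(interpolate_add_multiply(p, q, 1.0))   # tensor(6.) -> multiplication
```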
- class neuromancer.modules.blocks.KAN(layers_hidden, grid_size=5, spline_order=3, scale_noise=0.1, scale_base=1.0, scale_spline=1.0, base_activation=<class 'torch.nn.modules.activation.SiLU'>, grid_eps=0.02, grid_range=[-1, 1])[source]
Bases: Module
KAN module based on the efficient implementation of Kolmogorov-Arnold Networks. Reference: https://github.com/Blealtan/efficient-kan.
- forward(x: Tensor, update_grid=False)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
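A minimal usage sketch, assuming layers_hidden lists the widths of successive layers (input first), as in the efficient-kan reference:

```python
import torch
from neuromancer.modules import blocks

# Hedged usage sketch: a small KAN with 4 inputs, one hidden layer of 16, 1 output.
kan = blocks.KAN(layers_hidden=[4, 16, 1], grid_size=5, spline_order=3)
x = torch.randn(32, 4)
y = kan(x)               # expected shape: (32, 1)
```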
- class neuromancer.modules.blocks.KANBlock(insize, outsize, hsizes=[64], num_domains=1, grid_sizes=[5], spline_order=3, scale_noise=0.1, scale_base=1.0, scale_spline=1.0, enable_standalone_scale_spline=True, base_activation=<class 'torch.nn.modules.activation.SiLU'>, grid_eps=0.02, grid_range=[-1, 1], grid_updates=None, verbose=False)[source]
Bases: Block
- class neuromancer.modules.blocks.KANLinear(in_features, out_features, grid_size=5, spline_order=3, scale_noise=0.1, scale_base=1.0, scale_spline=1.0, enable_standalone_scale_spline=True, base_activation=<class 'torch.nn.modules.activation.SiLU'>, grid_eps=0.02, grid_range=[-1, 1])[source]
Bases: Module
KANLinear module based on the efficient implementation of Kolmogorov-Arnold Networks. Reference: https://github.com/Blealtan/efficient-kan.
- b_splines(x: Tensor)[source]
Compute the B-spline bases for the given input tensor.
- Parameters:
x (torch.Tensor) – Input tensor of shape (batch_size, in_features).
- Returns:
B-spline bases tensor of shape (batch_size, in_features, grid_size + spline_order).
- Return type:
torch.Tensor
- curve2coeff(x: Tensor, y: Tensor)[source]
Compute the coefficients of the curve that interpolates the given points.
- Parameters:
x (torch.Tensor) – Input tensor of shape (batch_size, in_features).
y (torch.Tensor) – Output tensor of shape (batch_size, in_features, out_features).
- Returns:
Coefficients tensor of shape (out_features, in_features, grid_size + spline_order).
- Return type:
torch.Tensor
- forward(x: Tensor)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- regularization_loss(regularize_activation=1.0, regularize_entropy=1.0)[source]
Approximate, memory-efficient implementation of the regularization loss.
- property scaled_spline_weight
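A shape sketch for a single KANLinear layer, following the constructor signature and the b_splines shapes documented above ((batch_size, in_features) -> (batch_size, in_features, grid_size + spline_order)):

```python
import torch
from neuromancer.modules.blocks import KANLinear

# Hedged shape sketch with the documented constructor and method signatures.
layer = KANLinear(in_features=3, out_features=2, grid_size=5, spline_order=3)
x = torch.randn(8, 3)
print(layer(x).shape)            # expected: torch.Size([8, 2])
print(layer.b_splines(x).shape)  # expected: torch.Size([8, 3, 8])  (5 + 3 = 8)
```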
- class neuromancer.modules.blocks.Linear(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=None, hsizes=None, linargs={})[source]
Bases: Block
Linear map consistent with block interface.
- class neuromancer.modules.blocks.MLP(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={})[source]
Bases: Block
Multi-Layer Perceptron consistent with the blocks interface.
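A minimal usage sketch with the documented constructor; nonlin can be swapped for any torch activation class:

```python
import torch
import torch.nn as nn
from neuromancer.modules import blocks

# Hedged usage sketch: two hidden layers of width 64 with ReLU activations.
mlp = blocks.MLP(insize=5, outsize=3, hsizes=[64, 64], nonlin=nn.ReLU)
x = torch.randn(20, 5)
y = mlp(x)               # expected shape: (20, 3)
```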
- class neuromancer.modules.blocks.MLPDropout(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, dropout=0.0)[source]
Bases: Block
Multi-Layer Perceptron with dropout, consistent with the blocks interface.
- class neuromancer.modules.blocks.MLP_bounds(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, min=0.0, max=1.0, method='sigmoid_scale')[source]
Bases: MLP
Multi-Layer Perceptron with bounded outputs, consistent with the blocks interface.
- block_eval(x)[source]
- Parameters:
x – (torch.Tensor, shape=[batchsize, insize])
- Returns:
(torch.Tensor, shape=[batchsize, outsize])
- bound_methods = {'relu_clamp': <function bounds_clamp>, 'sigmoid_scale': <function bounds_scaling>}
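A conceptual sketch of the two documented bound methods (assumed forms, shown only to illustrate how raw outputs can be kept inside [min, max]; the module's internals may differ):

```python
import torch

def bounds_scaling(x, xmin=0.0, xmax=1.0):
    # 'sigmoid_scale': squash the raw output smoothly into (xmin, xmax).
    return xmin + (xmax - xmin) * torch.sigmoid(x)

def bounds_clamp(x, xmin=0.0, xmax=1.0):
    # 'relu_clamp': clip the raw output to [xmin, xmax] via ReLU shifts.
    x = xmin + torch.relu(x - xmin)
    return xmax - torch.relu(xmax - x)

x = torch.tensor([-2.0, 0.5, 3.0])
print(bounds_scaling(x))   # all values strictly inside (0, 1)
print(bounds_clamp(x))     # tensor([0.0000, 0.5000, 1.0000])
```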
- class neuromancer.modules.blocks.Poly2(*args)[source]
Bases: Block
Feature expansion of network to include pairwise multiplications of features.
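A conceptual sketch of a second-order feature expansion (the exact ordering and content of Poly2's expanded features may differ):

```python
import torch

# For x = [x1, x2], append the pairwise products x1*x1, x1*x2, x2*x2.
x = torch.tensor([[2.0, 3.0]])
n = x.shape[1]
pairs = torch.cat([x[:, i:i + 1] * x[:, j:j + 1]
                   for i in range(n) for j in range(i, n)], dim=1)
expanded = torch.cat([x, pairs], dim=1)
print(expanded)          # tensor([[2., 3., 4., 6., 9.]])
```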
- class neuromancer.modules.blocks.PosDef(g, max=None, eps=0.01, d=1.0, *args)[source]
Bases: Block
Enforces positive definiteness of an ICNN Lyapunov function V = g(x). Equation 12 from https://arxiv.org/abs/2001.06116
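A plain-PyTorch sketch of that construction as stated in Equation 12 of the reference, V(x) = sigma(g(x) - g(0)) + eps * ||x||^2, with ReLU standing in for the paper's smoothed ReLU (the module's internals may differ):

```python
import torch

def posdef(g, x, eps=0.01):
    # V(0) = 0 and V(x) >= eps * ||x||^2 > 0 for x != 0, so V is positive definite.
    zero = torch.zeros_like(x)
    return torch.relu(g(x) - g(zero)) + eps * (x ** 2).sum(dim=-1, keepdim=True)

g = torch.nn.Linear(3, 1)        # hypothetical stand-in for an ICNN g(x)
x = torch.randn(4, 3)
print(posdef(g, x).shape)        # torch.Size([4, 1])
```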
- class neuromancer.modules.blocks.PytorchRNN(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[10], linargs={})[source]
Bases: Block
Wraps the torch.nn.RNN class to be consistent with the blocks interface; the output is a linear map from the final hidden state.
- class neuromancer.modules.blocks.RNN(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[1], linargs={})[source]
Bases: Block
Wraps the rnn.RNN class to be consistent with the blocks interface; the output is a linear map from the final hidden state.
- block_eval(x, hx=None)[source]
Includes logic so that the RNN still receives context from its hidden state during open-loop simulation.
- Parameters:
x – (torch.Tensor, shape=[nsteps, batchsize, dim]) Input sequence; order-2 tensors are expanded to include a time dimension.
- Returns:
(torch.Tensor, shape=[batchsize, outsize]) Returns linear transform of final hidden state of RNN.
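A minimal usage sketch with the shapes documented above:

```python
import torch
from neuromancer.modules import blocks

# Hedged usage sketch: a 10-step sequence, batch of 32, 4 features per step.
rnn = blocks.RNN(insize=4, outsize=2, hsizes=[8])
x = torch.randn(10, 32, 4)       # (nsteps, batchsize, dim)
y = rnn(x)                       # expected shape: (32, 2)
```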
- class neuromancer.modules.blocks.ResMLP(insize, outsize, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=<class 'neuromancer.modules.activations.SoftExponential'>, hsizes=[64], linargs={}, skip=1)[source]
Bases: MLP
Residual MLP consistent with the block interface.
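A minimal usage sketch; skip is assumed to control how many hidden layers sit between residual connections:

```python
import torch
from neuromancer.modules import blocks

# Hedged usage sketch: three equal-width hidden layers with residual connections.
res = blocks.ResMLP(insize=5, outsize=3, hsizes=[64, 64, 64], skip=1)
y = res(torch.randn(16, 5))      # expected shape: (16, 3)
```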
- class neuromancer.modules.blocks.StackedMLP(insize, outsize, bias=True, linear_map=<class 'neuromancer.modules.blocks.Linear'>, nonlin=<class 'torch.nn.modules.activation.Tanh'>, h_sf_size=[20, 20], n_stacked_mf_layers=3, h_linear_sizes=[10, 10], h_nonlinear_sizes=[20, 20], linargs={}, alpha_init=0.1, verbose=False)[source]
Bases: Block
Stacked Multi-Layer Perceptron (MFMLP) designed for multi-fidelity learning, where multiple layers are stacked to progressively refine the prediction. Each layer blends linear and nonlinear transformations, with an adaptive parameter alpha controlling the trade-off between the two.
- insize
Input feature dimension.
- Type:
int
- outsize
Output feature dimension.
- Type:
int
- bias
If True, bias is used in linear transformations.
- Type:
bool
- linear_map
Linear map class used for layers, by default set to slim.Linear.
- Type:
class
- nonlin
Nonlinear activation function applied after linear transformations.
- Type:
callable
- h_sf_size
Sizes of hidden layers in the single-fidelity MLP.
- Type:
list of int
- n_stacked_mf_layers
Number of stacked multi-fidelity layers.
- Type:
int
- h_linear_sizes
Sizes of hidden layers in each linear sub-network within the multi-fidelity layers.
- Type:
list of int
- h_nonlinear_sizes
Sizes of hidden layers in each nonlinear sub-network within the multi-fidelity layers.
- Type:
list of int
- linargs
Additional arguments for the linear layer instantiation.
- Type:
dict
- alpha_init
Initial value of alpha parameter controlling linear-nonlinear blend.
- Type:
float
- verbose
If True, print messages about network progress and actions.
- Type:
bool
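A minimal usage sketch with the documented constructor arguments:

```python
import torch
from neuromancer.modules import blocks

# Hedged usage sketch: three stacked multi-fidelity layers refining a 2 -> 1 map.
model = blocks.StackedMLP(insize=2, outsize=1, n_stacked_mf_layers=3,
                          h_sf_size=[20, 20], alpha_init=0.1)
y = model(torch.randn(8, 2))     # expected shape: (8, 1)
```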
- class neuromancer.modules.blocks.Transformer(insize=11, outsize=1, num_heads=3, dropout=0.0, bias=True, linear_map=<class 'neuromancer.slim.linear.Linear'>, nonlin=None, hsizes=3, linargs={})[source]
Bases: Block
Wraps torch.nn.TransformerEncoder and torch.nn.TransformerEncoderLayer to be consistent with the blocks interface; the output is a linear map from the final hidden state. The decoder is a linear layer from slim.Linear but could be extended to torch.nn.TransformerDecoder in future iterations.