neuromancer package

Subpackages

Submodules

neuromancer.arg module

This module contains an extension of the argparse.ArgumentParser class and some parsers that are generally useful for writing training scripts using the neuromancer library. ArgumentParser is extended to take advantage of grouped arguments when passing command line arguments to functions and to abbreviate argparse’s verbose method names.
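A minimal usage sketch of the grouped-argument workflow (the group and argument names below are hypothetical):

>>> from neuromancer.arg import ArgParser
>>> parser = ArgParser()
>>> opt = parser.group('OPTIMIZATION')  # returns the existing group or creates a new one
>>> opt.add('-lr', type=float, default=0.001)  # abbreviated add_argument via monkey patch
>>> opt.add('-epochs', type=int, default=100)
>>> args, groups = parser.parse_arg_groups()  # (Namespace, List(Namespace))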

class neuromancer.arg.ArgParser(prefix='', **kwargs)[source]

Bases: ArgumentParser

Subclass of ArgumentParser providing abbreviated method calls, separate namespaces for argument groups, and an optional command line prefix so that parser definitions can be reused.

check_for_group(group_name)[source]

This function will return the argument group if it exists

Parameters:

group_name – (str) Name of the argument group

Returns:

(argparse._ArgumentGroup or None)

group(group_name, prefix=None)[source]

Monkey patch to abbreviate verbose call to add_argument_group. If an argument group exists by the name group_name it will be returned, otherwise a new argument group will be created and returned.

Parameters:

group_name – (str) Name of the argument group

Returns:

argparse._ArgumentGroup

parse_arg_groups()[source]
Returns:

(Namespace, List(Namespace))

neuromancer.arg.add(self, argname, **kwargs)[source]

Monkey patch for argument group objects.

Parameters:
  • self – Refers to instantiated object of class being patched

  • argname – (str) Command line argument

  • kwargs – Keyword arguments forwarded to add_argument

Returns:

(argparse.Action) The Action object returned by add_argument

neuromancer.arg.ctrl_loss(prefix='')[source]

Command line parser for special control loss arguments

Parameters:

prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.

Returns:

(arg.ArgParse) A command line parser

neuromancer.arg.data(prefix='', system='CSTR')[source]

Command line parser for data arguments

Parameters:

prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.

Returns:

(arg.ArgParse) A command line parser

neuromancer.arg.freeze(prefix='')[source]

Command line parser for weight freezing arguments

Parameters:

prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.

Returns:

(arg.ArgParse) A command line parser

neuromancer.arg.lin(prefix='')[source]

Command line parser for linear map arguments

Parameters:

prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.

Returns:

(arg.ArgParse) A command line parser

neuromancer.arg.log(prefix='')[source]

Command line parser for logging arguments

Parameters:

prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.

Returns:

(arg.ArgParse) A command line parser

neuromancer.arg.loss(prefix='')[source]

Command line parser for loss function arguments

Parameters:

prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.

Returns:

(arg.ArgParse) A command line parser

neuromancer.arg.opt(prefix='')[source]

Command line parser for optimization arguments

Parameters:

prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.

Returns:

(arg.ArgParse) A command line parser

neuromancer.arg.policy(prefix='')[source]

Command line parser for control policy arguments

Parameters:

prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.

Returns:

(arg.ArgParse) A command line parser

neuromancer.arg.ssm(prefix='')[source]

Command line parser for state space model arguments

Parameters:

prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.

Returns:

(arg.ArgParse) A command line parser

neuromancer.callbacks module

Callback classes for versatile behavior in the Trainer object at specified checkpoints.

class neuromancer.callbacks.Callback[source]

Bases: object

Callback base class which allows for bare functionality of Trainer
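A minimal sketch of extending a checkpoint hook (the printed behavior is illustrative):

>>> from neuromancer.callbacks import Callback
>>> class EpochPrinter(Callback):
...     def end_epoch(self, trainer, output):
...         print(f'epoch finished; output keys: {list(output)}')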

begin_epoch(trainer, output)[source]
begin_eval(trainer, output)[source]
begin_test(trainer)[source]
begin_train(trainer)[source]
end_batch(trainer, output)[source]
end_epoch(trainer, output)[source]
end_eval(trainer, output)[source]
end_test(trainer, output)[source]
end_train(trainer, output)[source]

neuromancer.constraint module

Definition of neuromancer.Constraint class used in conjunction with neuromancer.Variable class. A Constraint has the same behavior as a Loss but with intuitive syntax for defining via Variable objects.

class neuromancer.constraint.Constraint(left, right, comparator, weight=1.0, name=None)[source]

Bases: Module

Drop-in replacement for a Loss object, constructed by composing Variable objects with the comparison infix operators '<', '>', '==', '<=', '>='; '*' weights the loss component and '^' selects the l-norm of the constraint violation used in the loss.
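
A minimal construction sketch, using the weighting and norm operators described above (the key name 'x' is hypothetical):

>>> import torch
>>> from neuromancer.constraint import Variable
>>> x = Variable(key='x')
>>> con = 5.0 * ((x <= 1.0) ^ 2)  # weighted upper bound with squared violation penalty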

forward(input_dict)[source]
Parameters:

input_dict – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names

Returns:

0-dimensional torch.Tensor that can be cast as a floating point number

grad(input_dict, input_key=None)[source]

returns gradient of the loss w.r.t. input key

Parameters:
  • input_dict – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names

  • input_key – (str) Name of variable in input dict to take gradient with respect to.

Returns:

(torch.Tensor)

update_name(name)[source]
property variable_names
class neuromancer.constraint.Eq(norm=1)[source]

Bases: Module

Equality constraint penalizing difference between left and right hand side. Used for defining infix operator for the Variable class and calculating constraint violation losses for the forward pass of Constraint objects.

constraint: g(x) == b

forward pass returns:

value = g(x) - b
penalty = g(x) - b
loss = torch.mean(penalty)

forward(left, right)[source]
Parameters:
  • left – torch.Tensor

  • right – torch.Tensor

Returns:

zero dimensional torch.Tensor, torch.Tensor, torch.Tensor

class neuromancer.constraint.GT(norm=1)[source]

Bases: Module

Greater than constraint for lower bounding the left hand side by the right hand side. Used for defining infix operator for the Variable class and calculating constraint violation losses for the forward pass of Constraint objects.

constraint: g(x) >= b

forward pass returns:

value = b - g(x)
penalty = relu(b - g(x))
loss = torch.mean(penalty)

forward(left, right)[source]
Parameters:
  • left – torch.Tensor

  • right – torch.Tensor

Returns:

zero dimensional torch.Tensor, torch.Tensor, torch.Tensor

class neuromancer.constraint.LT(norm=1)[source]

Bases: Module

Less than constraint for upper bounding the left hand side by the right hand side. Used for defining infix operator for the Variable class and calculating constraint violation losses for the forward pass of Constraint objects.

constraint: g(x) <= b

forward pass returns:

value = g(x) - b
penalty = relu(g(x) - b)
loss = torch.mean(penalty)

forward(left, right)[source]
Parameters:
  • left – torch.Tensor

  • right – torch.Tensor

Returns:

zero dimensional torch.Tensor, torch.Tensor, torch.Tensor

class neuromancer.constraint.Loss(input_keys: List[str], loss: Callable[[...], Tensor], weight=1.0, name='loss')[source]

Bases: Module

Drop in replacement for a Constraint object but relies on a list of dictionary keys and a callable function to instantiate.

forward(variables: Dict[str, Tensor]) → Tensor[source]
Parameters:

variables – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names

Returns:

0-dimensional torch.Tensor that can be cast as a floating point number

grad(variables, input_key=None)[source]

returns gradient of the loss w.r.t. input variables

Parameters:
  • variables

  • input_key – string

Returns:

class neuromancer.constraint.Objective(var, metric=<built-in method mean of type object>, weight=1.0, name=None)[source]

Bases: Module

Drop-in replacement for a Loss object, constructed via a neuromancer Variable object; the forward pass evaluates the metric as a torch function on the Variable's values.

forward(input_dict)[source]
Parameters:

input_dict – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names

Returns:

(dict, {str: 0-dimensional torch.Tensor}) tensor value can be cast as a floating point number

grad(input_dict, input_key=None)[source]

returns gradient of the loss w.r.t. input variables

Parameters:
  • input_dict

  • input_key – string

Returns:

property variable_names
class neuromancer.constraint.Variable(input_variables=[], func=None, key=None, display_name=None, value=None)[source]

Bases: Module

Variable is an abstraction that allows for the definition of constraints and objectives with some nice syntactic sugar. When a Variable object is called with a dictionary, a pytorch tensor is returned; when a Variable object is subjected to a comparison operator, a Constraint is returned. Mathematical operators return Variables which will instantiate and perform the sequence of mathematical operations. PyTorch callables called with variables as inputs return variables. Supported infix operators (variable * variable, variable * numeric): +, -, *, @, **, /, <, <=, >, >=, ==, ^
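
An illustrative sketch of Variable algebra (the key names are hypothetical):

>>> import torch
>>> from neuromancer.constraint import Variable
>>> x = Variable(key='x')
>>> y = Variable(key='y')
>>> z = x * x + torch.sin(y)  # PyTorch callables on Variables return Variables
>>> z({'x': torch.ones(3, 1), 'y': torch.zeros(3, 1)})  # evaluates the computational graph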

property T
check_keys(k)[source]
property display_name
forward(datadict=None)[source]

Forward pass goes through topologically sorted nodes calculating or retrieving values.

Parameters:

datadict – (dict, {str: Tensor}) Optional dictionary for Variable graphs which take input

Returns:

(torch.Tensor) Tensor value from evaluating the variable’s computational graph.

get_value(n, datadict)[source]
grad(other)[source]
property key

Used by input Variables to retrieve Tensor values from a dictionary. Will be used as a display_name if display_name is not provided to __init__.

Returns:

(str) String intended to be a key in a dict {str: Tensor}

property keys
property mT
make_graph(input_variables)[source]

This is the function that composes the graph of the Variable from constituent input variables which are in-nodes to the Variable. It first builds an empty graph then adds itself to the graph. Then it goes through the inputs and instantiates Variable objects for them if they are not already a Variable. Then it combines the graphs of all Variables by unioning the sets of nodes and edges. In the penultimate step edges are added to the graph from the inputs to the Variable being instantiated, taking care to shallow copy nodes when there is more than one edge between nodes. Finally, the graph is topologically sorted for swift evaluation of the directed acyclic graph.

Parameters:

input_variables – List of arbitrary inputs for self._func

Returns:

A topologically sorted list of Variable objects

minimize(metric=<built-in method mean of type object>, weight=1.0, name=None)[source]
show(figname=None)[source]

Plot and save computational graph

Parameters:

figname – (str) Name to save figure to.

training: bool
unpack[source]

Creates new variables for a node that evaluates to multiple values. This is useful for unpacking the results of functions that return multiple values, such as torch.linalg.svd.

Parameters:

nret – (int) Number of return values from the torch function

Returns:

[Variable] List of Variable objects for each value returned by the torch function

property value
neuromancer.constraint.variable()[source]

Instantiates a trainable Variable whose value is a 0-dimensional Tensor drawn from the standard normal distribution.

Parameters:

display_name – (str) for plotting graph and __repr__

Returns:

Variable with value = 0 dimensional nn.Parameter with requires_grad=True

neuromancer.dataset module

class neuromancer.dataset.DictDataset(datadict, name='train')[source]

Bases: Dataset

Basic dataset compatible with neuromancer Trainer
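
A minimal sketch of wrapping a dictionary of tensors for use with a DataLoader (the keys 'x' and 'y' are hypothetical):

>>> import torch
>>> from torch.utils.data import DataLoader
>>> from neuromancer.dataset import DictDataset
>>> data = {'x': torch.randn(100, 2), 'y': torch.randn(100, 1)}
>>> train_set = DictDataset(data, name='train')
>>> loader = DataLoader(train_set, batch_size=32, collate_fn=train_set.collate_fn)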

collate_fn(batch)[source]

Wraps the default PyTorch batch collation function and adds a name field.

Parameters:

batch – (dict str: torch.Tensor) dataset sample.

class neuromancer.dataset.GraphDataset(node_attr: Dict | None = {}, edge_attr: Dict | None = {}, graph_attr: Dict | None = {}, metadata: Dict | None = {}, seq_len: int = 6, seq_horizon: int = 1, seq_stride: int = 1, graphs: Dict | None = None, build_graphs: str | None = None, connectivity_radius: float = 0.015, graph_self_loops=True, name: str = 'data')[source]

Bases: Dataset

build_graphs(feature, self_loops)[source]
Builds graphs from node features. Requires the optional torch_geometric dependency (uses torch_geometric.nn.radius_graph).

static collate_fn(x)[source]

Batch collation for dictionaries of samples generated by this dataset. This wraps the default PyTorch batch collation function and does some light post-processing to transpose the data for NeuroMANCER models and add a “name” field.

Parameters:

batch – (list of dict str: torch.Tensor) dataset sample. Requires key ‘edge_index’

make_map()[source]

Order the sample sequences

shuffle()[source]

Randomizes the order of sample sequences

class neuromancer.dataset.SequenceDataset(data, nsteps=1, moving_horizon=False, name='data')[source]

Bases: Dataset

collate_fn(batch)[source]

Batch collation for dictionaries of samples generated by this dataset. This wraps the default PyTorch batch collation function and does some light post-processing to transpose the data for NeuroMANCER models and add a “name” field.

Parameters:

batch – (dict str: torch.Tensor) dataset sample.

get_full_batch()[source]
get_full_sequence()[source]
class neuromancer.dataset.StaticDataset(data, name='data')[source]

Bases: Dataset

collate_fn(batch)[source]

Batch collation for dictionaries of samples generated by this dataset. This wraps the default PyTorch batch collation function and simply adds a “name” field to a batch.

Parameters:

batch – (dict str: torch.Tensor) dataset sample.

get_full_batch()[source]
neuromancer.dataset.batch_tensor(x: Tensor, steps: int, mh: bool = False)[source]
neuromancer.dataset.denormalize_01(M, Mmin, Mmax)[source]

Denormalize min-max normalized data.

Parameters:
  • M – (2-d np.array) Data to be denormalized

  • Mmin – (int) Minimum value

  • Mmax – (int) Maximum value

Returns:

(2-d np.array) Un-normalized data

neuromancer.dataset.denormalize_11(M, Mmin, Mmax)[source]

Denormalize min-max normalized data.

Parameters:
  • M – (2-d np.array) Data to be denormalized

  • Mmin – (int) Minimum value

  • Mmax – (int) Maximum value

Returns:

(2-d np.array) Un-normalized data

neuromancer.dataset.destandardize(M, mean, std)[source]
neuromancer.dataset.get_sequence_dataloaders(data, nsteps, moving_horizon=False, norm_type=None, split_ratio=None, num_workers=0, batch_size=None)[source]

This function will generate dataloaders and open-loop sequence dictionaries for a given dictionary of data. Dataloaders are hard-coded for full-batch training to match NeuroMANCER’s original training setup.

Parameters:
  • data – (dict str: np.array or list[dict str: np.array]) data dictionary or list of data dictionaries; if latter is provided, multi-sequence datasets are created and splits are computed over the number of sequences rather than their lengths.

  • nsteps – (int) length of windowed subsequences for N-step training.

  • moving_horizon – (bool) whether to use moving horizon batching.

  • norm_type – (str) type of normalization; see function normalize_data for more info.

  • split_ratio – (list float) percentage of data in train and development splits; see function split_sequence_data for more info.

  • num_workers – (int, optional) how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)

  • batch_size – (int, optional) how many samples per batch to load (default: full-batch via len(data)).
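
A hedged usage sketch; the data key 'Y' is hypothetical, and the unpacking below assumes the function returns the dataloaders and open-loop sequence dictionaries described above plus, by assumption, the data dimensions:

>>> import numpy as np
>>> from neuromancer.dataset import get_sequence_dataloaders
>>> data = {'Y': np.random.randn(1000, 3)}
>>> loaders, loop_data, dims = get_sequence_dataloaders(data, nsteps=16, norm_type='zscore')
>>> train_loader, dev_loader, test_loader = loaders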

neuromancer.dataset.get_static_dataloaders(data, norm_type=None, split_ratio=None, num_workers=0, batch_size=32)[source]

This will generate dataloaders for a given dictionary of data. Dataloaders are hard-coded for full-batch training to match NeuroMANCER’s training setup.

Parameters:
  • data – (dict str: np.array or list[dict str: np.array]) data dictionary or list of data dictionaries; if latter is provided, multi-sequence datasets are created and splits are computed over the number of sequences rather than their lengths.

  • norm_type – (str) type of normalization; see function normalize_data for more info.

  • split_ratio – (list float) percentage of data in train and development splits; see function split_sequence_data for more info.

neuromancer.dataset.normalize_01(M, Mmin=None, Mmax=None)[source]
Parameters:
  • M – (2-d np.array) Data to be normalized

  • Mmin – (int) Optional minimum. If not provided is inferred from data.

  • Mmax – (int) Optional maximum. If not provided is inferred from data.

Returns:

(2-d np.array) Min-max normalized data

neuromancer.dataset.normalize_11(M, Mmin=None, Mmax=None)[source]
Parameters:
  • M – (2-d np.array) Data to be normalized

  • Mmin – (int) Optional minimum. If not provided is inferred from data.

  • Mmax – (int) Optional maximum. If not provided is inferred from data.

Returns:

(2-d np.array) Min-max normalized data
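
A round-trip sketch for the min-max helpers, assuming normalize_01 returns only the normalized array as its Returns entry states:

>>> import numpy as np
>>> from neuromancer.dataset import normalize_01, denormalize_01
>>> M = 10.0 * np.random.rand(100, 2)
>>> Mmin, Mmax = M.min(), M.max()
>>> N = normalize_01(M, Mmin, Mmax)  # scaled to [0, 1]
>>> M_back = denormalize_01(N, Mmin, Mmax)  # recovers M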

neuromancer.dataset.normalize_data(data, norm_type, stats=None)[source]

Normalize data, optionally using arbitrary statistics (e.g. computed from train split).

Parameters:
  • data – (dict str: np.array) data dictionary.

  • norm_type – (str) type of normalization to use; can be “zero-one”, “one-one”, or “zscore”.

  • stats – (dict str: np.array) statistics to use for normalization. Default is None, in which case stats are inferred by underlying normalization function.
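
A hedged sketch of statistics reuse across splits; the data keys are hypothetical, and the return pair (normalized data, stats) is an assumption consistent with the stats argument above:

>>> import numpy as np
>>> from neuromancer.dataset import normalize_data
>>> train_split = {'X': np.random.randn(500, 3)}
>>> dev_split = {'X': np.random.randn(100, 3)}
>>> norm_train, stats = normalize_data(train_split, 'zscore')
>>> norm_dev, _ = normalize_data(dev_split, 'zscore', stats=stats)  # reuse train statistics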

neuromancer.dataset.read_file(file_or_dir)[source]
neuromancer.dataset.split_sequence_data(data, nsteps, moving_horizon=False, split_ratio=None)[source]

Split a data dictionary into train, development, and test sets. Splits data into thirds by default, but arbitrary split ratios for train and development can be provided.

Parameters:
  • data – (dict str: np.array or list[str: np.array]) data dictionary.

  • nsteps – (int) N-step prediction horizon for batching data; used here to ensure split lengths are evenly divisible by N.

  • moving_horizon – (bool) whether batches use a sliding window with stride 1; else stride of N is assumed.

  • split_ratio – (list float) Two numbers indicating percentage of data included in train and development sets (out of 100.0). Default is None, which splits data into thirds.

neuromancer.dataset.split_static_data(data, split_ratio=None)[source]

Split a data dictionary into train, development, and test sets. Splits data into thirds by default, but arbitrary split ratios for train and development can be provided.

Parameters:
  • data – (dict str: np.array or list[str: np.array]) data dictionary.

  • split_ratio – (list float) Two numbers indicating percentage of data included in train and development sets (out of 100.0). Default is None, which splits data into thirds.

neuromancer.dataset.standardize(M, mean=None, std=None)[source]
neuromancer.dataset.unbatch_tensor(x: Tensor, mh: bool = False)[source]

neuromancer.gradients module

Support functions and objects for differentiating neuromancer objects: computing gradients, Jacobians, and PWA forms for components, variables, and constraints.

neuromancer.gradients.gradient(y, x, grad_outputs=None, create_graph=True)[source]

Compute gradients dy/dx.

Parameters:
  • y – [tensors] outputs

  • x – [tensors] inputs

  • grad_outputs
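
A small sketch of gradient() on a scalar-valued function:

>>> import torch
>>> from neuromancer.gradients import gradient
>>> x = torch.randn(4, requires_grad=True)
>>> y = (x * x).sum()
>>> dydx = gradient(y, x)  # expected to equal 2*x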

neuromancer.gradients.jacobian(y, x)[source]

Compute the Jacobian J = [dy_1/dx_1, …, dy_1/dx_n; …; dy_m/dx_1, …, dy_m/dx_n] by computing gradients dy/dx with grad_outputs set to each standard basis vector [1, 0, …, 0], [0, 1, 0, …, 0], …, [0, …, 0, 1] in turn.

Parameters:
  • y – [tensor] outputs

  • x – [tensor] inputs

neuromancer.loggers module

class neuromancer.loggers.BasicLogger(args=None, savedir='test', verbosity=10, stdout=('nstep_dev_loss', 'loop_dev_loss', 'best_loop_dev_loss', 'nstep_dev_ref_loss', 'loop_dev_ref_loss'))[source]

Bases: object

clean_up()[source]
log_artifacts(artifacts)[source]

Stores artifacts created in training to disk.

Parameters:

artifacts – (dict {str: Object})

log_metrics(output, step=None)[source]

Print metrics to stdout.

Parameters:
  • output – (dict {str: tensor}) Will only record 0d tensors (scalars)

  • step – (int) Epoch of training

log_parameters()[source]

Print experiment parameters to stdout

Parameters:

args – (Namespace) returned by argparse.ArgumentParser.parse_args()

log_weights(model)[source]
Parameters:

model – (nn.Module)

Returns:

(int) The number of learnable parameters in the model

class neuromancer.loggers.MLFlowLogger(args=None, savedir='test', verbosity=1, id=None, stdout=('nstep_dev_loss', 'loop_dev_loss', 'best_loop_dev_loss', 'nstep_dev_ref_loss', 'loop_dev_ref_loss'), logout=None)[source]

Bases: BasicLogger

clean_up()[source]

Remove temporary files from file system

log_artifacts(artifacts={})[source]

Stores artifacts created in training to mlflow.

Parameters:

artifacts – (dict {str: Object})

log_metrics(output, step=0)[source]

Record metrics to mlflow

Parameters:
  • output – (dict {str: tensor}) Will only record 0d torch.Tensors (scalars)

  • step – (int) Epoch of training

log_parameters()[source]

Print experiment parameters to stdout

log_weights(model)[source]
Parameters:

model – (nn.Module)

Returns:

(int) Number of learnable parameters in the model.

neuromancer.loss module

Loss function aggregators that create physics-informed loss functions from the list of defined objective terms and constraints.

Currently supported loss functions:

class neuromancer.loss.AggregateLoss(objectives, constraints)[source]

Bases: Module, ABC

Abstract aggregate loss class for calculating constraints, objectives, and aggregate loss values.

calculate_constraints(input_dict)[source]

Calculate the values of constraints and constraints violations

calculate_objectives(input_dict)[source]

Calculate the value of the objective function for SGD

abstract forward(input_dict)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class neuromancer.loss.AugmentedLagrangeLoss(objectives, constraints, train_data, inner_loop=10, sigma=2.0, mu_max=1000.0, mu_init=0.001, eta=1.0)[source]

Bases: AggregateLoss

Augmented Lagrangian method loss function.

https://en.wikipedia.org/wiki/Augmented_Lagrangian_method

forward(input_dict)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class neuromancer.loss.BarrierLoss(objectives, constraints, barrier='log10', upper_bound=1.0, shift=1.0, alpha=0.5)[source]

Bases: PenaltyLoss

Barrier loss function.

https://en.wikipedia.org/wiki/Barrier_function

Available barrier functions are defined in the self.barriers dictionary. References for relaxed barrier functions:

  • https://arxiv.org/abs/1602.01321

  • https://arxiv.org/abs/1904.04205v2

  • https://ieeexplore.ieee.org/document/7493643/

calculate_constraints(input_dict)[source]
Calculate the magnitudes of constraint violations via log barriers.

cviolation > 0 -> penalty
cviolation <= 0 -> barrier

class neuromancer.loss.PenaltyLoss(objectives, constraints)[source]

Bases: AggregateLoss

Penalty loss function.

https://en.wikipedia.org/wiki/Penalty_method
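
A minimal sketch of aggregating one objective and one constraint (the key 'x' is hypothetical):

>>> from neuromancer.constraint import Variable
>>> from neuromancer.loss import PenaltyLoss
>>> x = Variable(key='x')
>>> obj = (x * x).minimize(weight=1.0, name='obj')  # objective via Variable.minimize
>>> con = 10.0 * ((x >= 0.0) ^ 2)  # weighted constraint with squared violation penalty
>>> loss = PenaltyLoss(objectives=[obj], constraints=[con])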

forward(input_dict)[source]
Parameters:

input_dict – (dict {str: torch.Tensor}) Values from forward pass calculations

Returns:

(dict {str: torch.Tensor}) input_dict appended with calculated loss values

neuromancer.loss.get_loss(objectives, constraints, train_data, args)[source]

neuromancer.plot module

Various helper functions for plotting.

class neuromancer.plot.Animator(dynamics_model)[source]

Bases: object

find_mat(model)[source]
make_and_save(filename)[source]
class neuromancer.plot.Visualizer[source]

Bases: object

eval(outputs)[source]
train_output()[source]
train_plot(outputs, epochs)[source]
class neuromancer.plot.VisualizerClosedLoop(u_key='U_pred_policy', y_key='Y_pred_dynamics', r_key='Rf', d_key=None, ymin_key=None, ymax_key=None, umin_key=None, umax_key=None, policy=None, ctrl_outputs=None, savedir='test_control')[source]

Bases: Visualizer

eval(outputs, plot_weights=False, figname='CL_control.png')[source]
plot_matrix()[source]
class neuromancer.plot.VisualizerDobleIntegrator(dataset, model, policy, dynamics, verbosity, savedir, nstep=40, x0=array([[1.5], [1.5]]), training_visuals=False, trace_movie=False)[source]

Bases: Visualizer

custom visualizer for double integrator example

eval(trainer)[source]
train_output(trainer, epoch_policy)[source]

Visualize the evolution of closed-loop control and the policy landscape during training.

class neuromancer.plot.VisualizerOpen(model, verbosity, savedir, training_visuals=False, trace_movie=False, figname=None)[source]

Bases: Visualizer

eval(outputs)[source]
Parameters:

outputs

Returns:

plot_matrix()[source]
plot_traj(true_traj, pred_traj, figname='open_loop.png')[source]
train_output()[source]
Returns:

train_plot(outputs, epoch)[source]
Parameters:
  • outputs

  • epoch

Returns:

class neuromancer.plot.VisualizerTrajectories(model, plot_keys, verbosity)[source]

Bases: Visualizer

eval(outputs)[source]
class neuromancer.plot.VisualizerUncertaintyOpen(dataset, savedir, dynamics_name='dynamics')[source]

Bases: Visualizer

eval(outputs)[source]
Parameters:

outputs

Returns:

plot_traj(true_traj, pred_traj, pred_mean, pred_std, figname='open_loop.png')[source]
neuromancer.plot.cl_simulate(A, B, net, nstep=50, x0=array([[1.], [1.]]))[source]
Parameters:
  • A

  • B

  • net

  • nstep

  • x0

Returns:

neuromancer.plot.get_colors(k)[source]

Returns k colors evenly spaced across the color wheel.

Parameters:

k – (int) Number of colors you want.

Returns:

(np.array, shape=[k, 3])

neuromancer.plot.plot_cl(X, U, nstep=50, save_path=None, trace_movie=False)[source]
neuromancer.plot.plot_cl_train(X_list, U_list, nstep=50, save_path=None)[source]
neuromancer.plot.plot_loss_DPC(model, policy, A, B, dataset, xmin=-5, xmax=5, save_path=None)[source]

Plot the loss function for a trained DPC model.

neuromancer.plot.plot_loss_mpp(model, dataset, xmin=-2, xmax=2, save_path=None)[source]

Plot the loss function for a multiparametric problem with 2 parameters.

neuromancer.plot.plot_matrices(matrices, labels, figname)[source]

Plots and saves figure of a grid of matrices. Useful for inspecting layers of weights of neural networks.

Parameters:
  • matrices – (list of lists of 2-way np.arrays) Grid of matrices to plot

  • labels – (list of lists of str) Labels for plotted matrices

  • figname – (str) Figure name ending with file extension of filetype to save as.

>>> import neuromancer.plot as plot
>>> color_matrices = [[get_colors(k*j) for k in range(2, 4)] for j in range(8, 11)]
>>> labels = [[f'{k*j} X 3 matrix' for k in range(2, 4)] for j in range(8, 11)]
>>> plot_matrices(color_matrices, labels, 'matrix_grid.png')
neuromancer.plot.plot_model_graph(model, data_keys, include_objectives=True, fname='model_graph.png')[source]
neuromancer.plot.plot_policy(net, xmin=-5, xmax=5, save_path=None)[source]
neuromancer.plot.plot_policy_train(A, B, policy, policy_list, xmin=-5, xmax=5, save_path=None)[source]
neuromancer.plot.plot_solution_mpp(model, xmin=-2, xmax=2, save_path=None)[source]

Plot the solution landscape for a problem with 2 parameters and 1 decision variable.

neuromancer.plot.plot_traj(data, figname=None)[source]
Parameters:
  • data – (dict {str: np.array}) Dictionary of labels and time series

  • figname – (str)

neuromancer.plot.plot_trajectories(traj1, traj2, labels, figname)[source]
neuromancer.plot.pltCL(Y, R=None, U=None, D=None, X=None, ctrl_outputs=None, Ymin=None, Ymax=None, Umin=None, Umax=None, figname=None)[source]

plot input-output closed-loop dataset

neuromancer.plot.pltCorrelate(X, figname=None)[source]

plot correlation matrices of time series data

neuromancer.plot.pltOL(Y, Ytrain=None, U=None, D=None, X=None, figname=None)[source]

plot trained open-loop dataset

neuromancer.plot.pltPhase(X, figname=None)[source]
Parameters:
  • X – (np.array, shape=[numpoints, {2,3}])

  • figname – (str) Filename for plot with extension for file type.

plot phase space for 2D and 3D state spaces

>>> import numpy as np
>>> import neuromancer.plot as plot
>>> x = np.stack([np.linspace(-10, 10, 100)]*100)
>>> y = np.stack([np.linspace(-10, 10, 100)]*100).T
>>> z = x**2 + y**2
>>> xyz = np.stack([x.flatten(), y.flatten(), z.flatten()])
>>> plot.pltPhase(xyz, figname='phase.png')
neuromancer.plot.pltRecurrence(X, figname=None)[source]

plot recurrence of time series data

neuromancer.plot.trajectory_movie(true_traj, pred_traj, figname='traj.mp4', freq=1, fps=15, dpi=100)[source]

neuromancer.problem module

class neuromancer.problem.Problem(nodes: List[Callable[[Dict[str, Tensor]], Dict[str, Tensor]]], loss: Callable[[Dict[str, Tensor]], Dict[str, Tensor]], grad_inference=False, check_overwrite=False)[source]

Bases: Module

This class is similar in spirit to a nn.Sequential module. However, by concatenating input and output dictionaries for each node module we can represent arbitrary directed acyclic computation graphs. In addition, the Problem module takes care of calculating the loss via a given instantiated weighted multi-objective PenaltyLoss object, which calculates objective and constraint terms from the aggregated input and the set of outputs from the node modules.
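
A hedged sketch of assembling a Problem from a Node and a PenaltyLoss (the key names are hypothetical):

>>> import torch
>>> from neuromancer.system import Node
>>> from neuromancer.problem import Problem
>>> from neuromancer.constraint import Variable
>>> from neuromancer.loss import PenaltyLoss
>>> node = Node(torch.nn.Linear(2, 1), ['x'], ['y'], name='map')
>>> y = Variable(key='y')
>>> loss = PenaltyLoss(objectives=[(y * y).minimize(name='obj')], constraints=[])
>>> problem = Problem(nodes=[node], loss=loss)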

forward(data: Dict[str, Tensor]) → Dict[str, Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

graph(include_objectives=True)[source]
show(figname=None)[source]
step(input_dict: Dict[str, Tensor]) → Dict[str, Tensor][source]

neuromancer.system module

Open-loop (directed acyclic graph) and closed-loop (directed cyclic graph) system components.

Minimum viable product:

  1. system class for open-loop rollout of autonomous nn.Module class

  2. system class for open-loop rollout of non-autonomous nn.Module class

  3. system class for closed-loop rollout of simple DPC with neural policy and nonautonomous dynamics class (e.g. SSM, psl, …)

Notes on simple implementation:

  • Time delay can be handled inside nodes simply or with more complexity

  • Sporadically sampled data can be handled prior with interpolation

  • Different time scales can be handled with nested systems

  • Networked systems seem like a natural fit here

class neuromancer.system.MovingHorizon(module, ndelay=1, history=None)[source]

Bases: Module

The MovingHorizon class buffers single time step inputs for time-delay modeling from past ndelay steps. This class is a wrapper which does data handling for modules which take 3-d input (batch, time, dim)

forward(input)[source]

The forward pass appends the input dictionary to the history buffer and gives last ndelay steps to the module. If history is blank the first step will be repeated ndelay times to initialize the buffer.

Parameters:

input – (dict: str: 2-d tensor (batch, dim)) Dictionary of single step tensor inputs

Returns:

(dict: str: 3-d Tensor (ndelay, batch, dim)) Dictionary of tensor outputs

class neuromancer.system.Node(callable, input_keys, output_keys, name=None)[source]

Bases: Module

Simple class to handle cyclic computational graph connections. input_keys and output_keys define computational node connections through intermediate dictionaries.
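
A minimal sketch, assuming the wrapped callable receives one tensor per input key and returns one tensor per output key (key names are hypothetical):

>>> import torch
>>> from neuromancer.system import Node
>>> node = Node(lambda x, u: x + u, ['x', 'u'], ['x_next'], name='adder')
>>> node({'x': torch.zeros(1, 2), 'u': torch.ones(1, 2)})  # -> {'x_next': tensor}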

forward(data)[source]

This call function wraps the callable to receive/send dictionaries of Tensors

Parameters:

data – (dict {str: Tensor}) input to callable with associated input_keys

Returns:

(dict {str: Tensor}) Output of callable with associated output_keys

class neuromancer.system.System(nodes, name=None, nstep_key='X', init_func=None, nsteps=None)[source]

Bases: Module

Simple implementation for arbitrary cyclic computation
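
A minimal rollout sketch; feeding a node's output key back in as its input key closes the loop (the key name 'x' is hypothetical):

>>> import torch
>>> from neuromancer.system import Node, System
>>> decay = Node(lambda x: 0.9 * x, ['x'], ['x'], name='decay')
>>> system = System([decay], nsteps=10)
>>> out = system({'x': torch.ones(2, 1, 3)})  # (batch, time, dim) input, 10-step rollout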

cat(data3d, data2d)[source]

Concatenates data2d contents to corresponding entries in data3d.

Parameters:
  • data3d – (dict {str: Tensor}) Input to a node

  • data2d – (dict {str: Tensor}) Output of a node

Returns:

(dict: {str: Tensor})

forward(input_dict)[source]
Parameters:

input_dict – (dict: {str: Tensor}) Tensor shapes in the dictionary are assumed to be (batch, time, dim). An init function should be written to ensure that any 1-d or 2-d tensors are expanded to 3 dims.

Returns:

(dict: {str: Tensor}) data with outputs of nstep rollout of Node interactions

graph()[source]
init(data)[source]
Parameters:

data – (dict: {str: Tensor}) Tensor shapes in dictionary are assumed to be (batch, time, dim)

Returns:

(dict: {str: Tensor})

Any nodes in the graph that are start nodes will need some data initialized; a sketch of initializing an x0 entry in the input_dict is shown below.

The base class provides analysis of the computational graph and labels the source nodes; keys for source nodes have to be present in the data.
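
A hedged sketch of such an init function (the 'x0' and 'X' keys are illustrative):

>>> def init_func(data):
...     if 'x0' not in data:
...         data['x0'] = data['X'][:, 0:1, :]  # seed the rollout with the first step of X
...     return data
>>> system = System(nodes, init_func=init_func)  # nodes as defined elsewhere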

show(figname=None)[source]

neuromancer.trainer module

class neuromancer.trainer.Trainer(problem: neuromancer.problem.Problem, train_data: torch.utils.data.dataloader.DataLoader, dev_data: torch.utils.data.dataloader.DataLoader | None = None, test_data: torch.utils.data.dataloader.DataLoader | None = None, optimizer: torch.optim.optimizer.Optimizer | None = None, logger: neuromancer.loggers.BasicLogger | None = None, callback=<neuromancer.callbacks.Callback object>, lr_scheduler=False, epochs=1000, epoch_verbose=1, patience=5, warmup=0, train_metric='train_loss', dev_metric='dev_loss', test_metric='test_loss', eval_metric='dev_loss', eval_mode='min', clip=100.0, device='cpu')[source]

Bases: object

Class encapsulating boilerplate PyTorch training code. Training procedure is somewhat extensible through methods in Callback objects associated with training and evaluation waypoints.
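
A hedged end-to-end sketch, reusing the problem and dataloaders from the sketches above; that train() returns the best model per eval_metric is an assumption consistent with the method descriptions below:

>>> import torch
>>> from neuromancer.trainer import Trainer
>>> optimizer = torch.optim.Adam(problem.parameters(), lr=0.001)
>>> trainer = Trainer(problem, train_loader, dev_data=dev_loader,
...                   optimizer=optimizer, epochs=200, patience=20)
>>> best_model = trainer.train()  # assumed to return the best model weights
>>> results = trainer.test(best_model)  # evaluate on all data splits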

evaluate(best_model)[source]

This method is deprecated. Use self.test instead.

test(best_model)[source]

Evaluate the model on all data splits.

train()[source]

Optimize model according to train_metric and validate per-epoch according to eval_metric. Trains for self.epochs and terminates early if self.patience threshold is exceeded.

neuromancer.trainer.move_batch_to_device(batch, device='cpu')[source]

Module contents