neuromancer package
Subpackages
- neuromancer.dynamics package
- Submodules
- neuromancer.dynamics.integrators module
- neuromancer.dynamics.interpolation module
- neuromancer.dynamics.ode module
- neuromancer.dynamics.physics module
- Module contents
- neuromancer.modules package
- Submodules
- neuromancer.modules.activations module
- neuromancer.modules.blocks module
- neuromancer.modules.functions module
- neuromancer.modules.rnn module
- neuromancer.modules.solvers module
- Module contents
- neuromancer.psl package
- Submodules
- neuromancer.psl.autonomous module
- neuromancer.psl.base module
Backend
EmulatorBase
EmulatorBase.add_missing_parameters()
EmulatorBase.change_backend()
EmulatorBase.denormalize()
EmulatorBase.get_x0()
EmulatorBase.normalize()
EmulatorBase.params
EmulatorBase.restore_random_state()
EmulatorBase.save_random_state()
EmulatorBase.set_params()
EmulatorBase.set_stats()
EmulatorBase.show()
EmulatorBase.simulate()
EquationWrapper
ODE_Autonomous
ODE_NonAutonomous
cast_backend()
download()
grad()
- neuromancer.psl.building_envelope module
BuildingEnvelope
BuildingEnvelope.T_dist_idx
BuildingEnvelope.equations()
BuildingEnvelope.forward()
BuildingEnvelope.get_D()
BuildingEnvelope.get_D_obs()
BuildingEnvelope.get_R()
BuildingEnvelope.get_U()
BuildingEnvelope.get_q()
BuildingEnvelope.get_simulation_args()
BuildingEnvelope.get_xy()
BuildingEnvelope.params
BuildingEnvelope.path
BuildingEnvelope.simulate()
BuildingEnvelope.systems
BuildingEnvelope.umax
BuildingEnvelope.umin
BuildingEnvelope.url
LinearBuildingEnvelope
- neuromancer.psl.coupled_systems module
- neuromancer.psl.file_emulator module
- neuromancer.psl.gym module
- neuromancer.psl.nonautonomous module
- neuromancer.psl.norms module
- neuromancer.psl.perturb module
- neuromancer.psl.plot module
- neuromancer.psl.signals module
- neuromancer.psl.system_emulator module
- Module contents
- neuromancer.slim package
- Subpackages
- neuromancer.slim.butterfly package
- Submodules
- neuromancer.slim.butterfly.benchmark module
- neuromancer.slim.butterfly.butterfly module
- neuromancer.slim.butterfly.butterfly_multiply module
- neuromancer.slim.butterfly.complex_utils module
- neuromancer.slim.butterfly.permutation module
- neuromancer.slim.butterfly.permutation_multiply module
- neuromancer.slim.butterfly.utils module
- Module contents
- Submodules
- neuromancer.slim.bench module
- neuromancer.slim.linear module
BoundedNormLinear
ButterflyLinear
DampedSkewSymmetricLinear
GershgorinLinear
Hprod()
IdentityGradReLU
IdentityInitLinear
IdentityLinear
L0Linear
LassoLinear
LassoLinearRELU
LeftStochasticLinear
Linear
LinearBase
NonNegativeLinear
OrthogonalLinear
PSDLinear
PerronFrobeniusLinear
PowerBoundLinear
RightStochasticLinear
SVDLinear
SVDLinearLearnBounds
SchurDecompositionLinear
SkewSymmetricLinear
SpectralLinear
SplitLinear
SquareLinear
StableSplitLinear
SymmetricLinear
SymmetricSVDLinear
SymmetricSpectralLinear
SymplecticLinear
TrivialNullSpaceLinear
- neuromancer.slim.rnn module
- Module contents
Submodules
neuromancer.arg module
This module contains an extension of the argparse.ArgumentParser class and some parsers that are generally useful for writing training scripts using the neuromancer library. ArgumentParser is extended to take advantage of grouped arguments when passing command line arguments to functions and to abbreviate argparse’s verbose method names.
- class neuromancer.arg.ArgParser(prefix='', **kwargs)[source]
Bases:
ArgumentParser
Subclass of ArgumentParser with abbreviated method calls, separate namespaces for argument groups, and an optional command line prefix so that parser definitions can be reused.
- check_for_group(group_name)[source]
Returns the argument group named group_name if it exists, otherwise None.
- Parameters:
group_name – (str) Name of the argument group
- Returns:
(argparse._ArgumentGroup or None)
- group(group_name, prefix=None)[source]
Monkey patch to abbreviate verbose call to add_argument_group. If an argument group exists by the name group_name it will be returned, otherwise a new argument group will be created and returned.
- Parameters:
group_name – (str) Name of the argument group
- Returns:
argparse._ArgumentGroup
- neuromancer.arg.add(self, argname, **kwargs)[source]
Monkey patch for argument group objects.
- Parameters:
self – Refers to the instantiated object of the class being patched
argname – (str) Command line argument
kwargs – Keyword arguments forwarded to add_argument
- Returns:
The argparse.Action object created by add_argument
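A minimal usage sketch of the abbreviated interface (the group name and -lr argument here are hypothetical):
>>> from neuromancer.arg import ArgParser
>>> parser = ArgParser()
>>> gp = parser.group('OPTIMIZATION')  # existing group returned, or a new one created
>>> gp.add('-lr', type=float, default=0.001, help='learning rate')  # shorthand for add_argument
>>> args = parser.parse_args([])  # empty argv for illustration; args.lr == 0.001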
- neuromancer.arg.ctrl_loss(prefix='')[source]
Command line parser for special control loss arguments
- Parameters:
prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.
- Returns:
(arg.ArgParse) A command line parser
- neuromancer.arg.data(prefix='', system='CSTR')[source]
Command line parser for data arguments
- Parameters:
prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.
system – (str) Default system name for the data arguments.
- Returns:
(arg.ArgParse) A command line parser
- neuromancer.arg.freeze(prefix='')[source]
Command line parser for weight freezing arguments
- Parameters:
prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.
- Returns:
(arg.ArgParse) A command line parser
- neuromancer.arg.lin(prefix='')[source]
Command line parser for linear map arguments
- Parameters:
prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.
- Returns:
(arg.ArgParse) A command line parser
- neuromancer.arg.log(prefix='')[source]
Command line parser for logging arguments
- Parameters:
prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.
- Returns:
(arg.ArgParse) A command line parser
- neuromancer.arg.loss(prefix='')[source]
Command line parser for loss function arguments
- Parameters:
prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.
- Returns:
(arg.ArgParse) A command line parser
- neuromancer.arg.opt(prefix='')[source]
Command line parser for optimization arguments
- Parameters:
prefix – (str) Optional prefix for command line arguments to resolve naming conflicts when multiple parsers are bundled as parents.
- Returns:
(arg.ArgParse) A command line parser
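The prefix mechanism exists so that these stock parsers can be bundled as parents of a single script-level parser; a plausible wiring (a sketch, not a prescribed setup):
>>> from neuromancer import arg
>>> parser = arg.ArgParser(parents=[arg.log(), arg.opt(), arg.loss()])  # grouped args from each parser
>>> args = parser.parse_args([])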
neuromancer.callbacks module
Callback classes for versatile behavior in the Trainer object at specified checkpoints.
neuromancer.constraint module
Definition of neuromancer.Constraint class used in conjunction with neuromancer.Variable class. A Constraint has the same behavior as a Loss but with intuitive syntax for defining via Variable objects.
- class neuromancer.constraint.Constraint(left, right, comparator, weight=1.0, name=None)[source]
Bases:
Module
Drop-in replacement for a Loss object, but constructed by a composition of Variable objects using the comparative infix operators ‘<’, ‘>’, ‘==’, ‘<=’, ‘>=’, with ‘*’ to weight the loss component and ‘^’ to determine the l-norm of the constraint violation used in the loss.
- forward(input_dict)[source]
- Parameters:
input_dict – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names
- Returns:
0-dimensional torch.Tensor that can be cast as a floating point number
- grad(input_dict, input_key=None)[source]
returns gradient of the loss w.r.t. input key
- Parameters:
input_dict – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names
input_key – (str) Name of variable in input dict to take gradient with respect to.
- Returns:
(torch.Tensor)
- property variable_names
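For example, with a hypothetical data key ‘x’, the comparison, weighting, and norm operators compose as follows (a sketch of the documented operator behavior):
>>> import torch
>>> from neuromancer.constraint import Variable
>>> x = Variable(key='x')                        # input Variable reading 'x' from the data dict
>>> con = 2.0 * (x <= 1.0) ^ 2                   # weight 2.0, squared (2-norm) penalty on x <= 1
>>> loss = con({'x': torch.tensor([0.5, 2.0])})  # mean squared violation, scaled by the weight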
- class neuromancer.constraint.Eq(norm=1)[source]
Bases:
Module
Equality constraint penalizing difference between left and right hand side. Used for defining infix operator for the Variable class and calculating constraint violation losses for the forward pass of Constraint objects.
constraint: g(x) == b
forward pass returns:
value = g(x) - b
penalty = g(x) - b
loss = torch.mean(penalty)
- class neuromancer.constraint.GT(norm=1)[source]
Bases:
Module
Greater than constraint for lower bounding the left hand side by the right hand side. Used for defining infix operator for the Variable class and calculating constraint violation losses for the forward pass of Constraint objects.
constraint: g(x) >= b
forward pass returns:
value = b - g(x)
penalty = relu(b - g(x))
loss = torch.mean(penalty)
- class neuromancer.constraint.LT(norm=1)[source]
Bases:
Module
Less than constraint for upper bounding the left hand side by the right hand side. Used for defining infix operator for the Variable class and calculating constraint violation losses for the forward pass of Constraint objects.
constraint: g(x) <= b
forward pass returns:
value = g(x) - b
penalty = relu(g(x) - b)
loss = torch.mean(penalty)
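For intuition, the LT rule computed directly in torch (a worked example, not library code):
>>> import torch
>>> gx, b = torch.tensor([0.5, 1.5, 3.0]), 2.0
>>> value = gx - b               # signed distance to the bound g(x) <= b
>>> penalty = torch.relu(value)  # only violations g(x) > b contribute
>>> loss = torch.mean(penalty)   # mean of [0., 0., 1.] -> tensor(0.3333)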
- class neuromancer.constraint.Loss(input_keys: List[str], loss: Callable[[...], Tensor], weight=1.0, name='loss')[source]
Bases:
Module
Drop-in replacement for a Constraint object, but instantiated from a list of dictionary keys and a callable function.
- class neuromancer.constraint.Objective(var, metric=<built-in method mean of type object>, weight=1.0, name=None)[source]
Bases:
Module
Drop-in replacement for a Loss object constructed via a neuromancer Variable object; the forward pass evaluates the metric as a torch function on the Variable’s values.
- forward(input_dict)[source]
- Parameters:
input_dict – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names
- Returns:
(dict, {str: 0-dimensional torch.Tensor}) tensor value can be cast as a floating point number
- grad(input_dict, input_key=None)[source]
returns gradient of the loss w.r.t. input variables
- Parameters:
input_dict – (dict, {str: torch.Tensor}) Should contain keys corresponding to self.variable_names
input_key – (str) Name of variable in input dict to take gradient with respect to.
- Returns:
(torch.Tensor)
- property variable_names
- class neuromancer.constraint.Variable(input_variables=[], func=None, key=None, display_name=None, value=None)[source]
Bases:
Module
Variable is an abstraction that allows for the definition of constraints and objectives with some nice syntactic sugar. When a Variable object is called with a dictionary, a pytorch tensor is returned; when a Variable object is subjected to a comparison operator, a Constraint is returned. Mathematical operators return Variables which will instantiate and perform the sequence of mathematical operations. PyTorch callables called with variables as inputs return variables. Supported infix operators (variable * variable, variable * numeric): +, -, *, @, **, /, <, <=, >, >=, ==, ^
- property T
- property display_name
- forward(datadict=None)[source]
Forward pass goes through topologically sorted nodes calculating or retrieving values.
- Parameters:
datadict – (dict, {str: Tensor}) Optional dictionary for Variable graphs which take input
- Returns:
(torch.Tensor) Tensor value from evaluating the variable’s computational graph.
- property key
Used by input Variables to retrieve Tensor values from a dictionary. Will be used as a display_name if display_name is not provided to __init__.
- Returns:
(str) String intended to be a key in a dict {str: Tensor}
- property keys
- property mT
- make_graph(input_variables)[source]
This is the function that composes the graph of the Variable from constituent input variables which are in-nodes to the Variable. It first builds an empty graph then adds itself to the graph. Then it goes through the inputs and instantiates Variable objects for them if they are not already a Variable. Then it combines the graphs of all Variables by unioning the sets of nodes and edges. In the penultimate step edges are added to the graph from the inputs to the Variable being instantiated, taking care to shallow copy nodes when there is more than one edge between nodes. Finally, the graph is topologically sorted for swift evaluation of the directed acyclic graph.
- Parameters:
input_variables – List of arbitrary inputs for self._func
- Returns:
A topologically sorted list of Variable objects
- show(figname=None)[source]
Plot and save computational graph
- Parameters:
figname – (str) Name to save figure to.
- training: bool
- unpack[source]
Creates new variables for a node that evaluates to multiple values. This is useful for unpacking results of functions that return multiple values such as torch.linalg.svd:
- Parameters:
nret – (int) Number of return values from the torch function
- Returns:
[Variable] List of Variable objects for each value returned by the torch function
- property value
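A small sketch of the lazy-graph behavior (the key ‘x’ is hypothetical):
>>> import torch
>>> from neuromancer.constraint import Variable
>>> x = Variable(key='x')        # input node; retrieves 'x' from the data dict
>>> y = x**2 + 1.0               # operators build a new Variable (a DAG); nothing evaluates yet
>>> y({'x': torch.tensor(2.0)})  # forward pass evaluates the sorted graph -> tensor(5.)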
- neuromancer.constraint.variable(display_name=None)[source]
For instantiating a trainable Variable. Returns a Variable whose trainable value is a 0-dimensional Tensor drawn from the standard normal distribution.
- Parameters:
display_name – (str) for plotting graph and __repr__
- Returns:
Variable with value = 0 dimensional nn.Parameter with requires_grad=True
neuromancer.dataset module
- class neuromancer.dataset.DictDataset(datadict, name='train')[source]
Bases:
Dataset
Basic dataset compatible with neuromancer Trainer
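A typical wiring, assuming DictDataset exposes a collate_fn like the other datasets in this module (the key ‘p’ is hypothetical):
>>> import torch
>>> from torch.utils.data import DataLoader
>>> from neuromancer.dataset import DictDataset
>>> train_data = DictDataset({'p': torch.rand(200, 2)}, name='train')
>>> loader = DataLoader(train_data, batch_size=50, collate_fn=train_data.collate_fn)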
- class neuromancer.dataset.GraphDataset(node_attr: Dict | None = {}, edge_attr: Dict | None = {}, graph_attr: Dict | None = {}, metadata: Dict | None = {}, seq_len: int = 6, seq_horizon: int = 1, seq_stride: int = 1, graphs: Dict | None = None, build_graphs: str | None = None, connectivity_radius: float = 0.015, graph_self_loops=True, name: str = 'data')[source]
Bases:
Dataset
- static collate_fn(x)[source]
Batch collation for dictionaries of samples generated by this dataset. This wraps the default PyTorch batch collation function and does some light post-processing to transpose the data for NeuroMANCER models and add a “name” field.
- Parameters:
batch – (list of dict str: torch.Tensor) dataset sample. Requires key ‘edge_index’
- class neuromancer.dataset.SequenceDataset(data, nsteps=1, moving_horizon=False, name='data')[source]
Bases:
Dataset
- collate_fn(batch)[source]
Batch collation for dictionaries of samples generated by this dataset. This wraps the default PyTorch batch collation function and does some light post-processing to transpose the data for NeuroMANCER models and add a “name” field.
- Parameters:
batch – (dict str: torch.Tensor) dataset sample.
- class neuromancer.dataset.StaticDataset(data, name='data')[source]
Bases:
Dataset
- neuromancer.dataset.denormalize_01(M, Mmin, Mmax)[source]
Denormalize min-max normalized data.
- Parameters:
M – (2-d np.array) Data to be denormalized
Mmin – (int) Minimum value
Mmax – (int) Maximum value
- Returns:
(2-d np.array) Denormalized data
- neuromancer.dataset.denormalize_11(M, Mmin, Mmax)[source]
Denormalize min-max normalized data.
- Parameters:
M – (2-d np.array) Data to be denormalized
Mmin – (int) Minimum value
Mmax – (int) Maximum value
- Returns:
(2-d np.array) Denormalized data
- neuromancer.dataset.get_sequence_dataloaders(data, nsteps, moving_horizon=False, norm_type=None, split_ratio=None, num_workers=0, batch_size=None)[source]
This function will generate dataloaders and open-loop sequence dictionaries for a given dictionary of data. Dataloaders are hard-coded for full-batch training to match NeuroMANCER’s original training setup.
- Parameters:
data – (dict str: np.array or list[dict str: np.array]) data dictionary or list of data dictionaries; if latter is provided, multi-sequence datasets are created and splits are computed over the number of sequences rather than their lengths.
nsteps – (int) length of windowed subsequences for N-step training.
moving_horizon – (bool) whether to use moving horizon batching.
norm_type – (str) type of normalization; see function normalize_data for more info.
split_ratio – (list float) percentage of data in train and development splits; see function split_sequence_data for more info.
num_workers – (int, optional) how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)
batch_size – (int, optional) how many samples per batch to load (default: full-batch via len(data)).
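A plausible call with a single 1000-step trajectory (the keys ‘X’ and ‘U’ are hypothetical, and the packing of the returned loaders is not shown here):
>>> import numpy as np
>>> from neuromancer.dataset import get_sequence_dataloaders
>>> data = {'X': np.random.rand(1000, 3), 'U': np.random.rand(1000, 2)}
>>> loaders = get_sequence_dataloaders(data, nsteps=8, norm_type='zero-one', split_ratio=[70.0, 15.0])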
- neuromancer.dataset.get_static_dataloaders(data, norm_type=None, split_ratio=None, num_workers=0, batch_size=32)[source]
This will generate dataloaders for a given dictionary of data. Dataloaders are hard-coded for full-batch training to match NeuroMANCER’s training setup.
- Parameters:
data – (dict str: np.array or list[dict str: np.array]) data dictionary or list of data dictionaries; if latter is provided, multi-sequence datasets are created and splits are computed over the number of sequences rather than their lengths.
norm_type – (str) type of normalization; see function normalize_data for more info.
split_ratio – (list float) percentage of data in train and development splits; see function split_static_data for more info.
num_workers – (int, optional) how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)
batch_size – (int, optional) how many samples per batch to load (default: 32).
- neuromancer.dataset.normalize_01(M, Mmin=None, Mmax=None)[source]
- Parameters:
M – (2-d np.array) Data to be normalized
Mmin – (int) Optional minimum. If not provided is inferred from data.
Mmax – (int) Optional maximum. If not provided is inferred from data.
- Returns:
(2-d np.array) Min-max normalized data
- neuromancer.dataset.normalize_11(M, Mmin=None, Mmax=None)[source]
- Parameters:
M – (2-d np.array) Data to be normalized
Mmin – (int) Optional minimum. If not provided is inferred from data.
Mmax – (int) Optional maximum. If not provided is inferred from data.
- Returns:
(2-d np.array) Min-max normalized data
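Both helpers implement the standard min-max formulas, sketched here in plain numpy (the library may handle edge cases such as constant data differently):
>>> import numpy as np
>>> M = np.array([[0.0], [5.0], [10.0]])
>>> M01 = (M - M.min()) / (M.max() - M.min())  # normalize_01: values in [0, 1]
>>> M11 = 2.0 * M01 - 1.0                      # normalize_11: values in [-1, 1]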
- neuromancer.dataset.normalize_data(data, norm_type, stats=None)[source]
Normalize data, optionally using arbitrary statistics (e.g. computed from train split).
- Parameters:
data – (dict str: np.array) data dictionary.
norm_type – (str) type of normalization to use; can be “zero-one”, “one-one”, or “zscore”.
stats – (dict str: np.array) statistics to use for normalization. Default is None, in which case stats are inferred by underlying normalization function.
- neuromancer.dataset.split_sequence_data(data, nsteps, moving_horizon=False, split_ratio=None)[source]
Split a data dictionary into train, development, and test sets. Splits data into thirds by default, but arbitrary split ratios for train and development can be provided.
- Parameters:
data – (dict str: np.array or list[str: np.array]) data dictionary.
nsteps – (int) N-step prediction horizon for batching data; used here to ensure split lengths are evenly divisible by N.
moving_horizon – (bool) whether batches use a sliding window with stride 1; else stride of N is assumed.
split_ratio – (list float) Two numbers indicating percentage of data included in train and development sets (out of 100.0). Default is None, which splits data into thirds.
- neuromancer.dataset.split_static_data(data, split_ratio=None)[source]
Split a data dictionary into train, development, and test sets. Splits data into thirds by default, but arbitrary split ratios for train and development can be provided.
- Parameters:
data – (dict str: np.array or list[str: np.array]) data dictionary.
split_ratio – (list float) Two numbers indicating percentage of data included in train and development sets (out of 100.0). Default is None, which splits data into thirds.
neuromancer.gradients module
Support functions and objects for differentiating neuromancer objects: computing gradients, Jacobians, and PWA forms for components, variables, and constraints.
neuromancer.loggers module
- class neuromancer.loggers.BasicLogger(args=None, savedir='test', verbosity=10, stdout=('nstep_dev_loss', 'loop_dev_loss', 'best_loop_dev_loss', 'nstep_dev_ref_loss', 'loop_dev_ref_loss'))[source]
Bases:
object
- log_artifacts(artifacts)[source]
Stores artifacts created in training to disk.
- Parameters:
artifacts – (dict {str: Object})
- log_metrics(output, step=None)[source]
Print metrics to stdout.
- Parameters:
output – (dict {str: tensor}) Will only record 0d tensors (scalars)
step – (int) Epoch of training
- class neuromancer.loggers.MLFlowLogger(args=None, savedir='test', verbosity=1, id=None, stdout=('nstep_dev_loss', 'loop_dev_loss', 'best_loop_dev_loss', 'nstep_dev_ref_loss', 'loop_dev_ref_loss'), logout=None)[source]
Bases:
BasicLogger
- log_artifacts(artifacts={})[source]
Stores artifacts created in training to mlflow.
- Parameters:
artifacts – (dict {str: Object})
neuromancer.loss module
Loss function aggregators that create physics-informed loss functions from the list of defined objective terms and constraints.
Currently supported loss functions:
- class neuromancer.loss.AggregateLoss(objectives, constraints)[source]
Bases:
Module, ABC
Abstract aggregate loss class for calculating constraints, objectives, and aggregate loss values.
- calculate_constraints(input_dict)[source]
Calculate the values of constraints and constraints violations
- abstract forward(input_dict)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class neuromancer.loss.AugmentedLagrangeLoss(objectives, constraints, train_data, inner_loop=10, sigma=2.0, mu_max=1000.0, mu_init=0.001, eta=1.0)[source]
Bases:
AggregateLoss
- Augmented Lagrangian method loss function.
- forward(input_dict)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class neuromancer.loss.BarrierLoss(objectives, constraints, barrier='log10', upper_bound=1.0, shift=1.0, alpha=0.5)[source]
Bases:
PenaltyLoss
Barrier loss function.
- https://en.wikipedia.org/wiki/Barrier_function
Available barrier functions are defined in the self.barriers dictionary.
References for relaxed barrier functions:
- https://arxiv.org/abs/1602.01321
- https://arxiv.org/abs/1904.04205v2
- https://ieeexplore.ieee.org/document/7493643/
- class neuromancer.loss.PenaltyLoss(objectives, constraints)[source]
Bases:
AggregateLoss
- Penalty loss function.
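Either aggregator is built from lists of Objective and Constraint terms; a minimal sketch using the documented constructors (the key ‘x’ is hypothetical):
>>> import torch
>>> from neuromancer.constraint import Variable, Objective
>>> from neuromancer.loss import PenaltyLoss, BarrierLoss
>>> x = Variable(key='x')
>>> obj = Objective(x**2, metric=torch.mean)  # minimize mean of x^2
>>> con = (x >= -1.0) ^ 2                     # squared penalty on violations
>>> loss = PenaltyLoss(objectives=[obj], constraints=[con])
>>> loss = BarrierLoss(objectives=[obj], constraints=[con], barrier='log10')  # barrier variant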
neuromancer.plot module
Various helper functions for plotting.
- class neuromancer.plot.VisualizerClosedLoop(u_key='U_pred_policy', y_key='Y_pred_dynamics', r_key='Rf', d_key=None, ymin_key=None, ymax_key=None, umin_key=None, umax_key=None, policy=None, ctrl_outputs=None, savedir='test_control')[source]
Bases:
Visualizer
- class neuromancer.plot.VisualizerDobleIntegrator(dataset, model, policy, dynamics, verbosity, savedir, nstep=40, x0=array([[1.5], [1.5]]), training_visuals=False, trace_movie=False)[source]
Bases:
Visualizer
Custom visualizer for the double integrator example.
- class neuromancer.plot.VisualizerOpen(model, verbosity, savedir, training_visuals=False, trace_movie=False, figname=None)[source]
Bases:
Visualizer
- class neuromancer.plot.VisualizerTrajectories(model, plot_keys, verbosity)[source]
Bases:
Visualizer
- class neuromancer.plot.VisualizerUncertaintyOpen(dataset, savedir, dynamics_name='dynamics')[source]
Bases:
Visualizer
- neuromancer.plot.cl_simulate(A, B, net, nstep=50, x0=array([[1.], [1.]]))[source]
- Parameters:
A – (np.array) State matrix of the linear system
B – (np.array) Input matrix of the linear system
net – Control policy network
nstep – (int) Number of closed-loop simulation steps
x0 – (np.array) Initial state
- Returns:
- neuromancer.plot.get_colors(k)[source]
Returns k colors evenly spaced across the color wheel.
- Parameters:
k – (int) Number of colors you want.
- Returns:
(np.array, shape=[k, 3])
- neuromancer.plot.plot_loss_DPC(model, policy, A, B, dataset, xmin=-5, xmax=5, save_path=None)[source]
Plot the loss function for a trained DPC model.
- neuromancer.plot.plot_loss_mpp(model, dataset, xmin=-2, xmax=2, save_path=None)[source]
Plots the loss function for a multiparametric problem with 2 parameters.
- neuromancer.plot.plot_matrices(matrices, labels, figname)[source]
Plots and saves figure of a grid of matrices. Useful for inspecting layers of weights of neural networks.
- Parameters:
matrices – (list of lists of 2-way np.arrays) Grid of matrices to plot
labels – (list of lists of str) Labels for plotted matrices
figname – (str) Figure name ending with file extension of filetype to save as.
>>> import neuromancer.plot as plot
>>> color_matrices = [[get_colors(k*j) for k in range(2, 4)] for j in range(8, 11)]
>>> labels = [[f'{k*j} X 3 matrix' for k in range(2, 4)] for j in range(8, 11)]
>>> plot_matrices(color_matrices, labels, 'matrix_grid.png')
- neuromancer.plot.plot_model_graph(model, data_keys, include_objectives=True, fname='model_graph.png')[source]
- neuromancer.plot.plot_policy_train(A, B, policy, policy_list, xmin=-5, xmax=5, save_path=None)[source]
- neuromancer.plot.plot_solution_mpp(model, xmin=-2, xmax=2, save_path=None)[source]
Plots the solution landscape for a problem with 2 parameters and 1 decision variable.
- neuromancer.plot.plot_traj(data, figname=None)[source]
- Parameters:
data – (dict {str: np.array}) Dictionary of labels and time series
figname – (str)
- neuromancer.plot.pltCL(Y, R=None, U=None, D=None, X=None, ctrl_outputs=None, Ymin=None, Ymax=None, Umin=None, Umax=None, figname=None)[source]
plot input output closed loop dataset
- neuromancer.plot.pltCorrelate(X, figname=None)[source]
plot correlation matrices of time series data
- neuromancer.plot.pltOL(Y, Ytrain=None, U=None, D=None, X=None, figname=None)[source]
plot trained open loop dataset
- neuromancer.plot.pltPhase(X, figname=None)[source]
- Parameters:
X – (np.array, shape=[numpoints, {2,3}])
figname – (str) Filename for plot with extension for file type.
plot phase space for 2D and 3D state spaces
https://matplotlib.org/3.2.1/gallery/images_contours_and_fields/plot_streamplot.html
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.streamplot.html
https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.quiver.html
http://kitchingroup.cheme.cmu.edu/blog/2013/02/21/Phase-portraits-of-a-system-of-ODEs/
http://systems-sciences.uni-graz.at/etextbook/sw2/phpl_python.html
>>> import numpy as np
>>> import neuromancer.plot as plot
>>> x = np.stack([np.linspace(-10, 10, 100)]*100)
>>> y = np.stack([np.linspace(-10, 10, 100)]*100).T
>>> z = x**2 + y**2
>>> xyz = np.stack([x.flatten(), y.flatten(), z.flatten()])
>>> plot.pltPhase(xyz, figname='phase.png')
- neuromancer.plot.pltRecurrence(X, figname=None)[source]
plot recurrence of time series data
neuromancer.problem module
- class neuromancer.problem.Problem(nodes: List[Callable[[Dict[str, Tensor]], Dict[str, Tensor]]], loss: Callable[[Dict[str, Tensor]], Dict[str, Tensor]], grad_inference=False, check_overwrite=False)[source]
Bases:
Module
This class is similar in spirit to a nn.Sequential module. However, by concatenating input and output dictionaries for each node module we can represent arbitrary directed acyclic computation graphs. In addition, the Problem module takes care of calculating loss functions via a given instantiated weighted multi-objective PenaltyLoss object, which calculates objective and constraint terms from the aggregated input and the set of outputs from the node modules.
- forward(data: Dict[str, Tensor]) Dict[str, Tensor] [source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
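Continuing the loss sketch from the neuromancer.loss section with a hypothetical single-node graph (the keys ‘p’ and ‘x’, the lambda map, and the ‘name’ entry mirroring what the dataset collate functions add are illustrative only):
>>> import torch
>>> from neuromancer.system import Node
>>> from neuromancer.problem import Problem
>>> node = Node(lambda p: 2.0 * p, ['p'], ['x'], name='map')   # maps data key 'p' to output key 'x'
>>> problem = Problem(nodes=[node], loss=loss)                 # loss built as in neuromancer.loss
>>> out = problem({'p': torch.rand(100, 1), 'name': 'train'})  # aggregated outputs plus loss terms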
neuromancer.system module
Components for open-loop (directed acyclic graph) and closed-loop (directed cyclic graph) systems.
Minimum viable product:
1. System class for open-loop rollout of an autonomous nn.Module class
2. System class for open-loop rollout of a non-autonomous nn.Module class
3. System class for closed-loop rollout of simple DPC with a neural policy and non-autonomous dynamics class (e.g. SSM, psl, …)
Notes on the simple implementation:
- Time delay can be handled inside nodes, simply or with more complexity
- Sporadically sampled data can be handled beforehand with interpolation
- Different time scales can be handled with nested systems
- Networked systems seem like a natural fit here
- class neuromancer.system.MovingHorizon(module, ndelay=1, history=None)[source]
Bases:
Module
The MovingHorizon class buffers single time step inputs for time-delay modeling from the past ndelay steps. This class is a wrapper which does data handling for modules which take 3-d input (batch, time, dim).
- forward(input)[source]
The forward pass appends the input dictionary to the history buffer and gives last ndelay steps to the module. If history is blank the first step will be repeated ndelay times to initialize the buffer.
- Parameters:
input – (dict: str: 2-d tensor (batch, dim)) Dictionary of single step tensor inputs
- Returns:
(dict: str: 3-d Tensor (ndelay, batch, dim)) Dictionary of tensor outputs
- class neuromancer.system.Node(callable, input_keys, output_keys, name=None)[source]
Bases:
Module
Simple class to handle cyclic computational graph connections. input_keys and output_keys define computational node connections through intermediate dictionaries.
- class neuromancer.system.System(nodes, name=None, nstep_key='X', init_func=None, nsteps=None)[source]
Bases:
Module
Simple implementation for arbitrary cyclic computation
- cat(data3d, data2d)[source]
Concatenates data2d contents to corresponding entries in data3d.
- Parameters:
data3d – (dict {str: Tensor}) Input to a node
data2d – (dict {str: Tensor}) Output of a node
- Returns:
(dict: {str: Tensor})
- forward(input_dict)[source]
- Parameters:
input_dict – (dict: {str: Tensor}) Tensor shapes in the dictionary are assumed to be (batch, time, dim). An init function should be written to ensure that any 2-d or 1-d tensors have 3 dims.
- Returns:
(dict: {str: Tensor}) data with outputs of nstep rollout of Node interactions
- init(data)[source]
- Parameters:
data – (dict: {str: Tensor}) Tensor shapes in the dictionary are assumed to be (batch, time, dim)
- Returns:
(dict: {str: Tensor})
Any nodes in the graph that are start nodes will need some data initialized; an example is initializing an x0 entry in the input_dict.
Provide analysis of the computational graph in the base class and label the source nodes; keys for source nodes have to be in the data.
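A minimal rollout sketch (the keys and the one-state linear dynamics are hypothetical):
>>> import torch
>>> from neuromancer.system import Node, System
>>> dyn = Node(lambda x, u: 0.9 * x + 0.1 * u, ['xn', 'u'], ['xn'], name='dynamics')
>>> sys = System([dyn], nsteps=10)
>>> out = sys({'xn': torch.ones(4, 1, 1), 'u': torch.zeros(4, 10, 1)})  # out['xn'] holds the rolled-out trajectory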
neuromancer.trainer module
- class neuromancer.trainer.Trainer(problem: ~neuromancer.problem.Problem, train_data: ~torch.utils.data.dataloader.DataLoader, dev_data: ~torch.utils.data.dataloader.DataLoader | None = None, test_data: ~torch.utils.data.dataloader.DataLoader | None = None, optimizer: ~torch.optim.optimizer.Optimizer | None = None, logger: ~neuromancer.loggers.BasicLogger | None = None, callback=<neuromancer.callbacks.Callback object>, lr_scheduler=False, epochs=1000, epoch_verbose=1, patience=5, warmup=0, train_metric='train_loss', dev_metric='dev_loss', test_metric='test_loss', eval_metric='dev_loss', eval_mode='min', clip=100.0, device='cpu')[source]
Bases:
object
Class encapsulating boilerplate PyTorch training code. Training procedure is somewhat extensible through methods in Callback objects associated with training and evaluation waypoints.
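Putting the pieces together (problem, train_loader, and dev_loader as constructed in the preceding sections; train() is assumed here to run the loop and return the best model weights according to eval_metric):
>>> import torch
>>> from neuromancer.trainer import Trainer
>>> optimizer = torch.optim.Adam(problem.parameters(), lr=0.001)
>>> trainer = Trainer(problem, train_loader, dev_loader, optimizer=optimizer, epochs=200, patience=20)
>>> best_model = trainer.train()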