neuromancer.slim.rnn module
Recurrent Neural Network implementation for use with structured linear maps.
- class neuromancer.slim.rnn.RNN(input_size, hidden_size=16, num_layers=1, cell_args={})[source]
Bases: Module
- forward(sequence, init_states=None)[source]
- Parameters:
sequence – (torch.Tensor, shape=[seq_len, batch, input_size]) Input sequence to RNN
init_states – (torch.Tensor, shape=[num_layers, batch, hidden_size]) \(h_0\), initial hidden states for stacked RNNCells
- Returns:
output: (seq_len, batch, hidden_size) Sequence of outputs
\(h_n\): (num_layers, batch, hidden_size) Final hidden states for stack of RNN cells.
>>> import neuromancer.slim as slim, torch
>>> rnn = slim.RNN(5, hidden_size=8, num_layers=3, cell_args={'hidden_map': slim.PerronFrobeniusLinear})
>>> x = torch.rand(20, 10, 5)
>>> output, h_n = rnn(x)
>>> output.shape, h_n.shape
(torch.Size([20, 10, 8]), torch.Size([3, 10, 8]))
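A short sketch of passing init_states explicitly, assuming the same constructor arguments as above; the initial state shape follows the parameter description, and zeros are an arbitrary choice for illustration:

>>> import neuromancer.slim as slim, torch
>>> rnn = slim.RNN(5, hidden_size=8, num_layers=3, cell_args={'hidden_map': slim.PerronFrobeniusLinear})
>>> x = torch.rand(20, 10, 5)
>>> h_0 = torch.zeros(3, 10, 8)  # [num_layers, batch, hidden_size]
>>> output, h_n = rnn(x, init_states=h_0)
>>> output.shape, h_n.shape
(torch.Size([20, 10, 8]), torch.Size([3, 10, 8]))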
- class neuromancer.slim.rnn.RNNCell(input_size, hidden_size, bias=False, nonlin=<built-in function gelu>, hidden_map=<class 'neuromancer.slim.linear.Linear'>, input_map=<class 'neuromancer.slim.linear.Linear'>, input_args={}, hidden_args={})[source]
Bases: Module
- forward(input, hidden)[source]
- Parameters:
input – (torch.Tensor, shape=[batch_size, input_size]) Input to cell
hidden – (torch.Tensor, shape=[batch_size, hidden_size]) Hidden state (typically previous output of cell)
- Returns:
(torch.Tensor, shape=[batch_size, hidden_size]) Cell output
>>> import neuromancer.slim as slim, torch
>>> cell = slim.RNNCell(5, 8, input_map=slim.Linear, hidden_map=slim.PerronFrobeniusLinear)
>>> x, h = torch.rand(20, 5), torch.rand(20, 8)
>>> output = cell(x, h)
>>> output.shape
torch.Size([20, 8])
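A sketch of manually unrolling a sequence through a single cell, feeding each output back in as the next hidden state; this mirrors what the RNN class does internally with stacked cells. The default cell arguments and zero initial state are assumptions for illustration:

>>> import neuromancer.slim as slim, torch
>>> cell = slim.RNNCell(5, 8)
>>> seq = torch.rand(20, 10, 5)  # [seq_len, batch, input_size]
>>> h = torch.zeros(10, 8)  # initial hidden state, assumed zeros here
>>> for x_t in seq:
...     h = cell(x_t, h)
>>> h.shape
torch.Size([10, 8])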