neuromancer.modules.activations module
Elementwise nonlinear tensor operations.
- class neuromancer.modules.activations.APLU(nsegments=2, alpha_reg_weight=0.001, beta_reg_weight=0.001, tune_alpha=True, tune_beta=True)[source]
Bases: Module
Adaptive Piecewise Linear Units: https://arxiv.org/pdf/1412.6830.pdf
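A minimal usage sketch (not taken from the documentation), assuming APLU follows the standard nn.Module call convention and applies the activation elementwise, as the module summary states:
import torch
from neuromancer.modules.activations import APLU

# Constructor arguments as listed in the signature above.
act = APLU(nsegments=2, tune_alpha=True, tune_beta=True)
x = torch.randn(8, 3)   # arbitrary-shaped input tensor
y = act(x)              # elementwise activation; output shape matches input
assert y.shape == x.shape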
- class neuromancer.modules.activations.BLU(tune_alpha=False, tune_beta=True)[source]
Bases: Module
Bendable Linear Units: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8913972
- class neuromancer.modules.activations.PELU(tune_alpha=True, tune_beta=True)[source]
Bases: Module
Parametric Exponential Linear Units: https://arxiv.org/pdf/1605.09332.pdf
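For reference, the cited paper defines PELU piecewise; the sketch below reproduces that published form for illustration only. The symbols a and b are the paper's parameters (both positive), not necessarily the class attributes.
import torch

def pelu_reference(x, a=1.0, b=1.0):
    # PELU from arXiv:1605.09332, with a, b > 0:
    #   f(x) = (a / b) * x             for x >= 0
    #   f(x) = a * (exp(x / b) - 1)    for x <  0
    return torch.where(x >= 0, (a / b) * x, a * (torch.exp(x / b) - 1.0))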
- class neuromancer.modules.activations.PReLU(tune_alpha=True, tune_beta=True)[source]
Bases: Module
Parametric ReLU: https://arxiv.org/pdf/1502.01852.pdf
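For reference, the cited paper defines Parametric ReLU as f(x) = max(0, x) + a · min(0, x) with the negative-part slope a learned; a short sketch of that published form (the symbol a is the paper's parameter, not necessarily the class attribute):
import torch

def prelu_reference(x, a=0.25):
    # Parametric ReLU from arXiv:1502.01852: identity for positive inputs,
    # learnable slope a on the negative part.
    return torch.clamp(x, min=0) + a * torch.clamp(x, max=0)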
- class neuromancer.modules.activations.RectifiedSoftExp(tune_alpha=True)[source]
Bases: Module
Mysterious unexplained implementation of Soft Exponential ported from author’s Keras code: https://github.com/thelukester92/2019-blu/blob/master/python/activations/softexp.py
- class neuromancer.modules.activations.SmoothedReLU(d=1.0, tune_d=True)[source]
Bases: Module
ReLU with a quadratic region in [0, d] (Rectified Huber Unit); used to make the Lyapunov function continuously differentiable: https://arxiv.org/pdf/2001.06116.pdf (a piecewise sketch follows the forward entry below).
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
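A sketch of the piecewise form implied by the SmoothedReLU description above and the cited paper: zero below 0, quadratic on [0, d], affine above d. Illustrative only, not the library's exact implementation.
import torch

def rectified_huber_reference(x, d=1.0):
    # Rectified Huber Unit (arXiv:2001.06116):
    #   0             for x <= 0
    #   x^2 / (2 d)   for 0 < x < d
    #   x - d / 2     for x >= d
    # Continuously differentiable: value and slope match at x = 0 and x = d.
    return torch.where(
        x <= 0,
        torch.zeros_like(x),
        torch.where(x < d, x ** 2 / (2 * d), x - d / 2),
    )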
- class neuromancer.modules.activations.SoftExponential(alpha=0.0, tune_alpha=True)[source]
Bases: Module
Soft exponential activation: https://arxiv.org/pdf/1602.01321.pdf
- neuromancer.modules.activations.soft_exp(alpha, x)[source]
Helper function for the SoftExponential learnable activation class. Also used in neuromancer.operators.InterpolateAddMultiply.
Parameters:
- alpha: (float) Parameter controlling the shape of the function.
- x: (torch.Tensor) Arbitrarily shaped input tensor.
Returns: (torch.Tensor) Result of the function applied elementwise to the tensor.
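The soft exponential family from the cited paper is defined piecewise in alpha; the sketch below gives that published form alongside a small call using the documented soft_exp signature (alpha as a float, x a tensor). It is a reference rendering, not necessarily the library's exact code.
import torch
from neuromancer.modules.activations import soft_exp

def soft_exp_reference(alpha, x):
    # Soft exponential from arXiv:1602.01321:
    #   -log(1 - alpha * (x + alpha)) / alpha    for alpha < 0
    #   x                                        for alpha == 0
    #   (exp(alpha * x) - 1) / alpha + alpha     for alpha > 0
    if alpha == 0.0:
        return x
    if alpha < 0.0:
        return -torch.log(1.0 - alpha * (x + alpha)) / alpha
    return (torch.exp(alpha * x) - 1.0) / alpha + alpha

# Calling the library helper with the documented parameter types.
y = soft_exp(0.5, torch.linspace(-1.0, 1.0, 5))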