neuromancer.modules.lopo module
- class neuromancer.modules.lopo.ADMMSolver(f_obj=None, F_ineq=None, F_eq=None, x_dim=0, n_ineq=0, n_eq=0, JF_fixed=False, Hf_fixed=False, num_steps=3, metric=None, state_slack_bound=1000.0, alpha=0.5)[source]
Bases:
Module
Implementation of an ADMM solution routine for problems of the form

    min f(x)
    subject to: F_ineq(x) <= 0
                F_eq(x) = 0

The problem is reformulated, for slack variables s, as

    min f(x)
    subject to: F(x,s) = 0
                s >= 0

with F(x,s) defined as F(x,s) = [ F_eq(x) ; F_ineq(x) + s ]
ADMM is an operator splitting approach, here applied to the splitting

    min g_1(x,s) + g_2(x,s)

with

    g_1(x,s) = f(x) + i_{ (x,s) : F(x,s) = 0 }
    g_2(x,s) = i_{ s : s >= 0 }

where i_{S} is the indicator function of the set S.
The solver uses a second-order approximation of the objective and a first-order approximation of the constraints.
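As a standalone illustration of this splitting (a hypothetical NumPy sketch, not the library's implementation), consider the toy problem min 0.5*x^2 subject to x >= 1; the helpers `prox_g1` and `prox_g2` below are stand-ins for the two prox steps of the splitting:

```python
import numpy as np

# Toy instance of the splitting above: f(x) = 0.5*x^2 with one inequality
# x >= 1, written as F_ineq(x) = 1 - x <= 0.  With slack s the equality
# constraint is F(x,s) = 1 - x + s = 0, and s >= 0 is kept by g_2.
gamma = 1.0

def prox_g1(v):
    # prox of g_1(x,s) = f(x) + i_{(x,s) : 1 - x + s = 0}:
    # argmin_{x,s} 0.5*x^2 + (1/(2*gamma))*||(x,s) - v||^2  s.t.  1 - x + s = 0,
    # solved through its KKT linear system in (x, s, lam).
    K = np.array([[1.0 + 1.0 / gamma, 0.0, -1.0],
                  [0.0, 1.0 / gamma, 1.0],
                  [-1.0, 1.0, 0.0]])
    rhs = np.array([v[0] / gamma, v[1] / gamma, -1.0])
    return np.linalg.solve(K, rhs)[:2]

def prox_g2(v):
    # prox of g_2(x,s) = i_{s : s >= 0}: project the slack, keep x.
    return np.array([v[0], max(v[1], 0.0)])

v = w = u = np.zeros(2)
for _ in range(200):
    v = prox_g1(w - u)   # g_1 step: objective plus equality constraint
    w = prox_g2(v + u)   # g_2 step: projection onto s >= 0
    u = u + v - w        # scaled dual update
# v approaches the solution x = 1, s = 0 of min 0.5*x^2 s.t. x >= 1.
```

The first prox is an equality-constrained quadratic program, which is exactly the structure the class's second-order/first-order approximation exploits at each step.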
- forward(x, parms)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class neuromancer.modules.lopo.DRSolver(f_obj=None, F_ineq=None, F_eq=None, x_dim=0, n_ineq=0, n_eq=0, JF_fixed=False, Hf_fixed=False, num_steps=3, metric=None, state_slack_bound=1000.0)[source]
Bases:
Module
Implementation of a Parametric Douglas-Rachford (DR) solution routine for problems of the form

    min f(x)
    subject to: F_ineq(x) <= 0
                F_eq(x) = 0

The problem is reformulated, for slack variables s, as

    min f(x)
    subject to: F(x,s) = 0
                s >= 0

with F(x,s) defined as F(x,s) = [ F_eq(x) ; F_ineq(x) + s ]
DR is an operator splitting approach, here applied to the splitting

    min g_1(x,s) + g_2(x,s)

with

    g_1(x,s) = f(x) + i_{ (x,s) : F(x,s) = 0 }
    g_2(x,s) = i_{ s : s >= 0 }

where i_{S} is the indicator function of the set S.
The solver uses a second-order approximation of the objective and a first-order approximation of the constraints.
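The Douglas-Rachford iteration on this splitting can be sketched on the same kind of toy problem (min 0.5*x^2 subject to x >= 1); this is a hypothetical NumPy illustration, not the library's code:

```python
import numpy as np

# Toy splitting: f(x) = 0.5*x^2 with inequality x >= 1, slack s, and
# equality constraint F(x,s) = 1 - x + s = 0.
gamma = 1.0

def prox_g1(v):
    # argmin_{x,s} 0.5*x^2 + (1/(2*gamma))*||(x,s) - v||^2  s.t.  1 - x + s = 0,
    # via the KKT linear system in (x, s, lam).
    K = np.array([[1.0 + 1.0 / gamma, 0.0, -1.0],
                  [0.0, 1.0 / gamma, 1.0],
                  [-1.0, 1.0, 0.0]])
    rhs = np.array([v[0] / gamma, v[1] / gamma, -1.0])
    return np.linalg.solve(K, rhs)[:2]

def prox_g2(v):
    # projection onto { (x,s) : s >= 0 }
    return np.array([v[0], max(v[1], 0.0)])

y = np.zeros(2)
for _ in range(200):
    x = prox_g1(y)             # resolvent of g_1
    z = prox_g2(2.0 * x - y)   # resolvent of g_2 at the reflected point
    y = y + z - x              # Douglas-Rachford update of the governing sequence
# x = prox_g1(y) converges to the solution x = 1 (slack s = 0).
```

Note the structural difference from ADMM: DR tracks a single governing sequence y, and the primal iterate is recovered as prox_g1(y).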
- forward(x, parms)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class neuromancer.modules.lopo.ParaMetricDiagonal(n_dim, parm_dim, upper_bound, lower_bound, scl_upper_bound=0.2, scl_lower_bound=0.05)[source]
Bases:
Module
- forward(x, parms)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class neuromancer.modules.lopo.ProxBoxConstraint(f_lower_bound, f_upper_bound)[source]
Bases:
Module
Computes the projection onto the box constraints l_b <= x <= u_b for constants l_b and u_b.
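The projection itself is elementwise clipping; a minimal NumPy sketch (the bounds here are example constants, whereas the module's bounds may be supplied as functions of the parameters):

```python
import numpy as np

def project_box(x, lb, ub):
    # Projection onto { x : lb <= x <= ub } is elementwise clipping.
    return np.minimum(np.maximum(x, lb), ub)

x = np.array([-2.0, 0.5, 3.0])
project_box(x, -1.0, 1.0)  # gives [-1., 0.5, 1.]
```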
- forward(x, parms)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class neuromancer.modules.lopo.ProxObjectivePlusEqualityConstraint(f, F, metric=None, JF_fixed=False, Hf_fixed=False, gamma=2.0)[source]
Bases:
Module
Computes an approximation of the prox operator

    prox_g(x) = argmin_y g(y) + (1/(2*gamma)) || x - y ||^2_M

with respect to the metric M, defined by a positive definite matrix, where g is a function of the form

    g(x) = f(x) + i_{ x : F(x) = 0 }

Here F: R^n -> R^m with m <= n is assumed to be differentiable everywhere, i_{} is the indicator function, and f: R^n -> R is scalar valued and assumed to be twice differentiable everywhere.
For a given x the prox operator is computed for the approximation

    f(x) + grad_f(x)^T (y - x) + (1/2) (y - x)^T H_f(x) (y - x) + i_{ y : F(x) + J_F(x) (y - x) = 0 }

with grad_f(x) the gradient of f at x, H_f(x) the Hessian of f at x, and J_F(x) the Jacobian of F at x.
Note that if f is quadratic and F is linear this is exact.
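This linearized prox reduces to a single KKT linear system; a hypothetical sketch (not the library's code), taking M = I for simplicity:

```python
import numpy as np

def prox_quad_eq(x, grad_f, hess_f, F_val, J_F, gamma=2.0):
    # Hypothetical sketch: minimize over y the second-order model of f plus
    # (1/(2*gamma))*||y - x||^2 (metric M = I here), subject to the
    # linearized constraint F(x) + J_F (y - x) = 0.
    # With d = y - x, the KKT conditions are
    #   (H_f + I/gamma) d + J_F^T lam = -grad_f
    #   J_F d = -F(x)
    n, m = x.shape[0], F_val.shape[0]
    K = np.block([[hess_f + np.eye(n) / gamma, J_F.T],
                  [J_F, np.zeros((m, m))]])
    rhs = np.concatenate([-grad_f, -F_val])
    return x + np.linalg.solve(K, rhs)[:n]

# f(y) = 0.5*||y||^2 (quadratic) with linear constraint y_1 + y_2 = 1, so
# the approximation is exact.  At x = (1, 1): grad_f = x, H_f = I,
# F(x) = x_1 + x_2 - 1 = 1, J_F = [1, 1].
x = np.array([1.0, 1.0])
y = prox_quad_eq(x, grad_f=x, hess_f=np.eye(2),
                 F_val=np.array([1.0]), J_F=np.array([[1.0, 1.0]]))
# By symmetry the prox lands at y = (0.5, 0.5), which satisfies y_1 + y_2 = 1.
```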
- forward(x, parms)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.