chop.constraints

Constraints.

This module contains classes representing constraints. The methods on each constraint object operate batch-wise. Reshaping will therefore be in order if the constraints are used on the parameters of a model.
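For example, projecting a single weight tensor with a batch-wise method requires adding a leading batch dimension and flattening the rest. A minimal sketch, assuming L1Ball exposes the same prox(x, step_size=None) interface documented below for NuclearNormBall::

    import torch
    import chop

    constraint = chop.constraints.L1Ball(alpha=1.0)

    # Constraint methods treat the first dimension as the batch dimension,
    # so flatten the parameter and prepend a batch axis before projecting.
    weight = torch.randn(10, 20)
    batched = weight.flatten().unsqueeze(0)        # shape (1, 200)
    projected = constraint.prox(batched)
    weight_proj = projected.reshape(weight.shape)  # back to (10, 20)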

The API is similar to that of the COPT project, https://github.com/openopt/copt. Part of this code is adapted from https://github.com/ZIB-IOL.

Functions

create_lp_constraints(model[, p, value, mode])

Create an LpBall constraint for each layer of model; value is interpreted according to mode (either as the ball radius, or as a factor to multiply the average initialization norm with).

euclidean_proj_l1ball(v[, s])

Compute the Euclidean projection onto the L1-ball of radius s.

euclidean_proj_simplex(v[, s])

Compute the Euclidean projection onto the positive simplex.

get_avg_init_norm(layer[, param_type, p, …])

Compute the average norm of the default initialization of layer.

make_LpBall(alpha[, p])

Classes

GroupL1Ball(alpha, groups)

L1Ball(alpha)

L2Ball(alpha)

LinfBall(alpha)

LpBall(alpha)

NuclearNormBall(alpha)

Nuclear norm constraint, i.e. the sum of the singular values.

Simplex(alpha)

class chop.constraints.NuclearNormBall(alpha)[source]

Nuclear norm constraint, i.e. the sum of the singular values, also known as the Schatten-1 norm. The nuclear norm is computed over the last two dimensions of the input.

lmo(grad, iterate)[source]

Computes the LMO for the Nuclear Norm Ball on the last two dimensions. Returns s - iterate, where

.. math::

s = argmin_{u in C} u^T grad

and C is the Nuclear Norm ball of radius alpha.

Parameters
  • grad – torch.Tensor of shape (*, m, n)

  • iterate – torch.Tensor of shape (*, m, n)

Returns

update_direction – the update direction s - iterate

Return type

torch.Tensor of shape (*, m, n)

prox(x, step_size=None)[source]

Projection operator onto the Nuclear Norm constraint set.
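A minimal usage sketch for both methods, using only the signatures documented here; the batch of matrices and the step size are illustrative::

    import torch
    import chop

    ball = chop.constraints.NuclearNormBall(alpha=1.0)

    # A batch of 8 matrices of shape (5, 4); the nuclear norm is taken
    # over the last two dimensions.
    iterate = torch.zeros(8, 5, 4)   # the zero matrix lies in the ball
    grad = torch.randn(8, 5, 4)

    # Frank-Wolfe style step: lmo returns s - iterate, and the convex
    # combination iterate + gamma * (s - iterate) stays in the ball.
    direction = ball.lmo(grad, iterate)
    iterate = iterate + 0.5 * direction

    # Alternatively, project an arbitrary point onto the constraint set.
    projected = ball.prox(torch.randn(8, 5, 4))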

chop.constraints.create_lp_constraints(model, p=2, value=300, mode='initialization')[source]

Create an LpBall constraint for each layer of model. The interpretation of value depends on mode: it is either used directly as the ball radius, or, with the default mode='initialization', as a factor to multiply the average initialization norm with.
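A minimal sketch of a call; the assumption here is that the function returns one LpBall constraint per parameter of model::

    import torch
    import chop

    model = torch.nn.Sequential(
        torch.nn.Linear(784, 128),
        torch.nn.ReLU(),
        torch.nn.Linear(128, 10),
    )

    # With the default mode='initialization', value scales the average
    # norm of each parameter's default initialization to set the radius.
    constraints = chop.constraints.create_lp_constraints(model, p=2, value=300,
                                                         mode='initialization')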

chop.constraints.euclidean_proj_l1ball(v, s=1.0)[source]

Compute the Euclidean projection onto the L1-ball. Solves the optimization problem (using the algorithm from [1]):

.. math::

min_w 0.5 * || w - v ||_2^2 , s.t. || w ||_1 <= s

Parameters
  • v ((n,) numpy array) – n-dimensional vector to project

  • s (float, optional, default: 1) – radius of the L1-ball

Returns

w – Euclidean projection of v on the L1-ball of radius s

Return type

(n,) numpy array

Notes

Solves the problem by a reduction to the positive simplex case
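A small usage example; the assertion just checks the defining property of the projection::

    import numpy as np
    from chop.constraints import euclidean_proj_l1ball

    v = np.array([0.5, -1.0, 2.0])   # ||v||_1 = 3.5, outside the unit ball
    w = euclidean_proj_l1ball(v, s=1.0)

    # The result lies in the L1-ball of radius s.
    assert np.abs(w).sum() <= 1.0 + 1e-12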

chop.constraints.euclidean_proj_simplex(v, s=1.0)[source]

Compute the Euclidean projection onto the positive simplex. Solves the optimization problem (using the algorithm from [1]):

.. math::

min_w 0.5 * || w - v ||_2^2 , s.t. sum_i w_i = s, w_i >= 0

Parameters
  • v ((n,) numpy array) – n-dimensional vector to project

  • s (float, optional, default: 1) – radius of the simplex

Returns

w – Euclidean projection of v on the simplex

Return type

(n,) numpy array

Notes

The complexity of this algorithm is O(n log n), as it involves sorting v. Better alternatives exist for high-dimensional sparse vectors (cf. [1]). However, this implementation still easily scales to millions of dimensions.
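A small usage example; the assertions just check the simplex constraints::

    import numpy as np
    from chop.constraints import euclidean_proj_simplex

    v = np.array([0.1, 0.7, 0.5])    # entries sum to 1.3, off the simplex
    w = euclidean_proj_simplex(v, s=1.0)

    # The result is nonnegative and sums to s.
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)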

References

[1] John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient Projections onto the ℓ1-Ball for Learning in High Dimensions. International Conference on Machine Learning (ICML 2008). http://www.cs.berkeley.edu/~jduchi/projects/DuchiSiShCh08.pdf

chop.constraints.get_avg_init_norm(layer, param_type=None, p=2, repetitions=100)[source]

Computes the average norm of the default initialization of layer.
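A minimal sketch of a call; the value 'weight' for param_type is an assumption, since the accepted values are not documented here::

    import torch
    import chop

    layer = torch.nn.Linear(128, 64)

    # Assumed behavior: sample the default initialization `repetitions`
    # times and average the Lp norm of the chosen parameter.
    avg_norm = chop.constraints.get_avg_init_norm(layer, param_type='weight',
                                                  p=2, repetitions=100)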