API - Optimizers

TensorLayer provides rich layer implementations tailored for various benchmarks and domain-specific problems. In addition, we also support transparent access to native TensorFlow parameters. For example, we provide not only layers for local response normalization, but also layers that allow users to apply tf.nn.lrn to network.outputs. More functions can be found in the TensorFlow API.

We provide new TensorFlow-compatible optimizer APIs to save your development time.

Optimizers Overview

AMSGrad([learning_rate, beta1, beta2, ...]) Implementation of the AMSGrad optimization algorithm.

AMSGrad Optimizer

class tensorlayer.optimizers.AMSGrad(learning_rate=0.01, beta1=0.9, beta2=0.99, epsilon=1e-08, use_locking=False, name='AMSGrad')[source]

Implementation of the AMSGrad optimization algorithm.

See: On the Convergence of Adam and Beyond - [Reddi et al., 2018].

Parameters:
  • learning_rate (float) -- A Tensor or a floating point value. The learning rate.
  • beta1 (float) -- A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
  • beta2 (float) -- A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates.
  • epsilon (float) -- A small constant for numerical stability. This epsilon is "epsilon hat" in the Kingma and Ba paper (in the formula just before Section 2.1), not the epsilon in Algorithm 1 of the paper.
  • use_locking (bool) -- If True use locks for update operations.
  • name (str) -- Optional name for the operations created when applying gradients. Defaults to "AMSGrad".
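The per-parameter update that AMSGrad performs can be sketched in plain Python. This is a minimal scalar version for illustration only, omitting the bias-correction terms used by the full implementation; `amsgrad_step` and its state tuple are hypothetical names, not part of the TensorLayer API:

```python
import math

def amsgrad_step(theta, grad, state, learning_rate=0.01, beta1=0.9,
                 beta2=0.99, epsilon=1e-8):
    """One AMSGrad update for a single scalar parameter.

    `state` holds (m, v, v_hat): the 1st-moment estimate, the 2nd-moment
    estimate, and the running maximum of the 2nd-moment estimate.
    """
    m, v, v_hat = state
    m = beta1 * m + (1.0 - beta1) * grad          # exponential decay rate beta1
    v = beta2 * v + (1.0 - beta2) * grad * grad   # exponential decay rate beta2
    v_hat = max(v_hat, v)                         # the key difference from Adam
    theta = theta - learning_rate * m / (math.sqrt(v_hat) + epsilon)
    return theta, (m, v, v_hat)

# Minimize f(x) = x**2 (gradient 2*x), starting from x = 3.0.
x, state = 3.0, (0.0, 0.0, 0.0)
for _ in range(500):
    x, state = amsgrad_step(x, 2.0 * x, state, learning_rate=0.1)
```

Unlike Adam, which divides by the current second-moment estimate, AMSGrad divides by the maximum of all second-moment estimates seen so far, which is what guarantees a non-increasing effective step size in [Reddi et al., 2018].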