API - Activation Functions

To keep TensorLayer as simple as possible, we keep the number of activation functions to a minimum, and we encourage users to use TensorFlow's official functions directly, such as tf.nn.relu, tf.nn.relu6, tf.nn.elu, tf.nn.softplus, tf.nn.softsign, and so on. More official TensorFlow activation functions can be found in the TensorFlow documentation.
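
For instance, a TensorFlow activation can be passed directly to a layer's act argument. A minimal sketch in the style of the Dense examples further down this page (the layer name is illustrative):

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=tf.nn.relu, name='dense_relu')(net)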

Custom Activation Functions

Creating a custom activation function in TensorLayer is very simple.

The following example implements an activation that multiplies its input by 2. For more complex activation functions, you will need the TensorFlow API; see the sketch after the snippet.

def double_activation(x):
    return x * 2
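
For a more involved activation, the same pattern applies with TensorFlow ops inside the function. A hedged sketch (the scaling constants are illustrative, not part of TensorLayer):

import tensorflow as tf

def scaled_tanh(x):
    # Custom activation built from TensorFlow ops:
    # f(x) = 1.7159 * tanh(2x / 3), a classic scaled hyperbolic tangent.
    return 1.7159 * tf.nn.tanh(2.0 * x / 3.0)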

A file containing various activation functions.

leaky_relu(x[, alpha, name])

leaky_relu can be used through its shortcut: tl.act.lrelu().

leaky_relu6(x[, alpha, name])

leaky_relu6() can be used through its shortcut: tl.act.lrelu6().

leaky_twice_relu6(x[, alpha_low, ...])

leaky_twice_relu6() can be used through its shortcut: tl.act.ltrelu6().

ramp(x[, v_min, v_max, name])

Ramp activation function.

swish(x[, name])

Swish function.

sign(x)

Sign function.

hard_tanh(x[, name])

Hard tanh activation function.

pixel_wise_softmax(x[, name])

Return the softmax outputs of images; every pixel has multiple labels, and the predictions for each pixel sum to 1.

Ramp

tensorlayer.activation.ramp(x, v_min=0, v_max=1, name=None)[source]

Ramp activation function.

Reference: tf.clip_by_value (https://www.tensorflow.org/api_docs/python/tf/clip_by_value)

Parameters
  • x (Tensor) -- input.

  • v_min (float) -- cap input to v_min as a lower bound.

  • v_max (float) -- cap input to v_max as an upper bound.

  • name (str) -- The function name (optional).

Returns

A Tensor with the same type as x.

Return type

Tensor
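
A minimal usage sketch, in the style of the other examples on this page (the input values are illustrative):

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> x = tf.constant([-2.0, 0.5, 3.0])
>>> y = tl.act.ramp(x, v_min=0.0, v_max=1.0)  # element-wise clip to [0, 1] -> [0.0, 0.5, 1.0]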

Leaky ReLU

tensorlayer.activation.leaky_relu(x, alpha=0.2, name='leaky_relu')[source]

leaky_relu can be used through its shortcut: tl.act.lrelu().

This function is a modified version of ReLU, introducing a nonzero gradient for negative input. Introduced by the paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

The function returns the following results:
  • When x < 0: f(x) = alpha * x.

  • When x >= 0: f(x) = x.

Parameters
  • x (Tensor) -- Support input type float, double, int32, int64, uint8, int16, or int8.

  • alpha (float) -- Slope.

  • name (str) -- The function name (optional).

Examples

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=lambda x : tl.act.lrelu(x, 0.2), name='dense')(net)

Returns

A Tensor with the same type as x.

Return type

Tensor


Leaky ReLU6

tensorlayer.activation.leaky_relu6(x, alpha=0.2, name='leaky_relu6')[source]

leaky_relu6() can be used through its shortcut: tl.act.lrelu6().

This activation function is a modified version of leaky_relu(), introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also follows the behaviour of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

The function returns the following results:
  • When x < 0: f(x) = alpha * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6.

Parameters
  • x (Tensor) -- Support input type float, double, int32, int64, uint8, int16, or int8.

  • alpha (float) -- Slope.

  • name (str) -- The function name (optional).

Examples

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=lambda x : tl.act.leaky_relu6(x, 0.2), name='dense')(net)

Returns

A Tensor with the same type as x.

Return type

Tensor


Twice Leaky ReLU6

tensorlayer.activation.leaky_twice_relu6(x, alpha_low=0.2, alpha_high=0.2, name='leaky_relu6')[source]

leaky_twice_relu6() can be used through its shortcut: tl.act.ltrelu6().

This activation function is a modified version of leaky_relu(), introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also follows the behaviour of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This function pushes the logic further by adding leaky behaviour both below zero and above six.

The function returns the following results:
  • When x < 0: f(x) = alpha_low * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6 + (alpha_high * (x-6)).

Parameters
  • x (Tensor) -- Support input type float, double, int32, int64, uint8, int16, or int8.

  • alpha_low (float) -- Slope for x < 0: f(x) = alpha_low * x.

  • alpha_high (float) -- Slope for x > 6: f(x) = 6 + (alpha_high * (x-6)).

  • name (str) -- The function name (optional).

Examples

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=lambda x : tl.act.leaky_twice_relu6(x, 0.2, 0.2), name='dense')(net)

Returns

A Tensor with the same type as x.

Return type

Tensor


Swish

tensorlayer.activation.swish(x, name='swish')[source]

Swish activation function: f(x) = x * sigmoid(x).

Parameters
  • x (Tensor) -- input.

  • name (str) -- function name (optional).

Returns

A Tensor with the same type as x.

Return type

Tensor
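
A minimal usage sketch, in the style of the Dense examples above (the layer name is illustrative):

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=tl.act.swish, name='dense_swish')(net)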

Sign

tensorlayer.activation.sign(x)[source]

Sign function.

Clip and binarize the tensor, using the straight-through estimator (STE) for the gradient; usually used for quantizing values in Binarized Neural Networks: https://arxiv.org/abs/1602.02830.

Parameters

x (Tensor) -- input.

Returns

A Tensor with the same type as x.

Return type

Tensor

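A minimal usage sketch (input values are illustrative); the forward pass binarizes to -1 or 1, while the straight-through estimator lets gradients pass through unchanged:

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> x = tf.constant([-0.7, 0.3, 2.0])
>>> y = tl.act.sign(x)  # binarized forward output: [-1.0, 1.0, 1.0]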

Hard Tanh

tensorlayer.activation.hard_tanh(x, name='htanh')[source]

Hard tanh activation function.

This is a ramp function with a lower bound of -1 and an upper bound of 1; its shortcut is htanh.

Parameters
  • x (Tensor) -- input.

  • name (str) -- The function name (optional).

Returns

A Tensor with the same type as x.

Return type

Tensor
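
A minimal usage sketch (input values are illustrative):

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> x = tf.constant([-3.0, 0.2, 1.5])
>>> y = tl.act.hard_tanh(x)  # clips to [-1, 1] -> [-1.0, 0.2, 1.0]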

Pixel-wise softmax

tensorlayer.activation.pixel_wise_softmax(x, name='pixel_wise_softmax')[source]

Return the softmax outputs of images; every pixel has multiple labels, and the predictions for each pixel sum to 1.

Warning

THIS FUNCTION IS DEPRECATED: It will be removed after 2018-06-30. Instructions for updating: This API will be deprecated soon, as tf.nn.softmax can do the same thing.

Usually used for image segmentation.

Parameters
  • x (Tensor) --

    input.
    • For a 2D image, a 4D tensor (batch_size, height, width, channel), where channel >= 2.

    • For a 3D image, a 5D tensor (batch_size, depth, height, width, channel), where channel >= 2.

  • name (str) -- function name (optional)

Returns

A Tensor with the same type as x.

Return type

Tensor

Examples

>>> outputs = pixel_wise_softmax(network.outputs)
>>> dice_loss = 1 - dice_coe(outputs, y_, epsilon=1e-5)


Parametric Activation Functions

Please see the neural network layers.