API - Activation Functions¶

To keep TensorLayer simple, we minimize the number of activation functions and encourage users to use TensorFlow's official functions directly, such as tf.nn.relu, tf.nn.relu6, tf.nn.elu, tf.nn.softplus, tf.nn.softsign and so on. For more official TensorFlow activation functions, see here.
Customizing Activation Functions¶

Creating a custom activation function in TensorLayer is very simple. The following example multiplies its input by 2; for more complex activation functions, you will need the TensorFlow API.

def double_activation(x):
    # Scale the input by a factor of 2.
    return x * 2
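Such a function can then be passed to a layer's act argument just like a built-in activation. A brief usage sketch, reusing the layer shapes from the examples further down this page (the layer sizes and names are illustrative only):

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=double_activation, name='dense')(net)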
A file containing various activation functions.
| Function | Description |
| leaky_relu | leaky_relu can be used through its shortcut: tl.act.lrelu(). |
| leaky_relu6 | leaky_relu6() can be used through its shortcut: tl.act.lrelu6(). |
| leaky_twice_relu6 | leaky_twice_relu6() can be used through its shortcut: tl.act.ltrelu6(). |
| ramp | Ramp activation function. |
| swish | Swish function. |
| sign | Sign function. |
| hard_tanh | Hard tanh activation function. |
| pixel_wise_softmax | Return the softmax outputs of images; every pixel has multiple labels, and the values of each pixel sum to 1. |
Ramp¶
tensorlayer.activation.ramp(x, v_min=0, v_max=1, name=None)[source]¶

Ramp activation function.

Reference: tf.clip_by_value (https://www.tensorflow.org/api_docs/python/tf/clip_by_value)
Parameters
- x (Tensor) -- input.
- v_min (float) -- cap input to v_min as a lower bound.
- v_max (float) -- cap input to v_max as an upper bound.
- name (str) -- The function name (optional).

Returns
A Tensor in the same type as x.

Return type
Tensor
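A short illustration of the clipping behaviour documented above (values are illustrative only, and it assumes the tl.act alias used elsewhere on this page also exposes ramp):

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> x = tf.constant([-1.0, 0.3, 2.5])
>>> y = tl.act.ramp(x, v_min=0, v_max=1)  # clips x into [0, 1] -> [0.0, 0.3, 1.0]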
Leaky ReLU¶
tensorlayer.activation.leaky_relu(x, alpha=0.2, name='leaky_relu')[source]¶

leaky_relu can be used through its shortcut: tl.act.lrelu().

This function is a modified version of ReLU, introducing a nonzero gradient for negative input. Introduced by the paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]
The function returns the following results (see the sketch after this list):

- When x < 0: f(x) = alpha * x.
- When x >= 0: f(x) = x.
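A minimal sketch of this piecewise rule written directly with TensorFlow ops (an illustration of the formula above, not necessarily the library's implementation; it assumes a floating-point input):

import tensorflow as tf

def leaky_relu_sketch(x, alpha=0.2):
    # alpha * x for negative inputs, x unchanged otherwise
    return tf.where(x < 0, alpha * x, x)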
Parameters
- x (Tensor) -- Support input type float, double, int32, int64, uint8, int16, or int8.
- alpha (float) -- Slope.
- name (str) -- The function name (optional).
Examples
>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=lambda x: tl.act.lrelu(x, 0.2), name='dense')(net)
Returns
A Tensor in the same type as x.

Return type
Tensor
References
- Rectifier Nonlinearities Improve Neural Network Acoustic Models, Maas et al. (2013), http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf
Leaky ReLU6¶
tensorlayer.activation.leaky_relu6(x, alpha=0.2, name='leaky_relu6')[source]¶

leaky_relu6() can be used through its shortcut: tl.act.lrelu6().

This activation function is a modified version of leaky_relu() introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also follows the behaviour of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

The function returns the following results (see the sketch after this list):
- When x < 0: f(x) = alpha * x.
- When x in [0, 6]: f(x) = x.
- When x > 6: f(x) = 6.
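A minimal sketch of this rule with plain TensorFlow ops (illustrative only, not necessarily the library's implementation; it assumes a floating-point input):

import tensorflow as tf

def leaky_relu6_sketch(x, alpha=0.2):
    # alpha * x below zero, identity on [0, 6], capped at 6 above
    return tf.minimum(tf.where(x < 0, alpha * x, x), 6.0)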
Parameters
- x (Tensor) -- Support input type float, double, int32, int64, uint8, int16, or int8.
- alpha (float) -- Slope.
- name (str) -- The function name (optional).
Examples
>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=lambda x: tl.act.leaky_relu6(x, 0.2), name='dense')(net)
Returns
A Tensor in the same type as x.

Return type
Tensor
References
- Rectifier Nonlinearities Improve Neural Network Acoustic Models, Maas et al. (2013), http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf
- Convolutional Deep Belief Networks on CIFAR-10, Krizhevsky (2010)
Twice Leaky ReLU6¶
tensorlayer.activation.leaky_twice_relu6(x, alpha_low=0.2, alpha_high=0.2, name='leaky_relu6')[source]¶

leaky_twice_relu6() can be used through its shortcut: tl.act.ltrelu6().

This activation function is a modified version of leaky_relu() introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also follows the behaviour of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This function pushes the logic further by adding leaky behaviour both below zero and above six.

The function returns the following results (see the sketch after this list):
- When x < 0: f(x) = alpha_low * x.
- When x in [0, 6]: f(x) = x.
- When x > 6: f(x) = 6 + (alpha_high * (x - 6)).
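A minimal sketch of this twice-leaky rule with plain TensorFlow ops (illustrative only, not necessarily the library's implementation; it assumes a floating-point input):

import tensorflow as tf

def leaky_twice_relu6_sketch(x, alpha_low=0.2, alpha_high=0.2):
    # alpha_low * x below zero, identity on [0, 6], 6 + alpha_high * (x - 6) above six
    below = tf.where(x < 0, alpha_low * x, x)
    return tf.where(x > 6, 6.0 + alpha_high * (x - 6.0), below)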
Parameters
- x (Tensor) -- Support input type float, double, int32, int64, uint8, int16, or int8.
- alpha_low (float) -- Slope for x < 0: f(x) = alpha_low * x.
- alpha_high (float) -- Slope for x > 6: f(x) = 6 + (alpha_high * (x - 6)).
- name (str) -- The function name (optional).
Examples
>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=lambda x: tl.act.leaky_twice_relu6(x, 0.2, 0.2), name='dense')(net)
Returns
A Tensor in the same type as x.

Return type
Tensor
References
- Rectifier Nonlinearities Improve Neural Network Acoustic Models, Maas et al. (2013), http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf
- Convolutional Deep Belief Networks on CIFAR-10, Krizhevsky (2010)
Swish¶
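The summary table above lists this entry only as "Swish function." As a hedged sketch, Swish is commonly defined as x * sigmoid(x), which can be written with plain TensorFlow ops (an assumption about this entry, not a copy of the library's code):

import tensorflow as tf

def swish_sketch(x):
    # Swish: x * sigmoid(x)
    return x * tf.nn.sigmoid(x)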
Sign¶
tensorlayer.activation.sign(x)[source]¶

Sign function.

Clip and binarize the tensor using the straight-through estimator (STE) for the gradient; usually used for quantizing values in Binarized Neural Networks: https://arxiv.org/abs/1602.02830.
Parameters
- x (Tensor) -- input.

Returns
A Tensor in the same type as x.

Return type
Tensor
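A hedged sketch of how the straight-through estimator described above can be expressed with tf.custom_gradient (an illustration of the idea, not necessarily the library's implementation):

import tensorflow as tf

@tf.custom_gradient
def sign_ste_sketch(x):
    y = tf.sign(x)  # forward pass: binarize the input
    def grad(dy):
        # straight-through estimator: pass the gradient where |x| <= 1, zero elsewhere
        return dy * tf.cast(tf.abs(x) <= 1, dy.dtype)
    return y, grad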
References
- Rectifier Nonlinearities Improve Neural Network Acoustic Models, Maas et al. (2013), http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf
- BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, Courbariaux et al. (2016), https://arxiv.org/abs/1602.02830
Hard Tanh¶
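The summary table above lists this entry only as "Hard tanh activation function." As a hedged sketch, hard tanh is commonly the input clipped into [-1, 1] (an assumption about this entry, not a copy of the library's code):

import tensorflow as tf

def hard_tanh_sketch(x):
    # hard tanh: clip the input into [-1, 1]
    return tf.clip_by_value(x, -1.0, 1.0)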
Pixel-wise softmax¶
tensorlayer.activation.pixel_wise_softmax(x, name='pixel_wise_softmax')[source]¶

Return the softmax outputs of images; every pixel has multiple labels, and the values of each pixel sum to 1.
Warning
THIS FUNCTION IS DEPRECATED: It will be removed after 2018-06-30. Instructions for updating: This API will be deprecated soon as tf.nn.softmax can do the same thing.

Usually used for image segmentation.
Parameters
- x (Tensor) -- input.
  - For a 2D image, a 4D tensor (batch_size, height, width, channel), where channel >= 2.
  - For a 3D image, a 5D tensor (batch_size, depth, height, width, channel), where channel >= 2.
- name (str) -- function name (optional)
Returns
A Tensor in the same type as x.

Return type
Tensor
Examples
>>> outputs = pixel_wise_softmax(network.outputs)
>>> dice_loss = 1 - dice_coe(outputs, y_, epsilon=1e-5)
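Since the warning above points to tf.nn.softmax as the replacement, the equivalent call applies the softmax over the last (channel) axis; a hedged sketch, reusing network.outputs from the example above:

>>> import tensorflow as tf
>>> outputs = tf.nn.softmax(network.outputs, axis=-1)  # softmax over the channel values of each pixel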
References
Activation Functions with Parameters¶

Please see the neural network layers.