API - Neural Network Layers

Layer list

Layer([name, act])

The basic Layer class represents a single layer of a neural network.

Input(shape[, dtype, name])

The Input class is the starting layer of a neural network.

OneHot([depth, on_value, off_value, axis, ...])

The OneHot class is the starting layer of a neural network, see tf.one_hot.

Word2vecEmbedding(vocabulary_size, ...[, ...])

The Word2vecEmbedding class is a fully connected layer.

Embedding(vocabulary_size, embedding_size[, ...])

The Embedding class is a look-up table for word embedding.

AverageEmbedding(vocabulary_size, embedding_size)

The AverageEmbedding averages over embeddings of inputs.

Dense(n_units[, act, W_init, b_init, ...])

The Dense class is a fully connected layer.

Dropout(keep[, seed, name])

The Dropout class is a noise layer which randomly sets some activations to zero according to a keeping probability.

GaussianNoise([mean, stddev, is_train, ...])

The GaussianNoise class is a noise layer that adds Gaussian-distributed noise to the activations.

DropconnectDense([keep, n_units, act, ...])

The DropconnectDense class is Dense with DropConnect behaviour which randomly removes connections between this layer and the previous layer according to a keeping probability.

UpSampling2d(scale[, method, antialias, ...])

The UpSampling2d class is an up-sampling 2D layer.

DownSampling2d(scale[, method, antialias, ...])

The DownSampling2d class is a down-sampling 2D layer.

Conv1d([n_filter, filter_size, stride, act, ...])

Simplified version of Conv1dLayer.

Conv2d([n_filter, filter_size, strides, ...])

Simplified version of Conv2dLayer.

Conv3d([n_filter, filter_size, strides, ...])

Simplified version of Conv3dLayer.

DeConv2d([n_filter, filter_size, strides, ...])

Simplified version of DeConv2dLayer, see tf.nn.conv2d_transpose.

DeConv3d([n_filter, filter_size, strides, ...])

Simplified version of DeConv3dLayer, see tf.nn.conv3d_transpose.

DepthwiseConv2d([filter_size, strides, act, ...])

Separable/Depthwise Convolutional 2D layer, see tf.nn.depthwise_conv2d.

SeparableConv1d([n_filter, filter_size, ...])

The SeparableConv1d class is a 1D depthwise separable convolutional layer.

SeparableConv2d([n_filter, filter_size, ...])

The SeparableConv2d class is a 2D depthwise separable convolutional layer.

DeformableConv2d([offset_layer, n_filter, ...])

The DeformableConv2d class is a 2D Deformable Convolutional Network layer.

GroupConv2d([n_filter, filter_size, ...])

The GroupConv2d class is a 2D grouped convolution layer, see here.

PadLayer([padding, mode, name])

The PadLayer class is a padding layer for any mode and dimension.

PoolLayer([filter_size, strides, padding, ...])

The PoolLayer class is a Pooling layer.

ZeroPad1d(padding[, name])

The ZeroPad1d class is a 1D padding layer for signal [batch, length, channel].

ZeroPad2d(padding[, name])

The ZeroPad2d class is a 2D padding layer for image [batch, height, width, channel].

ZeroPad3d(padding[, name])

The ZeroPad3d class is a 3D padding layer for volume [batch, depth, height, width, channel].

MaxPool1d([filter_size, strides, padding, ...])

Max pooling for 1D signal.

MeanPool1d([filter_size, strides, padding, ...])

Mean pooling for 1D signal.

MaxPool2d([filter_size, strides, padding, ...])

Max pooling for 2D image.

MeanPool2d([filter_size, strides, padding, ...])

Mean pooling for 2D image [batch, height, width, channel].

MaxPool3d([filter_size, strides, padding, ...])

Max pooling for 3D volume.

MeanPool3d([filter_size, strides, padding, ...])

Mean pooling for 3D volume.

GlobalMaxPool1d([data_format, name])

The GlobalMaxPool1d class is a 1D Global Max Pooling layer.

GlobalMeanPool1d([data_format, name])

The GlobalMeanPool1d class is a 1D Global Mean Pooling layer.

GlobalMaxPool2d([data_format, name])

The GlobalMaxPool2d class is a 2D Global Max Pooling layer.

GlobalMeanPool2d([data_format, name])

The GlobalMeanPool2d class is a 2D Global Mean Pooling layer.

GlobalMaxPool3d([data_format, name])

The GlobalMaxPool3d class is a 3D Global Max Pooling layer.

GlobalMeanPool3d([data_format, name])

The GlobalMeanPool3d class is a 3D Global Mean Pooling layer.

CornerPool2d([mode, name])

Corner pooling for 2D image [batch, height, width, channel], see here.

SubpixelConv1d([scale, act, in_channels, name])

It is a 1D sub-pixel up-sampling layer.

SubpixelConv2d([scale, n_out_channels, act, ...])

It is a 2D sub-pixel up-sampling layer, usually used for Super-Resolution applications, see SRGAN for example.

SpatialTransformer2dAffine([in_channels, ...])

The SpatialTransformer2dAffine class is a 2D Spatial Transformer Layer for 2D Affine Transformation.

transformer(U, theta, out_size[, name])

Spatial Transformer Layer for 2D Affine Transformation, see SpatialTransformer2dAffine class.

batch_transformer(U, thetas, out_size[, name])

Batch Spatial Transformer function for 2D Affine Transformation.

BatchNorm([decay, epsilon, act, is_train, ...])

The BatchNorm is a batch normalization layer for both fully-connected and convolution outputs.

BatchNorm1d([decay, epsilon, act, is_train, ...])

The BatchNorm1d applies Batch Normalization over 3D input (a mini-batch of 1D inputs with additional channel dimension), of shape (N, L, C) or (N, C, L).

BatchNorm2d([decay, epsilon, act, is_train, ...])

The BatchNorm2d applies Batch Normalization over 4D input (a mini-batch of 2D inputs with additional channel dimension) of shape (N, H, W, C) or (N, C, H, W).

BatchNorm3d([decay, epsilon, act, is_train, ...])

The BatchNorm3d applies Batch Normalization over 5D input (a mini-batch of 3D inputs with additional channel dimension) with shape (N, D, H, W, C) or (N, C, D, H, W).

LocalResponseNorm([depth_radius, bias, ...])

The LocalResponseNorm layer is for Local Response Normalization.

InstanceNorm([act, epsilon, beta_init, ...])

The InstanceNorm is an instance normalization layer for both fully-connected and convolution outputs.

InstanceNorm1d([act, epsilon, beta_init, ...])

The InstanceNorm1d applies Instance Normalization over 3D input (a mini-instance of 1D inputs with additional channel dimension), of shape (N, L, C) or (N, C, L).

InstanceNorm2d([act, epsilon, beta_init, ...])

The InstanceNorm2d applies Instance Normalization over 4D input (a mini-instance of 2D inputs with additional channel dimension) of shape (N, H, W, C) or (N, C, H, W).

InstanceNorm3d([act, epsilon, beta_init, ...])

The InstanceNorm3d applies Instance Normalization over 5D input (a mini-instance of 3D inputs with additional channel dimension) with shape (N, D, H, W, C) or (N, C, D, H, W).

LayerNorm([center, scale, act, epsilon, ...])

The LayerNorm class is for layer normalization, see tf.contrib.layers.layer_norm.

GroupNorm([groups, epsilon, act, ...])

The GroupNorm layer is for Group Normalization.

SwitchNorm([act, epsilon, beta_init, ...])

The SwitchNorm is a switchable normalization layer.

RNN(cell[, return_last_output, ...])

The RNN class is a fixed-length recurrent layer for implementing simple RNN, LSTM, GRU, etc.

SimpleRNN

GRURNN

LSTMRNN

BiRNN(fw_cell, bw_cell[, return_seq_2d, ...])

The BiRNN class is a fixed-length bidirectional recurrent layer.

retrieve_seq_length_op(data)

An op to compute the length of a sequence from an input of shape [batch_size, n_step(max), n_features]. It can be used when the padded features (on the right hand side) are all zeros.

retrieve_seq_length_op2(data)

An op to compute the length of a sequence from an input of shape [batch_size, n_step(max)]. It can be used when the padded features (on the right hand side) are all zeros.

retrieve_seq_length_op3(data[, pad_val])

An op to compute the length of a sequence; the data shape can be [batch_size, n_step(max)] or [batch_size, n_step(max), n_features].

target_mask_op

Flatten([name])

A layer that reshapes high-dimension input into a vector.

Reshape(shape[, name])

A layer that reshapes a given tensor.

Transpose([perm, conjugate, name])

A layer that transposes the dimensions of a tensor.

Shuffle(group[, name])

A layer that shuffles a 2D image [batch, height, width, channel], see here.

Lambda(fn[, fn_weights, fn_args, name])

A layer that wraps a user-defined function.

Concat([concat_dim, name])

A layer that concatenates multiple tensors along a given axis.

Elementwise([combine_fn, act, name])

A layer that combines multiple Layers that have the same output shapes using an element-wise operation.

ElementwiseLambda(fn[, fn_weights, fn_args, ...])

A layer that uses a custom function to combine multiple Layer inputs.

ExpandDims(axis[, name])

The ExpandDims class inserts a dimension of 1 into a tensor's shape, see tf.expand_dims().

Tile([multiples, name])

The Tile class constructs a tensor by tiling a given tensor, see tf.tile().

Stack([axis, name])

The Stack class is a layer for stacking a list of rank-R tensors into one rank-(R+1) tensor, see tf.stack().

UnStack([num, axis, name])

The UnStack class is a layer for unstacking the given dimension of a rank-R tensor into rank-(R-1) tensors, see tf.unstack().

Sign([name])

The Sign class is for quantizing the layer outputs to -1 or 1 during inference.

Scale([init_scale, name])

The Scale class multiplies the layer outputs by a trainable scale value.

BinaryDense([n_units, act, use_gemm, ...])

The BinaryDense class is a binary fully connected layer, whose weights are either -1 or 1 during inference.

BinaryConv2d([n_filter, filter_size, ...])

The BinaryConv2d class is a 2D binary CNN layer, whose weights are either -1 or 1 during inference.

TernaryDense([n_units, act, use_gemm, ...])

The TernaryDense class is a ternary fully connected layer, whose weights are either -1, 0 or 1 during inference.

TernaryConv2d([n_filter, filter_size, ...])

The TernaryConv2d class is a 2D ternary CNN layer, whose weights are either -1, 0 or 1 during inference.

DorefaDense([bitW, bitA, n_units, act, ...])

The DorefaDense class is a quantized fully connected layer, whose weights are 'bitW' bits and whose input (the output of the previous layer) is 'bitA' bits during inference.

DorefaConv2d([bitW, bitA, n_filter, ...])

The DorefaConv2d class is a 2D quantized convolutional layer, whose weights are 'bitW' bits and whose input (the output of the previous layer) is 'bitA' bits during inference.

QuantizedDense

QuantizedDenseWithBN

QuantizedConv2d

QuantizedConv2dWithBN

PRelu([channel_shared, in_channels, a_init, ...])

The PRelu class is a Parametric Rectified Linear layer.

PRelu6([channel_shared, in_channels, ...])

The PRelu6 class is a Parametric Rectified Linear layer integrating ReLU6 behaviour.

PTRelu6([channel_shared, in_channels, ...])

The PTRelu6 class is a Parametric Rectified Linear layer integrating ReLU6 behaviour.

flatten_reshape(variable[, name])

Reshapes high-dimension input into a vector.

initialize_rnn_state(state[, feed_dict])

Returns the initialized RNN state.

list_remove_repeat(x)

Remove the repeated items in a list, and return the processed list.

Layer Base Class

class tensorlayer.layers.Layer(name=None, act=None, *args, **kwargs)[source]

The basic Layer class represents a single layer of a neural network.

It should be subclassed when implementing new types of layers.

Parameters

name (str or None) -- A unique layer name. If None, a unique name will be automatically assigned.

__init__()[source]

Initializing the Layer.

__call__()[source]
  (1) Building the Layer if necessary. (2) Forwarding the computation.

all_weights()

Return a list of Tensors which are all weights of this Layer.

trainable_weights()

Return a list of Tensors which are all trainable weights of this Layer.

nontrainable_weights()

Return a list of Tensors which are all nontrainable weights of this Layer.

build()[source]

Abstract method. Build the Layer. All trainable weights should be defined in this function.

forward()[source]

Abstract method. Forward computation and return computation results.
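
To illustrate the build / forward contract, here is a minimal sketch of a custom layer (a hypothetical DoubleDense, not a built-in layer; it assumes the internal helper self._get_weights(name, shape) that the built-in layers use to register trainable weights):

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> class DoubleDense(tl.layers.Layer):  # hypothetical example layer
>>>     def __init__(self, n_units, in_channels, name=None):
>>>         super(DoubleDense, self).__init__(name)
>>>         self.n_units = n_units
>>>         self.in_channels = in_channels
>>>         self.build(None)
>>>         self._built = True
>>>     def build(self, inputs_shape):
>>>         # all trainable weights are defined here
>>>         self.W = self._get_weights("weights", shape=(self.in_channels, self.n_units))
>>>     def forward(self, inputs):
>>>         # forward computation; the result is returned
>>>         return tf.matmul(inputs, self.W) * 2
>>> net = DoubleDense(n_units=4, in_channels=3)(tl.layers.Input([8, 3]))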

Input Layers

Input Layer

tensorlayer.layers.Input(shape, dtype=tensorflow.float32, name=None)[source]

The Input class is the starting layer of a neural network.

Parameters
  • shape (tuple (int)) -- Including batch size.

  • dtype (dtype) -- The data type of the input, tf.float32 by default.

  • name (None or str) -- A unique layer name.
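
A minimal usage sketch (the shape values are arbitrary):

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> ni = tl.layers.Input([8, 784], name='input')
>>> print(ni)
tf.Tensor([...], shape=(8, 784), dtype=float32)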

One-hot Input Layer

class tensorlayer.layers.OneHot(depth=None, on_value=None, off_value=None, axis=None, dtype=None, name=None)[source]

The OneHot class is the starting layer of a neural network, see tf.one_hot. Useful link: https://www.tensorflow.org/api_docs/python/tf/one_hot.

Parameters
  • depth (None or int) -- If the input indices are rank N, the output will have rank N+1. The new axis is created at dimension axis (default: the new axis is appended at the end).

  • on_value (None or number) -- The value to represent ON. If None, it will default to the value 1.

  • off_value (None or number) -- The value to represent OFF. If None, it will default to the value 0.

  • axis (None or int) -- The axis.

  • dtype (None or TensorFlow dtype) -- The data type, None means tf.float32.

  • name (str) -- A unique layer name.

Examples

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> net = tl.layers.Input([32], dtype=tf.int32)
>>> onehot = tl.layers.OneHot(depth=8)
>>> print(onehot)
OneHot(depth=8, name='onehot')
>>> tensor = tl.layers.OneHot(depth=8)(net)
>>> print(tensor)
tf.Tensor([...], shape=(32, 8), dtype=float32)

Word2Vec Embedding Input Layer

class tensorlayer.layers.Word2vecEmbedding(vocabulary_size, embedding_size, num_sampled=64, activate_nce_loss=True, nce_loss_args=None, E_init=<tensorlayer.initializers.RandomUniform object>, nce_W_init=<tensorlayer.initializers.TruncatedNormal object>, nce_b_init=<tensorlayer.initializers.Constant object>, name=None)[source]

The Word2vecEmbedding class is a fully connected layer. For word embedding, words are input as integer indices. The output is the embedded word vector.

The layer integrates NCE loss by default (activate_nce_loss=True). If the NCE loss is activated, in a dynamic model, the computation of nce loss can be turned off in customised forward feeding by setting use_nce_loss=False when the layer is called. The NCE loss can be deactivated by setting activate_nce_loss=False.

Parameters
  • vocabulary_size (int) -- The size of vocabulary, number of words

  • embedding_size (int) -- The number of embedding dimensions

  • num_sampled (int) -- The number of negative examples for NCE loss

  • activate_nce_loss (boolean) -- Whether to activate the NCE loss or not. By default, True. If True, the layer will return both the embedding outputs and the nce_cost in forward feeding. If False, the layer will only return the embedding outputs. In a dynamic model, the computation of the NCE loss can be turned off in forward feeding by setting use_nce_loss=False when the layer is called. In a static model, once the model is constructed, the computation of the NCE loss cannot be changed (always computed or not computed).

  • nce_loss_args (dictionary) -- The arguments for tf.nn.nce_loss()

  • E_init (initializer) -- The initializer for initializing the embedding matrix

  • nce_W_init (initializer) -- The initializer for initializing the nce decoder weight matrix

  • nce_b_init (initializer) -- The initializer for initializing the nce decoder bias vector

  • name (str) -- A unique layer name

Attributes

outputs (Tensor) -- The embedding layer outputs.

normalized_embeddings (Tensor) -- Normalized embedding matrix.

nce_weights (Tensor) -- The NCE weights, only when activate_nce_loss is True.

nce_biases (Tensor) -- The NCE biases, only when activate_nce_loss is True.

Examples

Word2Vec With TensorLayer (Example in examples/text_word_embedding/tutorial_word2vec_basic.py)

>>> import numpy as np
>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> batch_size = 8
>>> embedding_size = 50
>>> inputs = tl.layers.Input([batch_size], dtype=tf.int32)
>>> labels = tl.layers.Input([batch_size, 1], dtype=tf.int32)
>>> emb_net = tl.layers.Word2vecEmbedding(
>>>     vocabulary_size=10000,
>>>     embedding_size=embedding_size,
>>>     num_sampled=100,
>>>     activate_nce_loss=True, # the nce loss is activated
>>>     nce_loss_args={},
>>>     E_init=tl.initializers.random_uniform(minval=-1.0, maxval=1.0),
>>>     nce_W_init=tl.initializers.truncated_normal(stddev=float(1.0 / np.sqrt(embedding_size))),
>>>     nce_b_init=tl.initializers.constant(value=0.0),
>>>     name='word2vec_layer',
>>> )
>>> print(emb_net)
Word2vecEmbedding(vocabulary_size=10000, embedding_size=50, num_sampled=100, activate_nce_loss=True, nce_loss_args={})
>>> embed_tensor = emb_net(inputs, use_nce_loss=False) # the nce loss is turned off and no need to provide labels
>>> embed_tensor = emb_net([inputs, labels], use_nce_loss=False) # the nce loss is turned off and the labels will be ignored
>>> embed_tensor, embed_nce_loss = emb_net([inputs, labels]) # the nce loss is calculated
>>> outputs = tl.layers.Dense(n_units=10, name="dense")(embed_tensor)
>>> model = tl.models.Model(inputs=[inputs, labels], outputs=[outputs, embed_nce_loss], name="word2vec_model") # a static model
>>> out = model([data_x, data_y], is_train=True) # where data_x is inputs and data_y is labels

References

https://www.tensorflow.org/tutorials/representation/word2vec

Embedding Input Layer

class tensorlayer.layers.Embedding(vocabulary_size, embedding_size, E_init=<tensorlayer.initializers.RandomUniform object>, name=None)[source]

The Embedding class is a look-up table for word embedding.

Word content is accessed using integer indices, and the output is the embedded word vector. To train a word embedding matrix, you can use Word2vecEmbedding. If you have a pre-trained matrix, you can assign its parameters to this layer.

Parameters
  • vocabulary_size (int) -- The size of vocabulary, number of words.

  • embedding_size (int) -- The number of embedding dimensions.

  • E_init (initializer) -- The initializer for the embedding matrix.

  • E_init_args (dictionary) -- The arguments for embedding matrix initializer.

  • name (str) -- A unique layer name.

Attributes

outputs (tensor) -- The embedding layer output is a 3D tensor in the shape: (batch_size, num_steps(num_words), embedding_size).

Examples

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> input = tl.layers.Input([8, 100], dtype=tf.int32)
>>> embed = tl.layers.Embedding(vocabulary_size=1000, embedding_size=50, name='embed')
>>> print(embed)
Embedding(vocabulary_size=1000, embedding_size=50)
>>> tensor = embed(input)
>>> print(tensor)
tf.Tensor([...], shape=(8, 100, 50), dtype=float32)
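
If you have a pre-trained matrix, one way to load it (a sketch, assuming the embedding matrix is the layer's only weight and is exposed through all_weights) is to assign it after the layer is built:

>>> import numpy as np
>>> pretrained = np.random.random([1000, 50]).astype(np.float32)  # stands in for a real pre-trained matrix
>>> embed.all_weights[0].assign(pretrained)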

Average Embedding Input Layer

class tensorlayer.layers.AverageEmbedding(vocabulary_size, embedding_size, pad_value=0, E_init=<tensorlayer.initializers.RandomUniform object>, name=None)[source]

The AverageEmbedding averages over embeddings of inputs. This is often used as the input layer for models like DAN[1] and FastText[2].

Parameters
  • vocabulary_size (int) -- The size of vocabulary.

  • embedding_size (int) -- The dimension of the embedding vectors.

  • pad_value (int) -- The scalar padding value used in inputs, 0 as default.

  • E_init (initializer) -- The initializer of the embedding matrix.

  • name (str) -- A unique layer name.

Attributes

outputs (tensor) -- The embedding layer output is a 2D tensor in the shape: (batch_size, embedding_size).

References

  • [1] Iyyer, M., Manjunatha, V., Boyd-Graber, J., & Daumé III, H. (2015). Deep Unordered Composition Rivals Syntactic Methods for Text Classification. In Association for Computational Linguistics.

  • [2] Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of Tricks for Efficient Text Classification.

Examples

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> batch_size = 8
>>> length = 5
>>> input = tl.layers.Input([batch_size, length], dtype=tf.int32)
>>> avgembed = tl.layers.AverageEmbedding(vocabulary_size=1000, embedding_size=50, name='avg')
>>> print(avgembed)
AverageEmbedding(vocabulary_size=1000, embedding_size=50, pad_value=0)
>>> tensor = avgembed(input)
>>> print(tensor)
tf.Tensor([...], shape=(8, 50), dtype=float32)

Parametric Activation Layers

PReLU Layer

class tensorlayer.layers.PRelu(channel_shared=False, in_channels=None, a_init=<tensorlayer.initializers.TruncatedNormal object>, name=None)[source]

The PRelu class is a Parametric Rectified Linear layer. It follows f(x) = alpha * x for x < 0, f(x) = x for x >= 0, where alpha is a learned array with the same shape as x.

Parameters
  • channel_shared (boolean) -- If True, single weight is shared by all channels.

  • in_channels (int) -- The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • a_init (initializer) -- The initializer for initializing the alpha(s).

  • name (None or str) -- A unique layer name.

Examples

>>> inputs = tl.layers.Input([10, 5])
>>> prelulayer = tl.layers.PRelu(channel_shared=True)
>>> print(prelulayer)
PRelu(channel_shared=True,in_channels=None,name=prelu)
>>> prelu = prelulayer(inputs)
>>> model = tl.models.Model(inputs=inputs, outputs=prelu)
>>> out = model(data, is_train=True)


PReLU6 Layer

class tensorlayer.layers.PRelu6(channel_shared=False, in_channels=None, a_init=<tensorlayer.initializers.TruncatedNormal object>, name=None)[source]

The PRelu6 class is a Parametric Rectified Linear layer integrating ReLU6 behaviour.

This Layer is a modified version of the PRelu.

This activation layer uses a modified version of tl.act.leaky_relu() introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also uses a modified version of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This activation layer pushes the logic further by adding leaky behaviour below zero and clipping the output above six.

The function returns the following results:
  • When x < 0: f(x) = alpha_low * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6.

Parameters
  • channel_shared (boolean) -- If True, single weight is shared by all channels.

  • in_channels (int) -- The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • a_init (initializer) -- The initializer for initializing the alpha(s).

  • name (None or str) -- A unique layer name.
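
A minimal usage sketch, assuming the same calling convention as PRelu above:

>>> inputs = tl.layers.Input([10, 5])
>>> prelu6layer = tl.layers.PRelu6(channel_shared=True)
>>> outputs = prelu6layer(inputs)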


PTRelu6 Layer

class tensorlayer.layers.PTRelu6(channel_shared=False, in_channels=None, a_init=<tensorlayer.initializers.TruncatedNormal object>, name=None)[source]

The PTRelu6 class is a Parametric Rectified Linear layer integrating ReLU6 behaviour.

This Layer is a modified version of the PRelu.

This activation layer uses a modified version of tl.act.leaky_relu() introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also uses a modified version of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This activation layer pushes the logic further by adding leaky behaviour both below zero and above six.

The function returns the following results:
  • When x < 0: f(x) = alpha_low * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6 + (alpha_high * (x-6)).

This version goes one step beyond PRelu6 by introducing leaky behaviour on the positive side when x > 6.

Parameters
  • channel_shared (boolean) -- If True, single weight is shared by all channels.

  • in_channels (int) -- The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • a_init (initializer) -- The initializer for initializing the alpha(s).

  • name (None or str) -- A unique layer name.
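
Usage follows the same pattern as PRelu above (a minimal sketch); PTRelu6 learns both alpha_low and alpha_high:

>>> inputs = tl.layers.Input([10, 5])
>>> ptrelu6layer = tl.layers.PTRelu6(channel_shared=True)
>>> outputs = ptrelu6layer(inputs)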


Convolutional Layers

Convolutions

Conv1d

class tensorlayer.layers.Conv1d(n_filter=32, filter_size=5, stride=1, act=None, padding='SAME', data_format='channels_last', dilation_rate=1, W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

Simplified version of Conv1dLayer.

Parameters
  • n_filter (int) -- The number of filters

  • filter_size (int) -- The filter size

  • stride (int) -- The stride step

  • dilation_rate (int) -- Specifying the dilation rate to use for dilated convolution.

  • act (activation function) -- The function that is applied to the layer activations

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • data_format (str) -- "channels_last" (NWC, default) or "channels_first" (NCW).

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 100, 1], name='input')
>>> conv1d = tl.layers.Conv1d(n_filter=32, filter_size=5, stride=2, b_init=None, in_channels=1, name='conv1d_1')
>>> print(conv1d)
>>> tensor = tl.layers.Conv1d(n_filter=32, filter_size=5, stride=2, act=tf.nn.relu, name='conv1d_2')(net)
>>> print(tensor)

Conv2d

class tensorlayer.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(1, 1), act=None, padding='SAME', data_format='channels_last', dilation_rate=(1, 1), W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

Simplified version of Conv2dLayer.

Parameters
  • n_filter (int) -- The number of filters.

  • filter_size (tuple of int) -- The filter size (height, width).

  • strides (tuple of int) -- The sliding window strides of corresponding input dimensions. It must be in the same order as the shape parameter.

  • dilation_rate (tuple of int) -- Specifying the dilation rate to use for dilated convolution.

  • act (activation function) -- The activation function of this layer.

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • data_format (str) -- "channels_last" (NHWC, default) or "channels_first" (NCHW).

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 400, 400, 3], name='input')
>>> conv2d = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), b_init=None, in_channels=3, name='conv2d_1')
>>> print(conv2d)
>>> tensor = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act=tf.nn.relu, name='conv2d_2')(net)
>>> print(tensor)

Conv3d

class tensorlayer.layers.Conv3d(n_filter=32, filter_size=(3, 3, 3), strides=(1, 1, 1), act=None, padding='SAME', data_format='channels_last', dilation_rate=(1, 1, 1), W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

Simplified version of Conv3dLayer.

Parameters
  • n_filter (int) -- The number of filters.

  • filter_size (tuple of int) -- The filter size (depth, height, width).

  • strides (tuple of int) -- The sliding window strides of corresponding input dimensions. It must be in the same order as the shape parameter.

  • dilation_rate (tuple of int) -- Specifying the dilation rate to use for dilated convolution.

  • act (activation function) -- The activation function of this layer.

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • data_format (str) -- "channels_last" (NDHWC, default) or "channels_first" (NCDHW).

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 20, 20, 20, 3], name='input')
>>> conv3d = tl.layers.Conv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), b_init=None, in_channels=3, name='conv3d_1')
>>> print(conv3d)
>>> tensor = tl.layers.Conv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), act=tf.nn.relu, name='conv3d_2')(net)
>>> print(tensor)

Deconvolutional Layers

DeConv2d

class tensorlayer.layers.DeConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act=None, padding='SAME', dilation_rate=(1, 1), data_format='channels_last', W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

Simplified version of DeConv2dLayer, see tf.nn.conv2d_transpose.

Parameters
  • n_filter (int) -- The number of filters.

  • filter_size (tuple of int) -- The filter size (height, width).

  • strides (tuple of int) -- The stride step (height, width).

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • act (activation function) -- The activation function of this layer.

  • data_format (str) -- "channels_last" (NHWC, default) or "channels_first" (NCHW).

  • dilation_rate (int or tuple of int) -- The dilation rate to use for dilated convolution.

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([5, 100, 100, 32], name='input')
>>> deconv2d = tl.layers.DeConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), in_channels=32, name='DeConv2d_1')
>>> print(deconv2d)
>>> tensor = tl.layers.DeConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), name='DeConv2d_2')(net)
>>> print(tensor)

DeConv3d

class tensorlayer.layers.DeConv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), padding='SAME', act=None, data_format='channels_last', W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

Simplified version of DeConv3dLayer, see tf.nn.conv3d_transpose.

Parameters
  • n_filter (int) -- The number of filters.

  • filter_size (tuple of int) -- The filter size (depth, height, width).

  • strides (tuple of int) -- The stride step (depth, height, width).

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • act (activation function) -- The activation function of this layer.

  • data_format (str) -- "channels_last" (NDHWC, default) or "channels_first" (NCDHW).

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip bias.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([5, 100, 100, 100, 32], name='input')
>>> deconv3d = tl.layers.DeConv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), in_channels=32, name='DeConv3d_1')
>>> print(deconv3d)
>>> tensor = tl.layers.DeConv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), name='DeConv3d_2')(net)
>>> print(tensor)

Deformable Convolutional Layer

DeformableConv2d

class tensorlayer.layers.DeformableConv2d(offset_layer=None, n_filter=32, filter_size=(3, 3), act=None, padding='SAME', W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

The DeformableConv2d class is a 2D Deformable Convolutional Network layer.

Parameters
  • offset_layer (tf.Tensor) -- To predict the offset of convolution operations. The shape is (batch_size, input height, input width, 2*(number of elements in the convolution kernel)), e.g. if applying a 3*3 kernel, the size of the last dimension should be 18 (2*3*3).

  • n_filter (int) -- The number of filters.

  • filter_size (tuple of int) -- The filter size (height, width).

  • act (activation function) -- The activation function of this layer.

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of in channels.

  • name (str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([5, 10, 10, 16], name='input')
>>> offset1 = tl.layers.Conv2d(
...     n_filter=18, filter_size=(3, 3), strides=(1, 1), padding='SAME', name='offset1'
... )(net)
>>> deformconv1 = tl.layers.DeformableConv2d(
...     offset_layer=offset1, n_filter=32, filter_size=(3, 3), name='deformable1'
... )(net)
>>> offset2 = tl.layers.Conv2d(
...     n_filter=18, filter_size=(3, 3), strides=(1, 1), padding='SAME', name='offset2'
... )(deformconv1)
>>> deformconv2 = tl.layers.DeformableConv2d(
...     offset_layer=offset2, n_filter=64, filter_size=(3, 3), name='deformable2'
... )(deformconv1)

References

  • The deformation operation was adapted from the implementation here

Notes

  • The padding is fixed to 'SAME'.

  • The current implementation is not optimized for memory usage. Please use it carefully.

Depthwise Convolutional Layer

DepthwiseConv2d

class tensorlayer.layers.DepthwiseConv2d(filter_size=(3, 3), strides=(1, 1), act=None, padding='SAME', data_format='channels_last', dilation_rate=(1, 1), depth_multiplier=1, W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

Separable/Depthwise Convolutional 2D layer, see tf.nn.depthwise_conv2d.

Input:

4-D Tensor (batch, height, width, in_channels).

Output:

4-D Tensor (batch, new height, new width, in_channels * depth_multiplier).

Parameters
  • filter_size (tuple of 2 int) -- The filter size (height, width).

  • strides (tuple of 2 int) -- The stride step (height, width).

  • act (activation function) -- The activation function of this layer.

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • data_format (str) -- "channels_last" (NHWC, default) or "channels_first" (NCHW).

  • dilation_rate (tuple of 2 int) -- The dilation rate in which we sample input values across the height and width dimensions in atrous convolution. If it is greater than 1, then all values of strides must be 1.

  • depth_multiplier (int) -- The number of channels to expand to.

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip bias.

  • in_channels (int) -- The number of in channels.

  • name (str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 200, 200, 32], name='input')
>>> depthwiseconv2d = tl.layers.DepthwiseConv2d(
...     filter_size=(3, 3), strides=(1, 1), dilation_rate=(2, 2), act=tf.nn.relu, depth_multiplier=2, name='depthwise'
... )(net)
>>> print(depthwiseconv2d)
>>> output shape : (8, 200, 200, 64)


Group Convolutional Layer

GroupConv2d

class tensorlayer.layers.GroupConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), n_group=2, act=None, padding='SAME', data_format='channels_last', dilation_rate=(1, 1), W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

The GroupConv2d class is a 2D grouped convolution layer, see here.

Parameters
  • n_filter (int) -- The number of filters.

  • filter_size (tuple of int) -- The filter size.

  • strides (tuple of int) -- The stride step.

  • n_group (int) -- The number of groups.

  • act (activation function) -- The activation function of this layer.

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • data_format (str) -- "channels_last" (NHWC, default) or "channels_first" (NCHW).

  • dilation_rate (tuple of int) -- Specifying the dilation rate to use for dilated convolution.

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 24, 24, 32], name='input')
>>> groupconv2d = tl.layers.GroupConv2d(
...     n_filter=64, filter_size=(3, 3), strides=(2, 2), n_group=2, name='group'
... )(net)
>>> print(groupconv2d)
>>> output shape : (8, 12, 12, 64)

Separable Convolutional Layers

SeparableConv1d

class tensorlayer.layers.SeparableConv1d(n_filter=100, filter_size=3, strides=1, act=None, padding='valid', data_format='channels_last', dilation_rate=1, depth_multiplier=1, depthwise_init=None, pointwise_init=None, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

The SeparableConv1d class is a 1D depthwise separable convolutional layer.

This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels.

Parameters
  • n_filter (int) -- The dimensionality of the output space (i.e. the number of filters in the convolution).

  • filter_size (int) -- Specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.

  • strides (int) -- Specifying the stride of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.

  • padding (str) -- One of "valid" or "same" (case-insensitive).

  • data_format (str) -- One of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).

  • dilation_rate (int) -- Specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.

  • depth_multiplier (int) -- The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier.

  • depthwise_init (initializer) -- The initializer for the depthwise convolution kernel.

  • pointwise_init (initializer) -- The initializer for the pointwise convolution kernel.

  • b_init (initializer) -- The initializer for the bias vector. If None, ignore bias in the pointwise part only.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 50, 64], name='input')
>>> separableconv1d = tl.layers.SeparableConv1d(n_filter=32, filter_size=3, strides=2, padding='SAME', act=tf.nn.relu, name='separable_1d')(net)
>>> print(separableconv1d)
>>> output shape : (8, 25, 32)

SeparableConv2d

class tensorlayer.layers.SeparableConv2d(n_filter=100, filter_size=(3, 3), strides=(1, 1), act=None, padding='valid', data_format='channels_last', dilation_rate=(1, 1), depth_multiplier=1, depthwise_init=None, pointwise_init=None, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

The SeparableConv2d class is a 2D depthwise separable convolutional layer.

This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. In contrast, DepthwiseConv2d performs the depthwise convolution only, which allows batch normalization to be added between the depthwise and pointwise convolutions.

Parameters
  • n_filter (int) -- The dimensionality of the output space (i.e. the number of filters in the convolution).

  • filter_size (tuple/list of 2 int) -- Specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions.

  • strides (tuple/list of 2 int) -- Specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.

  • padding (str) -- One of "valid" or "same" (case-insensitive).

  • data_format (str) -- One of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).

  • dilation_rate (integer or tuple/list of 2 int) -- Specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.

  • depth_multiplier (int) -- The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier.

  • depthwise_init (initializer) -- The initializer for the depthwise convolution kernel.

  • pointwise_init (initializer) -- The initializer for the pointwise convolution kernel.

  • b_init (initializer) -- The initializer for the bias vector. If None, ignore bias in the pointwise part only.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 50, 50, 64], name='input')
>>> separableconv2d = tl.layers.SeparableConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act=tf.nn.relu, padding='VALID', name='separableconv2d')(net)
>>> print(separableconv2d)
>>> output shape : (8, 24, 24, 32)

SubPixel Convolutional Layers

SubpixelConv1d

class tensorlayer.layers.SubpixelConv1d(scale=2, act=None, in_channels=None, name=None)[source]

It is a 1D sub-pixel up-sampling layer.

Calls a TensorFlow function that directly implements this functionality. We assume input has dim (batch, width, r)

Parameters
  • scale (int) -- The up-scaling ratio; an incorrect setting will lead to a dimension size error.

  • act (activation function) -- The activation function of this layer.

  • in_channels (int) -- The number of in channels.

  • name (str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 25, 32], name='input')
>>> subpixelconv1d = tl.layers.SubpixelConv1d(scale=2, name='subpixelconv1d')(net)
>>> print(subpixelconv1d)
>>> output shape : (8, 50, 16)

References

Audio Super Resolution Implementation.

SubpixelConv2d

class tensorlayer.layers.SubpixelConv2d(scale=2, n_out_channels=None, act=None, in_channels=None, name=None)[source]

It is a 2D sub-pixel up-sampling layer, usually used for Super-Resolution applications, see SRGAN for example.

Parameters
  • scale (int) -- The up-scaling ratio; an incorrect setting will lead to a dimension size error.

  • n_out_channels (int or None) -- The number of output channels. If None, it is set automatically such that the number of input channels == (scale x scale) x the number of output channels.

  • act (activation function) -- The activation function of this layer.

  • in_channels (int) -- The number of in channels.

  • name (str) -- A unique layer name.

Examples

With TensorLayer

>>> # the examples here show how to set n_out_channels.
>>> net = tl.layers.Input([2, 16, 16, 4], name='input1')
>>> subpixelconv2d = tl.layers.SubpixelConv2d(scale=2, n_out_channels=1, name='subpixel_conv2d1')(net)
>>> print(subpixelconv2d)
>>> output shape : (2, 32, 32, 1)
>>> net = tl.layers.Input([2, 16, 16, 4*10], name='input2')
>>> subpixelconv2d = tl.layers.SubpixelConv2d(scale=2, n_out_channels=10, name='subpixel_conv2d2')(net)
>>> print(subpixelconv2d)
>>> output shape : (2, 32, 32, 10)
>>> net = tl.layers.Input([2, 16, 16, 25*10], name='input3')
>>> subpixelconv2d = tl.layers.SubpixelConv2d(scale=5, n_out_channels=10, name='subpixel_conv2d3')(net)
>>> print(subpixelconv2d)
>>> output shape : (2, 80, 80, 10)


Fully Connected (Dense) Layers


Drop Connect Dense Layer

class tensorlayer.layers.DropconnectDense(keep=0.5, n_units=100, act=None, W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

The DropconnectDense class is Dense with DropConnect behaviour which randomly removes connections between this layer and the previous layer according to a keeping probability.

Parameters
  • keep (float) -- The keeping probability. The lower the probability, the more connections between this layer and the previous layer are dropped.

  • n_units (int) -- The number of units of this layer.

  • act (activation function) -- The activation function of this layer.

  • W_init (weights initializer) -- The initializer for the weight matrix.

  • b_init (biases initializer) -- The initializer for the bias vector.

  • in_channels (int) -- The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • name (str) -- A unique layer name.

Examples

>>> net = tl.layers.Input([None, 784], name='input')
>>> net = tl.layers.DropconnectDense(keep=0.8,
...         n_units=800, act=tf.nn.relu, name='relu1')(net)
>>> net = tl.layers.DropconnectDense(keep=0.5,
...         n_units=800, act=tf.nn.relu, name='relu2')(net)
>>> net = tl.layers.DropconnectDense(keep=0.5,
...         n_units=10, name='output')(net)


Dropout Layer

class tensorlayer.layers.Dropout(keep, seed=None, name=None)[source]

The Dropout class is a noise layer which randomly sets some activations to zero according to a keeping probability.

Parameters
  • keep (float) -- The keeping probability. The lower the probability, the more activations are set to zero.

  • seed (int or None) -- The seed for random dropout.

  • name (None or str) -- A unique layer name.
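
A minimal usage sketch, following the DropconnectDense example above:

>>> net = tl.layers.Input([None, 784], name='input')
>>> net = tl.layers.Dense(n_units=800, act=tf.nn.relu, name='dense1')(net)
>>> net = tl.layers.Dropout(keep=0.8, name='drop1')(net)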

Extend Layers

Expand Dims Layer

class tensorlayer.layers.ExpandDims(axis, name=None)[source]

The ExpandDims class inserts a dimension of 1 into a tensor's shape, see tf.expand_dims().

Parameters
  • axis (int) -- The dimension index at which to expand the shape of input.

  • name (str) -- A unique layer name. If None, a unique name will be automatically assigned.

Examples

>>> x = tl.layers.Input([10, 3], name='in')
>>> y = tl.layers.ExpandDims(axis=-1)(x)
[10, 3, 1]

Tile Layer

class tensorlayer.layers.Tile(multiples=None, name=None)[source]

The Tile class constructs a tensor by tiling a given tensor, see tf.tile().

Parameters
  • multiples (tensor) -- Must be one of the following types: int32, int64. 1-D; its length must be the same as the number of dimensions in input.

  • name (None or str) -- A unique layer name.

Examples

>>> x = tl.layers.Input([10, 3], name='in')
>>> y = tl.layers.Tile(multiples=[2, 3])(x)
[20, 9]

Image Resampling Layers

2D UpSampling Layer

class tensorlayer.layers.UpSampling2d(scale, method='bilinear', antialias=False, data_format='channel_last', name=None)[source]

The UpSampling2d class is an up-sampling 2D layer.

See tf.image.resize_images.

Parameters
  • scale (int/float or tuple of int/float) -- (height, width) scale factor.

  • method (str) --

    The resize method selected through the given string. Default 'bilinear'.
    • 'bilinear', Bilinear interpolation.

    • 'nearest', Nearest neighbor interpolation.

    • 'bicubic', Bicubic interpolation.

    • 'area', Area interpolation.

  • antialias (boolean) -- Whether to use an anti-aliasing filter when downsampling an image.

  • data_format (str) -- 'channel_last' (channels last, default) or 'channel_first'.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> ni = tl.layers.Input([None, 50, 50, 32], name='input')
>>> ni = tl.layers.UpSampling2d(scale=(2, 2))(ni)
>>> output shape : [None, 100, 100, 32]

2D DownSampling Layer

class tensorlayer.layers.DownSampling2d(scale, method='bilinear', antialias=False, data_format='channel_last', name=None)[source]

The DownSampling2d class is a down-sampling 2D layer.

See tf.image.resize_images.

Parameters
  • scale (int/float or tuple of int/float) -- (height, width) scale factor.

  • method (str) --

    The resize method selected through the given string. Default 'bilinear'.
    • 'bilinear', Bilinear interpolation.

    • 'nearest', Nearest neighbor interpolation.

    • 'bicubic', Bicubic interpolation.

    • 'area', Area interpolation.

  • antialias (boolean) -- Whether to use an anti-aliasing filter when downsampling an image.

  • data_format (str) -- 'channel_last' (channels last, default) or 'channel_first'.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> ni = tl.layers.Input([None, 50, 50, 32], name='input')
>>> ni = tl.layers.DownSampling2d(scale=(2, 2))(ni)
>>> output shape : [None, 25, 25, 32]

Lambda Layers

Lambda Layer

class tensorlayer.layers.Lambda(fn, fn_weights=None, fn_args=None, name=None)[source]

A layer that wraps a user-defined function. If the function has trainable weights, the weights should be provided. Remember to make sure the weights provided when the layer is constructed are the same as the weights used when the layer is forwarded. For multiple inputs see ElementwiseLambda.

Parameters
  • fn (function) -- The function that applies to the inputs (e.g. tensor from the previous layer).

  • fn_weights (list) -- The trainable weights for the function if any. Optional.

  • fn_args (dict) -- The arguments for the function if any. Optional.

  • name (str or None) -- A unique layer name.

Examples

Non-parametric and non-args case. This case is supported by Model.save() / Model.load() to save / load the whole model architecture and weights (optional).

>>> x = tl.layers.Input([8, 3], name='input')
>>> y = tl.layers.Lambda(lambda x: 2*x, name='lambda')(x)

Non-parametric and with-args case. This case is supported by Model.save() / Model.load() to save / load the whole model architecture and weights (optional).

>>> def customize_func(x, foo=42): # x is the inputs, foo is an argument
>>>     return foo * x
>>> x = tl.layers.Input([8, 3], name='input')
>>> lambdalayer = tl.layers.Lambda(customize_func, fn_args={'foo': 2}, name='lambda')(x)

Any function with outside variables. This case is not yet supported by Model.save() / Model.load(). Please avoid using Model.save() / Model.load() to save / load models that contain such a Lambda layer. Instead, you may use Model.save_weights() / Model.load_weights() to save / load model weights. Note: in this case, fn_weights should be a list, so that the trainable weights in this Lambda layer can be added into the weights of the whole model.

>>> vara = [tf.Variable(1.0)]
>>> def func(x):
>>>     return x + vara
>>> x = tl.layers.Input([8, 3], name='input')
>>> y = tl.layers.Lambda(func, fn_weights=vara, name='lambda')(x)

Parametric case, merging other wrappers into TensorLayer. This case is supported by Model.save() / Model.load() to save / load the whole model architecture and weights (optional).

>>> layers = [
>>>     tf.keras.layers.Dense(10, activation=tf.nn.relu),
>>>     tf.keras.layers.Dense(5, activation=tf.nn.sigmoid),
>>>     tf.keras.layers.Dense(1, activation=tf.identity)
>>> ]
>>> perceptron = tf.keras.Sequential(layers)
>>> # in order to compile keras model and get trainable_variables of the keras model
>>> _ = perceptron(np.random.random([100, 5]).astype(np.float32))
>>> class CustomizeModel(tl.models.Model):
>>>     def __init__(self):
>>>         super(CustomizeModel, self).__init__()
>>>         self.dense = tl.layers.Dense(in_channels=1, n_units=5)
>>>         self.lambdalayer = tl.layers.Lambda(perceptron, perceptron.trainable_variables)
>>>     def forward(self, x):
>>>         z = self.dense(x)
>>>         z = self.lambdalayer(z)
>>>         return z
>>> optimizer = tf.optimizers.Adam(learning_rate=0.1)
>>> model = CustomizeModel()
>>> model.train()
>>> for epoch in range(50):
>>>     with tf.GradientTape() as tape:
>>>         pred_y = model(data_x)
>>>         loss = tl.cost.mean_squared_error(pred_y, data_y)
>>>     gradients = tape.gradient(loss, model.trainable_weights)
>>>     optimizer.apply_gradients(zip(gradients, model.trainable_weights))

Element-wise Lambda Layer

class tensorlayer.layers.ElementwiseLambda(fn, fn_weights=None, fn_args=None, name=None)[source]

A layer that uses a custom function to combine multiple Layer inputs. If the function has trainable weights, the weights should be provided. Remember to make sure the weights provided when the layer is constructed are the same as the weights used when the layer is forwarded.

Parameters
  • fn (function) -- The function that applies to the inputs (e.g. tensor from the previous layer).

  • fn_weights (list) -- The trainable weights for the function if any. Optional.

  • fn_args (dict) -- The arguments for the function if any. Optional.

  • name (str or None) -- A unique layer name.

Examples

Non-parametric and with-args case. This case is supported by Model.save() / Model.load() to save / load the whole model architecture and weights (optional).

z = mean + noise * tf.exp(std * 0.5) + foo

>>> def func(noise, mean, std, foo=42):
>>>     return mean + noise * tf.exp(std * 0.5) + foo

>>> noise = tl.layers.Input([100, 1])
>>> mean = tl.layers.Input([100, 1])
>>> std = tl.layers.Input([100, 1])
>>> out = tl.layers.ElementwiseLambda(fn=func, fn_args={'foo': 84}, name='elementwiselambda')([noise, mean, std])

Non-parametric and non-args case. This case is supported by Model.save() / Model.load() to save / load the whole model architecture and weights (optional).

z = mean + noise * tf.exp(std * 0.5)

>>> noise = tl.layers.Input([100, 1])
>>> mean = tl.layers.Input([100, 1])
>>> std = tl.layers.Input([100, 1])
>>> out = tl.layers.ElementwiseLambda(fn=lambda x, y, z: x + y * tf.exp(z * 0.5), name='elementwiselambda')([noise, mean, std])

Any function with outside variables. This case is not yet supported by Model.save() / Model.load(). Please avoid using Model.save() / Model.load() to save / load models that contain such an ElementwiseLambda layer. Instead, you may use Model.save_weights() / Model.load_weights() to save / load model weights. Note: in this case, fn_weights should be a list, so that the trainable weights in this ElementwiseLambda layer can be added into the weights of the whole model.

z = mean + noise * tf.exp(std * 0.5) + vara

>>> vara = [tf.Variable(1.0)]
>>> def func(noise, mean, std):
>>>     return mean + noise * tf.exp(std * 0.5) + vara
>>> noise = tl.layers.Input([100, 1])
>>> mean = tl.layers.Input([100, 1])
>>> std = tl.layers.Input([100, 1])
>>> out = tl.layers.ElementwiseLambda(fn=func, fn_weights=vara, name='elementwiselambda')([noise, mean, std])

Merge Layers

Concat Layer

class tensorlayer.layers.Concat(concat_dim=-1, name=None)[source]

A layer that concatenates multiple tensors along a given axis.

Parameters
  • concat_dim (int) -- The dimension to concatenate.

  • name (None or str) -- A unique layer name.

Examples

>>> class CustomModel(tl.models.Model):
>>>     def __init__(self):
>>>         super(CustomModel, self).__init__(name="custom")
>>>         self.dense1 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu1_1')
>>>         self.dense2 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu2_1')
>>>         self.concat = tl.layers.Concat(concat_dim=1, name='concat_layer')
>>>     def forward(self, inputs):
>>>         d1 = self.dense1(inputs)
>>>         d2 = self.dense2(inputs)
>>>         outputs = self.concat([d1, d2])
>>>         return outputs

Element-wise Merge Layer

class tensorlayer.layers.Elementwise(combine_fn=tensorflow.minimum, act=None, name=None)[source]

A layer that combines multiple Layers that have the same output shapes using an element-wise operation. If the element-wise operation is complicated, please consider using ElementwiseLambda.

Parameters
  • combine_fn (a TensorFlow element-wise combine function) -- e.g. AND is tf.minimum ; OR is tf.maximum ; ADD is tf.add ; MUL is tf.multiply and so on. See TensorFlow Math API . If the combine function is more complicated, please consider using ElementwiseLambda.

  • act (activation function) -- The activation function of this layer.

  • name (None or str) -- A unique layer name.

Examples

>>> class CustomModel(tl.models.Model):
>>>     def __init__(self):
>>>         super(CustomModel, self).__init__(name="custom")
>>>         self.dense1 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu1_1')
>>>         self.dense2 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu2_1')
>>>         self.element = tl.layers.Elementwise(combine_fn=tf.minimum, name='minimum', act=tf.identity)
>>>     def forward(self, inputs):
>>>         d1 = self.dense1(inputs)
>>>         d2 = self.dense2(inputs)
>>>         outputs = self.element([d1, d2])
>>>         return outputs

Noise Layer

class tensorlayer.layers.GaussianNoise(mean=0.0, stddev=1.0, is_train=True, seed=None, name=None)[source]

The GaussianNoise class is a noise layer that adds Gaussian-distributed noise to the activations.

Parameters
  • mean (float) -- The mean. Default is 0.0.

  • stddev (float) -- The standard deviation. Default is 1.0.

  • is_train (boolean) -- Whether the layer is in training mode. If False, this layer is skipped. Default is True.

  • seed (int or None) -- The seed for random noise.

  • name (str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([64, 200], name='input')
>>> net = tl.layers.Dense(n_units=100, act=tf.nn.relu, name='dense')(net)
>>> gaussianlayer = tl.layers.GaussianNoise(name='gaussian')(net)
>>> print(gaussianlayer)
>>> output shape : (64, 100)

Normalization Layers

Batch Normalization Layer

class tensorlayer.layers.BatchNorm(decay=0.9, epsilon=1e-05, act=None, is_train=False, beta_init=<tensorlayer.initializers.Zeros object>, gamma_init=<tensorlayer.initializers.RandomNormal object>, moving_mean_init=<tensorlayer.initializers.Zeros object>, moving_var_init=<tensorlayer.initializers.Zeros object>, num_features=None, data_format='channels_last', name=None)[source]

The BatchNorm is a batch normalization layer for both fully-connected and convolution outputs. See tf.nn.batch_normalization and tf.nn.moments.

Parameters
  • decay (float) -- A decay factor for ExponentialMovingAverage. A large value is suggested for large datasets.

  • epsilon (float) -- Epsilon.

  • act (activation function) -- The activation function of this layer.

  • is_train (boolean) -- Is being used for training or inference.

  • beta_init (initializer or None) -- The initializer for beta; if None, skip beta. Usually you should not skip beta unless you know what you are doing.

  • gamma_init (initializer or None) -- The initializer for gamma; if None, skip gamma. When the batch normalization layer is used instead of biases, or the next layer is linear, gamma can be disabled since the scaling can be done by the next layer. See Inception-ResNet-v2.

  • moving_mean_init (initializer or None) -- The initializer for initializing moving mean, if None, skip moving mean.

  • moving_var_init (initializer or None) -- The initializer for initializing moving var, if None, skip moving var.

  • num_features (int) -- Number of features for input tensor. Useful to build layer if using BatchNorm1d, BatchNorm2d or BatchNorm3d, but should be left as None if using BatchNorm. Default None.

  • data_format (str) -- One of 'channels_last' (default) or 'channels_first'.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.BatchNorm()(net)

Notes

The BatchNorm is universally suitable for 3D/4D/5D inputs in static models. It should not be used in dynamic models, where layers are built upon class initialization; there, use the subclasses BatchNorm1d, BatchNorm2d or BatchNorm3d with the num_features argument. All three subclasses are suitable under all kinds of conditions.

1D Batch Normalization Layer

class tensorlayer.layers.BatchNorm1d(decay=0.9, epsilon=1e-05, act=None, is_train=False, beta_init=<tensorlayer.initializers.Zeros object>, gamma_init=<tensorlayer.initializers.RandomNormal object>, moving_mean_init=<tensorlayer.initializers.Zeros object>, moving_var_init=<tensorlayer.initializers.Zeros object>, num_features=None, data_format='channels_last', name=None)[source]

The BatchNorm1d applies Batch Normalization over 3D input (a mini-batch of 1D inputs with additional channel dimension), of shape (N, L, C) or (N, C, L). See more details in BatchNorm.

Examples

With TensorLayer

>>> # in static model, no need to specify num_features
>>> net = tl.layers.Input([None, 50, 32], name='input')
>>> net = tl.layers.BatchNorm1d()(net)
>>> # in dynamic model, build by specifying num_features
>>> conv = tl.layers.Conv1d(32, 5, 1, in_channels=3)
>>> bn = tl.layers.BatchNorm1d(num_features=32)

2D Batch Normalization Layer

class tensorlayer.layers.BatchNorm2d(decay=0.9, epsilon=1e-05, act=None, is_train=False, beta_init=<tensorlayer.initializers.Zeros object>, gamma_init=<tensorlayer.initializers.RandomNormal object>, moving_mean_init=<tensorlayer.initializers.Zeros object>, moving_var_init=<tensorlayer.initializers.Zeros object>, num_features=None, data_format='channels_last', name=None)[source]

The BatchNorm2d applies Batch Normalization over 4D input (a mini-batch of 2D inputs with additional channel dimension) of shape (N, H, W, C) or (N, C, H, W). See more details in BatchNorm.

Examples

With TensorLayer

>>> # in static model, no need to specify num_features
>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.BatchNorm2d()(net)
>>> # in dynamic model, build by specifying num_features
>>> conv = tl.layers.Conv2d(32, (5, 5), (1, 1), in_channels=3)
>>> bn = tl.layers.BatchNorm2d(num_features=32)

3D Batch Normalization Layer

class tensorlayer.layers.BatchNorm3d(decay=0.9, epsilon=1e-05, act=None, is_train=False, beta_init=<tensorlayer.initializers.Zeros object>, gamma_init=<tensorlayer.initializers.RandomNormal object>, moving_mean_init=<tensorlayer.initializers.Zeros object>, moving_var_init=<tensorlayer.initializers.Zeros object>, num_features=None, data_format='channels_last', name=None)[source]

The BatchNorm3d applies Batch Normalization over 5D input (a mini-batch of 3D inputs with additional channel dimension) with shape (N, D, H, W, C) or (N, C, D, H, W). See more details in BatchNorm.

Examples

With TensorLayer

>>> # in static model, no need to specify num_features
>>> net = tl.layers.Input([None, 50, 50, 50, 32], name='input')
>>> net = tl.layers.BatchNorm3d()(net)
>>> # in dynamic model, build by specifying num_features
>>> conv = tl.layers.Conv3d(32, (5, 5, 5), (1, 1, 1), in_channels=3)
>>> bn = tl.layers.BatchNorm3d(num_features=32)

Local Response Normalization Layer

class tensorlayer.layers.LocalResponseNorm(depth_radius=None, bias=None, alpha=None, beta=None, name=None)[source]

The LocalResponseNorm layer is for Local Response Normalization. See tf.nn.local_response_normalization or tf.nn.lrn in newer TF versions. The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted square-sum of inputs within depth_radius.

Parameters
  • depth_radius (int) -- Depth radius. 0-D. Half-width of the 1-D normalization window.

  • bias (float) -- An offset, usually positive, to avoid dividing by zero.

  • alpha (float) -- A scale factor which is usually positive.

  • beta (float) -- An exponent.

  • name (None or str) -- A unique layer name.
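
Examples

A minimal usage sketch (the parameter values below are the commonly used LRN defaults, given here only for illustration; they are not prescribed by this entry):

>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.LocalResponseNorm(depth_radius=5, bias=1.0, alpha=0.0001, beta=0.75, name='lrn')(net)
>>> output shape : (None, 50, 50, 32)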

Instance Normalization Layer

class tensorlayer.layers.InstanceNorm(act=None, epsilon=1e-05, beta_init=<tensorlayer.initializers.Zeros object>, gamma_init=<tensorlayer.initializers.RandomNormal object>, num_features=None, data_format='channels_last', name=None)[source]

The InstanceNorm is an instance normalization layer for both fully-connected and convolution outputs. See tf.nn.batch_normalization and tf.nn.moments.

Parameters
  • act (activation function) -- The activation function of this layer.

  • epsilon (float) -- Epsilon.

  • beta_init (initializer or None) -- The initializer for beta; if None, skip beta. Usually you should not skip beta unless you know what you are doing.

  • gamma_init (initializer or None) -- The initializer for gamma; if None, skip gamma. When the instance normalization layer is used instead of biases, or the next layer is linear, gamma can be disabled since the scaling can be done by the next layer. See Inception-ResNet-v2.

  • num_features (int) -- Number of features for input tensor. Useful to build layer if using InstanceNorm1d, InstanceNorm2d or InstanceNorm3d, but should be left as None if using InstanceNorm. Default None.

  • data_format (str) -- One of 'channels_last' (default) or 'channels_first'.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.InstanceNorm()(net)

Notes

The InstanceNorm is universally suitable for 3D/4D/5D inputs in static models. It should not be used in dynamic models, where layers are built upon class initialization; there, use the subclasses InstanceNorm1d, InstanceNorm2d or InstanceNorm3d with the num_features argument. All three subclasses are suitable under all kinds of conditions.

1D Instance Normalization Layer

class tensorlayer.layers.InstanceNorm1d(act=None, epsilon=1e-05, beta_init=<tensorlayer.initializers.Zeros object>, gamma_init=<tensorlayer.initializers.RandomNormal object>, num_features=None, data_format='channels_last', name=None)[source]

The InstanceNorm1d applies Instance Normalization over 3D input (a mini-batch of 1D inputs with additional channel dimension), of shape (N, L, C) or (N, C, L). See more details in InstanceNorm.

Examples

With TensorLayer

>>> # in static model, no need to specify num_features
>>> net = tl.layers.Input([None, 50, 32], name='input')
>>> net = tl.layers.InstanceNorm1d()(net)
>>> # in dynamic model, build by specifying num_features
>>> conv = tl.layers.Conv1d(32, 5, 1, in_channels=3)
>>> bn = tl.layers.InstanceNorm1d(num_features=32)

2D Instance Normalization Layer

class tensorlayer.layers.InstanceNorm2d(act=None, epsilon=1e-05, beta_init=<tensorlayer.initializers.Zeros object>, gamma_init=<tensorlayer.initializers.RandomNormal object>, num_features=None, data_format='channels_last', name=None)[source]

The InstanceNorm2d applies Instance Normalization over 4D input (a mini-batch of 2D inputs with additional channel dimension) of shape (N, H, W, C) or (N, C, H, W). See more details in InstanceNorm.

Examples

With TensorLayer

>>> # in static model, no need to specify num_features
>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.InstanceNorm2d()(net)
>>> # in dynamic model, build by specifying num_features
>>> conv = tl.layers.Conv2d(32, (5, 5), (1, 1), in_channels=3)
>>> bn = tl.layers.InstanceNorm2d(num_features=32)

3D Instance Normalization Layer

class tensorlayer.layers.InstanceNorm3d(act=None, epsilon=1e-05, beta_init=<tensorlayer.initializers.Zeros object>, gamma_init=<tensorlayer.initializers.RandomNormal object>, num_features=None, data_format='channels_last', name=None)[source]

The InstanceNorm3d applies Instance Normalization over 5D input (a mini-batch of 3D inputs with additional channel dimension) with shape (N, D, H, W, C) or (N, C, D, H, W). See more details in InstanceNorm.

Examples

With TensorLayer

>>> # in static model, no need to specify num_features
>>> net = tl.layers.Input([None, 50, 50, 50, 32], name='input')
>>> net = tl.layers.InstanceNorm3d()(net)
>>> # in dynamic model, build by specifying num_features
>>> conv = tl.layers.Conv3d(32, (5, 5, 5), (1, 1, 1), in_channels=3)
>>> bn = tl.layers.InstanceNorm3d(num_features=32)

Layer Normalization Layer

class tensorlayer.layers.LayerNorm(center=True, scale=True, act=None, epsilon=1e-12, begin_norm_axis=1, begin_params_axis=-1, beta_init=<tensorlayer.initializers.Zeros object>, gamma_init=<tensorlayer.initializers.Ones object>, data_format='channels_last', name=None)[source]

The LayerNorm class is for layer normalization, see tf.contrib.layers.layer_norm.

Parameters
  • center (boolean) -- If True, add an offset (beta) to the normalized tensor. Default is True.

  • scale (boolean) -- If True, multiply the normalized tensor by gamma. Default is True.

  • act (activation function) -- The activation function of this layer.

  • epsilon (float) -- Epsilon.

  • begin_norm_axis (int) -- Normalization is performed over axes begin_norm_axis ... rank(inputs) - 1.

  • begin_params_axis (int) -- The first axis of the parameters (beta, gamma), broadcast over the remaining axes.

  • beta_init (initializer or None) -- The initializer for beta; if None, skip beta.

  • gamma_init (initializer or None) -- The initializer for gamma; if None, skip gamma.

  • data_format (str) -- One of 'channels_last' (default) or 'channels_first'.

  • name (None or str) -- A unique layer name.
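
Examples

A minimal usage sketch (added for illustration, mirroring the other normalization layers in this document):

>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.LayerNorm()(net)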

Group Normalization Layer

class tensorlayer.layers.GroupNorm(groups=32, epsilon=1e-06, act=None, data_format='channels_last', name=None)[source]

The GroupNorm layer is for Group Normalization. See tf.contrib.layers.group_norm.

Parameters
  • groups (int) -- The number of groups.

  • act (activation function) -- The activation function of this layer.

  • epsilon (float) -- Epsilon.

  • data_format (str) -- One of 'channels_last' (default) or 'channels_first'.

  • name (None or str) -- A unique layer name.
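
Examples

A minimal usage sketch (added for illustration; groups must evenly divide the number of channels, and the choice of 32 channels with groups=8 here is an assumption):

>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.GroupNorm(groups=8)(net)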

Switch Normalization Layer

class tensorlayer.layers.SwitchNorm(act=None, epsilon=1e-05, beta_init=<tensorlayer.initializers.Constant object>, gamma_init=<tensorlayer.initializers.Constant object>, moving_mean_init=<tensorlayer.initializers.Zeros object>, data_format='channels_last', name=None)[source]

The SwitchNorm is a switchable normalization layer.

Parameters
  • act (activation function) -- The activation function of this layer.

  • epsilon (float) -- Epsilon.

  • beta_init (initializer or None) -- The initializer for beta; if None, skip beta. Usually you should not skip beta unless you know what you are doing.

  • gamma_init (initializer or None) -- The initializer for gamma; if None, skip gamma. When the normalization layer is used instead of biases, or the next layer is linear, gamma can be disabled since the scaling can be done by the next layer. See Inception-ResNet-v2.

  • moving_mean_init (initializer or None) -- The initializer for initializing moving mean, if None, skip moving mean.

  • data_format (str) -- One of 'channels_last' (default) or 'channels_first'.

  • name (None or str) -- A unique layer name.
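
Examples

A minimal usage sketch (added for illustration, following the BatchNorm example above):

>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.SwitchNorm()(net)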

Padding Layers

Padding Layer (low-level API)

class tensorlayer.layers.PadLayer(padding=None, mode='CONSTANT', name=None)[source]

The PadLayer class is a padding layer for any mode and dimension. Please see tf.pad for usage.

Parameters
  • padding (list of lists of 2 ints, or a Tensor of type int32.) -- The int32 values to pad.

  • mode (str) -- "CONSTANT", "REFLECT", or "SYMMETRIC" (case-insensitive).

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 224, 224, 3], name='input')
>>> padlayer = tl.layers.PadLayer([[0, 0], [3, 3], [3, 3], [0, 0]], "REFLECT", name='inpad')(net)
>>> print(padlayer)
>>> output shape : (None, 230, 230, 3)

1D Zero Padding Layer

class tensorlayer.layers.ZeroPad1d(padding, name=None)[source]

The ZeroPad1d class is a 1D padding layer for signal [batch, length, channel].

Parameters
  • padding (int, or tuple of 2 ints) --

    • If int, zeros to add at the beginning and end of the padding dimension (axis 1).

    • If tuple of 2 ints, zeros to add at the beginning and at the end of the padding dimension.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 100, 1], name='input')
>>> pad1d = tl.layers.ZeroPad1d(padding=(2, 3))(net)
>>> print(pad1d)
>>> output shape : (None, 105, 1)

2D Zero Padding Layer

class tensorlayer.layers.ZeroPad2d(padding, name=None)[source]

The ZeroPad2d class is a 2D padding layer for image [batch, height, width, channel].

Parameters
  • padding (int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints.) --

    • If int, the same symmetric padding is applied to width and height.

    • If tuple of 2 ints, interpreted as two different symmetric padding values for height and width as (symmetric_height_pad, symmetric_width_pad).

    • If tuple of 2 tuples of 2 ints, interpreted as ((top_pad, bottom_pad), (left_pad, right_pad)).

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 100, 100, 3], name='input')
>>> pad2d = tl.layers.ZeroPad2d(padding=((3, 3), (4, 4)))(net)
>>> print(pad2d)
>>> output shape : (None, 106, 108, 3)

3D Zero Padding Layer

class tensorlayer.layers.ZeroPad3d(padding, name=None)[source]

The ZeroPad3d class is a 3D padding layer for volume [batch, depth, height, width, channel].

Parameters
  • padding (int, or tuple of 3 ints, or tuple of 3 tuples of 2 ints.) --

    • If int, the same symmetric padding is applied to depth, height and width.

    • If tuple of 3 ints, interpreted as three different symmetric padding values as (symmetric_dim1_pad, symmetric_dim2_pad, symmetric_dim3_pad).

    • If tuple of 3 tuples of 2 ints, interpreted as ((left_dim1_pad, right_dim1_pad), (left_dim2_pad, right_dim2_pad), (left_dim3_pad, right_dim3_pad)).

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 100, 100, 100, 3], name='input')
>>> pad3d = tl.layers.ZeroPad3d(padding=((3, 3), (4, 4), (5, 5)))(net)
>>> print(pad3d)
>>> output shape : (None, 106, 108, 110, 3)

Pooling Layers

Pooling Layer (low-level API)

class tensorlayer.layers.PoolLayer(filter_size=(1, 2, 2, 1), strides=(1, 2, 2, 1), padding='SAME', pool=tensorflow.nn.max_pool, name=None)[source]

The PoolLayer class is a Pooling layer. You can choose tf.nn.max_pool and tf.nn.avg_pool for 2D input or tf.nn.max_pool3d and tf.nn.avg_pool3d for 3D input.

Parameters
  • filter_size (tuple of int) -- The size of the window for each dimension of the input tensor. Note that: len(filter_size) >= 4.

  • strides (tuple of int) -- The stride of the sliding window for each dimension of the input tensor. Note that: len(strides) >= 4.

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • pool (pooling function) -- One of tf.nn.max_pool, tf.nn.avg_pool, tf.nn.max_pool3d and tf.nn.avg_pool3d. See TensorFlow pooling APIs

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.PoolLayer()(net)
>>> output shape : [None, 25, 25, 32]

1D Max Pooling Layer

class tensorlayer.layers.MaxPool1d(filter_size=3, strides=2, padding='SAME', data_format='channels_last', dilation_rate=1, name=None)[source]

Max pooling for 1D signal.

Parameters
  • filter_size (int) -- Pooling window size.

  • strides (int) -- Stride of the pooling operation.

  • padding (str) -- The padding method: 'VALID' or 'SAME'.

  • data_format (str) -- One of channels_last (default, [batch, length, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 50, 32], name='input')
>>> net = tl.layers.MaxPool1d(filter_size=3, strides=2, padding='SAME', name='maxpool1d')(net)
>>> output shape : [None, 25, 32]

1D Mean Pooling Layer

class tensorlayer.layers.MeanPool1d(filter_size=3, strides=2, padding='SAME', data_format='channels_last', dilation_rate=1, name=None)[source]

Mean pooling for 1D signal.

Parameters
  • filter_size (int) -- Pooling window size.

  • strides (int) -- Strides of the pooling operation.

  • padding (str) -- The padding method: 'VALID' or 'SAME'.

  • data_format (str) -- One of channels_last (default, [batch, length, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 50, 32], name='input')
>>> net = tl.layers.MeanPool1d(filter_size=3, strides=2, padding='SAME')(net)
>>> output shape : [None, 25, 32]

2D Max Pooling Layer

class tensorlayer.layers.MaxPool2d(filter_size=(3, 3), strides=(2, 2), padding='SAME', data_format='channels_last', name=None)[source]

Max pooling for 2D image.

Parameters
  • filter_size (tuple of int) -- (height, width) for filter size.

  • strides (tuple of int) -- (height, width) for strides.

  • padding (str) -- The padding method: 'VALID' or 'SAME'.

  • data_format (str) -- One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.MaxPool2d(filter_size=(3, 3), strides=(2, 2), padding='SAME')(net)
>>> output shape : [None, 25, 25, 32]

2D Mean Pooling Layer

class tensorlayer.layers.MeanPool2d(filter_size=(3, 3), strides=(2, 2), padding='SAME', data_format='channels_last', name=None)[source]

Mean pooling for 2D image [batch, height, width, channel].

Parameters
  • filter_size (tuple of int) -- (height, width) for filter size.

  • strides (tuple of int) -- (height, width) for strides.

  • padding (str) -- The padding method: 'VALID' or 'SAME'.

  • data_format (str) -- One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 50, 50, 32], name='input')
>>> net = tl.layers.MeanPool2d(filter_size=(3, 3), strides=(2, 2), padding='SAME')(net)
>>> output shape : [None, 25, 25, 32]

3D Max Pooling Layer

class tensorlayer.layers.MaxPool3d(filter_size=(3, 3, 3), strides=(2, 2, 2), padding='VALID', data_format='channels_last', name=None)[source]

Max pooling for 3D volume.

Parameters
  • filter_size (tuple of int) -- Pooling window size.

  • strides (tuple of int) -- Strides of the pooling operation.

  • padding (str) -- The padding method: 'VALID' or 'SAME'.

  • data_format (str) -- One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Returns

A max pooling 3-D layer with an output rank of 5.

Return type

tf.Tensor

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 50, 50, 50, 32], name='input')
>>> net = tl.layers.MaxPool3d(filter_size=(3, 3, 3), strides=(2, 2, 2), padding='SAME')(net)
>>> output shape : [None, 25, 25, 25, 32]

3D Mean Pooling Layer

class tensorlayer.layers.MeanPool3d(filter_size=(3, 3, 3), strides=(2, 2, 2), padding='VALID', data_format='channels_last', name=None)[source]

Mean pooling for 3D volume.

Parameters
  • filter_size (tuple of int) -- Pooling window size.

  • strides (tuple of int) -- Strides of the pooling operation.

  • padding (str) -- The padding method: 'VALID' or 'SAME'.

  • data_format (str) -- One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Returns

A mean pooling 3-D layer with an output rank of 5.

Return type

tf.Tensor

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 50, 50, 50, 32], name='input')
>>> net = tl.layers.MeanPool3d(filter_size=(3, 3, 3), strides=(2, 2, 2), padding='SAME')(net)
>>> output shape : [None, 25, 25, 25, 32]

1D Global Max Pooling Layer

class tensorlayer.layers.GlobalMaxPool1d(data_format='channels_last', name=None)[source]

The GlobalMaxPool1d class is a 1D Global Max Pooling layer.

Parameters
  • data_format (str) -- One of channels_last (default, [batch, length, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 100, 30], name='input')
>>> net = tl.layers.GlobalMaxPool1d()(net)
>>> output shape : [None, 30]

1D Global Mean Pooling Layer

class tensorlayer.layers.GlobalMeanPool1d(data_format='channels_last', name=None)[source]

The GlobalMeanPool1d class is a 1D Global Mean Pooling layer.

Parameters
  • data_format (str) -- One of channels_last (default, [batch, length, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 100, 30], name='input')
>>> net = tl.layers.GlobalMeanPool1d()(net)
>>> output shape : [None, 30]

2D Global Max Pooling Layer

class tensorlayer.layers.GlobalMaxPool2d(data_format='channels_last', name=None)[source]

The GlobalMaxPool2d class is a 2D Global Max Pooling layer.

Parameters
  • data_format (str) -- One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 100, 100, 30], name='input')
>>> net = tl.layers.GlobalMaxPool2d()(net)
>>> output shape : [None, 30]

2D Global Mean Pooling Layer

class tensorlayer.layers.GlobalMeanPool2d(data_format='channels_last', name=None)[source]

The GlobalMeanPool2d class is a 2D Global Mean Pooling layer.

Parameters
  • data_format (str) -- One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 100, 100, 30], name='input')
>>> net = tl.layers.GlobalMeanPool2d()(net)
>>> output shape : [None, 30]

3D Global Max Pooling Layer

class tensorlayer.layers.GlobalMaxPool3d(data_format='channels_last', name=None)[source]

The GlobalMaxPool3d class is a 3D Global Max Pooling layer.

Parameters
  • data_format (str) -- One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 100, 100, 100, 30], name='input')
>>> net = tl.layers.GlobalMaxPool3d()(net)
>>> output shape : [None, 30]

3D Global Mean Pooling Layer

class tensorlayer.layers.GlobalMeanPool3d(data_format='channels_last', name=None)[source]

The GlobalMeanPool3d class is a 3D Global Mean Pooling layer.

Parameters
  • data_format (str) -- One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 100, 100, 100, 30], name='input')
>>> net = tl.layers.GlobalMeanPool3d()(net)
>>> output shape : [None, 30]

2D Corner Pooling Layer

class tensorlayer.layers.CornerPool2d(mode='TopLeft', name=None)[source]

Corner pooling for 2D image [batch, height, width, channel], see here.

Parameters
  • mode (str) -- 'TopLeft' for the top left corner, 'BottomRight' for the bottom right corner.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([None, 32, 32, 8], name='input')
>>> net = tl.layers.CornerPool2d(mode='TopLeft', name='cornerpool2d')(net)
>>> output shape : [None, 32, 32, 8]

Quantized Layers

These layers are currently implemented with matrix operations; bit-count operations will be provided in the future for acceleration.

Sign

class tensorlayer.layers.Sign(name=None)[source]

The Sign class quantizes the layer outputs to -1 or 1 during inference.

Parameters

name (a str) -- A unique layer name.
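
Examples

A minimal usage sketch (added for illustration; the input shape is an assumption):

>>> net = tl.layers.Input([8, 10], name='input')
>>> net = tl.layers.Dense(n_units=5)(net)
>>> net = tl.layers.Sign()(net)  # outputs are quantized to -1 or 1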

Scale

class tensorlayer.layers.Scale(init_scale=0.05, name='scale')[source]

The Scale class multiplies the layer outputs by a trainable scale value. It is usually used on the output of a binary network.

Parameters
  • init_scale (float) -- The initial value for the scale factor.

  • name (a str) -- A unique layer name.

Examples

>>> inputs = tl.layers.Input([8, 3])
>>> dense = tl.layers.Dense(n_units=10)(inputs)
>>> outputs = tl.layers.Scale(init_scale=0.5)(dense)
>>> model = tl.models.Model(inputs=inputs, outputs=[dense, outputs])
>>> data = tf.random.normal([8, 3])  # hypothetical input, added so the example runs
>>> dense_out, scale_out = model(data, is_train=True)

Binary Dense Layer

class tensorlayer.layers.BinaryDense(n_units=100, act=None, use_gemm=False, W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

The BinaryDense class is a binary fully connected layer, whose weights are either -1 or 1 during inference.

Note that the bias vector is not binarized.

Parameters
  • n_units (int) -- The number of units of this layer.

  • act (activation function) -- The activation function of this layer, usually set to tl.act.sign, or apply Sign after BatchNorm.

  • use_gemm (boolean) -- If True, use gemm instead of tf.matmul for inference. (TODO).

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • name (None or str) -- A unique layer name.
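
Examples

A minimal usage sketch (added for illustration, modeled on the BinaryConv2d example below; the shapes are assumptions):

>>> net = tl.layers.Input([8, 784], name='input')
>>> binarydense = tl.layers.BinaryDense(n_units=100, act=tf.nn.relu, in_channels=784, name='binarydense')(net)
>>> output shape : (8, 100)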

Binary (De)Convolutional Layer

BinaryConv2d

class tensorlayer.layers.BinaryConv2d(n_filter=32, filter_size=(3, 3), strides=(1, 1), act=None, padding='SAME', use_gemm=False, data_format='channels_last', dilation_rate=(1, 1), W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

The BinaryConv2d class is a 2D binary CNN layer, whose weights are either -1 or 1 during inference.

Note that the bias vector is not binarized.

Parameters
  • n_filter (int) -- The number of filters.

  • filter_size (tuple of int) -- The filter size (height, width).

  • strides (tuple of int) -- The sliding window strides of corresponding input dimensions. It must be in the same order as the shape parameter.

  • act (activation function) -- The activation function of this layer.

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • use_gemm (boolean) -- If True, use gemm instead of tf.matmul for inference. TODO: support gemm

  • data_format (str) -- "channels_last" (NHWC, default) or "channels_first" (NCHW).

  • dilation_rate (tuple of int) -- Specifying the dilation rate to use for dilated convolution.

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 100, 100, 32], name='input')
>>> binaryconv2d = tl.layers.BinaryConv2d(
...     n_filter=64, filter_size=(3, 3), strides=(2, 2), act=tf.nn.relu, in_channels=32, name='binaryconv2d'
... )(net)
>>> print(binaryconv2d)
>>> output shape : (8, 50, 50, 64)

Ternary Dense Layer

TernaryDense

class tensorlayer.layers.TernaryDense(n_units=100, act=None, use_gemm=False, W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

The TernaryDense class is a ternary fully connected layer, whose weights are -1, 0 or 1 during inference.

Note that the bias vector is not ternarized.

Parameters
  • n_units (int) -- The number of units of this layer.

  • act (activation function) -- The activation function of this layer, usually set to tl.act.sign, or apply Sign after BatchNorm.

  • use_gemm (boolean) -- If True, use gemm instead of tf.matmul for inference. (TODO).

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of channels of the previous layer. If None, it will be automatically detected when the layer is forwarded for the first time.

  • name (None or str) -- A unique layer name.
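
Examples

A minimal usage sketch (added for illustration, analogous to the BinaryDense example above; the shapes are assumptions):

>>> net = tl.layers.Input([8, 784], name='input')
>>> ternarydense = tl.layers.TernaryDense(n_units=100, act=tf.nn.relu, in_channels=784, name='ternarydense')(net)
>>> output shape : (8, 100)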

Ternary Convolutional Layer

TernaryConv2d

class tensorlayer.layers.TernaryConv2d(n_filter=32, filter_size=(3, 3), strides=(1, 1), act=None, padding='SAME', use_gemm=False, data_format='channels_last', dilation_rate=(1, 1), W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

The TernaryConv2d class is a 2D ternary CNN layer, whose weights are -1, 0 or 1 during inference.

Note that the bias vector is not ternarized.

Parameters
  • n_filter (int) -- The number of filters.

  • filter_size (tuple of int) -- The filter size (height, width).

  • strides (tuple of int) -- The sliding window strides of corresponding input dimensions. It must be in the same order as the shape parameter.

  • act (activation function) -- The activation function of this layer.

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • use_gemm (boolean) -- If True, use gemm instead of tf.matmul for inference. TODO: support gemm

  • data_format (str) -- "channels_last" (NHWC, default) or "channels_first" (NCHW).

  • dilation_rate (tuple of int) -- Specifying the dilation rate to use for dilated convolution.

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 12, 12, 32], name='input')
>>> ternaryconv2d = tl.layers.TernaryConv2d(
...     n_filter=64, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', name='ternaryconv2d'
... )(net)
>>> print(ternaryconv2d)
>>> output shape : (8, 12, 12, 64)

DoReFa Convolutional Layer

DorefaConv2d

class tensorlayer.layers.DorefaConv2d(bitW=1, bitA=3, n_filter=32, filter_size=(3, 3), strides=(1, 1), act=None, padding='SAME', use_gemm=False, data_format='channels_last', dilation_rate=(1, 1), W_init=<tensorlayer.initializers.TruncatedNormal object>, b_init=<tensorlayer.initializers.Constant object>, in_channels=None, name=None)[source]

The DorefaConv2d class is a 2D quantized convolutional layer, whose weights are quantized to 'bitW' bits and whose input (the output of the previous layer) is quantized to 'bitA' bits during inference.

Note that the bias vector is not binarized.

Parameters
  • bitW (int) -- The number of bits of this layer's parameters.

  • bitA (int) -- The number of bits of the output of the previous layer.

  • n_filter (int) -- The number of filters.

  • filter_size (tuple of int) -- The filter size (height, width).

  • strides (tuple of int) -- The sliding window strides of corresponding input dimensions. It must be in the same order as the shape parameter.

  • act (activation function) -- The activation function of this layer.

  • padding (str) -- The padding algorithm type: "SAME" or "VALID".

  • use_gemm (boolean) -- If True, use gemm instead of tf.matmul for inferencing. TODO: support gemm

  • data_format (str) -- "channels_last" (NHWC, default) or "channels_first" (NCHW).

  • dilation_rate (tuple of int) -- Specifying the dilation rate to use for dilated convolution.

  • W_init (initializer) -- The initializer for the weight matrix.

  • b_init (initializer or None) -- The initializer for the bias vector. If None, skip biases.

  • in_channels (int) -- The number of in channels.

  • name (None or str) -- A unique layer name.

Examples

With TensorLayer

>>> net = tl.layers.Input([8, 12, 12, 32], name='input')
>>> dorefaconv2d = tl.layers.DorefaConv2d(
...     n_filter=32, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', name='dorefaconv2d'
... )(net)
>>> print(dorefaconv2d)
>>> output shape : (8, 12, 12, 32)

Quantized Dense Layer

QuantizedDense

QuantizedDenseWithBN (Dense + Batch Normalization)

Quantized Convolutional Layer

QuantizedConv2d

QuantizedConv2dWithBN

Recurrent Layers

Basic Recurrent Layers

RNN Layer

class tensorlayer.layers.RNN(cell, return_last_output=False, return_seq_2d=False, return_last_state=True, in_channels=None, name=None)[source]

The RNN class is a fixed-length recurrent layer for implementing simple RNN, LSTM, GRU, etc.

Parameters
  • cell (TensorFlow cell function) --

    A RNN cell implemented by tf.keras
    • E.g. tf.keras.layers.SimpleRNNCell, tf.keras.layers.LSTMCell, tf.keras.layers.GRUCell

    • Note TF2.0+, TF1.0+ and TF1.0- are different

  • return_last_output (boolean) --

    Whether return last output or all outputs in a sequence.
    • If True, return the last output, "Sequence input and single output"

    • If False, return all outputs, "Synced sequence input and output"

    • In other words, if you want to stack more RNNs on this layer, set it to False.

    In a dynamic model, return_last_output can be updated when it is called in customised forward(). By default, False.

  • return_seq_2d (boolean) --

    Only consider this argument when return_last_output is False
    • If True, return 2D Tensor [batch_size * n_steps, n_hidden], for stacking Dense layer after it.

    • If False, return 3D Tensor [batch_size, n_steps, n_hidden], for stacking multiple RNN after it.

    In a dynamic model, return_seq_2d can be updated when it is called in customised forward(). By default, False.

  • return_last_state (boolean) --

    Whether to return the last state of the RNN cell. The state is a list of Tensor. For simple RNN and GRU, last_state = [last_output]; for LSTM, last_state = [last_output, last_cell_state].

    • If True, the layer will return outputs and the final state of the cell.

    • If False, the layer will return outputs only.

    In a dynamic model, return_last_state can be updated when it is called in customised forward(). By default, True.

  • in_channels (int) -- Optional, the number of channels of the previous layer which is normally the size of embedding. If given, the layer will be built when init. If None, it will be automatically detected when the layer is forwarded for the first time.

  • name (str) -- A unique layer name.

Examples

For synced sequence input and output, see the PTB example.

A simple regression model below.

>>> inputs = tl.layers.Input([batch_size, num_steps, embedding_size])
>>> rnn_out, lstm_state = tl.layers.RNN(
>>>     cell=tf.keras.layers.LSTMCell(units=hidden_size, dropout=0.1),
>>>     in_channels=embedding_size,
>>>     return_last_output=True, return_last_state=True, name='lstmrnn'
>>> )(inputs)
>>> outputs = tl.layers.Dense(n_units=1)(rnn_out)
>>> rnn_model = tl.models.Model(inputs=inputs, outputs=[outputs, lstm_state[0], lstm_state[1]], name='rnn_model')
>>> # If LSTMCell is applied, lstm_state is [h, c], where h is the hidden state and c the cell state of the LSTM.

A stacked RNN model.

>>> inputs = tl.layers.Input([batch_size, num_steps, embedding_size])
>>> rnn_out1 = tl.layers.RNN(
>>>     cell=tf.keras.layers.SimpleRNNCell(units=hidden_size, dropout=0.1),
>>>     return_last_output=False, return_seq_2d=False, return_last_state=False
>>> )(inputs)
>>> rnn_out2 = tl.layers.RNN(
>>>     cell=tf.keras.layers.SimpleRNNCell(units=hidden_size, dropout=0.1),
>>>     return_last_output=True, return_last_state=False
>>> )(rnn_out1)
>>> outputs = tl.layers.Dense(n_units=1)(rnn_out2)
>>> rnn_model = tl.models.Model(inputs=inputs, outputs=outputs)

Notes

Input dimension should be rank 3 : [batch_size, n_steps, n_features]. If not, please see the Reshape layer.

Basic RNN Layer (using simple recurrent cells)

GRU-based RNN Layer (using GRU cells)

LSTM-based RNN Layer (using LSTM cells)

Bidirectional Layer

class tensorlayer.layers.BiRNN(fw_cell, bw_cell, return_seq_2d=False, return_last_state=False, in_channels=None, name=None)[source]

The BiRNN class is a fixed-length bidirectional recurrent layer.

Parameters
  • fw_cell (TensorFlow cell function for forward direction) -- A RNN cell implemented by tf.keras, e.g. tf.keras.layers.SimpleRNNCell, tf.keras.layers.LSTMCell, tf.keras.layers.GRUCell. Note TF2.0+, TF1.0+ and TF1.0- are different.

  • bw_cell (TensorFlow cell function for backward direction, similar to fw_cell) --

  • return_seq_2d (boolean) -- If True, return 2D Tensor [batch_size * n_steps, n_hidden], for stacking Dense layer after it. If False, return 3D Tensor [batch_size, n_steps, n_hidden], for stacking multiple RNN after it. In a dynamic model, return_seq_2d can be updated when it is called in customised forward(). By default, False.

  • return_last_state (boolean) --

    Whether to return the last state of the two cells. The state is a list of Tensor.
    • If True, the layer will return outputs, the final state of fw_cell and the final state of bw_cell.

    • If False, the layer will return outputs only.

    In a dynamic model, return_last_state can be updated when it is called in customised forward(). By default, False.

  • in_channels (int) -- Optional, the number of channels of the previous layer which is normally the size of embedding. If given, the layer will be built when init. If None, it will be automatically detected when the layer is forwarded for the first time.

  • name (str) -- A unique layer name.

Examples

A simple regression model below.

>>> inputs = tl.layers.Input([batch_size, num_steps, embedding_size])
>>> # the fw_cell and bw_cell can be different
>>> rnnlayer = tl.layers.BiRNN(
>>>     fw_cell=tf.keras.layers.SimpleRNNCell(units=hidden_size, dropout=0.1),
>>>     bw_cell=tf.keras.layers.SimpleRNNCell(units=hidden_size + 1, dropout=0.1),
>>>     return_seq_2d=True, return_last_state=True
>>> )
>>> # if return_last_state=True, the final state of the two cells will be returned together with the outputs
>>> # if return_last_state=False, only the outputs will be returned
>>> rnn_out, rnn_fw_state, rnn_bw_state = rnnlayer(inputs)
>>> # if the BiRNN is followed by a Dense, return_seq_2d should be True.
>>> # if the BiRNN is followed by other RNN, return_seq_2d can be False.
>>> dense = tl.layers.Dense(n_units=1)(rnn_out)
>>> outputs = tl.layers.Reshape([-1, num_steps])(dense)
>>> rnn_model = tl.models.Model(inputs=inputs, outputs=[outputs, rnn_out, rnn_fw_state[0], rnn_bw_state[0]])

A stacked BiRNN model.

>>> inputs = tl.layers.Input([batch_size, num_steps, embedding_size])
>>> rnn_out1 = tl.layers.BiRNN(
>>>     fw_cell=tf.keras.layers.SimpleRNNCell(units=hidden_size, dropout=0.1),
>>>     bw_cell=tf.keras.layers.SimpleRNNCell(units=hidden_size + 1, dropout=0.1),
>>>     return_seq_2d=False, return_last_state=False
>>> )(inputs)
>>> rnn_out2 = tl.layers.BiRNN(
>>>     fw_cell=tf.keras.layers.SimpleRNNCell(units=hidden_size, dropout=0.1),
>>>     bw_cell=tf.keras.layers.SimpleRNNCell(units=hidden_size + 1, dropout=0.1),
>>>     return_seq_2d=True, return_last_state=False
>>> )(rnn_out1)
>>> dense = tl.layers.Dense(n_units=1)(rnn_out2)
>>> outputs = tl.layers.Reshape([-1, num_steps])(dense)
>>> rnn_model = tl.models.Model(inputs=inputs, outputs=outputs)

Notes

Input dimension should be rank 3 : [batch_size, n_steps, n_features]. If not, please see layer Reshape.

Functions for Computing Sequence Length

Method 1

tensorlayer.layers.retrieve_seq_length_op(data)[source]

An op to compute the length of a sequence from an input of shape [batch_size, n_step(max), n_features]. It can be used when the padding features (on the right hand side) are all zeros.

Parameters

data (tensor) -- [batch_size, n_step(max), n_features] with zero padding on right hand side.

Examples

Single feature

>>> data = [[[1],[2],[0],[0],[0]],
>>>         [[1],[2],[3],[0],[0]],
>>>         [[1],[2],[6],[1],[0]]]
>>> data = tf.convert_to_tensor(data, dtype=tf.float32)
>>> length = tl.layers.retrieve_seq_length_op(data)
[2 3 4]

Multiple features

>>> data = [[[1,2],[2,2],[1,2],[1,2],[0,0]],
>>>          [[2,3],[2,4],[3,2],[0,0],[0,0]],
>>>          [[3,3],[2,2],[5,3],[1,2],[0,0]]]
>>> data = tf.convert_to_tensor(data, dtype=tf.float32)
>>> length = tl.layers.retrieve_seq_length_op(data)
[4 3 4]

References

Borrowed from TFLearn.

Method 2

tensorlayer.layers.retrieve_seq_length_op2(data)[source]

An op to compute the length of a sequence from an input of shape [batch_size, n_step(max)]. It can be used when the padding features (on the right hand side) are all zeros.

Parameters

data (tensor) -- [batch_size, n_step(max)] with zero padding on right hand side.

Examples

>>> data = [[1,2,0,0,0],
>>>         [1,2,3,0,0],
>>>         [1,2,6,1,0]]
>>> data = tf.convert_to_tensor(data, dtype=tf.float32)
>>> length = tl.layers.retrieve_seq_length_op2(data)
[2 3 4]

Method 3

tensorlayer.layers.retrieve_seq_length_op3(data, pad_val=0)[source]

An op to compute the length of a sequence; the data shape can be [batch_size, n_step(max)] or [batch_size, n_step(max), n_features].

If the data has type of tf.string and pad_val is assigned as empty string (''), this op will compute the length of the string sequence.

Parameters
  • data (tensor) -- [batch_size, n_step(max)] or [batch_size, n_step(max), n_features] with zero padding on the right hand side.

  • pad_val -- By default 0. If the data is tf.string, please assign this as empty string ('')

Examples

>>> data = [[[1],[2],[0],[0],[0]],
>>>         [[1],[2],[3],[0],[0]],
>>>         [[1],[2],[6],[1],[0]]]
>>> data = tf.convert_to_tensor(data, dtype=tf.float32)
>>> length = tl.layers.retrieve_seq_length_op3(data)
[2, 3, 4]
>>> data = [[[1,2],[2,2],[1,2],[1,2],[0,0]],
>>>         [[2,3],[2,4],[3,2],[0,0],[0,0]],
>>>         [[3,3],[2,2],[5,3],[1,2],[0,0]]]
>>> data = tf.convert_to_tensor(data, dtype=tf.float32)
>>> length = tl.layers.retrieve_seq_length_op3(data)
[4, 3, 4]
>>> data = [[1,2,0,0,0],
>>>         [1,2,3,0,0],
>>>         [1,2,6,1,0]]
>>> data = tf.convert_to_tensor(data, dtype=tf.float32)
>>> length = tl.layers.retrieve_seq_length_op3(data)
[2, 3, 4]
>>> data = [['hello','world','','',''],
>>>         ['hello','world','tensorlayer','',''],
>>>         ['hello','world','tensorlayer','2.0','']]
>>> data = tf.convert_to_tensor(data, dtype=tf.string)
>>> length = tl.layers.retrieve_seq_length_op3(data, pad_val='')
[2, 3, 4]

Method 4

Shape Modification Layers

Flatten Layer

class tensorlayer.layers.Flatten(name=None)[source]

A layer that reshapes high-dimension input into a vector.

We often apply Dense, RNN, Concat, etc. on top of a flatten layer. [batch_size, mask_row, mask_col, n_mask] ---> [batch_size, mask_row * mask_col * n_mask]

Parameters

name (None or str) -- A unique layer name.

Examples

>>> x = tl.layers.Input([8, 4, 3], name='input')
>>> y = tl.layers.Flatten(name='flatten')(x)
[8, 12]

Reshape Layer

class tensorlayer.layers.Reshape(shape, name=None)[source]

A layer that reshapes a given tensor.

Parameters
  • shape (tuple of int) -- The output shape, see tf.reshape.

  • name (str) -- A unique layer name.

Examples

>>> x = tl.layers.Input([8, 4, 3], name='input')
>>> y = tl.layers.Reshape(shape=[-1, 12], name='reshape')(x)
(8, 12)

Transpose Layer

class tensorlayer.layers.Transpose(perm=None, conjugate=False, name=None)[source]

A layer that transposes the dimension of a tensor.

See tf.transpose() .

Parameters
  • perm (list of int) -- The permutation of the dimensions, similar with numpy.transpose. If None, it is set to (n-1...0), where n is the rank of the input tensor.

  • conjugate (bool) -- By default False. If True, returns the complex conjugate of complex numbers (and transposes them). For example, [[1+1j, 2+2j]] --> [[1-1j], [2-2j]].

  • name (str) -- A unique layer name.

Examples

>>> x = tl.layers.Input([8, 4, 3], name='input')
>>> y = tl.layers.Transpose(perm=[0, 2, 1], conjugate=False, name='trans')(x)
(8, 3, 4)

Shuffle Layer

class tensorlayer.layers.Shuffle(group, name=None)[source]

A layer that shuffles the channels of a 2D image [batch, height, width, channel], see here.

Parameters
  • group (int) -- The number of groups.

  • name (str) -- A unique layer name.

Examples

>>> x = tl.layers.Input([1, 16, 16, 8], name='input')
>>> y = tl.layers.Shuffle(group=2, name='shuffle')(x)
(1, 16, 16, 8)

Spatial Transformer Layers

2D Affine Transformation Layer

class tensorlayer.layers.SpatialTransformer2dAffine(in_channels=None, out_size=(40, 40), name=None)[source]

The SpatialTransformer2dAffine class is a 2D Spatial Transformer Layer for 2D affine transformations.

Parameters
  • in_channels (int or None) -- The number of in channels.

  • out_size (tuple of int or None) -- The size of the output of the network (height, width); the feature maps will be resized by this.

  • name (str) -- A unique layer name.
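
Examples

An illustrative sketch of the usual spatial-transformer wiring (the localisation network below, and the call convention of passing a pair of localisation features and the input feature map, are assumptions based on the TensorLayer STN tutorials, not part of this entry):

>>> ni = tl.layers.Input([None, 40, 40, 1], name='input')
>>> # localisation features; the layer itself maps them to the 6 affine parameters
>>> nn = tl.layers.Flatten()(ni)
>>> nn = tl.layers.Dense(n_units=20, act=tf.nn.tanh)(nn)
>>> out = tl.layers.SpatialTransformer2dAffine(in_channels=20, out_size=(40, 40))((nn, ni))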

2D Affine Transformation Function

tensorlayer.layers.transformer(U, theta, out_size, name='SpatialTransformer2dAffine')[source]

Spatial Transformer Layer for 2D affine transformations; see the SpatialTransformer2dAffine class.

Parameters
  • U (list of float) -- The output of a convolutional net should have the shape [num_batch, height, width, num_channels].

  • theta (float) -- The output of the localisation network should be [num_batch, 6], value range should be [0, 1] (via tanh).

  • out_size (tuple of int) -- The size of the output of the network (height, width)

  • name (str) -- Optional function name

Returns

The transformed tensor.

Return type

Tensor

Notes

To initialize the network to the identity transform, initialize ``theta`` as follows.

>>> import numpy as np
>>> import tensorflow as tf
>>> # initialize ``theta`` to the identity transform
>>> identity = np.array([[1., 0., 0.], [0., 1., 0.]])
>>> identity = identity.flatten()
>>> theta = tf.Variable(initial_value=identity)
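
A hypothetical call (U and theta as described in the parameters above; the out_size value is an assumption):

>>> output = tl.layers.transformer(U, theta, out_size=(40, 40))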

Batch 2D Affine Transformation Function

tensorlayer.layers.batch_transformer(U, thetas, out_size, name='BatchSpatialTransformer2dAffine')[source]

Batch Spatial Transformer function for 2D affine transformations.

Parameters
  • U (list of float) -- Tensor of inputs [batch, height, width, num_channels].

  • thetas (list of float) -- A set of transformations for each input [batch, num_transforms, 6].

  • out_size (list of int) -- The size of the output [out_height, out_width].

  • name (str) -- Optional function name.

Returns

Tensor of size [batch * num_transforms, out_height, out_width, num_channels]

Return type

Tensor
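
A hypothetical call (U and thetas are assumed to be precomputed tensors with the shapes described above):

>>> # U : [batch, height, width, num_channels], thetas : [batch, num_transforms, 6]
>>> output = tl.layers.batch_transformer(U, thetas, out_size=[40, 40])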

Stacking Layers

Stack Layer

class tensorlayer.layers.Stack(axis=1, name=None)[source]

The Stack class is a layer for stacking a list of rank-R tensors into one rank-(R+1) tensor, see tf.stack().

Parameters
  • axis (int) -- New dimension along which to stack.

  • name (str) -- A unique layer name.

Examples

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> ni = tl.layers.Input([None, 784], name='input')
>>> net1 = tl.layers.Dense(10, name='dense1')(ni)
>>> net2 = tl.layers.Dense(10, name='dense2')(ni)
>>> net3 = tl.layers.Dense(10, name='dense3')(ni)
>>> net = tl.layers.Stack(axis=1, name='stack')([net1, net2, net3])
(?, 3, 10)

UnStack Layer

class tensorlayer.layers.UnStack(num=None, axis=0, name=None)[source]

The UnStack class is a layer for unstacking the given dimension of a rank-R tensor into rank-(R-1) tensors, see tf.unstack().

Parameters
  • num (int or None) -- The length of the dimension axis. Automatically inferred if None (the default).

  • axis (int) -- The dimension along which to unstack.

  • name (str) -- A unique layer name.

Returns

The list of layer objects unstacked from the input.

Return type

list of Layer

Examples

>>> ni = Input([4, 10], name='input')
>>> nn = Dense(n_units=5)(ni)
>>> nn = UnStack(axis=1)(nn)  # unstack in channel axis
>>> len(nn)  # 5
>>> nn[0].shape  # (4,)

Helper Functions

Flatten Function

tensorlayer.layers.flatten_reshape(variable, name='flatten')[source]

Reshapes a high-dimension input into a vector:

[batch_size, mask_row, mask_col, n_mask] ---> [batch_size, mask_row x mask_col x n_mask]

Parameters
  • variable (TensorFlow variable or tensor) -- The variable or tensor to be flattened.

  • name (str) -- A unique layer name.

Returns

Flattened Tensor

Return type

Tensor
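
Examples

A minimal usage sketch (added for illustration; the input tensor is an assumption):

>>> x = tf.zeros([8, 4, 4, 3])
>>> y = tl.layers.flatten_reshape(x, name='flatten')
>>> output shape : (8, 48)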

Initialize RNN State

tensorlayer.layers.initialize_rnn_state(state, feed_dict=None)[source]

Returns the initialized RNN state. The inputs are an LSTMStateTuple or a State of RNNCells, and an optional feed_dict.

Parameters
  • state (RNN state) -- The TensorFlow RNN state.

  • feed_dict (dictionary) -- Initial RNN state; if None, returns the zero state.

Returns

The TensorFlow RNN state.

Return type

RNN state
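
A hypothetical TF1-style sketch (this helper targets graph-mode RNN cells; the cell construction below is an assumption, not part of this entry):

>>> cell = tf.compat.v1.nn.rnn_cell.LSTMCell(num_units=64)
>>> state = cell.zero_state(batch_size=32, dtype=tf.float32)
>>> init_state = tl.layers.initialize_rnn_state(state)  # feed_dict=None returns the zero state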

Remove Repeated Items in a List

tensorlayer.layers.list_remove_repeat(x)[source]

Remove the repeated items in a list and return the processed list. You may need it to create merged layers such as Concat and Elementwise.

Parameters

x (list) -- Input list.

Returns

A list with its repeated items removed

Return type

list

Examples

>>> l = [2, 3, 4, 2, 3]
>>> l = list_remove_repeat(l)
[2, 3, 4]