API - Natural Language Processing

Natural language processing and word embeddings.

generate_skip_gram_batch(data, batch_size, ...)

Generate a training batch for the Skip-Gram model.

sample([a, temperature])

Sample an index from a probability array.

sample_top([a, top_k])

Sample from top_k probabilities.

SimpleVocabulary(vocab, unk_id)

Simple vocabulary wrapper, see create_vocab().

Vocabulary(vocab_file[, start_word, ...])

Create a Vocabulary class from a given vocabulary file, providing word-to-ID and ID-to-word conversion.

process_sentence(sentence[, start_word, ...])

Separate a sentence string into a list of string words, add start_word and end_word, see create_vocab() and tutorial_tfrecord3.py.

create_vocab(sentences, word_counts_output_file)

Creates the vocabulary of word to word_id.

simple_read_words([filename])

Read context from file without any preprocessing.

read_words([filename, replace])

Read list format context from a file.

read_analogies_file([eval_file, word2id])

Read through an analogy question file and return the questions in ID format.

build_vocab(data)

Build vocabulary.

build_reverse_dictionary(word_to_id)

Build a reverse dictionary that maps IDs to words.

build_words_dataset([words, ...])

Build the words dictionary and replace rare words with 'UNK' token.

save_vocab([count, name])

Save the vocabulary to a file so the model can be reloaded.

words_to_word_ids([data, word_to_id, unk_key])

Convert a list of strings (words) to IDs.

word_ids_to_words(data, id_to_word)

Convert a list of integers to strings (words).

basic_tokenizer(sentence[, _WORD_SPLIT])

Very basic tokenizer: split the sentence into a list of tokens.

create_vocabulary(vocabulary_path, ...[, ...])

Create vocabulary file (if it does not exist yet) from data file.

initialize_vocabulary(vocabulary_path)

Initialize vocabulary from file, return the word_to_id (dictionary) and id_to_word (list).

sentence_to_token_ids(sentence, vocabulary)

Convert a string to list of integers representing token-ids.

data_to_token_ids(data_path, target_path, ...)

Tokenize data file and turn into token-ids using given vocabulary file.

moses_multi_bleu(hypotheses, references[, ...])

Calculate the BLEU score for hypotheses and references using the MOSES multi-bleu.perl script.

Iteration function for training the embedding matrix

tensorlayer.nlp.generate_skip_gram_batch(data, batch_size, num_skips, skip_window, data_index=0)

Generate a training batch for the Skip-Gram model.

See Word2Vec example.

Parameters
  • data (list of data) -- To present context, usually a list of integers.

  • batch_size (int) -- Batch size to return.

  • num_skips (int) -- How many times to reuse an input to generate a label.

  • skip_window (int) -- How many words to consider left and right.

  • data_index (int) -- Index of the current position in the context. This function uses data_index to keep track of the position instead of yielding like tl.iterate; see the usage sketch after the example below.

Returns

  • batch (list of data) -- Inputs.

  • labels (list of data) -- Labels

  • data_index (int) -- Index of the context location.

Examples

Setting num_skips=2, skip_window=1 uses the words immediately to the left and right. Likewise, num_skips=4, skip_window=2 uses the nearest 4 words.

>>> data = [1,2,3,4,5,6,7,8,9,10,11]
>>> batch, labels, data_index = tl.nlp.generate_skip_gram_batch(data=data, batch_size=8, num_skips=2, skip_window=1, data_index=0)
>>> print(batch)
[2 2 3 3 4 4 5 5]
>>> print(labels)
[[3]
[1]
[4]
[2]
[5]
[3]
[4]
[6]]
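
A minimal usage sketch (an addition to the original example): pass the returned data_index back into the next call so that consecutive batches keep walking through the corpus, since the function tracks its position through this argument rather than through a generator.

>>> data = [1,2,3,4,5,6,7,8,9,10,11]
>>> data_index = 0
>>> for step in range(3):
>>>     batch, labels, data_index = tl.nlp.generate_skip_gram_batch(
>>>         data=data, batch_size=8, num_skips=2, skip_window=1, data_index=data_index)
>>>     # feed (batch, labels) to the Word2Vec model here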

Sampling methods

Simple sampling

tensorlayer.nlp.sample(a=None, temperature=1.0)

Sample an index from a probability array.

Parameters
  • a (list of float) -- List of probabilities.

  • temperature (float or None) --

    The higher the temperature, the more uniform the distribution. When a = [0.1, 0.2, 0.7]:
    • temperature = 0.7, the distribution becomes sharper: [0.05048273, 0.13588945, 0.81362782]

    • temperature = 1.0, the distribution stays the same: [0.1, 0.2, 0.7]

    • temperature = 1.5, the distribution becomes flatter (more uniform): [0.16008435, 0.25411807, 0.58579758]

    • If None, it will be np.argmax(a)

Notes

  • No matter what the temperature and input list are, the output probabilities always sum to one. Even if the input list is [1, 100, 200], the resulting probabilities still sum to one.

  • For a large vocabulary, choose a higher temperature or use tl.nlp.sample_top to avoid errors.
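
The distributions quoted above can be reproduced with a short NumPy sketch of the temperature rescaling (an illustration only, not the library implementation):

>>> import numpy as np
>>> def rescale(a, temperature):
>>>     # p_i ** (1/T), renormalized; smaller T sharpens, larger T flattens
>>>     a = np.asarray(a, dtype=np.float64)
>>>     p = np.exp(np.log(a) / temperature)
>>>     return p / p.sum()
>>> rescale([0.1, 0.2, 0.7], 0.7)   # ~[0.0505, 0.1359, 0.8136], sharper
>>> rescale([0.1, 0.2, 0.7], 1.5)   # ~[0.1601, 0.2541, 0.5858], flatter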

Sample from the top k

tensorlayer.nlp.sample_top(a=None, top_k=10)

Sample from top_k probabilities.

Parameters
  • a (list of float) -- List of probabilities.

  • top_k (int) -- Number of candidates to be considered.
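
No example is given above, so here is a NumPy sketch of the idea (an illustration only, not the library implementation): keep the top_k largest probabilities, renormalize them so they sum to one, and sample an index from that reduced distribution.

>>> import numpy as np
>>> def sample_top_sketch(a, top_k=10):
>>>     a = np.asarray(a, dtype=np.float64)
>>>     idx = np.argsort(a)[-top_k:]      # indices of the top_k largest probabilities
>>>     probs = a[idx] / a[idx].sum()     # renormalize the kept probabilities
>>>     return np.random.choice(idx, p=probs)
>>> sample_top_sketch([0.05, 0.4, 0.05, 0.5], top_k=2)   # returns 1 or 3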

Vector representation of words

Vocabulary classes

Simple vocabulary class

class tensorlayer.nlp.SimpleVocabulary(vocab, unk_id)

Simple vocabulary wrapper, see create_vocab().

Parameters
  • vocab (dictionary) -- A dictionary that maps word to ID.

  • unk_id (int) -- The ID for 'unknown' word.
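
A small usage sketch. The word_to_id() lookup shown here follows the TensorLayer implementation of this wrapper; treat the method name as an assumption if in doubt.

>>> vocab = {'the': 0, 'cat': 1, '<UNK>': 2}
>>> simple_vocab = tl.nlp.SimpleVocabulary(vocab, unk_id=2)
>>> simple_vocab.word_to_id('cat')   # 1
>>> simple_vocab.word_to_id('dog')   # unknown word, falls back to unk_id = 2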

Vocabulary class

class tensorlayer.nlp.Vocabulary(vocab_file, start_word='<S>', end_word='</S>', unk_word='<UNK>', pad_word='<PAD>')

Create a Vocabulary class from a given vocabulary file, providing word-to-ID and ID-to-word conversion. See create_vocab() and tutorial_tfrecord3.py.

Parameters
  • vocab_file (str) -- The file that contains the vocabulary (can be created via tl.nlp.create_vocab), where the words are the first whitespace-separated token on each line (other tokens are ignored) and the word IDs are the corresponding line numbers.

  • start_word (str) -- Special word denoting sentence start.

  • end_word (str) -- Special word denoting sentence end.

  • unk_word (str) -- Special word denoting unknown words.

  • pad_word (str) -- Special word denoting padding.

Attributes
  • vocab (dictionary) -- A dictionary that maps word to ID.

  • reverse_vocab (list of str) -- A list that maps ID to word.

  • start_id (int) -- The ID of the start word.

  • end_id (int) -- The ID of the end word.

  • unk_id (int) -- The ID of the unknown word.

  • pad_id (int) -- The ID of the padding word.

Examples

The vocab file looks as follows; it includes start_word, end_word, and so on.

>>> a 969108
>>> <S> 586368
>>> </S> 586368
>>> . 440479
>>> on 213612
>>> of 202290
>>> the 196219
>>> in 182598
>>> with 152984
>>> and 139109
>>> is 97322

Process sentence

tensorlayer.nlp.process_sentence(sentence, start_word='<S>', end_word='</S>')

Separate a sentence string into a list of string words, add start_word and end_word; see create_vocab() and tutorial_tfrecord3.py.

Parameters
  • sentence (str) -- A sentence.

  • start_word (str or None) -- The start word. If None, no start word will be appended.

  • end_word (str or None) -- The end word. If None, no end word will be appended.

Returns

A list of strings: the sentence split into words.

Return type

list of str

Examples

>>> c = "how are you?"
>>> c = tl.nlp.process_sentence(c)
>>> print(c)
['<S>', 'how', 'are', 'you', '?', '</S>']

Create vocabulary

tensorlayer.nlp.create_vocab(sentences, word_counts_output_file, min_word_count=1)

Creates the vocabulary of word to word_id.

See tutorial_tfrecord3.py.

The vocabulary is saved to disk in a text file of word counts. The id of each word in the file is its corresponding 0-based line number.

Parameters
  • sentences (list of list of str) -- All sentences for creating the vocabulary.

  • word_counts_output_file (str) -- The file name.

  • min_word_count (int) -- Minimum number of occurrences for a word.

Returns

The simple vocabulary object, see Vocabulary for more.

Return type

SimpleVocabulary

Examples

Pre-process sentences

>>> captions = ["one two , three", "four five five"]
>>> processed_capts = []
>>> for c in captions:
>>>     c = tl.nlp.process_sentence(c, start_word="<S>", end_word="</S>")
>>>     processed_capts.append(c)
>>> print(processed_capts)
[['<S>', 'one', 'two', ',', 'three', '</S>'], ['<S>', 'four', 'five', 'five', '</S>']]

Create vocabulary

>>> tl.nlp.create_vocab(processed_capts, word_counts_output_file='vocab.txt', min_word_count=1)
Creating vocabulary.
  Total words: 8
  Words in vocabulary: 8
  Wrote vocabulary file: vocab.txt

Get vocabulary object

>>> vocab = tl.nlp.Vocabulary('vocab.txt', start_word="<S>", end_word="</S>", unk_word="<UNK>")
INFO:tensorflow:Initializing vocabulary from file: vocab.txt
[TL] Vocabulary from vocab.txt : <S> </S> <UNK>
vocabulary with 10 words (includes start_word, end_word, unk_word)
    start_id: 2
    end_id: 3
    unk_id: 9
    pad_id: 0

Read text from a file

Simple read file

tensorlayer.nlp.simple_read_words(filename='nietzsche.txt')

Read context from file without any preprocessing.

Parameters

filename (str) -- A file path (like .txt file)

Returns

The context in a string.

Return type

str
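
A minimal usage sketch (the file name is just the documented default; any plain-text file works):

>>> context = tl.nlp.simple_read_words('nietzsche.txt')
>>> print(context[:50])   # first 50 characters of the raw file content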

Read file

tensorlayer.nlp.read_words(filename='nietzsche.txt', replace=None)

Read list format context from a file.

For customized read_words method, see tutorial_generate_text.py.

Parameters
  • filename (str) -- A file path.

  • replace (list of str) -- A pair [original string, target string]; the original string will be replaced by the target string.

Returns

The context as a list of words (split by spaces).

Return type

list of str
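
A minimal usage sketch, assuming 'nietzsche.txt' exists. The replace pair shown here (newline replaced by an '<eos>' marker) is an assumed example, not a documented default:

>>> words = tl.nlp.read_words('nietzsche.txt', replace=['\n', '<eos>'])
>>> print(words[:10])   # first 10 whitespace-separated tokens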

Read analogy questions from a file

tensorlayer.nlp.read_analogies_file(eval_file='questions-words.txt', word2id=None)

Read through an analogy question file and return the questions in ID format.

Parameters
  • eval_file (str) -- The file name.

  • word2id (dictionary) -- a dictionary that maps word to ID.

Returns

A [n_examples, 4] numpy array containing the analogy question's word IDs.

Return type

numpy.array

Examples

The file should be in this format

>>> : capital-common-countries
>>> Athens Greece Baghdad Iraq
>>> Athens Greece Bangkok Thailand
>>> Athens Greece Beijing China
>>> Athens Greece Berlin Germany
>>> Athens Greece Bern Switzerland
>>> Athens Greece Cairo Egypt
>>> Athens Greece Canberra Australia
>>> Athens Greece Hanoi Vietnam
>>> Athens Greece Havana Cuba

Get the tokenized analogy question data

>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> vocabulary_size = 50000
>>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True)
>>> analogy_questions = tl.nlp.read_analogies_file(eval_file='questions-words.txt', word2id=dictionary)
>>> print(analogy_questions)
[[ 3068  1248  7161  1581]
[ 3068  1248 28683  5642]
[ 3068  1248  3878   486]
...,
[ 1216  4309 19982 25506]
[ 1216  4309  3194  8650]
[ 1216  4309   140   312]]

Build the vocabulary, word-ID conversion dictionaries, and convert text to IDs

Build a word-to-ID dictionary

tensorlayer.nlp.build_vocab(data)

Build vocabulary.

Given the context in list format, return the vocabulary as a dictionary that maps each word to an ID, e.g. {'campbell': 2587, 'atlantic': 2247, 'aoun': 6746, ...}.

Parameters

data (list of str) -- The context in list format

Returns

A dictionary that maps each word to a unique ID, e.g. {'campbell': 2587, 'atlantic': 2247, 'aoun': 6746, ...}.

Return type

dictionary

Examples

>>> data_path = os.getcwd() + '/simple-examples/data'
>>> train_path = os.path.join(data_path, "ptb.train.txt")
>>> word_to_id = tl.nlp.build_vocab(tl.nlp.read_words(train_path))

Build an ID-to-word dictionary

tensorlayer.nlp.build_reverse_dictionary(word_to_id)

Given a dictionary that maps words to integer IDs, return a reverse dictionary that maps the IDs back to words.

Parameters

word_to_id (dictionary) -- A dictionary that maps word to ID.

Returns

A dictionary that maps IDs to words.

Return type

dictionary
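
A minimal usage sketch:

>>> word_to_id = {'the': 0, 'of': 1, 'and': 2}
>>> id_to_word = tl.nlp.build_reverse_dictionary(word_to_id)
>>> print(id_to_word)   # {0: 'the', 1: 'of', 2: 'and'}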

Build dictionaries and count tables

tensorlayer.nlp.build_words_dataset(words=None, vocabulary_size=50000, printable=True, unk_key='UNK')

Build the words dictionary and replace rare words with 'UNK' token. The most common word has the smallest integer id.

Parameters
  • words (list of str or byte) -- The context in list format. You may need to do preprocessing on the words, such as lower case, remove marks etc.

  • vocabulary_size (int) -- The maximum vocabulary size; words beyond this limit are treated as rare and replaced by the 'UNK' token.

  • printable (boolean) -- Whether to print the read vocabulary size of the given words.

  • unk_key (str) -- Represent the unknown words.

Returns

  • data (list of int) -- The context in a list of ID.

  • count (list of tuple and list) --

    Pairs of words and their counts.
    • count[0] is a list [unk_key, n]: the number of words replaced by the unknown token.

    • count[1:] are tuples (word, n): the number of occurrences of each word.

    • e.g. [['UNK', 418391], (b'the', 1061396), (b'of', 593677), (b'and', 416629), (b'one', 411764)]

  • dictionary (dictionary) -- It is word_to_id that maps word to ID.

  • reverse_dictionary (a dictionary) -- It is id_to_word that maps ID to word.

Examples

>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> vocabulary_size = 50000
>>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size)

Save the vocabulary

tensorlayer.nlp.save_vocab(count=None, name='vocab.txt')

Save the vocabulary to a file so the model can be reloaded.

Parameters

count (a list of tuple and list) -- count[0] is a list [unk_key, n] giving the number of rare words; count[1:] are tuples (word, n) giving the number of occurrences of each word, e.g. [['UNK', 418391], (b'the', 1061396), (b'of', 593677), (b'and', 416629), (b'one', 411764)].

Examples

>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> vocabulary_size = 50000
>>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True)
>>> tl.nlp.save_vocab(count, name='vocab_text8.txt')
>>> vocab_text8.txt
UNK 418391
the 1061396
of 593677
and 416629
one 411764
in 372201
a 325873
to 316376

Convert words to IDs and IDs to words

These operations can also be performed with the Vocabulary class.

Words to IDs

tensorlayer.nlp.words_to_word_ids(data=None, word_to_id=None, unk_key='UNK')

Convert a list of strings (words) to IDs.

Parameters
  • data (list of string or byte) -- The context in list format

  • word_to_id (a dictionary) -- A dictionary that maps word to ID.

  • unk_key (str) -- Represent the unknown words.

Returns

A list of IDs to represent the context.

Return type

list of int

Examples

>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> vocabulary_size = 50000
>>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True)
>>> context = [b'hello', b'how', b'are', b'you']
>>> ids = tl.nlp.words_to_word_ids(context, dictionary)
>>> context = tl.nlp.word_ids_to_words(ids, reverse_dictionary)
>>> print(ids)
[6434, 311, 26, 207]
>>> print(context)
[b'hello', b'how', b'are', b'you']

IDs to words

tensorlayer.nlp.word_ids_to_words(data, id_to_word)

Convert a list of integers to strings (words).

Parameters
  • data (list of int) -- The context in list format.

  • id_to_word (dictionary) -- a dictionary that maps ID to word.

Returns

A list of strings or bytes representing the context.

Return type

list of str

Examples

See tl.nlp.words_to_word_ids.

Functions for machine translation

Word tokenization

tensorlayer.nlp.basic_tokenizer(sentence, _WORD_SPLIT=re.compile(b'([., !?"\':;)(])'))

Very basic tokenizer: split the sentence into a list of tokens.

Parameters
  • sentence (tensorflow.python.platform.gfile.GFile Object) -- The sentence in bytes format to be tokenized.

  • _WORD_SPLIT (regular expression) -- The regular expression used for word splitting.

Examples

>>> see create_vocabulary
>>> from tensorflow.python.platform import gfile
>>> train_path = "wmt/giga-fren.release2"
>>> with gfile.GFile(train_path + ".en", mode="rb") as f:
>>>    for line in f:
>>>       tokens = tl.nlp.basic_tokenizer(line)
>>>       tl.logging.info(tokens)
>>>       exit()
[b'Changing', b'Lives', b'|', b'Changing', b'Society', b'|', b'How',
  b'It', b'Works', b'|', b'Technology', b'Drives', b'Change', b'Home',
  b'|', b'Concepts', b'|', b'Teachers', b'|', b'Search', b'|', b'Overview',
  b'|', b'Credits', b'|', b'HHCC', b'Web', b'|', b'Reference', b'|',
  b'Feedback', b'Virtual', b'Museum', b'of', b'Canada', b'Home', b'Page']

References

  • Code from /tensorflow/models/rnn/translation/data_utils.py

Create or read the vocabulary

tensorlayer.nlp.create_vocabulary(vocabulary_path, data_path, max_vocabulary_size, tokenizer=None, normalize_digits=True, _DIGIT_RE=re.compile(b'\\d'), _START_VOCAB=None)

Create vocabulary file (if it does not exist yet) from data file.

Data file is assumed to contain one sentence per line. Each sentence is tokenized and digits are normalized (if normalize_digits is set). Vocabulary contains the most-frequent tokens up to max_vocabulary_size. We write it to vocabulary_path in a one-token-per-line format, so that the token on the first line gets id=0, the token on the second line gets id=1, and so on.

Parameters
  • vocabulary_path (str) -- Path where the vocabulary will be created.

  • data_path (str) -- Data file that will be used to create vocabulary.

  • max_vocabulary_size (int) -- Limit on the size of the created vocabulary.

  • tokenizer (function) -- A function to use to tokenize each data sentence. If None, basic_tokenizer will be used.

  • normalize_digits (boolean) -- If true, all digits are replaced by 0.

  • _DIGIT_RE (regular expression function) -- Default is re.compile(br"\d").

  • _START_VOCAB (list of str) -- The pad, go, eos and unk token, default is [b"_PAD", b"_GO", b"_EOS", b"_UNK"].

References

  • Code from /tensorflow/models/rnn/translation/data_utils.py
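
A minimal usage sketch with hypothetical file names ('train.en' holds one sentence per line; 'vocab40000.en' is created if it does not exist yet):

>>> tl.nlp.create_vocabulary('vocab40000.en', 'train.en', max_vocabulary_size=40000)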

tensorlayer.nlp.initialize_vocabulary(vocabulary_path)

Initialize vocabulary from file, return the word_to_id (dictionary) and id_to_word (list).

We assume the vocabulary is stored one item per line, so a file containing the lines 'dog' and 'cat' will result in the vocabulary {"dog": 0, "cat": 1}, and this function will also return the reversed vocabulary ["dog", "cat"].

Parameters

vocabulary_path (str) -- Path to the file containing the vocabulary.

Returns

  • vocab (dictionary) -- A dictionary that maps word to ID.

  • rev_vocab (list) -- A list that maps ID to word.

Examples

Assume the file 'test' contains:

dog
cat
bird

>>> vocab, rev_vocab = tl.nlp.initialize_vocabulary("test")
>>> print(vocab)
{b'cat': 1, b'dog': 0, b'bird': 2}
>>> print(rev_vocab)
[b'dog', b'cat', b'bird']

Raises

ValueError -- If the provided vocabulary_path does not exist.

Convert sentences and data files to token IDs

tensorlayer.nlp.sentence_to_token_ids(sentence, vocabulary, tokenizer=None, normalize_digits=True, UNK_ID=3, _DIGIT_RE=re.compile(b'\\d'))

Convert a string to list of integers representing token-ids.

For example, a sentence "I have a dog" may become tokenized into ["I", "have", "a", "dog"] and with vocabulary {"I": 1, "have": 2, "a": 4, "dog": 7} this function will return [1, 2, 4, 7].

Parameters
  • sentence (tensorflow.python.platform.gfile.GFile Object) -- The sentence in bytes format to convert to token-ids, see basic_tokenizer() and data_to_token_ids().

  • vocabulary (dictionary) -- A dictionary mapping tokens to integers.

  • tokenizer (function) -- A function to use to tokenize each sentence. If None, basic_tokenizer will be used.

  • normalize_digits (boolean) -- If true, all digits are replaced by 0.

Returns

The token-ids for the sentence.

Return type

list of int
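
A small sketch of the example described above; the vocabulary keys are bytes because the default basic_tokenizer yields byte tokens:

>>> vocab = {b'I': 1, b'have': 2, b'a': 4, b'dog': 7}
>>> tl.nlp.sentence_to_token_ids(b"I have a dog", vocab)
[1, 2, 4, 7]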

tensorlayer.nlp.data_to_token_ids(data_path, target_path, vocabulary_path, tokenizer=None, normalize_digits=True, UNK_ID=3, _DIGIT_RE=re.compile(b'\\d'))

Tokenize data file and turn into token-ids using given vocabulary file.

This function loads data line-by-line from data_path, calls the above sentence_to_token_ids, and saves the result to target_path. See comment for sentence_to_token_ids on the details of token-ids format.

Parameters
  • data_path (str) -- Path to the data file in one-sentence-per-line format.

  • target_path (str) -- Path where the file with token-ids will be created.

  • vocabulary_path (str) -- Path to the vocabulary file.

  • tokenizer (function) -- A function to use to tokenize each sentence. If None, basic_tokenizer will be used.

  • normalize_digits (boolean) -- If true, all digits are replaced by 0.

References

  • Code from /tensorflow/models/rnn/translation/data_utils.py
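
A minimal usage sketch with hypothetical paths: tokenize 'train.en' with the vocabulary file 'vocab40000.en' and write the resulting ID sequences to 'train.ids40000.en'.

>>> tl.nlp.data_to_token_ids('train.en', 'train.ids40000.en', 'vocab40000.en')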

Metrics

BLEU

tensorlayer.nlp.moses_multi_bleu(hypotheses, references, lowercase=False)

Calculate the BLEU score for hypotheses and references using the MOSES multi-bleu.perl script.

Parameters
  • hypotheses (numpy.array.string) -- A numpy array of strings where each string is a single example.

  • references (numpy.array.string) -- A numpy array of strings where each string is a single example.

  • lowercase (boolean) -- If True, pass the "-lc" flag to the multi-bleu script

Examples

>>> hypotheses = ["a bird is flying on the sky"]
>>> references = ["two birds are flying on the sky", "a bird is on the top of the tree", "an airplane is on the sky",]
>>> score = tl.nlp.moses_multi_bleu(hypotheses, references)

Returns

The BLEU score

Return type

float
