126 changes: 125 additions & 1 deletion word_embedding/README.md
@@ -1 +1,125 @@
TBD
# Accelerating Word Embedding Training with Hsigmoid
**Contributor:**

Suggested title: "Hsigmoid 加速词向量训练" (Accelerating Word Vector Training with Hsigmoid).

**Contributor Author:**

Done

## Background
In natural language processing, the traditional approach is to represent words as one-hot vectors. For example, given the dictionary ['我', '你', '喜欢'], the three words '我', '你', and '喜欢' can be represented by the vectors [1,0,0], [0,1,0], and [0,0,1] respectively. This representation is simple, but when the vocabulary is large it leads to a dimensionality explosion, and since any two word vectors are orthogonal, a vector carries very little information. To avoid or mitigate these drawbacks, one-hot representations are nowadays usually replaced by word embeddings: a low-dimensional dense real-valued vector takes the place of the high-dimensional sparse one-hot vector. There are many ways to train an embedding table; neural network models such as CBOW are one of them. These models are essentially classifiers, and when the vocabulary (i.e., the number of classes) is large, the traditional softmax becomes very time-consuming. For such scenarios, PaddlePaddle provides layers such as hsigmoid to speed up model training.
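As a quick sketch of the contrast between the two representations (a toy example of ours; the dimensions are made up and the embedding table is random rather than trained):

```python
import numpy as np

vocab = ['我', '你', '喜欢']      # the toy dictionary from above
one_hot = np.eye(len(vocab))      # rows are one-hot vectors, e.g. [1, 0, 0]

embed_dim = 2                     # a low-dimensional dense embedding
embed_table = np.random.randn(len(vocab), embed_dim)
word_vec = embed_table[vocab.index('喜欢')]  # embedding lookup by word id
```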
**Contributor:**

1. Change the comma before "而且任意两个词" to a semicolon.
2. Change "词嵌入向量" to "词向量" (word vector).
3. Reword as: "词向量也就是word embedding,即使用一个低维稠密的实向量取代高维稀疏的one-hot向量" (a word vector, i.e. word embedding, replaces the high-dimensional sparse one-hot vector with a low-dimensional dense real-valued vector).
4. Change "训练embedding词表" to: "There are many ways to train word vectors; neural network models are one of them, including CBOW (please list a few more besides CBOW), etc."
5. "These models are all essentially classification models; when the vocabulary, i.e. the number of classes, is large, the traditional softmax is very time-consuming." (change the comma to a period)
6. "PaddlePaddle provides layers such as hsigmoid." Regarding "such as": which other layers are there? Please list them; if there are none, drop the "such as".

Please also find a place to add a link to the word-vector chapter of the PaddlePaddle Book.

**Contributor Author:**

Done

## Hsigmoid Layer
The Hsigmoid layer comes from the paper \[[1](#references)\]. Its idea is to reduce computational complexity by building a classification binary tree, in which every leaf node represents a class and every internal node is a binary classifier. For example, suppose there are four classes 0, 1, 2, and 3: softmax computes a score for each of the four classes and then normalizes the scores into probabilities, which becomes very time-consuming when the number of classes is large. The Hsigmoid layer instead builds a balanced binary tree over the classes, as shown below:
**Contributor:**

1. Does Hsigmoid stand for Hierarchical sigmoid? Please explain what the H means.
2. Change the commas in "0,1,2,3" to enumeration commas (、).
3. Change the comma before "当类别数" to a period.
4. "computing the probability of each class is very time-consuming" (drop the "将"). Could you mention that the time complexity drops from n to log(n)?

**Contributor Author:**

Done


<p align="center">
<img src="images/binary_tree.png" width="220" hspace='10'/> <img src="images/path_to_1.png" width="220" hspace='10'/> <br/>
Left: the balanced classification binary tree; right: the path from the root node to class 1
**Contributor:**

Figures need numbers. Change line 9 to: "Figure 1. (a) the balanced binary tree; (b) the path from the root node to class 1", and label the two subfigures (a) and (b) underneath them.

**Contributor Author:**

Done

</p>
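The payoff of the tree is that each sample only touches one classifier per tree level: for N classes, the per-sample cost drops from O(N) with softmax to O(log N). For instance, with a 10,000-word vocabulary, softmax scores all 10,000 classes, while a balanced tree evaluates only about log2(10,000) ≈ 14 binary classifiers.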

Every internal node of the binary tree is a binary classifier (e.g. a sigmoid). If the classifier outputs class 0 we descend into the left child, otherwise into the right child, until a leaf is reached. In this way, every class corresponds to a unique path; for example, the path from the root to class 1 is encoded as 0, 1. During training we follow the path of the true class, compute the loss of each classifier along that path, and combine all of them into the final loss; see the paper for the theoretical details. At prediction time, the model outputs the probability of every internal classifier; from these probabilities we can recover a path encoding and traverse it to obtain the final predicted class. The implementation details are given below.
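As a sketch of this encoding convention (our illustration, mirroring the decoding function shown later; `path_to_class` is a hypothetical helper, not a PaddlePaddle API):

```python
def path_to_class(label, dict_size):
    # In a complete binary tree with dict_size leaves, the leaf for `label`
    # carries the number label + dict_size; its binary digits after the
    # leading 1 are the path bits from the root (0 = left, 1 = right).
    bits = bin(label + dict_size)[3:]  # strip the '0b1' prefix
    return [int(b) for b in bits]

print(path_to_class(1, 4))  # -> [0, 1], the root-to-class-1 path above
```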
**Contributor:**

1. Change the comma in "0,1" to an enumeration comma (、).
2. "see paper [1]" (cite the reference number).
3. "given below" — where is "below"? The code or the paper?

**Contributor Author:**

Done


# Data Preparation
**Contributor:**

"数据准备" (Data Preparation) should use ##; an article should have only one #.

**Contributor Author:**

Done

This tutorial uses the Penn Treebank (PTB) dataset (Tomas Mikolov's preprocessed version), which consists of three files: train, valid, and test. We use train as the training data and valid as the test data. We train a 5-gram model, in which the first four words of each sample are used to predict the fifth. PaddlePaddle provides the Python package paddle.dataset.imikolov for the PTB dataset, which downloads and preprocesses the data automatically. Preprocessing adds the start token \<s> and the end token \<e> to both ends of every sentence in the dataset, then slides a window of the chosen size (5 in this tutorial) from left to right, generating one sample per position. For example, "I have a dream that one day" yields \<s> I have a dream, I have a dream that, have a dream that one, a dream that one day, and dream that one day \<e>. PaddlePaddle then converts the words into ids, which form the final input.
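If you want to train on your own corpus rather than PTB, a reader along these lines should work (a sketch under our own assumptions: each line of the raw file is one sentence with words separated by spaces, and `<s>`, `<e>`, `<unk>` are present in `word_dict`; the file path is hypothetical):

```python
def gen_ngram_reader(file_path, word_dict, n=5):
    # Yields n-word id tuples, sliding a window over each sentence after
    # padding it with the start and end tokens, like paddle.dataset.imikolov.
    def reader():
        unk = word_dict['<unk>']
        with open(file_path) as f:
            for line in f:
                words = ['<s>'] + line.strip().split() + ['<e>']
                ids = [word_dict.get(w, unk) for w in words]
                for i in range(len(ids) - n + 1):
                    yield tuple(ids[i:i + n])
    return reader

# Hypothetical usage in place of the PTB reader:
# train_reader = gen_ngram_reader('data/my_corpus.txt', word_dict)
```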
**Contributor:**

1. Please add a link to the dataset.
2. "that is, the first four words of each sample are used to predict the fifth word".
3. Please add a link to the source code of the Python package paddle.dataset.imikolov.
4. "Preprocessing adds ... to both ends of every sentence in the dataset," (change the period to a comma) "then ...".
5. In the last sentence, "PaddlePaddle converts the words into ids as the final input": change "the final input" to "the output of preprocessing".

**Collaborator:**

Let me explain. The data does need an explanation, but in my opinion the PTB dataset itself does not. The key point is: given a user's own input data, how do they use this configuration to retrain the model on their own data?

**Contributor Author:**

Done

# Implementation
**Contributor:**

"编程实现" (Implementation) on line 16 can be removed; start directly with the network structure on line 17.

**Contributor Author:**

Done

## Network Structure
We obtain the word vectors by training an N-gram language model; concretely, the previous four words are used to predict the current word. The network takes word ids as input, looks up the embedding table to get each word's embedding vector, concatenates the embedding vectors of the four words, feeds the result into a fully connected hidden layer, and finally into a hsigmoid layer. The detailed network structure is shown in the figure below:
**Contributor:**

1. "The detailed network structure is shown in Figure 2."
2. "embedding" can be changed to "词向量" (word vector).
3. "hsigmoid" here but "Hsigmoid" earlier: please make the spelling consistent.

**Contributor Author:**

Done


<p align="center">
<img src="images/network_conf.png" width = "70%" align="center"/><br/>
The network configuration
**Contributor:**

Figures need numbers.

**Contributor Author:**

Done

</p>

The code is as follows:

```python
import math
import paddle.v2 as paddle


def network_conf(hidden_size, embed_size, dict_size, is_train=True):
    # Input layers: the ids of the four context words and the target word.
    first_word = paddle.layer.data(
        name='firstw', type=paddle.data_type.integer_value(dict_size))
    second_word = paddle.layer.data(
        name='secondw', type=paddle.data_type.integer_value(dict_size))
    third_word = paddle.layer.data(
        name='thirdw', type=paddle.data_type.integer_value(dict_size))
    fourth_word = paddle.layer.data(
        name='fourthw', type=paddle.data_type.integer_value(dict_size))
    target_word = paddle.layer.data(
        name='fifthw', type=paddle.data_type.integer_value(dict_size))

    # The four context words share a single embedding table named "_proj".
    embed_param_attr = paddle.attr.Param(
        name="_proj", initial_std=0.001, learning_rate=1, l2_rate=0)
    embed_first_word = paddle.layer.embedding(
        input=first_word, size=embed_size, param_attr=embed_param_attr)
    embed_second_word = paddle.layer.embedding(
        input=second_word, size=embed_size, param_attr=embed_param_attr)
    embed_third_word = paddle.layer.embedding(
        input=third_word, size=embed_size, param_attr=embed_param_attr)
    embed_fourth_word = paddle.layer.embedding(
        input=fourth_word, size=embed_size, param_attr=embed_param_attr)

    # Concatenate the four embeddings into one context vector.
    embed_context = paddle.layer.concat(input=[
        embed_first_word, embed_second_word, embed_third_word,
        embed_fourth_word
    ])

    hidden_layer = paddle.layer.fc(
        input=embed_context,
        size=hidden_size,
        act=paddle.activation.Sigmoid(),
        layer_attr=paddle.attr.Extra(drop_rate=0.5),
        bias_attr=paddle.attr.Param(learning_rate=2),
        param_attr=paddle.attr.Param(
            initial_std=1. / math.sqrt(embed_size * 8), learning_rate=1))

    if is_train:
        # Training: hsigmoid computes the loss along the tree path of the
        # true class.
        cost = paddle.layer.hsigmoid(
            input=hidden_layer,
            label=target_word,
            num_classes=dict_size,
            param_attr=paddle.attr.Param(name='sigmoid_w'),
            bias_attr=paddle.attr.Param(name='sigmoid_b'))
        return cost
    else:
        # Prediction: reuse the (transposed) hsigmoid parameters to output
        # the probability of every internal node of the binary tree.
        with paddle.layer.mixed(
                size=dict_size - 1,
                act=paddle.activation.Sigmoid(),
                bias_attr=paddle.attr.Param(name='sigmoid_b')) as prediction:
            prediction += paddle.layer.trans_full_matrix_projection(
                input=hidden_layer,
                param_attr=paddle.attr.Param(name='sigmoid_w'))
        return prediction
```

Note that at prediction time we transpose the hsigmoid parameters, and the output size is the dictionary size minus 1, which corresponds to the number of internal (non-leaf) nodes.
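As a quick check of that output size: a complete binary tree with N leaves has N - 1 internal nodes, so for the 4-class tree shown earlier the prediction layer would output 4 - 1 = 3 probabilities per sample.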

## Prediction
**Contributor:**

Change "预测阶段" (Prediction Stage) to "如何预测" (How to Predict).

**Contributor Author:**

Done

The most important step at prediction time is to recover the path encoding from the probabilities and then traverse that path to obtain the final predicted class. The logic is as follows:

```python
def decode_res(infer_res, dict_size):
    """
    Inferring probabilities are organized as a complete binary tree.
    The actual labels are leaves (indices are counted from the class number).
    This function traverses the paths decoded from the inferring results.
    If the probability is > 0.5, go to the right child; otherwise go to the
    left child.

    param infer_res: inferring result
    param dict_size: class number
    return predict_lbls: actual classes
    """
    predict_lbls = []
    infer_res = infer_res > 0.5
    for i, probs in enumerate(infer_res):
        idx = 0
        result = 1
        while idx < len(probs):
            # Append one path bit: 1 for the right child, 0 for the left.
            result <<= 1
            if probs[idx]:
                result |= 1
                idx = idx * 2 + 2  # right child
            else:
                idx = idx * 2 + 1  # left child

        # Leaves are numbered from dict_size, so subtract to get the class.
        predict_lbl = result - dict_size
        predict_lbls.append(predict_lbl)
    return predict_lbls
```

The function takes the predicted probabilities of a batch of samples and the size of the dictionary. The loop decodes the output probabilities of each sample: following the rule "left = 0, right = 1", it repeatedly walks down the path until a leaf node is reached. Note that the dataset used in this tutorial needs a long training time to produce good results; the prediction script loads the model from the first pass only for ease of demonstration, and no claim is made about its quality.
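For a concrete check, here is a toy input of ours for the 4-class tree shown earlier (decode_res expects, as in the prediction script, a numpy array of internal-node probabilities with one row per sample):

```python
import numpy as np

# One sample, dict_size = 4, hence 3 internal-node probabilities.
# Root: 0.2 < 0.5, go left; next node: 0.9 > 0.5, go right.
probs = np.array([[0.2, 0.9, 0.4]])
print(decode_res(probs, dict_size=4))  # -> [1], i.e. class 1
```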
**Collaborator:**

Two more subsections are needed:

1. Add a "How to Run" section; one or two sentences are enough, e.g. training can be run directly with python hsigmoid_train.py.
2. Add a "Training on Your Own Data" section: paste a snippet of the reader code and call create_reader.

**Contributor Author:**

done

**Collaborator:**

Please still explain in a sentence or two what the raw data looks like, e.g.: assume each line of the raw data is one sentence, with words separated by spaces; then show one example line. Something along those lines.

# References
1. Morin, F., & Bengio, Y. (2005, January). [Hierarchical Probabilistic Neural Network Language Model](http://www.iro.umontreal.ca/~lisa/pointeurs/hierarchical-nnlm-aistats05.pdf). In Aistats (Vol. 5, pp. 246-252).
60 changes: 60 additions & 0 deletions word_embedding/hsigmoid_conf.py
@@ -0,0 +1,60 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import math
import paddle.v2 as paddle


def network_conf(hidden_size, embed_size, dict_size, is_train=True):
    first_word = paddle.layer.data(
        name='firstw', type=paddle.data_type.integer_value(dict_size))
    second_word = paddle.layer.data(
        name='secondw', type=paddle.data_type.integer_value(dict_size))
    third_word = paddle.layer.data(
        name='thirdw', type=paddle.data_type.integer_value(dict_size))
    fourth_word = paddle.layer.data(
        name='fourthw', type=paddle.data_type.integer_value(dict_size))
    target_word = paddle.layer.data(
        name='fifthw', type=paddle.data_type.integer_value(dict_size))

    embed_param_attr = paddle.attr.Param(
        name="_proj", initial_std=0.001, learning_rate=1, l2_rate=0)
    embed_first_word = paddle.layer.embedding(
        input=first_word, size=embed_size, param_attr=embed_param_attr)
    embed_second_word = paddle.layer.embedding(
        input=second_word, size=embed_size, param_attr=embed_param_attr)
    embed_third_word = paddle.layer.embedding(
        input=third_word, size=embed_size, param_attr=embed_param_attr)
    embed_fourth_word = paddle.layer.embedding(
        input=fourth_word, size=embed_size, param_attr=embed_param_attr)

    embed_context = paddle.layer.concat(input=[
        embed_first_word, embed_second_word, embed_third_word,
        embed_fourth_word
    ])

    hidden_layer = paddle.layer.fc(
        input=embed_context,
        size=hidden_size,
        act=paddle.activation.Sigmoid(),
        layer_attr=paddle.attr.Extra(drop_rate=0.5),
        bias_attr=paddle.attr.Param(learning_rate=2),
        param_attr=paddle.attr.Param(
            initial_std=1. / math.sqrt(embed_size * 8), learning_rate=1))

    if is_train:
        cost = paddle.layer.hsigmoid(
            input=hidden_layer,
            label=target_word,
            num_classes=dict_size,
            param_attr=paddle.attr.Param(name='sigmoid_w'),
            bias_attr=paddle.attr.Param(name='sigmoid_b'))
        return cost
    else:
        with paddle.layer.mixed(
                size=dict_size - 1,
                act=paddle.activation.Sigmoid(),
                bias_attr=paddle.attr.Param(name='sigmoid_b')) as prediction:
            prediction += paddle.layer.trans_full_matrix_projection(
                input=hidden_layer,
                param_attr=paddle.attr.Param(name='sigmoid_w'))
        return prediction
83 changes: 83 additions & 0 deletions word_embedding/hsigmoid_predict.py
@@ -0,0 +1,83 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import paddle.v2 as paddle
from hsigmoid_conf import network_conf
import gzip


def decode_res(infer_res, dict_size):
    """
    Inferring probabilities are organized as a complete binary tree.
    The actual labels are leaves (indices are counted from the class number).
    This function traverses the paths decoded from the inferring results.
    If the probability is > 0.5, go to the right child; otherwise go to the
    left child.

    param infer_res: inferring result
    param dict_size: class number
    return predict_lbls: actual classes
    """
    predict_lbls = []
    infer_res = infer_res > 0.5
    for i, probs in enumerate(infer_res):
        idx = 0
        result = 1
        while idx < len(probs):
            result <<= 1
            if probs[idx]:
                result |= 1
                idx = idx * 2 + 2  # right child
            else:
                idx = idx * 2 + 1  # left child

        predict_lbl = result - dict_size
        predict_lbls.append(predict_lbl)
    return predict_lbls


def predict(batch_ins, idx_word_dict, dict_size, prediction_layer, parameters):
    infer_res = paddle.infer(
        output_layer=prediction_layer, parameters=parameters, input=batch_ins)

    predict_lbls = decode_res(infer_res, dict_size)
    predict_words = [idx_word_dict[lbl] for lbl in predict_lbls]  # map to word

    # Output format: word1 word2 word3 word4 -> predict label
    for i, ins in enumerate(batch_ins):
        print(idx_word_dict[ins[0]] + ' ' + idx_word_dict[ins[1]] + ' ' +
              idx_word_dict[ins[2]] + ' ' + idx_word_dict[ins[3]] +
              ' -> ' + predict_words[i])


def main():
    paddle.init(use_gpu=False, trainer_count=1)
    word_dict = paddle.dataset.imikolov.build_dict(min_word_freq=2)
    dict_size = len(word_dict)
    prediction_layer = network_conf(
        is_train=False, hidden_size=256, embed_size=32, dict_size=dict_size)

    with gzip.open('./models/model_pass_00000.tar.gz') as f:
        parameters = paddle.parameters.Parameters.from_tar(f)

    idx_word_dict = dict((v, k) for k, v in word_dict.items())
    batch_size = 64
    batch_ins = []
    ins_iter = paddle.dataset.imikolov.test(word_dict, 5)

    for ins in ins_iter():
        batch_ins.append(ins[:-1])
        if len(batch_ins) == batch_size:
            predict(batch_ins, idx_word_dict, dict_size, prediction_layer,
                    parameters)
            batch_ins = []

    if len(batch_ins) > 0:
        predict(batch_ins, idx_word_dict, dict_size, prediction_layer,
                parameters)


if __name__ == '__main__':
    main()
56 changes: 56 additions & 0 deletions word_embedding/hsigmoid_train.py
@@ -0,0 +1,56 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import paddle.v2 as paddle
from hsigmoid_conf import network_conf
import gzip


def main():
    paddle.init(use_gpu=False, trainer_count=1)
    word_dict = paddle.dataset.imikolov.build_dict(min_word_freq=2)
    dict_size = len(word_dict)
    cost = network_conf(
        is_train=True, hidden_size=256, embed_size=32, dict_size=dict_size)

    def event_handler(event):
        if isinstance(event, paddle.event.EndPass):
            model_name = './models/model_pass_%05d.tar.gz' % event.pass_id
            print("Save model into %s ..." % model_name)
            with gzip.open(model_name, 'w') as f:
                parameters.to_tar(f)

        if isinstance(event, paddle.event.EndIteration):
            if event.batch_id % 100 == 0:
                result = trainer.test(
                    paddle.batch(
                        paddle.dataset.imikolov.test(word_dict, 5), 32))
                print("Pass %d, Batch %d, Cost %f, Test Cost %f" %
                      (event.pass_id, event.batch_id, event.cost, result.cost))

    feeding = {
        'firstw': 0,
        'secondw': 1,
        'thirdw': 2,
        'fourthw': 3,
        'fifthw': 4
    }

    parameters = paddle.parameters.create(cost)
    adam_optimizer = paddle.optimizer.Adam(
        learning_rate=3e-3,
        regularization=paddle.optimizer.L2Regularization(8e-4))
    trainer = paddle.trainer.SGD(cost, parameters, adam_optimizer)

    trainer.train(
        paddle.batch(
            paddle.reader.shuffle(
                lambda: paddle.dataset.imikolov.train(word_dict, 5)(),
                buf_size=1000), 64),
        num_passes=30,
        event_handler=event_handler,
        feeding=feeding)


if __name__ == '__main__':
    main()
Binary file added word_embedding/images/binary_tree.png
Binary file added word_embedding/images/network_conf.png
Binary file added word_embedding/images/path_to_1.png