test_understand_sentiment_lstm.py only use fixed data #5780

@JiayiFeng

Description

In test_understand_sentiment_lstm.py, the training code looks like this:

def main():
    word_dict = paddle.dataset.imdb.word_dict()
    cost, acc = lstm_net(dict_dim=len(word_dict), class_dim=2)

    batch_size = 100
    train_data = paddle.batch(
        paddle.reader.buffered(
            paddle.dataset.imdb.train(word_dict), size=batch_size * 10),
        batch_size=batch_size)

    data = chop_data(next(train_data()))

    place = core.CPUPlace()
    tensor_words, tensor_label = prepare_feed_data(data, place)
    exe = Executor(place)
    exe.run(framework.default_startup_program())

    while True:
        outs = exe.run(framework.default_main_program(),
                       feed={"words": tensor_words,
                             "label": tensor_label},
                       fetch_list=[cost, acc])
        cost_val = np.array(outs[0])
        acc_val = np.array(outs[1])

        print("cost=" + str(cost_val) + " acc=" + str(acc_val))
        if acc_val > 0.9:
            break

Obviously, the same fixed batch of data is fed into the model at every iteration. This may be a good way to test the basic functionality of the LSTM, but it is not enough to check its performance on real data.
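The fix would be to pull a fresh batch from the reader on every step instead of reusing the single chopped batch in a `while True` loop. A minimal sketch of that pattern, using a stand-in reader in place of `paddle.dataset.imdb.train` (the names `fake_reader` and `batch` here are hypothetical simplifications of `paddle.reader` / `paddle.batch`, not the real API):

```python
def fake_reader():
    # Stand-in for paddle.dataset.imdb.train(word_dict):
    # yields (word_ids, label) samples one at a time.
    for i in range(10):
        yield ([i, i + 1, i + 2], i % 2)

def batch(reader, batch_size):
    # Simplified version of paddle.batch: groups samples into batches.
    def batched():
        buf = []
        for sample in reader():
            buf.append(sample)
            if len(buf) == batch_size:
                yield buf
                buf = []
        if buf:
            yield buf
    return batched

train_data = batch(fake_reader, batch_size=4)

# Key change vs. the quoted test: iterate over the reader so every
# step sees new data, rather than feeding one fixed batch forever.
for pass_id in range(2):
    for batch_id, data in enumerate(train_data()):
        # Here each fresh `data` batch would be converted with
        # prepare_feed_data(...) and fed to exe.run(...).
        pass
```

With real data flowing through each step, the accuracy threshold check would then measure actual learning progress rather than memorization of one batch.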
