Commit 5fdb54e (1 parent: 91ede48)
Authored by mraunak and Javier Turek

Add Information Gain Filtration algorithm (huggingface#16953)

* Add information gain filtration algorithm
* Complying with black requirements
* Added author
* Fixed import order
* flake8 corrections

Co-authored-by: Javier Turek <[email protected]>

File tree: 6 files changed, +963 −0 lines changed

# Information Gain Filtration (IGF)

Authors: @Tuko, @mraunak

This folder contains code showing how to implement IGF for fine-tuning GPT-2.

## What is IGF?

Here we present a general fine-tuning method, which we call Information Gain Filtration, for improving the overall training efficiency and final performance of language model fine-tuning (see the paper below). It is an alternative fine-tuning method that trains a secondary model (e.g., a simple convolutional network) to predict the amount of information gained over a given pre-trained model. The secondary model is lightweight and trained to predict the Information Gain measure, defined as the change in a loss function for a model before and after an SGD update with a sample (Equation X in the paper). A small subset of the training set, named the "objective" set, is used to measure information gain on the pre-trained model and, consequently, to train the secondary model. After training, the secondary model is used to filter samples for the fine-tuning process: a high information gain value suggests a sample is informative, whereas a low value suggests a non-informative sample that should be filtered out. A thresholding strategy is therefore defined to select informative samples. With such a strategy, samples are filtered, and once enough samples are selected to form a mini-batch, a usual fine-tuning/optimization step is applied. The filtration process is repeated until fine-tuning is complete.
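
The filtering loop just described can be sketched in a few lines of Python. This is a minimal illustration of the technique, not this repo's actual API; the names `predict_gain` and `igf_filter_batches` are assumptions for the sketch.

```python
# Sketch of the IGF filtering loop: the secondary learner's predicted
# information gain decides which samples enter each fine-tuning mini-batch.
# Illustrative only -- names and signatures are not the repo's API.
from typing import Callable, Iterable, List

def igf_filter_batches(
    samples: Iterable,                        # candidate fine-tuning samples
    predict_gain: Callable[[object], float],  # secondary learner's IG prediction
    threshold: float,                         # keep samples with predicted IG >= threshold
    batch_size: int,
) -> List[list]:
    """Group samples whose predicted information gain clears the threshold
    into mini-batches; low-gain samples are filtered out."""
    batches, current = [], []
    for sample in samples:
        if predict_gain(sample) < threshold:
            continue                          # non-informative: filter out
        current.append(sample)
        if len(current) == batch_size:        # enough samples for a mini-batch;
            batches.append(current)           # a usual fine-tuning step would
            current = []                      # run here on this batch
    return batches
```

In the full method, a fine-tuning/optimization step runs each time a mini-batch is emitted, and the loop continues until training ends.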

Paper: [Selecting Informative Contexts Improves Language Model Finetuning](https://arxiv.org/abs/2005.00175)
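
The Information Gain measure itself (change in objective-set loss before vs. after one SGD update on a sample) can be sketched as follows. This is a hedged, model-agnostic sketch under the definition above, not the repo's implementation; the loss-function arguments are assumptions for illustration.

```python
# Sketch of the Information Gain measure: the drop in objective-set loss
# caused by one SGD update on a single sample. Illustrative only.
import copy

import torch

def information_gain(
    model: torch.nn.Module,
    sample_loss_fn,      # maps a model to its loss on one candidate sample
    objective_loss_fn,   # maps a model to its loss on the "objective" set
    lr: float = 5e-5,
) -> float:
    trial = copy.deepcopy(model)          # leave the pre-trained weights untouched
    before = objective_loss_fn(trial).item()
    opt = torch.optim.SGD(trial.parameters(), lr=lr)
    opt.zero_grad()
    sample_loss_fn(trial).backward()      # one SGD update with the sample
    opt.step()
    after = objective_loss_fn(trial).item()
    return before - after                 # positive value: the sample helped
```

Measuring this quantity for many contexts on the objective set yields the (context, information gain) pairs used to train the secondary learner.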

## Results

Several experiments were conducted to show the robustness of the IGF method versus the standard fine-tuning process. For example, we achieve a median perplexity of 54.0 on the Books dataset, compared to 57.3 for standard fine-tuning on GPT-2 Small. The code was implemented using the Transformers library and PyTorch. While the method may seem more expensive, we saw enough evidence that it may lead to a performance benefit in the final models.

![IGF performance](result_igf.png)

Figure 1: Comparing IGF to standard fine-tuning. IGF with constant (p < 10⁻³, t-test) and shifting (p < 10⁻⁶, t-test) thresholding significantly outperform standard fine-tuning. The left-hand figure shows test-set perplexity after each fine-tuning batch, averaged over 50 runs (error bars denote ± one standard error). The right-hand figure shows the perplexity of each method after 60 batches. IGF with shifting thresholding (red) clearly improves over standard batched fine-tuning with Adam.

## How to use this project?

To fine-tune a transformer model with IGF on a language modeling task, use the following script, which accepts these arguments:

- `model_name_or_path`: Path to a pretrained model or a model identifier from huggingface.co/models.
- `data_file`: A jbl file containing tokenized data, which can be split into an objective dataset, a train dataset, and a test dataset.
- `igf_data_file`: A jbl file containing the context and information gain pairs used to train the secondary learner.
- `context_len`: The maximum total input sequence length after tokenization. Sequences longer than this will be truncated; sequences shorter will be padded.
- `size_objective_set`: Number of articles that are long enough to be used as the objective set.
- `min_len`: The minimum length of an article for it to be used in the objective set.
- `trim`: Truncate the example if it exceeds the context length.
- `eval_freq`: How often (in steps) the secondary model evaluation is triggered.
- `max_steps`: Used to calculate the number of training epochs.
- `number`: The number of examples split off to be used as the objective_set/test_data.
- `secondary_learner_batch_size`: The batch size of training data for the secondary learner.
- `secondary_learner_max_epochs`: The number of epochs to train the secondary learner.
- `recopy_model`: Reset the model to the original pretrained GPT-2 weights after each iteration.
- `eval_interval`: Decay the selectivity of the secondary learner filter from 1 standard deviation above average to 1 below average after eval_interval (10) batches.

```bash
python run_clm_igf.py \
    --model_name_or_path "gpt2" \
    --data_file="data/tokenized_stories_train_wikitext103" \
    --igf_data_file="data/IGF_values" \
    --context_len 32 \
    --size_objective_set 100 \
    --min_len 1026 \
    --trim True \
    --eval_freq 100 \
    --max_steps 1000 \
    --secondary_learner_batch_size 128 \
    --secondary_learner_max_epochs 15 \
    --number 100 \
    --recopy_model \
    --eval_interval 10
```
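
The `eval_interval` decay described above (filter selectivity shifting from 1 standard deviation above the average predicted information gain to 1 below it) can be sketched as a simple linear schedule. This assumed form is for illustration only, not the repo's exact implementation.

```python
# Sketch of the shifting-threshold schedule implied by --eval_interval:
# the IG threshold decays linearly from mean + std to mean - std over
# `eval_interval` batches, then holds. Assumed form, illustrative only.
def shifting_threshold(
    mean_ig: float,          # average predicted information gain
    std_ig: float,           # standard deviation of predicted information gain
    batch_idx: int,          # index of the current fine-tuning batch
    eval_interval: int = 10, # batches over which the threshold decays
) -> float:
    frac = min(batch_idx / eval_interval, 1.0)
    return mean_ig + std_ig * (1.0 - 2.0 * frac)
```

Early in training only the highest-gain samples pass the filter; as the threshold decays, progressively more samples are admitted.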

## Citation

If you find this resource useful, please cite the following paper:

```bibtex
@inproceedings{antonello-etal-2021-selecting,
    title = "Selecting Informative Contexts Improves Language Model Fine-tuning",
    author = "Antonello, Richard and Beckage, Nicole and Turek, Javier and Huth, Alexander",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.87",
    doi = "10.18653/v1/2021.acl-long.87",
    pages = "1072--1085",
}
```

examples/research_projects/information-gain-filtration/igf/__init__.py: Whitespace-only changes.
