<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Multiple choice

A multiple choice task is similar to question answering, except several candidate answers are provided along with a context. The model is trained to select the correct answer
from the candidates given the context.

This guide will show you how to fine-tune [BERT](https://huggingface.co/bert-base-uncased) on the `regular` configuration of the [SWAG](https://huggingface.co/datasets/swag) dataset
so it selects the best answer from several options and some context.

## Load SWAG dataset

Load the SWAG dataset from the 🤗 Datasets library:

```py
>>> from datasets import load_dataset

>>> swag = load_dataset("swag", "regular")
```

Then take a look at an example:

```py
>>> swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
 'ending1': 'has heard approaching them.',
 'ending2': "arrives and they're outside dancing and asleep.",
 'ending3': 'turns the lead singer watches the performance.',
 'fold-ind': '3416',
 'gold-source': 'gold',
 'label': 0,
 'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
 'sent2': 'A drum line',
 'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
 'video-id': 'anetv_jkn6uvmqwh4'}
```

The `sent1` and `sent2` fields show how a sentence begins, and each `ending` field shows how the sentence could end. Given the sentence beginning, the model must pick the correct sentence ending as indicated by the `label` field.
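
To see how these fields fit together, you can reconstruct the four candidate sequences the model has to choose between. This is only an illustrative sketch and isn't needed for the rest of the guide:

```py
>>> example = swag["train"][0]
>>> # each candidate is the shared beginning (`startphrase`) followed by one possible ending
>>> candidates = [f"{example['startphrase']} {example[end]}" for end in ["ending0", "ending1", "ending2", "ending3"]]
>>> candidates[example["label"]]  # the ending marked as correct
'Members of the procession walk down the street holding small horn brass instruments. A drum line passes by walking down the street playing their instruments.'
```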

## Preprocess

Load the BERT tokenizer to process the sentence beginnings and the four possible endings:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```

The preprocessing function needs to do the following:

1. Make four copies of the `sent1` field so you can combine each of them with `sent2` to recreate how a sentence starts.
2. Combine `sent2` with each of the four possible sentence endings.
3. Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has corresponding `input_ids`, `attention_mask`, and `labels` fields.

```py
>>> ending_names = ["ending0", "ending1", "ending2", "ending3"]


>>> def preprocess_function(examples):
...     first_sentences = [[context] * 4 for context in examples["sent1"]]
...     question_headers = examples["sent2"]
...     second_sentences = [
...         [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
...     ]

...     first_sentences = sum(first_sentences, [])
...     second_sentences = sum(second_sentences, [])

...     tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
...     return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
```

Use the 🤗 Datasets [`~datasets.Dataset.map`] function to apply the preprocessing function over the entire dataset. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```py
tokenized_swag = swag.map(preprocess_function, batched=True)
```
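
If you want a quick sanity check, each processed example should now hold four tokenized sequences, one per candidate ending:

```py
>>> len(tokenized_swag["train"][0]["input_ids"])
4
```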

🤗 Transformers doesn't have a data collator for multiple choice, so you will need to create one. You can adapt the [`DataCollatorWithPadding`] to create a batch of examples for multiple choice. It will also
*dynamically pad* your text and labels to the length of the longest element in its batch, so they are a uniform length. While it is possible to pad your text in the `tokenizer` function by setting
`padding=True`, dynamic padding is more efficient.
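
For reference, padding at tokenization time would only change the `tokenizer` call inside `preprocess_function`; this is just a sketch of that alternative, not what the rest of this guide uses:

```py
>>> # pads every sequence to the longest one in each batch passed to the tokenizer, instead of padding later in the collator
>>> tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True, padding=True)
```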

`DataCollatorForMultipleChoice` will flatten all the model inputs, apply padding, and then unflatten the results:

<frameworkcontent>
<pt>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import torch


>>> @dataclass
... class DataCollatorForMultipleChoice:
...     """
...     Data collator that will dynamically pad the inputs for multiple choice received.
...     """

...     tokenizer: PreTrainedTokenizerBase
...     padding: Union[bool, str, PaddingStrategy] = True
...     max_length: Optional[int] = None
...     pad_to_multiple_of: Optional[int] = None

...     def __call__(self, features):
...         label_name = "label" if "label" in features[0].keys() else "labels"
...         labels = [feature.pop(label_name) for feature in features]
...         batch_size = len(features)
...         num_choices = len(features[0]["input_ids"])
...         flattened_features = [
...             [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
...         ]
...         flattened_features = sum(flattened_features, [])

...         batch = self.tokenizer.pad(
...             flattened_features,
...             padding=self.padding,
...             max_length=self.max_length,
...             pad_to_multiple_of=self.pad_to_multiple_of,
...             return_tensors="pt",
...         )

...         batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
...         batch["labels"] = torch.tensor(labels, dtype=torch.int64)
...         return batch
```
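
As a quick optional check, you can run the collator on a couple of tokenized examples and confirm the batch has the shape `(batch_size, num_choices, sequence_length)`:

```py
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> features = [
...     {k: tokenized_swag["train"][i][k] for k in ["input_ids", "attention_mask", "label"]} for i in range(2)
... ]
>>> batch = data_collator(features)
>>> batch["input_ids"].shape  # (batch_size, num_choices, sequence_length)
```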
</pt>
<tf>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import tensorflow as tf


>>> @dataclass
... class DataCollatorForMultipleChoice:
...     """
...     Data collator that will dynamically pad the inputs for multiple choice received.
...     """

...     tokenizer: PreTrainedTokenizerBase
...     padding: Union[bool, str, PaddingStrategy] = True
...     max_length: Optional[int] = None
...     pad_to_multiple_of: Optional[int] = None

...     def __call__(self, features):
...         label_name = "label" if "label" in features[0].keys() else "labels"
...         labels = [feature.pop(label_name) for feature in features]
...         batch_size = len(features)
...         num_choices = len(features[0]["input_ids"])
...         flattened_features = [
...             [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
...         ]
...         flattened_features = sum(flattened_features, [])

...         batch = self.tokenizer.pad(
...             flattened_features,
...             padding=self.padding,
...             max_length=self.max_length,
...             pad_to_multiple_of=self.pad_to_multiple_of,
...             return_tensors="tf",
...         )

...         batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
...         batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
...         return batch
```
</tf>
</frameworkcontent>

## Train

<frameworkcontent>
<pt>
Load BERT with [`AutoModelForMultipleChoice`]:

```py
>>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer

>>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
```

<Tip>

If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#finetune-with-trainer)!

</Tip>

At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`].
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, and data collator.
3. Call [`~Trainer.train`] to fine-tune the model.

```py
>>> training_args = TrainingArguments(
...     output_dir="./results",
...     evaluation_strategy="epoch",
...     learning_rate=5e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=3,
...     weight_decay=0.01,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_swag["train"],
...     eval_dataset=tokenized_swag["validation"],
...     tokenizer=tokenizer,
...     data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
... )

>>> trainer.train()
```
</pt>
<tf>
To fine-tune a model in TensorFlow, start by converting your datasets to the `tf.data.Dataset` format with [`~TFPreTrainedModel.prepare_tf_dataset`].

```py
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_swag["train"],
...     shuffle=True,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_swag["validation"],
...     shuffle=False,
...     batch_size=batch_size,
...     collate_fn=data_collator,
... )
```

<Tip>

If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](training#finetune-with-keras)!

</Tip>

Set up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_train_epochs = 2
>>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
>>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```

Load BERT with [`TFAutoModelForMultipleChoice`]:

```py
>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):

```py
>>> model.compile(optimizer=optimizer)
```

Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) to fine-tune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2)
```
</tf>
</frameworkcontent>